A method executed within a virtual machine (VM) host computer system. The method includes, during a pre-execution phase of a VM, mapping a physical address (PA)-backed range of the VM’s guest-physical memory pages to host-physical addresses (HPAs) within the system memory of the VM host and mapping a virtual-address (VA)-backed range of the VM’s guest-physical memory pages to host-virtual addresses (HVAs). The method also includes, during an execution phase of the VM, performing intercept processing when the VM accesses a guest-physical address (GPA) within the VA-backed range to map a corresponding HVA to an HPA. This method enables memory oversubscription of the VM by the VM host via the VA-backed memory range. At the same time, the method provides the VM with access to a PA-backed memory range that can be utilized without the potential performance penalties associated with using the VA-backed range.
Some embodiments determine digital content to compose a software development environment and generate an executable image accordingly. In examples this includes obtaining a code artifact identification of a software repository, commit level, project, file extension, library, package, source code pattern, textual description, or other code artifact, and getting development environment component identifications which identify respective components, such as a software development tool, tool setting, tool extension, tool extension setting, security key or token or secret, runtime, kernel, driver, shell, or environment variable. These embodiments acquire correlation values that are calculated from historic data about artifact-component relationships. In response to results of comparing correlation values to thresholds, embodiments place components in a deployment set as default components of the environment being composed. Then these embodiments generate a deployable development environment executable image from at least the deployment set.
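The threshold comparison that places components into the deployment set can be sketched in a few lines. This is a minimal illustration only: the artifact names, component names, correlation values, and the 0.6 threshold below are assumptions, since the abstract does not fix a data model.

```python
def build_deployment_set(artifacts, correlations, threshold=0.6):
    """Select default components for the composed environment.

    correlations: maps (artifact, component) pairs to correlation values
    calculated from historic data about artifact-component relationships.
    Components whose correlation with an identified artifact meets the
    threshold become defaults of the environment being composed.
    """
    deployment = set()
    for (artifact, component), value in correlations.items():
        if artifact in artifacts and value >= threshold:
            deployment.add(component)
    return deployment


# Illustrative historic correlations (hypothetical values).
correlations = {
    ("requirements.txt", "python-runtime"): 0.95,
    ("requirements.txt", "jdk"): 0.05,
    (".csproj", "dotnet-sdk"): 0.90,
}
```

For example, identifying only a `requirements.txt` artifact would yield `{"python-runtime"}` as the deployment set under these values.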
A computing device assembly is presented. The computing device assembly comprises a rack and a plurality of compute units that are horizontally oriented and mounted within the rack in first and second vertical stacks. A plurality of switches are vertically oriented and mounted along the rack in a first bank of switches and a second bank of switches. A first set of horizontal cable backplanes is mounted in a first vertical assembly along an interior side of the first vertical stack. A second set of horizontal cable backplanes is mounted in a second vertical assembly along an interior side of the second vertical stack. A central switch distribution column is communicatively coupled to the first and second sets of horizontal cable backplanes and to the first and second banks of switches, such that each of the plurality of compute units is connected to each of the plurality of switches.
An original email text is evaluated against a set of properties to identify shortcomings in the natural language, format, and structure used in the original email text. The properties represent characteristics needed to enhance the original email text to a professionally written style. A first machine learning model identifies the properties or characteristics missing from the original email, given a few-shot context. The few-shot context includes labeled email samples containing the desired properties. A second machine learning model is used to generate an enhanced email given the original email, personal data relevant to the content of the original email, the missing properties, and the few-shot context.
The description relates to hinged devices, such as hinged computing devices that include a pop-up function. One example can include a first portion and a second portion that are rotatably secured through a range of rotation from an open orientation to a closed orientation. This example can also include a selective isolation assembly configured to convert rotational torque associated with rotating the first and second portions toward the closed orientation to a compressive force that compresses a spring. The selective isolation assembly is configured to disconnect the first and second portions and the compressed spring as the first and second portions approach the closed orientation.
This document relates to generation of new images from input images depicting different objects. For instance, the disclosed techniques can generate a layout that specifies locations of the objects. Then, a canvas can be generated depicting the objects in the specified locations. A generative image model can be employed to inpaint areas of the canvas around the depicted objects, while the objects themselves retain their original appearance from the input images.
A method, computer program product, and computing system for processing a dataset of query-answer pairs including generating synthetic variations of queries from a dataset of query-answer pairs, generating an embedding dataset by transforming the synthetic variations of queries into synthetic query embeddings and queries in the dataset of query-answer pairs into query embeddings, storing at least a portion of the synthetic query embeddings and query embeddings in a semantic cache, wherein each synthetic query embedding and query embedding stored in the semantic cache is associated with a respective distance threshold determined based at least in part on a measure of semantic similarity between the synthetic query and the query used to generate a particular query embedding, and processing a subsequent query using the synthetic variations of queries from the semantic cache and the distance thresholds.
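The per-entry distance thresholds described above can be sketched as follows. This is a hedged illustration, not the disclosed implementation: the character-frequency "embedding", cosine distance, and the rule that derives each threshold from how far the synthetic variants drift from their source query are all stand-ins, since the abstract leaves the embedding model and threshold calculation open.

```python
import math

def embed(text):
    # Toy embedding: letter-frequency vector (stand-in for a real model).
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb) if na and nb else 1.0

class SemanticCache:
    def __init__(self):
        self.entries = []  # (embedding, answer, distance threshold)

    def add(self, query, synthetic_variants, answer):
        # Each stored embedding carries a threshold based on the semantic
        # drift between the query and its synthetic variations.
        q_emb = embed(query)
        drift = max((cosine_distance(q_emb, embed(v))
                     for v in synthetic_variants), default=0.1)
        self.entries.append((q_emb, answer, drift))
        for variant in synthetic_variants:
            self.entries.append((embed(variant), answer, drift))

    def lookup(self, query):
        # A subsequent query hits the cache if it falls within the
        # distance threshold of any stored embedding.
        q = embed(query)
        for emb, answer, threshold in self.entries:
            if cosine_distance(q, emb) <= threshold:
                return answer
        return None
```

A cache hit then short-circuits the full query-answering pipeline for paraphrases of previously answered queries.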
A case (100, 200, 260) for a computing device (102) comprises a rotatable platform (150A, 150B, 150C, 150D, 250A, 250B, 250C, 250D), a piezoelectric air mover (122A, 122B, 122C, 122D) mounted to the rotatable platform (150A, 150B, 150C, 150D, 250A, 250B, 250C, 250D), and a sensor (142). A processor (138) is configured to execute instructions stored in memory (140) to receive data from the sensor (142) and, based at least on the data from the sensor (142), cause the rotatable platform (150A, 150B, 150C, 150D, 250A, 250B, 250C, 250D) to rotate the piezoelectric air mover (122A, 122B, 122C, 122D).
An image processing unit may select a pixel in the line of pixels by enabling a first connection between a source driver and an input of refresh circuitry of the pixel along a content data line, based at least in part on receipt of a pixel select signal corresponding to the pixel, wherein at least one of the pixels in the line of pixels is not selected. An image processing unit may refresh, based at least in part on receiving a pixel refresh signal, the selected pixel with content data provided using the content data line by enabling a second connection between the input of the refresh circuitry and the pixel along the content data line, wherein refreshing the selected pixel includes providing content data to the pixel using the content data line and not providing content data to the at least one of the pixels.
G09G 3/30 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix using controlled light sources using electroluminescent panels
A computer-implemented method includes dividing a data file into a plurality of data segments and assigning segment identifiers to the data segments, respectively. The data segments, if combined using a first sequence of the segment identifiers, form the data file. The method also includes shuffling, according to a reordering pattern, the segment identifiers into a second sequence that is different from the first sequence. The reordering pattern indicates a mapping between the first sequence and the second sequence. The method further includes representing the reordering pattern in metadata, and providing the plurality of data segments and the metadata to an interface for an implementation of one or more networking protocols, which implements at least a transport protocol. The plurality of data segments are provided to the interface in order of the second sequence. Related systems and software are also disclosed.
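The divide/shuffle/reassemble scheme above can be sketched directly; the hand-off to the transport-protocol interface is omitted, and the seeded shuffle is an assumption standing in for whatever reordering pattern the disclosed method uses.

```python
import random

def shuffle_segments(data, segment_size, seed=0):
    # Divide the data file into segments with sequential identifiers
    # (the first sequence), then shuffle into a second sequence.
    segments = [data[i:i + segment_size]
                for i in range(0, len(data), segment_size)]
    second_sequence = list(range(len(segments)))
    random.Random(seed).shuffle(second_sequence)
    # Metadata records the reordering pattern: the mapping between the
    # first (original) and second (shuffled) sequences.
    metadata = {"pattern": second_sequence}
    shuffled = [segments[i] for i in second_sequence]
    return shuffled, metadata

def reassemble(shuffled, metadata):
    # The receiver inverts the pattern to restore the first sequence.
    ordered = [None] * len(shuffled)
    for position, seg_id in enumerate(metadata["pattern"]):
        ordered[seg_id] = shuffled[position]
    return b"".join(ordered)
```

In the disclosed method the shuffled segments and metadata would be handed to a networking-protocol interface rather than reassembled locally; reassembly is shown here only to demonstrate that the pattern is invertible.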
A technique for enhancing video representation in network-based meetings dynamically replaces low-quality video feeds with animated avatars. The system evaluates individual video feeds against quality thresholds related to head pose, facial feature visibility, and image clarity. When a feed fails to meet these thresholds, an animation of the participant is generated using a previously captured image. Speech context analysis enables the application of realistic facial expressions and lip movements to the animation. The animated avatar, synchronized with the speech of the participant, is then displayed in place of the original video feed, within the user interface of the network-based meeting. This approach maintains visual engagement for remote participants, even when in-room attendees are partially occluded, poorly captured by the camera, or have suboptimal head poses.
Back-to-back stacked silicon-based capacitors in a package substrate for a system-on-chip (SoC) and methods of forming the same are described. An example system includes a package substrate comprising a core layer including plated-through holes. The system further includes at least one die mounted on top of the package substrate, where the at least one die includes at least one voltage domain. The system further includes a set of back-to-back stacked silicon-based capacitors formed within the core layer of the package substrate. The set of back-to-back stacked silicon-based capacitors may be formed in slots within the core layer in regions excluding the plated-through holes. A subset of the set of back-to-back stacked silicon-based capacitors may be coupled to components within the at least one voltage domain to manage an impedance associated with the at least one voltage domain.
H01L 23/50 - Arrangements for conducting electric current to or from the solid state body in operation, e.g. leads or terminal arrangements for integrated circuit devices
The described technology provides a method including determining, based on a system physical address, a cluster of L3 cache nodes that are linked to a group of memory controller nodes, determining, based on the system physical address, an L3 cache node tied to a component hub in a SoC mesh, determining a memory controller node in the SoC mesh that maps to the system physical address, generating a deinterleaved address by relocating low DRAM space of the system physical address and removing the cache cluster bits from the system physical address, mapping the deinterleaved address to a DRAM address by assigning bits to DRAM address components, and storing the bit assignments of the DRAM address components.
A chat history between a user and a machine learning model is preserved despite the context window size constraint of the machine learning model to ensure an enduring understanding of the chat or conversation history. When the token constraint of the context window size is reached, a summary of the chat history is generated to replace the chat history. The original content of the summarized chat history is stored in a lookup table and indexed by keywords. Instructions are provided to the model that allow the model to ask for the original content of the chat history summary which is obtained from the lookup table and provided to the model.
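The summarize-and-index mechanism can be sketched as below. The token counting, the length-based keyword heuristic, and the placeholder summary are all trivial stand-ins for what would be model calls in practice; only the flow (budget check, summary replacing history, keyword-indexed lookup table for recall) follows the description.

```python
class ChatMemory:
    def __init__(self, token_budget):
        self.token_budget = token_budget
        self.history = []   # messages currently in the context window
        self.lookup = {}    # keyword -> original messages (the lookup table)

    def _tokens(self):
        # Crude token count: whitespace-delimited words.
        return sum(len(m.split()) for m in self.history)

    def add(self, message):
        self.history.append(message)
        if self._tokens() > self.token_budget:
            self._summarize()

    def _summarize(self):
        # Index original content by keywords, then replace the history
        # with a summary. (A real system would call a model here.)
        for msg in self.history:
            for word in msg.lower().split():
                if len(word) > 4:   # placeholder keyword heuristic
                    self.lookup.setdefault(word, []).append(msg)
        summary = f"[summary of {len(self.history)} messages]"
        self.history = [summary]

    def recall(self, keyword):
        # What the model invokes, per its instructions, when it needs
        # the original content behind a summary.
        return self.lookup.get(keyword.lower(), [])
```

The model would be instructed that, when the summary lacks detail it needs, it can request `recall(keyword)` and have the original messages re-supplied.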
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
According to examples, an adapter apparatus is disclosed that enables the transformation of messages based on the storage hierarchies of different cloud storage systems in a heterogeneous cloud environment. The adapter apparatus is included in a native cloud storage system and transforms messages from an external cloud storage system for execution on the native cloud storage system based on the native storage hierarchy. Similarly, messages originating at the native cloud storage system are transformed for execution on the external cloud storage system based on the external storage hierarchy. A set of stored adapters including a user-defined adapter, a catalog-based adapter, and a model-based adapter enables the conversion. Various virtualization functions, including cache control, virtualization of properties, and leases, are also enabled by the transformations.
A computing system is configured to train a machine-learning model for detecting suspicious network activities based on a training dataset. The training of the machine-learning model may be supervised or unsupervised. The training dataset includes multiple strings. For each of the multiple strings, the computing system extracts one or more N-gram substrings, where N is a natural number equal to or greater than 2. The computing system then determines a probability that each N-gram substring occurs in a string. When the machine-learning model is executed, it is configured to classify whether a given string contained in network communication is a random string. In response to classifying the given string as a random string, an alert is generated at a particular computing system to which the network communication is directed.
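For N = 2, the N-gram probability approach can be sketched as a smoothed bigram model: strings whose bigrams are improbable under the training distribution are flagged as random. The smoothing scheme, the log-probability threshold, and the tiny training vocabulary below are illustrative assumptions, not parameters from the disclosure.

```python
import math
from collections import Counter

def train_bigram_model(strings):
    # Count each bigram (N=2 substring) and each leading character.
    counts, context = Counter(), Counter()
    for s in strings:
        s = s.lower()
        for a, b in zip(s, s[1:]):
            counts[(a, b)] += 1
            context[a] += 1
    return counts, context

def avg_log_prob(s, counts, context, alpha=1.0, vocab=27):
    # Add-one smoothed average log-probability per bigram.
    s = s.lower()
    pairs = list(zip(s, s[1:]))
    if not pairs:
        return 0.0
    total = 0.0
    for a, b in pairs:
        p = (counts[(a, b)] + alpha) / (context[a] + alpha * vocab)
        total += math.log(p)
    return total / len(pairs)

def is_random_string(s, counts, context, threshold=-3.2):
    # Low average bigram probability suggests a machine-generated
    # (random) string, e.g. a DGA-style domain label.
    return avg_log_prob(s, counts, context) < threshold
```

Trained on ordinary words, a string like `qzxvkqjw` scores far below a natural word such as `windows` and is classified as random.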
Examples of the present disclosure describe systems and methods for an enterprise data container (EDC) that facilitates the secure transfer of data between data boundaries of one or more computing environments. In examples, the EDC serves as a message wrapper for transmitted data. The EDC includes metadata, identification, tracking, security attributes, authenticity, and handling caveats relevant to the operational constraints of one or more computing environments through which data is transferred. The EDC is computing environment agnostic and agnostically manages the data wrapped in the EDC.
A semiconductor package assembly including: a package layer; an interposer layer electrically coupled to the package layer; and a compute die including one or more processing elements, wherein the compute die is electrically coupled to the interposer layer via one or more electrical connections positioned proximate the geometric center of the one or more processing elements.
H01L 23/00 - Details of semiconductor or other solid state devices
H01L 23/538 - Arrangements for conducting electric current within the device in operation from one component to another the interconnection structure between a plurality of semiconductor chips being formed on, or in, insulating substrates
H01L 25/18 - Assemblies consisting of a plurality of individual semiconductor or other solid-state devices the devices being of types provided for in two or more different main groups of the same subclass of , , , , or
H05K 1/18 - Printed circuits structurally associated with non-printed electric components
H10B 80/00 - Assemblies of multiple devices comprising at least one memory device covered by this subclass
A liquid-based cooling system for cooling rack-mounted IT equipment including: a first liquid-based cooling device configured to provide a first quantity of cooled fluid to the rack-mounted IT equipment including: an output port configured to provide the first quantity of cooled fluid to the rack-mounted IT equipment, wherein the first quantity of cooled fluid is configured to absorb waste heat from the rack-mounted IT equipment to generate a first quantity of warmed fluid, and an input port configured to receive the first quantity of warmed fluid from the rack-mounted IT equipment and remove the waste heat from the first quantity of warmed fluid to generate the first quantity of cooled fluid; and at least a second liquid-based cooling device configured to provide at least a second quantity of cooled fluid to the rack-mounted IT equipment including: an output port configured to provide the at least a second quantity of cooled fluid to the rack-mounted IT equipment, wherein the at least a second quantity of cooled fluid is configured to absorb waste heat from the rack-mounted IT equipment to generate at least a second quantity of warmed fluid, and an input port configured to receive the at least a second quantity of warmed fluid from the rack-mounted IT equipment and remove the waste heat from the at least a second quantity of warmed fluid to generate the at least a second quantity of cooled fluid.
A method for mutual authentication between a host controller and a battery subsystem of an electronic device includes, at a battery controller, generating a battery-side authentication challenge, and transmitting the battery-side authentication challenge to the host controller. A host-side authentication challenge is received from the host controller. The host-side authentication challenge is signed with a private encryption key of the battery controller to generate a battery-side message digest. The battery-side message digest and a public encryption key of the battery controller are transmitted to the host controller as a battery-side signature pair. A host-side signature pair is received from the host controller. Based at least in part on the host-side signature pair, a battery-side validation result is output that specifies a battery-side validity state for the host controller.
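The two-way challenge/response flow can be sketched as follows. Note a deliberate simplification: the disclosure signs with a battery-side private key and validates against the paired public key, whereas this runnable sketch substitutes HMAC over a pre-shared key for the asymmetric signature; the message flow (each side issues a challenge, signs the peer's challenge, and validates the peer's signature pair) is what the example illustrates.

```python
import hashlib
import hmac
import os

class Endpoint:
    def __init__(self, name, key):
        self.name = name
        self._key = key

    def make_challenge(self):
        return os.urandom(16)      # fresh nonce per authentication attempt

    def sign(self, challenge):
        digest = hmac.new(self._key, challenge, hashlib.sha256).digest()
        return digest, self.name   # the "signature pair" (digest, identity)

    @staticmethod
    def validate(challenge, signature_pair, expected_key):
        digest, _identity = signature_pair
        expected = hmac.new(expected_key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(digest, expected)

def mutual_authenticate(host, battery, host_key, battery_key):
    battery_challenge = battery.make_challenge()  # battery-side challenge
    host_challenge = host.make_challenge()        # host-side challenge
    battery_pair = battery.sign(host_challenge)   # battery signs host's challenge
    host_pair = host.sign(battery_challenge)      # host signs battery's challenge
    battery_side_ok = Endpoint.validate(battery_challenge, host_pair, host_key)
    host_side_ok = Endpoint.validate(host_challenge, battery_pair, battery_key)
    return battery_side_ok and host_side_ok
```

Authentication succeeds only when both validation results report a valid state, matching the mutual nature of the described protocol.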
H02J 7/00 - Circuit arrangements for charging or depolarising batteries or for supplying loads from batteries
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
21.
SYSTEM AND METHOD FOR SECURE PROCESSING OF SPEECH SIGNALS USING PSEUDO-SPEECH REPRESENTATIONS
A method, computer program product, and computing system for processing a speech signal. A sensitive portion of the speech signal is identified. A pseudo-speech representation of the sensitive portion is generated using a voice converter system. Speech processing is performed on the speech signal and the pseudo-speech representation of the sensitive portion using a speech processing system.
G10L 13/033 - Voice editing, e.g. manipulating the voice of the synthesiser
G10L 13/08 - Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialog
The disclosure herein describes using a transcript generation model for generating a transcript from a multi-speaker audio stream. Audio data including overlapping speech of a plurality of speakers is obtained and a set of frame embeddings are generated from audio data frames of obtained audio data using an audio data encoder. A set of words and channel change (CC) symbols are generated from the set of frame embeddings using a transcript generation model. The CC symbols are included between pairs of adjacent words that are spoken by different people at the same time. The set of words and CC symbols are transformed into a plurality of transcript lines, wherein words of the set of words are sorted into transcript lines based on CC symbols, and a multi-speaker transcript is generated based on the plurality of transcript lines. The inclusion of CC symbols by the model enables efficient, accurate multi-speaker transcription.
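The transformation from a flat word/CC-symbol stream into per-speaker transcript lines can be sketched as below, under the simplifying assumption that each CC symbol switches output to the next channel in round-robin order; the real model's channel semantics may differ.

```python
CC = "<cc>"  # channel change symbol emitted by the model

def to_transcript_lines(tokens, num_channels=2):
    # Words accumulate on the current channel; a CC symbol between two
    # adjacent words switches to the next channel, separating overlapping
    # speakers into distinct transcript lines.
    lines = [[] for _ in range(num_channels)]
    channel = 0
    for tok in tokens:
        if tok == CC:
            channel = (channel + 1) % num_channels
        else:
            lines[channel].append(tok)
    return [" ".join(words) for words in lines if words]
```

For instance, the stream `hello <cc> hi <cc> how are you <cc> good` sorts into one line per overlapping speaker.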
The described technology provides a method including determining a plurality of blocks configured on a system on chip (SoC), generating a register database, wherein the register database is configured to store one or more parameters of a plurality of registers, the plurality of registers representing the registers for a plurality of blocks of the SoC, determining one or more pattern strings, wherein each pattern string identifies a common functionality among a plurality of registers, performing a search on the register database to identify a group of registers from the plurality of registers, wherein the name of each register in the group includes the pattern string, and generating a virtual register that relates to the group of registers.
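The pattern-string search over register names reduces to a filter; the register names, the `VR_` naming convention, and the dictionary shape below are hypothetical, shown only to make the grouping step concrete.

```python
def group_registers(register_names, pattern):
    # Registers whose names include the pattern string are taken to share
    # a common functionality and are grouped under one virtual register.
    members = sorted(name for name in register_names if pattern in name)
    return {"virtual_register": f"VR_{pattern}", "members": members}
```

For example, searching a database of block registers for the pattern string `IRQ_EN` yields a single virtual register relating all interrupt-enable registers across blocks.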
A method for evaluating content output by a retrieval augmented generation (RAG) system includes obtaining question-answer information for data residing in a source index and prompting a large language model (LLM) to generate answer construct conditions for a first test question included in the question-answer information. Each of the answer construct conditions identifies a condition that is satisfied by a ground truth answer. The method further includes generating a question-specific evaluation metric for the first test question based on the answer construct conditions and prompting multiple differently configured RAG systems to answer the first test question based on information within the source index. Multiple answers to the first test question, generated by the multiple RAG systems, are evaluated by repeatedly assessing the question-specific evaluation metric and presenting, on a user interface, comparative quality data quantifying a relative quality of the multiple answers generated by the multiple RAG systems.
A knowledge source is analyzed using machine learning models to detect conflicting or contradictory data in the documents of the knowledge source. The documents in a knowledge source are partitioned into non-overlapping, contiguous text segments containing a unique topic. Questions are generated for each text segment from a generative language model. Factoid answers are generated for each question from a question answering language model. Similar pairs of questions are identified from the answers of each question. A natural language inference model determines whether the answers to the similar questions are contradictory. A remedy is generated to address the documents having the contradictory or conflicting segments.
A method of serving a generative transformer model includes determining a batch size to use in processing inference requests and allocating at least one prompt pipeline and at least one token pipeline to the generative transformer model to process the batch of inference requests. The number of prompt pipelines, the number of token pipelines, and the depths of the pipelines are determined based on the batch size, an average prompt length, a cache requirement per stage, and a memory footprint of model weights for the generative model per stage using a resource allocator component of the model serving system. Cache streaming is used to stream prompt cache from prompt pipelines to token pipelines to generate tokens. Cache streaming involves gather-copy operations, which may be performed using compute kernels.
G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
G06F 12/0862 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
Systems and methods for data encryption and decryption requiring successive partial decryption using multiple keys. The method is designed to generate a public key used to encrypt plaintext into an encrypted message and to generate multiple private keys, each of which is different from the others and is transmitted to a separate computing device to be used for decryption. The encrypted message is sent to one computing device for partial decryption using one private key, and the partial decryption is sent to another computing device for partial decryption using a different private key to generate the plaintext.
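One concrete construction consistent with this description is ElGamal encryption with the private key additively split into shares held by separate devices, each performing one partial decryption in sequence; this is an illustrative choice, not necessarily the disclosed scheme, and the parameters below are toy-sized and NOT cryptographically secure.

```python
P = 467   # small prime modulus (toy parameter)
G = 2     # generator of the multiplicative group mod P

def public_key(x1, x2):
    # h = g^(x1 + x2) mod p; the combined secret x1 + x2 never needs to
    # exist on any single device.
    return pow(G, x1 + x2, P)

def encrypt(message, h, k):
    # Standard ElGamal: ciphertext (c1, c2) = (g^k, m * h^k) mod p.
    return pow(G, k, P), (message * pow(h, k, P)) % P

def partial_decrypt(c1, c2, share):
    # Strip this device's share of the mask; the partial result is then
    # forwarded to the next device for the next partial decryption.
    mask = pow(c1, share, P)
    return (c2 * pow(mask, P - 2, P)) % P  # inverse via Fermat's little theorem

x1, x2 = 57, 101                              # private key shares on separate devices
c1, c2 = encrypt(123, public_key(x1, x2), k=29)
step1 = partial_decrypt(c1, c2, x1)           # first device: partial decryption
plaintext = partial_decrypt(c1, step1, x2)    # second device: recovers the message
```

Neither device alone can decrypt: the intermediate `step1` is still masked until the second share is applied.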
H04L 9/30 - Public key, i.e. encryption algorithm being computationally infeasible to invert and users' encryption keys not requiring secrecy
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols the encryption apparatus using shift registers or memories for blockwise coding, e.g. D.E.S. systems
Systems and methods for converting the result of a radio frequency (RF) measurement into the quantum capacitance of a device are described. An example method includes performing a radio frequency (RF) measurement to extract a frequency shift and a resonator loss shift of a resonator relative to a reference trace of the resonator, where the resonator is coupled to a quantum device. The method further includes deriving, from the extracted frequency shift and resonator loss shift and without resonator fitting, both a real part and an imaginary part of a quantum capacitance associated with the quantum device.
A data processing system implements receiving, at a file system filter, a request to access file information associated with a file from a requesting application on a user device; determining that neither a copy of the file nor a placeholder file is available on the user device; accessing file mapping information from a file mapping datastore to determine whether a path associated with the file is within a namespace under control of the cloud file provider, based on the path information, the filename information, or both the path information and the filename information associated with the file; obtaining content associated with the file from the cloud file provider based on the file mapping information responsive to the path associated with the file being within the namespace under control of the cloud file provider; and providing the content associated with the file to the requesting application.
“Purely electrical” solutions to power oscillations involve expensive storage techniques (e.g., batteries and/or capacitors) or wasted energy in “dummy loads” (e.g., resistive banks and/or heaters). Some power delivery systems may incorporate flow batteries as part of the energy storage and delivery solution, particularly using piped electrolyte to distribute power directly to storage racks. For longer duration fluctuations in power consumption, flow batteries may store power during off-peak demand periods and release power during peak demand periods. However, flow batteries typically do not react fast enough to compensate for rapid fluctuations in power consumption. The presently disclosed technology utilizes the pipework of the electrolyte distribution system already in place for the flow battery as a distributed electrolytic capacitor. This form of “fast” energy storage is ideally suited to complement the “slow” chemical energy storage of a flow battery and is thus capable of acting as a power-smoothing solution and a UPS supplement or replacement.
H01M 8/04082 - Arrangements for control of reactant parameters, e.g. pressure or concentration
H01G 11/02 - Hybrid capacitors, i.e. capacitors having different positive and negative electrodes; Electric double-layer [EDL] capacitors; Processes for the manufacture thereof or of parts thereof using combined reduction-oxidation reactions, e.g. redox arrangement or solion
H01M 8/18 - Regenerative fuel cells, e.g. redox flow batteries or secondary fuel cells
H01M 16/00 - Structural combinations of different types of electrochemical generators
31.
SECURITY GRAPH CARDINALITY REDUCTION IN NETWORK-BASED COMPUTER SYSTEMS
Systems, methods, and techniques are directed to reducing cardinality in graphs of network-based computer systems. In an example, a graph representative of resources in the network-based computer system is generated. The graph comprises nodes representative of resources and edges between nodes representative of relationships between respective nodes. A level of structural similarity between first and second nodes is determined to satisfy a structural similarity criterion. The first and second nodes are grouped in a grouped node, resulting in a modified graph. A security vulnerability of the computer system is identified based on the modified graph. Performance of a mitigation step with respect to the security vulnerability is caused. In another aspect, an edge associated with the first node is grouped in a grouped edge with an edge associated with the second node. In another aspect, a grouped node is grouped with another grouped node as a parent grouped node.
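The node-grouping step can be sketched with one possible structural similarity criterion: Jaccard similarity of neighbor sets (the abstract leaves the criterion open, so both the measure and the 0.8 threshold are assumptions). Nodes whose neighborhoods are sufficiently similar collapse into one grouped node, reducing graph cardinality.

```python
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

def reduce_cardinality(edges, threshold=0.8):
    # Build an adjacency map from the resource-relationship edges.
    neighbors = {}
    for u, v in edges:
        neighbors.setdefault(u, set()).add(v)
        neighbors.setdefault(v, set()).add(u)
    groups = []  # each group is a list of structurally similar nodes
    for node in sorted(neighbors):
        for group in groups:
            rep = group[0]
            # Compare neighborhoods, ignoring edges between the pair
            # themselves, against the similarity criterion.
            if jaccard(neighbors[node] - set(group),
                       neighbors[rep] - {node}) >= threshold:
                group.append(node)
                break
        else:
            groups.append([node])
    return groups
```

For example, two VMs attached to the same load balancer and database collapse into one grouped node, while an admin node with a different neighborhood stays separate.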
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
32.
SYSTEMS AND METHODS FOR THERMAL MANAGEMENT OF ELECTRICAL SWITCHES
A system may include an electrical switch. A system may include a heat exchanger in thermal communication with the electrical switch and configured to receive heat from the electrical switch. A system may include a liquid cooling conduit configured to flow a working fluid therethrough, wherein the liquid cooling conduit is in thermal communication with the heat exchanger and configured to receive heat from the heat exchanger. A system may include an exhaust device in fluid communication with the liquid cooling conduit and configured to exhaust heat from the working fluid.
Techniques are described herein that are capable of performing a security action based on an AI-determined intent and/or impact of a resource in an enterprise. A security alert regarding an identified resource of an enterprise is received. Intents of subsets of information regarding a software application utilized by the enterprise are determined using an AI model. The intents are mapped to subsets of resources in the enterprise and/or the AI model is used to determine impacts of the subsets of the resources on the enterprise. In response to the security alert, a security action is performed with regard to the identified resource as a result of an intent and/or impact associated with the identified resource satisfying an action criterion associated with the security action.
Access management in a network-based service involves identifying an action involving a first account and determining whether an account policy forbids the action. If the action is forbidden, the system identifies an identity data structure linking the first account with a second account of a different type. The system determines whether the second account allows the action, sends a transmission to the user's device identifying the second account, and receives an access token. If the token is valid, the system performs the action. The system may also create a new account if policies allow.
A framework uses a service table and an API table to autonomously invoke downstream APIs. The service and API tables collectively define configurable “properties” for establishing authentication with the downstream APIs, formatting and communicating API requests in a manner supported by the respective downstream APIs, and processing API responses received from the downstream APIs. Service level properties defined in the service table may be used to establish authentication with the downstream API as well as communicate API requests/responses in a manner supported by the downstream API. Request level properties defined in the API table may include any property used to format or otherwise generate an API request and/or process an API response in a manner supported by a corresponding downstream API.
A coordinator for diverse artificial intelligence (AI) assistants (e.g., Copilot, Gemini) is disclosed that improves the efficiency of using multiple custom AI assistants for complex tasks. The example AI assistant coordinator receives a user input comprising a task and selects one or more AI assistants, from among a predefined set of AI assistants, to perform the task. Some scenarios partition the task into portions, each of which is performed by a different AI assistant, and then the multiple results are aggregated by the coordinator. Some scenarios use a result from one AI assistant within the tasking for another AI assistant. The AI coordinator is capable of emulating a human user when interacting with the AI assistants (i.e., when using an API is not feasible), and also performing an action such as sending an email or generating a document, based on the input task and results from the AI assistants.
A data processing system includes a processor; and a memory in communication with the processor. The memory contains executable instructions that, when executed by the processor alone or in combination with other processors, cause the data processing system to perform functions of: detecting column headers within an unstructured text using a trained classifier; prompting a Large Language Model (LLM) to produce a table sketch based on detected headers and the unstructured text; generating candidate rows for lines of the unstructured text not included in the table sketch using a symbolic system; ranking the candidate rows based on consistency with a consistency ranker; and assembling a final table based on the unstructured text by adding candidate rows based on rank to the table sketch.
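The table-assembly step described above can be sketched in Python. This is a minimal illustrative sketch only: the classifier, LLM, and symbolic row generator of the described system are replaced with stand-ins, and all function names are hypothetical. The consistency ranker is approximated here by a simple column-count score.

```python
def rank_candidate_rows(candidates, num_columns):
    """Rank candidate rows by a toy consistency score: rows whose
    column count matches the table sketch rank highest (stand-in
    for the described consistency ranker)."""
    return sorted(candidates, key=lambda row: -abs(len(row) - num_columns))

def assemble_table(headers, sketch_rows, candidate_rows):
    """Append ranked candidate rows to the LLM-produced table sketch,
    keeping only candidates that fit the detected header width."""
    num_columns = len(headers)
    ranked = rank_candidate_rows(candidate_rows, num_columns)
    extra = [r for r in ranked if len(r) == num_columns]
    return [headers] + sketch_rows + extra

headers = ["Name", "Qty"]
sketch = [["bolts", "12"]]                       # rows the LLM already placed
candidates = [["nuts", "8"], ["misc", "3", "x"]]  # second row is malformed
table = assemble_table(headers, sketch, candidates)
```

In this toy run the malformed three-column candidate is discarded and the well-formed row is appended below the sketch.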
Aspects of the disclosure include methods and systems for machine learning, and specifically to hardware and parameter-aware machine learning (ML) model graphics processing unit (GPU) efficiency tuning systems. A method includes receiving a request corresponding to a machine learning model training task, a plurality of fixed configurations, and a plurality of dynamic configurations. A task embedding is generated from the plurality of fixed configurations. A prediction module is trained on known dynamic and fixed configurations and, for each combination of a dynamic configuration and a fixed configuration, a respective model utilization score. A plurality of model utilization scores are generated for a plurality of respective candidate configurations sampled from the dynamic configurations.
Responsive to receiving the request, a response is returned including an optimal training efficiency configuration for the training task according to the plurality of model utilization scores.
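The selection of an optimal configuration from predicted utilization scores can be sketched as follows. This is an illustrative sketch under stated assumptions: the embedding and the predictor are toy stand-ins (the described system uses a trained prediction module), and all names are hypothetical.

```python
def task_embedding(fixed_config):
    """Toy task embedding: a tuple of the sorted fixed-configuration values."""
    return tuple(sorted(fixed_config.values()))

def predict_utilization(embedding, dynamic_config):
    """Stand-in utilization predictor: favors batch sizes that divide
    evenly across the GPU count."""
    batch, gpus = dynamic_config["batch_size"], dynamic_config["num_gpus"]
    return 1.0 if batch % gpus == 0 else (batch % gpus) / batch

def best_configuration(fixed_config, candidates):
    """Score each sampled candidate and return the highest-scoring one."""
    emb = task_embedding(fixed_config)
    return max(candidates, key=lambda c: predict_utilization(emb, c))

fixed = {"model_layers": 24, "precision_bits": 16}
candidates = [
    {"batch_size": 96, "num_gpus": 8},
    {"batch_size": 100, "num_gpus": 8},
]
best = best_configuration(fixed, candidates)
```

Here the 96-sample batch scores 1.0 (it divides evenly across 8 GPUs) and is returned as the optimal training efficiency configuration for the toy request.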
A method executed within a virtual machine (VM) host computer system includes determining, during VM provisioning, that a physical peripheral device with confidential and non-confidential modes is to be assigned to the VM. Based on an attribute linked to the VM, the VM host directs the physical peripheral device to switch to the confidential mode and subsequently connects the physical peripheral device to the VM. Later, during the shutdown of the VM, the VM host determines that the physical peripheral device is assigned to the VM and that the physical peripheral device is operating in confidential mode. The VM host instructs the physical peripheral device to switch to the non-confidential mode and unassigns the physical peripheral device from the VM.
Systems, devices, methods, and machine-readable media configured to provide voice synthetization in a multiplayer video game are provided. A system can include a multiplayer video game including a character selection interface through which a player selects a character to represent them in playing the video game, and a voice model trained to convert audio from the player directly into audio in a voice of the character and provide an output that includes the audio in the voice of the character.
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
A63F 13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
G10L 13/033 - Voice editing, e.g. manipulating the voice of the synthesiser
G10L 21/007 - Changing voice quality, e.g. pitch or formants characterised by the process used
G10L 25/18 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band
41.
SECURITY ACTION BASED ON COMMUNICATION-BASED ANALYSIS
Techniques are described herein that are capable of performing a security action based on a communication-based analysis. A security event, which is triggered by an operation performed by a user in an organization, is detected. A security analysis result is generated by determining whether a communication history of the user includes (1) a communication from the user that initiates an interaction with another user in the organization and/or a communication that is addressed specifically to the user from another user in the organization, (2) a communication from the user that references the operation, and/or (3) a communication that provides an explanation of a purpose of the operation that satisfies an explanation criterion. In response to the security event, a security action is performed with regard to the operation as a result of the security analysis result satisfying a security criterion.
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
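The communication-history analysis in entry 41 can be sketched with a simplified message record. This is a minimal sketch, assuming a toy message schema; the field names, the "because"-based explanation check, and the combination of signals are illustrative assumptions, not the described security criterion.

```python
def analyze_communications(user, operation, history):
    """Return True (a benign signal) when the user's communication history
    shows the described signals for the operation: an initiated or
    user-addressed communication, plus a reference to or explanation of
    the operation."""
    initiated = any(m["from"] == user for m in history)
    addressed = any(m.get("to") == user for m in history)
    references_op = any(operation in m["text"]
                        for m in history if m["from"] == user)
    explains = any("because" in m["text"].lower() for m in history)
    return (initiated or addressed) and (references_op or explains)

history = [
    {"from": "alice", "to": "bob",
     "text": "Exporting the Q3 report because audit asked."},
]
benign = analyze_communications("alice", "Q3 report", history)
```

A security action would then be taken only when this analysis result, combined with the triggering event, satisfies the security criterion.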
42.
System and Method for Central Processing Unit (CPU)-based Machine Learning Training Using Affinitized Threads
A method, computer program product, and computing system for assigning a data shard associated with a machine learning application to each CPU core of a plurality of CPU cores. The data shard of a respective CPU core is loaded to a corresponding affinitized cache memory. A processing thread for the data shard is assigned to the respective CPU core. Multiple processing threads for the data shard are executed using the same respective CPU core and the corresponding cache memory.
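The shard-per-core assignment in entry 42 can be sketched as below. Real affinitization would pin each thread to its core with an OS call such as `os.sched_setaffinity` on Linux; to stay portable, this illustrative sketch only records the shard-to-core mapping and runs one worker per core, with a summation standing in for the training work.

```python
from concurrent.futures import ThreadPoolExecutor

def assign_shards(data, num_cores):
    """Split the training data into one shard per CPU core (round-robin)."""
    return {core: data[core::num_cores] for core in range(num_cores)}

def train_shard(core, shard):
    # Stand-in for gradient computation over the core-local,
    # cache-resident shard.
    return core, sum(shard)

def run(data, num_cores):
    shards = assign_shards(data, num_cores)
    with ThreadPoolExecutor(max_workers=num_cores) as pool:
        return dict(pool.map(lambda kv: train_shard(*kv), shards.items()))

results = run(list(range(8)), num_cores=2)
```

With 8 samples and 2 cores, core 0 processes samples 0, 2, 4, 6 and core 1 processes samples 1, 3, 5, 7, each against its own cache-resident shard.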
A method for objectively evaluating content output by a retrieval augmented generation (RAG) system includes obtaining question-answer information for one or more data chunks residing in a source index and prompting a large language model (LLM) to generate one or more answer construct conditions for a first test question included in the question-answer information. Each of the answer construct conditions identifies a condition that is satisfied by a ground truth answer to the first test question. The method further includes generating a question-specific evaluation metric for the first test question based on the answer construct conditions and prompting multiple differently configured retrieval augmented generation (RAG) systems to answer the first test question based on information within the source index. The method additionally includes evaluating multiple answers to the first test question generated by the multiple RAG systems by repeatedly assessing the question-specific evaluation metric and presenting, on a user interface, comparative quality data quantifying a relative quality of the multiple answers generated by the multiple RAG systems.
G06N 3/006 - Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
A method for operating a transformer model includes algorithmically allocating a fixed budget for a key-value cache between multiple decoding layers per an allocation scheme that ensures progressively higher decoding layers in the transformer model are allocated progressively smaller quantities of cache memory. The method further includes configuring each of the multiple decoding layers of the transformer model to retain no more than a maximum number of key-value vector pairs in the key-value cache during a token decoding operation, the maximum number of key-value vector pairs being independently determined for each decoding layer of the multiple decoding layers based on the cache memory that is allocated to the decoding layer.
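One possible allocation scheme satisfying the stated constraint can be sketched as follows: a fixed total key-value cache budget is split so that each successively higher decoding layer receives a strictly smaller share. The geometric decay factor is an illustrative assumption, not the scheme disclosed.

```python
def allocate_kv_budget(total_pairs, num_layers, decay=0.7):
    """Split `total_pairs` key-value cache slots across decoding layers
    with geometric decay, so progressively higher layers are allocated
    progressively fewer slots."""
    weights = [decay ** i for i in range(num_layers)]
    scale = total_pairs / sum(weights)
    return [max(1, int(w * scale)) for w in weights]

alloc = allocate_kv_budget(total_pairs=1000, num_layers=4)
# Each layer would then evict key-value pairs beyond its per-layer maximum
# during token decoding.
assert all(alloc[i] > alloc[i + 1] for i in range(len(alloc) - 1))
```

Each layer's entry in `alloc` would serve as that layer's independently determined maximum number of retained key-value vector pairs.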
A technique for enhancing video representation in network-based meetings dynamically replaces low-quality video feeds with animated avatars. The system evaluates individual video feeds against quality thresholds related to head pose, facial feature visibility, and image clarity. When a feed fails to meet these thresholds, an animation of the participant is generated using a previously captured image. Speech context analysis enables the application of realistic facial expressions and lip movements to the animation. The animated avatar, synchronized with the speech of the participant, is then displayed in place of the original video feed, within the user interface of the network-based meeting. This approach maintains visual engagement for remote participants, even when in-room attendees are partially occluded, poorly captured by the camera, or have suboptimal head poses.
Systems, devices, methods, and machine-readable media configured to provide voice synthetization in a multiplayer video game are provided. A system can include a multiplayer video game including a character selection interface through which a player selects a character to represent them in playing the video game, and a voice model trained to convert audio from the player directly into audio in a voice of the character and provide an output that includes the audio in the voice of the character.
A63F 13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
A63F 13/424 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving acoustic input signals, e.g. by using the results of pitch or rhythm extraction or voice recognition
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/87 - Communicating with other players during game play, e.g. by e-mail or chat
G10L 13/00 - Speech synthesis; Text to speech systems
G10L 25/00 - Speech or voice analysis techniques not restricted to a single one of groups
Systems, devices, methods, and machine-readable media configured to provide voice inference in a video game are provided. A video game system can include an encoder configured to generate a first encoding representative of physical characteristics of a specified entity, a similarity operator configured to determine similarity values between (i) corresponding stored encodings of multiple characters, the stored encodings representative of physical characteristics of respective characters of the multiple characters and (ii) the first encoding, identify a selected character from the multiple characters based on the similarity values, and provide an identifier of the selected character, a voice database configured to provide audio or a spectrogram of the selected character, and a video game configured to provide the audio of a player-selected character in a voice of the selected character.
A63F 13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/87 - Communicating with other players during game play, e.g. by e-mail or chat
G10L 13/00 - Speech synthesis; Text to speech systems
48.
DISTRIBUTED CAPACITIVE ENERGY STORAGE FOR FLOW BATTERIES
"Purely electrical" solutions to power oscillations involve expensive storage techniques (e.g., batteries and/or capacitors) or wasted energy in "dummy loads" (e.g., resistive banks and/or heaters). Some power delivery systems may incorporate flow batteries as part of the energy storage and delivery solution, particularly using piped electrolyte to distribute power directly to storage racks. For longer duration fluctuations in power consumption, flow batteries may store power during off-peak demand periods and release power during peak demand periods. However, flow batteries typically do not react fast enough to compensate for rapid fluctuations in power consumption. The presently disclosed technology utilizes the pipework of electrolyte distribution systems in place for the flow battery as a distributed electrolytic capacitor. This form of "fast" energy storage is ideally suited to complement "slow" chemical energy storage of a flow battery and is thus capable of acting as a power-smoothing solution and a UPS supplement or replacement.
A fast-switch mode is disclosed for switching a virtual machine (VM) host computer system between standard and confidential VM hosting modes. During initialization of a hypervisor during cold-boot of a VM host, the host enables the fast-switch mode, including allocating a contiguous memory portion to an address translation lookup table. The VM host initially operates in the standard VM hosting mode without a secure virtualization feature enabled. The VM host later switches to the confidential VM hosting mode by activating the secure virtualization feature and utilizing the address translation lookup table. Alternatively, the VM host initially operates in the confidential VM hosting mode without the secure virtualization feature enabled, and later switches to the standard VM hosting mode by deactivating the secure virtualization feature. In one example, the address translation lookup table is a reverse map table (RMP), and the secure virtualization feature is secure encrypted virtualization-secure nested paging (SEV-SNP).
G06F 9/455 - EmulationInterpretationSoftware simulation, e.g. virtualisation or emulation of application or operating system execution engines
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
A coordinator for diverse artificial intelligence (AI) assistants (e.g., Copilot, Gemini) is disclosed that improves the efficiency of using multiple custom AI assistants for complex tasks. The example AI assistant coordinator receives a user input comprising a task and selects one or more AI assistants, from among a predefined set of AI assistants, to perform the task. Some scenarios partition the task into portions, each of which is performed by a different AI assistant, and then the multiple results are aggregated by the coordinator. Some scenarios use a result from one AI assistant within the tasking for another AI assistant. The AI coordinator is capable of emulating a human user when interacting with the AI assistants (i.e., when using an API is not feasible), and also performing an action such as sending an email or generating a document, based on the input task and results from the AI assistants.
Systems and methods for a rubric engine for generation of customized and tailored rubrics are provided herein. In an example, the rubric engine may receive, from a client device, an indication to generate a rubric for an assignment. The rubric engine may determine an assignment type for the assignment and one or more evaluation criteria for the rubric based on the assignment type. In some cases, the rubric engine may also determine a rubric scale for the rubric and/or audience context for the assignment. Responsive to these determinations, the rubric engine may generate the rubric for the assignment based on the evaluation criteria and the audience context for the assignment. The rubric may include an assessment for each of the one or more evaluation criteria across the rubric scale. Once generated, the rubric engine may associate the rubric with the assignment.
G09B 7/02 - Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by the student
An electronic device includes a device housing and a secondary component removably affixed to the device housing. A dual screw assembly is used to affix the secondary component to the device housing. The dual screw assembly includes a first screw inserted into a housing aperture of the device housing, such that a portion of the first screw extends into an interior of the device housing, and a second screw inserted into a screw aperture formed within a head of the first screw, such that at least a portion of the secondary component is positioned between a head of the second screw and the head of the first screw.
H05K 5/00 - Casings, cabinets or drawers for electric apparatus
53.
System and Method for Generative Artificial Intelligence (AI) Model-based Resolution of Ambiguities in Natural Language Query-to-Database Command Query Conversion
A method, computer program product, and computing system for processing a natural language query concerning data stored within an electronic database. The natural language query is converted into a database command query using a generative AI model. A flagged ambiguity is generated by flagging an ambiguity associated with the natural language query using the generative AI model. User feedback concerning the flagged ambiguity is obtained. The database command query is revised by processing the natural language query, the database command query, and the flagged ambiguity using the generative AI model.
Access management in a network-based service involves identifying an action involving a first account and determining if an account policy forbids the action. If forbidden, the system identifies an identity data structure linking the first account with a second account of a different type. The system determines if the second account allows the action, sends a transmission to the user's device identifying the second account, and receives an access token. If the token is valid, the system performs the action. The system may also create a new account if policies allow.
A fast-switch mode is disclosed for switching a virtual machine (VM) host computer system between standard and confidential VM hosting modes. During initialization of a hypervisor during cold-boot of a VM host, the host enables the fast-switch mode, including allocating a contiguous memory portion to an address translation lookup table. The VM host initially operates in the standard VM hosting mode without a secure virtualization feature enabled. The VM host later switches to the confidential VM hosting mode by activating the secure virtualization feature and utilizing the address translation lookup table. Alternatively, the VM host initially operates in the confidential VM hosting mode without the secure virtualization feature enabled, and later switches to the standard VM hosting mode by deactivating the secure virtualization feature. In one example, the address translation lookup table is a reverse map table (RMP), and the secure virtualization feature is secure encrypted virtualization-secure nested paging (SEV-SNP).
G06F 21/54 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by adding security routines or objects to programs
G06F 12/14 - Protection against unauthorised use of memory
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
A data processing system for a cloud-based collaboration and communication application receives a call to a first application programming interface (API) at a collaboration and communication service from a client application during a boot process for the client application. The first API call indicates that a web uniform resource locator (URL) and valid domains for a tenancy are being requested. A call to a second API for a content and information management service is generated that requests site collection information for the tenancy. The site collection information is received and a site list for the tenancy is generated using the collaboration and communication service. A first site in the site list is the web URL of the root site and remaining sites in the site list are the valid domains for each instance of the tenancy.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
Systems, devices, methods, and machine-readable media configured to provide voice inference in a video game are provided. A video game system can include an encoder configured to generate a first encoding representative of physical characteristics of a specified entity, a similarity operator configured to determine similarity values between (i) corresponding stored encodings of multiple characters, the stored encodings representative of physical characteristics of respective characters of the multiple characters and (ii) the first encoding, identify a selected character from the multiple characters based on the similarity values, and provide an identifier of the selected character, a voice database configured to provide audio or a spectrogram of the selected character, and a video game configured to provide the audio of a player-selected character in a voice of the selected character.
A63F 13/63 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by the player, e.g. authoring using a level editor
A63F 13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
G10L 13/033 - Voice editing, e.g. manipulating the voice of the synthesiser
Aspects of the disclosure include machine learning architectures with task agnostic embedding-based labeling escalation on the fly. A method includes receiving a request corresponding to a task and generating, by a first pass system, a first decision. The first pass system includes a first pass model having a first complexity. The method includes generating, for the task, a task embedding in an embedding space, determining, in the embedding space, a top K subspace having K embeddings having K closest distances to the task embedding, and determining embedding labels for the K embeddings. The method includes determining to escalate the task to a second pass system having a second pass model having a second, higher complexity and, responsive to determining the embedding labels, generating, by the second pass system, a second decision for the task and returning, responsive to receiving the request, a response including the second decision.
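The top-K neighbor lookup and escalation decision above can be sketched in Python. This is a minimal sketch, assuming one-dimensional toy embeddings and a disagreement-based escalation rule; the labels, distances, and threshold are illustrative assumptions rather than the disclosed criteria.

```python
def top_k_labels(task_emb, labeled_embs, k):
    """Return labels of the k stored embeddings nearest the task embedding
    (plain absolute distance in this 1-D toy space)."""
    ranked = sorted(labeled_embs, key=lambda e: abs(e[0] - task_emb))
    return [label for _, label in ranked[:k]]

def should_escalate(task_emb, labeled_embs, k, hard_threshold=0.5):
    """Escalate to the heavier second-pass model when enough of the task's
    nearest neighbors are labeled as hard cases."""
    labels = top_k_labels(task_emb, labeled_embs, k)
    return labels.count("hard") / len(labels) >= hard_threshold

stored = [(0.1, "easy"), (0.2, "hard"), (0.9, "easy"), (0.15, "hard")]
escalate = should_escalate(task_emb=0.18, labeled_embs=stored, k=3)
```

For the toy task embedding 0.18, two of the three nearest neighbors are labeled "hard", so the task would be escalated to the second pass system.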
Back-to-back stacked silicon-based capacitors in a package substrate for a system-on-chip (SoC) and methods of forming the same are described. An example system includes a package substrate comprising a core layer including plated-through holes. The system further includes at least one die mounted on top of the package substrate, where the at least one die includes at least one voltage domain. The system further includes a set of back-to-back stacked silicon-based capacitors formed within the core layer of the package substrate. The set of back-to-back stacked silicon-based capacitors may be formed in slots within the core layer in regions excluding the plated-through holes. A subset of the set of back-to-back stacked silicon-based capacitors may be coupled to components within the at least one voltage domain to manage an impedance associated with the at least one voltage domain.
H01L 21/48 - Manufacture or treatment of parts, e.g. containers, prior to assembly of the devices, using processes not provided for in a single one of the groups or
H01L 23/00 - Details of semiconductor or other solid state devices
H01L 23/538 - Arrangements for conducting electric current within the device in operation from one component to another the interconnection structure between a plurality of semiconductor chips being formed on, or in, insulating substrates
H01L 25/16 - Assemblies consisting of a plurality of individual semiconductor or other solid-state devices the devices being of types provided for in two or more different subclasses, e.g. forming hybrid circuits
60.
DYNAMIC REAUTHORIZATION THRESHOLDS FOR CLOUD RESOURCE ALLOCATION
A dynamic reauthorization threshold is disclosed. A cloud customer is assigned a first reauthorization threshold associated with a first resource usage quota. When a current usage of the customer exceeds the first reauthorization threshold, a customer reauthorization process is performed. If the customer fails the reauthorization process, resources allocated to the customer are deallocated. If the customer passes the reauthorization process, the customer is assigned a next reauthorization threshold associated with a second resource usage quota. The next reauthorization threshold is determined based on at least one of: a second reauthorization threshold determined based at least on a variance of the current resource usage from the first resource usage quota, a base scaling factor, and a randomization factor, or a third reauthorization threshold determined by a machine learning model based at least on the first reauthorization threshold, the current resource usage, and the second reauthorization threshold.
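The variance-based branch of the threshold update in entry 60 can be sketched as below. The base scaling factor and randomization bounds are illustrative values, and the seeded generator is only for reproducibility; the disclosed system may combine this with the machine-learning-determined threshold instead.

```python
import random

def next_threshold(current_usage, quota, prev_threshold,
                   base_scale=1.5, rng=random.Random(0)):
    """Compute the next reauthorization threshold from the variance of the
    current resource usage against its quota, scaled by a base factor and
    lightly randomized (jitter discourages customers from gaming a fixed
    schedule)."""
    variance = abs(current_usage - quota)
    randomization = 1 + rng.uniform(-0.1, 0.1)
    return (prev_threshold + variance * base_scale) * randomization

t1 = 100.0
t2 = next_threshold(current_usage=110, quota=100, prev_threshold=t1)
```

When current usage exceeds the quota, the next threshold grows beyond the previous one, so the customer is reauthorized again only after consuming meaningfully more resources.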
A computer-implemented method includes dividing a data file into a plurality of data segments and assigning segment identifiers to the data segments, respectively. The data segments, if combined using a first sequence of the segment identifiers, form the data file. The method also includes shuffling, according to a reordering pattern, the segment identifiers into a second sequence that is different from the first sequence. The reordering pattern indicates a mapping between the first sequence and the second sequence. The method further includes representing the reordering pattern in metadata, and providing the plurality of data segments and the metadata to an interface for an implementation of one or more networking protocols, which implements at least a transport protocol. The plurality of data segments are provided to the interface in order of the second sequence. Related systems and software are also disclosed.
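The segment-shuffling scheme above can be sketched end to end. This is an illustrative sketch: the fixed permutation stands in for a real reordering pattern, and handing the shuffled segments to a transport-protocol interface is abbreviated to a plain reassembly call.

```python
def split_file(data, segment_size):
    """Divide the data file into numbered segments (identifier -> bytes)."""
    count = (len(data) + segment_size - 1) // segment_size
    return {i: data[i * segment_size:(i + 1) * segment_size]
            for i in range(count)}

def shuffle_order(segment_ids, pattern):
    """Apply the reordering pattern (a permutation of positions) to produce
    the second sequence."""
    return [segment_ids[p] for p in pattern]

def reassemble(segments, shuffled_ids, metadata):
    """Invert the pattern carried in metadata to recover the first sequence,
    then combine the segments back into the original file."""
    pattern = metadata["pattern"]
    original = [None] * len(shuffled_ids)
    for pos, p in enumerate(pattern):
        original[p] = shuffled_ids[pos]
    return b"".join(segments[i] for i in original)

data = b"the quick brown fox"
segments = split_file(data, segment_size=5)
ids = sorted(segments)                    # first sequence: 0, 1, 2, 3
pattern = [2, 0, 3, 1]                    # hypothetical reordering pattern
shuffled = shuffle_order(ids, pattern)    # second sequence, sent on the wire
restored = reassemble(segments, shuffled, {"pattern": pattern})
```

A receiver holding the metadata can thus restore the file even though the segments were provided to the networking interface out of their original order.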
An image processing unit may select a pixel in the line of pixels by enabling a first connection between a source driver and an input of refresh circuitry of the pixel along a content data line, based at least in part on receipt of a pixel select signal corresponding to the pixel, wherein at least one of the pixels in the line of pixels is not selected. An image processing unit may refresh, based at least in part on receiving a pixel refresh signal, the selected pixel with content data provided using the content data line by enabling a second connection between the input of the refresh circuitry and the pixel along the content data line, wherein refreshing the selected pixel includes providing content data to the pixel using the content data line and not providing content data to the at least one of the pixels.
G09G 3/20 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
The described technology provides a method including determining, based on a system physical address, a cluster of L3 cache nodes that are linked to a group of memory controller nodes, determining, based on the system physical address, an L3 cache node tied to a component hub in a SoC mesh, determining a memory controller node in the SoC mesh that maps to the system physical address, generating a deinterleaved address by relocating low DRAM space of the system physical address and removing the cache cluster bits from the system physical address, mapping the deinterleaved physical address to a DRAM address by assigning bits to DRAM address components, and storing the bit assignments of the DRAM address components.
The described technology provides a method including determining a plurality of blocks configured on a system on chip (SoC), generating a register database, wherein the register database is configured to store one or more parameters of a plurality of registers, the plurality of registers representing the registers for a plurality of blocks of the SoC, determining one or more pattern strings, wherein each of the strings identifies a common functionality among a plurality of registers, performing a search on the register database for identifying a group of registers from the plurality of registers, wherein names of each of the group of registers include the pattern string, and generating a virtual register that relates to the group of registers.
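The register-database search and virtual-register generation can be sketched as follows. This is a minimal sketch; the block and register names, the parameter shape, and the virtual-register structure are hypothetical.

```python
def build_register_db(blocks):
    """Flatten per-block register maps into one database of
    qualified-name -> parameters."""
    db = {}
    for block, registers in blocks.items():
        for name, params in registers.items():
            db[f"{block}.{name}"] = params
    return db

def virtual_register(db, pattern):
    """Group all registers whose names contain the pattern string into a
    single virtual register relating to that common functionality."""
    members = sorted(name for name in db if pattern in name)
    return {"pattern": pattern, "members": members}

blocks = {
    "uart0": {"CTRL_EN": {"width": 1}, "STATUS": {"width": 8}},
    "uart1": {"CTRL_EN": {"width": 1}},
}
vreg = virtual_register(build_register_db(blocks), "CTRL_EN")
```

The resulting virtual register groups the enable bits of both UART blocks, so the common functionality can be addressed through one handle.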
A corrupter may receive first output data of a designated domain from a large language model. The corrupter may synthesize qualified corrupt data for training the safeguard model configured to detect errors in second output of the large language model by: identifying a mapping of a first entity of the first output data to a first concept in an ontology corresponding to the designated domain, and generating the qualified corrupt data by replacing the first entity in the first output data with a second entity, wherein the second entity is mapped to a second concept of the ontology that complies with a predefined corruption rule relative to the first concept of the ontology.
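The corruption step can be sketched with a toy ontology. This is an illustrative sketch only: the entities, the concept mapping, and the default sibling-concept rule are assumptions standing in for the described domain ontology and predefined corruption rule.

```python
# Toy entity -> concept ontology for a medical-style designated domain.
ONTOLOGY = {
    "aspirin": "analgesic",
    "ibuprofen": "analgesic",
    "amoxicillin": "antibiotic",
}

def corrupt(output_text, entity, rule=lambda c1, c2: c1 == c2):
    """Replace `entity` in the model output with another entity whose
    concept complies with `rule` relative to the original concept
    (default rule: a sibling sharing the same concept)."""
    concept = ONTOLOGY[entity]
    for candidate, candidate_concept in ONTOLOGY.items():
        if candidate != entity and rule(concept, candidate_concept):
            return output_text.replace(entity, candidate)
    return output_text  # no compliant replacement found

corrupted = corrupt("Take aspirin twice daily.", "aspirin")
```

The swap yields plausible-looking but wrong output ("ibuprofen" for "aspirin"), exactly the kind of qualified corrupt data useful for training an error-detecting safeguard model.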
A code review comment is automatically generated using multiple agents that perform a dedicated task using a particular language model. A code quality estimator agent uses a code quality encoder model to determine whether a code change to a file of a repository presents a risk to the repository if merged. For those code changes classified as presenting a risk, a comment generator agent uses a generative language model to generate an initial code review comment for the code change and determines a severity of the issue with the code change. A comment critic agent uses a reasoning language model to critique the initial code review comment generated by the generative language model. A final code review comment is output by the comment critic agent when the comment critic agent determines that the initial code review comment is satisfactory.
G06F 8/71 - Version control; Configuration management
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
The description relates to cold plate assemblies configured to cool computer chips. One example includes a manifold and a conformable cold plate positioned against the manifold to form a fluid passageway. The conformable cold plate is configured to conform to a shape of a computer chip positioned against the conformable cold plate when exposed to fluid pressure from the fluid passageway.
H01L 23/473 - Arrangements for cooling, heating, ventilating or temperature compensation involving the transfer of heat by flowing fluids by flowing liquids
A method executed within a virtual machine (VM) host computer system includes determining, during VM provisioning, a physical peripheral device with confidential and non-confidential modes is to be assigned to the VM. Based on an attribute linked to the VM, the VM host directs the physical peripheral device to switch to the confidential mode and subsequently connects the physical peripheral device to the VM. Later, during the shutdown of the VM, the VM host determines that the physical peripheral device is assigned to the VM and that the physical peripheral device is operating in confidential mode. The VM host instructs the physical peripheral device to switch to the non-confidential mode and unassigns the physical peripheral device from the VM.
In various examples, input queries (e.g. open user queries) are used in combination with predefined queries to perform security policy-related actions using a generative machine learning (GML) model or GML models. In one example, an input query relating to a security policy is matched with a predefined query stored in an instruction database. In some examples, the instruction database contains examples of structured configuration data, which in turn can be used by a GML model to configure a predetermined extractor code module to perform a specific policy-related action. In other examples, a security context relating to a security policy is used together with an input query and template query to generate a GML model query. In some examples, the two approaches are combined.
70.
GUARDING MULTIMODAL ARTIFICIAL INTELLIGENCE SYSTEMS FROM MALICIOUS PROMPT ATTACKS
A data processing system implements obtaining a plurality of unlabeled user prompts including an unknown mixture of malicious prompts and benign prompts; analyzing each unlabeled user prompt using a multimodal vision language model to obtain embeddings representing each unlabeled user prompt; analyzing the embeddings to determine a representation of each unlabeled user prompt of the plurality of unlabeled user prompts in a latent space; determining a first region of the latent space associated with benign user prompts and a second region of the latent space associated with malicious user prompts; generating labeled training data by labeling each unlabeled user prompt of the plurality of unlabeled user prompts with an indication whether each unlabeled user prompt is a benign user prompt falling within the first region or a malicious user prompt falling within the second region; and training a prompt classifier (106) using the labeled training data.
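The region-based labeling step can be sketched as a nearest-region test in the latent space. The 2D embeddings and the two region centers below are toy assumptions; the actual embeddings come from the multimodal vision language model.

```python
# Sketch: label each prompt benign or malicious by which latent-space
# region its embedding falls into (nearest of two region centers).

def label_prompts(embeddings, benign_center, malicious_center):
    labels = []
    for x, y in embeddings:
        # Squared Euclidean distance to each region center.
        d_benign = (x - benign_center[0])**2 + (y - benign_center[1])**2
        d_malicious = (x - malicious_center[0])**2 + (y - malicious_center[1])**2
        labels.append("benign" if d_benign <= d_malicious else "malicious")
    return labels

# Toy 2D embeddings standing in for the model's latent representations.
embeddings = [(0.1, 0.2), (0.9, 0.8), (0.2, 0.1)]
labels = label_prompts(embeddings, benign_center=(0, 0), malicious_center=(1, 1))
```

The resulting labels pair each prompt with its region, yielding the labeled training data used to train the prompt classifier.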
71.
SECURITY GRAPH CARDINALITY REDUCTION IN NETWORK-BASED COMPUTER SYSTEMS
Systems, methods, and techniques are directed to reducing cardinality in graphs of network-based computer systems. In an example, a graph representative of resources in the network-based computer system is generated. The graph comprises nodes representative of resources and edges between nodes representative of relationships between respective nodes. A level of structural similarity between first and second nodes is determined to satisfy a structural similarity criterion. The first and second nodes are grouped in a grouped node, resulting in a modified graph. A security vulnerability of the computer system is identified based on the modified graph. Performance of a mitigation step with respect to the security vulnerability is caused. In another aspect, an edge associated with the first node is grouped in a grouped edge with an edge associated with the second node. In another aspect, a grouped node is grouped with another grouped node as a parent grouped node.
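The node-grouping step can be sketched with one concrete structural similarity criterion. Using Jaccard overlap of neighbor sets, and the 0.8 threshold, are illustrative assumptions; the description leaves the criterion open.

```python
# Sketch: reduce graph cardinality by grouping structurally similar nodes.
# "Structural similarity" here is Jaccard overlap of neighbor sets.

def jaccard(a, b):
    """Jaccard similarity of two neighbor sets."""
    return len(a & b) / len(a | b) if a | b else 1.0

def group_similar_nodes(adjacency, threshold=0.8):
    """Map each grouped-away node to the node it was merged into."""
    nodes = list(adjacency)
    merged = {}
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if jaccard(adjacency[u], adjacency[v]) >= threshold:
                merged[v] = u  # group v into u's grouped node
    return merged

# Toy resource graph: two VMs share identical relationships, so they can
# be represented by a single grouped node in the modified graph.
adjacency = {
    "vm1": {"subnet-a", "disk1"},
    "vm2": {"subnet-a", "disk1"},
    "fw":  {"subnet-a"},
}
groups = group_similar_nodes(adjacency)
```

Vulnerability analysis then runs on the smaller modified graph, with each grouped node standing in for all of its members when a mitigation step is applied.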
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
A system may receive, at a firewall, a data packet destined to a conflicting private IP address of the first private subnetwork and the second private subnetwork within the private network, the data packet including a destination IP address identifying the firewall. The system may evaluate, at the firewall, the data packet to determine whether a source IP address of the data packet satisfies a routing condition corresponding to a routing rule. The system may apply, at the firewall, the routing rule to determine a translated destination IP address of the first private subnetwork. The system may send the data packet to the first private subnetwork.
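The source-conditioned translation can be sketched as a small rule table. The addresses, the prefix-match routing condition, and the rule shape are illustrative assumptions; the point is that the source IP disambiguates which subnetwork's copy of the conflicting address the packet should reach.

```python
# Sketch: a firewall resolves a conflicting private IP by matching the
# packet's source against routing rules that rewrite the destination.

ROUTING_RULES = [
    # (source-prefix condition, translated destination address)
    ("192.168.1.", "10.1.0.5"),  # first private subnetwork's host
    ("192.168.2.", "10.2.0.5"),  # second private subnetwork's host
]

def route(packet):
    """Apply the first matching rule; otherwise pass the packet through."""
    for prefix, translated_dst in ROUTING_RULES:
        if packet["src"].startswith(prefix):  # routing condition
            return {**packet, "dst": translated_dst}
    return packet

# Packet arrives addressed to the firewall, bound for the conflicting IP.
packet = {"src": "192.168.1.7", "dst": "10.0.0.5"}
routed = route(packet)
```

After translation the packet carries an unambiguous destination, so it can be forwarded into the first private subnetwork even though both subnetworks use the same private address internally.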
Methods, systems, and computer storage media for providing domain-integrated contextual response management using a domain-integrated contextual response engine in an artificial intelligence (AI) system are described. Domain-integrated contextual response management is a systematic approach that combines specific industry knowledge with contextual understanding to generate accurate, relevant, and specific industry-tailored responses to user queries. Domain-integrated contextual response management further includes fine-tuning models for Retrieval-Augmented Generation (RAG) tasks using customer-specific data based on a two-fold approach involving skill distillation and knowledge distillation (i.e., skill distillation from a more powerful model, such as a large language model (LLM), and knowledge distillation from domain-specific data). Domain-integrated contextual response management also includes creating a synthetic dataset that enables smaller models (e.g., domain-integrated contextual response models) to effectively manage RAG tasks while incorporating domain-specific knowledge. Domain-integrated contextual response management further ensures that the domain-integrated contextual response models can retrieve relevant information, support citations, and decline out-of-domain (OOD) questions.
A data processing system implements obtaining a first textual content, segmenting the first textual content into a plurality of first segments, and providing each segment of the plurality of first segments to a first natural language processing (NLP) model to obtain a set of first readability scores for the plurality of first segments. The first NLP model is configured to analyze a textual input and to output a readability score representing a measurement of readability of the textual input. The system further implements aggregating the set of first readability scores to determine a first readability score for the first textual content, and performing at least one of causing the first readability score to be presented to a user or performing one or more actions on the first textual content based on the first readability score.
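The segment-score-aggregate pipeline can be sketched as follows. The per-segment scorer here is a stand-in heuristic (average word length), not the NLP model itself, and mean aggregation is one assumed choice.

```python
# Sketch: segment text, score each segment, aggregate into one document
# score. The scoring heuristic stands in for the NLP readability model.

def segment(text):
    """Split text into sentence-like segments."""
    return [s.strip() for s in text.split(".") if s.strip()]

def score_segment(seg):
    """Toy proxy: shorter average word length -> higher readability."""
    words = seg.split()
    avg_len = sum(len(w) for w in words) / len(words)
    return max(0.0, 1.0 - avg_len / 10.0)

def document_readability(text):
    scores = [score_segment(s) for s in segment(text)]
    return sum(scores) / len(scores)  # aggregate by mean

r = document_readability("The cat sat. Quantification methodologies proliferate.")
```

Segmenting before scoring lets the system handle long documents that would exceed the model's input limit, and the aggregated score drives the presentation or follow-up actions described above.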
A satellite is provided, including an onboard computing device. The onboard computing device may include a processor configured to receive training data while the satellite is in orbit. The processor may be further configured to perform training of a machine learning model based at least in part on the training data. The processor may be further configured to generate model update data that specifies a modification made to the machine learning model during the training. The processor may be further configured to transmit the model update data from the satellite to an additional computing device.
A computing device may perform, via one or more electrostatic antennas, a power-saving search cycle for detecting an electronic stylus device. A computing device may, based at least in part on detecting a trigger event: cease to perform the power-saving search cycle for detecting the electronic stylus device; and perform a power-intensive search cycle for detecting the electronic stylus device, wherein an energy usage by the computing device for the power-intensive search cycle is greater than an energy usage by the computing device for the power-saving search cycle. A computing device may synchronize, based on detecting the electronic stylus device during the power-intensive search cycle, the computing device with the electronic stylus device.
An exemplary implementation verifies an answer generated by a generative artificial intelligence model. The answer is responsive to an augmented query. The augmented query comprises a query submitted by a user and a request to answer the query using evidence determined to be relevant to the query. The answer is verified to be relevant to both the query and the evidence, facts are extracted from the evidence, and claims are extracted from the answer. Claim-to-fact entailment scores corresponding to each of the respective claim-to-fact pairings are determined. The scores are determined by an agreement analyzer comprising a natural language inference model. The answer is verified based on a check of the scores. A verified answer, a qualified answer, and/or a failure message is communicated to the user based on the verification.
The description relates to cold plate assemblies configured to cool computer chips. One example includes a manifold and a conformable cold plate positioned against the manifold to form a fluid passageway. The conformable cold plate is configured to conform to a shape of a computer chip positioned against the conformable cold plate when exposed to fluid pressure from the fluid passageway.
In various examples, input queries (e.g. open user queries) are used in combination with predefined queries to perform security policy-related actions using a generative machine learning (GML) model or GML models. In one example, an input query relating to a security policy is matched with a predefined query stored in an instruction database. In some examples, the instruction database contains examples of structured configuration data, which in turn can be used by a GML model to configure a predetermined extractor code module to perform a specific policy-related action. In other examples, a security context relating to a security policy is used together with an input query and template query to generate a GML model query. In some examples, the two approaches are combined.
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
The description relates to hinged devices, such as hinged computing devices. One example can include a first portion secured to a first hinge arm that is configured to rotate around a first hinge axis and a second portion secured to a second hinge arm that is configured to rotate around a second hinge axis. A timing shuttle can be positioned on a central shaft that is located between the first hinge axis and the second hinge axis and is configured to control a frictional torque experienced by the first and second hinge arms depending upon orientation of the first and second hinge arms and to synchronize rotation of the first and second hinge arms around the first and second hinge axes.
Methods and systems are provided for classifying free-text content using machine learning. Free-text content (e.g., customer feedback) and parameter values organized according to a schema are received. A free-text corpus is generated, and an artificial-text corpus is generated by applying rules to the parameter values. The artificial-text corpus is generated by converting the parameter values into a finite set of words based on the rules and concatenating the words of the finite set of words into a fixed sequence wordlist. Feature vectors (e.g., sentence embeddings) based on the free-text corpus and the artificial-text corpus are combined and forwarded to a machine learning model for classification. The machine learning model may be trained with a bias towards a specified metric (e.g., precision, recall, F1 score). The model may be trained using transfer learning with training data from a different category of free-text content (e.g., a different category of customer feedback).
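The artificial-text generation step can be sketched concretely. The schema fields, the bucketing rules, and the fixed sequence below are illustrative assumptions; the key ideas from the description are the finite word set and the fixed concatenation order.

```python
# Sketch: convert schema parameter values into a finite set of words and
# concatenate them in a fixed sequence to form an artificial-text record.

RULES = {
    "latency_ms": lambda v: "slow" if v > 200 else "fast",
    "tier":       lambda v: v.lower(),                    # categorical value
    "errors":     lambda v: "erroring" if v > 0 else "clean",
}
FIXED_SEQUENCE = ["tier", "latency_ms", "errors"]  # fixed wordlist order

def artificial_text(params):
    """Apply each rule and join the words in the fixed sequence."""
    return " ".join(RULES[key](params[key]) for key in FIXED_SEQUENCE)

record = {"latency_ms": 350, "tier": "Premium", "errors": 0}
text = artificial_text(record)
```

Because every record maps to the same fixed-length word sequence, the artificial-text corpus embeds cleanly alongside the free-text corpus, and the two feature vectors can be combined for the classifier.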
Methods, systems, and apparatuses include receiving input from a client device to facilitate electronic messaging between a first user associated with first attribute data and a second user, where the client device provides a messaging interface that facilitates the electronic messaging. A messaging intent is determined based on the first attribute data of the first user, where the messaging intent corresponds to a purpose of the electronic messaging. A set of attributes of the first attribute data is mapped to prompt inputs based on the messaging intent. A generative language model is applied to the prompt inputs. Suggestions for adding messaging content in the messaging interface are output by the generative language model based on the prompt inputs. The suggestions are presented on the messaging interface.
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
One example provides, on a quantum computing device (10, 400), a method of tuning the couplings of a set of quantum dots (QD1, QD2, QD3, QD4, QD5, QD6, QD7) to a pair of Majorana zero modes (MZMs) for performing Majorana Parity Readout (MPR). The quantum computing device comprises at least one readout resonator. The method comprises, using the readout resonator, measuring (700) a plurality of resonator responses to form a measured array, preparing (704) reference datasets, and comparing (706) the measured array and the reference datasets to determine a distance between the measured array and a plurality of combinations of parameters within the reference datasets to locate a selected combination of parameters with a lowest distance from the measured array. The method further comprises using the selected combination of parameters to tune the coupling of the set of quantum dots to the pair of MZMs to perform an MPR.
A video prediction technique generates a motion graph based on given video frames. The motion graph includes spatial edges and temporal edges. Each spatial edge describes a same-frame semantic relationship between two graph nodes that are associated with a same video frame. Each temporal edge describes an interframe relationship between two graph nodes of temporally neighboring frames. The temporal edges include backward temporal edges and forward temporal edges. The technique further includes generating initial motion feature information associated with the graph nodes in the given video frames, and updating the motion feature information by performing message-passing operations. The technique decodes the motion feature information into dynamic vector information. The technique then predicts and synthesizes a subsequent video frame based on the given video frames and the dynamic vector information.
H04N 19/503 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
H04N 19/547 - Motion estimation performed in a transform domain
H04N 19/90 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups, e.g. fractals
A method for simulating a long-form conversation includes: instructing a language model to generate dialog associated with a primary topic; receiving, from the language model, a short-form conversation transcript that includes the dialog, and dynamically extending the short-form conversation via a feedback loop that provides for identifying secondary topics based on entities referenced in the dialog; instructing the language model to generate additional dialog of the conversation associated with the secondary topics; receiving, from the language model, an extension of the dialog; and appending the extension to the previously-created dialog to create a long-form conversation transcript. The long-form conversation transcript may be synthesized into audio data that is usable to train a speech recognition model. In some cases, generating the audio data entails auto-generating speech synthesis markup language (SSML) annotations based on the dialog or injecting randomized disfluencies that enhance the realism of the resulting audio data.
The disclosed configurations generate synthetic attack data that emulates the markers of a cyberattack. The generated synthetic attack data is used to train an attack detection machine learning model that detects and mitigates actual cyberattacks in real-time. Synthetic attack data is generated by a synthetic attack data generation model, which is trained with a synthetic attack data generation prompt. The synthetic attack data generation prompt is constructed out of attack data samples and a prompt guideline. The prompt guideline is created from attack procedure descriptions, such as security blog posts or other write-ups about actual cyberattacks. Prompt guidelines may include samples of actual attack data that indicate how to format synthetic attack data. Once deployed, the attack detection machine learning model infers the occurrence of a cyberattack from log entries. Detected cyberattacks may be mitigated in an automated or semi-automated manner.
An input tensor is received in an attention layer of a machine learning (ML) network. A model query adapter generates a model query tensor based on the input tensor. An embedded knowledge base (KB) comprises a KB key tensor and a KB value tensor. A KB query adapter generates a KB query tensor based on the input tensor. An attention function combines attention over a model value tensor based on the model query tensor and a model key tensor with attention over the KB value tensor based on the KB query tensor and the KB key tensor, resulting in an output token.
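The combined attention function can be sketched in pure Python. The dimensions, the dot-product scoring, and the choice of a single softmax spanning both the model and KB entries are illustrative assumptions.

```python
# Sketch: combine attention over the model's key/value pairs with
# attention over an embedded knowledge base's key/value pairs.
import math

def combined_attention(model_q, model_k, model_v, kb_q, kb_k, kb_v):
    # Score model entries with the model query, KB entries with the KB query.
    scores = [sum(a * b for a, b in zip(model_q, k)) for k in model_k]
    scores += [sum(a * b for a, b in zip(kb_q, k)) for k in kb_k]
    # One softmax over both score lists, so the two sources compete.
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    z = sum(weights)
    weights = [w / z for w in weights]
    # Weighted sum over the concatenated value tensors.
    values = model_v + kb_v
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

out = combined_attention(
    model_q=[1.0, 0.0], model_k=[[1.0, 0.0]], model_v=[[1.0, 0.0]],
    kb_q=[0.0, 1.0],    kb_k=[[0.0, 1.0]],    kb_v=[[0.0, 1.0]],
)
```

With equal scores from both sources, the output blends the model value and the KB value equally, which illustrates how the separate query adapters let the same input tensor attend over both stores.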
Systems and methods are provided for generating visualization data associated with raw data using a machine learning model. For example, the machine learning model may automatically generate a set of candidate analytics and/or a scenario for visualizing the raw data based on summary data. Given the summary data and answers to prompts for visualizing data, the generated candidate analytics may reflect a context of the raw data as intended by the user. A visualization code scaffold according to a visualization specification may be used to generate programmatic output that corresponds to the candidate analytics, which may thus be used to generate a visualization accordingly. In some examples, an infographic may further be generated based on the visualization and a prompt using a diffusion model.
A method for simulating a long-form conversation includes instructing a language model to simulate a conversation by generating dialog associated with a primary topic; receiving, from the language model, a short-form conversation transcript that includes the dialog, and dynamically extending the short-form conversation via a feedback loop that provides for identifying secondary topics based on entities referenced in the dialog; instructing the trained language model to generate additional dialog of the conversation associated with the secondary topics; receiving from the trained language model an extension of the dialog; and appending the extension to the previously-created dialog to create a long-form conversation transcript. The long-form conversation transcript may be synthesized into audio data that is usable to train a speech recognition model. In some cases, generating the audio data entails auto-generating speech synthesis markup language (SSML) annotations based on the dialog or injecting randomized disfluencies that enhance the realism of the resulting audio data.
G10L 15/183 - Speech classification or search using natural language modelling using context dependencies, e.g. language models
G10L 13/033 - Voice editing, e.g. manipulating the voice of the synthesiser
G10L 13/08 - Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
G10L 15/18 - Speech classification or search using natural language modelling
G10L 25/63 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination for estimating an emotional state
90.
INPUT AUGMENTATION FOR GUIDING GENERATIVE APPLICATIONS IN EVIDENCE-SUPPORTED CLINICAL INFORMATION EXTRACTION
Example solutions for augmenting text inputs for analyzing clinical documents include: identifying a clinical named entity within text content of a clinical input document; adding an anchor tag to the text content, the anchor tag including an entity ID and a clinical attribute associated with the clinical named entity, thereby generating an enhanced input document; and submitting a first query prompt and the enhanced input document to a generative artificial intelligence (GAI) model, the first query prompt including task text and anchoring markup text, the task text includes instructions of a task to be performed by the GAI model on the enhanced input document, the anchoring markup text includes a template of the anchor tag and instruction to add a reference anchor tag to output generated by the GAI model, where the reference anchor tag is to include the entity ID and the clinical attribute of the anchor tag.
G16H 50/70 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
A method for operating a sparse depth imaging system is presented. The method comprises receiving a depth map of an environment. The depth map comprises a plurality of pixels having locations in an optical sensor coordinate system. A pattern of illuminator dots in an optical source coordinate system is received. Each illuminator dot has a fixed location in a defined plane in the optical source coordinate system. The depth map is projected into a 3D point cloud in the optical sensor coordinate system. Each point in the 3D point cloud is assigned a 2D location in the defined plane. A depth value for each illuminator dot is interpolated based on transformed depth of points in the 3D point cloud. Each illuminator dot is assigned a 3D location in the optical sensor coordinate system. A depth for each illuminator dot is output in the optical sensor coordinate system.
Responses by an artificial intelligence (AI) model are generated based on affective states of the user. In response to receiving a user input, content of the user input is analyzed and an affective state of the user is determined. The affective state is analyzed using a dual model comparison. A response is rendered based on the determined affective state.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
Aspects of the disclosure include injecting a magic state in a code. Aspects include preparing the magic state on a first set of physical qubits, initializing a second set of the physical qubits to X = +1 state, and initializing a third set of the physical qubits to Y = +1 state. Aspects include initializing a fourth set of the physical qubits to Z = +1 state and measuring stabilizers of the code, thereby resulting in the magic state being injected into the code.
A lattice-based cryptography engine includes an interface configured to receive a lattice-based cryptographic operation request including corresponding operands. A register map is configured to store the operands and the response to the request. A controller is coupled to receive the operands and output a sequence of instructions responsive to the request. A plurality of hardware units is coupled to receive and execute the instructions to generate the response. Each instruction is designated for one of the plurality of hardware units. A memory is coupled to the hardware units.
Systems and methods for accelerated model inference are provided. In particular, a computing device may receive a prediction request from an application in a production environment, decompress a first set of compressed weights of a compressed model based on the prediction request, perform evaluation of the compressed model using the first set of decompressed weights while decompressing a next set of compressed weights of the compressed model, generate a prediction using the decompressed weights, and return the prediction to the application.
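The overlap of evaluation and decompression can be sketched with a background thread. `zlib` stands in for the real weight codec, and treating each layer's weights as an opaque blob is an illustrative assumption.

```python
# Sketch: evaluate layer i while decompressing layer i+1 in the
# background, overlapping the two stages as described.
import threading
import zlib

def decompress(blob):
    return zlib.decompress(blob)

def pipelined_inference(compressed_layers, evaluate):
    """Run evaluate() on each layer's weights, prefetch-decompressing."""
    outputs = []
    current = decompress(compressed_layers[0])
    for i in range(len(compressed_layers)):
        nxt = {}
        t = None
        if i + 1 < len(compressed_layers):
            # Start decompressing the next layer's weights concurrently.
            t = threading.Thread(
                target=lambda: nxt.setdefault(
                    "w", decompress(compressed_layers[i + 1])))
            t.start()
        outputs.append(evaluate(current))  # evaluate the current layer
        if t is not None:
            t.join()
            current = nxt["w"]
    return outputs

layers = [zlib.compress(s) for s in (b"layer0", b"layer1")]
outputs = pipelined_inference(layers, evaluate=lambda w: w.decode())
```

Hiding decompression latency behind evaluation is what lets the compressed model serve predictions with little added end-to-end cost relative to an uncompressed model.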
Solutions disclosed herein provide for the reduction of hallucinations by language models, such as large language models (LLMs), by adversarial prompt refinement. A generator prompt template, including at least a generator prompt reference section, of a user is received from a computing device. The generator prompt template is provided as an input to a language model, the language model having an LLM-based generator, an LLM-based judger, and an LLM-based reproducer. Within the LLM-based generator, a generated response is generated from the generator prompt template. The generated response is then evaluated for a hallucination by the LLM-based judger. Based on that evaluation, a final generator prompt template is generated using an adversarial generator prompt template refinement process. For example, the adversarial generator prompt template refinement process may utilize an evolutionary prompt optimization process. The final generator prompt template, having a reduced hallucination risk, is then deployed to the LLM-based generator.
This disclosure describes a framework for improving the retrieval of content items for user queries using a generative artificial intelligence (AI) model. Specifically, this disclosure describes a content retrieval system (e.g., a system for standardizing and retrieving content items) that utilizes a generative AI model to standardize user queries and content items into a common object format with normalized values, which improves the accuracy of content retrieval. Additionally, the content retrieval system improves system efficiency by enabling real-time results through a combination of selective online and offline calls to the generative AI model and a distilled encoder neural network.
A dense encoder is adapted as a hierarchical corpus encoder in an information retrieval system to use negative samples from sibling nodes in a hierarchical tree of vector embeddings for documents in a corpus. Both the encoder and hierarchical tree are co-trained using a loss function that takes the document hierarchy into account. The hierarchical corpus encoder may be used in both supervised training cases where query-document relevance judgments are present and in zero-shot cases where a query dataset is absent. The hierarchical corpus encoder demonstrates significant performance improvements over a variety of dense encoder and generative retrieval baselines, under both supervised and unsupervised scenarios, thereby establishing the effectiveness of jointly learning a document hierarchy.
Example solutions for fine-tuning a language model include: generating a dataset that includes a plurality of paired samples, each paired sample of the plurality of paired samples includes (i) a factual question and a true outcome for that factual question and (ii) a counterfactual question and a true outcome for that counterfactual question; submitting a factual query to an answer model, the factual query including the factual question and the true outcome of the factual question, the answer model generating a factual answer in response to the factual query; submitting a counterfactual query to the answer model, the counterfactual query including the counterfactual question and the true outcome of the counterfactual question, the answer model generating a counterfactual answer in response to the counterfactual query; and performing fine-tuning on a target model using at least the factual question paired with factual answer and the counterfactual question paired with counterfactual answer.
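The dataset construction can be sketched with a stubbed answer model. The template answer and sample field names are illustrative assumptions; a real answer model would generate free-form answers grounded in the supplied true outcome.

```python
# Sketch: build paired factual/counterfactual fine-tuning samples by
# querying an answer model with each question plus its true outcome.

def answer_model(question, true_outcome):
    """Stub: a real model would generate an answer grounded in the
    supplied true outcome; here we echo it in a fixed template."""
    return f"Answer: {true_outcome}"

def build_finetune_pairs(dataset):
    pairs = []
    for sample in dataset:
        fa = answer_model(sample["factual_q"], sample["factual_outcome"])
        ca = answer_model(sample["counterfactual_q"],
                          sample["counterfactual_outcome"])
        # Fine-tuning targets: question paired with the generated answer.
        pairs.append((sample["factual_q"], fa))
        pairs.append((sample["counterfactual_q"], ca))
    return pairs

dataset = [{
    "factual_q": "Did the deployment succeed?",
    "factual_outcome": "yes",
    "counterfactual_q": "Would it have succeeded without the hotfix?",
    "counterfactual_outcome": "no",
}]
pairs = build_finetune_pairs(dataset)
```

Pairing each factual question with its counterfactual twin gives the target model supervision on both what happened and what would have happened, which is the point of the two-sided dataset described above.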
A thermal management system includes a high-pressure (HP) container, a low-pressure (LP) container in fluid communication with the HP container and having a fluid pressure less than the HP container, and a two-phase working fluid partially in the HP container and partially in the LP container. The two-phase working fluid has a vapor phase and a liquid phase. A pump is configured to move the working fluid through the system, and a condenser is configured to condense the vapor phase of the working fluid into the liquid phase.