Techniques are described herein in which a programmable logic device (PLD) is integrated into a baseboard management controller (BMC). A programming-enhanced BMC is powered on by a PLD that is integrated into the programming-enhanced BMC and that is coupled to an internal bus of the programming-enhanced BMC. A configuration file is provided from immutable BMC hardware in the BMC to the PLD based at least on the programming-enhanced BMC being powered on. The configuration file specifies a configuration to be programmatically applied to programmable hardware of the PLD. The programmable hardware of the PLD is programmed by loading the configuration file, which causes the programmable hardware to render a peripheral interface that is defined by the configuration file natively on the internal bus of the programming-enhanced BMC.
G06F 30/34 - Circuit design for reconfigurable circuits, e.g. field programmable gate arrays [FPGA] or programmable logic devices [PLD]
2.
GENERATIVE AI-DRIVEN MULTI-SOURCE DATA QUERY SYSTEM
Embodiments of the disclosed technologies include, in response to receiving a query, matching the query to metadata from a plurality of heterogeneous data sources and selecting one or more data sources from the plurality of heterogeneous data sources for answering the query by sending the query and embeddings of the matched metadata to a generative artificial intelligence (GAI) and prompting the GAI to select matching data sources. Based on the data from the GAI, the embodiments generate one or more custom queries targeted to the data sources selected by the GAI, the custom queries formatted to be sent to the selected data sources; execute the one or more custom queries across the selected data sources; and summarize results from the executing to provide a response to the query.
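The metadata-matching stage can be sketched with toy embeddings; the source names, vectors, and cosine scoring below are illustrative assumptions, and the GAI's final selection step is elided:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical metadata embeddings for three heterogeneous sources.
source_metadata = {
    "sales_db":   [0.9, 0.1, 0.0],   # tables about orders, revenue
    "hr_wiki":    [0.1, 0.9, 0.1],   # pages about employees, policies
    "logs_store": [0.0, 0.2, 0.9],   # service logs, error traces
}

def match_sources(query_embedding, top_k=2):
    """Rank sources by similarity between the query and their metadata;
    the matched metadata would then be handed to the GAI for selection."""
    ranked = sorted(source_metadata.items(),
                    key=lambda kv: cosine(query_embedding, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# A query about revenue should rank the sales database first.
selected = match_sources([1.0, 0.0, 0.1], top_k=1)
```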
A computing system including a quantum computing device. The quantum computing device includes a Majorana island at which Majorana zero modes (MZMs) are instantiated. The quantum computing device further includes a quantum dot electrically connectable to an MZM, a capacitance sensor capacitively coupled to the quantum dot, and a controller. The controller is configured to set a Majorana island gate voltage of the Majorana island and a quantum dot gate voltage of the quantum dot to a candidate resonance Majorana island voltage and a candidate resonance quantum dot voltage. The controller is further configured to receive a capacitance measurement of the quantum dot and the Majorana island and determine whether resonance occurs based on the capacitance measurement. The controller is further configured to reset the gate voltages. The controller is further configured to output a quasiparticle poisoning value indicated by the one or more determinations of whether resonance occurs.
4.
SUB-KELVIN TEMPERATURE GRADIENT SYSTEM FOR SCALABLE QUANTUM CONTROL
Examples described in this disclosure relate to sub-kelvin control systems and methods for scalable quantum control. An example system includes a first cooling sub-system operable to maintain an operating temperature for a first device within a first sub-kelvin temperature range. The system further includes a second cooling sub-system, separate from the first cooling sub-system, operable to maintain an operating temperature for a second device, different from the first device, within a second sub-kelvin temperature range. The first sub-kelvin range may comprise a range from 50 millikelvin (mK) to 999 mK, and the second sub-kelvin range may comprise a range from 1 mK to 299 mK. The combination of the first cooling sub-system and the second cooling sub-system is configured to maintain a temperature gradient between the first device and the second device despite the first device and the second device being in close proximity to each other.
5.
ADAPTIVE VIDEO COMPRESSION USING GENERATIVE MACHINE LEARNING
Various embodiments of the technology described herein relate to compression of video data, including selecting a pivot image from a video including a plurality of images and causing a first machine learning model to generate a descriptor of the pivot image, where the descriptor includes a language description associated with the pivot image. In one example, the pivot image and the descriptor are provided to a decoder for reconstruction of the video. In an embodiment, the decoder includes a generative machine learning model that takes as an input the pivot image and the descriptor. The decoder uses the pivot image to generate an image based at least in part on the descriptor. The image is combined with other images generated by the generative machine learning model to reconstruct the video.
H04N 19/463 - Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
H04N 19/503 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
H04N 21/266 - Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, or merging a VOD unicast channel into a multicast channel
In an example embodiment, a generator model such as a large language model (LLM) is leveraged to generate embeddings for both pieces of content and users. The embeddings map the pieces of content and the users into the same latent n-dimensional space. The embeddings are then fine-tuned using a two-tower deep neural network, with one of the towers representing users and the other tower representing content. The two-tower deep neural network is trained to optimize the embeddings over some shared goal, such as user engagement with content, and uses information such as user interactions with content in that process. A clustering technique, such as K-nearest neighbor (kNN), can then be used to identify a grouping of top user/content pairs based on similarity between users and content, as reflected in the embeddings. For a given piece of content, therefore, the top users from that cluster can then be recommended as an audience for the content.
G06Q 30/0242 - Determination of advertisement effectiveness
G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
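The final kNN step of the pipeline can be sketched once fine-tuned embeddings exist; the user names, vectors, and Euclidean distance metric below are illustrative assumptions, and the two-tower training itself is elided:

```python
from math import dist  # Euclidean distance (Python 3.8+)

# Hypothetical fine-tuned embeddings in a shared latent space.
user_embeddings = {
    "alice": (0.9, 0.1),
    "bob":   (0.2, 0.8),
    "carol": (0.85, 0.2),
}
content_embedding = {"post_42": (0.88, 0.15)}

def top_audience(content_id, k=2):
    """Recommend the k nearest users to a piece of content (the kNN step):
    users closest to the content in the shared space form its audience."""
    c = content_embedding[content_id]
    ranked = sorted(user_embeddings, key=lambda u: dist(user_embeddings[u], c))
    return ranked[:k]
```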
Embodiments of the present disclosure include countermeasure circuit techniques for cyberattacks. In one embodiment, portions of combinational logic receive shared input bit groups and produce shared output bit groups. Shared output bit groups may be coupled between series configured combinational logic portions using control gates. Clock signals are delayed to activate the control gates after the outputs are stable. In some embodiments, a first combinational logic group and second combinational logic group operate on a clock and inverse clock.
G06F 21/75 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information, by inhibiting the analysis of circuitry or operation, e.g. to prevent reverse engineering
8.
GENERATING INFORMED PRIORS FOR HYPERPARAMETER SELECTION
A system iteratively evaluates the target machine learning model using evaluation hyperparameter values of the target machine learning model to measure performance of the target machine learning model for different combinations of the evaluation hyperparameter values. The system trains a surrogate machine learning model using the different combinations of the evaluation hyperparameter values as features and the performance of the target machine learning model based on a corresponding combination of the evaluation hyperparameter values as labels. The system generates a feature importance vector of the surrogate machine learning model based on the training of the surrogate machine learning model, generates informed priors based on the feature importance vector, and generates the target hyperparameter values of the target machine learning model based on the informed priors.
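The surrogate-to-priors step can be approximated in a few lines; here a variance-based importance measure stands in for the trained surrogate model's feature importance vector, and the evaluation grid is a toy assumption:

```python
from statistics import mean, pvariance

# Hypothetical evaluations: (learning_rate, depth) -> accuracy of the target model.
evals = {
    (0.01, 2): 0.70, (0.01, 4): 0.72,
    (0.10, 2): 0.85, (0.10, 4): 0.88,
}

def feature_importance():
    """Crude surrogate: the importance of each hyperparameter is the variance
    of mean performance across its values (a stand-in for a trained model)."""
    importances = []
    for axis in range(2):  # one axis per hyperparameter
        groups = {}
        for combo, score in evals.items():
            groups.setdefault(combo[axis], []).append(score)
        importances.append(pvariance([mean(v) for v in groups.values()]))
    return importances

def informed_priors():
    """Normalize importances into weights usable as priors over hyperparameters."""
    imp = feature_importance()
    total = sum(imp)
    return [i / total for i in imp]
```

In this toy grid the learning rate moves accuracy far more than the depth does, so its prior weight dominates.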
A computing system is provided, including a processor configured to receive a standardized stabilizer instrument specification including an input Clifford unitary, an output Clifford unitary, and a plurality of stabilizer instrument bit matrices. The processor is further configured to receive a logical instrument input error correction code and a logical instrument output error correction code. The processor is further configured to compute a logical instrument specification based at least in part on the standardized stabilizer instrument specification, the logical instrument input error correction code, and the logical instrument output error correction code. The logical instrument specification includes a logical input Clifford unitary, a logical output Clifford unitary, a plurality of logical instrument bit matrices, and a logical instrument relabeling matrix. The processor is further configured to store the logical instrument specification in memory.
This document relates to communication by backscattering of satellite signals. One example includes a satellite backscatter transmitter having a first antenna configured to receive a radio frequency satellite signal, a modulator configured to modulate the radio frequency satellite signal to obtain a modulated radio frequency satellite signal, a digital logic circuit configured to selectively control the modulator to encode information according to a communication scheme, and a second antenna configured to passively retransmit the modulated radio frequency satellite signal to a receiver.
11.
SECURITY ENHANCEMENT FOR COMPUTING DEVICE STATE CHANGE
Systems and methods are disclosed herein for identifying a bypass of a computing device state change. In an example system, a determination is made that a computing component, such as an application executing on the computing device, is blocking a state change of the computing device. The state change includes various types of actions to protect the computing device, such as an automatic lock, logoff, standby mode change, or powering off change. An idle period of the computing device is detected. A proximity change of a user relative to the computing device is also detected. Based on the idle period and the proximity change, an action to remediate the blocking of the state change is performed, such as generating a notification associated with the blocking of the state change for providing to the user and/or automatically bypassing the blocking of the state change.
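The remediation decision described above can be reduced to a small policy function; the idle threshold, action names, and inputs below are illustrative assumptions rather than the disclosed implementation:

```python
def remediate_action(blocking_app, idle_seconds, user_present,
                     idle_threshold=300):
    """Decide how to handle an app that blocks an automatic state change.
    Hypothetical policy: if the device has been idle past the threshold and
    the user has moved away, bypass the block; otherwise notify the user."""
    if blocking_app is None:
        return "none"            # nothing is blocking the state change
    if idle_seconds >= idle_threshold and not user_present:
        return "bypass_block"    # force the state change (e.g. auto-lock)
    return "notify_user"         # surface the blocking app to the user
```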
The techniques disclosed herein enable an autonomous agent to interpret an input dataset and orchestrate a suite of software modules to perform a computational task on a representation of a chemical material. The input dataset includes a prompt defining a computational task to be performed on a chemical material. Moreover, the input dataset includes data defining a chemical included in the chemical material, molecular descriptors describing the chemical and/or the chemical material, and an external variable. The agent analyzes the benefits and drawbacks of each model within the context of the computational task to determine a technique for performing the computational task. Accordingly, the agent formulates a chain of calls invoking the functionality of data processing tools and models to perform the computational task responsive to the prompt.
G16C 20/30 - Prediction of properties of chemical compounds, compositions or mixtures
G16C 20/70 - Machine learning, data mining or chemometrics
G16C 60/00 - Computational materials science, i.e. ICT specially adapted for investigating the physical or chemical properties of materials or phenomena associated with their design, synthesis, processing, characterisation or utilisation
G06N 3/00 - Computing arrangements based on biological models
Techniques for implementing an AI threat modeling tool are disclosed. A static analysis tool is used to extract a candidate code snippet from a code repository. The candidate code snippet is identified as potentially being a security relevant code element. The static analysis tool generates additional context associated with the candidate code snippet. An LLM prompt is generated. This prompt is structured to include the candidate code snippet, the context, and a directive to assign a classification to the candidate code snippet. The classification includes a source classification, a sink classification, a sanitizer classification, or a flow step classification. The LLM operates on the prompt to generate output comprising a specific classification for the candidate code snippet. The output is formatted into a data extension file that is consumable by the static analysis tool.
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
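The prompt-construction and output-formatting steps of the threat modeling tool might look like the following sketch; the prompt wording, the JSON record shape, and the function names are assumptions, since the abstract does not fix a format:

```python
import json

# The four classifications the LLM is directed to choose among.
CLASSIFICATIONS = ("source", "sink", "sanitizer", "flow_step")

def build_prompt(snippet, context):
    """Assemble an LLM prompt containing the candidate code snippet, its
    context, and a directive to assign one of the four classifications."""
    return (
        "Classify the following code element as one of "
        f"{', '.join(CLASSIFICATIONS)}.\n"
        f"Context: {context}\n"
        f"Code:\n{snippet}\n"
        "Answer with a single classification label."
    )

def to_extension_record(snippet_id, classification):
    """Format the LLM's answer as a data-extension record for the static
    analysis tool (JSON is an assumed, not specified, format)."""
    assert classification in CLASSIFICATIONS
    return json.dumps({"id": snippet_id, "kind": classification})
```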
Example solutions perform natural language query processing on hybrid utterances. A precise segment is identified, within the hybrid utterance, and processed with a symbolic AI interpreter configured to generate a first interpretation. The precise segment is replaced, within the hybrid utterance, with a placeholder term thereby resulting in a vague utterance. The vague utterance is processed with a statistical AI interpreter configured to generate a second interpretation. The first interpretation is merged with the second interpretation using the hybrid utterance as a template for the merger and using the placeholder term as the location for the first interpretation within the second interpretation. A complete interpretation is generated and transmitted to a query generator.
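The split-and-merge flow above can be illustrated with a small sketch in which the precise segment is assumed to be a quoted expression; the symbolic and statistical interpreters themselves are elided:

```python
import re

PLACEHOLDER = "<PRECISE>"

def split_hybrid(utterance):
    """Pull a precise segment (assumed here to be a double-quoted expression)
    out of the hybrid utterance, leaving a vague utterance with a placeholder."""
    m = re.search(r'"([^"]+)"', utterance)
    precise = m.group(1)
    vague = utterance[:m.start()] + PLACEHOLDER + utterance[m.end():]
    return precise, vague

def merge(first_interpretation, second_interpretation):
    """Insert the symbolic interpretation where the placeholder sits in the
    statistical interpretation, using the placeholder as the anchor."""
    return second_interpretation.replace(PLACEHOLDER, first_interpretation)

precise, vague = split_hybrid('show sales where "revenue > 1000" last quarter')
# A symbolic interpreter would parse `precise`; a statistical one handles `vague`.
complete = merge("revenue > 1000", vague)
```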
Detection of malicious direct memory access (DMA) device used for direct device assignment. A virtualization computer system assigns a peripheral device to an operating context within a virtualization environment. The peripheral device is DMA capable. The virtualization computer system monitors a signal source that is affected by DMA operations initiated by the peripheral device while the peripheral device is assigned to the operating context. Based on monitoring the signal source, the virtualization computer system identifies a signal pattern characterizing the DMA operations that are initiated by the peripheral device. Using the signal pattern, the virtualization computer system determines that the DMA operations initiated by the peripheral device are abnormal and the virtualization computer system identifies the peripheral device as malicious.
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems, during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure, by executing in a restricted environment, e.g. sandbox or secure virtual machine
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
G06F 21/85 - Protecting input devices, data displays or interconnection devices; interconnection devices, e.g. bus-connected or in-line devices
16.
METHODS AND SYSTEMS FOR ENHANCING MULTIMODAL CAPABILITIES IN LARGE LANGUAGE MODELS
Systems and methods are provided for enhancing the speech modality in a large language model (LLM) and for retaining in-context learning capabilities without overfitting to trained tasks. Systems obtain a first set of training data comprising tuples of a sample of speech combined with synthetically generated pairings of speech comprehension test questions and answers that correspond to the sample of speech and obtain a second set of training data comprising pairings of automatic speech recognition data. Systems generate and align a first set of encodings of the first set of training data and a second set of encodings of the second set of training data. Systems train the LLM on a greater amount of the first set of training data than the second set of training data and use the trained LLM to perform a natural language processing task.
G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
G10L 15/16 - Speech classification or search using artificial neural networks
G10L 15/183 - Speech classification or search using natural language modelling according to context, e.g. language models
G10L 15/26 - Speech-to-text systems
17.
APPARATUS AND METHODS FOR PRIME FIELD MODULAR REDUCTION
Apparatus and methods for prime field modular reduction are described. As an example, a custom modular reduction digital circuit for reducing an n-bit integer based on a modulus, where the modulus comprises a k-bit integer for use with a cryptographic algorithm, is described. The custom modular reduction digital circuit includes a first circuit to generate at least two partial results by processing: (1) k lower order significant bits of the n-bit integer and (2) at least a subset of bits for congruent representations corresponding to any n-k higher order bits of the n-bit integer that are higher in significance than the most significant bit of the k-bit integer. The custom modular reduction digital circuit further includes a second circuit to process the at least two partial results, output by the first circuit, to generate a reduced version of the n-bit integer for use with the cryptographic algorithm.
G06F 7/72 - Methods or arrangements for performing computations using a digital non-denominational number representation, i.e. number representation without radix; Computing devices using combinations of denominational and non-denominational quantity representations, using residue arithmetic
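The folding idea behind the two-stage circuit can be illustrated in software: because 2^k ≡ 2^k − p (mod p), the high-order bits of the n-bit integer can be replaced by a smaller congruent representation until a single conditional subtraction finishes the reduction. This is an illustrative software analogue under that assumption, not the described circuit:

```python
def fold_reduce(x, p, k):
    """Reduce x modulo a k-bit modulus p by folding: since 2**k ≡ c (mod p)
    with c = 2**k - p, the high bits can be replaced by their congruent
    representation hi * c, mirroring the two partial-result stages."""
    c = (1 << k) - p                  # congruent value of 2**k modulo p
    while x >> k:                     # first stage: fold to partial results
        hi, lo = x >> k, x & ((1 << k) - 1)
        x = hi * c + lo               # congruent, strictly smaller value
    return x if x < p else x - p      # second stage: conditional subtraction
```

Since p is a k-bit modulus (p ≥ 2^(k−1)), the loop exits with x < 2^k < 2p, so one subtraction suffices.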
Techniques for using a sensor to perform laser signal decoding are disclosed. The sensor may be a global shutter sensor or a rolling shutter sensor. The sensor generates a first set of images while operating in a first mode. In response to detecting a laser signal in the first set of images, the sensor is caused to operate in a second mode. The laser signal includes an embedded frequency signal component and repeats at a periodic rate. While the sensor is operating in the second mode, the sensor generates a second set of images, which capture an entire period of the laser signal. From the second set of images, the embedded frequency signal component is determined. A decoding operation is performed using the embedded frequency signal component.
A data processing system implements obtaining build logs that include information associated with a software build problem; analyzing the logs to generate a knowledge graph identifying the relationship between various entities in the logs; extracting a signature of a candidate root cause of the build problem from the knowledge graph representing a subset of nodes and edges of the knowledge graph; providing the signature of the candidate root cause to a graphical language model to obtain a prediction of a category of root cause failure selected from among a plurality of root cause failures; constructing a prompt for a language model to generate a root cause failure analysis that describes the root cause of the build problem, the prompt including the category of root cause; receiving the root cause failure analysis from the language model; and performing one or more actions in response to receiving the root cause failure analysis.
In a cloud computing environment, a cross-tenant access security measure monitors conditional access policies for changes or additions that hamper or threaten an authorized access from an assistant tenant user to a focus tenant. Some cross-tenant access security tracks role assignments to detect rogue roles, or detect hampering role changes. In some cases, focus tenant events and assistant tenant events are correlated in an audit. In some cases, the authorized access is a zero standing time bound access. In some cases, the authorized access is constrained to an IP address range, or constrained to login from a managed device, or both. In some cases, assets are excluded from managed response remediation actions. In some, managed response is modulated by product-specific Role Based Access Control. In some, repeated logins are avoided, to permit faster managed responses.
The disclosed concepts relate to contextualization of generative language models. In some implementations, a linked entity database is populated with entity resource identifiers of entities extracted from a search log by an entity linker. A contextualized prompt data structure is generated based on the linked entity database, e.g., by including linked entity context information in the contextualized prompt data structure. A response to the contextualized prompt data structure is received, where the response is conditioned on the linked entity context information.
A phase-interpolator (PI) circuit (700) is described that generates an interpolated clock (PI_CLK) to capture data in a capture circuit at a target phase in a phase range between two reference clocks based on an interpolation code (S) within a range of interpolation codes. A clamping circuit (704) coupled to the PI circuit provides an interpolation code (S) within a reduced range, where the integral non-linearity (INL) of the interpolated clocks is below a threshold, such that data capture based on the interpolated clock has a lower bit error rate (BER). As a result, the interpolated clock is generated within a reduced phase range corresponding to the reduced range of interpolation codes. When a target phase for an interpolated clock is outside the reduced phase range, the clamping circuit may adjust the target phase clock (PHA_REF) relative to a reference clock to adjust the target phase to be within the reduced phase range for improved BER.
H03K 5/135 - Arrangements having a single output and transforming input signals into pulses delivered at desired time intervals by the use of time reference signals, e.g. clock signals
H03K 5/00 - Manipulation of pulses not covered by one of the other main groups of this subclass
23.
SYSTEMS AND METHODS FOR ZERO TRUST DNS BASED NETWORKING
Examples of the present disclosure describe systems and methods for zero trust domain name system (DNS) (ZTDNS) based networking. A computing device implementing ZTDNS based networking blocks any outbound connections that are not included in a list of trusted IP addresses. The list of trusted IP addresses is updated in response to the computing device receiving from a trusted DNS server an IP address corresponding to a DNS request. In some examples, the ZTDNS based networking intercepts and evaluates outbound communications for applications that implement a custom application DNS client. In other examples, the ZTDNS based networking intercepts and evaluates outbound communications for virtual environments. The outbound communications for both the custom application DNS client and the virtual environments are proxied through a local DNS client of the computing device.
H04L 61/4511 - Network directories; Name-to-address mapping using standardised directory access protocols using the domain name system [DNS]
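The allow-list behavior at the heart of ZTDNS can be sketched as a small state machine; the class and method names are illustrative, and interception of custom DNS clients and virtual environments is elided:

```python
class ZtdnsFilter:
    """Sketch of a zero-trust DNS outbound filter: only IPs learned from
    responses of the trusted DNS server are allowed out."""

    def __init__(self):
        self.trusted_ips = set()

    def on_dns_response(self, name, ip, from_trusted_server):
        # Only responses from the trusted DNS server extend the allow list.
        if from_trusted_server:
            self.trusted_ips.add(ip)

    def allow_outbound(self, ip):
        # Any outbound connection not on the list is blocked.
        return ip in self.trusted_ips
```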
First and second device terminals of a multi-terminal quantum device (200) are coupled to first and second external measurement terminals respectively, whilst a device ground terminal is coupled to an external ground terminal. The device ground terminal is coupled to the external ground terminal via two parallel ground lines (101, 102). A first of these ground lines (101) includes a voltage measurement device (105), and a second ground line (102) comprises a voltage generator (106). A controller (103) receives as input a time-varying voltage measurement on the first line (101), and uses this measurement to generate a control signal to the voltage generator (106). The control signal causes the voltage generator (106) to generate a time-varying stabilization voltage on the second ground line (102) in order to mitigate or cancel any residual voltages on the device ground terminal.
G05F 1/46 - Regulating voltage or current wherein the variable actually regulated by the final control device is DC
G06N 10/40 - Physical realisations or architectures of quantum processors or components for manipulating qubits, e.g. qubit coupling or qubit control
25.
MEDIA SERVER PROXY THAT SWITCHES STREAMING MEDIA PROTOCOLS
A media server proxy switches streaming media protocols ("SMPs") during streaming of media segments. The media server proxy receives a request, from a playback tool, according to a first SMP to provide information about outgoing media segments of a media sequence. The media server proxy generates the information about outgoing media segments and sends the information to the playback tool. The media server proxy also retrieves, from a remote server, incoming media content for the media sequence according to a second SMP different than the first SMP. The media server proxy assembles outgoing media segments based at least in part on the incoming media content. The media server proxy streams, to the playback tool, outgoing media segments according to the first SMP. In this way, the media server proxy can deliver media segments at very low latency, even when the first SMP typically has much higher latency.
H04N 21/222 - Secondary servers, e.g. proxy server or cable television head-end
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating coded video stream scene graphs, involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/438 - Interfacing the downstream path of the transmission network coming from a server, e.g. retrieving coded video stream packets from an IP network
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
26.
DIGITAL PHASE-LOCKED LOOPS (PLL) INCLUDING CLOSED-LOOP TIME-TO-DIGITAL CONVERTER (TDC) GAIN CALIBRATION CIRCUITS AND RELATED METHODS
In a calibrated digital phase-locked-loop (DPLL) circuit, during a normal operating mode, a control value provided to a digitally controlled oscillator (DCO) is updated by a feedback circuit to keep an output clock generated by the DCO synchronized with a reference clock. The feedback circuit includes a time-to-digital converter (TDC) circuit to measure a phase difference as a time interval. In a calibration operating mode of the calibrated DPLL circuit, calibration of a resolution of a time measurement of the time interval measured by the TDC is performed in the feedback circuit while the control value provided to the DCO is kept constant. Calibrating the TDCs in each of the DPLLs in an integrated circuit (IC) to a nominal resolution in this manner improves synchronization of the clock domains. In some examples, the TDC circuit is a Vernier type circuit and calibration sets a delay difference to a nominal resolution.
H03L 7/07 - Automatic control of frequency or phase; Synchronisation using a reference signal applied to a frequency- or phase-locked loop, using several loops, e.g. for redundant clock signal generation
G06F 1/12 - Synchronisation of different clock signals
H03L 7/085 - Details of the phase-locked loop concerning mainly the frequency- or phase-detection arrangement, including filtering or amplification of its output signal
G04F 10/00 - Apparatus for measuring unknown time intervals by electric means
27.
SIGNIFICANCE ORDERED PREFIX TREE FOR COMPUTE-EFFICIENT ROOT CAUSE INVESTIGATION
Techniques described herein include generating a significance-ordered prefix tree based on significant telemetry point values and their Z-scores; using the significance-ordered prefix tree to identify cohorts to evaluate in combination; and computing a cohort Z-score for each of the identified cohorts and identifying, based on the cohort Z-scores, a subset of the cohorts that are statistically significant indicators of the condition of interest.
G06F 11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
G06F 11/34 - Recording or statistical evaluation of computer activity, e.g. of interruptions or input/output operations
H04L 41/0604 - Management of faults, events, alarms or notifications using filtering, e.g. reduction of information by using priority, element types, position or time
H04L 43/065 - Generation of reports related to network devices
H04L 41/0631 - Management of faults, events, alarms or notifications using root cause analysis; Management of faults, events, alarms or notifications using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy or temporal or tree analysis
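The Z-score ordering that drives the prefix tree can be sketched as follows; the baseline data, cohort names, and significance threshold are illustrative assumptions, and the tree itself (which would explore combinations of the ranked cohorts) is elided:

```python
from statistics import mean, pstdev

def z_score(value, baseline):
    """Standard score of an observed value against a baseline sample."""
    return (value - mean(baseline)) / pstdev(baseline)

# Hypothetical error counts per cohort vs. a historical baseline.
baseline = [10, 12, 11, 9, 13]
observed = {"region=eu": 30, "os=linux": 12, "app=web": 28}

def significance_order(obs, base, threshold=2.0):
    """Order cohorts by |Z| so a prefix tree explores the most significant
    cohorts (and their combinations) first, pruning insignificant branches."""
    scored = {k: z_score(v, base) for k, v in obs.items()}
    ranked = sorted(scored, key=lambda k: abs(scored[k]), reverse=True)
    return [k for k in ranked if abs(scored[k]) >= threshold]
```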
28.
SYSTEM AND METHOD FOR PERFORMING QUERY OPERATIONS ON RUN LENGTH ENCODED DATA
A method, computer program product, and computing system for processing query operations on run length encoding (RLE) data in a parallel processing computing system. Data for query execution is received at a parallel processing computing system, at least a portion of the data being compressed according to RLE, thereby forming RLE data; and a query operation is executed on the RLE data without performing a decompression operation on the RLE data.
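Operating on RLE data without decompression means evaluating predicates once per run rather than once per row, as in this minimal sketch (the column contents are illustrative):

```python
# RLE data: list of (value, run_length) pairs.
rle_column = [(5, 3), (9, 2), (5, 4)]   # decodes to 5,5,5,9,9,5,5,5,5

def count_where(rle, predicate):
    """Evaluate a filter once per run instead of once per row, which is the
    point of executing query operations on RLE data without decompressing."""
    return sum(length for value, length in rle if predicate(value))

matching_rows = count_where(rle_column, lambda v: v == 5)  # two runs: 3 + 4
```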
A technique partitions a user's original query into plural smaller component queries, each of which has a common part and an instance-specific part. The technique distributes the component queries to plural processor instances of a processor. The plural processor instances transform the respective component queries into query-component responses by acting in parallel, independent of each other. The technique generates a final response based on the query-component responses, e.g., by assembling the component-query responses into the final response. The technique reduces latency because the processor instances work on parts of the user's original query at the same time, rather than as a single stream of consecutive tokens. The plural processor instances have access to a shared cache memory, and utilize relevant data that has been computed in response to previous queries.
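The fan-out/fan-in flow can be sketched with a thread pool standing in for the parallel processor instances; the component handling and the assembly rule below are illustrative assumptions (the shared cache is elided):

```python
from concurrent.futures import ThreadPoolExecutor

def process_component(component):
    """Stand-in for one processor instance transforming its component query
    into a query-component response (the common part is folded in here)."""
    return component.upper()

def answer(component_queries):
    """Distribute component queries to parallel instances, then assemble the
    query-component responses into the final response."""
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(process_component, component_queries))
    return " | ".join(partials)

final = answer(["intro", "methods", "results"])
```

`ThreadPoolExecutor.map` preserves input order, so the responses assemble in the same order as the original query's parts regardless of which instance finishes first.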
A computing device may include a substrate. A computing device may include a processing unit supported by the substrate. A computing device may include an optical transmitter supported by the substrate and in electrical communication with the processing unit.
A method, computer program product, and computing system for optimizing query operations on run length encoding (RLE) data in a parallel processing computing system. Data is received in a plurality of columns of an input table of a parallel processing computing system for query execution. The system determines that at least a portion of the received data in a first number of columns is compressed according to run length encoding (RLE), thereby comprising RLE data columns including RLE data, and that the received data in a second number of columns is not compressed according to RLE, thereby comprising non-RLE data columns including non-RLE data. A query operation is executed on the RLE data and the non-RLE data by prioritizing processing of the RLE data columns over processing of the non-RLE data columns.
Probation of a direct memory access (DMA) device used for direct device assignment. A virtualization computer system identifies a peripheral device as being removed from a direct assignment to a first operating context of a virtualization environment. The peripheral device is DMA capable. The virtualization computer system assigns the peripheral device to a second operating context of the virtualization environment and initiates a device validation against the peripheral device. Based on the device validation indicating that the peripheral device is normal, the virtualization computer system reassigns the peripheral device to a third operating context of the virtualization environment. Based on the device validation indicating that the peripheral device is abnormal, the virtualization computer system excludes the peripheral device from assignment to a third operating context of the virtualization environment.
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems, during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure, by executing in a restricted environment, e.g. sandbox or secure virtual machine
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 21/85 - Protecting input, output or interconnection devices, e.g. devices connected to a bus or in-line devices
Example solutions for processing LLM prompts include creating a first large language model (LLM) prompt based on an input LLM prompt. The first LLM prompt represents a first step toward generating a solution to the input LLM prompt. The first LLM prompt is submitted to an LLM as a first sub-query, thereby resulting in the generation of a first LLM output. A second LLM prompt is generated based on the input LLM prompt. The second LLM prompt represents a second step toward generating the solution. The second LLM prompt includes the first LLM output. The second LLM prompt is submitted to the LLM as a second sub-query, thereby resulting in the generation of a second LLM output. The second LLM output, which represents the solution, is provided in response to the input LLM prompt.
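The two-step chaining described above can be sketched in a few lines; the `call_llm` function below is a stub standing in for any model endpoint (its name and behavior are assumptions for illustration), so only the control flow is shown.

```python
# Illustrative sketch of two-step prompt chaining: the second sub-query
# embeds the first sub-query's output as intermediate context.
def call_llm(prompt: str) -> str:
    # Stub: a real implementation would call a model endpoint here.
    return f"<answer to: {prompt[:40]}>"

def solve_in_steps(input_prompt: str) -> str:
    # Step 1: derive a first sub-query from the input prompt.
    first_prompt = f"Step 1 toward solving: {input_prompt}"
    first_output = call_llm(first_prompt)
    # Step 2: the second sub-query includes the first LLM output.
    second_prompt = (
        f"Step 2 toward solving: {input_prompt}\n"
        f"Intermediate result: {first_output}"
    )
    return call_llm(second_prompt)  # represents the solution
```

Each sub-query is smaller and more focused than the original prompt, which is the point of decomposing the input into steps.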
Methods, systems, and computer storage media for providing generative artificial intelligence (AI) output validation using a generative AI output validation engine in an artificial intelligence system. The generative AI output validation engine assesses and quantifies the quality of generative AI output (e.g., LLM output) as an output validation score. In operation, a generative AI output comprising summary data is accessed, along with the raw data from which the summary data is generated. A plurality of output validation operations associated with the generative AI output validation engine are executed. The generative AI output validation engine comprises multi-categorical analytical models that provide corresponding output validation operations for quantifying the quality of generative AI outputs. Using the generative AI output validation engine, an output validation score is generated for the summary data and communicated. A feedback loop is established to incorporate human feedback for fine-tuning the generative AI output validation engine models.
Visual annotation is able to extend solution space for multi-modal large language models (MMLLMs), enabling task performance where previously not feasible. Examples receive a task image and a task prompt; perform a localization process on the task image, based on at least the task prompt, the localization process annotating a first feature within the task image using a first image annotation; retrieve first textual information for the first feature from a first selected data source of a plurality of data sources; generate a model prompt based on at least the task prompt, the model prompt comprising the task image with the first image annotation, the first textual information, and information linking the first feature with the first textual information; and generate, with an MMLLM, a task output based on at least the model prompt.
Some embodiments enhance the security of domain name resolution and other DNS operations, by automatically intercepting the DNS operation, determining an associated device identity or ascertaining an associated user identity, and enforcing a security policy based on at least the DNS operation and based on at least one of the identities. Some securable DNS operations include resolution requests, reverse lookups from IP addresses to domain names, DNS record accesses, mail server mappings, redirection, forwarding, and DNS record cache operations. Enforcing the policy includes, e.g., preventing a result requested by the DNS operation, permitting computational progress toward the requested result, allowing a different result, modifying a DNS record, or flushing a DNS record from a cache. In some embodiments, DNS operation security functionality utilizes or implements a conditional access security functionality, thereby providing, e.g., a secure conditional domain name resolution.
H04L 61/4511 - Network directories; Name-to-address mapping using standardised directories; using standardised directory access protocols, using the domain name system [DNS]
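The enforcement step of the DNS-security abstract above can be sketched as a policy lookup keyed on the intercepted operation and the associated identity; the policy table, names, and default action below are illustrative assumptions, not the disclosed implementation.

```python
# Hedged sketch of identity-based DNS policy enforcement: an intercepted
# DNS operation is checked against a policy keyed on (operation, identity),
# yielding one of the enforcement actions (allow, block, ...).
POLICY = {
    ("resolve", "guest-device"): "block",
    ("resolve", "managed-device"): "allow",
    ("cache_flush", "managed-device"): "allow",
}

def enforce(operation: str, identity: str, default: str = "block") -> str:
    """Return the action for an intercepted DNS operation."""
    return POLICY.get((operation, identity), default)

print(enforce("resolve", "managed-device"))    # allow
print(enforce("reverse_lookup", "guest-device"))  # block (default)
```

Defaulting to "block" mirrors a conditional-access posture in which computational progress toward the requested result must be explicitly permitted.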
37.
POWER STABILIZATION USING A CAPACITOR BANK CONNECTED TO A BI-DIRECTIONAL CONVERTER
Power draw stabilization is provided. A target power consumption of a source load is determined. The source load is generated by electronics supplied power by a primary power source through a power rail. The power rail is coupled to a capacitor bank by a bi-directional converter configured to smooth fluctuations in power drawn from the primary power source by performing mode switch operations. The mode switch operations include, in response to the source load exceeding a target power consumption, controllably switching to a second directional mode that directs current released from the capacitor bank to the power rail. The mode switch operations further include, in response to the source load dropping below the target power consumption, controllably switching the operational mode of the bi-directional converter to a first directional mode to direct current from the power rail into the capacitor bank.
H02J 3/32 - Arrangements for balancing the load in a network by storage of energy using batteries with converting means
H02J 9/06 - Circuit arrangements for emergency or stand-by power supply, e.g. for emergency lighting, in which the distribution system is disconnected from the normal source and connected to a standby source with automatic change-over
H02J 3/46 - Arrangements for parallel feeding of a single network by two or more generators, converters or transformers, controlling the distribution of power between the generators, converters or transformers
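The mode-switch decision in the power-stabilization abstract above reduces to a comparison of the measured source load against the target power consumption; the function and mode names below are assumptions for illustration, not the actual converter firmware.

```python
# Illustrative sketch of the bi-directional converter mode switch:
# above target -> capacitor bank discharges into the power rail;
# below target -> rail current charges the capacitor bank.
def select_mode(source_load_w: float, target_w: float) -> str:
    if source_load_w > target_w:
        return "discharge"  # second directional mode: bank -> rail
    if source_load_w < target_w:
        return "charge"     # first directional mode: rail -> bank
    return "hold"

print(select_mode(110.0, 100.0))  # discharge
print(select_mode(90.0, 100.0))   # charge
```

The effect is that the primary power source sees a draw near the target while the capacitor bank absorbs the fluctuations.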
38.
FIXING USAGES OF DEPRECATED APIS USING LARGE LANGUAGE MODELS
Techniques for intelligently prompting an LLM to fix code are disclosed. A corpus of release notes for a set of libraries is accessed. The release notes include information describing deprecated or removed APIs associated with the libraries. The corpus is stored in a vector database. A code snippet is accessed. This snippet is identified as potentially using a deprecated API. The code snippet is used to identify a set of release notes from the vector database. These release notes are determined to satisfy a threshold level of similarity with the code snippet. An LLM prompt is built and is fed to the LLM. The LLM prompt instructs the LLM to update the code snippet based on the identified set of release notes. Output of the LLM is displayed. This output includes a proposed rewritten version of the code snippet.
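The retrieval step described above can be sketched with a toy similarity search: release notes and the code snippet are embedded (here as bag-of-words vectors standing in for the vector database), cosine similarity selects notes above a threshold, and a prompt is assembled. All names and the threshold are illustrative assumptions.

```python
# Hedged sketch of threshold-based retrieval of release notes for an
# LLM fix-it prompt. Bag-of-words cosine similarity is a stand-in for
# the real embedding model and vector database.
from collections import Counter
import math

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_fix_prompt(snippet, release_notes, threshold=0.2):
    sv = embed(snippet)
    relevant = [n for n in release_notes if cosine(sv, embed(n)) >= threshold]
    return ("Update this code per the release notes below.\n"
            + "\n".join(relevant) + "\nCode:\n" + snippet)

notes = ["connect_v1 is deprecated use connect_v2",
         "unrelated logging change"]
prompt = build_fix_prompt("uses connect_v1 deprecated api", notes)
```

Only notes meeting the similarity threshold reach the prompt, which keeps the LLM's context focused on the deprecated API actually in use.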
Disclosed systems and methods identify a data record set and determine whether one or more predetermined conditions exist for triggering analysis of one or more records in the data record set. Disclosed embodiments trigger the analysis only in response to determining that the predetermined conditions have been met. Upon triggering the analysis of the data record set, disclosed embodiments identify a subset of the data record set to undergo the analysis while refraining from performing the analysis on the remaining records in the data record set. Further, embodiments identify an analysis model based on a level of analysis to be performed and apply the analysis model to the subset of the data record set to identify any presence of sensitive data. Lastly, disclosed embodiments selectively perform a security process to the data record set in response to detecting the presence of the sensitive data.
The present disclosure relates to a vector processor implemented on programmable hardware (e.g., a field programmable gate array (FPGA) device). The vector processor includes a plurality of vector processor lanes, where each vector processor lane includes a vector register file with a plurality of register file banks and a plurality of execution units. Implementations described herein include features for optimizing resource availability on programmable hardware units and enabling superscalar execution when coupled with a temporal single-instruction multiple data (SIMD) approach.
G06F 9/30 - Arrangements for executing machine instructions, e.g. instruction decode
G06F 9/38 - Concurrent instruction execution, e.g. pipeline or look-ahead
G06F 15/78 - Architectures of general-purpose stored-program computers comprising a single central processing unit
G06F 15/80 - Architectures of general-purpose stored-program computers comprising an array of processing units with common control, e.g. single-instruction multiple-data processors
A hinge assembly rotatably coupling a first portion of a foldable display device to a second portion of the foldable display device includes a hinge spine and a hinge attachment frame attached to the first portion. A virtual pivot link connects the hinge attachment frame to the hinge spine, wherein the virtual pivot link is movably coupled to the hinge spine via a virtual pivot. A display support is slidably attached to the virtual pivot link, such that the display support moves relative to the hinge attachment frame and the virtual pivot link during movement of the first portion relative to the second portion.
The present disclosure provides methods, apparatuses and non-transitory computer-readable medium for providing a structured image collection. User data may be obtained. A user intent may be determined based on the user data through an intent classifier, wherein the user intent comprises at least a sequential result intent or a grouped result intent. A target query may be determined based on the user data. The structured image collection may be generated based on the target query and the user intent, wherein the structured image collection comprises a plurality of images that are structurally related.
Systems and methods are disclosed herein for providing a shared ambiance in a virtual meeting. In some instances, systems receive audio signals from a set of audio inputs corresponding to a plurality of participants in a virtual meeting. Each audio signal includes a voice component and a background noise component. Systems isolate the background noise component from the voice component for each received audio signal and then determine an ambiance score for each isolated background noise component. Based on the determined ambiance scores, systems select a particular background noise component from the isolated background noise components and transmit the particular background noise component to a set of audio outputs corresponding to the plurality of participants in order to provide a shared ambiance for the plurality of participants in the virtual meeting.
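The selection step in the shared-ambiance abstract above can be sketched as scoring each isolated background component and broadcasting the best one; the scoring heuristic below is an illustrative assumption (a real system would use a learned or spectral metric), not the disclosed method.

```python
# Minimal sketch of ambiance selection: score each participant's isolated
# background-noise component and pick the highest-scoring one to share.
def ambiance_score(background):
    # Toy heuristic: steady, moderate-energy noise scores higher than
    # silence or loud spikes.
    energy = sum(s * s for s in background) / len(background)
    return energy if energy < 0.25 else 0.5 - energy

def select_shared_ambiance(backgrounds):
    """backgrounds: {participant_id: samples}; returns the chosen id."""
    return max(backgrounds, key=lambda pid: ambiance_score(backgrounds[pid]))

streams = {"alice": [0.1] * 10, "bob": [0.9] * 10, "carol": [0.0] * 10}
print(select_shared_ambiance(streams))  # alice
```

The chosen component would then be mixed into every participant's output so all attendees hear the same ambiance.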
Techniques for intelligently prompting an LLM to refactor code are disclosed. A code snippet is accessed. This code is identified as potentially comprising a reference to an out-of-compliance library. Context for the code snippet is generated. An LLM prompt is then built. This prompt will be fed to the LLM, and the prompt instructs the LLM to refactor the code snippet into modified code, which calls a compliant library. Output of the LLM is displayed. This output is based on the LLM operating in response to the LLM prompt. The output includes a proposed rewritten version of the code snippet.
A computing system (10) for conditional generation of protein sequences includes processing circuitry (18) that implements a denoising diffusion probabilistic model (26). In an inference phase, the processing circuitry (18) receives an instruction (32) to generate a predicted protein sequence (64) having a target functionality, the instruction (32) including first conditional information (34) and second conditional information (36). The processing circuitry (18) concatenates a first conditional information embedding (40) generated by a first encoder (38) and a second conditional information embedding (44) generated by a second encoder (42) to produce a concatenated conditional information embedding (46). The processing circuitry (18) samples noise from a distribution function (52) and combines the concatenated conditional information embedding (46) with the sampled noise to produce a noisy concatenated input (56). The processing circuitry (18) inputs the noisy concatenated input (56) to a denoising neural network (58) to generate a predicted sequence embedding (60), inputs the predicted sequence embedding (60) to a decoding neural network (62) to generate the predicted protein sequence (64), and outputs the predicted protein sequence (64).
Systems, methods, apparatuses, and computer program products are disclosed for employing a hybrid boot to reimage a target device using a mobile device. A mobile device provides, to a target device, a boot file configured to execute an intermediate operating system. The mobile device performs a user presence check to determine whether the target device is in proximity to the mobile device. Responsive to determining that the target device is in proximity to the mobile device, the mobile device provides, to the intermediate operating system on the target device, transfer information associated with at least a first restricted-access portion of a customized system image to cause the intermediate operating system to obtain the first restricted-access portion of the customized system image and reimage the target device based at least on the first restricted-access portion of the customized system image.
A disclosed method facilitates AI-generation of a customized email per a methodology that significantly reduces the risk of the customized email including hallucinated facts or undesirable personal identity information (PII). The method includes identifying an email template and a recipient identifier that identifies a recipient of the customized email based on user inputs to an email application; mining contextual data stored in association with the recipient identifier; generating a large language model (LLM) prompt based on the email template and the contextual data; providing the LLM prompt as input to a trained large language model (LLM); receiving the customized email as an output from the LLM; and returning the customized email to the email application for display within a user interface.
The described technology provides a method including generating a full tracking vector wherein each bit of the full tracking vector indicates cache validity state of a coherence granule (cogran) in agent cache for a related agent, dividing the tracking vector into a plurality of partial vectors (PVECs), for each PVEC, determining whether cache validity state of at least one bit in the PVEC is set to valid, and in response to determining that cache validity state of at least one bit in a given PVEC is set to valid, storing the given PVEC and its PVEC pointer in a tracking_info field of a base snoop filter (SFT) entry for the cogran, wherein the PVEC pointer indicates the location of the given PVEC in the full tracking vector.
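The PVEC compression described above can be sketched as splitting the full per-agent validity vector into fixed-width partials and storing only those containing a set bit, each with a pointer to its position; the width and data layout below are illustrative assumptions.

```python
# Hypothetical sketch of partial-vector (PVEC) tracking: only partials
# with at least one valid bit are stored, keyed by a PVEC pointer that
# locates the partial within the full tracking vector.
PVEC_WIDTH = 8

def compress_tracking_vector(full_vector):
    """Return {pvec_pointer: pvec_bits} for partials with any bit set."""
    tracking_info = {}
    for start in range(0, len(full_vector), PVEC_WIDTH):
        pvec = full_vector[start:start + PVEC_WIDTH]
        if any(pvec):
            tracking_info[start // PVEC_WIDTH] = pvec
    return tracking_info

# 32 agents; only agents 3 and 17 hold the coherence granule as valid.
full = [0] * 32
full[3] = full[17] = 1
info = compress_tracking_vector(full)
print(sorted(info))  # [0, 2]  -> only PVECs 0 and 2 need storage
```

When validity is sparse, the snoop filter entry stores a couple of partials plus pointers instead of the whole vector.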
The present disclosure provides methods, systems and storage media for conducting a security review of a system. Certain examples relate to the use of trained generative AI: a root security query is generated using a machine learning (ML) generator, based on a description of a target system. A security requirement associated with the root security query is extracted, and an indication of the root security query is output at a user interface. A user input is received in response, and the ML generator generates a follow-up request that is output via the user interface. A second user input is received in response to the follow-up request, and the ML generator then determines that the security requirement is not satisfied by the target system.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
This document relates to providing meaningful information relating to a dataset. One example can obtain aggregated summaries and a related knowledge graph. The example can enable local, community, and global retrieval augmented generation utilizing the aggregated summaries and the knowledge graph.
The described technology provides a multi-level error correction method, including encoding data received from a double data rate (DDR) memory by performing primary coding to generate transitional symbols, wherein the primary coding comprising at least one of cyclical redundancy check (CRC) encoding and single error correction double error detection (SECDED) encoding, performing a secondary coding on the transitional symbols to generate inner codes, the inner codes comprising code 1 parities generated from the transitional symbols and code 2 parities generated from the transitional symbols and metadata stored on the DDR memory, wherein the secondary coding comprising Reed Solomon (RS) encoding, and saving the inner codes on parity bit storage locations on a die of the DDR memory.
G06F 11/10 - Detection or correction of errors by introducing redundancy in the data representation, e.g. using check codes, by adding special bits or symbols to the coded data, e.g. parity check, casting out nines or elevens
52.
TRACKING AND MITIGATION OF QUASIPARTICLE POISONING ERRORS IN MAJORANA QUANTUM COMPUTING SYSTEMS
A computing system including a quantum computing device that includes Majorana islands at which Majorana zero modes (MZMs) are instantiated. The computing system further includes a controller configured to control the quantum computing device to perform a joint parity measurement at two or more MZMs. The controller is further configured to control the quantum computing device to perform quasiparticle poisoning (QPP) detection at one or more of the Majorana islands to thereby generate error data. The error data includes one or more QPP indications associated with the one or more Majorana islands. The controller is further configured to receive the error data from the quantum computing device. The controller is further configured to update an accumulated error state of the one or more Majorana islands based at least in part on the error data, and to perform an update operation based at least in part on the accumulated error state.
Keyboard (100) and trackpad (104) configurations and related methods (300) utilize data from one or more bending sensors (138) to adjust a driving signal for a haptic actuator (222) on a haptic trackpad (104). In one example, a method (300) for adjusting a driving signal for a haptic actuator (222) on a force-sensing haptic trackpad (104) in a deformable keyboard (100) includes using at least data from a bending sensor (138) to determine that the keyboard (100) is bending. At least on condition of determining that the keyboard (100) is bending, the method (300) includes using the data from the bending sensor (138) to adjust an initial haptic driving signal to an adjusted haptic driving signal. The haptic actuator (222) is driven with the adjusted driving signal to generate haptic output via a touch receiving surface (166) of the force-sensing haptic trackpad (104).
G06F 3/02 - Input arrangements using manually operated switches, e.g. keyboards or dials
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of 2D relative movements between the pointing device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
Disclosed solutions perform image compression using a variational autoencoder that enables greater compression than traditional methods, while simultaneously maintaining superior fidelity for the decompressed image. Examples persist the bottleneck layer output of a variational autoencoder as a compressed image in the form of a latent tensor. The latent tensor is decompressed by a variational autodecoder into a recovered image in pixel space. In some examples, different encoder/decoder pairs are trained on specific image types, based on feature attributes. For example, maps have lines that are narrow compared to their length (e.g., have a high aspect ratio) which are different than features within photographs of people and scenes. Some examples leverage contrastive language-image pre-training (CLIP) and/or bootstrapping language-image pre-training (BLIP) models to store embeddings, each associated with a compressed image, to enable natural language searches of compressed image collections without requiring decompression.
Undesirable light leakage is reduced in a mixed-reality head-mounted display (HMD) device using an out-coupling diffractive optical element in a waveguide combiner that is implemented using a surface relief grating (SRG) having a gradient refractive index. The SRG has gratings with modulated depth in which shallower gratings have a lower refractive index and deeper gratings have a higher refractive index. The lower efficiency of the shallower gratings reduces forward-propagating virtual image light leaking into the real-world environment of the HMD device while simultaneously enabling light to propagate to the deeper gratings to thereby improve virtual image uniformity over the entirety of the eyebox of the combiner. The SRG with gradient refractive index is alternatively fabricated using an inkjet deposition process with resin inks having different refractive indexes and subsequent nanoimprint lithography grating imprinting or physical vapor deposition by which a thickness-modulated resin layer is applied to a constant-height grating structure.
A computing system including one or more processing devices configured to compute a simulation of a chemical system. Computing the simulation includes, at a machine learning force fields (MLFF) model, computing local label sets of respective local regions. Each local label set includes MLFF labels. Computing the simulation further includes computing local uncertainty values and identifying local regions based on the uncertainty values. For each identified local region, computing the simulation further includes computing a substructure that surrounds that region. The substructure is selected by computing a substructure boundary that approximately minimizes a substructure uncertainty. Computing the simulation further includes selecting atomic positions within the substructure that approximately minimize the substructure uncertainty. Computing the simulation further includes computing substructure labels via quantum chemical simulation, training the MLFF model using the substructure labels, and outputting a potential energy surface (PES) specified by the MLFF labels and the substructure labels.
G16C 10/00 - Computational theoretical chemistry, i.e. ICT specially adapted for theoretical aspects of quantum chemistry, molecular mechanics, molecular dynamics or the like
57.
IMAGE SENSING PIXEL CONFIGURATIONS FOR REDUCED DARK CURRENT NOISE
An image sensor comprises a plurality of image sensing pixels arranged to form a sensor array. Each image sensing pixel of the plurality of image sensing pixels comprises a semiconductor photodetector connected to a photosensitive region that comprises a photon reception area configured to receive photons to facilitate image capture. For at least a particular image sensing pixel of the plurality of image sensing pixels, the length or the width of the photon reception area is smaller than about 80% of a pixel pitch measurement between the particular image sensing pixel and an adjacent image sensing pixel, which contributes to reduced volume of the photosensitive region and mitigated sensor noise. A space between the photosensitive region of the particular image sensing pixel and the photosensitive region of the adjacent image sensing pixel comprises at least one oxide layer and/or at least one metal layer.
A method (52) for enacting a measurement circuit (50) of a surface code on a plaquette of qubits (14) of a qubit lattice (40) comprises: distributing (52A) among a sequence of time steps a set of one-qubit projective measurements on each of three auxiliary qubits (14A) of the plaquette; distributing (52B) among the sequence of time steps a set of two-qubit projective measurements on each of four data qubits (14D) of the plaquette together with one of the three auxiliary qubits; distributing (52C) among the sequence of time steps a set of two-qubit projective measurements on two or more auxiliary-qubit pairs selected from the three auxiliary qubits of the plaquette; and advancing (52D) through each of the time steps of the sequence, executing the one- and two-qubit projective measurements distributed therein. In this method the measurement circuit corresponds to a stabilizer of the surface code, and the measurements generate measurement of a stabilizer operator.
A method (70) for implementing a measurement circuit (50) of a surface code on a plaquette (46) of qubits (14) of a Majorana-tetron lattice (66, 68) comprises: distributing (70A) among a sequence of time steps a set of one-qubit projective-measurement loops on each of three auxiliary qubits (14A) of the plaquette; distributing (70B) among the sequence of time steps a set of two-qubit projective-measurement loops on each of four data qubits (14D) of the plaquette together with one of the three auxiliary qubits; distributing (70C) among the sequence of time steps a set of two-qubit projective-measurement loops on two or more auxiliary-qubit pairs selected from the three auxiliary qubits of the plaquette; and advancing (70D) through each of the time steps of the sequence, executing the one- and two-qubit projective measurements distributed therein. In this method the measurement circuit corresponds to a stabilizer of the surface code, and the measurements generate measurement of a stabilizer operator.
High availability network services are provided in a communications network comprising a plurality of network devices including a network function implemented as two instances configured as an active instance and a backup instance. The backup instance maintains state data such that the backup instance can actively provide services in response to a failure of the active instance. A pool of data forwarding functions sends, over a tunnel connection, ingress data packets to the network function based on a MAC address of the active instance on an overlay network. When the active instance has failed, the backup instance provides the network function and the pool of data forwarding functions sends over the tunnel connection, subsequent ingress data packets to the network function based on an overlay network MAC address of the backup instance.
H04L 69/40 - Network arrangements, protocols or services, independent of the application payload and not provided for in the other groups of this subclass, for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
H04L 45/28 - Routing or path finding of packets in data switching networks using route fault recovery
H04L 61/103 - Mapping addresses of different types across network layers, e.g. resolution of network layer addresses into physical layer addresses or address resolution protocol [ARP]
61.
PART COMPOSITION CLASSIFICATION FOR ARTICLES OF MANUFACTURE
Systems and methods for describing a composition of an article of manufacture are disclosed. In one aspect, a method (300) includes receiving (310) article composition data (122) for an article of manufacture (210) that identifies a set of parts (222) of the article, a stated composition (240, 260) for each part of the set of parts, and a physical quantity (246, 266) of the stated composition. The method further includes classifying (316) the stated composition of each part of the set of parts into a normalized composition (148) that includes a set of normalized chemicals (324, 334, 344, 354, 364). The method further includes outputting (398) an aggregated physical quantity (396) of each normalized chemical for the set of parts of the article. The method can include classifying (370) the normalized composition of each part into a material category (382) within a hierarchical taxonomy (378) based on the set of normalized chemicals of that normalized composition and outputting (398) an aggregated physical quantity (397) of each material category for the parts.
Enforcement of a communication policy at a communication intermediary configured to communicate between a first communicating entity and a second communicating entity is provided. The communication intermediary includes packet routers. The enforcement includes identifying, by the packet routers of the communication intermediary, a secure plaintext label in each network packet of labeled network traffic received at the packet routers, evaluating whether the labeled network traffic satisfies an enforcement condition of the communication policy based on the secure plaintext label, instructing a network controller to operate on the labeled network traffic according to the communication policy, based on the operation of evaluating. Each network packet includes encrypted content configured to be inaccessible by the packet routers. The secure plaintext label is accessible by the packet routers and includes a data encoding of a portion of the encrypted content.
Some embodiments engineer a prompt for submission to a language model, such as a software development large language model. Some embodiments ascertain a relationship between code development information and potential context. Code development information includes static analysis results, project settings, development tool history or status data, and other software development data which augments training data previously embedded in the language model. Some embodiments compute a prompt inclusion score of the potential context, based on at least the relationship, and use the inclusion score to determine whether to include the potential context in the language model prompt. In some scenarios, an embodiment determines where to place the context in the prompt. Scoring is performed by a formula, statistical scoring model, or machine learning scoring model. Some embodiments reduce context inclusion false positives and false negatives that were based on the use of embedding similarity scores alone.
A computing device (100) is provided, including a processor (104) and a storage device (102) holding instructions that are executable by the processor (104) to implement a base artificial intelligence (AI) model (106) and two or more delta AI models (108A-C), each delta AI model (108) having lower dimensionality than the base AI model (106). An inference request (110) including an input prompt (112) is received, the inference request (110) specifying a selected delta AI model (108A) of the two or more delta AI models (108A-C). The input prompt (112) is input to the base AI model (106) to thereby generate a base model result vector (116). The input prompt (112) is input to the selected delta AI model (108A) to thereby generate a delta model result vector (118). An output vector (122) is generated by combining the base model result vector (116) and the delta model result vector (118) via a combination operation (120). The output vector (122) is output.
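The combination step above can be pictured with toy stand-ins. This is a minimal sketch, assuming the combination operation is elementwise addition and that each delta model is a rank-1, LoRA-style correction; all names and shapes are illustrative, not taken from the abstract:

```python
import numpy as np

def combined_inference(base_model, delta_models, prompt_vec, selected):
    """Run the base model and the selected delta model on the same input,
    then merge the two result vectors via the combination operation
    (assumed here: elementwise addition)."""
    base_result = base_model(prompt_vec)                # base model result vector
    delta_result = delta_models[selected](prompt_vec)   # delta model result vector
    return base_result + delta_result                   # output vector

# Toy stand-ins: the "base model" is a dense projection; each "delta
# model" is a rank-1 correction, i.e. far lower dimensionality.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
base_model = lambda x: W @ x
u, v = rng.standard_normal(4), rng.standard_normal(4)
delta_models = {"A": lambda x: u * (v @ x)}

x = np.ones(4)
out = combined_inference(base_model, delta_models, x, selected="A")
```

Because the delta model stores only two length-4 vectors rather than a full 4x4 matrix, switching between delta models per request stays cheap while the base model is shared.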
A computing system (10) is provided, including processing circuitry (14) configured to cause an interaction interface (84) for a trained generative model (80) to be presented, in which the interaction interface (84) is configured to communicate a portion of a user interaction history (40). The processing circuitry (14) is further configured to receive, via the interaction interface (84), an input (92) for the trained generative model (80) to generate an output (94). The processing circuitry (14) is further configured to send a command to create, via the trained generative model (80) or another trained generative model (81), a whiteboard (60) based on the user interaction history (40) and receive the created whiteboard (60). The processing circuitry (14) is further configured to generate a prompt (70) based on the whiteboard (60) and the instruction from the user and provide the prompt (70) to the trained generative model (80). The processing circuitry (14) is further configured to receive a response from the trained generative model (80) and output (94) the response via the interaction interface (84).
A technique is described herein for receiving a selected set of weights and a mask produced by any type of sparsification process by operating on an original set of weights. The mask describes positions of the selected set of weights and a non-selected set of weights among a combined set of weights. For example, the non-selected set of weights represents weights that have been zeroed out in the original set of weights. In an inference stage, a processor directly performs computations on the selected set of weights and the mask, without the preliminary step of reconstituting the non-selected weights in memory. Instead, the processor performs computations that take into account the influence of the non-selected weights. The technique is efficient because it reduces the consumption of memory during the execution of the machine-trained model, and reduces the transactional costs associated with moving weights between memory and processing functionality.
Example solutions provide an artificial intelligence (AI) agent for pre-build configuration of cloud services in order to enable the initial build of a computational resource (e.g., in a cloud service) to minimize the likelihood of excessive throttling or slack. Examples leverage prior-existing utilization data and project metadata to identify similar use cases. The utilization data includes capacity information and resource consumption information (e.g., throttling and slack) for prior-existing computational resources, and the project metadata includes information for hierarchical categorization used to identify similar resources. A pre-build configuration is generated for the customer's resource, which the customer may tune based upon the customer's preferences for a cost and performance balance point.
Control of network traffic in a network is provided, including classifying a network request from a network source address using request classifiers selected from a plurality of request classifiers based on the network request satisfying classification conditions of the selected request classifiers, associating the network request with each classifier metric corresponding to the selected request classifiers, aggregating the classifier metrics associated with the network request to determine an aggregate request control metric of the network request, and instructing a network traffic controller to operate on the network request based on whether the aggregate request control metric satisfies a request control condition. Each of the plurality of request classifiers is associated in memory with a corresponding classifier metric.
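The classify-aggregate-decide flow above can be sketched as follows. The predicates, metric values, and summation as the aggregation are all illustrative assumptions; the abstract leaves them open:

```python
def control_request(request, classifiers, threshold):
    """Score a network request with every classifier whose classification
    condition it satisfies, aggregate the associated classifier metrics,
    and decide an action for the network traffic controller."""
    # Each classifier pairs a condition predicate with its classifier metric.
    selected = [metric for cond, metric in classifiers if cond(request)]
    aggregate = sum(selected)              # assumed aggregation: sum
    action = "block" if aggregate >= threshold else "allow"
    return aggregate, action

classifiers = [
    (lambda r: r["rate"] > 100, 2.0),            # high request rate
    (lambda r: r["src"].startswith("10."), 0.5), # suspicious address range
    (lambda r: r["ua"] == "", 1.0),              # missing user agent
]
agg, action = control_request({"rate": 250, "src": "10.0.0.7", "ua": ""},
                              classifiers, threshold=3.0)
```

Here all three conditions fire, the metrics sum to 3.5, and the request control condition (aggregate >= 3.0) triggers a block instruction.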
A system for establishing network reliability for a computer network includes a plurality of initiating nodes to transmit a plurality of packets across the network and a plurality of receiving nodes to receive the plurality of packets via the network. A portion of the plurality of packets transmitted from the initiating nodes are appended with identifiers that correspond to characteristics of entities using the network. The plurality of receiving nodes transmit acknowledgement receipts associated with packets appended with the identifiers to a network monitoring system that monitors quality of service associated with the characteristics.
H04L 43/0805 - Monitoring or testing based on specific metrics, e.g. quality of service [QoS], energy consumption or environmental parameters, by checking availability
H04L 43/091 - Monitoring or testing based on specific metrics, e.g. quality of service [QoS], energy consumption or environmental parameters, by measuring the contribution of each network component to the actual service level
Methods, systems, and computer storage media for providing compute management using a compute management engine in an artificial intelligence (AI) system. A compute management engine supports dynamically switching between two modes of operation for an inference phase of a generative AI model. The compute management engine employs a bypass engine that causes prompt stage operations to be executed without an in-memory compute engine and causes auto-regression stage operations to be executed with the in-memory compute engine. In operation, an inference phase operation is accessed. When the inference phase operation is a prompt stage operation, the inference phase operation is executed without an in-memory compute engine. When the inference phase operation is an auto-regressive stage operation, the inference phase operation is executed with the in-memory compute engine. Memory output is generated for the inference phase operation to cause a processor to output a processor output for the inference phase operation.
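The bypass decision is a simple dispatch on the stage of each inference operation. A minimal sketch, with the two engines modeled as plain callables (the engine interfaces are assumptions):

```python
def execute_inference_op(op, in_memory_engine, bypass_engine):
    """Route an inference-phase operation: prompt-stage operations bypass
    the in-memory compute engine; auto-regressive operations use it."""
    if op["stage"] == "prompt":
        return bypass_engine(op)       # executed without in-memory compute
    if op["stage"] == "autoregressive":
        return in_memory_engine(op)    # executed with in-memory compute
    raise ValueError("unknown inference stage")

log = []
in_mem = lambda op: log.append(("in-memory", op["id"]))
bypass = lambda op: log.append(("bypass", op["id"]))
for op in [{"stage": "prompt", "id": 0}, {"stage": "autoregressive", "id": 1}]:
    execute_inference_op(op, in_mem, bypass)
```

The compute-bound prompt stage and the memory-bound auto-regression stage thus each run on the hardware path suited to them.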
A media server uses selective just-in-time ("JIT") transcoding of media such as video. For example, the media server determines a measure of complexity of a given segment of a given media sequence. The given segment has been encoded at a base bit rate. The media server evaluates a complexity condition for the given segment. As part of evaluating the complexity condition, the media server compares the measure of complexity to a complexity threshold. Based at least in part on whether the complexity condition is satisfied, the media server selects between use of preemptive transcoding and use of JIT transcoding for the given segment at a given target bit rate. In this way, the media server can selectively incur the cost of preemptive transcoding operations for the given segment if JIT transcoding would likely introduce unacceptable delay, and the media server can otherwise use JIT transcoding operations for the given segment.
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs, involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 19/115 - Selection of the code volume for a coding unit prior to coding
H04N 19/177 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being a group of pictures [GOP]
H04N 19/40 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
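The complexity condition of the selective JIT transcoding entry above reduces to a threshold test per segment. A sketch, assuming bits-per-pixel of the base-rate encoding as the complexity measure (the abstract does not fix the measure):

```python
def measure_complexity(encoded_bits, pixels):
    """Proxy complexity measure for a segment encoded at the base bit
    rate: bits spent per pixel (an illustrative assumption)."""
    return encoded_bits / pixels

def choose_transcoding(complexity, complexity_threshold):
    """Complexity condition: a complex segment would make just-in-time
    transcoding too slow, so transcode it preemptively; otherwise defer
    to JIT transcoding at the target bit rate."""
    return "preemptive" if complexity > complexity_threshold else "jit"

seg_complex = measure_complexity(encoded_bits=4_000_000, pixels=2_000_000)
seg_simple = measure_complexity(encoded_bits=400_000, pixels=2_000_000)
plans = [choose_transcoding(c, complexity_threshold=1.0)
         for c in (seg_complex, seg_simple)]
```

Only the segments likely to stall a JIT pipeline pay the preemptive transcoding cost; everything else is transcoded on demand.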
72.
DEADLOCK DETECTION AND REMOVAL FOR MESH NETWORK FOR A PROCESSOR-BASED SYSTEM
Systems and methods are disclosed for detecting a deadlock in a cyclical dependency among a set of nodes in a mesh network. In some aspects, each of the nodes has a stall detection circuit. The stall detection circuit of each node provides a stall output that is asserted not only when linked input and output pipeline circuits are stalled but also when a stall input from an upstream node indicates that the upstream node is stalled. The stall output is provided as a stall input to the downstream node. In this manner, the stall outputs of the stall detection circuits are stable and asserted when there is a deadlock in a cyclical dependency in a closed loop of nodes in the mesh network.
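The stable-assertion property can be illustrated with a fixed-point simulation over a ring of nodes. This sketch assumes the stall output is the conjunction of the local stall and the upstream stall input, which is one reading of the combination; the hardware detail is in the disclosure, not here:

```python
def detect_deadlock(locally_stalled, rounds=None):
    """Simulate per-node stall detection on a ring: a node's stall output
    is asserted when it is locally stalled AND the stall input from its
    upstream neighbour is asserted. Outputs that settle asserted around
    the whole loop indicate a deadlocked cyclical dependency."""
    n = len(locally_stalled)
    out = [True] * n                    # start asserted; deassertions propagate
    for _ in range(rounds or n + 1):
        out = [locally_stalled[i] and out[(i - 1) % n] for i in range(n)]
    return all(out)

deadlocked = detect_deadlock([True, True, True])   # every node blocks the next
healthy = detect_deadlock([True, False, True])     # one node can still drain
```

With one node able to make progress, its deasserted output propagates around the loop within a few rounds and the all-asserted (deadlock) condition never stabilizes.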
Systems, methods, devices, and computer readable storage media described herein provide techniques for simplifying data access and management for data computing. In an aspect, a request to load data is received. The request comprises an aliased name associated with the data. A call is transmitted to a name resolution service executing on a computing device. The call comprises the aliased name and is configured to cause the name resolution service to identify the data associated with the aliased name. A response is received from the name resolution service. The response comprises metadata of the data. The data is obtained from a data source based on the metadata. A dataset is generated based on the obtained data. A response to the request is provided. The response comprises the generated dataset. In a further aspect, an application is configured to import a library into a computer program under development.
A method, computer program product, and computing system for: receiving a request from a user to use grounding material in a generative AI system; establishing a network connection with trusted-source material to allow access to the trusted-source material; processing the grounding material to confirm the integrity of the grounding material; and allowing the grounding material to be utilized in the generative AI system if the integrity of the grounding material is confirmed.
Embodiments of the disclosed technologies include receiving a first query including at least one first query term and configuring at least one prompt to cause a large language model to translate the at least one first query term into a set of functions that can be executed to obtain at least one second query term and generate and output a plan that is executable to create a modified version of the first query based on the at least one second query term. The plan is obtained by applying the large language model to the at least one prompt as configured. The plan is executed to determine the at least one second query term and create the modified version of the first query. The modified version of the first query is executed to provide, via a user interface, a response to the first query.
The description relates to automated binary code summarization. In one example, a binary code summarization tool receives binary code and combines the received binary code with natural language in a prompt for a large language model (LLM). The binary code summarization tool receives a semantic summarization from the LLM relating to the received binary code and evaluates the semantic summarization for malicious functionality in the received binary code.
The present disclosure relates to systems and methods that add an outer product engine and an accumulator array to implement Advanced Reduced Instruction Set Computer Machine (ARM)'s scalable matrix extensions (SME) instruction set in an ARM central processing unit (CPU) core. The systems and methods reuse the existing scalable vector extension (SVE) hardware already present in the ARM CPU core for executing the SME instruction set. The systems and methods of the present disclosure use temporal single-instruction multiple data (SIMD) processing, executing an instruction over multiple cycles, to reduce the memory bandwidth needed in the ARM CPU core to process the SME instruction set.
A method, computer program product, and computing system for processing target content generated by processing source content using a generative artificial intelligence (AI) model, where the generative AI model performs a task using the source content to generate the target content. An ontological concept is extracted from the source content using a natural language processing (NLP) engine. An ontological concept is extracted from the target content using the NLP engine. An ontological concept comparison score is generated by comparing the ontological concept from the source content and the ontological concept from the target content based upon, at least in part, the task performed using the source content to generate the target content. An issue is identified in the target content based upon, at least in part, the ontological concept comparison score and the task performed using the source content to generate the target content.
Systems and methods are provided for implementing automatic prompt optimization using textual gradients. In various embodiments, a feedback prompt, input into a large language model ("LLM"), is used to generate textual gradients that criticize a current prompt. The feedback prompt includes the current prompt and predictions that are incorrect compared with corresponding labels associated with minibatch data processed by the LLM using the current prompt. The textual gradients and current prompt are used in an editing prompt to the LLM to obtain a set of optimized prompts, which may be expanded using a paraphrasing prompt that is input into the LLM to generate a set of paraphrased prompts. A selection algorithm is used to select one or more optimized prompts from the set of optimized prompts and/or the set of paraphrased prompts, and the process is repeated with the selected one or more optimized prompts replacing the current prompt.
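The textual-gradient loop above can be sketched end to end. The LLM here is a deterministic toy stand-in so the loop is self-contained; the prompt templates, the single editing candidate, and error-count selection are illustrative assumptions, not the disclosed algorithm's exact choices:

```python
def optimize_prompt(llm, prompt, minibatch, labels, rounds=2):
    """One loop of textual-gradient prompt optimization: collect wrong
    predictions, ask the LLM to criticize the prompt (the "gradient"),
    ask it to edit the prompt, expand via paraphrasing, then select the
    candidate with the fewest minibatch errors."""
    def n_errors(p):
        return sum(llm(p + "\n" + x) != y for x, y in zip(minibatch, labels))
    for _ in range(rounds):
        if n_errors(prompt) == 0:
            break
        gradient = llm("Criticize this prompt given its errors: " + prompt)
        candidates = [llm("Edit the prompt using the criticism: "
                          + prompt + " | " + gradient)]
        candidates += [llm("Paraphrase: " + c) for c in candidates]
        prompt = min(candidates, key=n_errors)   # selection algorithm
    return prompt

# Toy deterministic stand-in for the LLM (hypothetical behaviour).
def toy_llm(text):
    if text.startswith("Criticize"):
        return "it never mentions the keyword 'good'"
    if text.startswith(("Edit", "Paraphrase")):
        return "label as pos if the text contains 'good'"
    prompt, x = text.rsplit("\n", 1)
    return "pos" if ("good" in prompt and "good" in x) else "neg"

best = optimize_prompt(toy_llm, "label the text",
                       ["good day", "bad day"], ["pos", "neg"])
```

After one round the edited prompt classifies the minibatch correctly, so the loop stops early.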
The present disclosure proposes a method, apparatus and computer program product for video search. A query in natural language may be received. A video set relevant to the query may be recalled from a video corpus. The video set may be ranked and filtered, to obtain a video subset. A video segment set relevant to the query may be recalled from a video segment corpus corresponding to the video corpus. The video segment set may be ranked and filtered, to obtain a video segment subset. It may be detected whether the query indicates video segment search intent. In response to detecting that the query indicates video segment search intent, the video subset and the video segment subset may be jointly ranked through prioritizing the video segment subset over the video subset, to produce a search result set for the query.
G06F 16/738 - Presentation of query results
G06F 16/783 - Retrieval of video data characterised by the use of metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content
Artificial intelligence (AI) operation is improved by combining pre-processing with quantization and post-processing with dequantization. Floating point conversion may be implemented as fixed point to fixed point conversion. Floating point conversion and precision may be mimicked, for example, using high precision parameters in a fixed point to fixed point conversion. Mimicking floating point using hardware acceleration reduces sequential operations, such as machine learning model preprocessing and quantization by a CPU, to one or two clock cycles in a single step operation. Accordingly, computing resources, such as computing device cameras, may provide raw data to a hardware accelerator configured to quickly render the input in the correct format to an inference model by simultaneously performing preprocessing and quantization, substantially reducing inference latency and device power consumption while freeing up a CPU for other tasks.
G06F 7/544 - Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary or decimal representation, using non-contact-making devices, e.g. tube, solid state device, for evaluation of functions by calculation
G06F 7/483 - Computations with numbers represented by a non-linear combination of denominational numbers, e.g. rational numbers, logarithmic number system or floating-point numbers
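The single-step fusion described in the preprocessing/quantization entry above works because normalization and affine quantization are both affine maps, so they compose into one multiply-add per element. A sketch with illustrative parameter values (the hardware does this per clock cycle; here it is plain Python):

```python
def fused_preprocess_quantize(raw_pixels, mean, std, scale, zero_point):
    """Fold normalization (preprocessing) and quantization into a single
    affine step: q = round(((x - mean)/std)/scale) + zero_point, i.e.
    q = round(a*x + b) with precomputed a and b, clamped to uint8."""
    a = 1.0 / (std * scale)                 # combined multiplier
    b = zero_point - mean / (std * scale)   # combined offset
    return [max(0, min(255, round(a * x + b))) for x in raw_pixels]

q = fused_preprocess_quantize([0, 128, 255], mean=128.0, std=64.0,
                              scale=0.05, zero_point=128)
```

Doing the two affine maps as one avoids a sequential CPU preprocessing pass followed by a separate quantization pass, which is the latency saving the abstract attributes to the hardware accelerator.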
82.
RECOMMENDATIONS OF EXPRESSIVE ILLUSTRATIONS BASED ON ANIMATION COMPATIBILITY
The disclosed techniques provide a messaging system with a user interface (UI) having a specific arrangement of suggested expressive illustrations, such as emojis. In some examples, a system analyzes a received emoji and provides a suggested list of emojis for a response. The suggested emojis are arranged so that candidate emojis capable of generating animation effects in combination with the received emoji are ranked higher than, e.g., precede, other candidate emojis that cannot generate animation effects in combination with the received emoji. The system can rank individual emojis, or other types of graphical expressions, depending on whether a candidate emoji is capable of generating animated effects with the received emoji. In some embodiments, candidate emojis that are capable of generating animated effects precede candidate emojis that are incapable of generating animated effects.
G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of a specific business sector, e.g. utilities or tourism
H04L 51/216 - Handling conversation history, e.g. grouping of messages in sessions or threads
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, for supporting social networking services
83.
VALIDATING READ-ONLY PORTIONS OF FIRMWARE OR A DOWNLOADED IMAGE THEREOF BY BOOT FIRMWARE VALIDATED BY SECURE FLASH MEMORY
Techniques are described herein in which boot firmware validated by secure flash memory validates read-only portions of stored firmware or a downloaded image of the read-only portions. The secure flash memory validates a portion of the firmware, which includes the boot firmware and a reference hash of the read-only portions, by comparing a calculated hash of the portion and the reference hash of the portion. The boot firmware initiates a boot of the firmware and validates the read-only portions (or the downloaded image of the read-only portions) by comparing a calculated hash of the read-only portions (or a calculated hash of the downloaded image) and the reference hash of the read-only portions. The boot firmware completes the boot of the firmware based at least on the read-only portions (or the downloaded image) being validated.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
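The two-stage hash chain in the firmware-validation entry above can be sketched in a few lines. SHA-256 is an assumed hash; the abstract only requires comparing calculated hashes against reference hashes:

```python
import hashlib

def secure_flash_validates(portion, flash_reference_hash):
    """Stage 1 (secure flash): validate the firmware portion that holds
    the boot firmware plus the embedded reference hash of the read-only
    portions."""
    return hashlib.sha256(portion).hexdigest() == flash_reference_hash

def boot_firmware_validates(ro_bytes, reference_hash_of_ro):
    """Stage 2 (boot firmware): validate the read-only portions, or a
    downloaded image of them, against the embedded reference hash."""
    return hashlib.sha256(ro_bytes).hexdigest() == reference_hash_of_ro

ro = b"\x90" * 64                                 # stand-in read-only bytes
ref_ro = hashlib.sha256(ro).hexdigest()
boot_portion = b"boot-code|" + ref_ro.encode()    # boot firmware + embedded hash
flash_ref = hashlib.sha256(boot_portion).hexdigest()

stage1 = secure_flash_validates(boot_portion, flash_ref)
stage2_ok = boot_firmware_validates(ro, ref_ro)
stage2_bad = boot_firmware_validates(ro + b"\x00", ref_ro)  # tampered image
```

Because the reference hash of the read-only portions lives inside the portion that the secure flash itself validates, tampering with either layer breaks one of the two comparisons.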
84.
DATA HEALTH EVALUATION USING GENERATIVE LANGUAGE MODELS
The disclosed concepts relate to leveraging a language model to identify data health issues in a data set. One example method involves accessing a data set. The example method also involves, using an automated evaluation planning agent, inputting a prompt to generate a data evaluation plan for the data set to a generative language model, the prompt including context describing the data set. The example method also involves receiving the data evaluation plan generated by the generative language model and identifying one or more data health issues in the data set by performing the data evaluation plan using an automated evaluation plan execution agent.
G06F 16/215 - Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
A device may include a heat sink body defining a fluid reservoir. A device may include a working fluid in the fluid reservoir. A device may include a movable contact surface configured to transfer heat from a heat-generating component to the working fluid, wherein at least a portion of the movable contact surface is movable relative to the heat sink body.
The technology described herein uses local computing resources, rather than remote resources (e.g., server based), to provide a result upon determining that the local resource is capable of providing the result with at least a threshold quality. When a local machine-learning model is not capable of providing the result with at least the threshold quality, then a remote machine-learning model may be used to provide the result. A goal of the technology is to select the most efficient resource to provide a result without significantly compromising the quality of the result. The technology described herein makes a series of determinations to identify one or more machine-learning model results that may be provided locally with or without hybrid resources. Different machine-learning model results may be provided using different hybrid workflows. In aspects, a remote result and a local result are generated and ranked by the client.
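The local-versus-remote decision can be sketched as a threshold gate on an estimated quality score. The confidence heuristic and model interfaces here are invented for illustration only:

```python
def route_inference(query, local_model, remote_model, quality_threshold):
    """Prefer the local model when its self-estimated quality clears the
    threshold; otherwise fall back to the remote model."""
    result, confidence = local_model(query)
    if confidence >= quality_threshold:
        return result, "local"
    return remote_model(query), "remote"

# Toy models: the local model is only confident on short queries.
local = lambda q: (q.upper(), 0.9 if len(q) < 10 else 0.3)
remote = lambda q: q.upper() + "!"

short = route_inference("hi", local, remote, quality_threshold=0.8)
long_q = route_inference("a very long query", local, remote, 0.8)
```

Short queries stay on-device; the long query falls through to the remote model, matching the efficiency-without-quality-loss goal described above.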
Embodiments of the present disclosure include techniques for simulating circuits, such as systems-on-a-chip (SoC). In various embodiments, a circuit being simulated may comprise one or more processors. However, simulation of the circuit using a hardware description language (HDL) may use a virtualized processor to execute firmware instructions rather than simulating the processor using HDL. Certain instructions from the virtualized processor may be sent, during simulation, to the HDL simulation for execution.
G06F 13/10 - Program control for peripheral devices
G06F 30/331 - Design verification, e.g. functional simulation or model checking, using simulation with hardware acceleration, e.g. by using field programmable gate arrays [FPGA] or emulation
G06F 115/08 - Intellectual property [IP] blocks or IP cores
G06F 111/02 - CAD in a network environment, e.g. collaborative CAD or distributed simulation
G06F 115/02 - System on chip [SoC] design
G06F 117/08 - Hardware-software co-design, e.g. hardware-software partitioning
88.
HIGH POWER LINE SYSTEM FOR HOLLOW CORE FIBER AND USES THEREOF
A system for transmitting a light signal between a first solid core optical fiber network and a hollow core fiber network includes a plurality of transponder-amplifiers, where each transponder-amplifier of the plurality of transponder-amplifiers comprises a transponder in optical communication with one of a power amplifier and a pre-amplifier. The plurality of transponder-amplifiers is in optical communication with the first solid core optical fiber network and is operative to output a plurality of first light signals. A multiplexer located downstream of the plurality of transponder-amplifiers is operative to receive the plurality of first light signals. The multiplexer is operative to select among the plurality of first light signals and transmits at least one light signal of the plurality of first light signals to the hollow core fiber network.
A device for transmitting data from a plurality of solid core optical fibers to a hollow core fiber comprises a multiplexer; a first 4F optical system that is operative to receive the light output from the multiplexer; and an amplifier disposed downstream of the first 4F optical system and upstream of a second 4F optical system, where the second 4F optical system is operative to receive amplified light output from the amplifier and output the amplified light to the hollow core fiber in a form that is compatible with the hollow core fiber.
Disclosed in some examples are methods, systems, and machine-readable media that provide for multiple concurrent message input elements in a messaging application to allow for saved draft messages. The improved interfaces allow a user to compose separate, respective responses to multiple incoming messages of a same messaging thread.
G06Q 10/107 - Computer aided management of electronic mail
H04L 51/216 - Handling conversation history, e.g. grouping of messages in sessions or threads
H04L 51/04 - Real-time or near real-time messaging, e.g. instant messaging [IM]
H04L 51/043 - Real-time or near real-time messaging, e.g. instant messaging [IM], using or handling presence information
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
The disclosed concepts relate to implementation of application and application engine functionality using machine learning. One example method involves obtaining a seed image representing a seeded application state and mapping the seed image to at least one seed image token using an image encoder. The example method also involves inputting the at least one seed image token as a prompt to a neural dreaming model that has been trained to predict training sequences obtained from one or more executions of one or more applications, the training sequences including images output by the one or more applications during the one or more executions and inputs to the one or more applications during the one or more executions. The example method also involves generating subsequent image tokens with the neural dreaming model, and decoding the subsequent image tokens with an image decoder to obtain subsequent images.
In various examples, there is a method performed by a user datagram protocol (UDP) firewall. A flow of packets is received from a public communications network, the flow of packets being sent into a private communications network. The firewall forwards a threshold amount of the flow of packets into the private communications network and validates the flow of packets in response to receiving a packet from the private communications network. In response to the validation failing, the firewall blocks the flow of packets. In response to the validation succeeding, the firewall allows the flow of packets to continue to be forwarded into the private communications network.
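The forward-then-validate behavior can be sketched as a small state machine per flow. Validation is modeled here as a callback invoked once the threshold is reached; in the method described above it is triggered by a packet coming back from the private network:

```python
def filter_flow(packets, threshold, validated_by_private_side):
    """Forward up to `threshold` packets of a new UDP flow into the
    private network unvalidated, then consult the validation outcome and
    either keep forwarding the flow or block it."""
    forwarded, state = [], "probing"
    for i, pkt in enumerate(packets):
        if state == "blocked":
            continue                    # drop everything once blocked
        if state == "probing" and i >= threshold:
            state = "allowed" if validated_by_private_side(pkt) else "blocked"
            if state == "blocked":
                continue
        forwarded.append(pkt)
    return forwarded, state

ok = filter_flow([1, 2, 3, 4, 5], threshold=3,
                 validated_by_private_side=lambda p: True)
bad = filter_flow([1, 2, 3, 4, 5], threshold=3,
                  validated_by_private_side=lambda p: False)
```

An unvalidated flow thus gets at most the threshold number of packets into the private network before the decision point.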
A computer-implemented method is provided that generates shots for inclusion in a few-shot learning technique. The method includes generating an input, such as a prompt, for a generative model. The input includes a received example generative model input, and instructions which, when processed by the generative model, cause the generative model to generate example input instructions according to different tiers. The input is provided to the generative model, and in response the generated example input instructions are received. The generated example input instructions are stored as shots in a data store, with the computer language input.
Examples of the present disclosure describe systems and methods for heterogeneous scheduling for processors with multiple core types. In some examples, a scheduler assigns thread policies to respective threads. The scheduler then allocates the threads to heterogeneous cores in accordance with the thread policies assigned to the respective threads. The heterogeneous cores include one or more power efficient cores, one or more intermediate cores, and one or more performance-oriented cores, among other core types. In some examples, a core parking engine determines how many cores of each core type (power efficient, intermediate, and performance-oriented, among others) should be unparked.
G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
G06F 1/3206 - Monitoring of events, devices or parameters initiating a change into a power-saving mode
G06F 1/329 - Power saving characterised by the action undertaken by task scheduling
G06F 1/3293 - Power saving characterised by the action undertaken by switching to a less power-consuming processor, e.g. sub-CPU
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
95.
KEY HIERARCHIES FOR VIRTUAL TRUSTED PLATFORM MODULES IN COMPUTING SYSTEMS
Systems, methods, devices, and computer readable storage media described herein provide techniques utilizing key hierarchies for virtual trusted platform modules (vTPMs). In an aspect, a system comprises a vTPM. The vTPM receives a first seed value representative of a system feature of the system. The vTPM generates a first key from the first seed value, the first key configured to unseal a sealed state of an operating system of a virtual machine. The vTPM utilizes the first key to unseal the sealed state. The vTPM provides the unsealed state to an instance of the virtual machine to cause the instance to boot the operating system based on the unsealed state. In a further aspect, the vTPM receives, from an application executed by the instance, a request to perform a cryptographic operation to modify an object utilizing the first key.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
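The seed-to-key derivation and unsealing flow can be illustrated with a toy sketch. Everything here is a simplification under stated assumptions: HKDF over HMAC-SHA256 stands in for whatever derivation the vTPM uses, the `info` label is invented, and the XOR-keystream "seal" is for illustration only, not real TPM sealing.

```python
# Hedged sketch: derive a first key from a vTPM seed value, then use it
# to unseal a sealed operating-system state blob.
import hmac, hashlib

def derive_key(seed: bytes, info: bytes = b"vtpm-os-unseal") -> bytes:
    # HKDF extract-then-expand (single 32-byte output block).
    prk = hmac.new(b"\x00" * 32, seed, hashlib.sha256).digest()
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()

def seal(state: bytes, key: bytes) -> bytes:
    # Toy keystream "seal" for illustration only.
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % 32] for i, b in enumerate(state))

unseal = seal  # an XOR keystream is its own inverse

seed = b"system-feature-measurement"
key = derive_key(seed)
sealed = seal(b"os boot state", key)
assert unseal(sealed, derive_key(seed)) == b"os boot state"
```

The point of the hierarchy is visible even in the toy: only a party that can reproduce the seed (the system feature measurement) can re-derive the key and unseal the state.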
96.
SIDE CHANNEL ANALYSIS PROTECTED HMAC DRBG ARCHITECTURE
Secure hash-based message authentication code (HMAC) deterministic random bit generator (DRBG) architectures are provided. A circuit can include HMAC DRBG circuitry including a counter configured to increment based on a clock state and provide a counter output. The circuit can also include HMAC function circuitry coupled to the HMAC DRBG circuitry, the HMAC function circuitry including first and second hashing circuits. The HMAC function circuitry is configured to implement an HMAC function using the first and second hashing circuits and the counter output, to split a key into first and second shares based on the counter output, and to provide the first share to the first hashing circuit and the second share to the second hashing circuit.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
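The counter-based key splitting is the side-channel countermeasure at the heart of the abstract: neither hashing circuit ever sees the whole key. The sketch below shows the XOR-sharing idea; the rule for deriving the mask from the counter is an illustrative assumption.

```python
# Sketch of counter-based XOR key splitting: the key is divided into two
# shares whose XOR equals the key, with the split point driven by the
# counter output, so each hashing circuit processes only one share.
import hashlib, os

def split_key(key: bytes, counter: int) -> tuple[bytes, bytes]:
    # Derive a pseudorandom mask from the counter, then XOR-split the key.
    mask = hashlib.sha256(counter.to_bytes(8, "big")).digest()[: len(key)]
    share1 = bytes(k ^ m for k, m in zip(key, mask))
    share2 = mask
    return share1, share2

key = os.urandom(32)
s1, s2 = split_key(key, counter=7)
# Recombining the shares recovers the key; neither share alone reveals it.
assert bytes(a ^ b for a, b in zip(s1, s2)) == key
```

Because the counter changes every cycle, the shares change even for a fixed key, which is what frustrates power-analysis attacks that average over many traces.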
Improved branch target buffer (BTB) structures are provided. A device can include branch target buffers storing entries corresponding to branch instructions and corresponding targets of the branch instructions. The device can include a victim cache storing a branch target buffer entry that has been evicted from a branch target buffer of the branch target buffers. The device can include branch prediction circuitry configured to access the victim cache responsive to receiving respective miss indications from each branch target buffer of the branch target buffers.
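The victim-cache arrangement can be modeled in software. This is a minimal sketch under assumptions: a single-level BTB with LRU replacement stands in for the set of branch target buffers, and the sizes are illustrative.

```python
# Minimal sketch of a BTB backed by a victim cache: entries evicted from
# the branch target buffer move to the victim cache, which prediction
# logic consults only after the BTB itself misses.
from collections import OrderedDict

class BTBWithVictimCache:
    def __init__(self, btb_size=4, victim_size=2):
        self.btb = OrderedDict()      # pc -> branch target (LRU order)
        self.victim = OrderedDict()   # holds evicted BTB entries
        self.btb_size, self.victim_size = btb_size, victim_size

    def insert(self, pc, target):
        if len(self.btb) >= self.btb_size:
            old_pc, old_tgt = self.btb.popitem(last=False)  # evict LRU entry
            self.victim[old_pc] = old_tgt                   # into victim cache
            if len(self.victim) > self.victim_size:
                self.victim.popitem(last=False)
        self.btb[pc] = target

    def predict(self, pc):
        if pc in self.btb:
            return self.btb[pc]
        return self.victim.get(pc)  # consulted only on a BTB miss
```

The design choice mirrors data-cache victim caches: a small fully associative side structure recovers recently evicted entries cheaply instead of paying a full re-fill.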
A branch prediction device includes a hierarchy of successively slower-to-access branch target buffers that store branch target buffer entries identifying branch instructions, branch prediction circuitry configured to predict future branch instructions, and a branch target buffer prefetch table. The prefetch table is coupled to receive candidate entries corresponding to predicted branch target buffer misses for future branch instructions, each candidate entry corresponding to a precursor branch instruction. The prefetch table is further coupled to receive predicted precursor branch instructions that trigger promotion of an entry in a branch target buffer of the branch target buffers to a faster branch target buffer of the branch target buffers.
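The precursor-triggered promotion can be sketched as a small table. This is a hedged model, not the device itself: the two-level fast/slow split, the table layout, and the method names are all assumptions made for illustration.

```python
# Illustrative sketch of a BTB prefetch table: a candidate entry (one that
# missed the fast BTB level) is recorded keyed by a precursor branch; when
# that precursor is later predicted, the candidate is promoted into the
# faster BTB ahead of need.
class BTBPrefetchTable:
    def __init__(self):
        self.fast = {}   # fast (small, quick-access) BTB level
        self.slow = {}   # slower backing BTB level
        self.table = {}  # precursor pc -> (pc, target) candidate entry

    def record_candidate(self, precursor_pc, pc, target):
        # A fast-level miss was observed for pc; remember it against
        # the precursor branch that preceded it.
        self.slow[pc] = target
        self.table[precursor_pc] = (pc, target)

    def on_precursor_predicted(self, precursor_pc):
        # The precursor fired: promote its candidate into the fast BTB.
        cand = self.table.get(precursor_pc)
        if cand:
            pc, target = cand
            self.fast[pc] = target
```

The effect is that by the time the predicted future branch is fetched, its target already sits in the fastest BTB level.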
A deep learning model is trained to learn to generate a better-quality unit test case for a focal method through reinforcement learning using a reward score that considers static code quality properties drawn from coding best practices. The static code quality properties include an assertion in the predicted unit test case, an invocation of the focal method in the predicted unit test case, and a descriptive name for the predicted unit test case. A reward model is trained to compute a reward score for a model-predicted unit test case based on the static code quality properties. The reward score is used in a proximal policy optimization method to produce a policy loss that updates the parameters of the deep learning model towards generating a better-quality unit test case.
A method, computer program product, and computing system for generating first encoded data by performing a first encoding of data included within each of a plurality of memory dies of a memory module using an exclusive-or (XOR) encoding process. Second encoded data is generated by performing a second encoding of the data included within each of the plurality of memory dies of the memory module and the first encoded data using a cyclic code encoding process. Error correction is performed on the data included within each of the plurality of memory dies of the memory module using the first encoded data, the second encoded data, an XOR decoding process, and a cyclic code error correction process.
G06F 11/10 - Detection or correction of errors by introduction of redundancy in the data representation, e.g. by using checking codes, by adding special bits or symbols to the data expressed in a code, e.g. parity check, casting out nines or elevens
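The first (XOR) encoding stage lends itself to a compact sketch. This toy omits the cyclic-code stage entirely and assumes a bytewise parity die; die count and sizes are illustrative.

```python
# Toy sketch of the XOR encoding stage: compute a parity die as the
# bytewise XOR of all data dies, so any single lost die can be rebuilt
# from the parity and the surviving dies.
def xor_encode(dies):
    parity = bytes(len(dies[0]))
    for die in dies:
        parity = bytes(a ^ b for a, b in zip(parity, die))
    return parity

def rebuild(dies, parity, lost_index):
    # XOR of the parity with all surviving dies recovers the lost die.
    acc = parity
    for i, die in enumerate(dies):
        if i != lost_index:
            acc = bytes(a ^ b for a, b in zip(acc, die))
    return acc

dies = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
p = xor_encode(dies)
assert rebuild(dies, p, lost_index=1) == b"\x04\x08"
```

In the disclosed scheme the cyclic code over the dies plus this XOR parity is what locates and then corrects errors; the XOR stage alone, as here, can only reconstruct a die whose position is already known.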