A computer-implemented method involves identifying, at a computer system, a wireless band or channel linked to a wireless module's operation. The method further includes detecting that a data bus of the computer system is in a first data bus operation mode that causes radio frequency (RF) interference at the wireless band or channel, and identifying a second data bus operation mode that mitigates RF interference at the wireless band or channel. The method then configures the data bus to operate in the second data bus operation mode, thereby reducing RF interference and improving the computer system's overall performance and power usage.
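The mode-selection logic described above can be sketched as a simple lookup: if the current bus mode is known to interfere with the identified wireless band, switch to a mitigating mode. The mode names and the interference table below are invented for illustration; the abstract does not specify them.

```python
# Hypothetical table of (bus mode, wireless band) pairs known to interfere.
INTERFERING_MODES = {("gen4", "2.4GHz"), ("gen4", "5GHz")}

def select_bus_mode(current_mode: str, wireless_band: str) -> str:
    """Return a second bus operation mode when the current mode causes
    RF interference at the identified wireless band; otherwise keep it."""
    if (current_mode, wireless_band) in INTERFERING_MODES:
        return "gen3"  # assumed mitigating mode
    return current_mode

print(select_bus_mode("gen4", "2.4GHz"))  # gen3
```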
This disclosure describes a framework for generating real-time audio translations of videos on a client device. Specifically, this disclosure describes a video dubbing system that utilizes a concurrent batch-processing architecture to provide real-time audio translations of videos on a client device. Additionally, in one or more implementations, the video dubbing system utilizes time-aware segmentation to prevent audio misalignment of the translated audio. As described below, the video dubbing system efficiently provides high-quality audio translations of videos that accurately align with the video content for the entire video, regardless of the video's length.
G10L 13/00 - Speech synthesis; Text to speech synthesis systems
G06F 40/58 - Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
G10L 15/26 - Speech to text systems
G10L 15/04 - Segmentation; Word boundary detection
3.
ALIGNING LARGE LANGUAGE MODELS WITH IN-SITU USER INTERACTIONS AND FEEDBACK
A data processing system implements a framework that utilizes in-situ user interactions as a source of feedback for improving the training of LLMs to generate outputs that align with user preferences. The framework includes a user preference evaluation pipeline that analyzes in-situ user interactions with the LLMs and generates preference information that can be used to improve the training of the LLM and its alignment with user preferences. The user preference evaluation pipeline includes a feedback signal identification unit that identifies explicit and/or implicit feedback provided by users in response to content output by the LLM in response to a user prompt. The feedback signal identification unit estimates user satisfaction with a set of satisfaction rubrics and user dissatisfaction with a set of dissatisfaction rubrics to generate user preference data that can be used to align an LLM with these user preferences.
A system for secure dynamic loading and execution of web workers and WebAssembly modules in web-based communication applications employs a reusable generic worker approach. The system includes a main application executing in its own environment subject to a first content security policy, which loads a first web worker from a resource indicated by this policy. The first web worker executes in its own environment, specifies a second content security policy, and dynamically loads and executes additional web workers or compiles WebAssembly modules upon request from the main application. This approach improves flexibility, efficiency, and security by offloading resource-intensive tasks, simplifying security implementations, enhancing scalability, and boosting performance for real-time audio and video processing in communication applications.
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems, during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure, by executing in a restricted environment, e.g. sandbox or secure virtual machine
This disclosure provides redox cyclable molecules for energy storage. These molecules belong to either the 4H-pyran-4-ylidene family or include a six-membered aromatic ring with one nitrogen atom at position 1 (pyridinium family) or two nitrogen atoms at positions 1 and 4 (pyrazinium family) or at positions 1 and 3 (pyrimidinium family). Molecules in these families are used as analytes in redox flow batteries.
A computing system (1) including one or more processing devices (10) configured to extract ontology elements (24) from conversational turns (20). The ontology elements are extracted at least in part by executing a generative language model (22). The one or more processing devices assign a respective ontology element type (32) to each ontology element and store the ontology elements in an ontology index (30). The one or more processing devices receive a user input (20A), and, at the generative language model, compute a structured retrieval-augmented generation (RAG) query (50). The one or more processing devices execute the structured RAG query over the ontology index to obtain one or more retrieved ontology elements. At the generative language model, the one or more processing devices compute and output a generative language model output (58) based at least in part on the user input and the one or more retrieved ontology elements.
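Executing a structured RAG query over an ontology index, as described above, can be illustrated with a toy in-memory index. The index layout, element types, and query shape below are assumptions for illustration, not the patented design.

```python
# Toy ontology index: elements extracted from conversational turns,
# each tagged with an ontology element type (names are invented).
ontology_index = [
    {"element": "order #42", "type": "entity"},
    {"element": "refund",    "type": "intent"},
    {"element": "shipping",  "type": "topic"},
]

def execute_rag_query(index, query):
    """Retrieve ontology elements whose type matches the structured query."""
    return [e["element"] for e in index if e["type"] == query["type"]]

# Retrieved elements would then be fed back to the generative language model
# alongside the user input to ground its output.
print(execute_rag_query(ontology_index, {"type": "intent"}))  # ['refund']
```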
In a computing network implementing an adaptive load balancing scheme, an indication of a link failure in the computing network is received. In response to receiving the indication, a temporary freeze mode is implemented that prevents the adaptive load balancing scheme from attempting further path exploration. A subset of routing options that is known to have been recently acknowledged as valid is used.
In a computing network implementing an adaptive load balancing scheme using entropy values (EVs) to select network paths, the next expected packet sequence numbers (PSNs) sent along different paths are tracked. A generation number is increased to obtain a new EV and a last probe packet is sent to clear an old EV. If a starting PSN is divisible by a number k, an entropy slot is derived for each PSN using a modulo function based on k.
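The slot derivation described above can be sketched directly: when the starting PSN is divisible by k, each PSN maps to an entropy slot via a modulo function based on k. The function name and example values are illustrative only.

```python
def derive_entropy_slot(psn: int, starting_psn: int, k: int) -> int:
    """Map a packet sequence number (PSN) to one of k entropy slots.

    Assumes the starting PSN is divisible by k, so consecutive PSNs
    cycle deterministically through the k slots.
    """
    assert starting_psn % k == 0, "starting PSN must be divisible by k"
    return psn % k  # modulo function based on k, as in the abstract

# With k = 4 and a starting PSN of 100, consecutive PSNs cycle through 0..3.
slots = [derive_entropy_slot(psn, starting_psn=100, k=4) for psn in range(100, 108)]
print(slots)  # [0, 1, 2, 3, 0, 1, 2, 3]
```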
Techniques are described for handling real-time data on a client delivered from a backend service via a transport mechanism such as a SignalR WebSocket connection. Once a communication channel between the backend service and the client is established via the transport mechanism, the backend service transmits summary data and current data to the client in a keyset-valueset format via the communication channel at respective intervals. The summary data includes data gathered by the backend service during equal-length summary data intervals, whereas the current data includes data gathered by the backend service since an end time of a most recent one of the summary data intervals. After activation of a UI element, the client transforms data associated with the element from the summary data packet and a most recent one of the current data packets into a desired format and displays the transformed data via the UI.
This document generally relates to matching power usage on follower devices to a level of user interaction. One example includes a processor configured to run an interactive application and to proactively identify active periods where a user interacts with a presentation of the interactive application and inactive periods where the user does not interact with the presentation. The example includes a communication component configured to send a first set of parameter values to a follower device for use during the identified active periods and to send a second set of parameter values to the follower device during the identified inactive periods that cause the follower device to use a relatively lower amount of power.
A63F 13/235 - Input arrangements for video game devices for interfacing with the game device, e.g. specific interfaces between game controller and console, using a wireless connection, e.g. infrared or piconet
A63F 13/24 - Constructional details thereof, e.g. game controllers with detachable joystick handles
A63F 13/42 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment, by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
An apparatus comprising a processor and a memory storing instructions that, when executed by the processor, perform a method for estimating cleave quality of a cleaved hollow core optical fibre, is described. The method comprises receiving an image of an end face of the fibre, and analysing pixel data of the image to determine cleave quality of the fibre by: determining at least one feature of the image, the feature representing a characteristic of the end face of the fibre, and using a model to map the at least one feature to an indication of cleave quality. The method further comprises outputting the indication of cleave quality. A method for creating a model mapping at least one feature of an image of an end face of a cleaved hollow core optical fibre to an indication of cleave quality of the fibre is disclosed. The indication of cleave quality comprises an indication of at least one of cleave angle and cleave profile. The method comprises at least one of training a machine-learning model using training inputs, defining at least one rule mapping at least one feature of an end face image of a cleaved hollow core optical test fibre and defining a lookup table by associating at least one feature of an end face image of a cleaved hollow core optical test fibre.
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G02B 6/255 - Splicing of light guides, e.g. by fusion or bonding
High lattice thermal conductivity metallic materials for thermal management applications are disclosed. The metallic materials have a lattice thermal conductivity of above 100 W/mK. Examples of such metallic materials include tantalum phosphide (TaP) and manganese vanadium (MnV). These materials with high lattice thermal conductivity may be used in various applications to efficiently transfer heat. For example, they may be used in heat dissipation devices as well as in a thermally conductive unit of a device that also includes a heat generating unit. In an implementation, these materials may be used at interfaces between metallic and semiconductor or insulator materials.
A system for drawing a glass preform or input cane. The system has a feeding unit for the glass preform or input cane, a first furnace to soften a portion of the glass preform or input cane, a first pair of capstan belts to apply tension to the softened portion of the glass preform or input cane drawing the glass preform into a glass fiber or output cane, or to apply tension to the softened portion of the glass input cane drawing the glass input cane into the glass fiber, wherein each capstan belt comprises a belt surface comprising magnetic material such that only the belt surfaces contact the softened portion, and a first magnet positioned to remove magnetic particles from the glass fiber or output cane as it is drawn out by the first pair of capstan belts and without contacting the glass fiber.
C03B 37/025 - Manufacture of glass fibres or filaments by drawing or extruding from reheated softened tubes, rods, fibres or filaments
Systems and methods are provided for implementing data backup and recovery using cache-coherent interconnect node-based non-volatile memory. A cache-coherent interconnect node partitions a memory pool into a plurality of memory regions, as well as a backup storage into a plurality of memory portions, and pre-allocates a memory region and a corresponding memory portion to each compute node. When a rack-level power loss occurs and a battery-based backup power source is activated, a cache-coherent interconnect controller saves data from each memory region into the corresponding memory portion, and subsequently saves an entry for each memory portion in an index portion of the backup storage. The controller then causes power circuitry to shut down the backup power source. After rack-level power restoration and memory region initialization, the controller restores, for each memory region, the data saved in the corresponding memory portion into that memory region, based on information in a corresponding entry.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
15.
DRAWING GLASS WITH A SACRIFICIAL PROTECTIVE LAYER
A system for drawing a glass preform or input cane with reduced glass contamination. The system has a first sacrificial layer made from a film free of loosely adhered particulate matter and having a first end and a second, opposite end, wherein the first end is wound around a first supply bobbin and the second end is wound around a first take-up bobbin, and wherein a portion of the first sacrificial layer is configured to cover a surface of a first capstan belt; and a second sacrificial layer made from a film free of loosely adhered particulate matter and having a first end and a second, opposite end, wherein the first end is wound around a second supply bobbin, wherein the second end is wound around a second take-up bobbin, and wherein a portion of the second sacrificial layer covers a surface of a second capstan belt.
C03B 37/025 - Manufacture of glass fibres or filaments by drawing or extruding from reheated softened tubes, rods, fibres or filaments
According to examples, a thermally separating power coupling device includes a housing that thermally and electrically isolates a high-temperature superconductor (HTS) from an electrically conductive cable. The power coupling device includes a power coupling system that includes a rotatable shaft having a motor side and a generator side. On the motor side, a set of motor magnets is attached to the shaft and a set of motor coils is positioned near the set of motor magnets. On the generator side, a set of generator magnets is attached to the shaft and a set of generator coils is positioned near the generator magnets. When electrical current is supplied from the HTS to the motor coils, the motor coils rotate, thus causing the shaft to rotate. In addition, as the shaft rotates, the generator coils produce an electrical current that is outputted to the electrically conductive cable.
H02G 15/34 - Cable fittings for cryogenic cables
H01F 27/04 - Leading of conductors or axles through casings, e.g. for tap-changing arrangements
H02K 55/04 - Dynamo-electric machines having windings operating at cryogenic temperatures of the synchronous type with rotating field windings
The disclosed concepts relate to providing help sessions for video game players. For instance, a help session starting state can be obtained from a video game session by a particular video game player. The help session starting state can be loaded into a help session. During the help session, inputs received from a client device of a video game helper can be directed to the help session. After the help session, an updated help session state can be obtained. In some cases, the particular video game player can choose to accept the updated help session state and proceed with video game play from that state. In other cases, the particular video game player can choose to reject that state and return back to the help session starting state.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor, adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an encoded video stream for transmitting to a mobile phone or a thin client
A63F 13/493 - Resuming a game, e.g. after pausing, malfunction or power failure
A63F 13/86 - Watching games played by other players
18.
PROVIDING ARBITRATION FOR RESOURCE SHARING USING CHANNEL PRIORITY DIFFERENCES IN PROCESSOR-BASED DEVICES
Providing arbitration for resource sharing using channel priority differences in processor-based devices is disclosed herein. In one embodiment, a processor-based device comprises a data allocation circuit that is communicatively coupled to one or more ingress channels and one or more egress channels. The data allocation circuit assigns an ingress channel priority to each ingress channel, and assigns an egress channel priority to each egress channel. The data allocation circuit generates one or more channel pairs by iteratively identifying an unpaired egress channel having a highest egress channel priority, calculating absolute differences between each ingress channel priority of each unpaired ingress channel and the egress channel priority of the unpaired egress channel, and allocating the unpaired egress channel to an unpaired ingress channel corresponding to the smallest absolute difference as a channel pair. The data allocation circuit then performs one or more transactions using the corresponding one or more channel pairs.
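The channel-pairing procedure above can be sketched as a greedy loop: take the unpaired egress channel with the highest priority, compute the absolute priority difference to every unpaired ingress channel, and pair it with the closest one. This is an illustrative software analogue of the data allocation circuit, not the circuit itself; channel names and priority values are invented.

```python
def pair_channels(ingress_prio: dict, egress_prio: dict) -> list:
    """Pair each unpaired egress channel (highest priority first) with the
    unpaired ingress channel of smallest absolute priority difference."""
    pairs = []
    unpaired_ingress = dict(ingress_prio)
    # Iterate egress channels from highest to lowest egress channel priority.
    for egress, ep in sorted(egress_prio.items(), key=lambda kv: -kv[1]):
        if not unpaired_ingress:
            break
        # Smallest |ingress priority - egress priority| wins the pairing.
        ingress = min(unpaired_ingress, key=lambda c: abs(unpaired_ingress[c] - ep))
        pairs.append((ingress, egress))
        del unpaired_ingress[ingress]
    return pairs

pairs = pair_channels({"I0": 1, "I1": 5, "I2": 9}, {"E0": 8, "E1": 2})
print(pairs)  # [('I2', 'E0'), ('I0', 'E1')]
```

Transactions would then be performed over the resulting channel pairs.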
A computer-implemented method can generate a parallel schedule for partitioning devices included in a device cluster for parallel execution of a transformer model. The transformer model is represented by a chain of cells. Each cell includes a set of tasks of the transformer model. Generating the parallel schedule includes dividing the chain of cells into one or more sequential stages, creating one or more replicas of the transformer model or some of the cells, and mapping the set of tasks included in a cell to one or more devices of the device cluster. For a given workload, the method can execute the transformer model on the device cluster according to the parallel schedule.
A data processing system implements receiving, via a user interface of a client device, an image; constructing, via a prompt construction unit, a first prompt by appending the image to a first instruction string including instructions to a generative model; providing the first prompt to the generative model; generating, by the generative model and according to the first prompt, a depth map using an intensity of darkness of each pixel of the image as a respective depth of the pixel in a digital three-dimensional (3D) transparent object; digitally engraving, by the generative model and according to the first prompt, each pixel of the image in the 3D transparent object based on the respective depth in the depth map into a digital 3D engraved object; receiving the digital 3D engraved object from the generative model; and providing the digital 3D engraved object for display on the user interface.
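The darkness-to-depth mapping at the heart of the depth map step can be sketched as follows. The linear scaling and the `max_depth` parameter are assumptions for illustration; the abstract only states that pixel darkness determines depth.

```python
def depth_map(gray_image, max_depth=10.0):
    """Map each grayscale pixel (0 = black, 255 = white) to an engraving
    depth: darker pixels engrave deeper into the 3D transparent object."""
    return [[(255 - px) / 255.0 * max_depth for px in row] for row in gray_image]

img = [[0, 128], [255, 64]]  # tiny 2x2 grayscale image
print(depth_map(img))        # black pixel -> max depth, white pixel -> 0
```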
An intelligent router for generative artificial intelligence (GAI) model instances optimizes request routing to reduce latency. The system predicts output lengths using a trained response-length predictor and assesses the state of multiple GAI instances, including prompt and decode distributions. It estimates the workload mixing impact of routing requests to each instance and determines selection probabilities using a machine-learning routing model. The router either assigns the request to the most suitable instance or delays routing if conditions are suboptimal. This approach improves end-to-end latency, Time-To-First-Token (TTFT), and Time-Between-Tokens (TBT) by considering the distinct characteristics of GAI workload phases.
A computer-implemented method can receive an internal representation of a transformer model which defines one or more repeating blocks, each block including a sequence of cells, and each cell including a set of tasks of the transformer model. The method can search for a plurality of parallel schedules for partitioning devices included in a device cluster for parallel execution of the transformer model. The searching includes determining a number of model replicas, determining a number of stages that divide the one or more repeating blocks, determining a number of cell replicas for each cell in a block, and for each cell replica of a cell, generating a task mapping which maps the set of tasks included in the cell to devices partitioned into the cell replica.
The disclosed concepts relate to automatically identifying conditions in a video game to trigger a help session. When a help session is triggered, another video game player or machine learning model can temporarily take over for the current video game player until an ending condition is reached. Help session triggering can be designated by evaluation of prior gameplay data of other video game players to identify in-game conditions that may tend to cause user disengagement, such as in-game conditions that are associated with difficult in-game goals or negative in-game consequences.
A63F 13/422 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment, by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle, automatically for the purpose of assisting the player, e.g. automatic braking in a driving game
A63F 13/49 - Saving the game status; Pausing or ending the game
A63F 13/497 - Partially or entirely replaying previous game actions
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor, adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
The disclosed concepts relate to training a machine learning model to provide help sessions during a video game. For instance, prior video game data from help sessions provided by human users can be filtered to obtain training data. Then, a machine learning model can be trained using approaches such as imitation learning, reinforcement learning, and/or tuning of a generative model to perform help sessions. Then, the trained machine learning model can be employed at inference time to provide help sessions to video game players.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor, adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/86 - Watching games played by other players
A63F 13/497 - Partially or entirely replaying previous game actions
Systems, methods, and computer-readable storage media are described herein for dynamically routing jobs to job service architectures and consolidating data. In an aspect, a job request associated with a user account is received. A migration status of the user account is determined, indicating the user account is migrating from a first job service architecture to a second job service architecture. A determination is made as to whether or not the migration state is enabled. If the migration state is enabled, the job request is routed to the second job service architecture, causing the second job service architecture to schedule a corresponding job. If the migration state is not enabled, the job request is routed to the first job service architecture, causing the first job service architecture to schedule the job. In a further aspect, the job request comprises a script and the job comprises a step to execute the script.
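The routing decision above reduces to a single branch on the migration state. This is a minimal sketch; the account representation and function names are hypothetical.

```python
def route_job(job_request: dict, account: dict) -> tuple:
    """Route the job request to the second architecture only while the
    account's migration state is enabled; otherwise use the first."""
    if account.get("migration_enabled"):
        return ("second_architecture", job_request)
    return ("first_architecture", job_request)

target, _ = route_job({"script": "echo hi"}, {"migration_enabled": True})
print(target)  # second_architecture
```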
G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
H04L 67/63 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions, using the analysis and optimisation of the required network resources, whereby the service request is routed depending on the content or context of the request
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
26.
HARDWARE ACCELERATOR WITH GENERALIZED MATRIX-VECTOR MULTIPLICATION AND POST-PROCESSING CIRCUITS
A hardware accelerator includes a generalized matrix-vector multiplication (GEMV) circuit that, in each streaming iteration, multiplies each matrix element (aij) included in the input matrix row by a corresponding input vector element to obtain an intermediate product row (44). The GEMV circuit adds the intermediate product row to a current-iteration row sum (45). The product vector is equal to the current-iteration row sum computed in a final streaming iteration. The GEMV circuit transmits the product vector as a streaming output to a post-processing circuit (30) included in the hardware accelerator. The post-processing circuit performs a vector processing operation (50) on the product vector to compute a vector processing result (52), and outputs the vector processing result.
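The streaming row-sum accumulation can be illustrated with a software analogue: each iteration multiplies a chunk of matrix-row elements by the matching vector elements and adds the intermediate products into a running row sum. The chunk size and function name are assumptions; the real circuit streams data rather than looping.

```python
def streaming_gemv(matrix, vector, chunk=2):
    """Compute matrix @ vector by accumulating per-chunk intermediate
    products into a current-iteration row sum, one row at a time."""
    n_cols = len(vector)
    result = []
    for row in matrix:
        row_sum = 0.0  # current-iteration row sum
        for start in range(0, n_cols, chunk):
            # Intermediate products for this streaming iteration.
            partial = sum(a * x for a, x in
                          zip(row[start:start + chunk], vector[start:start + chunk]))
            row_sum += partial
        result.append(row_sum)  # final-iteration sum = product vector entry
    return result

print(streaming_gemv([[1, 2, 3, 4]], [1, 0, 1, 0]))  # [4.0]
```

A post-processing step (e.g. an activation function) would then consume this product vector.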
A computer-implemented method can receive an internal representation of a transformer model, an internal representation of a device cluster, and an internal representation of a workload for execution of the transformer model on the device cluster. The method can generate a plurality of candidate execution plans based on the internal representation of the transformer model and the internal representation of the device cluster. Each candidate execution plan represents a unique parallel schedule for partitioning devices in the device cluster for parallel execution of the transformer model. The method can determine an optimal execution plan, including evaluating resource usage of the plurality of candidate execution plans based on the internal representation of the workload, and selecting, among the plurality of candidate execution plans, the optimal execution plan which yields the lowest resource usage. The evaluating includes simulating execution of the transformer model on the device cluster to process the workload.
CONJOINED MEMORY SYSTEMS SUPPORTING DATA STORAGE IN LARGER MEMORY SYSTEM WHEN SMALLER MEMORY SYSTEM IS UNAVAILABLE AND WITH SMALLER MEMORY SYSTEM READ LATENCY, AND RELATED PROCESSOR-BASED SYSTEMS AND METHODS
A conjoined memory system that includes a larger memory system conjoined with a smaller memory system, to support data storage in the larger memory system when the smaller memory system is unavailable, and related methods of performing memory accesses and computer-readable media are disclosed. The conjoined memory system is configured to selectively direct new, incoming memory write requests for incoming data (e.g., incoming data packets to be stored) through a bypass data path to be written to memory entries in the smaller memory system if it is available for data storage (e.g., one or more memory entries are free). Memory access latency and dynamic power expended for such memory accesses are reduced. However, if the smaller memory system is not available for data storage (e.g., its memory entries are full), the conjoined memory system can selectively direct new, incoming memory write requests instead to the larger memory system to be stored in memory entries therein.
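The selective write-direction policy can be sketched as follows; the class and capacity values are invented, and a real implementation would be hardware, not Python.

```python
class ConjoinedMemory:
    """Toy model: prefer the smaller, lower-latency memory when it has a
    free entry; otherwise fall back to the larger memory system."""

    def __init__(self, small_capacity: int):
        self.small, self.large = [], []
        self.small_capacity = small_capacity

    def write(self, data) -> str:
        # Bypass data path: write to the smaller memory if an entry is free.
        if len(self.small) < self.small_capacity:
            self.small.append(data)
            return "small"
        # Smaller memory full: direct the write to the larger memory instead.
        self.large.append(data)
        return "large"

mem = ConjoinedMemory(small_capacity=2)
print([mem.write(p) for p in ("p0", "p1", "p2")])  # ['small', 'small', 'large']
```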
Passive devices may be embedded into a cavity in a package substrate, with electrical contacts of the passive device on a contact surface orthogonal to a surface of the package substrate and extending through the package substrate. The electrical contacts of the passive device may be coupled to vias coupled to a power supply to provide capacitive decoupling. One or more through-hole vias (THVs), which provide current to ICs on the package substrate, may be excluded from the package substrate to accommodate the passive device. Embedding the passive devices in the cavity of the package substrate with the contact surface orthogonal to, rather than parallel to, the surface of the package substrate, reduces an area occupied by the passive device. In this manner, a number of the THVs excluded from the package substrate is reduced, which results in a smaller impact to the resistance of the power supply network.
H01L 23/50 - Arrangements for conducting electric current to or from the solid state body in operation, e.g. leads or terminal arrangements, for integrated circuit devices
H01L 21/48 - Manufacture or treatment of parts, e.g. containers, prior to assembly of the devices, using processes not provided for in a single one of the groups
H01L 23/528 - Layout of the interconnection structure
H01L 23/538 - Arrangements for conducting electric current within the device in operation from one component to another, the interconnection structure between a plurality of semiconductor chips being formed on, or in, insulating substrates
H01L 23/498 - Electrical connections on insulating substrates
30.
METRICS-BASED COMPUTATIONAL METHOD SELECTION FOR THE PREDICTION OF A PHYSICAL PROPERTY
Examples are disclosed that relate to the selection of a method to compute a physical property based upon metrics obtained using a universal machine learning force field. One disclosed example provides a computing system comprising a logic subsystem, and a storage subsystem comprising instructions executable by the logic subsystem. The instructions are executable to obtain one or more metrics computed based upon an energy and a force determined for a material using a universal machine learning force field (MLFF), and based at least upon the one or more metrics, determine to use one of a first computational method or a second computational method to compute a predicted physical property value for the material.
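The selection between the two computational methods can be sketched as a threshold test on an MLFF-derived metric. The metric name, threshold, and decision rule below are assumptions; the abstract does not specify how the metrics drive the choice.

```python
def select_method(uncertainty_metric: float, threshold: float = 0.1) -> str:
    """Choose a computational method from a metric computed using the
    universal machine learning force field (MLFF): use the first (cheaper)
    method when the MLFF appears reliable, else the second (e.g. ab initio)."""
    return "first_method" if uncertainty_metric < threshold else "second_method"

print(select_method(0.02), select_method(0.5))  # first_method second_method
```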
G16C 10/00 - Computational theoretical chemistry, i.e. ICT specially adapted for theoretical aspects of quantum chemistry, molecular mechanics, molecular dynamics or the like
G16C 60/00 - Computational materials science, i.e. ICT specially adapted for investigating the physical or chemical properties of materials or phenomena associated with their design, synthesis, processing, characterisation or utilisation
This disclosure describes a framework for performing user-requested tasks automatically across an interactive interface using various types of machine learning models. Specifically, this disclosure outlines and describes a task execution system that utilizes a generative artificial intelligence (AI) action model and retrieval-augmented generation (RAG) to complete user-requested actions across an interactive interface. The task execution system addresses many of the current limitations of large action models (LAMs) by using a generative AI action model to determine a session plan, which includes a set of actions for accomplishing stages of the actionable task across the interactive interface; obtaining visual context information for each interactive interface segment; integrating RAG results to improve the accuracy of both the session plan and individual actions; and self-correcting when faced with unexpected obstacles.
Enabling efficient hash-based signature verification in processor-based devices is disclosed herein. In one exemplary embodiment, a processor-based device includes a processor device and a hash compute core circuit. The hash compute core circuit receives, from a process executing on the processor device, a digit of a plurality of digits of a message digest, a signature value corresponding to the digit, and an initialized context value. The hash compute core circuit generates a hash chain by, Y times (where Y is an integer value calculated using a value of the digit), updating the context value and performing a hash operation on the signature value. The hash compute core circuit then transmits an ending value of the hash chain to the process, which stores the ending value of the hash chain.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
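The digit-dependent hash chain described in the abstract above can be sketched in a few lines. This is a minimal illustration only; the SHA-256-based construction, the context-update rule, and all function names are assumptions for illustration, not the circuit's actual algorithm:

```python
import hashlib

def hash_chain(signature_value: bytes, digit: int, context: bytes) -> bytes:
    """Iterate a hash 'digit' times over the signature value, folding an
    updated context value into each step (toy construction)."""
    value = signature_value
    for i in range(digit):
        # Update the context value for this chain position...
        context = hashlib.sha256(context + bytes([i])).digest()
        # ...then perform a hash operation on the current chain value.
        value = hashlib.sha256(context + value).digest()
    return value
```

In a scheme of this family, a verifier would compare the ending value of the chain against a stored public value; offloading the inner loop to a dedicated compute core is what the abstract describes.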
33.
ADJUSTING PROBABILITY OF AN END-OF-SENTENCE TOKEN IN A GENERATIVE ARTIFICIAL INTELLIGENCE MODEL
A vision language model ("VLM") generates text captions from video content. Innovations in controlling the complexity of captioning that uses a VLM are described. For example, a training tool updates a training set so that text captions are more concise, then fine-tunes a VLM using the updated training set. Or, as another example, a generative artificial intelligence model such as a VLM dynamically adjusts the probability of an end-of-sentence ("EOS") token so that the probability of the EOS token increases in successive iterations of output token generation, which tends to make generated text captions more concise. Or, as another example, a captioning tool identifies and ranks representative units (such as keyframes) of video, then selectively applies captioning (using a VLM) to representative units of the video based on ranking information. Together or individually, the innovations can improve the computational efficiency and accuracy of captioning that uses a VLM.
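The EOS-probability adjustment described above can be sketched as a growing bias added to the EOS logit before the softmax, so that longer outputs become progressively more likely to terminate. The function name, the linear bias schedule, and the parameter values are illustrative assumptions, not the model's actual mechanism:

```python
import math

def adjust_eos_probability(logits, eos_index, step, bias_per_step=0.5):
    """Add a bias to the end-of-sentence logit that grows with the number
    of tokens generated so far, then renormalize with a softmax."""
    biased = list(logits)
    biased[eos_index] += bias_per_step * step
    # Numerically stable softmax over the biased logits.
    m = max(biased)
    exps = [math.exp(x - m) for x in biased]
    total = sum(exps)
    return [e / total for e in exps]
```

At step 0 the distribution is unchanged; at later steps the EOS token's probability rises, which tends to shorten generated captions as the abstract describes.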
The techniques disclosed herein provide a real-time natural language processing (NLP) system for translating a speech audio input containing multiple natural languages (e.g., English, Mandarin, and French) into a translated audio output in a specific language (e.g., English). In a real-time translation context such as online meetings, feasibility can be dependent on achieving low latency to minimize the perceptible delay between the original speaker and the translated output. As such, the proposed techniques utilize an end-to-end (E2E) model in a translation module that implements automatic speech recognition (ASR) and translation in one machine learning model. In this way, the size of the end-to-end model, often referred to as the model footprint, is significantly smaller than that of a cascaded system that utilizes multiple distinct machine learning models. Consequently, the computing resource consumption of the end-to-end model is likewise reduced in relation to a cascaded system.
Techniques are described herein that are capable of responding to a query in a developer tool using semantically related keywords in relevant code chunks. A user-generated query regarding a location of an element in a codebase of a software development project is received. The codebase is parsed into code chunks. Semantically related keywords, including keywords from the user-generated query and other keywords that are semantically related to them, are identified. Relevant code chunks are selected from the code chunks based on satisfaction of a relevancy criterion regarding the user-generated query. Execution of an instruction is triggered, which causes a visual representation of a response to the user-generated query to be generated. The execution of the instruction causes the visual representation to include at least portions of the relevant code chunks and further causes at least a subset of the semantically related keywords to be highlighted in the portions.
A computer-implemented method for compressed compact data storage and processing within a cloud-based environment is disclosed. In one aspect, the method for processing data signals includes receiving a plurality of data signals corresponding to a user, wherein the plurality of data signals includes a plurality of user raw records at corresponding time values; compressing the plurality of data signals using an incremental compression algorithm to form a single compressed iterative record; organizing the single compressed iterative record into hierarchical segments based on predefined time intervals using a waterfall data model; and storing the single compressed iterative record in a first cloud storage system.
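The two core steps above, incremental compression of a record stream and organizing records into time-interval segments, can be sketched with Python's streaming zlib API. The JSON record format, the bucket keying, and all names are illustrative assumptions; the patent's actual compression algorithm and waterfall model are not specified here:

```python
import json
import zlib

def compress_records(records):
    """Incrementally compress a stream of raw records into a single
    compressed record using zlib's streaming (incremental) API."""
    comp = zlib.compressobj()
    out = b""
    for rec in records:
        # Each raw record is appended to the same compression stream.
        out += comp.compress((json.dumps(rec) + "\n").encode())
    out += comp.flush()
    return out

def bucket_by_interval(records, interval_seconds):
    """Organize records into hierarchical segments keyed by the start of
    their time interval (a simple 'waterfall' layering)."""
    buckets = {}
    for rec in records:
        key = rec["t"] - rec["t"] % interval_seconds
        buckets.setdefault(key, []).append(rec)
    return buckets
```

The streaming API is what makes the compression incremental: new records extend an existing compressed record rather than triggering recompression of the full history.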
A method of analysing a hollow core antiresonant optical fibre having an inner cladding comprising at least one capillary defined by a wall with a wall thickness comprises: directing light onto the fibre for interaction with at least one surface of the capillary wall; detecting a portion of the light which has interacted with the at least one surface of the capillary wall to determine a power level of the detected portion; and using the power level to deduce information regarding the wall thickness.
G01B 11/06 - Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness, for measuring thickness
C03B 37/012 - Manufacture of preforms for drawing fibres or filaments
C03B 37/027 - Fibres composed of different sorts of glass, e.g. optical fibres
Systems and methods are provided for automatic recovery of node resource memory devices. A platform basic input/output system ("BIOS") of a node collects, from a node resource of the node, operational state information for memory components of a memory device, and determines whether at least one memory component is undetected. If so, the platform BIOS sends a notification of the undetected memory component(s) to a controller of the node that relays the notification to a control plane fabric ("CPF") agent in a control plane. The CPF agent automatically determines a potential cause and a potential resolution, including memory device reset, firmware updates, etc. The CPF agent sends commands to the controller that cause the platform BIOS to initiate a recovery process for the plurality of memory components of the memory device, based on the potential resolution.
G06F 11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
39.
CLIFFORD CIRCUIT FORECASTING WITHOUT FORWARD FAULT PROPAGATION
Disclosed are methods for managing execution of plugins of a machine-learning based system. A plugin configuration defines inputs required by the plugin and capabilities provided by the plugin. Capabilities describe the plugin's functionality, such as how the plugin affects the response, what type of content the plugin generates, etc. In some configurations, when responding to a prompt, a collection of relevant plugins is identified. Configurations of these plugins may be analyzed to optimize execution, including determining optimal execution order or enabling parallel execution. Plugin configurations may also be analyzed to improve security by conditionally preventing one plugin from accessing the output of another. Plugin configurations may also be used to inform a client what plugins will run and what results they may yield. This enables the client to optimize and streamline how the response is displayed.
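The plugin-ordering analysis described above can be sketched as a staged dependency resolution: plugins whose declared inputs are already provided by earlier stages run together, enabling parallel execution within a stage. The configuration shape, the plugin names, and the staging strategy are illustrative assumptions, not the disclosed system's actual format:

```python
def plan_execution(plugins):
    """Group plugins into stages: each stage contains plugins whose
    required inputs are already provided by earlier stages, so the
    plugins within a stage can run in parallel.

    `plugins` maps name -> {"inputs": set, "capabilities": set}.
    """
    provided = set()
    remaining = dict(plugins)
    stages = []
    while remaining:
        # A plugin is ready when every input it requires is available.
        ready = [n for n, p in remaining.items() if p["inputs"] <= provided]
        if not ready:
            raise ValueError("unsatisfiable plugin dependencies")
        stages.append(sorted(ready))
        for name in ready:
            provided |= remaining.pop(name)["capabilities"]
    return stages
```

The same input/capability metadata could also drive the security check the abstract mentions, by withholding one plugin's output from another's input set.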
A device for sealing a plurality of hollow core optical fibers of an optical fiber cable, a method for sealing hollow core optical fibers of an optical fiber cable, and a kit for sealing a plurality of hollow core optical fibers of an end of an optical fiber cable are disclosed. In use, the device comprises an endcap, a potting compound contained within the endcap, and an end of an optical fiber cable at least partially contained within the endcap; the plurality of hollow core optical fibers extend into the potting compound.
A method for securely erasing data on a storage drive includes transmitting a communication that initiates an erasure operation on a storage drive and receiving a drive erasure attestation generated in association with the erasure operation and by a root-of-trust of the storage drive. The drive erasure attestation includes a first claim that contains cryptographic evidence of a measured state of the storage drive following the erasure operation. The method further includes verifying the first claim and instructing a ledger service to record the drive erasure attestation in a ledger in response to the verification. Verification of the first claim depends upon confirmation of a match between first measurement values in the first claim and a first set of stored values previously verified as corresponding to a correct implementation of the erasure operation.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
Systems, devices, methods, and computer-readable media for cycle detection in generative agent responses are provided. A method includes receiving, from the generative agent, a candidate completion, the candidate completion including a first response to a message from an entity conducting a conversation with the generative agent, determining, by a semantic extractor, a semantic embedding of the first response, and determining, by a cycle detector and based on the embedding and prior embeddings, whether the first response is a repetition of a prior candidate completion in the conversation.
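The cycle-detection step above reduces to a similarity test between the new response's embedding and the embeddings of prior responses. A minimal sketch using cosine similarity follows; the threshold value and function names are illustrative assumptions, and a real semantic extractor would produce much higher-dimensional embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def is_repetition(embedding, prior_embeddings, threshold=0.95):
    """Flag a candidate completion as a repetition when its semantic
    embedding is nearly identical to any prior response's embedding."""
    return any(cosine(embedding, p) >= threshold for p in prior_embeddings)
```

Detecting the cycle on embeddings rather than raw text lets the detector catch paraphrased repetitions, which is the point of using a semantic extractor.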
A diffusion model is implemented in the analog processing domain. Analog restoration model circuitry is configured to denoise an analog signal (referred to as 'signal restoration' processing). Analog noise injection circuitry coupled to the analog restoration model circuitry receives the denoised signal and injects an amount of noise back into it. The resulting noise-injected signal is fed back to the analog restoration model circuitry for further signal restoration processing, and the resulting signal is again passed to the noise injection circuitry for noise injection. Various mechanisms for implementing the noise injection stage in the analog domain are described. In a first example embodiment, a constant noise signal is applied with a variable scaling factor. In a second example embodiment, a variable noise signal is generated using analog noise generation circuitry.
Incremental verification of a tamper-resistant ledger is disclosed herein. Periodic proofs are generated by periodically verifying the integrity of a tamper-resistant ledger. The periodic proofs enable a verifier to incrementally verify the integrity of the tamper-resistant ledger by verifying the periodic proofs. A periodic proof is generated based on a preceding proof and entries added to the tamper-resistant ledger since the preceding proof. A verifier verifies a periodic proof based on the preceding proof and the entries added to the ledger between the preceding proof and the proof being verified. An action is performed responsive to verifying the integrity of the tamper-resistant ledger.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
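The periodic-proof structure above can be sketched as a hash that commits to the preceding proof plus the entries added since it, so a verifier only needs the previous proof and the new entries rather than the whole ledger. The SHA-256 construction and function names are illustrative assumptions, not the disclosed proof format:

```python
import hashlib

def make_proof(preceding_proof: bytes, new_entries) -> bytes:
    """A periodic proof commits to the preceding proof and every entry
    appended to the ledger since that proof was generated."""
    h = hashlib.sha256(preceding_proof)
    for entry in new_entries:
        h.update(hashlib.sha256(entry).digest())
    return h.digest()

def verify_proof(proof: bytes, preceding_proof: bytes, new_entries) -> bool:
    """Verification recomputes the proof from the (already-verified)
    preceding proof and the entries added in between."""
    return make_proof(preceding_proof, new_entries) == proof
```

Chaining each proof to its predecessor is what makes the verification incremental: trust in the latest proof extends transitively back to the start of the ledger.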
46.
CONFIGURING A TENSOR OPERATION PIPELINE IN A HARDWARE ACCELERATOR
A computing method (200) is provided for configuring a tensor operation pipeline. In one example implementation, the method includes receiving a tensor operation pipeline definition and tensor data from a processor, at a configurable pipeline processing element array of a hardware accelerator (202). The method further includes, in each of a plurality of processing elements of the array, processing the tensor data by implementing a configurable tensor operation pipeline including one or more fixed tensor operation logic units according to the tensor operation pipeline definition (210). The method further includes outputting a tensor operation pipeline result based on the processing of the tensor data by each tensor operation pipeline in each processing element (212).
G06F 9/30 - Arrangements for executing machine instructions, e.g. instruction decode
G06N 3/063 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
47.
ENHANCED CONTROLS FOR THE DISPLAY OF REAL-TIME TEXT IN CALLS AND MEETINGS
The techniques disclosed herein provide enhanced controls for the display of real-time text (RTT) in calls and meetings. RTT is the ability for someone to send a text message on a character-by-character basis to everybody else in a call or meeting. The system disclosed herein integrates RTT, video, and live captions all in one central experience. This integrated experience allows users to participate equitably by making RTT accessible to users regardless of the operating mode they are in and still concurrently access other meeting content, including video streams, chat messages, live captions, transcripts, and artificial intelligence (AI) tools, such as Copilot. In one embodiment, during an online conference, in response to one of the attendees activating a RTT mode, when at least one user minimizes a meeting stage, such as for the purpose of multitasking while listening, the conference application maintains a display area for displaying RTT.
H04L 12/18 - Arrangements for providing special services to substations for broadcast or conferences
H04L 51/046 - Interoperability with other network applications or services
H04L 65/401 - Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time-sensitive sessions, e.g. white board sharing or spawning of a subconference
48.
SYSTEMS AND METHODS FOR SUPPORTING A HIGH THERMAL GRADIENT BETWEEN A QUBIT PLANE AND A CONTROL SYSTEM FOR THE QUBIT PLANE USING A SUPERCONDUCTING RIGID-FLEX CIRCUIT
Systems and methods for supporting a high thermal gradient between a qubit plane and a control system for the qubit plane are described. A system includes a qubit plane associated with a first rigid circuit portion of a superconducting rigid-flex circuit and a control system associated with a second rigid circuit portion of the superconducting rigid-flex circuit. The superconducting rigid-flex circuit includes a flexible circuit portion for interconnecting the first rigid circuit portion with the second rigid circuit portion. The system further includes a first cooling system operable to maintain an operating temperature for the qubit plane and the first rigid circuit portion of the superconducting rigid-flex circuit at or below 100 millikelvin. The system further includes a second cooling system operable to maintain an operating temperature for the control system and the second rigid circuit portion of the superconducting rigid-flex circuit at or below 10 kelvin.
49.
HARDWARE ACCELERATOR WITH CONFIGURABLE TENSOR OPERATION PIPELINE
A hardware accelerator (10) is disclosed that can flexibly be configured to support differing data types and differing operation flows. The hardware accelerator includes a plurality of fixed tensor operation logic units (16) and tensor operation pipeline logic (18) configured to receive from the processor a pipeline command (24) including a software-defined tensor operation pipeline definition (26) defining a plurality of tensor operation stages (30) in a tensor operation pipeline (32) and associated predetermined tensor operations to be performed at each of the defined tensor operation stages. The hardware accelerator is further configured to receive tensor data (28) to be computed by the tensor operation pipeline, and implement the tensor operation pipeline to perform the tensor operations in each of the tensor operation stages on the tensor data, to thereby produce a tensor operation pipeline result (34) for the tensor data, and output the tensor operation pipeline result to the processor.
G06F 9/30 - Arrangements for executing machine instructions, e.g. instruction decode
G06N 3/063 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
The present disclosure provides methods, apparatuses and non-transitory computer-readable medium for prompt optimization. An initial prompt may be obtained, wherein the initial prompt comprises multiple sequential instructions. One or more failed cases associated with the initial prompt may be obtained. One or more target patterns may be determined for the one or more failed cases. One or more prompt revising suggestions may be determined for the one or more target patterns. At least one revised prompt may be generated through adding one or more conditional branches to the multiple sequential instructions according to the one or more prompt revising suggestions.
A computing system (10) including processing circuitry (12) configured to, during a calibration stage (56), perform a sparsity pattern search on a plurality of attention heads (52) included in one or more transformer layers (22) to select a respective sparsity pattern (54) associated with each of the attention heads. During an inferencing stage (58), processing circuitry receives an inferencing input (24). The processing circuitry pre-fills a context (60) based at least in part on the inferencing input. Pre-filling the context includes computing sparse attention scores (64) at each of the attention heads. Computing the sparse attention scores includes masking each of the attention heads using the respective sparsity pattern selected for that attention head during the calibration stage. The processing circuitry computes an inferencing output (48) by performing inferencing starting from the sparse attention scores. The processing circuitry outputs the inferencing output.
Described are techniques for passive user recognition in meeting environments, utilizing advanced biometric data processing. An in-room meeting system with a camera is used to capture meeting participant images. The images are analyzed to detect faces and generate face embeddings—vector representations of faces. These face embeddings are compared against a dynamically generated database of known users, accumulated from previous meetings, to verify participant identities without requiring explicit biometric submissions. This automated process enhances meeting efficiency by streamlining participant verification and improving security.
G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
53.
HARDWARE ACCELERATOR FOR PERFORMING 1-DIMENSIONAL K-MEANS CLUSTERING IN PARALLEL
A hardware accelerator (14) that performs k-means clustering on 1-dimensional inputs by computing a minimum within-cluster sum of squares matrix and a backtracking index (B), and using the backtracking index (B) to identify start and end points for clusters within the 1-dimensional inputs. The within-cluster sum of squares matrix is generated in parallel by differing threads, using a two-row ping-pong buffer in shared memory (SM) of the thread block. The 1-dimensional inputs are read into shared memory (SM) and accessed as the threads compute successive rows of the minimum within-cluster sum of squares matrix. The backtrack index (B) is stored in global memory (GM) and holds index values for the 1-dimensional inputs that minimize the minimum within-cluster sum of squares function at each element in the sum of squares matrix. After identifying the start and end points for the clusters, cluster labels (25) can be generated for each of the 1-dimensional inputs.
G06F 16/28 - Databases characterised by their database models, e.g. relational or object models
G06F 18/23213 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions, with a fixed number of clusters, e.g. K-means clustering
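The underlying dynamic program can be illustrated sequentially: D[c][i] is the minimum within-cluster sum of squares for the first i sorted points split into c clusters, and a backtracking matrix B records where the last cluster starts. This is a simplified serial O(kn²) sketch with assumed names; the accelerator's parallel, ping-pong-buffered variant differs in implementation:

```python
def kmeans_1d(xs, k):
    """Optimal 1-D k-means by dynamic programming over sorted inputs.
    Returns labels (in sorted order) and the (start, end) bounds of
    each cluster as half-open index ranges."""
    xs = sorted(xs)
    n = len(xs)
    # Prefix sums allow O(1) within-cluster sum-of-squares queries.
    ps, pss = [0.0] * (n + 1), [0.0] * (n + 1)
    for i, x in enumerate(xs):
        ps[i + 1] = ps[i] + x
        pss[i + 1] = pss[i] + x * x

    def ssq(j, i):  # sum of squared deviations of xs[j:i] from their mean
        s = ps[i] - ps[j]
        return pss[i] - pss[j] - s * s / (i - j)

    INF = float("inf")
    D = [[INF] * (n + 1) for _ in range(k + 1)]
    B = [[0] * (n + 1) for _ in range(k + 1)]
    D[0][0] = 0.0
    for c in range(1, k + 1):
        for i in range(c, n + 1):
            for j in range(c - 1, i):
                cost = D[c - 1][j] + ssq(j, i)
                if cost < D[c][i]:
                    D[c][i] = cost
                    B[c][i] = j  # last cluster covers xs[j:i]
    # Backtrack the cluster start/end points, then assign labels.
    bounds, i = [], n
    for c in range(k, 0, -1):
        j = B[c][i]
        bounds.append((j, i))
        i = j
    bounds.reverse()
    labels = [None] * n
    for label, (j, i) in enumerate(bounds):
        for p in range(j, i):
            labels[p] = label
    return labels, bounds
```

The accelerator parallelizes the inner minimization across threads and keeps only two matrix rows live (the ping-pong buffer), since each row of D depends only on the previous one.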
54.
CONFIGURABLE DIE-TO-DIE LANE REPAIR IN MULTI-DIE SYSTEMS COUPLED USING LINK MACROS
Systems and methods for configurable die-to-die lane repair in multi-die systems are described. A multi-die system includes a first die and a second die, each of which comprises modular D2D link macros, where each of the modular D2D link macros has M data lanes. A method for configuring die-to-die lane repair includes forming repair groups having D data lanes spanning M data lanes, or fewer than M data lanes, associated with one or more modular D2D link macros, where D is independently configurable for each repair group. The method further includes, for each one of the repair groups designating R redundant lanes from among the D data lanes, where R is a positive integer independently configurable for each repair group, and where a location of each of the designated redundant lanes within a die floor plan associated with a respective repair group is independently configurable.
H01L 21/66 - Testing or measuring during manufacture or treatment
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
55.
ASYNCHRONOUS FUNCTION EXECUTORS UTILIZING WORK UNIT STACKS
Implementations for executing asynchronous functions using a work unit stack executor on a data processing unit are provided. One aspect includes a computing system (100) for executing asynchronous functions using a work unit (WU) stack executor (508), the computing system (100) comprising a data processing unit (104) including a plurality of programmable processing cores (206) configured to execute an asynchronous function by performing a call to the asynchronous function, creating a future (502) corresponding to the asynchronous function, creating a WU stack (504), creating the WU stack executor (508) on the WU stack (504) to execute the future (502) and sending a WU (510) to start the WU stack executor (508).
A network interface controller (NIC) circuit may control data transfers between a network interface and a memory interface circuit. The NIC circuit receives data packets on the network interface and determines whether a packet type of a data packet corresponds to one of a first plurality of operations or a second plurality of operations. For data packets that correspond to one of the first plurality of operations, the NIC circuit controls the memory interface circuit according to the packet type, and for data packets that correspond to one of the second plurality of operations, the NIC circuit sends a notification to a processor circuit in the integrated circuit (IC) to execute software instructions to control the memory interface circuit according to the packet type. The NIC circuit quickly processes data packets corresponding to the first plurality of operations without software involvement but relies on software assistance for the second plurality of operations.
G06F 13/28 - Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access, cycle steal
G06F 13/38 - Information transfer, e.g. on bus
G06F 12/1081 - Address translation for peripheral access to main memory, e.g. direct memory access [DMA]
G06F 15/173 - Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star or tree
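The fast-path/slow-path dispatch described above can be sketched as a classification of packet types into hardware-handled and software-assisted sets. The packet-type names and the callback interface are illustrative assumptions, not the NIC's actual operation sets:

```python
FAST_PATH = {"read_req", "write_req"}   # handled fully by the NIC circuit
SLOW_PATH = {"config", "error_report"}  # deferred to software on the CPU

def dispatch(packet_type, memory_ctrl, notify_cpu):
    """Route a received packet: fast-path operations drive the memory
    interface directly; slow-path operations raise a CPU notification
    so software controls the memory interface instead."""
    if packet_type in FAST_PATH:
        memory_ctrl(packet_type)
        return "hardware"
    elif packet_type in SLOW_PATH:
        notify_cpu(packet_type)
        return "software"
    raise ValueError(f"unknown packet type: {packet_type}")
```

Keeping the common operations in the first set is the latency win the abstract describes; the second set trades speed for the flexibility of software handling.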
57.
MEMORY INTERFACE CIRCUITS INCLUDING ENCRYPT/DECRYPT CIRCUITS TO RE-ENCRYPT ENCRYPTED DATA BLOCKS IN A MEMORY CIRCUIT AND RELATED METHODS
An exemplary memory interface circuit disclosed herein re-encrypts data in an encrypted data block in a memory circuit to further protect the data. In particular, the memory interface circuit reads an encrypted data block from the memory circuit and decrypts the encrypted data block using a first key that was previously used to encrypt the block of data. Then, the memory interface circuit encrypts the data again using a second key before storing the re-encrypted data back into the memory circuit. In some examples, the memory interface circuit includes a re-encryption circuit that includes secure configuration registers to control occasional re-encryption of the encrypted data in an effort to evade detection of the encryption key. In some examples, the time between re-encryptions may be adjusted in response to a frequency of memory accesses to the memory circuit.
G06F 12/14 - Protection against unauthorised use of memory
G06F 21/79 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer, to assure secure storage of data in semiconductor storage media, e.g. directly-addressable memories
G06F 21/85 - Protecting input, output or interconnection devices; interconnection devices, e.g. bus-connected or in-line devices
The described technology provides a device including a phase locked loop (PLL) circuit, the PLL circuit including a voltage controlled oscillator (VCO) and a phase detector, and a voltage supply and a transconductance (Gm) cell configured to drain a current Iout from the VCO based on a sensed voltage (Vsup_sense) input into the Gm cell, wherein the Gm cell is configured to generate an open_loop signal based on the Iout drained from the VCO.
Methods, systems, and computer storage media for providing iterative data processing optimization using an iterative data processing optimization engine in a data intelligence system are described. Iterative data processing refers to handling data where the processing steps are repeated multiple times, across multiple views or modalities, to train machine learning models, filter and score data, or generate output. The iterative data processing optimization engine employs expectation-step machine learning models that are simple but use fast language models to efficiently and effectively probe and analyze data, while iteratively refining maximization-step machine learning models that are optimized and fast, to approximate the probing mechanism of the expectation-step machine learning models more efficiently, for example, using metadata, external information, and compressed representations. The iterative data processing optimization engine can operate based on an agentic framework using lightweight artificial intelligence (AI) agents to perform model fitting, featurization, and report generation autonomously.
Data stored in a memory circuit may be encrypted using client keys that need to be available for high-speed data processing and yet held securely to avoid unauthorized access to the encrypted data. A secure processor circuit in a processor-based system obtains client keys associated with client applications and generates secure key-encryption keys that are used to encrypt the client keys so the client keys can be securely stored in the memory circuit. In some examples, data keys for encrypting data blocks associated with the client application may be generated from the client key, encrypted by a data key-encryption key generated in the secure processor circuit, and stored in the memory circuit. In such examples, because the client keys and data keys are encrypted while in memory, they are safer from software attacks on the memory circuit, which improves the security of the encrypted data blocks.
A computing system (10) including one or more processing devices (14) configured to receive prompt generation instructions (20) that specify an initial prompt (22) and a prompt evaluation criterion (26). In each of a plurality of iterations (35) of a prompt generation loop (30), the one or more processing devices are further configured to generate candidate prompts (38) at least in part at a machine learning model (36). The candidate prompts are generated based on a current-iteration prompt (34) that is initialized as the initial prompt in a first iteration. As specified by the prompt evaluation criterion, the one or more processing devices are further configured to compute respective evaluation scores (40) associated with the candidate prompts. Based on the evaluation scores, the one or more processing devices are further configured to replace the current-iteration prompt. The one or more processing devices are further configured to output a final prompt (42) generated in a final iteration.
G06F 40/131 - Fragmentation of text files, e.g. creating reusable text blocks; Linking to fragments, e.g. using XInclude; Namespaces
G06F 40/16 - Automatic learning of transformation rules, e.g. from examples
G06F 40/216 - Parsing using statistical methods
Methods, systems, and computer storage media for providing a data analysis pipeline using a data analysis pipeline engine in a data intelligence system are described. A data analysis pipeline refers to a structured sequence of data processing steps that support transforming raw data into meaningful insights or actionable outcomes. The data analysis pipeline engine is an unsupervised learning pipeline based on clustering, topic modeling, and Large Language Models (LLMs). For example, the data analysis pipeline can use advanced machine learning techniques to automatically categorize emails into semantically similar clusters, enabling the data intelligence system to quickly identify and prioritize potentially high-risk emails for further investigation. The data analysis pipeline employs AI agents for context-aware graph induction relevance assessment. The AI agents employ induction and deduction loops to build and refine a data feature hypergraph (e.g., vulnerability hypergraph) that encompasses identified relevant data providing a holistic view of a contextual landscape.
Systems and methods are disclosed for clock phase calibration between source logic and a coupled serial data link transmitter, enabling low-latency synchronization into the transmitter. In calibration mode, a phase relationship is monitored between a first clock, driving the source logic, and a second clock, tightly synchronized with a serial clock of the data link. The first clock is adjusted to a first phase at which the first and second clocks are aligned. The first clock phase is set for operation mode based on the first phase. Monitoring uses a D-type flip-flop as a phase detector. Adjustment is in steps of half the serial clock period. Variations are disclosed.
Innovations in machine learning ("ML") models used in adaptive post-processing of decoded video in a conferencing tool are described. For example, as part of post-processing of decoded video, a super-resolution/video restoration model increases spatial resolution (e.g., by interpolation between sample values), mitigates compression artifacts, and mitigates upscaling artifacts introduced when increasing spatial resolution. Or, as another example, as part of post-processing of decoded video, a video restoration model mitigates compression artifacts, without increasing spatial resolution. For adaptive post-processing, a post-processing model can be selectively applied depending on results of scenario detection, results of segmentation, and/or results of video quality analysis. With the innovations, a conferencing tool can in effect provide video at higher quality without significantly increasing the network bandwidth consumed by the video or, alternatively, provide video using less network bandwidth without significantly hurting the quality of the video.
H04N 19/167 - Position dans une image vidéo, p. ex. région d'intérêt [ROI]
H04N 19/17 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c.-à-d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant une zone de l'image, p. ex. un objet
65.
MACHINE LEARNING MODELS FOR ADAPTIVE POST-PROCESSING USING RESULTS OF SCENARIO DETECTION IN CONFERENCING TOOLS
Innovations in machine learning ("ML") models used in adaptive post-processing of decoded video in a conferencing tool are described. For example, as part of post-processing of decoded video, a super-resolution/video restoration model increases spatial resolution (e.g., by interpolation between sample values), mitigates compression artifacts, and mitigates upscaling artifacts introduced when increasing spatial resolution. Or, as another example, as part of post-processing of decoded video, a video restoration model mitigates compression artifacts, without increasing spatial resolution. For adaptive post-processing, a post-processing model can be selectively applied depending on results of scenario detection, results of segmentation, and/or results of video quality analysis. With the innovations, a conferencing tool can in effect provide video at higher quality without significantly increasing the network bandwidth consumed by the video or, alternatively, provide video using less network bandwidth without significantly hurting the quality of the video.
G06V 10/82 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant les réseaux neuronaux
H04N 19/117 - Filtres, p. ex. pour le pré-traitement ou le post-traitement
H04N 19/85 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le pré-traitement ou le post-traitement spécialement adaptés pour la compression vidéo
According to implementations of the present disclosure, a solution for molecular property prediction is provided. In the solution, based on at least one target atom in a molecule, an initial atom cluster centered on the at least one target atom and with a specified radius is determined. An adjustment strategy corresponding to the cross-cluster property is determined based on a cross-cluster property of a cross-cluster atom contained in the initial atom cluster of the at least one target atom. Adjustment on a cross-cluster atom contained in the initial atom cluster of the at least one target atom is performed, based on the adjustment strategy, to obtain a modified atom cluster corresponding to the at least one target atom. A target molecular property of the molecule is determined based on the modified atom cluster corresponding to the at least one target atom.
According to an implementation of the disclosure, a solution for executing a computation for a neural network is provided. According to the solution, a data flow graph for a neural network to be computed is obtained, and the data flow graph indicates at least one operation in the neural network and data respectively associated with the at least one operation; scheduling information for the neural network is determined based on the data flow graph and a processing resource configuration of a target device for computing the neural network, the scheduling information indicates a data transformation required to execute the at least one operation; and a computation for the neural network is executed at the target device based on the scheduling information. The implementation of the disclosure supports executing a computation for a neural network at devices with different configurations, providing versatility.
G06N 3/063 - Réalisation physique, c.-à-d. mise en œuvre matérielle de réseaux neuronaux, de neurones ou de parties de neurone utilisant des moyens électroniques
Aspects of the disclosure include removing a faulty qubit in a quantum circuit. The faulty qubit is determined to be in the quantum circuit, the faulty qubit being associated with a plaquette having other qubits, where adjacent plaquettes are neighboring the plaquette. A route is determined to isolate the plaquette from the adjacent plaquettes. Measurements are caused to be performed on the quantum circuit for the route that isolates the plaquette having the faulty qubit and the other qubits.
G06N 10/20 - Modèles d’informatique quantique, p. ex. circuits quantiques ou ordinateurs quantiques universels
G06N 10/40 - Réalisations ou architectures physiques de processeurs ou de composants quantiques pour la manipulation de qubits, p. ex. couplage ou commande de qubit
G06N 10/70 - Correction, détection ou prévention d’erreur quantique, p. ex. codes de surface ou distillation d’état magique
Systems and methods are disclosed for credit-based flow control of multiple data links using a common reverse channel. The links transfer data from a source to respective buffers at a sink. Credits represent available buffer space. For each data link, a credit counter at the source is decremented as data is transmitted and incremented as the sink returns credits. Reporting logic at the data sink generates a credit report as sink logic retrieves data from the buffer, freeing buffer space. Encoding logic aggregates the credit reports from multiple links for transmission over the common reverse channel to the source, where individual credit reports are extracted and distributed among the links, for update to the respective credit counters. For each link, data transmission pauses when the credit counter decreases to a threshold. Returning multiple links' credits over a single reverse channel saves power. Variations are disclosed.
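The per-link counter behavior and the distribution of an aggregated credit report can be sketched as follows; this is a minimal software model of the scheme, with class and function names of our own invention, not the disclosed hardware.

```python
class CreditLink:
    """Per-link credit counter at the data source (illustrative sketch)."""
    def __init__(self, credits, threshold=0):
        self.credits = credits        # credits represent available buffer space
        self.threshold = threshold

    def can_send(self):
        # Transmission pauses when the counter decreases to the threshold.
        return self.credits > self.threshold

    def send(self):
        assert self.can_send()
        self.credits -= 1             # decremented as data is transmitted

    def credit_return(self, n):
        self.credits += n             # incremented as the sink returns credits


def distribute(links, aggregated_report):
    """Extract per-link credit reports received over the common reverse
    channel and update the respective counters."""
    for link_id, n in aggregated_report.items():
        links[link_id].credit_return(n)


links = {0: CreditLink(2), 1: CreditLink(1)}
links[0].send()
links[0].send()
assert not links[0].can_send()        # paused: counter reached the threshold
distribute(links, {0: 2})             # sink freed buffer space, returned credits
assert links[0].can_send()            # transmission may resume
```

The aggregation step is what lets several links share one reverse channel instead of each carrying its own credit-return wires.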
Systems and methods are disclosed for phase calibration between clock and data lanes at a data link receiver. In calibration mode, matching signals are transmitted over the clock and data lanes, and a phase offset is measured at the receiver. A phase shifter in one signal path is adjusted to a first phase to obtain a desired phase offset. For operation mode, the phase shifter is set based on the first phase. Embodiments measure phase offset using an XOR gate and use a phase interpolator as the phase shifter. Embodiments with multiple data lanes apply coarse calibration to a shared clock lane relative to a first data lane, and similar fine calibration to other data lanes. Calibration provides optimum signal-to-noise ratio or timing margin, enabling high transmission speeds at relatively low power. Variations are disclosed.
H04L 7/00 - Dispositions pour synchroniser le récepteur avec l'émetteur
G06F 1/12 - Synchronisation des différents signaux d'horloge
H03K 19/17 - Circuits logiques, c.-à-d. ayant au moins deux entrées agissant sur une sortieCircuits d'inversion utilisant des éléments spécifiés utilisant des twistors
Systems and methods are disclosed for reduced-power serial data links. A clock-forwarded serial link carries a clock lane and one or more data lanes. Every active serial data cycle is accompanied by its own serial clock edge: a clock delay allows the same clock edge to drive data at a transmitter and latch data at a receiver. Power is saved by idling the serial clock when data is not being transmitted. A valid signal can be omitted, providing a space saving. At the destination, similar clock-forwarding and delay enables a single parallel clock edge to drive data to the boundary of its clock domain, e.g. from a deserializer to a FIFO. The data link exhibits zero-cycle entry and exit. Variations with half- or single-cycle entry or exit are disclosed.
Systems and methods for initializing and calibrating asymmetric die-to-die (D2D) interfaces are described. As an example, during the calibration of a parameter, a calibration finite-state machine (CAL FSM) can perform certain measurements and adjustments. Once a stage of calibration is finished, the CAL FSM can communicate this information to a cluster FSM. The cluster FSM can then communicate to the node FSM the completion status. Once all the clusters have communicated to the node FSM that they have finished the current stage of calibration, the node FSM advances to the next stage of calibration and communicates to the pertinent cluster FSMs to advance, which in turn communicate to the CAL FSMs within the cluster to advance to the next stage of calibration. The clusters that are communicating in one direction are now able to receive the calibration stage information via other clusters that are communicating in the other direction.
H01L 25/065 - Ensembles consistant en une pluralité de dispositifs à semi-conducteurs ou d'autres dispositifs à l'état solide les dispositifs étant tous d'un type prévu dans une seule des sous-classes , , , , ou , p. ex. ensembles de diodes redresseuses les dispositifs n'ayant pas de conteneurs séparés les dispositifs étant d'un type prévu dans le groupe
73.
CREATING VIRTUAL THREE-DIMENSIONAL SPACES USING GENERATIVE MODELS
This document relates to generation of three-dimensional virtual spaces from user-provided two-dimensional input images. For instance, three-dimensional submeshes can be derived from the user-provided two-dimensional input images. Then, the submeshes can be arranged in a submesh layout, with spaces between the submeshes. The spaces can be populated with image content generated by a generative image model, which is then blended with the submeshes, resulting in a final three-dimensional virtual space.
A cloud computing resource system may receive an allocation request to connect the virtual machine to a customer network, wherein the virtual machine is executing while the allocation request is received and the allocation request includes network configuration information of the customer network. A cloud computing resource system may detect a discovery request from the virtual machine triggered by receipt of the allocation request, wherein the virtual machine remains executing during detection of the discovery request. A cloud computing resource system may update, responsive to detecting the discovery request from the virtual machine, a virtual network interface controller of the virtual machine with the network configuration information of the customer network, wherein the virtual machine remains executing during updating of the network configuration information.
G06F 9/50 - Allocation de ressources, p. ex. de l'unité centrale de traitement [UCT]
H04L 61/5014 - Adresses de protocole Internet [IP] en utilisant le protocole de configuration dynamique de l'hôte [DHCP] ou le protocole d'amorçage [BOOTP]
A computing system (10) including memory (12) storing a prompt library (20). The prompt library includes prompt fragments (22) and prompt templates (26). The computing system further includes one or more processing devices (14) configured to, at a prompt compiler (40), receive a prompt generation input (30) including prompt input data (32). At the prompt compiler, based at least in part on the prompt input data, the one or more processing devices are further configured to select a prompt template and one or more of the prompt fragments from the prompt library. The one or more processing devices are further configured to fill the selected prompt template with the prompt input data and the one or more selected prompt fragments to compute a compiled prompt (44). At a first machine learning model (50), the one or more processing devices are further configured to process the compiled prompt and to output the machine learning model output (52).
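The fill step of the prompt compiler can be sketched with standard template substitution. The fragment and template contents below are hypothetical placeholders; only the select-then-fill flow mirrors the abstract.

```python
from string import Template

# Hypothetical prompt library: reusable fragments plus templates with named slots.
PROMPT_FRAGMENTS = {"tone": "Answer concisely.", "cite": "Cite sources."}
PROMPT_TEMPLATES = {"qa": Template("$tone $cite Question: $question")}

def compile_prompt(template_id, fragment_ids, input_data):
    """Fill the selected prompt template with the prompt input data and
    the selected prompt fragments to compute a compiled prompt."""
    fragments = {fid: PROMPT_FRAGMENTS[fid] for fid in fragment_ids}
    return PROMPT_TEMPLATES[template_id].substitute(**fragments, **input_data)

compiled = compile_prompt("qa", ["tone", "cite"],
                          {"question": "What is RF interference?"})
```

The compiled string is what would be handed to the first machine learning model for processing.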
Innovations in machine learning ("ML") models used in adaptive post-processing of decoded video in a conferencing tool are described. For example, as part of post-processing of decoded video, a super-resolution/video restoration model increases spatial resolution (e.g., by interpolation between sample values), mitigates compression artifacts, and mitigates upscaling artifacts introduced when increasing spatial resolution. Or, as another example, as part of post-processing of decoded video, a video restoration model mitigates compression artifacts, without increasing spatial resolution. For adaptive post-processing, a post-processing model can be selectively applied depending on results of scenario detection, results of segmentation, and/or results of video quality analysis. With the innovations, a conferencing tool can in effect provide video at higher quality without significantly increasing the network bandwidth consumed by the video or, alternatively, provide video using less network bandwidth without significantly hurting the quality of the video.
H04N 19/117 - Filtres, p. ex. pour le pré-traitement ou le post-traitement
H04N 19/154 - Qualité visuelle après décodage mesurée ou estimée de façon subjective, p. ex. mesure de la distorsion
H04N 19/86 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le pré-traitement ou le post-traitement spécialement adaptés pour la compression vidéo mettant en œuvre la diminution des artéfacts de codage, p. ex. d'artéfacts de blocs
77.
TRANSPORT LAYER NETWORK RECOVERY FOR PACKET-SWITCHED COMPUTER NETWORKS
A computing system (10) for transport layer network recovery on a packet-switched computer network includes a source computing device (12) with a processor (14) that executes a network traffic communication module (40), a load balancing module (42), and a congestion control module (44). The network traffic communication module (40) provisions a plurality of source ports to transmit outbound packets to a destination computing device (18), each source port being associated with a respective network path. The load balancing module (42) assigns each outbound packet to one of the source ports using a port scheduling algorithm (56) to uniformly distribute the packets among the source ports and associated network paths. The congestion control module (44) detects a congestion control condition for a packet transmitted via a source port associated with a congested network path. The load balancing module (42) assigns a next source port for a next outbound packet from a remainder of the source ports not associated with the congested network path.
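The interplay of the load balancing module and the congestion control module can be sketched as a round-robin scheduler that skips ports whose paths are flagged as congested. This is an illustrative stand-in, not the claimed port scheduling algorithm; all names are ours.

```python
from itertools import cycle

def make_scheduler(ports):
    """Round-robin source-port scheduler that distributes outbound packets
    among ports while skipping ports on congested network paths."""
    congested = set()          # ports flagged by congestion control
    rotation = cycle(ports)

    def next_port():
        for _ in range(len(ports)):
            port = next(rotation)
            if port not in congested:
                return port
        raise RuntimeError("all network paths congested")

    return next_port, congested

next_port, congested = make_scheduler([5001, 5002, 5003])
congested.add(5002)                         # congestion detected on this path
sequence = [next_port() for _ in range(4)]  # assigned from the remainder
```

Packets keep flowing uniformly over the non-congested ports until the flagged path recovers.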
78.
SYSTEM AND METHOD FOR REAL-TIME OPTIMIZATION OF RETRIEVAL AUGMENTED GENERATION (RAG) HYPERPARAMETERS
A method, computer program product, and computing system for processing a query provided to a generative AI model. A content portion retrieved by a Retrieval Augmented Generation system for the query is processed. User context information associated with a user providing the query is determined. Hyperparameters are generated for processing a prompt with the generative AI model by processing the query, the content portion, and the user context information using run-time surrogate model inversion optimization.
A computing system and method for data augmentation. A method includes receiving first data examples, each in a particular category associated with a label; receiving a definition for each label, resulting in label definitions, the label definitions describing subject matter of the associated category; generating second data examples from the first data examples, each in one of the particular categories; generating third data examples from the first data examples based on a data repository; merging the second and third data examples to form a corpus; and training a language model with the corpus.
Systems and methods are provided for generating an instruction code for determining an adaptive bitrate for transmitting a stream of data over a network. The adaptive bitrate processing includes iteratively determining a bitrate based on dynamically changing network and data streaming states. In particular, the present technology is directed to generating candidate instruction codes for the adaptive bitrate determination by a large language model. A prompt comprises a description of the adaptive bitrate and an instruction to generate the candidate instruction code for determining adaptive bitrates according to reinforcement learning. Given the candidate codes, the system validates the candidate codes for compilation and data normalization. The system further evaluates and selects an instruction code with a top performance based on reward convergence of reinforcement learning for deployment. Training of the adaptive bitrate instruction code according to the reinforcement learning is based on, but not limited to, an actor-critic network model.
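The validate-then-select stage can be sketched with Python's built-in `compile` as the compilation check and a toy reward function standing in for the reward-convergence evaluation; the candidate strings and reward below are purely illustrative.

```python
def validate_candidates(candidates):
    """Keep only candidate instruction codes that compile.
    (The disclosure also validates data normalization; omitted here.)"""
    valid = []
    for src in candidates:
        try:
            compile(src, "<candidate>", "exec")
            valid.append(src)
        except SyntaxError:
            pass                      # discard candidates that fail to compile
    return valid

def select_best(valid, reward_fn):
    """Pick the candidate with top performance, standing in for the
    reward-convergence evaluation of the reinforcement-learning loop."""
    return max(valid, key=reward_fn)

candidates = ["bitrate = min(bw, buf * 2)", "bitrate = = broken"]
valid = validate_candidates(candidates)
best = select_best(valid, lambda src: len(src))   # toy reward: code length
```

In the disclosed system the reward would come from reinforcement-learning training runs rather than a static function of the source text.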
Ungrouping and grouping of system busses using link macros capable of joining and splitting is described. A method for communication between a first die and a second die in a multi-die system, where the first die comprises a set of D2D transmit link macros, and where each of the set of D2D link transmit macros has a same bandwidth capacity per transmit link macro, includes ungrouping data received from any of the system busses into a first group of data and a second group of data when a bandwidth of a respective system bus exceeds the bandwidth capacity per transmit link macro (2110). The method further includes using a first D2D transmit link macro, transmitting the first group of data to the second die (2130). The method further includes joining the second group of data with a third group of data for transmission using a shared transmit link macro (2140).
Systems and methods for state clearing of internal memory and flip-flops of a system using built-in test and scan circuitry are provided. A method includes a security processor: (1) fetching memory built-in self-test (MBIST) data and providing the MBIST data to an internal field-test (IFT) controller such that the IFT controller can drive the MBIST data via an IJTAG network to respective MBIST controllers, and (2) fetching scan data and providing the scan data to the IFT controller such that the IFT controller can drive the scan data via a scan network to respective embedded deterministic test (EDT) controllers. The method further includes each of the respective MBIST controllers state clearing respective internal memories by writing pertinent MBIST data into the respective internal memories. The method further includes each of the respective EDT controllers state clearing respective flip-flops by writing pertinent scan data into the respective flip-flops.
G11C 7/24 - Circuits de protection ou de sécurité pour cellules de mémoire, p. ex. dispositions pour empêcher la lecture ou l'écriture par inadvertanceCellules d'étatCellules de test
Systems and methods for implementing power oversubscription in graphic processing unit (GPU) servers are provided. An increase to a quantity of servers allocated to a group of GPU servers in an inference cluster is applied. Based on the power consumption of the group of GPU servers exceeding a first threshold, a frequency of low priority inference workloads is capped, and based on the power consumption of the group of GPU servers exceeding a second threshold, the frequency of the low priority inference workloads is capped and a frequency of high priority inference workloads is capped, enabling an increase in allocated server capacity in the existing inference clusters while maintaining service level objectives (SLOs).
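The two-threshold capping policy can be sketched directly; the threshold values and the dictionary-based return shape are illustrative assumptions, not the disclosed implementation.

```python
def apply_caps(power_w, t1_w, t2_w):
    """Return which workload classes get frequency-capped for a given
    group power draw (thresholds satisfy t1_w < t2_w)."""
    if power_w > t2_w:
        # Above the second threshold: cap both priority classes.
        return {"low_priority": True, "high_priority": True}
    if power_w > t1_w:
        # Above the first threshold only: cap low-priority workloads.
        return {"low_priority": True, "high_priority": False}
    return {"low_priority": False, "high_priority": False}

caps_normal = apply_caps(900, t1_w=1000, t2_w=1200)
caps_warn = apply_caps(1100, t1_w=1000, t2_w=1200)
caps_limit = apply_caps(1300, t1_w=1000, t2_w=1200)
```

Capping low-priority work first is what lets the cluster absorb extra servers while high-priority SLOs are still met at moderate power levels.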
The virtually divided input trackpad disclosed herein is designed to provide enhanced ergonomic user interaction with digital interfaces. The virtually divided input trackpad comprises two virtually separated functional areas, each of which may be dedicated to distinct functionalities such as object movement and object rotation. This arrangement allows for simultaneous two-handed operation, offering users an intuitive, efficient, and accessible way of controlling digital environments, which is especially beneficial for users with specific accessibility needs.
G06F 3/0354 - Dispositifs de pointage déplacés ou positionnés par l'utilisateurLeurs accessoires avec détection des mouvements relatifs en deux dimensions [2D] entre le dispositif de pointage ou une partie agissante dudit dispositif, et un plan ou une surface, p. ex. souris 2D, boules traçantes, crayons ou palets
G06F 3/04815 - Interaction s’effectuant dans un environnement basé sur des métaphores ou des objets avec un affichage tridimensionnel, p. ex. modification du point de vue de l’utilisateur par rapport à l’environnement ou l’objet
A computing system with hardware acceleration for execution of generative models is provided. The computing system comprises a processor and memory storing instructions that, when executed by the processor, cause the processor to execute a generative model. An accelerator module performs compute operations during execution of the generative model. Prior to execution of the generative model, the accelerator module determines a maximum and minimum value for a functional computation to be performed during execution of the generative model. The accelerator module modifies possible inputs into the functional computation to reduce the size of an input value by N bits. The accelerator module performs the functional computation based upon the modified input value, the minimum value, and the maximum value. During execution of the generative model, the accelerator module obtains a value for the functional computation to be used during generation of output of the generative model.
Various embodiments discussed herein relate to external document tagging and query augmentation for language model generation. Some embodiments assign a label for at least a portion of an accessed document. Based at least in part on assigning the label for at least the portion of the document, some embodiments then generate a tag indicating whether more information is needed to respond to the query. This process of tagging iteratively repeats for each chunk and/or document until all relevant information is accessed, after which a language model consolidates all the relevant chunks and/or documents to respond to the query. In other words, various embodiments efficiently generate new queries for subsequent retrieval rounds, aiming to retrieve information beyond the scope of the initial query. Once enough information is received for responding to the initial query, various embodiments pass all the relevant information to the language model to get a final response.
The present disclosure proposes a method, apparatus and computer program product for text classification. A text may be received. The text may be encoded as an initial vector. A hidden layer vector of the text may be generated based on the initial vector. A text category of the text may be identified based on the hidden layer vector. A relevant chunk in the text may be identified based on the hidden layer vector, the relevant chunk including one or more consecutive tokens relevant to the text category. Furthermore, the embodiments of the present disclosure also propose a text classification system, comprising a feature extraction module, a hidden layer, a sequence classifier, and a token classifier.
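The two heads described above, a sequence classifier over the hidden-layer vector and a token classifier that marks a run of consecutive relevant tokens, can be sketched with toy linear scores. All weights, thresholds, and names here are illustrative, not the disclosed model.

```python
def classify_with_chunk(hidden, category_weights, token_scores, threshold=0.5):
    """Pick the text category from a hidden-layer vector (sequence
    classifier) and the longest run of consecutive tokens whose
    relevance score clears a threshold (token classifier)."""
    # Sequence classifier: category with the highest linear score.
    scores = {c: sum(h * w for h, w in zip(hidden, ws))
              for c, ws in category_weights.items()}
    category = max(scores, key=scores.get)

    # Token classifier: longest run of consecutive above-threshold tokens.
    best, run, start = (0, 0), 0, 0
    for i, s in enumerate(token_scores + [0.0]):   # sentinel ends final run
        if s >= threshold:
            if run == 0:
                start = i
            run += 1
        else:
            if run > best[0]:
                best = (run, start)
            run = 0
    chunk = list(range(best[1], best[1] + best[0])) if best[0] else []
    return category, chunk

category, chunk = classify_with_chunk(
    hidden=[0.9, 0.1],
    category_weights={"pos": [1.0, 0.0], "neg": [0.0, 1.0]},
    token_scores=[0.1, 0.8, 0.9, 0.2],
)
```

The returned `chunk` holds the indices of the consecutive tokens deemed relevant to the predicted category.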
A hardware accelerator (10) including input memory (13) that receives first and second input matrices (20, 24). The hardware accelerator further includes processing circuitry (11) including one or more tiles (12) that each include a respective tensor processor (48) configured to receive a first and second input block (22, 26) of the first and second input matrices. Each tile receives a first block scale factor (28) associated with rows (28) of the first input block and a second block scale factor (29) associated with columns (66) of the second input block. Each tile multiplies the first input block by the second input block, applies the first block scale factor to rows of the result block (81), and applies the second block scale factor to columns of the result block to obtain a scaled result block (30). The processing circuitry further includes an accumulator (32) that accumulates scaled result blocks to obtain a scaled result matrix (34), and output memory (14) that receives and outputs the scaled result matrix.
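The tile datapath, multiply a block pair, scale result rows and columns, then accumulate, can be sketched in plain Python on small lists. This models the arithmetic only, not the hardware; function names are ours.

```python
def scaled_block_matmul(a, b, row_scales, col_scales):
    """Multiply one block pair, then apply the first block's scale
    factors to result rows and the second block's to result columns."""
    n, k, m = len(a), len(b), len(b[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            acc = sum(a[i][t] * b[t][j] for t in range(k))
            out[i][j] = acc * row_scales[i] * col_scales[j]
    return out

def accumulate(total, block):
    """Accumulator: sum scaled result blocks into the scaled result matrix."""
    for i, row in enumerate(block):
        for j, v in enumerate(row):
            total[i][j] += v
    return total

block = scaled_block_matmul(
    a=[[1.0, 2.0], [3.0, 4.0]],
    b=[[1.0, 0.0], [0.0, 1.0]],   # identity, so the product equals a
    row_scales=[2.0, 1.0],
    col_scales=[1.0, 3.0],
)
result = accumulate([[0.0, 0.0], [0.0, 0.0]], block)
```

Per-row and per-column scale factors like these are typical of block-scaled low-precision formats, where each block carries its own shared exponent.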
Systems and methods for bidirectional communication between a first die and a second die using a shared route are described. The method includes, during a first phase of operation, allowing bidirectional communication between the first die and the second die using the shared route. The method further includes, during a second phase of operation: (1) pausing bidirectional communication between the first die and the second die using the shared route, (2) parking the first transmit driver by coupling an input terminal of the first transmit driver to a voltage level, and (3) parking the second transmit driver by coupling an input terminal of the second transmit driver to the same voltage level, where the voltage level is one of a voltage supply level or a ground level. Additional systems and methods for clock gating of signals that make the bidirectional communication even more efficient are also described.
Examples of the present disclosure describe systems and methods for sensory and response modeling in OWT systems. In examples, a payload is received by a sensory machine learning (ML) model implemented within an OWT system. The sensory ML model outputs an indication associated with data within the payload, such as whether the data belongs to one or more object classes or is indicative of anomalous activity. The output of the sensory ML model is provided to a response ML model implemented within the OWT system. The response ML model outputs a determination associated with the payload, such as whether the payload is permitted to egress across a data boundary of the OWT system or the manner in which data in the payload can be used in the one or more computing environments. The payload is then processed in accordance with the determination.
Multi-die systems with modular die-to-die link macros for enabling die-to-die communication are described. A multi-die system (800) includes a first die (810) comprising a first set of modular die-to-die (D2D) transmit link macros and a first set of modular D2D receive link macros (814). The multi-die system (800) further includes a second die (850), coupled to the first die (810) via die-to-die (D2D) links (830), comprising a second set of modular D2D transmit link macros and a second set of modular D2D receive link macros (854). Each of the transmit/receive macros has the same physical shape, size, and bandwidth capacity. The modularity associated with respective modular D2D transmit link macros and respective modular D2D receive link macros allows different combinations of an amount of bandwidth for data being transmitted or received via the D2D links and different amounts of edge depths for the first die and the second die along the die edge.
G06F 13/42 - Protocole de transfert pour bus, p. ex. liaisonSynchronisation
G06F 15/78 - Architectures de calculateurs universels à programmes enregistrés comprenant une seule unité centrale
G11C 7/10 - Dispositions d'interface d'entrée/sortie [E/S, I/O] de données, p. ex. circuits de commande E/S de données, mémoires tampon de données E/S
H04L 47/726 - Réservations de ressources sur plusieurs routes utilisées simultanément
H01L 25/065 - Ensembles consistant en une pluralité de dispositifs à semi-conducteurs ou d'autres dispositifs à l'état solide les dispositifs étant tous d'un type prévu dans une seule des sous-classes , , , , ou , p. ex. ensembles de diodes redresseuses les dispositifs n'ayant pas de conteneurs séparés les dispositifs étant d'un type prévu dans le groupe
Various features pertaining to a computer-executable agent are described herein, where the computer-executable agent is configured to complete a multi-step task requested by a user. Several machine learning models, optionally distributed between a server computing system and a client computing device, are utilized to complete the task. The machine learning models generate a high-level plan that describes steps that are to be performed to complete the multi-step task, and further generate low-level plans that describe, for each step, a sequence of actions to be performed by the computer-executable agent to complete the step.
Systems, methods, and computer readable storage media described herein provide key refreshment utilizing tamper-resistant public key commitment. In an aspect, a ledger manager stores a set of public keys associated with a user account in a ledger database. The ledger manager assigns a first public key of the set of public keys as an active key. Responsive to receiving an update key request from a computing device on behalf of the user account, the ledger manager updates the active key to a second public key of the set of public keys. Responsive to receiving an active key request from another computing device on behalf of another user account, the ledger manager causes data accessible to the other computing device to be encrypted using the second public key. In an aspect, the ledger manager utilizes the second public key to encrypt second data accessible to the other computing device.
H04L 9/32 - Dispositions pour les communications secrètes ou protégéesProtocoles réseaux de sécurité comprenant des moyens pour vérifier l'identité ou l'autorisation d'un utilisateur du système
H04L 9/00 - Dispositions pour les communications secrètes ou protégéesProtocoles réseaux de sécurité
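The ledger manager's key bookkeeping can be sketched as follows: keys are committed up front and only a committed key may become active, which is the property that makes the commitment tamper-resistant in spirit. The class and key strings are illustrative stand-ins.

```python
class KeyLedger:
    """Sketch of a ledger that stores each account's committed public
    keys and tracks which one is currently active."""
    def __init__(self):
        self.keys = {}      # account -> list of committed public keys
        self.active = {}    # account -> index of the active key

    def register(self, account, public_keys):
        self.keys[account] = list(public_keys)
        self.active[account] = 0          # first key starts as the active key

    def update_key(self, account, new_index):
        # Only keys already committed to the ledger may become active.
        if new_index >= len(self.keys[account]):
            raise ValueError("key was never committed to the ledger")
        self.active[account] = new_index

    def active_key(self, account):
        # Other parties fetch this key to encrypt data for the account.
        return self.keys[account][self.active[account]]

ledger = KeyLedger()
ledger.register("alice", ["pk1", "pk2"])
ledger.update_key("alice", 1)             # refresh to the second public key
```

A party encrypting data for `alice` after the refresh would query `active_key` and use the second key.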
According to implementations of the disclosure, a metasurface-based wireless imaging solution is provided. In the solution, a metasurface is arranged in a signal propagation path from a transmitter to a receiver via a target object; that is, a wireless signal for imaging sensing is enabled to interact with the metasurface. An image of the target object is generated using the received signal. In addition, in implementations of the disclosure, a design of end-to-end optimization for a wireless imaging system is proposed. In implementations of the present disclosure, powerful wavefront shaping of the metasurface is used for wireless imaging. In this way, a miniaturized wireless imaging solution can be achieved without the need for mobile device or object movement and with reduced dependence on the number of transceivers.
Systems, devices, methods, and computer-readable media for guided conversations with a generative artificial intelligence (AI) agent to complete an artifact are provided. A method includes generating a prompt, the prompt including a context, the artifact to be completed during a guided conversation, and rules to be followed in conducting the guided conversation, providing the prompt to the generative AI agent, receiving a response to a message from the generative AI agent, receiving a first function call to update a field of the artifact, the function call including a field and a value, determining, based on the value and a schema of the artifact, a result indicating whether the update is valid or invalid, providing, to the generative AI agent, the result, and receiving, from the generative AI agent, the artifact after the conversation is completed.
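The validate-and-apply step for the agent's field-update function call can be sketched against a simple type schema. The schema contents and result strings are hypothetical; only the call-validate-report flow follows the abstract.

```python
SCHEMA = {"name": str, "attendees": int}   # illustrative artifact schema

def handle_update(artifact, field, value):
    """Validate a function call from the generative AI agent against the
    artifact schema and return a result the agent can act on."""
    if field not in SCHEMA:
        return "invalid: unknown field"
    if not isinstance(value, SCHEMA[field]):
        return f"invalid: {field} expects {SCHEMA[field].__name__}"
    artifact[field] = value                # apply only valid updates
    return "valid"

artifact = {}
r1 = handle_update(artifact, "attendees", "ten")   # wrong type: rejected
r2 = handle_update(artifact, "attendees", 10)      # accepted and applied
```

Feeding the `invalid` result back to the agent is what lets the guided conversation recover, by prompting the agent to retry the update with a conforming value.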
A computing system (10A, 10B) is provided, comprising processing circuitry (14) and associated memory (16). The processing circuitry (14) is configured to receive a prompt (28) including a message (30) as natural language input from an interaction interface (26), extract an intent of the message (30), and select a domain-specific language (DSL) domain (40) corresponding to the intent of the message (30). The processing circuitry (14) then generates a DSL plan (44) encoded in a DSL based on the message (30) and the selected DSL domain (40), generates code (48) based on the message (30) and the generated DSL plan (44), executes the code (48) in a code execution environment to generate content (72) corresponding to the message (30) and the selected DSL domain (40), and outputs the generated content (72).
The present disclosure proposes a method, apparatus, and computer program product for time analysis in a video conference. A total duration of the video conference may be obtained. A conference description of the video conference and/or a document associated with the video conference may be obtained. A plurality of sessions included in the video conference and a planned duration of each session may be determined based on at least one of the total duration, the conference description, and the document. For each session, it may be detected that the video conference has proceeded to the session. In response to detecting that the video conference has proceeded to the session, a timer corresponding to the session may be started, the timer displaying a remaining duration associated with the planned duration of the session. In response to the remaining duration falling below a predetermined threshold, an indication may be output.
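The per-session countdown and threshold indication can be modeled in a few lines. This sketch assumes the planned durations have already been derived (from the total duration, description, and documents); the function names and the 60-second default threshold are illustrative assumptions.

```python
def remaining(planned_seconds, elapsed_seconds):
    # Remaining duration shown by the timer for the current session;
    # never negative, even if the session overruns its plan.
    return max(planned_seconds - elapsed_seconds, 0)


def should_indicate(planned_seconds, elapsed_seconds, threshold_seconds=60):
    # Output an indication once the remaining duration drops below the
    # predetermined threshold.
    return remaining(planned_seconds, elapsed_seconds) < threshold_seconds
```

For a 10-minute session, no indication is shown at the 200-second mark, but one is shown once fewer than 60 seconds remain.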
Methods, systems, and machine-readable media are provided that enhance authentication of a transaction of a first application by checking that a state of one or more other applications matches prespecified states, or that one or more of those applications transition between states within a prespecified period of time. For example, the prespecified states may correspond to the application being installed on a specified device (e.g., of the user) and having an authenticated session with a specified user.
G06Q 20/36 - Payment architectures, schemes or protocols characterised by the use of specific devices using electronic wallets or electronic money safes
G06Q 20/40 - Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. checking credit lines or negative lists
Systems and methods herein provide a photospread engine and its related functions. In an example, a method includes identifying, by a photospread engine, a plurality of images for a photospread and determining image areas based on the plurality of images. Each of the image areas may correspond to a respective image. The photospread engine may also determine a center point for each image area and then minimize a loss function for the image areas. The loss function may correspond to an overlap loss for an overlap area between the image areas and a spreading loss for a sum of distances between the center point of each image area and a centroid of the image areas. The photospread engine may generate a photospread including the images on a canvas based on minimizing the loss function for the image areas.
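A toy version of the loss described above, for axis-aligned rectangular image areas, might look like the following. The equal weighting of the two terms and the exact formulation are assumptions for illustration, not the disclosure's formulation.

```python
def overlap_area(a, b):
    # Rectangles given as (x, y, w, h); returns their intersection area.
    ox = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    oy = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    return ox * oy


def center(r):
    return (r[0] + r[2] / 2, r[1] + r[3] / 2)


def photospread_loss(rects):
    # Overlap loss: total pairwise intersection area, penalising images
    # that cover one another.
    overlap = sum(overlap_area(rects[i], rects[j])
                  for i in range(len(rects))
                  for j in range(i + 1, len(rects)))
    # Spreading loss: sum of distances from each image centre to the
    # centroid of all centres, penalising images that drift apart.
    centers = [center(r) for r in rects]
    cx = sum(c[0] for c in centers) / len(centers)
    cy = sum(c[1] for c in centers) / len(centers)
    spread = sum(((c[0] - cx) ** 2 + (c[1] - cy) ** 2) ** 0.5
                 for c in centers)
    return overlap + spread
```

Minimizing this trades off the two terms: pushing images apart reduces the overlap term but grows the spreading term, so the optimum packs images tightly without overlap.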
A data processing system implements receiving a call requesting a generative model to generate a semantic tree for a source content; constructing a first prompt including the source content and instructions for the model to analyze a semantic structure of the source content, to generate a semantic outline and content chunks of the source content, the semantic outline including one or more topics each connected with one or more of the content chunks, to compute one summary for each of the content chunks, to apply indices referencing each topic node of the semantic tree to one of the topics, and to apply indices referencing each leaf node of the semantic tree to one of the content chunks and its respective summary; providing the first prompt to the model and receiving the semantic tree of the source content; and storing the semantic tree in a database.
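The indexed structure the model is asked to return can be sketched as a plain data shape: topic nodes carry an index into the topic list, and leaf nodes carry an index into the chunk list plus that chunk's summary. The function name and dictionary keys are illustrative assumptions.

```python
def build_semantic_tree(topics, chunks, summaries, topic_to_chunks):
    """Assemble a semantic tree from indexed topics and content chunks.

    topic_to_chunks maps each topic index to the list of chunk indices
    that topic is connected with.
    """
    return [
        {
            # Topic node: an index referencing one of the topics.
            "topic_index": t,
            # Leaf nodes: each references a content chunk and its summary.
            "leaves": [
                {"chunk_index": c, "summary": summaries[c]}
                for c in topic_to_chunks[t]
            ],
        }
        for t in range(len(topics))
    ]
```

Storing indices rather than full chunk text keeps the tree compact; the chunks and summaries can be resolved from the database on demand.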