Apparatus and methods for prime field modular reduction are described. As an example, a custom modular reduction digital circuit for reducing an n-bit integer based on a modulus, where the modulus comprises a k-bit integer for use with a cryptographic algorithm, is described. The custom modular reduction digital circuit includes a first circuit to generate at least two partial results by processing: (1) k lower order significant bits of the n-bit integer and (2) at least a subset of bits for congruent representations corresponding to any n-k higher order bits of the n-bit integer that are higher in significance than the most significant bit of the k-bit integer. The custom modular reduction digital circuit further includes a second circuit to process the at least two partial results, output by the first circuit, to generate a reduced version of the n-bit integer for use with the cryptographic algorithm.
H04L 9/30 - Public key, i.e. the encryption algorithm being computationally infeasible to invert and the users' encryption keys not requiring secrecy
Example solutions for processing LLM prompts include creating a first large language model (LLM) prompt based on an input LLM prompt. The first LLM prompt represents a first step toward generating a solution to the input LLM prompt. The first LLM prompt is submitted to an LLM as a first sub-query, thereby resulting in the generation of a first LLM output. A second LLM prompt is generated based on the input LLM prompt. The second LLM prompt represents a second step toward generating the solution. The second LLM prompt includes the first LLM output. The second LLM prompt is submitted to the LLM as a second sub-query, thereby resulting in the generation of a second LLM output. The second LLM output represents the solution to the input LLM prompt.
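A minimal Python sketch of the two-step sub-query chaining described above; `call_llm` is a hypothetical stand-in for a real model endpoint, and the prompt wording is illustrative only.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call; echoes its prompt.
    return f"output-for[{prompt}]"

def answer(input_prompt: str) -> str:
    # First sub-query: one step toward the solution.
    first_prompt = f"Step 1 toward solving: {input_prompt}"
    first_output = call_llm(first_prompt)
    # Second sub-query embeds the first LLM output, per the abstract.
    second_prompt = (f"Step 2 toward solving: {input_prompt}\n"
                     f"Intermediate result: {first_output}")
    return call_llm(second_prompt)  # represents the solution
```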
Methods, systems, and computer storage media for providing generative artificial intelligence (AI) output validation using a generative AI output validation engine in an artificial intelligence system. The generative AI output validation engine assesses and determines the quality (e.g., quantified as an output validation score) of generative AI output (e.g., LLM output). In operation, a generative AI output comprising summary data is accessed. The raw data from which the summary data was generated is accessed. A plurality of output validation operations associated with a generative AI output validation engine are executed. The generative AI output validation engine comprises multi-categorical analytical models that provide corresponding output validation operations for quantifying the quality of generative AI outputs. Using the generative AI output validation engine, an output validation score is generated for the summary data, and the output validation score is communicated. A feedback loop is established to incorporate human feedback for fine-tuning the generative AI output validation engine models.
A device and method for robotic process automation (RPA) using speech recognition that receives a voice input; invokes, using the received voice input, an RPA workflow, the RPA workflow comprising a sequence of tasks; based at least on the invoked RPA workflow, retrieves an argument from a cloud device; modifies, with the retrieved argument, at least one task of the sequence of tasks; and executes the modified at least one task as part of the RPA workflow.
Examples are disclosed that relate to fans configured to automatically adjust for imbalances in mass. One example provides a self-balancing fan, comprising a hub comprising a plurality of blade interfaces, and a plurality of blade structures each attached to a corresponding blade interface of the hub, each blade interface comprising a tapered notch in the hub and being configured to increase a balancing force exerted by the hub against the blade structure as a function of increasing distance of the blade structure from the hub.
A phase-interpolator (PI) circuit that generates an interpolated clock to capture data in a capture circuit at a target phase, in a phase range between two reference clocks, based on an interpolation code within a range of interpolation codes, is described. A clamping circuit coupled to the PI circuit provides an interpolation code within a reduced range, where the integral non-linearity (INL) of the interpolated clocks is below a threshold, such that data capture based on the interpolated clock has a lower bit error rate (BER). As a result, the interpolated clock is generated within a reduced phase range corresponding to the reduced range of interpolation codes. When a target phase for an interpolated clock is outside the reduced phase range, the clamping circuit may adjust the target phase clock relative to a reference clock to bring the target phase within the reduced phase range for improved BER.
H04L 7/02 - Speed or phase control by means of the received code signals, the signals containing no special synchronisation information
7.
OLIGONUCLEOTIDE ASSEMBLY USING pH-BASED ELECTRODE CONTROLLED HYBRIDIZATION
Electrode controlled hybridization is used to change local pH and selectively assemble oligonucleotide complexes on the surface of a microelectrode array. The oligonucleotide complexes have sticky ends that provide locations for subsequent oligonucleotide complexes to hybridize. The order in which specific oligonucleotide complexes are joined together encodes information. Controlled activation of individual electrodes in the microelectrode array creates negative voltages that reduce a buffer solution and raise the pH in proximity to the electrodes. At higher pH levels, double-stranded oligonucleotides de-hybridize. Nicks between oligonucleotide complexes and oligonucleotides anchored to the microelectrode array are closed, creating covalent attachments. De-hybridized single-stranded oligonucleotides are removed, leaving only the oligonucleotides connected to the microelectrode array. Thus, during a given round of synthesis, oligonucleotide complexes are added only to the locations on the microelectrode array where the electrodes are not activated.
A method for annotating images to create a corpus for training a multi-task computer vision machine learning model is presented. The method comprises receiving, at one or more annotation specialist models, a plurality of images to be annotated. Via operation of the one or more annotation specialist models, pre-filtered annotations are generated for the plurality of images. Via operation of a data filtering and enhancement module, the pre-filtered annotations are filtered in accordance with predefined noise criteria so as to output candidate annotations for the plurality of images. The method further comprises, for each of one or more candidate annotations, selectively (1) storing the candidate annotation into the corpus as a final annotation for its associated image, or (2) adding the candidate annotation to its associated image using the one or more annotation specialist models and the data filtering and enhancement module for subsequent iterative annotation and filtering.
G06V 10/774 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning; processing of image or video features in feature spaces, e.g. using data integration or reduction such as principal component analysis [PCA], independent component analysis [ICA] or self-organising maps [SOM], or blind source separation; generation of sets of training patterns; bootstrap methods, e.g. bagging or boosting
G06F 40/284 - Lexical analysis, e.g. tokenisation or co-occurrence
G06V 20/40 - Scenes; scene-specific elements in video content
In non-limiting examples of the present disclosure, systems and methods are described that relate to providing, in a browser environment, a sidebar search capability to users. Once in a primary content page, the user is able to select text for searching. In response, the system provides a context menu or keyboard shortcut that includes an option for conducting a sidebar search. In response to user selection, the system passes highlighted or selected text as a parameter to the search engine. The results are provided in an area alongside the currently displayed content page, such as in a sidebar search pane. The user is able to experience search results without leaving the context of their current search tab.
A device includes: a processor, and a memory storing executable instructions which, when executed by the processor, cause the processor, alone or in combination with other processors, to provide the following: a user interface comprising administrator access to a collaboration system, the user interface comprising a control to invoke an artificial intelligence (AI) assistant function; and an Application Programming Interface (API) to, in response to activation of the control, download user activity data for the collaboration system, generate a prompt for a Large Language Model (LLM) comprising the user activity data and instructing the LLM to generate a report based on the user activity data, and submit the prompt to the LLM and receive the report generated by the LLM. The user interface provides the report and controls for administrative actions suggested by the report.
A system for facilitating ray trace operations with shared traversal performs a pre-test operation that includes testing one or more volumes against an acceleration structure associated with a virtual environment to identify a set of candidate nodes of the acceleration structure. The virtual environment comprises one or more virtual objects defined by one or more object components. The system also performs a ray trace operation based upon the set of candidate nodes of the acceleration structure.
Various embodiments of the technology described herein relate to compression of video data, including selecting a pivot image from a video including a plurality of images and causing a first machine learning model to generate a descriptor of the pivot image, where the descriptor includes a language description associated with the pivot image. In one example, the pivot image and the descriptor are provided to a decoder for reconstruction of the video. In an embodiment, the decoder includes a generative machine learning model that takes as an input the pivot image and the descriptor. The decoder uses the pivot image to generate an image based at least in part on the descriptor. The image is combined with other images generated by the generative machine learning model to reconstruct the video.
The present disclosure relates to methods and systems that provide querying and analysis of clinical trials using probabilistic graphical models. The methods and systems train a probabilistic graphical model using clinical trial data and use the probabilistic graphical model to perform inferences in response to queries for clinical trials. The methods and systems use the probabilistic graphical model to handle multimodal datatypes of the clinical trial data and predict multiple attributes of the clinical trial for an input query.
G16H 10/20 - ICT specially adapted for the handling or processing of medical or healthcare data relating to patients, for electronic clinical trials or questionnaires
G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics, for computer-aided diagnosis, e.g. based on medical expert systems
14.
END-TO-END AUTOMATIC SPEECH RECOGNITION SYSTEM FOR BOTH CONVERSATIONAL AND COMMAND-AND-CONTROL SPEECH
A contextual end-to-end automatic speech recognition (ASR) system includes: an audio encoder configured to process input audio signal to produce as output encoded audio signal; a bias encoder configured to produce as output at least one bias entry corresponding to a word to bias for recognition by the ASR system; a transcription token probability prediction network configured to produce as output a probability of a selected transcription token, based at least in part on the output of the bias encoder and the output of the audio encoder; a first attention mechanism configured to receive the at least one bias entry and determine whether the at least one bias entry is suitable to be transcribed at a specific moment of an ongoing transcription; and a second attention mechanism configured to produce prefix penalties for restricting the first attention mechanism to only entries fitting a current transcription context.
Various embodiments described herein dynamically control the distribution of power to individual components of a node in an overprovisioned rack, node, or accelerators of a data center based on service-level agreements (SLAs) defining priorities for workloads for certain user accounts. The SLA is used to determine a throttling order for throttling the accelerators. Controlling the distribution of power in the node includes throttling at an accelerator or coprocessor, based on the throttling order or SLA, until the power consumption is at or below a power policy limit. In this manner, various embodiments discussed herein provide (1) granular control over the execution of tasks in an overprovisioned rack and (2) a user experience consistent with a priority level defined by an SLA, while complying with power policy limit(s) to improve the lifespan and operation of hardware, as well as to reduce the wear and tear experienced by overprovisioned hardware.
G06F 9/455 - Emulation; interpretation; software simulation, e.g. virtualisation or emulation of application or operating system execution engines
G06T 1/20 - Processor architectures; processor configuration, e.g. pipelining
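The SLA-ordered throttling described in the abstract above can be sketched in a few lines of Python. This is an illustrative sketch, not the patented implementation: the dict keys, the assumption that a lower `sla_priority` value means the accelerator is throttled first, and the per-accelerator power `floor` are all hypothetical.

```python
def throttle_to_limit(accelerators, power_limit):
    """accelerators: list of dicts with 'name', 'power', 'floor' (minimum
    power), and 'sla_priority' (lower value = throttled first, an assumption
    for this sketch). Reduces power in SLA order until the total fits."""
    for acc in sorted(accelerators, key=lambda a: a["sla_priority"]):
        total = sum(a["power"] for a in accelerators)
        if total <= power_limit:
            break
        # Cut this accelerator as far as needed, but not below its floor.
        cut = min(total - power_limit, acc["power"] - acc["floor"])
        acc["power"] -= cut
    return sum(a["power"] for a in accelerators)
```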
The techniques disclosed herein provide adaptable notifications for incoming messages. A system uses AI to recognize one or more categories for individual messages of a thread. The system can then generate a summary of specific categories of messages to provide contextually relevant notifications that summarize a specific set of interactions for a message thread. This approach is more efficient than systems that provide individual notifications for each message, as the disclosed techniques enable a system to generate a controlled number of notifications and/or more contextually accurate notifications for specific users. The disclosed techniques also improve the security of a system by generating notifications that can summarize the content of received messages and/or summarize specific interactions within a particular message thread.
H04L 51/224 - Monitoring or handling of messages by providing notification of incoming messages, e.g. push notifications of received messages
H04L 51/04 - Real-time or near-real-time messaging, e.g. instant messaging [IM]
H04L 51/216 - Handling conversation history, e.g. grouping of messages into sessions or threads
17.
DATABASE MANAGEMENT ENGINE FOR A DATABASE MANAGEMENT SYSTEM
Methods, systems, and computer storage media provide a privacy compliance notification indicating a database's level of compliance with a privacy policy after restoring the database to the database's backup copy. The database is associated with a database management engine. The database supports privacy-based first-class data entities. The privacy-based first-class data entities are database entities having privacy system-level metadata properties associated with data operations in a database language syntax. The privacy compliance notification may be generated based on determining whether a privacy database operation associated with a database transaction journal and a privacy journal has been executed on a database since the database was restored to a backup copy of the database. The database transaction journal includes a transaction log of database operations executed against the database, and the privacy journal includes the database operations logged as privacy database operations associated with the plurality of privacy-based first-class data entities.
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
18.
SYSTEMS AND METHODS FOR ZERO TRUST DNS BASED NETWORKING
Examples of the present disclosure describe systems and methods for zero trust domain name system (DNS) (ZTDNS) based networking. A computing device implementing ZTDNS based networking blocks any outbound connections that are not included in a list of trusted IP addresses. The list of trusted IP addresses is updated in response to the computing device receiving from a trusted DNS server an IP address corresponding to a DNS request. In some examples, the ZTDNS based networking intercepts and evaluates outbound communications for applications that implement a custom application DNS client. In other examples, the ZTDNS based networking intercepts and evaluates outbound communications for virtual environments. The outbound communications for both the custom application DNS client and the virtual environments are proxied through a local DNS client of the computing device.
H04L 61/4511 - Network directories; name-to-address mapping using standardised directories or standardised directory access protocols, using the domain name system [DNS]
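The allow-listing behavior at the core of the ZTDNS entry above can be sketched as follows. This is a hedged sketch, not the disclosed implementation: the class name, method names, and the use of plain IP strings are assumptions for illustration.

```python
class ZtdnsPolicy:
    """Block outbound connections unless the destination IP was learned
    from a trusted DNS server's response (sketch of the described idea)."""

    def __init__(self):
        self.trusted_ips = set()  # list of trusted IP addresses

    def on_trusted_dns_response(self, ip: str) -> None:
        # The trusted DNS server resolved a request; allow this destination.
        self.trusted_ips.add(ip)

    def allow_outbound(self, ip: str) -> bool:
        return ip in self.trusted_ips
```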
19.
DETECTION OF MALICIOUS DIRECT MEMORY ACCESS DEVICE USED FOR DIRECT DEVICE ASSIGNMENT
Detection of a malicious direct memory access (DMA) device used for direct device assignment is described. A virtualization computer system assigns a peripheral device to an operating context within a virtualization environment. The peripheral device is DMA capable. The virtualization computer system monitors a signal source that is affected by DMA operations initiated by the peripheral device while the peripheral device is assigned to the operating context. Based on monitoring the signal source, the virtualization computer system identifies a signal pattern characterizing the DMA operations that are initiated by the peripheral device. Using the signal pattern, the virtualization computer system determines that the DMA operations initiated by the peripheral device are abnormal, and identifies the peripheral device as malicious.
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems, during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure, by executing in a restricted environment, e.g. a sandbox or secure virtual machine
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
G06F 21/85 - Protecting input, output or interconnection devices, e.g. devices connected to a bus or in-line devices
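A toy sketch of the signal-pattern check from the DMA-detection entry above: compare observed DMA activity against a baseline learned while the device behaved normally. The statistic (mean deviation in units of standard deviation) and the threshold are illustrative assumptions, not the disclosed method.

```python
from statistics import mean, pstdev

def is_abnormal(baseline: list, observed: list, sigma: float = 3.0) -> bool:
    """Flag DMA activity whose mean level deviates from the baseline by
    more than `sigma` standard deviations (illustrative heuristic)."""
    mu, sd = mean(baseline), pstdev(baseline)
    return abs(mean(observed) - mu) > sigma * max(sd, 1e-9)
```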
Example solutions perform natural language query processing on hybrid utterances. A precise segment is identified, within the hybrid utterance, and processed with a symbolic AI interpreter configured to generate a first interpretation. The precise segment is replaced, within the hybrid utterance, with a placeholder term thereby resulting in a vague utterance. The vague utterance is processed with a statistical AI interpreter configured to generate a second interpretation. The first interpretation is merged with the second interpretation using the hybrid utterance as a template for the merger and using the placeholder term as the location for the first interpretation within the second interpretation. A complete interpretation is generated and transmitted to a query generator.
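The placeholder-based merge described above can be sketched directly. This is a minimal illustration, assuming the symbolic and statistical interpreters are callables passed in, and that the statistical interpreter preserves the placeholder token in its output; all names here are hypothetical.

```python
PLACEHOLDER = "<ENT>"

def interpret(hybrid: str, precise: str, symbolic, statistical) -> str:
    vague = hybrid.replace(precise, PLACEHOLDER, 1)  # vague utterance
    first = statistical and symbolic(precise)        # first interpretation
    second = statistical(vague)                      # second interpretation
    # Merge: the hybrid utterance is the template; the placeholder marks
    # where the first interpretation lands inside the second.
    return second.replace(PLACEHOLDER, first, 1)
```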
Techniques for implementing an AI threat modeling tool are disclosed. A static analysis tool is used to extract a candidate code snippet from a code repository. The candidate code snippet is identified as potentially being a security relevant code element. The static analysis tool generates additional context associated with the candidate code snippet. An LLM prompt is generated. This prompt is structured to include the candidate code snippet, the context, and a directive to assign a classification to the candidate code snippet. The classification includes a source classification, a sink classification, a sanitizer classification, or a flow step classification. The LLM operates on the prompt to generate output comprising a specific classification for the candidate code snippet. The output is formatted into a data extension file that is consumable by the static analysis tool.
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
This document relates to communication by backscattering of satellite signals. One example includes a satellite backscatter transmitter having a first antenna configured to receive a radio frequency satellite signal, a modulator configured to modulate the radio frequency satellite signal to obtain a modulated radio frequency satellite signal, a digital logic circuit configured to selectively control the modulator to encode information according to a communication scheme, and a second antenna configured to passively retransmit the modulated radio frequency satellite signal to a receiver.
23.
FRAMEWORK FOR ANALYZING PROPERTIES OF CHEMICAL MATERIALS
The techniques disclosed herein enable an autonomous agent to interpret an input dataset and orchestrate a suite of software modules to perform a computational task on a representation of a chemical material. The input dataset includes a prompt defining a computational task to be performed on a chemical material. Moreover, the input dataset includes data defining a chemical included in the chemical material, molecular descriptors describing the chemical and/or the chemical material, and an external variable. The agent analyzes the benefits and drawbacks of each model within the context of the computational task to determine a technique for performing the computational task. Accordingly, the agent formulates a chain of calls invoking the functionality of data processing tools and models to perform the computational task responsive to the prompt.
G16C 20/30 - Prediction of properties of chemical compounds, compositions or mixtures
G16C 20/70 - Machine learning, data mining or chemometrics
G16C 60/00 - Computational materials science, i.e. ICT specially adapted for investigating the physical or chemical properties of materials or of phenomena associated with their design, synthesis, processing, characterisation or utilisation
G06N 3/00 - Computing arrangements based on biological models
24.
METHODS AND SYSTEMS FOR ENHANCING MULTIMODAL CAPABILITIES IN LARGE LANGUAGE MODELS
Systems and methods are provided for enhancing the speech modality in a large language model (LLM) and for retaining in-context learning capabilities without overfitting to trained tasks. Systems obtain a first set of training data comprising tuples of a sample of speech combined with synthetically generated pairings of speech comprehension test questions and answers that correspond to the sample of speech and obtain a second set of training data comprising pairings of automatic speech recognition data. Systems generate and align a first set of encodings of the first set of training data and a second set of encodings of the second set of training data. Systems train the LLM on a greater amount of the first set of training data than the second set of training data and use the trained LLM to perform a natural language processing task.
G10L 15/06 - Creation of reference templates; training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
G10L 15/16 - Speech classification or search using artificial neural networks
G10L 15/183 - Speech classification or search using natural language modelling according to context dependencies, e.g. language models
G10L 15/26 - Speech-to-text systems
25.
APPARATUS AND METHODS FOR PRIME FIELD MODULAR REDUCTION
Apparatus and methods for prime field modular reduction are described. As an example, a custom modular reduction digital circuit for reducing an n-bit integer based on a modulus, where the modulus comprises a k-bit integer for use with a cryptographic algorithm, is described. The custom modular reduction digital circuit includes a first circuit to generate at least two partial results by processing: (1) k lower order significant bits of the n-bit integer and (2) at least a subset of bits for congruent representations corresponding to any n-k higher order bits of the n-bit integer that are higher in significance than the most significant bit of the k-bit integer. The custom modular reduction digital circuit further includes a second circuit to process the at least two partial results, output by the first circuit, to generate a reduced version of the n-bit integer for use with the cryptographic algorithm.
G06F 7/72 - Methods or arrangements for performing computations using a non-coded number representation, i.e. a radix-less number representation; computing devices using a combination of coded and non-coded number representations, using residue arithmetic
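A software analogue of the two-stage reduction in entry 25 can be sketched as follows: the first stage forms two partial results (the k low-order bits, and a sum of precomputed congruent representations of the high-order bit weights), and the second stage combines them. The function names and the final conditional-subtraction loop are illustrative choices, not the circuit itself.

```python
def congruence_table(p: int, k: int, n: int) -> dict:
    # Precomputed congruent representations: 2**i mod p for each
    # high-order bit position i >= k (constants in a hardware design).
    return {i: pow(2, i, p) for i in range(k, n)}

def reduce_mod(x: int, p: int, k: int, table: dict) -> int:
    low = x & ((1 << k) - 1)   # partial result 1: k low-order bits
    high = 0                   # partial result 2: sum over set high bits
    rest, i = x >> k, k
    while rest:
        if rest & 1:
            high += table[i]
        rest >>= 1
        i += 1
    # "Second circuit": combine the partial results, then finish the
    # reduction with a small number of conditional subtractions.
    s = low + high
    while s >= p:
        s -= p
    return s
```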
26.
SECURITY ENHANCEMENT FOR COMPUTING DEVICE STATE CHANGE
Systems and methods are disclosed herein for identifying a bypass of a computing device state change. In an example system, a determination is made that a computing component, such as an application executing on the computing device, is blocking a state change of the computing device. The state change includes various types of actions to protect the computing device, such as an automatic lock, logoff, standby mode change, or powering off change. An idle period of the computing device is detected. A proximity change of a user relative to the computing device is also detected. Based on the idle period and the proximity change, an action to remediate the blocking of the state change is performed, such as generating a notification associated with the blocking of the state change for providing to the user and/or automatically bypassing the blocking of the state change.
A phase-interpolator, PI, circuit (700) that generates an interpolated clock (PI_CLK) to capture data in a capture circuit at a target phase, in a phase range between two reference clocks, based on an interpolation code (S) within a range of interpolation codes, is described. A clamping circuit (704) coupled to the PI circuit provides an interpolation code (S) within a reduced range, where the integral non-linearity, INL, of the interpolated clocks is below a threshold, such that data capture based on the interpolated clock has a lower bit error rate, BER. As a result, the interpolated clock is generated within a reduced phase range corresponding to the reduced range of interpolation codes. When a target phase for an interpolated clock is outside the reduced phase range, the clamping circuit may adjust the target phase clock (PHA_REF) relative to a reference clock to bring the target phase within the reduced phase range for improved BER.
H03K 5/135 - Arrangements having a single output and transforming input signals into pulses delivered at desired time intervals, by the use of time reference signals, e.g. clock signals
H03K 5/00 - Manipulation of pulses not covered by one of the other main groups of this subclass
Techniques for using a sensor to perform laser signal decoding are disclosed. The sensor may be a global shutter sensor or a rolling shutter sensor. The sensor generates a first set of images while operating in a first mode. In response to detecting a laser signal in the first set of images, the sensor is caused to operate in a second mode. The laser signal includes an embedded frequency signal component and repeats at a periodic rate. While the sensor is operating in the second mode, the sensor generates a second set of images, which capture an entire period of the laser signal. From the second set of images, the embedded frequency signal component is determined. A decoding operation is performed using the embedded frequency signal component.
A data processing system implements obtaining build logs that include information associated with a software build problem; analyzing the logs to generate a knowledge graph identifying the relationship between various entities in the logs; extracting a signature of a candidate root cause of the build problem from the knowledge graph representing a subset of nodes and edges of the knowledge graph; providing the signature of the candidate root cause to a graphical language model to obtain a prediction of a category of root cause failure selected from among a plurality of root cause failures; constructing a prompt for a language model to generate a root cause failure analysis that describes the root cause of the build problem, the prompt including the category of root cause; receiving the root cause failure analysis from the language model; and performing one or more actions in response to receiving the root cause failure analysis.
In a cloud computing environment, a cross-tenant access security measure monitors conditional access policies for changes or additions that hamper or threaten an authorized access from an assistant tenant user to a focus tenant. In some cases, the cross-tenant access security measure tracks role assignments to detect rogue roles or hampering role changes. In some cases, focus tenant events and assistant tenant events are correlated in an audit. In some cases, the authorized access is a zero-standing, time-bound access. In some cases, the authorized access is constrained to an IP address range, or constrained to login from a managed device, or both. In some cases, assets are excluded from managed response remediation actions. In some cases, managed response is modulated by product-specific Role-Based Access Control. In some cases, repeated logins are avoided, to permit faster managed responses.
The disclosed concepts relate to contextualization of generative language models. In some implementations, a linked entity database is populated with entity resource identifiers of entities extracted from a search log by an entity linker. A contextualized prompt data structure is generated based on the linked entity database, e.g., by including linked entity context information in the contextualized prompt data structure. A response to the contextualized prompt data structure is received, where the response is conditioned on the linked entity context information.
Examples of the present disclosure describe systems and methods for automatically assisting conversations using a graph database. In order to minimize misunderstanding of words and phrases used by participants during a conversation, phrases from the conversation may be received by a conversation assistance application as the conversation takes place. Entities may be extracted from the phrases based on natural language recognition according to a domain context of the participant being assisted. One or more tags may be looked up from a graph database and may be provided to the participant as a list of hashtags related to the conversation. Links to documents may be extracted based on the tags for the participant to view during the conversation.
A system iteratively evaluates the target machine learning model using evaluation hyperparameter values of the target machine learning model to measure performance of the target machine learning model for different combinations of the evaluation hyperparameter values. The system trains a surrogate machine learning model using the different combinations of the evaluation hyperparameter values as features and the performance of the target machine learning model based on a corresponding combination of the evaluation hyperparameter values as labels. The system generates a feature importance vector of the surrogate machine learning model based on the training of the surrogate machine learning model, generates informed priors based on the feature importance vector, and generates the target hyperparameter values of the target machine learning model based on the informed priors.
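The evaluate-train-derive loop above can be sketched in pure Python. The toy target function, the small grid of hyperparameter combinations, and the closed-form two-feature linear surrogate are all illustrative assumptions, not the patented system:

```python
def evaluate_target(hp):
    # Stand-in for the expensive target-model evaluation; here the
    # "performance" depends strongly on hp[0] and weakly on hp[1].
    return 3.0 * hp[0] + 0.1 * hp[1]

def fit_linear_surrogate(X, y):
    # Ordinary least squares for a two-feature surrogate (no intercept),
    # solved with the closed-form 2x2 normal equations.
    s00 = sum(x[0] * x[0] for x in X)
    s01 = sum(x[0] * x[1] for x in X)
    s11 = sum(x[1] * x[1] for x in X)
    b0 = sum(x[0] * yi for x, yi in zip(X, y))
    b1 = sum(x[1] * yi for x, yi in zip(X, y))
    det = s00 * s11 - s01 * s01
    return [(s11 * b0 - s01 * b1) / det, (s00 * b1 - s01 * b0) / det]

# Step 1: evaluate the target over combinations of hyperparameter values.
grid = [(a, b) for a in (0.0, 0.5, 1.0) for b in (0.0, 0.5, 1.0)]
scores = [evaluate_target(hp) for hp in grid]

# Step 2: train the surrogate on (combination -> performance) pairs.
coefs = fit_linear_surrogate(grid, scores)

# Step 3: feature importance vector -> informed priors (normalised weights).
importance = [abs(c) for c in coefs]
priors = [v / sum(importance) for v in importance]
```

With the toy target, the surrogate recovers the coefficients exactly, so the prior concentrates on the first hyperparameter, which is the one that actually drives performance.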
The systems and methods relate to a self-serve diagnostic experience that enables users to help themselves when issues or problems emerge with a customer workload. The systems and methods provide an interactive interface that guides users through a troubleshooting journey. Users may enter a problem with a customer workload using the interactive interface and may receive one or more insights automatically generated by one or more detectors based on an analysis of the backend telemetry data for the customer workload. The insights may provide contextual information about the issues and recommendations for steps to fix the issues. The interactive interface may also provide a visual overview of a plurality of resources, the resource dependencies, and the resource health for the plurality of resources. The systems and methods may also guide users in building one or more detectors for troubleshooting the one or more issues.
G06Q 30/016 - Providing customer assistance, e.g. assisting a customer in a commercial venue or via an after-sales help desk
G06F 9/451 - Execution arrangements for user interfaces
G06Q 10/0631 - Resource planning, allocation, distributing or scheduling for enterprises or organisations
G06Q 10/0637 - Strategic management or analysis, e.g. setting a goal or target of an organisation; planning actions based on goals; analysis or assessment of effectiveness of goals
Methods, systems, apparatuses, and computer program products are provided for enabling access to a resource in a secured manner. A token request from an application executing in a first computing environment may be received in a second computing environment. The second computing environment may assign a trust level to the received token request that indicates that the first computing environment may not be trusted. The token request, along with the trust level, may be provided to an authorization server to generate an authorization token that includes a trust indication indicative of the trust level of the second computing environment. When the application executing in the second computing environment transmits the authorization token to a resource manager to access a resource, the resource manager may be configured to perform a precautionary action to protect the resource prior to providing access, such as creating a backup of the resource.
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
In a cloud computing environment, a cross-tenant access security measure monitors conditional access policies for changes or additions that hamper or threaten an authorized access from an assistant tenant user to a focus tenant. Some cross-tenant access security tracks role assignments to detect rogue roles, or detect hampering role changes. In some cases, focus tenant events and assistant tenant events are correlated in an audit. In some cases, the authorized access is a zero standing time bound access. In some cases, the authorized access is constrained to an IP address range, or constrained to login from a managed device, or both. In some cases, assets are excluded from managed response remediation actions. In some, managed response is modulated by product-specific Role Based Access Control. In some, repeated logins are avoided, to permit faster managed responses.
Systems and methods are described for client-side rewriting of web page code. A proxy computing device receives a web page from a server computing device and analyzes the web page to identify a code component. The proxy computing device generates a modified version of the web page by replacing the identified code component with a wrapped code component and including a code rewriting and evaluation function in the web page. The wrapped code component includes a call to the code rewriting and evaluation function that includes the identified code component as an argument thereof. The code rewriting and evaluation function is configured to generate a rewritten code component by rewriting the identified code component and to evaluate the rewritten code component. The proxy computing device sends the modified version of the web page to a client computing device that is configured to load the modified version of the web page.
G06F 16/955 - Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
G06F 16/958 - Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
G06F 21/54 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems, during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure, by adding security routines or objects to programs
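The proxy-side rewriting described above can be sketched in Python. The inline `<script>` convention, the injected helper name `__rw`, and its JavaScript body are illustrative assumptions; the sketch only shows the wrapping step, where each identified code component becomes a string argument to the rewrite-and-evaluate function:

```python
import json

# Hypothetical client-side rewrite-and-evaluate function injected by the proxy.
REWRITER_JS = "<script>function __rw(src){/* rewrite src, then eval it */}</script>"

def wrap_scripts(page_html):
    # Proxy-side pass: inject the rewriting function, then replace each
    # inline script body with a call to it, passing the original code
    # component as a string literal argument.
    out, i = [REWRITER_JS], 0
    while True:
        start = page_html.find("<script>", i)
        if start < 0:
            out.append(page_html[i:])
            break
        end = page_html.find("</script>", start)
        body = page_html[start + len("<script>"):end]
        out.append(page_html[i:start])
        out.append("<script>__rw(" + json.dumps(body) + ")</script>")
        i = end + len("</script>")
    return "".join(out)
```

`json.dumps` is used because a JSON string literal is also a valid JavaScript string literal, so the original code component survives intact as the argument.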
38.
TECHNIQUES FOR ENABLING ON-DEVICE INK STROKE PROCESSING
A data processing system implements obtaining device information and performance requirements information for a resource-constrained computing device; analyzing the device information and the performance requirements information to determine an amount to compress one or more machine learning models to permit the resource-constrained computing device to operate the one or more machine learning models on the resource-constrained computing device, the one or more machine learning models including a stroke classification model for classifying digital ink stroke information as handwriting or a drawing; compressing the one or more machine learning models to permit the one or more machine learning models to operate on the resource-constrained computing device to generate one or more compressed machine learning models; and deploying the one or more compressed machine learning models to the resource-constrained computing device to process ink stroke information captured by a user interface of the resource-constrained computing device.
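The sizing step above, determining how much to compress the models so they fit the device, reduces to simple arithmetic. This is a minimal sketch under assumed inputs (byte sizes and a fixed runtime-overhead fraction); the real analysis would also weigh the performance-requirements information:

```python
def required_compression(model_bytes, device_budget_bytes, usable_fraction=0.8):
    # usable_fraction reserves part of the device budget for runtime
    # overhead (an assumption for this sketch). Returns the factor by
    # which the combined models must shrink, or 1.0 if they already fit.
    usable = device_budget_bytes * usable_fraction
    total = sum(model_bytes)
    return max(1.0, total / usable)
```

For example, an 80 MB stroke-classification model plus a 40 MB companion model targeting a 100 MB budget would need roughly 1.5x compression.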
The disclosed concepts relate to contextualization of generative language models. In some implementations, a linked entity database is populated with entity resource identifiers of entities extracted from a search log by an entity linker. A contextualized prompt data structure is generated based on the linked entity database, e.g., by including linked entity context information in the contextualized prompt data structure. A response to the contextualized prompt data structure is received, where the response is conditioned on the linked entity context information.
A time-to-digital converter (TDC) circuit generates a digital output indicating a time, known as a phase difference, from a phase of a generated signal to a corresponding phase of a reference signal. The digital output is used by a digitally controlled oscillator (DCO) to correct for the phase/frequency difference and synchronize the generated signal with the reference signal. In an aspect, an adaptive TDC circuit generates a first digital indication in a coarse mode when the offset time is above a threshold and generates a second digital indication in a fine mode when the offset time is below the threshold. The first digital indication and the second digital indication each comprise the same number of bits, and the first digital indication is normalized to the second digital indication for the digital output of the adaptive TDC circuit. A fractional bit may be employed to compensate for quantization error.
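The coarse-to-fine normalization can be illustrated numerically. The resolutions and threshold below are invented values for the sketch; the point is that the coarse code is scaled by the resolution ratio so both modes report in the same (fine-LSB) units:

```python
COARSE_LSB_PS = 40.0   # coarse-mode resolution in picoseconds (assumed)
FINE_LSB_PS = 5.0      # fine-mode resolution in picoseconds (assumed)
THRESHOLD_PS = 200.0   # mode-selection threshold (assumed)

def adaptive_tdc(offset_ps):
    # Measure in coarse mode for large offsets, fine mode otherwise,
    # then normalize the coarse code to fine-mode units so both modes
    # produce digital outputs on the same scale.
    if offset_ps > THRESHOLD_PS:
        coarse_code = round(offset_ps / COARSE_LSB_PS)
        return coarse_code * round(COARSE_LSB_PS / FINE_LSB_PS)
    return round(offset_ps / FINE_LSB_PS)
```

A 400 ps offset (coarse mode) and a 100 ps offset (fine mode) both come out as counts of 5 ps fine LSBs, so downstream loop logic sees one consistent scale.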
Systems and techniques for multi-phase cloud service node error prediction are described herein. A set of spatial metrics and a set of temporal metrics may be obtained for node devices in a cloud computing platform. The node devices may be evaluated using a spatial machine learning model and a temporal machine learning model to create a spatial output and a temporal output. One or more potentially faulty nodes may be determined based on an evaluation of the spatial output and the temporal output using a ranking model. The one or more potentially faulty nodes may be a subset of the node devices. One or more migration source nodes may be identified from one or more potentially faulty nodes. The one or more migration source nodes may be identified by minimization of a cost of false positive and false negative node detection.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
G06N 5/01 - Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
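The node-error-prediction pipeline above, per-node spatial and temporal scores combined by a ranking model, can be sketched minimally. The equal-weight mean used as the ranking model and the example scores are assumptions; the patented system learns the ranking and also minimizes false-positive/false-negative cost:

```python
def rank_faulty_nodes(spatial_scores, temporal_scores, top_k=2):
    # Combine per-node spatial and temporal fault scores with a simple
    # ranking rule (here: their mean) and return the top-k node ids as
    # the potentially faulty subset of the node devices.
    combined = {n: 0.5 * (spatial_scores[n] + temporal_scores[n])
                for n in spatial_scores}
    return sorted(combined, key=combined.get, reverse=True)[:top_k]
```

Nodes flagged this way would then be candidates for selection as migration source nodes.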
42.
GENERATING ANIMATED INFOGRAPHICS FROM STATIC INFOGRAPHICS
Implementations of the subject matter described herein relate to generating animated infographics from static infographics. A computer-implemented method comprises: extracting visual elements of a static infographic; determining, based on the visual elements, a structure of the static infographic at least indicating a layout of the visual elements in the static infographic; and applying a dynamic effect to the visual elements based on the structure of the static infographic to generate an animated infographic.
G06T 13/80 - 2D [two-dimensional] animation, e.g. using sprites
G06V 10/77 - Processing image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
43.
DYNAMICALLY GENERATED CONTENT STICKERS FOR USE IN VIDEO CREATION
The present disclosure relates to methods and devices for dynamically generating stickers for use with a video. The methods and devices may dynamically generate a plurality of stickers in response to receiving a query with search terms for a sticker to add to a video being created. The plurality of stickers may include interactive content related to the search terms. The methods and devices may receive a selection of one or more of the stickers to include in the video. Upon an indication that a video is to be played, the methods and devices may regenerate the selected stickers for the video with the content and provide video output with the video and one or more overlays with the selected stickers for presentation on a display.
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 16/78 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
A data processing system implements obtaining build logs that include information associated with a software build problem; analyzing the logs to generate a knowledge graph identifying the relationship between various entities in the logs; extracting a signature of a candidate root cause of the build problem from the knowledge graph representing a subset of nodes and edges of the knowledge graph; providing the signature of the candidate root cause to a graphical language model to obtain a prediction of a category of root cause failure selected from among a plurality of root cause failures; constructing a prompt for a language model to generate a root cause failure analysis that describes the root cause of the build problem, the prompt including the category of root cause; receiving the root cause failure analysis from the language model; and performing one or more actions in response to receiving the root cause failure analysis.
A system comprising: an actuator; a signal generator configured to apply an electric signal to the actuator to expand and contract the actuator; an optical fibre associated with the actuator, the optical fibre configured to lengthen when the actuator expands and shorten when the actuator contracts; a coherent light source configured to transmit a coherent light through the optical fibre to provide illumination during the lengthening and shortening of the optical fibre.
Probation of direct memory access (DMA) device used for direct device assignment. A virtualization computer system identifies a peripheral device as being removed from a direct assignment to a first operating context of a virtualization environment. The peripheral device is DMA capable. The virtualization computer system assigns the peripheral device to a second operating context of the virtualization environment and initiates a device validation against the peripheral device. Based on the device validation indicating that the peripheral device is normal, the virtualization computer system reassigns the peripheral device to a third operating context of the virtualization environment. Based on the device validation indicating that the peripheral device is abnormal, the virtualization computer system excludes the peripheral device from assignment to a third operating context of the virtualization environment.
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems, during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure, by executing in a restricted environment, e.g. sandbox or secure virtual machine
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 21/85 - Protecting input, output or interconnection devices; interconnection devices, e.g. bus-connected or in-line devices
47.
SIGNIFICANCE ORDERED PREFIX TREE FOR COMPUTE-EFFICIENT ROOT CAUSE INVESTIGATION
Disclosed techniques include generating a significance-ordered prefix tree based on significant telemetry point values and their Z-scores; using the significance-ordered prefix tree to identify cohorts to evaluate in combination; and computing a cohort Z-score for each of the identified cohorts and identifying, based on the cohort Z-scores, a subset of the cohorts that are statistically significant indicators of the condition of interest.
G06F 11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
G06F 11/34 - Recording or statistical evaluation of computer activity, e.g. of interruptions or of input/output operations
H04L 41/0604 - Management of faults, events, alarms or notifications using filtering, e.g. reduction of information by using priority, element types, position or time
H04L 43/065 - Generation of reports related to network devices
H04L 41/0631 - Management of faults, events, alarms or notifications using root cause analysis; management of faults, events, alarms or notifications using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy or temporal or tree analysis
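The significance ordering above can be sketched compactly: telemetry point values are sorted by descending |Z| before cohort combinations are enumerated, so the most significant values form shared prefixes and are evaluated first. The example Z-scores and the flat enumeration (rather than an explicit tree) are simplifying assumptions:

```python
from itertools import combinations

def significance_ordered_cohorts(zscores, max_len=2):
    # Order telemetry point values by descending |Z|, then enumerate
    # cohort candidates in prefix order: every combination lists its
    # most significant members first, mirroring tree traversal order.
    ordered = sorted(zscores, key=lambda k: -abs(zscores[k]))
    return [c for n in range(1, max_len + 1)
            for c in combinations(ordered, n)]
```

Each emitted cohort would then get a cohort Z-score, and only the statistically significant ones survive as root-cause indicators.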
48.
DIGITAL PHASE-LOCKED LOOPS (PLL) INCLUDING CLOSED-LOOP TIME-TO-DIGITAL CONVERTER (TDC) GAIN CALIBRATION CIRCUITS AND RELATED METHODS
In a calibrated digital phase-locked-loop (DPLL) circuit, during a normal operating mode, a control value provided to a digitally controlled oscillator (DCO) is updated by a feedback circuit to keep an output clock generated by the DCO synchronized with a reference clock. The feedback circuit includes a time-to-digital converter (TDC) circuit to measure a phase difference as a time interval. In a calibration operating mode of the calibrated DPLL circuit, calibration of a resolution of a time measurement of the time interval measured by the TDC is performed in the feedback circuit while the control value provided to the DCO is kept constant. Calibrating the TDCs in each of the DPLLs in an integrated circuit (IC) to a nominal resolution in this manner improves synchronization of the clock domains. In some examples, the TDC circuit is a Vernier type circuit and calibration sets a delay difference to a nominal resolution.
H03L 7/07 - Automatic control of frequency or phase; Synchronisation using a reference signal applied to a frequency- or phase-locked loop using several loops, e.g. for redundant clock signal generation
G06F 1/12 - Synchronisation of different clock signals
H03L 7/085 - Details of the phase-locked loop concerning mainly the frequency- or phase-detection arrangement, including the filtering or amplification of its output signal
G04F 10/00 - Apparatus for measuring unknown time intervals by electric means
49.
SYSTEM AND METHOD FOR PERFORMING QUERY OPERATIONS ON RUN LENGTH ENCODED DATA
A method, computer program product, and computing system for processing query operations on run length encoding (RLE) data in a parallel processing computing system. Data for query execution is received at a parallel processing computing system, at least a portion of the data being compressed according to RLE, thereby forming RLE data; and a query operation is executed on the RLE data without performing a decompression operation on the RLE data.
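Executing a query directly on RLE data, without decompressing, can be illustrated with two common aggregate operations. This is a minimal sketch assuming runs stored as (value, length) pairs, not the patented execution engine:

```python
def rle_sum(runs):
    # SUM() over the column: each run of identical values contributes
    # value * length, so the compressed form is scanned once and the
    # data is never expanded.
    return sum(value * length for value, length in runs)

def rle_count_where(runs, predicate):
    # COUNT(*) WHERE predicate(value): the predicate is tested once per
    # run, and a matching run contributes its entire length.
    return sum(length for value, length in runs if predicate(value))
```

For a run-heavy column this does O(number of runs) work instead of O(number of rows), which is the payoff of skipping decompression.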
A technique partitions a user's original query into plural smaller component queries, each of which has a common part and an instance-specific part. The technique distributes the component queries to plural processor instances of a processor. The plural processor instances transform the respective component queries into query-component responses by acting in parallel, independent of each other. The technique generates a final response based on the query-component responses, e.g., by assembling the component-query responses into the final response. The technique reduces latency because the processor instances work on parts of the user's original query at the same time, rather than as a single stream of consecutive tokens. The plural processor instances have access to a shared cache memory, and utilize relevant data that has been computed in response to previous queries.
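The partition-distribute-assemble flow above can be sketched with a thread pool standing in for the plural processor instances. The common part, the word-based partitioning, and the uppercase "processing" are illustrative assumptions; the shared cache memory is omitted:

```python
from concurrent.futures import ThreadPoolExecutor

COMMON = "summarize:"  # part shared by every component query (assumed)

def process_instance(component_query):
    # Stand-in for one processor instance answering its sub-query.
    return component_query.upper()

def answer(original_query, n_parts=3):
    # Partition the original query into component queries, fan them out
    # to instances working in parallel, then assemble the final response
    # from the component-query responses.
    words = original_query.split()
    step = max(1, len(words) // n_parts)
    parts = [" ".join(words[i:i + step]) for i in range(0, len(words), step)]
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(process_instance,
                                 (COMMON + " " + p for p in parts)))
    return " | ".join(partials)
```

Because the instances run concurrently rather than as one token stream, end-to-end latency tracks the slowest component rather than the sum of all of them.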
A method, computer program product, and computing system for optimizing query operations on run length encoding (RLE) data in a parallel processing computing system. Data is received in a plurality of columns of an input table of a parallel processing computing system for query execution; the system determines that at least a portion of the received data in a first number of columns is compressed according to run length encoding (RLE), thereby comprising RLE data columns including RLE data, and that the received data in a second number of columns is not compressed according to run length encoding (RLE), thereby comprising non-RLE data columns including non-RLE data. A query operation is executed on the RLE data and the non-RLE data by prioritizing processing of the RLE data columns over processing of the non-RLE data columns.
A computing device may include a substrate. A computing device may include a processing unit supported by the substrate. A computing device may include an optical transmitter supported by the substrate and in electrical communication with the processing unit.
A media server proxy switches streaming media protocols ("SMPs") during streaming of media segments. The media server proxy receives a request, from a playback tool, according to a first SMP to provide information about outgoing media segments of a media sequence. The media server proxy generates the information about outgoing media segments and sends the information to the playback tool. The media server proxy also retrieves, from a remote server, incoming media content for the media sequence according to a second SMP different than the first SMP. The media server proxy assembles outgoing media segments based at least in part on the incoming media content. The media server proxy streams, to the playback tool, outgoing media segments according to the first SMP. In this way, the media server proxy can deliver media segments at very low latency, even when the first SMP typically has much higher latency.
H04N 21/222 - Secondary servers, e.g. proxy server or cable television head-end
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating coded video stream scene graphs, involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/438 - Interfacing the downstream path of the transmission network originating from a server, e.g. retrieving coded video stream packets from an IP network
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
54.
GENERATIVE ARTIFICIAL INTELLIGENCE OUTPUT VALIDATION ENGINE IN AN ARTIFICIAL INTELLIGENCE SYSTEM
Methods, systems, and computer storage media for providing generative artificial intelligence (AI) output validation using a generative AI output validation engine in an artificial intelligence system. The generative AI output validation engine assesses and determines the quality (e.g., quantified as an output validation score) of generative AI output (e.g., LLM output). In operation, a generative AI output comprising summary data is accessed. Raw data from which summary data is generated is accessed. A plurality of output validation operations associated with a generative AI output validation engine are executed. The generative AI output validation engine comprises multi-categorical analytical models that provide corresponding output validation operations for quantifying quality of generative AI outputs. Using the generative AI output validation engine, generating an output validation score for the summary data. Communicating the output validation score. A feedback loop is established to incorporate human feedback for fine-tuning the generative AI output validation engine models.
Example solutions for processing LLM prompts include creating a first large language model (LLM) prompt based on an input LLM prompt. The first LLM prompt represents a first step toward generating a solution to the input LLM prompt. The first LLM prompt is submitted to an LLM as a first sub-query, thereby resulting in the generation of a first LLM output. A second LLM prompt is generated based on the input LLM prompt. The second LLM prompt represents a second step toward generating the solution. The second LLM prompt includes the first LLM output. The second LLM prompt is submitted to the LLM as a second sub-query, thereby resulting in the generation of a second LLM output. The second LLM output represents the solution to the input LLM prompt in response to the input LLM prompt.
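The two-step prompt chaining described above can be sketched directly; `call_llm` is a stand-in for whatever completion endpoint is used, and the prompt wording is invented for illustration:

```python
def call_llm(prompt):
    # Stand-in for a real LLM call; echoes the prompt so the chaining
    # structure is visible in the result.
    return "out(" + prompt + ")"

def solve(input_prompt):
    # First sub-query: a prompt representing step one toward the solution.
    first_output = call_llm("step 1 of: " + input_prompt)
    # Second sub-query: derived from the input prompt AND the first
    # output, so step two builds on step one's result.
    second_output = call_llm("step 2 of: " + input_prompt +
                             " given " + first_output)
    # The second output is returned as the solution to the input prompt.
    return second_output
```

The key structural point is that the first LLM output is embedded verbatim inside the second prompt, which is what makes the sub-queries a chain rather than two independent calls.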
First and second device terminals of a multi-terminal quantum device (200) are coupled to first and second external measurement terminals respectively, whilst a device ground terminal is coupled to an external ground terminal. The device ground terminal is coupled to the external ground terminal via two parallel ground lines (101, 102). A first of these ground lines (101) includes a voltage measurement device (105), and a second ground line (102) comprises a voltage generator (106). A controller (103) receives as input a time-varying voltage measurement on the first line (101), and uses this measurement to generate a control signal to the voltage generator (106). The control signal causes the voltage generator (106) to generate a time-varying stabilization voltage on the second ground line (102) in order to mitigate or cancel any residual voltages on the device ground terminal.
G05F 1/46 - Regulating voltage or current wherein the variable actually regulated by the final control device is DC
G06N 10/40 - Physical realisations or architectures of quantum processors or components for manipulating qubits, e.g. qubit coupling or qubit control
57.
INTERACTION CUSTOMIZATION FOR A LARGE-FORMAT DISPLAY DEVICE
A method of customizing interactive control for a large-format touch-sensitive display device (LFTSDD) is disclosed. One or more images of a scene in front of the LFTSDD are received via a camera of the LFTSDD. The one or more images are computer-analyzed to recognize a human subject in the scene and a location of the human subject relative to the LFTSDD. A variable interaction zone of a display screen of the LFTSDD is determined based at least on the recognized location of the human subject relative to the LFTSDD. The variable interaction zone is smaller than the display screen and positioned a designated distance in front of the human subject on the display screen based at least on the recognized location of the human subject relative to the LFTSDD. A touch control affordance is visually presented in the variable interaction zone of the display screen of the LFTSDD.
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of relative 2D movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
Techniques for intelligently prompting an LLM to refactor code are disclosed. A code snippet is accessed. This code is identified as potentially comprising a reference to an out-of-compliance library. Context for the code snippet is generated. An LLM prompt is then built. This prompt will be fed to the LLM, and the prompt instructs the LLM to refactor the code snippet into modified code, which calls a compliant library. Output of the LLM is displayed. This output is based on the LLM operating in response to the LLM prompt. The output includes a proposed rewritten version of the code snippet.
Techniques for implementing an AI threat modeling tool are disclosed. A static analysis tool is used to extract a candidate code snippet from a code repository. The candidate code snippet is identified as potentially being a security relevant code element. The static analysis tool generates additional context associated with the candidate code snippet. An LLM prompt is generated. This prompt is structured to include the candidate code snippet, the context, and a directive to assign a classification to the candidate code snippet. The classification includes a source classification, a sink classification, a sanitizer classification, or a flow step classification. The LLM operates on the prompt to generate output comprising a specific classification for the candidate code snippet. The output is formatted into a data extension file that is consumable by the static analysis tool.
Mixed reality images are inked with strokes and other annotations, permitting a headset, robot, or drone operator to see coherent graphical additions made by a remote user. Some embodiments get an ink stroke in an annotation which includes multiple points in a screen space, select a ray origin point based on at least samples of the annotation points, obtain a spatial mesh representation of at least a portion of an object which is at least partially shown in a mixed reality image, cast a ray from the ray origin point to the spatial mesh representation, thereby determining a ray-mesh intersection point, choose a projection plane, project the annotation onto the projection plane, and configure a display with the mixed reality image including the projected ink stroke or other annotation.
A method and system for scheduling online meetings while disconnected from a network involves setting an indicator in a meeting invitation when the invitation is generated while a client device is offline. Upon the client device regaining network connectivity, the indicator denoting needed online meeting connection data is processed. In one embodiment, the client device requests meeting connection data from a server, receives the meeting connection data, adds it to the meeting request, and then communicates the meeting request to the server for distribution to the meeting invitees. Alternatively, the client device communicates the meeting request with the indicator to the server. The server recognizes the indicator, requests and adds meeting connection data to the meeting request, and distributes the request to the meeting invitees. By utilizing an indicator set offline by the client device, online meeting requests can be generated and sent when a client device is offline.
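The offline-indicator flow in the first embodiment can be sketched as a small state transition. The dictionary fields and the injected `fetch_connection_data` callable are assumptions for illustration:

```python
def create_invitation(title, online_connected):
    # When the invitation is composed while the client device is offline,
    # set the indicator denoting that online-meeting connection data is
    # still needed.
    return {"title": title,
            "needs_connection_data": not online_connected,
            "connection_data": None}

def on_reconnect(invitation, fetch_connection_data):
    # Upon regaining connectivity, the client requests the connection
    # data from the server, adds it to the meeting request, and clears
    # the indicator before the request is distributed to invitees.
    if invitation["needs_connection_data"]:
        invitation["connection_data"] = fetch_connection_data()
        invitation["needs_connection_data"] = False
    return invitation
```

The alternative embodiment moves the `on_reconnect` resolution step server-side, but the indicator contract is the same.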
In a calibrated digital phase-locked-loop (DPLL) circuit, during a normal operating mode, a control value provided to a digitally controlled oscillator (DCO) is updated by a feedback circuit to keep an output clock generated by the DCO synchronized with a reference clock. The feedback circuit includes a time-to-digital converter (TDC) circuit to measure a phase difference as a time interval. In a calibration operating mode of the calibrated DPLL circuit, calibration of a resolution of a time measurement of the time interval measured by the TDC is performed in the feedback circuit while the control value provided to the DCO is kept constant. Calibrating the TDCs in each of the DPLLs in an integrated circuit (IC) to a nominal resolution in this manner improves synchronization of the clock domains. In some examples, the TDC circuit is a Vernier type circuit and calibration sets a delay difference to a nominal resolution.
H03L 7/107 - Details of the phase-locked loop for assuring initial synchronisation or for broadening the capture range using a variable transfer function for the loop, e.g. a low-pass filter having a variable bandwidth
G04F 10/00 - Apparatus for measuring unknown time intervals by electric means
H03L 7/085 - Details of the phase-locked loop concerning mainly the frequency- or phase-detection arrangement including the filtering or amplification of its output signal
H03L 7/099 - Details of the phase-locked loop concerning mainly the controlled oscillator of the loop
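The calibration idea for a Vernier-type TDC can be illustrated numerically: the TDC's resolution is the difference between its two delay-line delays, and calibration trims that difference toward a nominal value while measuring a known reference interval. This is a hedged sketch; the nominal resolution, reference interval, and trim step are assumptions, not values from the patent.

```python
NOMINAL_RES = 10e-12    # 10 ps nominal resolution (assumed)
REF_INTERVAL = 1e-9     # known 1 ns calibration interval (assumed)

def measure_counts(interval, tau1, tau2):
    """A Vernier TDC reports an interval in units of its delay difference."""
    return round(interval / (tau1 - tau2))

tau1, tau2 = 52e-12, 40e-12                          # start at a 12 ps difference
target_counts = round(REF_INTERVAL / NOMINAL_RES)    # counts expected at nominal
for _ in range(200):
    counts = measure_counts(REF_INTERVAL, tau1, tau2)
    if counts == target_counts:
        break                                        # resolution now nominal
    # Too few counts means the resolution is too coarse: shrink the difference.
    tau2 += 0.05e-12 if counts < target_counts else -0.05e-12
```

After the loop, tau1 - tau2 sits within one trim step of the nominal resolution; in the circuit this trimming happens while the DCO control value is held constant.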
63.
ENHANCING DOCUMENT METADATA WITH CONTEXTUAL MOLECULAR INTELLIGENCE
A molecule representation is extracted from a document and associated with the document in a metadata database. For example, an image of a molecular structure may be extracted from a document and stored in the metadata database in a text-based representation such as SMILES. The metadata database may be searched to identify documents that mention a particular molecule. Continuing the example, the metadata database may be searched with a SMILES representation to identify the document and other documents that refer to the same molecule. The metadata database may index documents based on different types of molecule representations, including text-based, image-based, graph-based, name, abbreviation, etc. This allows search over multiple representations of a molecule, improving accuracy and thoroughness. These improvements reduce the time and computational resources needed to search for documents that refer to a particular molecule.
G06F 16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
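The multi-representation index described above can be sketched with a plain mapping from (representation type, value) pairs to document identifiers. This is a minimal illustration under stated assumptions: a production system would canonicalize SMILES strings with a cheminformatics toolkit before indexing, whereas here the strings are assumed pre-canonicalized, and the function names are hypothetical.

```python
from collections import defaultdict

index = defaultdict(set)   # (representation_type, key) -> set of document ids

def add_document(doc_id, representations):
    """representations: iterable of (type, value) pairs extracted from the doc."""
    for rep_type, value in representations:
        index[(rep_type, value)].add(doc_id)

def search(rep_type, value):
    return sorted(index.get((rep_type, value), set()))

add_document("doc-1", [("smiles", "CC(=O)Oc1ccccc1C(=O)O"), ("name", "aspirin")])
add_document("doc-2", [("smiles", "CC(=O)Oc1ccccc1C(=O)O")])
```

Because the same molecule is indexed under several representation types, a search with either the SMILES string or the name surfaces the matching documents.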
The described technology provides a method including generating a full tracking vector wherein each bit of the full tracking vector indicates a cache validity state of a coherence granule (cogran) in agent cache for a related agent; dividing the tracking vector into a plurality of partial vectors (PVECs); for each PVEC, determining whether the cache validity state of at least one bit in the PVEC is set to valid; and, in response to determining that the cache validity state of at least one bit in a given PVEC is set to valid, storing the given PVEC and its PVEC pointer in a tracking_info field of a base snoop filter (SFT) entry for the cogran, wherein the PVEC pointer indicates the location of the given PVEC in the full tracking vector.
G06F 12/0831 - Cache consistency protocols using a bus scheme, e.g. with bus monitoring or watching means
G06F 12/0891 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches, using clearing, invalidating or resetting means
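The partial-vector compression can be sketched as follows. This is an illustrative model, not the hardware: the PVEC width is an assumed parameter, and only PVECs containing at least one valid bit are retained, each paired with a pointer giving its position within the full tracking vector.

```python
PVEC_WIDTH = 8   # assumed partial-vector width in bits

def compress_tracking_vector(full_vector):
    """full_vector: list of 0/1 cache-validity bits, one per agent."""
    tracking_info = []
    for offset in range(0, len(full_vector), PVEC_WIDTH):
        pvec = full_vector[offset:offset + PVEC_WIDTH]
        if any(pvec):   # keep only PVECs with at least one bit set to valid
            tracking_info.append((offset // PVEC_WIDTH, pvec))
    return tracking_info

# 24 agents; only agents 3 and 17 hold the coherence granule as valid.
full = [0] * 24
full[3] = full[17] = 1
info = compress_tracking_vector(full)
```

Two (pointer, PVEC) entries stand in for the full 24-bit vector, which is the storage saving the snoop filter entry's tracking_info field exploits.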
A computing system for conditional generation of protein sequences includes processing circuitry that implements a denoising diffusion probabilistic model. In an inference phase, the processing circuitry receives an instruction to generate a predicted protein sequence having a target functionality, the instruction including first conditional information and second conditional information. The processing circuitry concatenates a first conditional information embedding generated by a first encoder and a second conditional information embedding generated by a second encoder to produce a concatenated conditional information embedding. The processing circuitry samples noise from a distribution function and combines the concatenated conditional information embedding with the sampled noise to produce a noisy concatenated input. The processor inputs the noisy concatenated input to a denoising neural network to generate a predicted sequence embedding, inputs the predicted sequence embedding to a decoding neural network to generate the predicted protein sequence, and outputs the predicted protein sequence.
A system embeds source content segments of the source content to generate input vectors of the source content segments and embeds generated content segments of the artificial-intelligence-generated content to generate output vectors of the generated content segments. The system performs a similarity measurement on the input vectors and the output vectors to generate a similarity score for each pair of input vectors and output vectors. The system defines a similarity correspondence between individual content segments of the source content to individual generated content segments of the artificial-intelligence-generated content, based on performing the similarity measurement and outputs the explanation to a user interface device. The explanation indicates generated result correspondences between the individual content segments of the source content and the individual generated content segments of the artificial-intelligence-generated content.
Probation of direct memory access (DMA) device used for direct device assignment. A virtualization computer system identifies a peripheral device as being removed from a direct assignment to a first operating context of a virtualization environment. The peripheral device is DMA capable. The virtualization computer system assigns the peripheral device to a second operating context of the virtualization environment and initiates a device validation against the peripheral device. Based on the device validation indicating that the peripheral device is normal, the virtualization computer system reassigns the peripheral device to a third operating context of the virtualization environment. Based on the device validation indicating that the peripheral device is abnormal, the virtualization computer system excludes the peripheral device from assignment to a third operating context of the virtualization environment.
G06F 13/28 - Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access, cycle steal
68.
DETECTION OF MALICIOUS DIRECT MEMORY ACCESS DEVICE USED FOR DIRECT DEVICE ASSIGNMENT
Detection of malicious direct memory access (DMA) device used for direct device assignment. A virtualization computer system assigns a peripheral device to an operating context within a virtualization environment. The peripheral device is DMA capable. The virtualization computer system monitors a signal source that is affected by DMA operations initiated by the peripheral device while the peripheral device is assigned to the operating context. Based on monitoring the signal source, the virtualization computer system identifies a signal pattern characterizing the DMA operations that are initiated by the peripheral device. Using the signal pattern, the virtualization computer system determines that the DMA operations initiated by the peripheral device are abnormal and the virtualization computer system identifies the peripheral device as malicious.
G06F 21/55 - Detecting local intrusion or implementing counter-measures
G06F 13/28 - Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access, cycle steal
69.
MONITORING PRODUCE QUALITY IN THE SUPPLY CHAIN AT THE PALLET LEVEL WITH WIRELESS SIGNALS
A data processing system implements transmitting an RF signal using a transmitter disposed at a first side of a produce container containing produce to be monitored for quality. The signal is transmitted on multiple frequencies. The system further implements receiving the signal using a receiver disposed at a second side of the produce container opposite the first side of the produce container so the signal passes through the produce; obtaining a sample signal output by the receiver responsive to receiving the signal that passed through the produce contained in the produce container; analyzing the sample signal to identify differences between the RF signal and the sample signal representative of the dielectric properties of the produce; determining an estimated quality level of the produce based on the differences between the RF signal and the sample signal; and outputting an indication of the estimated quality level of the produce.
G01N 22/00 - Investigating or analysing materials by the use of microwaves or radio waves, i.e. electromagnetic waves with a wavelength of one millimetre or more
G06Q 10/087 - Inventory or stock management, e.g. order filling, procurement or balancing against orders
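The signal analysis above can be illustrated with a simple attenuation comparison. The numbers, thresholds, and bucketing below are assumptions for illustration only; a real system would derive them from calibrated dielectric measurements of the produce type.

```python
import math

def estimate_quality(tx_amplitudes, rx_amplitudes):
    """Per-frequency attenuation in dB, averaged, then bucketed into a level."""
    atten = [20 * math.log10(tx / rx) for tx, rx in zip(tx_amplitudes, rx_amplitudes)]
    mean_db = sum(atten) / len(atten)
    if mean_db < 6.0:         # assumed thresholds, not from the patent
        return "fresh"
    if mean_db < 12.0:
        return "aging"
    return "degraded"

# Transmit at unit amplitude on three frequencies; the received amplitudes
# reflect the produce's dielectric losses at each frequency.
quality = estimate_quality([1.0, 1.0, 1.0], [0.6, 0.5, 0.4])
```

The dielectric properties of produce change as it ages, so the per-frequency differences between transmitted and received signals serve as the quality proxy.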
70.
DEFERRING LOAD DURING A PEAK PERIOD ON A CORE NETWORK
The present disclosure generally relates to deferring one or more load actions on a network function, such as a session management function (SMF), while providing access to communication-related services in a telecommunications network (e.g., a 5G mobile network). Systems described herein involve determining whether the SMF or core network of a telecommunications network is experiencing peak load conditions (e.g., a period of high network traffic) and, based on the determined peak load condition, causing one or more processes to be deferred or otherwise modified during the period of peak load. In this manner, the systems described herein preserve as much bandwidth and processing resources as possible in an effort to avoid degradation of communication applications and services, such as during voice and/or audio calls.
Systems and methods are provided for enhancing the speech modality in a large language model (LLM) and for retaining in-context learning capabilities without overfitting to trained tasks. Systems obtain a first set of training data comprising tuples of a sample of speech combined with synthetically generated pairings of speech comprehension test questions and answers that correspond to the sample of speech and obtain a second set of training data comprising pairings of automatic speech recognition data. Systems generate and align a first set of encodings of the first set of training data and a second set of encodings of the second set of training data. Systems train the LLM on a greater amount of the first set of training data than the second set of training data and use the trained LLM to perform a natural language processing task.
G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
G06F 40/58 - Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
G10L 15/183 - Speech classification or search using natural language modelling according to context dependencies, e.g. language models
A computer-implemented method for spatially tracking muscle activity is disclosed. A muscle activation signal is received from a muscle activation sensor. The muscle activation signal indicates an amount of muscle activation of a muscle associated with a body part. A spatial signal is received from a spatial sensor. The spatial signal indicates a location of the body part in a physical space. Activation data is generated that spatially correlates the amount of muscle activation of the muscle to the location of the body part in the physical space.
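The correlation step can be sketched by pairing the two sensor streams by timestamp. This is a minimal illustration; the field names, sample rates, and the nearest-in-time pairing rule are assumptions rather than details from the disclosure.

```python
def correlate(emg_samples, spatial_samples, max_dt=0.05):
    """emg_samples: (t, activation) pairs; spatial_samples: (t, (x, y, z)) pairs."""
    activation_data = []
    for t_e, level in emg_samples:
        # Nearest-in-time location sample for this activation sample.
        t_s, loc = min(spatial_samples, key=lambda s: abs(s[0] - t_e))
        if abs(t_s - t_e) <= max_dt:   # only pair samples close in time
            activation_data.append({"time": t_e, "activation": level, "location": loc})
    return activation_data

emg = [(0.00, 0.2), (0.10, 0.8)]                       # muscle activation stream
pose = [(0.01, (0.0, 1.2, 0.3)), (0.09, (0.1, 1.3, 0.3))]   # spatial stream
data = correlate(emg, pose)
```

Each resulting record ties an activation level to where in space the body part was at that moment, which is the activation data the method produces.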
A verifiable credential (VC) is encrypted, and one or more instructions are generated, at least one of which grants a scope of permission associated with the VC to a relying entity. The scope of permission includes permission to access a subset of data contained in the VC or a portion of data that can be derived from data contained in the VC. The encrypted VC and the one or more instructions are sent to the credential issuer or the relying entity to cause the credential issuer to generate a response containing the subset of data or the derived data and a proof code. The proof code is configured to prove the validity of the subset of data or the derived data.
G06F 21/34 - User authentication involving the use of external additional devices, e.g. dongles or smart cards
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
74.
VISUAL ANNOTATION TO EXTEND SOLUTION SPACE FOR MULTI-MODAL MODELS
Visual annotation is able to extend solution space for multi-modal large language models (MMLLMs), enabling task performance where previously not feasible. Examples receive a task image and a task prompt; perform a localization process on the task image, based on at least the task prompt, the localization process annotating a first feature within the task image using a first image annotation; retrieve first textual information for the first feature from a first selected data source of a plurality of data sources; generate a model prompt based on at least the task prompt, the model prompt comprising the task image with the first image annotation, the first textual information, and information linking the first feature with the first textual information; and generate, with an MMLLM, a task output based on at least the model prompt.
Some embodiments enhance the security of domain name resolution and other DNS operations, by automatically intercepting the DNS operation, determining an associated device identity or ascertaining an associated user identity, and enforcing a security policy based on at least the DNS operation and based on at least one of the identities. Some securable DNS operations include resolution requests, reverse lookups from IP addresses to domain names, DNS record accesses, mail server mappings, redirection, forwarding, and DNS record cache operations. Enforcing the policy includes, e.g., preventing a result requested by the DNS operation, permitting computational progress toward the requested result, allowing a different result, modifying a DNS record, or flushing a DNS record from a cache. In some embodiments, DNS operation security functionality utilizes or implements a conditional access security functionality, thereby providing, e.g., a secure conditional domain name resolution.
H04L 61/4511 - Network directories; Name-to-address mapping using standardised directories or standardised directory access protocols, using the domain name system [DNS]
76.
POWER STABILIZATION USING A CAPACITOR BANK CONNECTED TO A BI-DIRECTIONAL CONVERTER
Power draw stabilization is provided. A target power consumption of a source load is determined. The source load is generated by electronics supplied power by a primary power source through a power rail. The power rail is coupled to a capacitor bank by a bi-directional converter configured to smooth fluctuations in power drawn from the primary power source by performing mode switch operations. The mode switch operations include, in response to the source load exceeding a target power consumption, controllably switching to a second directional mode that directs current released from the capacitor bank to the power rail. The mode switch operations further include, in response to the source load dropping below the target power consumption, controllably switching the operational mode of the bi-directional converter to a first directional mode to direct current from the power rail into the capacitor bank.
H02J 3/32 - Arrangements for balancing of load in a network by storage of energy using batteries with converting means
H02J 9/06 - Circuit arrangements for emergency or stand-by power supply, e.g. for emergency lighting, in which the distribution system is disconnected from the normal source and connected to a standby source, with automatic change-over
H02J 3/46 - Arrangements for the parallel feeding of a single network by two or more generators, converters or transformers, controlling the sharing of power between the generators, converters or transformers
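The mode-switch logic above reduces to a simple comparison against the target power consumption. The sketch below is illustrative only; the target value and mode names are assumptions standing in for the converter's actual control loop.

```python
TARGET_W = 500.0   # assumed target power consumption of the source load

def select_mode(source_load_w):
    """Pick the bi-directional converter's mode from the instantaneous load."""
    if source_load_w > TARGET_W:
        return "discharge"   # second directional mode: capacitor bank -> power rail
    if source_load_w < TARGET_W:
        return "charge"      # first directional mode: power rail -> capacitor bank
    return "idle"

# Across a load transient, the primary source sees a draw smoothed toward TARGET_W.
modes = [select_mode(w) for w in (620.0, 500.0, 380.0)]
```

Releasing stored charge during peaks and absorbing surplus during troughs is what flattens the fluctuations seen by the primary power source.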
77.
FIXING USAGES OF DEPRECATED APIS USING LARGE LANGUAGE MODELS
Techniques for intelligently prompting an LLM to fix code are disclosed. A corpus of release notes for a set of libraries is accessed. The release notes include information describing deprecated or removed APIs associated with the libraries. The corpus is stored in a vector database. A code snippet is accessed. This snippet is identified as potentially using a deprecated API. The code snippet is used to identify a set of release notes from the vector database. These release notes are determined to satisfy a threshold level of similarity with the code snippet. An LLM prompt is built and is fed to the LLM. The LLM prompt instructs the LLM to update the code snippet based on the identified set of release notes. Output of the LLM is displayed. This output includes a proposed rewritten version of the code snippet.
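The retrieval and prompt-building steps can be sketched under strong simplifications: release notes and the code snippet are embedded as bag-of-words vectors, a cosine-similarity scan replaces a real vector database, and the similarity threshold is an assumed value. All of this is illustrative, not the disclosed system.

```python
import math
import re

def embed(text):
    """Toy bag-of-words embedding standing in for a learned embedding model."""
    vec = {}
    for tok in re.findall(r"[a-z_]+\d*|\d+", text.lower()):
        vec[tok] = vec.get(tok, 0) + 1
    return vec

def cosine(a, b):
    num = sum(a[t] * b.get(t, 0) for t in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

release_notes = [
    "v2.0 removed parse_string; use parse instead",
    "v2.1 improved logging performance",
]
snippet = "result = parse_string(raw)"
THRESHOLD = 0.1   # assumed similarity threshold

# Retrieve release notes similar enough to the snippet, then build the prompt.
relevant = [n for n in release_notes if cosine(embed(snippet), embed(n)) >= THRESHOLD]
prompt = ("Update this code so it no longer uses deprecated APIs.\n"
          "Release notes:\n" + "\n".join(relevant) + f"\nCode:\n{snippet}")
```

Only the note mentioning the deprecated API clears the threshold, so the LLM prompt carries exactly the context needed to propose a rewritten snippet.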
Disclosed systems and methods identify a data record set and determine whether one or more predetermined conditions exist for triggering analysis of one or more records in the data record set. Disclosed embodiments trigger the analysis only in response to determining that the predetermined conditions have been met. Upon triggering the analysis of the data record set, disclosed embodiments identify a subset of the data record set to undergo the analysis while refraining from performing the analysis on the remaining records in the data record set. Further, embodiments identify an analysis model based on a level of analysis to be performed and apply the analysis model to the subset of the data record set to identify any presence of sensitive data. Lastly, disclosed embodiments selectively perform a security process to the data record set in response to detecting the presence of the sensitive data.
The present disclosure relates to a vector processor implemented on programmable hardware (e.g., a field programmable gate array (FPGA) device). The vector processor includes a plurality of vector processor lanes, where each vector processor lane includes a vector register file with a plurality of register file banks and a plurality of execution units. Implementations described herein include features for optimizing resource availability on programmable hardware units and enabling superscalar execution when coupled with temporal single-instruction multiple data (SIMD) execution.
G06F 9/30 - Arrangements for executing machine instructions, e.g. instruction decode
G06F 9/38 - Concurrent instruction execution, e.g. pipeline or look ahead
G06F 15/78 - Architectures of general purpose stored program computers comprising a single central processing unit
G06F 15/80 - Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors
A click-to-script service enables developers of big-data job scripts to quickly see the underlying script operations from optimized execution plans. Once a big-data job is received, the disclosed examples compile it and generate tokens that are associated with each operation of the big-data job. These tokens may include the file name of the job, the line number of the operation, and/or an Abstract Syntax Tree (AST) node for the given operation. An original execution plan is optimized into an optimized execution plan, and the tokens for the original operations of the job script are assigned to the optimized operations of the optimized execution plan. The optimized execution plan is graphically displayed in an interactive manner such that users may view the optimized execution plan and click on its optimized operations to find the original operations of the job script.
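The token-carrying idea can be sketched as follows. This is a hypothetical model of the mechanism: operation names, the fusion example, and the token layout are illustrative, not taken from the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class Op:
    name: str
    tokens: list = field(default_factory=list)   # (file, line) provenance tokens

# Original plan: each operation is tagged with source-location tokens at compile time.
scan = Op("scan", tokens=[("job.script", 3)])
filt = Op("filter", tokens=[("job.script", 4)])
proj = Op("project", tokens=[("job.script", 5)])

# The optimizer fuses filter+project; the fused operation inherits both token
# sets, so clicking it in the plan viewer resolves to both originating lines.
fused = Op("filter+project", tokens=filt.tokens + proj.tokens)
optimized_plan = [scan, fused]
```

Because tokens survive optimization, a click on any optimized operation can be mapped straight back to the lines of the job script that produced it.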
A method of selecting data for privacy preserving machine learning comprises: storing training data from a first party, storing a machine learning model, and storing criteria from the first party or from another party. The method comprises filtering the training data to select a first part of the training data to be used to train the machine learning model and select a second part of the training data. The selecting is done by computing a measure, using the criteria, of the contribution of the data to the performance of the machine learning model.
G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
G06F 18/2115 - Selection of the most significant subset of features by evaluating different subsets according to an optimisation criterion, e.g. class separability, forward selection or backward elimination
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
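The contribution measure can be illustrated with a deliberately tiny stand-in: the "model" is a one-dimensional mean predictor, and a datum's contribution is the leave-one-out change in squared error on a held-out criterion set supplied by the other party. Every name and number here is an assumption chosen for illustration.

```python
def loo_contributions(train, holdout):
    """Contribution of each training value: error without it minus error with it."""
    def err(data):
        mean = sum(data) / len(data)
        return sum((x - mean) ** 2 for x in holdout) / len(holdout)
    base = err(train)
    return [err(train[:i] + train[i + 1:]) - base for i in range(len(train))]

train = [1.0, 1.1, 0.9, 5.0]   # 5.0 hurts performance on this holdout
holdout = [1.0, 1.05]          # criterion data from the first or another party
contrib = loo_contributions(train, holdout)
# First part: data whose removal would worsen the model; second part: the rest.
keep = [x for x, c in zip(train, contrib) if c > 0]
```

A positive contribution means removing the datum worsens held-out error, so it belongs in the first part of the training data; the outlier lands in the second part.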
82.
PRE-PROVISIONING SERVER HARDWARE FOR DEPLOYMENT ON AN EDGE NETWORK
The present disclosure relates to systems, methods, and computer-readable media for pre-provisioning server nodes of a server rack and causing the pre-provisioned server node to be deployed on an edge zone of a cloud computing system. In particular, systems described herein involve identifying server nodes based on a customer hardware deployment request and performing a series of pre-provision acts to the server nodes in accordance with the received hardware deployment request. For example, systems described herein pre-provision the hardware by configuring hardware and software on the server nodes, establishing communication with one or more control planes on a datacenter, and bringing the server nodes to a return to web (RTW) state. By pre-provisioning the hardware, the server hardware may be delivered and transitioned to a live state in an efficient manner and without jeopardizing security on the cloud.
Example implementations include a method, apparatus, and computer-readable medium configured for indexing records using a hybrid spatial index. The hybrid spatial index is an integer that indicates a spatial location of an object. The method, apparatus, or computer-readable medium may associate an integer spatial index with a record of an object. The integer spatial index indicates a stripe of cells covering the object and encodes two or more of: an indication that the spatial index indicates the stripe of cells, a direction of the stripe, a category of a width of the stripe, the width of the stripe based on the category, a start value of the stripe, or a second dimension of the stripe. The method, apparatus, or computer-readable medium may select the record based on the spatial index being within a range of spatial indices for a spatial predicate.
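The bit-level encoding can be sketched as follows. The field widths and layout below are assumptions chosen for illustration; the abstract leaves the exact packing open.

```python
STRIPE_FLAG = 1 << 31   # top bit: "this index encodes a stripe of cells"

def encode_stripe(direction, width_category, width, start, second_dim):
    """direction: 0/1; width_category: 0-3; width: 0-255; start/second_dim: 0-1023."""
    return (STRIPE_FLAG
            | (direction      << 30)
            | (width_category << 28)
            | (width          << 20)
            | (start          << 10)
            | second_dim)

def decode_stripe(index):
    assert index & STRIPE_FLAG, "not a stripe index"
    return {
        "direction":      (index >> 30) & 0x1,
        "width_category": (index >> 28) & 0x3,
        "width":          (index >> 20) & 0xFF,
        "start":          (index >> 10) & 0x3FF,
        "second_dim":     index & 0x3FF,
    }

idx = encode_stripe(direction=1, width_category=2, width=16, start=300, second_dim=7)
fields = decode_stripe(idx)
```

Packing the stripe description into one integer lets a record store a single index value, while range comparisons on that integer support spatial predicates.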
The disclosed embodiments provide a system for processing data. During operation, the system determines activity features for candidates that match parameters of a search from a moderator of an opportunity, wherein the activity features include an amount of interaction between a candidate and additional moderators and a frequency of visits by the candidate to a platform used to conduct the interaction between the candidate and the additional moderators. Next, the system applies a machine learning model to the activity features to produce activeness scores representing levels of activity of the candidates with respect to the platform. The system then generates a ranking of the candidates according to the activeness scores. Finally, the system outputs at least a portion of the ranking as a set of search results of the search.
Systems and methods for generating dynamic quick actions for an application in a web browser. The dynamic quick actions correspond to various functions of an application accessible via a web browser sidebar interface. When a hover event is detected in association with an icon of the application, a quick-actions card is generated that includes quick actions of the application from which the user can select. For instance, a selection of a quick action triggers the web browser to execute an action that causes the application function to be performed. Thus, application functions are able to be surfaced and controlled via a single input device selection (e.g., a mouse click).
Systems, methods, and instrumentalities are described herein related to online tuning of pen characterizations. Online tuning may be performed during use of a pen with a touch device. A digitizer may detect signals associated with the pen and noise. A touch controller may execute a signal characterization model that characterizes the detected signals and an online tuner that processes the detected signals to perform online tuning of the signal characterization model. Online testing may validate an online-tuned signal characterization model for online use. Tuning may be based on signal statistics, such as mean or average signal gradients in the detected signals. Signal characterization models may include positioning, signal locating, noise reduction, communication decoding, etc.
G06F 3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of two-dimensional [2D] relative movements between the pointing device or an operating part thereof and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
G06F 3/044 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means, by capacitive means
Ghost routing is a network verification technique that uses a portion of a production network itself to verify the impact of potential network changes. Ghost routing logically partitions the production network into a main network and a ghost network. The main network handles live traffic while the ghost network handles traffic generated for diagnostic purposes. The ghost network may have a network topology identical to the production network and may use the same hardware and software as the production network. An operator may implement a network configuration change on the ghost network and then use verification tools to verify that the network configuration change on the ghost network does not result in bugs. Verifying on the ghost network may not affect the main network. If the network operator verifies the network configuration change on the ghost network, the network operator may implement the network configuration change on the main network.
Distributed computing systems, devices, and associated methods of packet processing are disclosed herein. One example method includes receiving a packet having a header with a protocol field, a source address field, a source port field, a destination address field, and a destination port field individually containing a corresponding value. The method also includes extracting the values of the protocol field, the source address field, the source port field, the destination address field, and the destination port field, determining whether a first match action table (“MAT”) contains an entry indexed to the extracted values, and in response to determining that the first MAT does not contain an entry indexed to the extracted values, using a subset of the extracted values to identify an entry in a second MAT.
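The two-table flow can be sketched with plain dictionaries: an exact-match table keyed by the full five-tuple is consulted first, and on a miss a subset of the extracted values (here, protocol and destination address, chosen for illustration) indexes a second table. Table contents and action strings are hypothetical.

```python
# First MAT: exact match on the full five-tuple extracted from the header.
first_mat = {
    ("tcp", "10.0.0.1", 80, "10.0.0.9", 4321): "forward:vm1",
}
# Second MAT: keyed by a subset of the extracted values.
second_mat = {
    ("tcp", "10.0.0.9"): "forward:default-tcp",
}

def lookup(proto, src_ip, src_port, dst_ip, dst_port):
    five_tuple = (proto, src_ip, src_port, dst_ip, dst_port)
    action = first_mat.get(five_tuple)
    if action is None:
        # Miss in the first MAT: fall back to the subset-keyed second MAT.
        action = second_mat.get((proto, dst_ip))
    return action

hit = lookup("tcp", "10.0.0.1", 80, "10.0.0.9", 4321)    # exact match in first MAT
miss = lookup("tcp", "10.0.0.2", 80, "10.0.0.9", 9999)   # resolved via second MAT
```

The fallback keeps per-flow state small: only flows needing a specific action occupy the first MAT, while the second MAT supplies coarser defaults.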
A computing system is disclosed that includes a processor and memory. The memory stores instructions that, when executed by the processor, cause the processor to perform several acts. The acts include receiving, by a generative model, input set forth by a user of a client computing device that is in network communication with the computing system. The acts also include generating, by the generative model, a query based upon the input set forth by the user; providing the query to a search engine. The acts further include receiving, by the generative model and from the search engine, content identified by the search engine based upon the query. The acts additionally include generating, by the generative model, an output based upon a prompt, where the prompt includes the content identified by the search engine based upon the query. The acts also include transmitting the output to the client computing device for presentment to the user.
Disclosed systems and methods identify a data record set and determine whether one or more predetermined conditions exist for triggering analysis of one or more records in the data record set. Disclosed embodiments trigger the analysis only in response to determining that the predetermined conditions have been met. Upon triggering the analysis of the data record set, disclosed embodiments identify a subset of the data record set to undergo the analysis while refraining from performing the analysis on the remaining records in the data record set. Further, embodiments identify an analysis model based on a level of analysis to be performed and apply the analysis model to the subset of the data record set to identify any presence of sensitive data. Lastly, disclosed embodiments selectively perform a security process to the data record set in response to detecting the presence of the sensitive data.
A system and method for detecting anomalies and malicious processes by analyzing current consumption profiles is disclosed. The technique involves generating current consumption profiles that characterize the expected power draw for known software applications operating in various modes on a target device. At runtime, the current being consumed by actively running applications is measured and compared to the total expected current draw determined from the individual profiles. Deviations between the observed and expected consumption indicate potential interference from malware or other unwanted processes. Additionally, current fluctuation profiles are generated to model the characteristic transient current behavior when applications transition between operational modes. By comparing runtime current measurements during state changes to these expected transitional profiles, the system can identify aberrations indicative of background malware triggering during the transitions. The current monitoring approach provides an efficient way to detect anomalous behavior from unwanted processes with minimal overhead.
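The runtime comparison can be sketched as follows. The profile values and the 10% tolerance are assumptions for illustration; real profiles would be measured per application and operating mode on the target device.

```python
# Expected current draw (mA) per (application, mode), built ahead of time.
profiles_ma = {("browser", "active"): 220.0, ("sync", "idle"): 35.0}

def check_current(running, measured_ma, tolerance=0.10):
    """Compare measured draw to the sum of expected per-application draws."""
    expected = sum(profiles_ma[app_mode] for app_mode in running)
    deviation = abs(measured_ma - expected) / expected
    return "anomalous" if deviation > tolerance else "normal"

running = [("browser", "active"), ("sync", "idle")]   # expected total: 255 mA
ok = check_current(running, measured_ma=260.0)        # within tolerance
bad = check_current(running, measured_ma=340.0)       # possible hidden process
```

A sustained deviation beyond the tolerance suggests current being drawn by a process not accounted for in the profiles, such as background malware.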
Techniques for intelligently prompting an LLM to fix code are disclosed. A corpus of release notes for a set of libraries is accessed. The release notes include information describing deprecated or removed APIs associated with the libraries. The corpus is stored in a vector database. A code snippet is accessed. This snippet is identified as potentially using a deprecated API. The code snippet is used to identify a set of release notes from the vector database. These release notes are determined to satisfy a threshold level of similarity with the code snippet. An LLM prompt is built and is fed to the LLM. The LLM prompt instructs the LLM to update the code snippet based on the identified set of release notes. Output of the LLM is displayed. This output includes a proposed rewritten version of the code snippet.
Some embodiments enhance the security of domain name resolution and other DNS operations, by automatically intercepting the DNS operation, determining an associated device identity or ascertaining an associated user identity, and enforcing a security policy based on at least the DNS operation and based on at least one of the identities. Some securable DNS operations include resolution requests, reverse lookups from IP addresses to domain names, DNS record accesses, mail server mappings, redirection, forwarding, and DNS record cache operations. Enforcing the policy includes, e.g., preventing a result requested by the DNS operation, permitting computational progress toward the requested result, allowing a different result, modifying a DNS record, or flushing a DNS record from a cache. In some embodiments, DNS operation security functionality utilizes or implements a conditional access security functionality, thereby providing, e.g., a secure conditional domain name resolution.
H04L 61/4511 - Network directories; Name-to-address mapping using standardised directories and standardised directory access protocols, using the domain name system [DNS]
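The policy-enforcement step of the DNS entry above can be sketched as a lookup keyed on the operation type and an associated identity. The policy table, identities, and action names are illustrative assumptions, not the disclosed embodiment.

```python
# Hypothetical security policy: (DNS operation, identity) -> action.
POLICY = {
    ("resolve", "guest-device"): "block",
    ("resolve", "employee"): "allow",
    ("cache_flush", "guest-device"): "block",
}

def enforce(operation, identity, default="allow"):
    """Return the action the policy dictates for an intercepted DNS
    operation, based on the operation and the device/user identity."""
    return POLICY.get((operation, identity), default)

print(enforce("resolve", "guest-device"))     # blocked by policy
print(enforce("reverse_lookup", "employee"))  # no rule -> default action
```

In practice the enforced actions would also include modifying a DNS record, allowing a different result, or flushing a cache entry, as the abstract lists.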
94.
MEDIA SERVER PROXY THAT SWITCHES STREAMING MEDIA PROTOCOLS
A media server proxy switches streaming media protocols (“SMPs”) during streaming of media segments. The media server proxy receives a request, from a playback tool, according to a first SMP to provide information about outgoing media segments of a media sequence. The media server proxy generates the information about outgoing media segments and sends the information to the playback tool. The media server proxy also retrieves, from a remote server, incoming media content for the media sequence according to a second SMP different from the first SMP. The media server proxy assembles outgoing media segments based at least in part on the incoming media content. The media server proxy streams, to the playback tool, outgoing media segments according to the first SMP. In this way, the media server proxy can deliver media segments at very low latency, even when the first SMP typically has much higher latency.
H04L 65/61 - Streaming of multimedia packets for supporting one-way streaming services, e.g. Internet radio
H04L 65/65 - Network streaming protocols, e.g. real-time transport protocol [RTP] or real-time control protocol [RTCP]
H04L 65/80 - Arrangements, protocols or services in data packet switching networks to support real-time applications, responding to the quality of the services [QoS]
A management node local to a customer site of a private communications network stores a model. The model is a compact version of a visual language model remote from the customer site. A first screen shot of a dashboard of telemetry data measured from the private communications network is accessed. A prompt is formulated comprising the first screen shot and information to adapt the model to the private communications network via few-shot learning. The prompt is submitted to the model. An output is received from the model comprising textual information about anomalies or trends depicted in the first screen shot. The output is checked against data from a statistical model of the telemetry data, the statistical model being independent of the visual language model. In response to the check being successful, an action is triggered to manage the private communications network according to the output.
H04W 24/04 - Arrangements for maintaining operational condition
H04L 41/142 - Network analysis or design using statistical or mathematical methods
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks, comprising specially adapted graphical user interfaces [GUI]
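The cross-check step of the telemetry entry above can be sketched as follows: accept the vision model's claim of an anomaly only when an independent statistical model of the raw telemetry agrees. The z-score rule, the 3.0 threshold, and the sample values are illustrative assumptions.

```python
import statistics

def statistical_anomaly(series, latest, z_threshold=3.0):
    """Independent check: flag the latest sample if it lies more than
    z_threshold standard deviations from the historical mean."""
    mean = statistics.fmean(series)
    stdev = statistics.pstdev(series)
    if stdev == 0:
        return False
    return abs(latest - mean) / stdev > z_threshold

def validate(model_says_anomaly, series, latest):
    """Trigger a management action only when both the model output and
    the independent statistical model agree."""
    return model_says_anomaly and statistical_anomaly(series, latest)

history = [10.0, 11.0, 9.5, 10.5, 10.0]
print(validate(True, history, 30.0))  # both sources agree -> True
print(validate(True, history, 10.2))  # statistics disagree -> False
```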
A computer-implemented labeling technique generates a task description that describes a labeling task to be given to a language model. The technique then sends a prompt to the language model, which includes the task description and a particular item to be labeled. The technique receives a response provided by the language model in response to the prompt, which specifies a class assigned by the language model to the item. In some implementations, the task description specifies a group of suggested classes to be used in classifying the particular item. The task description also invites the language model to specify another class upon a finding that none of the group of suggested classes applies to the item. The technique also allows a user to stop and restart a labeling run at any point in the labeling run. Other aspects of the technique include consensus processing and weight updating.
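The prompt-construction step can be sketched as below, including the invitation to propose a new class when none of the suggested classes fits. The task description, class list, and item are invented for the example; consensus processing and weight updating are omitted.

```python
def build_labeling_prompt(task, classes, item):
    """Assemble a labeling prompt with suggested classes and an
    escape hatch for items the suggested classes do not cover."""
    return (
        f"Task: {task}\n"
        f"Suggested classes: {', '.join(classes)}\n"
        "If none of the suggested classes applies, propose a new class.\n"
        f"Item to label: {item}\n"
        "Answer with a single class name."
    )

prompt = build_labeling_prompt(
    "Classify customer feedback by topic",
    ["billing", "shipping", "product quality"],
    "The delivery arrived two weeks late.",
)
print(prompt)
```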
A system and method for preventing fraudulent access to user accounts and user data resulting from SIM swapping are disclosed. A server providing a unified communications service receives a message containing an authentication code addressed to a subscriber mobile device. The server determines a threat score by analyzing recent subscriber login data for suspicious patterns. If the threat score exceeds a threshold, the server divides the authentication code into two portions before sending. One portion is transmitted via regular SMS to the native messaging app on the subscriber's device. The other portion is sent through a subscriber messaging client of the unified communications service. This dual-channel delivery allows legitimate users to receive the full code while denying it to unauthorized users who may have swapped the subscriber's SIM card. Additional threat detection involves monitoring the status of the native messaging app and notifying subscribers of anomalies indicating potential SIM swapping.
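The delivery decision can be sketched as follows. This is a minimal illustration under stated assumptions: the even split, the 0.7 threshold, and the sample code are invented, and real threat scoring would analyze login patterns rather than take a precomputed score.

```python
def split_code(code):
    """Divide the authentication code into two portions, one per channel."""
    mid = len(code) // 2
    return code[:mid], code[mid:]

def deliver(code, threat_score, threshold=0.7):
    """Return (sms_portion, client_portion). Below the threshold the full
    code goes over SMS; above it, delivery is split across both channels
    so a SIM swapper intercepting SMS sees only half the code."""
    if threat_score > threshold:
        return split_code(code)
    return code, ""

print(deliver("483920", 0.9))  # high threat -> ('483', '920')
print(deliver("483920", 0.2))  # low threat  -> ('483920', '')
```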
The techniques disclosed herein enable an autonomous agent to interpret an input dataset and orchestrate a suite of software modules to perform a computational task on a representation of a chemical material. The input dataset includes a prompt defining a computational task to be performed on a chemical material. Moreover, the input dataset includes data defining a chemical included in the chemical material, molecular descriptors describing the chemical and/or the chemical material, and an external variable. The agent analyzes the benefits and drawbacks of each model within the context of the computational task to determine a technique for performing the computational task. Accordingly, the agent formulates a chain of calls invoking the functionality of data processing tools and models to perform the computational task responsive to the prompt.
Techniques for using a sensor to perform laser signal decoding are disclosed. The sensor may be a global shutter sensor or a rolling shutter sensor. The sensor generates a first set of images while operating in a first mode. In response to detecting a laser signal in the first set of images, the sensor is caused to operate in a second mode. The laser signal includes an embedded frequency signal component and repeats at a periodic rate. While the sensor is operating in the second mode, the sensor generates a second set of images, which capture an entire period of the laser signal. From the second set of images, the embedded frequency signal component is determined. A decoding operation is performed using the embedded frequency signal component.
H04N 23/56 - Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
H04N 23/68 - Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for vibrations of the camera body
100.
Resource-Efficient and Time-Efficient Prompting of a Language Model to Invoke Functions
A technique sends a first prompt to a language model that specifies selector information. The selector information provides a summary of a group of functions that are capable of being invoked. The language model responds by choosing one or more functions from the group of functions. The technique then sends a second prompt to the language model that specifies more detailed information regarding just the function(s) that have been identified by the language model. The language model responds by providing invocation information for each of the functions, such as properly formatted API messages. The technique then invokes the function(s) based on the invocation information. The technique reduces the size of each prompt sent to the language model, which makes efficient use of resources and improves the quality of the language model's output results.
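The two-stage flow can be sketched as below. The function registry, prompt wording, and the stub standing in for the language model's selection step are all illustrative assumptions; the point is that the first prompt carries only summaries and the second carries full detail for just the chosen function(s).

```python
# Hypothetical registry: short summaries for stage one, full detail for stage two.
FUNCTIONS = {
    "get_weather": {
        "summary": "Look up the current weather for a city",
        "detail": {"endpoint": "/weather", "params": ["city"]},
    },
    "send_email": {
        "summary": "Send an email to a recipient",
        "detail": {"endpoint": "/email", "params": ["to", "body"]},
    },
}

def first_prompt(query):
    """Stage one: compact selector information only (summaries, no detail)."""
    lines = [f"{name}: {spec['summary']}" for name, spec in FUNCTIONS.items()]
    return f"Query: {query}\nChoose from:\n" + "\n".join(lines)

def second_prompt(query, chosen):
    """Stage two: detailed specs for only the chosen function(s)."""
    details = {name: FUNCTIONS[name]["detail"] for name in chosen}
    return f"Query: {query}\nFormat API calls using: {details}"

def stub_llm_select(prompt):
    """Stand-in for the model's selection step in this sketch."""
    return ["get_weather"] if "weather" in prompt.lower() else []

query = "What's the weather in Oslo?"
chosen = stub_llm_select(first_prompt(query))
p2 = second_prompt(query, chosen)
print(chosen)  # only the relevant function advances to stage two
```

Keeping detail out of the first prompt is what bounds prompt size: stage one scales with the number of functions times a one-line summary, while stage two scales only with the functions actually selected.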