09 - Scientific and electrical apparatus and instruments
38 - Telecommunications services
42 - Scientific, technological and industrial services, research and design
Goods and services
Downloadable software in the nature of a mobile application
for business and social networking, employment, careers and
recruiting; downloadable computer software that enables
members to access, interact with, collect, edit, organize,
modify, bookmark, store, upload, manage, track, share, and
publish data, information, databases, and customized content
in the fields of business, social networking, employment,
careers, and recruiting; downloadable computer software for
searching, accessing, displaying, sharing, reviewing,
creating, downloading, uploading, designing, modifying,
reproducing, transmitting, and managing newsletters,
research reports, blogs, articles, images, graphics, fonts,
photographs, text, videos, audiovisual and multimedia
content, and data in the fields of business, social
networking, employment, careers, and recruiting;
downloadable job searching, sourcing and recruiting computer
software using artificial intelligence (AI) for members on a
social networking, employment, and business networking
communication platform; downloadable chatbot software using
artificial intelligence (AI) for members on a social
networking, employment, and business networking
communication platform; downloadable writing and
communication software using artificial intelligence (AI)
for assisting platform members with employment, job sourcing
and recruiting, lead generation, and business-related
inquiries; content creation software using artificial
intelligence for members on a social networking, employment,
and business networking communication platform; downloadable
computer software using artificial intelligence (AI) for
employee training and professional development; downloadable
computer software using artificial intelligence (AI) for
online courses.

Electronic messaging services; providing online information,
forums, groups, and communities for transmission of messages
among members and users in the fields of employment,
staffing, recruiting, career development, professional
networking, and training, as well as concerning job
searching, and general business topics (term considered too
vague by the International Bureau pursuant to Rule 13 (2)
(b) of the Regulations); provision of online forums, groups,
and communities for transmission of messages among computer
users concerning job searching, professional networking, and
general business topics, as well as for employment,
staffing, recruiting, career development, professional
training, and educational course materials (term considered
too vague by the International Bureau pursuant to Rule 13
(2) (b) of the Regulations); providing access to computer,
electronic, and online databases in the field of business,
social networking, employment, careers and recruiting.

Providing temporary use of on-line non-downloadable software
for business and social networking, employment, careers and
recruiting; providing a website featuring temporary use of
non-downloadable software for business and social
networking, employment, careers and recruiting (term
considered too vague by the International Bureau pursuant to
Rule 13 (2) (b) of the Regulations); providing an online
non-downloadable computer software platform for social
networking, employment, careers, recruiting and business;
providing customized web pages featuring member-defined
information, audio, text, video, and images; providing
temporary use of on-line non-downloadable software that
enables members to access, interact with, collect, edit,
organize, modify, bookmark, store, upload, manage, track,
share, and publish data, information, databases, and
customized content in the fields of business, social
networking, employment, careers, and recruiting; providing
temporary use of on-line non-downloadable software for
searching, accessing, displaying, sharing, reviewing,
creating, downloading, uploading, designing, modifying,
reproducing, transmitting, and managing newsletters,
research reports, blogs, articles, images, graphics, fonts,
photographs, text, videos, audiovisual and multimedia
content, and data in the fields of business, social
networking, employment, careers, and recruiting; providing a
website featuring temporary use of non-downloadable computer
software featuring electronic publications in the nature of
newsletters, research reports, articles and white papers on
topics of professional interest in the field of business,
social networking, employment, careers and recruiting (term
considered too vague by the International Bureau pursuant to
Rule 13 (2) (b) of the Regulations); providing temporary use
of on-line non-downloadable computer software that provides
web-based access to applications and services through a
web-operating system and portal interface; providing
temporary use of on-line non-downloadable computer software
for use in business analytics and database management;
providing temporary use of on-line non-downloadable software
for tracking and analyzing user interaction with customized
content; providing temporary use of on-line non-downloadable
software for providing online courses, seminars, interactive
classes, educational instruction, and course materials;
providing temporary use of on-line non-downloadable software
for accessing internet search engines featuring information
for obtaining job listings, resume postings, and other job
searches; providing non-downloadable job searching, sourcing
and recruiting online software using artificial intelligence
(AI) for members on a social networking, employment, and
business networking communication platform; providing
non-downloadable chatbot online software using artificial
intelligence (AI) for members on a social networking,
employment, and business networking communication platform;
non-downloadable writing and communication online software
using artificial intelligence (AI) for assisting platform
members with writing, communicating, and with employment,
job, recruiting, lead generation, and business-related
inquiries (term considered too vague by the International
Bureau pursuant to Rule 13 (2) (b) of the Regulations);
non-downloadable content creation online software using
artificial intelligence (AI) for members on a social
networking, employment, and business networking
communication platform (term considered too vague by the
International Bureau pursuant to Rule 13 (2) (b) of the
Regulations); providing non-downloadable online software
using artificial intelligence (AI) for employee training and
professional development; providing non-downloadable online
computer software using artificial intelligence (AI) for
providing online courses, seminars, interactive classes,
educational instruction, and course materials.
A method, computer program product, and computing system for defining one or more encoded symbols for data included within each of a plurality of memory dies of a memory module to define one or more groups of encoded symbols; generating Reed-Solomon parities for each group of encoded symbols; and recovering one or more portions of the data included within each of the plurality of memory dies of the memory module in the event of data corruption or die failure using one or more of the encoded symbols and the Reed-Solomon parities.
G06F 11/10 - Error detection or correction by redundancy in data representation, e.g. by using checking codes, by adding special bits or symbols to the coded information, e.g. parity check, casting out nines or elevens
G06F 11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
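The abstract above describes grouping encoded symbols across memory dies and generating Reed-Solomon parities so data can be rebuilt after corruption or a die failure. As a much-simplified sketch of the same erasure-recovery idea — using a single XOR parity symbol per group instead of full Reed-Solomon arithmetic, with all names hypothetical — the grouping and recovery steps might look like:

```python
# Simplified stand-in for the die-failure recovery idea: each group of
# per-die data symbols gets one XOR parity symbol. A full Reed-Solomon
# code tolerates multiple erasures; XOR parity tolerates exactly one.

def make_parity(group):
    """Compute a parity symbol for one group of per-die byte values."""
    p = 0
    for symbol in group:
        p ^= symbol
    return p

def recover(group_with_erasure, parity, missing_index):
    """Rebuild the symbol lost when one die fails (erasure at missing_index)."""
    p = parity
    for i, symbol in enumerate(group_with_erasure):
        if i != missing_index:
            p ^= symbol
    return p

# One "group of encoded symbols": one byte from each of four dies.
dies = [0x12, 0x34, 0x56, 0x78]
parity = make_parity(dies)

# Simulate failure of die 2 and recover its data from survivors plus parity.
recovered = recover(dies, parity, missing_index=2)
```

A real implementation would operate over Galois-field symbols so that several simultaneous die failures remain recoverable.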
3.
EVALUATING COMPUTATIONAL REASONING PERFORMANCE OF GENERATIVE ARTIFICIAL INTELLIGENCE MODELS
Systems and methods evaluate computational reasoning performance of generative artificial intelligence (GAI) models. Both a factual prompt and a counterfactual prompt are submitted to first and second GAI models, generating first factual and counterfactual outputs for the first GAI model and second factual and counterfactual outputs for the second GAI model. Probability of necessity (PN) and probability of sufficiency (PS) values are computed for both the first and second GAI models based on their associated factual and counterfactual outputs. The computational reasoning performance of the first GAI model relative to the second GAI model is compared based on the PN and PS values. One of the first and second GAI models is selected based on the comparison, and a target prompt is submitted to the selected model.
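The abstract does not spell out the PN/PS estimators, so the following is a hedged sketch under one common empirical reading: each model's outputs are reduced to booleans per factual/counterfactual prompt pair, PN is estimated as the fraction of pairs where the outcome disappears under the counterfactual prompt (given it appeared under the factual one), PS as the converse, and the comparison rule (larger PN + PS wins) is an assumption for illustration.

```python
# Hedged sketch: empirical probability-of-necessity (PN) and
# probability-of-sufficiency (PS) estimates from paired factual /
# counterfactual model runs. Outcomes are booleans; the estimators and
# selection rule are illustrative assumptions, not taken from the abstract.

def pn_ps(outcomes):
    """outcomes: list of (y_factual, y_counterfactual) booleans per prompt pair."""
    n_f = sum(1 for yf, _ in outcomes if yf)            # outcome present, factual
    n_cf0 = sum(1 for _, ycf in outcomes if not ycf)    # outcome absent, counterfactual
    flips = sum(1 for yf, ycf in outcomes if yf and not ycf)
    pn = flips / n_f if n_f else 0.0                    # necessity estimate
    ps = flips / n_cf0 if n_cf0 else 0.0                # sufficiency estimate
    return pn, ps

def select_model(outcomes_a, outcomes_b):
    """Pick the model whose PN + PS is larger (one plausible comparison rule)."""
    pa, pb = pn_ps(outcomes_a), pn_ps(outcomes_b)
    return "A" if sum(pa) >= sum(pb) else "B"

model_a = [(True, False), (True, False), (True, True), (False, False)]
model_b = [(True, True), (True, True), (False, False), (True, False)]
choice = select_model(model_a, model_b)
```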
The present disclosure proposes a method, an apparatus and a computer program product for entity extraction based on edge computing. A web document may be obtained. A text feature of the web document may be identified. A visual feature corresponding to the text feature may be identified. An entity type sequence corresponding to the web document may be extracted based on the text feature and the visual feature.
Systems are configured to control transitions and displays of interface objects that are selectively moved across boundary transitions of physical display screens within augmented-reality scenes. In some instances, when a virtual object instance of an interface object is moved into the bounded area of a physical display screen within an augmented-reality scene a corresponding real-world object instance of the interface object is generated and rendered within the bounded display area of the display screen. In other instances, when user input is received for moving a real-world object instance of an interface object outside of the bounded display area of a display screen within an augmented-reality scene, a corresponding virtual object instance of the interface object is generated and rendered outside of the display screen within the augmented-reality scene.
G06F 3/04812 - Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of the device orientation or free movement in a three-dimensional [3D] space, e.g. 3D mice, six-degrees-of-freedom [6-DOF] pointers using gyroscopes, accelerometers or tilt sensors
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
Methods, systems, and apparatuses include receiving an event notification for an event associated with a node of a graph network. Event data including node state data and a timestamp is generated using the event notification. A node state change is generated for the node by applying a neural network to the node state data and the timestamp. An input sequence for a generative machine learning model is generated, the input sequence including the node state change and the node state data. Updated node state data is computed for the node by applying the generative machine learning model to the input sequence. A node encoding is generated for the node using the updated node state data. Input data for a trained machine learning model is generated using the node encoding. An output of the trained machine learning model is generated by applying the trained machine learning model to the input data.
According to examples, a thermally separating power coupling device includes a housing that thermally and electrically isolates a high-temperature superconductor (HTS) from an electrically conductive cable. The power coupling device includes a power coupling system that includes a rotatable shaft having a motor side and a generator side. On the motor side, a set of motor magnets is attached to the shaft and a set of motor coils is positioned near the set of motor magnets. On the generator side, a set of generator magnets is attached to the shaft and a set of generator coils is positioned near the set of generator magnets. When electrical current is supplied from the HTS to the motor coils, the motor coils rotate, thus causing the shaft to rotate. In addition, as the shaft rotates, the generator coils produce an electrical current that is outputted to the electrically conductive cable.
H02G 15/34 - Cable fittings for cryogenic cables
H01F 27/04 - Leading of conductors or axles through casings, e.g. for tap-changing arrangements
H02K 55/04 - Dynamo-electric machines having windings operating at cryogenic temperatures of the synchronous type with rotating field windings
8.
Performing Computing Tasks Using Decoupled Models for Different Data Types
A technique executes tasks using a data store of machine-trained models. The data store specifically includes a subset of encoder-type machine-trained models for converting input data items having different input data types into respective embeddings in a vector space, and a subset of decoder-type machine-trained models for converting embeddings in the same vector space into data items having respective different output data types. When executing a particular task that involves one or more data types, the technique selects one or more machine-trained models that match those data types. In some implementations, the technique provides a clipboard store for storing embeddings produced by the encoder-type machine-trained models and consumable by the decoder-type machine-trained models. The technique includes provisions for ensuring that any decoder-type machine-trained model is capable of processing embeddings produced by different versions of the encoder-type machine-trained models.
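The decoupled encoder/decoder store above can be sketched as a registry keyed by data type, with a clipboard holding embeddings between the encode and decode steps. The model internals are faked with trivial callables, and every class, method, and type name here is illustrative rather than drawn from the described technique:

```python
# Hedged sketch of the decoupled-model data store: encoders map typed
# inputs into a shared vector space, decoders map vectors back out, and
# a "clipboard" parks intermediate embeddings between steps.

class ModelStore:
    def __init__(self):
        self.encoders = {}   # input data type  -> callable(item) -> vector
        self.decoders = {}   # output data type -> callable(vector) -> item
        self.clipboard = {}  # slot name        -> stored embedding

    def register_encoder(self, dtype, fn):
        self.encoders[dtype] = fn

    def register_decoder(self, dtype, fn):
        self.decoders[dtype] = fn

    def run_task(self, item, in_type, out_type, slot="default"):
        """Encode, park the embedding on the clipboard, then decode."""
        vec = self.encoders[in_type](item)
        self.clipboard[slot] = vec
        return self.decoders[out_type](vec)

store = ModelStore()
# Toy "text" encoder: characters -> list of code points (the shared space).
store.register_encoder("text", lambda s: [ord(c) for c in s])
# Toy "text" decoder: inverse mapping out of the same space.
store.register_decoder("text", lambda v: "".join(chr(x) for x in v))

result = store.run_task("hi", in_type="text", out_type="text")
```

Because every encoder targets the same vector space, any registered decoder can consume an embedding left on the clipboard by any encoder — which is the decoupling the abstract describes.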
Techniques are described herein that are capable of provisioning a trusted execution environment (TEE) based on (e.g., based at least in part on) a chain of trust that includes a platform on which the TEE executes. Any suitable number of TEEs may be provisioned. For instance, a chain of trust may be established from each TEE to the platform on which an operating system that launched the TEE runs. Any two or more TEEs may be launched by operating system(s) running on the same platform or by different operating systems running on respective platforms. Once the chain of trust is established for a TEE, the TEE can be provisioned with information, including but not limited to policies, secret keys, secret data, and/or secret code. Accordingly, the TEE can be customized with the information without other parties, such as a cloud provider, being able to know or manipulate the information.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems, at program execution time, e.g. stack integrity, buffer overflow or preventing unwanted data erasure, by executing in a restricted environment, e.g. sandbox or secure virtual machine
G06F 21/74 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer, to assure secure computing or processing of information operating in dual or compartmented mode, i.e. with at least one secure mode
A user query for information regarding data of a codebase is answered by a large language model given a prompt that includes examples of code segments from the codebase that are similar to the user query. The code segments from the codebase are associated with metadata that includes both natural language text and source code. The search for the examples of code segments from the codebase is based on embeddings of code segments and associated metadata that are closely similar to an embedding of the user query and context.
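The retrieval step described above — finding codebase snippets whose embeddings are closest to the query embedding so they can be included as prompt examples — can be sketched with cosine similarity over toy vectors. In practice the embeddings would come from an embedding model; the vectors and snippet texts below are made up:

```python
# Hedged sketch: rank codebase snippets by cosine similarity of their
# embeddings to the query embedding, and keep the top k as prompt examples.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k_examples(query_vec, snippets, k=2):
    """snippets: list of (code_text, embedding). Returns the k best code_texts."""
    ranked = sorted(snippets, key=lambda s: cosine(query_vec, s[1]), reverse=True)
    return [code for code, _ in ranked[:k]]

snippets = [
    ("def add(a, b): ...", [1.0, 0.0, 0.0]),
    ("def read_csv(path): ...", [0.0, 1.0, 0.1]),
    ("def sum_list(xs): ...", [0.9, 0.1, 0.0]),
]
examples = top_k_examples([1.0, 0.05, 0.0], snippets, k=2)
```

The selected examples, together with their natural-language metadata, would then be concatenated into the prompt handed to the large language model.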
A method, computer program product, and computing system for defining one or more groups for data included within one or more groups of memory dies included within a memory module, thus defining a first group of parity bit groups; defining a parity bit for each memory die included within the one or more groups of memory dies, thus defining a plurality of parity bits; and defining one or more parity bit groups for the plurality of parity bits, thus defining a second group of parity bit groups.
G06F 11/10 - Error detection or correction by redundancy in data representation, e.g. by using checking codes, by adding special bits or symbols to the coded information, e.g. parity check, casting out nines or elevens
A local anomaly detection model for monitoring a local entity set of a network system is generated by applying a transformation function to a global anomaly detection model for the network system, without re-training the global anomaly detection model for the local entity set. The global anomaly detection model, which may be generated via unsupervised learning methods, includes global vector(s) of metrics and a global healthy vector space. The transformation function is estimated and applied to the global anomaly detection model to generate the local anomaly detection model, which includes local vector(s) of metrics pertaining to the local entity set and a local healthy vector space. Responsive to a determination that the local vector(s) of metrics comprises one or more anomalous data points outside of the local healthy vector space, an alert regarding the one or more anomalous data points can be generated and output.
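The abstract's key idea — deriving a local model by transforming the global one rather than retraining — can be illustrated with a deliberately simple setup: the global healthy space is an interval per metric, and the transformation is an affine shift/scale estimated from a small local sample. Both the interval form and the affine transform are assumptions made for this sketch:

```python
# Hedged sketch: map a global "healthy" interval onto a local entity set
# via a shift/scale estimated from local data, then flag points that fall
# outside the transformed interval. No retraining of the global model.
import statistics

def estimate_transform(global_sample, local_sample):
    """Affine map taking global-scale metric values onto the local scale."""
    g_mu, g_sd = statistics.mean(global_sample), statistics.pstdev(global_sample)
    l_mu, l_sd = statistics.mean(local_sample), statistics.pstdev(local_sample)
    scale = l_sd / g_sd if g_sd else 1.0
    shift = l_mu - scale * g_mu
    return lambda x: scale * x + shift

def local_healthy_interval(global_interval, transform):
    lo, hi = transform(global_interval[0]), transform(global_interval[1])
    return (min(lo, hi), max(lo, hi))

def anomalies(points, interval):
    lo, hi = interval
    return [p for p in points if p < lo or p > hi]

global_interval = (40.0, 60.0)   # global healthy range for one metric
g_sample = [45, 50, 55, 50]      # metric values on the global scale
l_sample = [90, 100, 110, 100]   # same metric observed on the local entity set
t = estimate_transform(g_sample, l_sample)
interval = local_healthy_interval(global_interval, t)
alerts = anomalies([95, 130, 70], interval)
```

In the described system the healthy space is a vector space rather than a one-dimensional interval, but the transform-instead-of-retrain structure is the same.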
According to examples, an apparatus includes a processor that may obtain and parse a pipeline code to determine how variables of the pipeline code relate to each other, and replace the variables in the parsed pipeline code with the values that the variables respectively represent, in which the values correspond to pipeline run sources and pipeline run targets of API calls. The processor may also identify how the pipeline run targets interact with the pipeline run sources of the API calls and build a dependency graph that maps the pipeline run sources with the pipeline run targets. Runtime resources may thus be mapped to source code in a pipeline run to provide visibility into actions carried out by the pipeline. This visibility may be used to determine whether there are security vulnerabilities in the pipeline run sources and/or targets such that the vulnerabilities may be addressed.
Examples are disclosed relating to a method for calibrating a depth camera without requiring an external target. In one example, an environment is illuminated using an illumination source of the depth camera. The illumination source is configured to output modulated structured light comprising a pattern of dots. A raw depth image of illumination reflected from the environment is acquired via an optical sensor of the depth camera. Observed locations of dots in the pattern of dots are identified in the raw depth image. An objective function is applied to the observed locations of the dots in the pattern of dots in the raw image to generate a set of distortion correction parameters. A distortion corrected depth image generated based at least on translating pixel locations of pixels of the raw depth image according to the set of distortion correction parameters is output.
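The calibration step above applies an objective function to observed dot locations to recover distortion-correction parameters. As a hedged, heavily reduced sketch — a single radial scale factor stands in for a full lens-distortion model, and brute-force grid search stands in for a real optimiser — the fitting loop might look like:

```python
# Hedged sketch: choose the distortion-correction parameter minimising a
# sum-of-squared-distances objective between corrected observed dots and
# the expected dot pattern. The one-parameter model is an assumption.

def correct(points, scale):
    """Apply a one-parameter correction to observed dot locations."""
    return [(x * scale, y * scale) for x, y in points]

def objective(observed, expected, scale):
    """Sum of squared distances between corrected and expected dots."""
    return sum((cx - ex) ** 2 + (cy - ey) ** 2
               for (cx, cy), (ex, ey) in zip(correct(observed, scale), expected))

def calibrate(observed, expected):
    """Brute-force search over the scale parameter minimising the objective."""
    candidates = [0.8 + 0.01 * i for i in range(41)]   # 0.80 .. 1.20
    return min(candidates, key=lambda s: objective(observed, expected, s))

expected = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]  # ideal dot grid
observed = [(x / 1.1, y / 1.1) for x, y in expected]         # shrunk by the lens
best_scale = calibrate(observed, expected)
```

The appeal of the described approach is that the expected pattern comes from the camera's own structured-light illumination, so no external calibration target is needed.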
A data processing system implements receiving, via a user interface of a client device, an image; constructing, via a prompt construction unit, a first prompt by appending the image to a first instruction string including instructions to a generative model; providing the first prompt to the generative model; generating, by the generative model and according to the first prompt, a depth map using an intensity of darkness of each pixel of the image as a respective depth of the pixel in a digital three-dimensional (3D) transparent object; digitally engraving, by the generative model and according to the first prompt, each pixel of the image in the 3D transparent object based on the respective depth in the depth map into a digital 3D engraved object; receiving the digital 3D engraved object from the generative model; and providing the digital 3D engraved object to display on the user interface of the client device.
According to examples, an apparatus includes processing units that execute threads using a hybrid locking/queuing operation for efficient processing of work units with mutual exclusion of the work units. Under the hybrid locking/queuing operation, a processing unit determines that a first thread is to process a first work unit, in which the first work unit is under protection of a hybrid exclusion object (HEO), and in which the HEO includes an HEO queue. In addition, the processing unit places a lock on the HEO, determines whether the HEO is owned by a thread, and based on a determination that the HEO is owned by a second thread, adds the first work unit to the HEO queue, and releases the lock on the HEO. The second thread assigns ownership of the HEO to the first work unit when the first work unit reaches a top of the HEO queue.
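The hybrid locking/queuing behaviour above — take a short lock on the HEO state, and either become the owner or hand the work unit to the current owner's queue — resembles a combining-lock pattern. The sketch below is a hedged single-process illustration; class, method, and field names are invented, and a production version would need careful memory-ordering and fairness work:

```python
# Hedged sketch of a hybrid exclusion object (HEO): a short internal lock
# protects the owner/queue state; if the HEO is already owned, the new
# work unit is queued for the owner to drain rather than blocking the
# submitting thread.
import threading
from collections import deque

class HybridExclusionObject:
    def __init__(self):
        self._lock = threading.Lock()   # short-held lock on HEO state
        self._owner = None              # thread currently draining work
        self._queue = deque()           # pending work units

    def submit(self, work, results):
        me = threading.current_thread()
        with self._lock:
            if self._owner is not None:      # owned: enqueue and return at once
                self._queue.append(work)
                return
            self._owner = me                 # unowned: take ownership
            self._queue.append(work)
        while True:                          # owner drains the queue
            with self._lock:
                if not self._queue:
                    self._owner = None       # queue empty: release ownership
                    return
                item = self._queue.popleft()
            results.append(item())           # run each work unit outside the lock

heo = HybridExclusionObject()
out = []
heo.submit(lambda: "unit-1", out)
heo.submit(lambda: "unit-2", out)
```

The payoff is that contending threads never sleep on the exclusion object: they either become the owner or deposit work and move on.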
The disclosed concepts relate to providing help sessions for video game players. For instance, a help session starting state can be obtained from a video game session by a particular video game player. The help session starting state can be loaded into a help session. During the help session, inputs received from a client device of a video game helper can be directed to the help session. After the help session, an updated help session state can be obtained. In some cases, the particular video game player can choose to accept the updated help session state and proceed with video game play from that state. In other cases, the particular video game player can choose to reject that state and return back to the help session starting state.
The disclosed concepts relate to automatically identifying conditions in a video game to trigger a help session. When a help session is triggered, another video game player or machine learning model can temporarily take over for the current video game player until an ending condition is reached. Help session triggering can be designated by evaluation of prior gameplay data of other video game players to identify in-game conditions that may tend to cause user disengagement, such as in-game conditions that are associated with difficult in-game goals or negative in-game consequences.
A63F 13/5375 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or to display a laser sight in a shooting game, using indicators, e.g. showing the condition of a game character on screen, for graphically or textually suggesting an action, e.g. by displaying an arrow indicating a turn in a driving game
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A63F 13/87 - Communicating with other players, e.g. by e-mail or instant messaging
G06F 40/40 - Processing or translation of natural language
G06V 20/40 - Scenes; Scene-specific elements in video content
G06V 20/62 - Text, e.g. number plates, overlay texts or captions on TV images
The disclosed concepts relate to training a machine learning model to provide help sessions during a video game. For instance, prior video game data from help sessions provided by human users can be filtered to obtain training data. Then, a machine learning model can be trained using approaches such as imitation learning, reinforcement learning, and/or tuning of a generative model to perform help sessions. Then, the trained machine learning model can be employed at inference time to provide help sessions to video game players.
A63F 13/5375 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or to display a laser sight in a shooting game, using indicators, e.g. showing the condition of a game character on screen, for graphically or textually suggesting an action, e.g. by displaying an arrow indicating a turn in a driving game
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. by means of tools specially adapted for game development or a game-integrated level editor, adapting to or learning from player actions, e.g. skill level adjustment or storing successful combat sequences for re-use
A63F 13/87 - Communicating with other players, e.g. by e-mail or instant messaging
The disclosed concepts relate to managing help sessions within a video game based on age information associated with a video game player. For example, systems and associated methods can perform age-based restriction of a help session using a variety of techniques. For instance, automated helpers can be selected for help sessions involving children, or messaging between a human helper and a child can be restricted using a range of communication techniques described herein.
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A63F 13/5375 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or to display a laser sight in a shooting game, using indicators, e.g. showing the condition of a game character on screen, for graphically or textually suggesting an action, e.g. by displaying an arrow indicating a turn in a driving game
A63F 13/87 - Communicating with other players, e.g. by e-mail or instant messaging
G06F 40/40 - Processing or translation of natural language
G06V 20/40 - Scenes; Scene-specific elements in video content
21.
TECHNIQUES FOR COLLISION HANDLING IN GATEWAY SESSIONS IN WIRELESS COMMUNICATIONS
Described are examples for modifying a packet data network (PDN) gateway used for a session in wireless communications where the PDN gateway can receive, for a mobility management entity (MME) or evolved packet data gateway (ePDG), a request to restore an existing session with the first PDN gateway, receive, from a database, and based on attempting to restore the existing session for the MME or ePDG, a read lock failure based on a restoration of the existing session established for the MME or ePDG and a second PDN gateway, and send, for the MME or ePDG, a rejection message in response to the request, where the rejection message includes a cause code indicating to restore the existing session with the second PDN gateway. Other examples relate to the MME or ePDG receiving the rejection message and sending the request to restore the existing session with the second PDN gateway.
Providing arbitration for resource sharing using channel priority differences in processor-based devices is disclosed herein. In one exemplary embodiment, a processor-based device comprises a data allocation circuit that is communicatively coupled to one or more ingress channels and one or more egress channels. The data allocation circuit assigns an ingress channel priority to each ingress channel, and assigns an egress channel priority to each egress channel. The data allocation circuit generates one or more channel pairs by iteratively identifying an unpaired egress channel having a highest egress channel priority, calculating absolute differences between each ingress channel priority of each unpaired ingress channel and the egress channel priority of the unpaired egress channel, and allocating the unpaired egress channel to an unpaired ingress channel that corresponds to the smallest absolute difference as a channel pair. The data allocation circuit then performs one or more transactions using the corresponding one or more channel pairs.
H04L 47/6275 - Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
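The pairing rule in the arbitration abstract above — repeatedly take the unpaired egress channel with the highest priority and pair it with the unpaired ingress channel whose priority is closest in absolute difference — translates almost directly into code. Priority values and tie-breaking behaviour below are assumptions for illustration:

```python
# Sketch of the channel-pairing arbitration: highest-priority unpaired
# egress channel first, matched to the unpaired ingress channel with the
# smallest absolute priority difference.

def pair_channels(ingress_priorities, egress_priorities):
    """Both args: dict of channel name -> priority. Returns egress -> ingress pairs."""
    unpaired_in = dict(ingress_priorities)
    unpaired_out = dict(egress_priorities)
    pairs = {}
    while unpaired_out and unpaired_in:
        # Unpaired egress channel having the highest egress channel priority.
        eg = max(unpaired_out, key=unpaired_out.get)
        # Unpaired ingress channel with the smallest absolute difference.
        ing = min(unpaired_in,
                  key=lambda c: abs(unpaired_in[c] - unpaired_out[eg]))
        pairs[eg] = ing
        del unpaired_out[eg], unpaired_in[ing]
    return pairs

pairs = pair_channels(
    ingress_priorities={"in0": 1, "in1": 5, "in2": 9},
    egress_priorities={"out0": 8, "out1": 2},
)
```

Here "out0" (priority 8) is paired first and takes "in2" (difference 1); "out1" (priority 2) then takes "in0" (difference 1). Transactions would subsequently be issued over each resulting channel pair.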
23.
TECHNIQUES FOR UNIFYING CLOUD INFRASTRUCTURE MANAGEMENT
Described are examples for providing access to an on-premises resource executing via a cloud-computing environment. A client-side proxy executing on a centralized node in the cloud-computing environment can receive, from a client resource provider (RP) that communicates with the client-side proxy via a client RP virtual network established in the cloud-computing environment, a request by a requesting node to access the on-premises resource. The client-side proxy can provide, based on the request, access to the on-premises resource for the requesting node.
H04L 67/563 - Data redirection of data network streams
H04L 61/59 - Using proxies for addressing
H04L 67/289 - Intermediate processing functionally located close to the data consuming application, e.g. in the same machine, in the same home or in the same sub-network
H04L 67/60 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions, using the analysis and optimisation of the required network resources
An adaptive user representation (AUR) system for use with generative artificial intelligence (AI) receives queries meant for the generative AI and utilizes one or more AI models to process each query to determine query context and to identify user information from a user information repository which is relevant to the query. The system generates instructions based on the query, query context, and the relevant user information for causing the generative AI to generate a response to the query which is personalized to the user. The AUR system transforms the raw data of the query and relevant user information into a set of instructions for the generative AI which describe how to personalize the response or required searches to ensure the final response is personalized.
A computer-implemented method can generate a parallel schedule for partitioning devices included in a device cluster for parallel execution of a transformer model. The transformer model is represented by a chain of cells. Each cell includes a set of tasks of the transformer model. Generating the parallel schedule includes dividing the chain of cells into one or more sequential stages, creating one or more replicas of the transformer model or some of the cells, and mapping the set of tasks included in a cell to one or more devices of the device cluster. For a given workload, the method can execute the transformer model on the device cluster according to the parallel schedule.
A computer-implemented method can receive an internal representation of a transformer model, an internal representation of a device cluster, and an internal representation of a workload for execution of the transformer model on the device cluster. The method can generate a plurality of candidate execution plans based on the internal representation of the transformer model and the internal representation of the device cluster. Each candidate execution plan represents a unique parallel schedule for partitioning devices in the device cluster for parallel execution of the transformer model. The method can determine an optimal execution plan, including evaluating resource usage of the plurality of candidate execution plans based on the internal representation of the workload, and selecting, among the plurality of candidate execution plans, the optimal execution plan which yields the lowest resource usage. The evaluating includes simulating execution of the transformer model on the device cluster to process the workload.
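The two abstracts above describe enumerating candidate parallel execution plans for a cell chain, simulating each plan's resource usage for a workload, and keeping the cheapest. A hedged sketch of that search loop follows; the plan space is reduced to (pipeline stages, replicas), and the cost model and device-capacity constraint are invented purely for illustration:

```python
# Hedged sketch of execution-plan selection: enumerate (stages, replicas)
# splits of a cell chain that fit the device cluster, estimate each
# plan's cost with a toy simulator, and pick the minimum-cost plan.
import math

def candidate_plans(num_cells, num_devices, device_capacity):
    """(stages, replicas) splits whose per-stage share fits one device."""
    plans = []
    for stages in range(1, num_cells + 1):
        if math.ceil(num_cells / stages) > device_capacity:
            continue                      # a stage would not fit on one device
        for replicas in range(1, num_devices // stages + 1):
            plans.append({"stages": stages, "replicas": replicas})
    return plans

def simulate_cost(plan, num_cells, workload):
    """Toy latency estimate: per-stage compute shrinks with parallelism,
    but each extra pipeline stage adds a communication hop."""
    compute = workload * num_cells / (plan["stages"] * plan["replicas"])
    communication = 2.0 * (plan["stages"] - 1)
    return compute + communication

def best_plan(num_cells, num_devices, workload, device_capacity):
    plans = candidate_plans(num_cells, num_devices, device_capacity)
    return min(plans, key=lambda p: simulate_cost(p, num_cells, workload))

plan = best_plan(num_cells=4, num_devices=8, workload=10.0, device_capacity=2)
```

With these toy numbers the capacity constraint rules out a single-stage plan, and two stages with four replicas each wins; a real planner would simulate memory, compute, and communication per device rather than use a closed-form cost.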
Embodiments disclosed herein are directed to computing technology for programmatically sanitizing unwanted content that is shared in a meeting. The unwanted content may be sanitized in real-time and in a meeting presentation. In an implementation, the unwanted content is detected, and a sensitivity mitigation action is determined for the unwanted content. The sensitivity mitigation action is applied to generate a modified presentation of a live meeting presentation such that aspects of the unwanted content are removed. A graphical user interface (GUI) tool is disclosed to enable users to control application of a sensitivity mitigation action. In this manner, embodiments disclosed herein facilitate complying with a privacy policy.
A computing system that includes one or more server computing devices including one or more processors configured to execute instructions for a domain extensibility module that provides software development tools for building domain extensions for a database platform, and a data ingestion module that provides software development tools for defining a metadata schema for extracting metadata from data files. The one or more processors are configured to receive a set of data from a user computing device, define a target metadata schema that includes one or more metadata fields that will be populated during a data ingestion process, define a target domain extension that defines one or more data types for storing the received set of data after performing the data ingestion process, and ingest the received set of data using a metadata extraction pipeline to generate metadata files based on the target metadata schema.
A computer-implemented method is described which comprises generating a representation of a digital space and a representation of the physical space using an audiovisual feed received from a camera proximate to a display located in the physical space. The representation of the digital space is generated using user information identifying a remote user associated with the display and presence information relating to the remote user, and the digital representation comprises an avatar of the remote user. The representation of the digital space is output to the display located in the physical space and the representation of the physical space is output to a computing device associated with the remote user. The method further comprises dynamically updating the representation of the digital space and/or physical space in response to changes in the user information and presence information.
Methods and systems for tracing forwards of an electronic message. One method includes storing, for each of a plurality of forwarded messages sent via an electronic messaging application, a record in a data store, each record including a link to an original message for the forwarded message and calculating, with an electronic processor, a statistic for an electronic message based on records stored in the data store, wherein the statistic includes at least one selected from a group consisting of a number of forwards of the electronic message, a number of recipients of the electronic message including all forwards of the electronic message, and a number of requests to revoke the electronic message. The statistic is then output for display to a user via at least one user interface.
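The record-and-aggregate mechanism above can be sketched with a minimal in-memory store. The record field names and list-based store are assumptions for illustration, not details from the abstract.

```python
# Illustrative sketch of forward tracing: each forwarded message stores a
# link back to the original message, and statistics are derived by
# scanning the stored records. The in-memory store is an assumption.

records = []   # one record per forwarded message

def record_forward(original_id: str, forward_id: str, recipients: list):
    """Store a record linking a forwarded message to its original."""
    records.append({"original": original_id,
                    "forward": forward_id,
                    "recipients": recipients})

def forward_stats(original_id: str) -> dict:
    """Calculate forward statistics for a message from the stored records."""
    linked = [r for r in records if r["original"] == original_id]
    return {
        "num_forwards": len(linked),
        "num_recipients": sum(len(r["recipients"]) for r in linked),
    }
```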
Systems and techniques for facilitating unified multichannel communication are provided. The described systems and techniques improve communication technology through an encompassing, channel-agnostic approach which unifies disparate communication modes into a singular coherent thread. A unified multichannel communication (“UMC”) service of a UMC platform can initialize a UMC thread for a UMC session, where the UMC thread can be used to facilitate unified multichannel communication. The UMC session can involve multiple participants, including human users and software agents (e.g., conversational bots, virtual agents, digital assistants, and other dialog interfaces). The UMC platform can facilitate creating and interacting with a digital assistant providing unified multichannel communication.
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
H04L 69/14 - Multichannel or multilink protocols
H04L 69/18 - Multi-protocol handlers, e.g. single devices capable of handling multiple protocols
The disclosed concepts relate to providing help sessions for video game players. For instance, a help session starting state can be obtained from a video game session by a particular video game player. The help session starting state can be loaded into a help session. During the help session, inputs received from a client device of a video game helper can be directed to the help session. After the help session, an updated help session state can be obtained. In some cases, the particular video game player can choose to accept the updated help session state and proceed with video game play from that state. In other cases, the particular video game player can choose to reject that state and return back to the help session starting state.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or a game-integrated level editor, adaptively or by learning from player actions, e.g. skill level adjustment or storing successful combat sequences for re-use
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transforming a changing game scene into an encoded video stream for transmitting to a mobile phone or a thin client
A63F 13/493 - Resuming a game, e.g. after pausing, malfunction or power failure
A63F 13/86 - Watching games played by other players
33.
PROVIDING ARBITRATION FOR RESOURCE SHARING USING CHANNEL PRIORITY DIFFERENCES IN PROCESSOR-BASED DEVICES
Providing arbitration for resource sharing using channel priority differences in processor-based devices is disclosed herein. In one embodiment, a processor-based device comprises a data allocation circuit that is communicatively coupled to one or more ingress channels and one or more egress channels. The data allocation circuit assigns an ingress channel priority to each ingress channel and assigns an egress channel priority to each egress channel. The data allocation circuit generates one or more channel pairs by iteratively identifying an unpaired egress channel having a highest egress channel priority, calculating absolute differences between each ingress channel priority of each unpaired ingress channel and the egress channel priority of the unpaired egress channel, and allocating the unpaired egress channel to the unpaired ingress channel corresponding to the smallest absolute difference as a channel pair. The data allocation circuit then performs one or more transactions using the corresponding one or more channel pairs.
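The iterative pairing loop described above can be sketched directly. The priority values and the tie-breaking behavior of `min` are illustrative assumptions; the abstract does not specify how ties are resolved.

```python
# Sketch of the pairing loop: repeatedly take the unpaired egress channel
# with the highest priority and match it to the unpaired ingress channel
# whose priority is closest (smallest absolute difference).
# Channel names, priorities, and tie-breaking are illustrative assumptions.

def pair_channels(ingress_prio: dict, egress_prio: dict) -> dict:
    """Return a mapping of egress channel -> paired ingress channel."""
    unpaired_in = dict(ingress_prio)
    pairs = {}
    # visit egress channels from highest to lowest priority
    for eg in sorted(egress_prio, key=egress_prio.get, reverse=True):
        if not unpaired_in:
            break
        # ingress channel with the smallest absolute priority difference
        ing = min(unpaired_in,
                  key=lambda i: abs(unpaired_in[i] - egress_prio[eg]))
        pairs[eg] = ing
        del unpaired_in[ing]          # channel is now paired
    return pairs
```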
A computer-implemented method can generate a parallel schedule for partitioning devices included in a device cluster for parallel execution of a transformer model. The transformer model is represented by a chain of cells. Each cell includes a set of tasks of the transformer model. Generating the parallel schedule includes dividing the chain of cells into one or more sequential stages, creating one or more replicas of the transformer model or some of the cells, and mapping the set of tasks included in a cell to one or more devices of the device cluster. For a given workload, the method can execute the transformer model on the device cluster according to the parallel schedule.
A data processing system implements receiving, via a user interface of a client device, an image; constructing, via a prompt construction unit, a first prompt by appending the image to a first instruction string including instructions to a generative model; providing the first prompt to the generative model; generating, by the generative model and according to the first prompt, a depth map using an intensity of darkness of each pixel of the image as a respective depth of the pixel in a digital three-dimensional (3D) transparent object; digitally engraving, by the generative model and according to the first prompt, each pixel of the image in the 3D transparent object based on the respective depth in the depth map into a digital 3D engraved object; receiving the digital 3D engraved object from the generative model; and providing the digital 3D engraved object to display on the user interface.
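The intensity-to-depth rule at the heart of the abstract — darker pixels engrave deeper — can be sketched as below. The 0-255 grayscale convention and the maximum engraving depth are assumptions for illustration.

```python
# Minimal sketch of intensity-to-depth engraving: darker pixels map to
# deeper points in a transparent 3D block. The 0-255 grayscale range and
# MAX_DEPTH_MM are illustrative assumptions, not values from the patent.

MAX_DEPTH_MM = 10.0

def depth_map(gray_image):
    """gray_image: rows of 0-255 values; darker (lower) means deeper."""
    return [[(255 - px) / 255 * MAX_DEPTH_MM for px in row]
            for row in gray_image]

def engrave_points(gray_image):
    """Yield (x, y, depth) points for the digitally engraved 3D object."""
    depths = depth_map(gray_image)
    return [(x, y, d)
            for y, row in enumerate(depths)
            for x, d in enumerate(row) if d > 0]
```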
An intelligent router for generative artificial intelligence (GAI) model instances optimizes request routing to reduce latency. The system predicts output lengths using a trained response-length predictor and assesses the state of multiple GAI instances, including prompt and decode distributions. It estimates the workload mixing impact of routing requests to each instance and determines selection probabilities using a machine-learning routing model. The router either assigns the request to the most suitable instance or delays routing if conditions are suboptimal. This approach improves end-to-end latency, Time-To-First-Token (TTFT), and Time-Between-Tokens (TBT) by considering the distinct characteristics of GAI workload phases.
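The routing decision described above can be sketched at a high level: predict the response length, score instances by load, then route or delay. The word-count predictor, the load metric, and the saturation threshold are all illustrative stand-ins for the trained models the abstract describes.

```python
# Hedged sketch of length-aware routing for generative-model instances:
# predict the response length, pick the least-loaded instance, and delay
# routing (return None) when every instance would exceed its budget.
# The toy predictor and the max_load threshold are assumptions.

def predict_tokens(prompt: str) -> int:
    """Toy stand-in for the trained response-length predictor."""
    return 8 * len(prompt.split())

def route(prompt: str, instances: dict, max_load: int = 1000):
    """instances: name -> pending decode tokens. Returns a name or None."""
    need = predict_tokens(prompt)
    name, load = min(instances.items(), key=lambda kv: kv[1])
    if load + need > max_load:
        return None                    # conditions suboptimal: delay routing
    return name
```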
A computer-implemented method can receive an internal representation of a transformer model which defines one or more repeating blocks, each block including a sequence of cells, and each cell including a set of tasks of the transformer model. The method can search for a plurality of parallel schedules for partitioning devices included in a device cluster for parallel execution of the transformer model. The searching includes determining a number of model replicas, determining a number of stages that divide the one or more repeating blocks, determining a number of cell replicas for each cell in a block, and for each cell replica of a cell, generating a task mapping which maps the set of tasks included in the cell to devices partitioned into the cell replica.
Methods, systems, and computer storage media for providing blended settings management using a blended remote-local settings management engine are described. The blended remote-local settings management engine integrates different settings controllers into blended settings management via remote clients. In operation, an indication to initiate settings configuration is accessed at a remote client. The indication is processed using a blended remote-local settings management engine that integrates management of remote settings of remote clients and local settings of local clients. Based on the indication, a request for a local setting of a local client associated with the remote client is generated. The request is communicated to the local client using a dynamic virtual channel between the remote client and the local client. Based on the request, the local setting of the local client is retrieved. Display of the local setting is caused on a blended remote-local settings interface associated with blended settings management.
H04L 41/0806 - Configuration setting for initial configuration or provisioning, e.g. plug-and-play
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks, comprising specially adapted graphical user interfaces [GUI]
39.
AI-BASED STRUCTURED META PROMPT GENERATION WITH OPTIONAL USER INPUTS
A data processing system implements receiving, via a user interface, report context and a request to generate insights of a data report; constructing a first prompt by appending the request, a default system prompt, the report context, and the data report as a first instruction string; validating the first prompt using a second generative model by checking whether the first prompt is structured according to sections that contain one or more predetermined purposes and whether the default system prompt is responsive to the report context; when the first prompt is validated by the second generative model, providing the first prompt to a first generative model; generating, by the first generative model and according to the first prompt, an insight output; receiving the insight output from the first generative model; and providing the insight output to display on the user interface.
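The construct-then-validate flow above can be sketched with a structural checker standing in for the second generative model. The section names and string layout are assumptions for the sketch, not the patent's actual prompt format.

```python
# Toy sketch of structured meta-prompt construction and validation: the
# prompt is assembled from named sections, and a checker (standing in for
# the second generative model) verifies the expected sections are present.
# Section names and layout are illustrative assumptions.

REQUIRED_SECTIONS = ["SYSTEM:", "CONTEXT:", "REPORT:", "REQUEST:"]

def build_prompt(request, context, report, system="Summarize key insights."):
    """Append the system prompt, report context, report, and request."""
    return (f"SYSTEM: {system}\nCONTEXT: {context}\n"
            f"REPORT: {report}\nREQUEST: {request}")

def validate_prompt(prompt: str) -> bool:
    """Stand-in for the second model's structural check of the sections."""
    return all(section in prompt for section in REQUIRED_SECTIONS)
```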
The disclosed concepts relate to tracking and representing help sessions for video games where video game players are assisted by a helper, e.g., another video game player and/or a trained machine learning model. For instance, the disclosed implementations can graphically modify a controllable entity, such as a character or vehicle, to convey that the current video game player is being assisted by a helper. As another example, the disclosed implementations can graphically modify game achievements to indicate when a given achievement was earned with assistance from a helper.
A63F 13/5375 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or to display a laser sight in a shooting game, using indicators, e.g. showing the condition of a game character on screen, for graphically or textually suggesting an action, e.g. by displaying an arrow indicating a turn in a driving game
A63F 13/493 - Resuming a game, e.g. after pausing, malfunction or power failure
A63F 13/69 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or a game-integrated level editor, by enabling or updating specific game elements, e.g. unlocking hidden features, items, levels or versions
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A63F 13/87 - Communicating with other players, e.g. by e-mail or instant messaging
The disclosed concepts relate to managing help sessions in video games. A system or method receives control inputs from a helper during a help session in a video game. The system or method obtains a game state of the video game and determines whether to provide the control inputs to the video game based on the game state. In at least one instance, the system or method at least temporarily prevents the video game from receiving a particular control input.
Systems, methods, and computer readable storage media are described herein for dynamically routing jobs to job service architectures and consolidating data. In an aspect, a job request associated with a user account is received. A migration status of the user account is determined to indicate the user account is migrating from a first job service architecture to a second job service architecture. A determination is made of whether or not the migration state is enabled. If the migration state is enabled, the job request is routed to the second job service architecture, causing the second job service architecture to schedule a corresponding job. If the migration state is not enabled, the job request is routed to the first job service architecture, causing the first job service architecture to schedule the job. In a further aspect, the job request comprises a script and the job comprises a step to execute the script.
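The routing rule above reduces to a single conditional on the account's migration state, sketched below. The return values and field name are illustrative assumptions.

```python
# Minimal sketch of migration-state job routing: a job request goes to
# the second (new) architecture only while the account's migration state
# is enabled; otherwise it stays on the first architecture.
# The dict-based account record and labels are illustrative assumptions.

def route_job(account: dict, job_request: dict) -> str:
    """Return which job service architecture should schedule the job."""
    if account.get("migration_enabled"):
        return "second_architecture"   # schedules the corresponding job
    return "first_architecture"
```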
Processor-based system supporting in-field testing using external dynamic random access memory (DRAM) for storing and accessing test scan data. The processor-based system includes a processor that includes one or more central processing units (CPUs) that each have access to resources, such as cache memory, a memory controller to access system memory (e.g., DRAM), and interface circuits, to perform tasks by executing program code. The processor-based system includes an internal, built-in test system that allows the processor-based system to be placed into a test mode to perform in-field testing of the processor-based system. To support larger-sized scan data, the processor-based system is configured for the built-in test system to access test scan data stored in DRAM in the processor-based system in the test mode. In this manner, the DRAM supports storing larger-sized test scan data so that greater in-field test coverage can be performed in the processor-based system.
A computing device including a hardware accelerator. The hardware accelerator includes a generalized matrix-vector multiplication (GEMV) circuit configured to compute a product vector over a plurality of streaming iterations. At each of the streaming iterations, the GEMV circuit receives an input vector element and an input matrix row. The GEMV circuit multiplies the input vector element by input matrix elements included in the input matrix row to obtain an intermediate product row. The GEMV circuit adds the intermediate product row to a current-iteration row sum. The product vector is equal to the current-iteration row sum computed in a final streaming iteration. The GEMV circuit transmits the product vector as a streaming output to a post-processing circuit included in the hardware accelerator. The post-processing circuit performs a vector processing operation on the product vector to compute a vector processing result, and outputs the vector processing result.
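The streaming accumulation the abstract describes can be sketched in software: each iteration consumes one vector element and one matrix row, and the final running sum is the product vector. The ReLU post-processing stage is an illustrative assumption; the abstract does not name the vector processing operation.

```python
# Sketch of the streaming GEMV accumulation: on iteration i the circuit
# consumes vector element x[i] and matrix row B[i], multiplies them into
# an intermediate product row, and adds that row to a running sum. The
# final running sum equals the product vector x @ B.

def streaming_gemv(x, B):
    """Compute the row vector x @ B one matrix row at a time."""
    row_sum = [0.0] * len(B[0])
    for xi, row in zip(x, B):                  # one streaming iteration
        intermediate = [xi * b for b in row]   # intermediate product row
        row_sum = [s + p for s, p in zip(row_sum, intermediate)]
    return row_sum

def postprocess(vec):
    """Toy post-processing stage (ReLU chosen as an illustrative example)."""
    return [max(0.0, v) for v in vec]
```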
A computer-implemented method can receive an internal representation of a transformer model which defines one or more repeating blocks, each block including a sequence of cells, and each cell including a set of tasks of the transformer model. The method can search for a plurality of parallel schedules for partitioning devices included in a device cluster for parallel execution of the transformer model. The searching includes determining a number of model replicas, determining a number of stages that divide the one or more repeating blocks, determining a number of cell replicas for each cell in a block, and for each cell replica of a cell, generating a task mapping which maps the set of tasks included in the cell to devices partitioned into the cell replica.
Embodiments of the present disclosure include techniques for managing dirty data. An agent receives a request for data. If the data is dirty data, the agent may use a replacement policy to determine if the data should be passed clean or dirty to the requestor. The replacement policy may correspond to how long the dirty data being stored in a cache line is to be maintained. In one embodiment, the replacement policy is a circuit, such as an SRAM and a logic circuit, for example.
G06F 12/0891 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches, using clearing, invalidating or resetting means
G06F 12/0864 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches, using pseudo-associative means, e.g. set-associative or hashing
G06F 12/123 - Replacement control using replacement algorithms with age lists, e.g. queue, most recently used [MRU] list or least recently used [LRU] list
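The pass-clean-or-dirty decision from the dirty-data abstract above can be sketched with an age-based replacement policy. The age counter and threshold are illustrative assumptions standing in for the SRAM-and-logic replacement policy circuit the abstract mentions.

```python
# Illustrative sketch of the pass-clean-or-dirty decision: a replacement
# policy keeps dirty data in the cache line while the line is "young"
# (passing a clean copy to the requestor) and hands off dirty ownership
# once the line has aged past a threshold. Age and threshold are assumed.

MAX_DIRTY_AGE = 3   # illustrative: how long dirty data is maintained

def respond(line: dict) -> str:
    """line: {'dirty': bool, 'age': int}. Return 'clean' or 'dirty'."""
    if not line["dirty"]:
        return "clean"
    if line["age"] < MAX_DIRTY_AGE:
        return "clean"   # keep the dirty copy locally; pass the data clean
    return "dirty"       # hand off dirty ownership to the requestor
```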
47.
PERFORMING A SECURITY ACTION WITH REGARD TO AN ACCESS TOKEN BASED ON CLUSTERING OF ACCESS REQUESTS
Techniques are described herein that are capable of performing a security action with regard to an access token based on clustering of access requests. Subsets of access requests are clustered into respective clusters, which correspond to respective requestor types, based at least on the access requests in the subsets having respective attributes that indicate the respective requestor types. The access requests request access to cloud resources. Access behavior(s) associated with the access requests that are included in respective cluster(s) are identified. A security action is performed with regard to an access token based at least on at least one of the access behavior(s).
According to examples, an apparatus includes a processor that receives a request from a requester apparatus to access a target apparatus. The processor may provide a valid token to the requester apparatus upon determining that the requester apparatus is authenticated to access the target apparatus, in which the token complies with and is sent via a centralized authentication and authorization protocol. The processor may also receive an access check message from the target apparatus, in which the access check message includes the token and the identity of the requester apparatus. In addition, the processor may enable the target apparatus to control access by the requester apparatus. The apparatus disclosed herein enables the retrofitting of secure multi-factor or one-time password authentication into systems that rely on a centralized authentication and authorization protocol, such as the TACACS+ or the RADIUS protocol.
The disclosed concepts relate to automatically identifying conditions in a video game to trigger a help session. When a help session is triggered, another video game player or machine learning model can temporarily take over for the current video game player until an ending condition is reached. Help session triggering can be designated by evaluation of prior gameplay data of other video game players to identify in-game conditions that may tend to cause user disengagement, such as in-game conditions that are associated with difficult in-game goals or negative in-game consequences.
A63F 13/422 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment, by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle, automatically for the purpose of assisting the player, e.g. automatic braking in a driving game
A63F 13/49 - Saving the game status; Pausing or ending the game
A63F 13/497 - Partially or entirely replaying previous game actions
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or a game-integrated level editor, adaptively or by learning from player actions, e.g. skill level adjustment or storing successful combat sequences for re-use
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
The disclosed concepts relate to training a machine learning model to provide help sessions during a video game. For instance, prior video game data from help sessions provided by human users can be filtered to obtain training data. Then, a machine learning model can be trained using approaches such as imitation learning, reinforcement learning, and/or tuning of a generative model to perform help sessions. Then, the trained machine learning model can be employed at inference time to provide help sessions to video game players.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or a game-integrated level editor, adaptively or by learning from player actions, e.g. skill level adjustment or storing successful combat sequences for re-use
A63F 13/86 - Watching games played by other players
A63F 13/497 - Partially or entirely replaying previous game actions
Systems, methods, and computer readable storage media are described herein for dynamically routing jobs to job service architectures and consolidating data. In an aspect, a job request associated with a user account is received. A migration status of the user account is determined to indicate the user account is migrating from a first job service architecture to a second job service architecture. A determination is made of whether or not the migration state is enabled. If the migration state is enabled, the job request is routed to the second job service architecture, causing the second job service architecture to schedule a corresponding job. If the migration state is not enabled, the job request is routed to the first job service architecture, causing the first job service architecture to schedule the job. In a further aspect, the job request comprises a script and the job comprises a step to execute the script.
G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
H04L 67/63 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources, routing a service request depending on the request content or context
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
52.
HARDWARE ACCELERATOR WITH GENERALIZED MATRIX-VECTOR MULTIPLICATION AND POST-PROCESSING CIRCUITS
A computing device including a hardware accelerator. The hardware accelerator includes a generalized matrix-vector multiplication (GEMV) circuit configured to compute a product vector over a plurality of streaming iterations. At each of the streaming iterations, the GEMV circuit receives an input vector element (aᵢ) and an input matrix row (Bᵢ). The GEMV circuit multiplies the input vector element by input matrix elements (Bᵢⱼ) included in the input matrix row to obtain an intermediate product row (44). The GEMV circuit adds the intermediate product row to a current-iteration row sum (45). The product vector is equal to the current-iteration row sum computed in a final streaming iteration. The GEMV circuit transmits the product vector as a streaming output to a post-processing circuit (30) included in the hardware accelerator. The post-processing circuit performs a vector processing operation (50) on the product vector to compute a vector processing result (52), and outputs the vector processing result.
A computer-implemented method can receive an internal representation of a transformer model, an internal representation of a device cluster, and an internal representation of a workload for execution of the transformer model on the device cluster. The method can generate a plurality of candidate execution plans based on the internal representation of the transformer model and the internal representation of the device cluster. Each candidate execution plan represents a unique parallel schedule for partitioning devices in the device cluster for parallel execution of the transformer model. The method can determine an optimal execution plan, including evaluating resource usage of the plurality of candidate execution plans based on the internal representation of the workload, and selecting, among the plurality of candidate execution plans, the optimal execution plan which yields the lowest resource usage. The evaluating includes simulating execution of the transformer model on the device cluster to process the workload.
Embodiments of the disclosed technologies are capable of generating, using a machine learning model and a prompt, first content recommendations. The prompt comprises a search query and historic information associated with an entity. The first content recommendations are presented. The embodiments describe receiving a selection of a content recommendation of the first content recommendations. The embodiments describe generating, using the machine learning model and a second prompt, second content recommendations. The second prompt comprises a second search query and second historic information associated with the entity. The embodiments describe generating a ranked order of the second content recommendations using a history of entity interactions including the selection of the content recommendation of the first content recommendations. The embodiments describe determining context-aware recommendations by optimizing a permutation of the ranked order of the second content recommendations. The embodiments describe causing the context-aware recommendations to be presented.
Passive devices may be embedded into a cavity in a package substrate, with electrical contacts of the passive device on a contact surface orthogonal to a surface of the package substrate and extending through the package substrate. The electrical contacts of the passive device may be coupled to vias coupled to a power supply to provide capacitive decoupling. One or more through-hole vias (THVs), which provide current to ICs on the package substrate, may be excluded from the package substrate to accommodate the passive device. Embedding the passive devices in the cavity of the package substrate with the contact surface orthogonal to, rather than parallel to, the surface of the package substrate, reduces an area occupied by the passive device. In this manner, a number of the THVs excluded from the package substrate is reduced, which results in a smaller impact to the resistance of the power supply network.
H01L 23/522 - Arrangements for conducting electric current within the device in operation from one component to another, including external interconnections consisting of a multilayer structure of conductive and insulating layers inseparably formed on the semiconductor body
H01L 23/528 - Layout of the interconnection structure
56.
CONTROLLING COMPLEXITY OF CAPTIONING THAT USES A VISION LANGUAGE MODEL
A vision language model (“VLM”) generates text captions from video content. Innovations in controlling the complexity of captioning that uses a VLM are described. For example, a training tool updates a training set so that text captions are more concise, then fine-tunes a VLM using the updated training set. Or, as another example, a generative artificial intelligence model such as a VLM dynamically adjusts the probability of an end-of-sentence (“EOS”) token so that the probability of the EOS token increases in successive iterations of output token generation, which tends to make generated text captions more concise. Or, as another example, a captioning tool identifies and ranks representative units (such as keyframes) of video, then selectively applies captioning (using a VLM) to representative units of the video based on ranking information. Together or individually, the innovations can improve the computational efficiency and accuracy of captioning that uses a VLM.
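The EOS-boosting idea in the abstract — raising the probability of the end-of-sentence token on successive decoding iterations — can be sketched over a toy logit table. The linear boost schedule and the token names are illustrative assumptions.

```python
# Sketch of dynamic EOS-probability adjustment: at each decoding step the
# logit of the end-of-sentence token is raised by an amount that grows
# with the step index, so longer captions become increasingly likely to
# stop. The linear boost schedule (rate * step) is an assumption.

import math

def boosted_eos_prob(logits: dict, step: int, rate: float = 0.5) -> float:
    """Return the softmax probability of 'EOS' after boosting its logit."""
    adjusted = dict(logits)
    adjusted["EOS"] = adjusted["EOS"] + rate * step   # grows each iteration
    z = sum(math.exp(v) for v in adjusted.values())
    return math.exp(adjusted["EOS"]) / z
```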
A lattice-based cryptography engine includes an interface configured to receive a lattice-based cryptographic operation request including corresponding operands. A register map is configured to store the operands and a response to the request. A controller is coupled to receive the operands and output a sequence of instructions responsive to the request. A plurality of hardware units is coupled to receive and execute the instructions to generate the response. Each instruction is designated for one of the plurality of hardware units. A memory is coupled to the hardware units.
A lattice-based cryptographic engine includes a MakeHint unit to generate hints for polynomial coefficients. Logic hardware is coupled to the MakeHint unit and includes a hint sum unit configured to add hints for coefficients of a polynomial, compare a hint sum to a threshold, and generate an invalid signal in response to the hint sum exceeding the threshold. The logic hardware also includes a sample buffer configured to receive the hints, a hint bitpack coupled to store indices of non-zero hints, and a controller coupled to control transfer of hints to output registers.
H04L 9/30 - Public key, i.e. the encryption algorithm being computationally infeasible to invert and the users' encryption keys not requiring secrecy
59.
PROMOTION OF MEETING ENGAGEMENT BY TRANSITIONING VIEWING PERSPECTIVES TO A TEMPORARY VIEWING PERSPECTIVE SHOWING GROUP ACTIVITY
The techniques disclosed herein provide promotion of meeting engagement by transitioning viewing perspectives to a temporary viewing perspective showing group activity. A system can show each person a view of a large virtual environment, e.g., a stadium full of representations of meeting attendees. Each person sees the virtual environment from a point of view originating from that person's representation, e.g., a first-person avatar view. When a group activity meets one or more conditions, the system generates a new virtual environment model that shows a detailed view of all people in a group, without showing members of other teams that may be intermingled with the group in an original environment. The system may transition each group member's view from the first-person view to a temporary view of the newly generated model that only includes group members. The temporary view can remain until the group activity drops below a threshold.
Systems and methods are provided that introduce an approach for executing a multi-query workload that leverages live execution feedback from nodes to detect resourcing issues and anomalies, and deploys real-time corrective measures for the multi-query workload. Leveraging live execution feedback from the nodes as the queries execute makes it possible to detect various resourcing issues and anomalies, and enables the system to perform corrective actions "live" or in "real time" during execution of a query, and more specifically during execution of the tasks within a query.
Semiconductor-superconductor hybrid devices with a horizontally-confined channel and methods of forming the same are described. An example semiconductor-superconductor hybrid device includes a semiconductor heterostructure formed over a substrate. The semiconductor-superconductor hybrid device may further include a superconducting layer formed over the semiconductor heterostructure. The semiconductor-superconductor hybrid device may further include a first gate, having a first top surface, formed adjacent to a first side of the semiconductor heterostructure. The semiconductor-superconductor hybrid device may further include a second gate, having a second top surface, formed adjacent to a second side, opposite to the first side, of the semiconductor heterostructure, where each of the first top surface of the first gate and the second top surface of the second gate is offset vertically from a selected surface of the semiconductor heterostructure by a predetermined offset amount.
The disclosure herein describes training a text processing model to generate model output text data using input text data and a sentence count. A training data entry including input text data and output text data is obtained. A sentence count of the output text data is determined, and the output text data is labeled with a sentence count label and a sentence number label. Model output text data is generated with a text processing model using the input text data and determined sentence count as input data. Loss data associated with a difference between the generated model output text data and the labeled output text data is determined and the text processing model is adjusted using the determined loss data. The use of labeled output text data enables the model to be trained to produce output text data with a target sentence count in a computationally efficient manner.
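The sentence-count and sentence-number labeling step described above can be sketched as follows. This is an illustrative sketch only: the naive punctuation-based splitter and the 1-based label tuples are assumptions, not the patented labeling scheme.

```python
import re

def label_sentences(output_text):
    """Split output text into sentences, returning the sentence count and a
    per-sentence number label (1-based), as a training-data labeling step
    might. A real system would use a more robust sentence tokenizer."""
    sentences = [s for s in re.split(r'(?<=[.!?])\s+', output_text.strip()) if s]
    count = len(sentences)
    labels = [(i + 1, s) for i, s in enumerate(sentences)]
    return count, labels
```

The count would then be fed to the model as an input condition, and the per-sentence labels used when computing the loss against the model's output.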
Some embodiments provide proxies or other servers in a computing network with independent certificate chains which facilitate mitigation of certificate problems. Independence criteria are enforced against two or more installed certificate chains on a given server, identifying and avoiding dependencies such as cross-certification, shared certificate authorities, shared revocation lists, or shared certificate status protocol endpoints between the certificate chains. Some embodiments serve independent certificates concurrently in an active-active certificate server configuration. The certificate chains' coexistence and their independence from one another facilitates transitioning the network from a failing issuer or a failed chain to a chain that works better, thereby improving network resilience and limiting damage from certificate problems. By dynamically updating certificate bindings, some embodiments also facilitate safe deployment of new certificates during migration from one issuer to another. Certificate distributions are computed from issuer ratios, network topology, or both.
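The independence criteria named above (no shared certificate authorities, revocation lists, or OCSP endpoints) can be sketched as a set-intersection check. The dictionary schema (`issuer`, `crl_url`, `ocsp_url` keys) is a hypothetical simplification; a real implementation would parse these fields out of X.509 certificates.

```python
def chains_independent(chain_a, chain_b):
    """Check two certificate chains for the dependencies the abstract names:
    shared certificate authorities, shared revocation lists, or shared
    certificate status protocol (OCSP) endpoints. Each chain is a list of
    dicts with 'issuer', 'crl_url', and 'ocsp_url' keys (assumed schema)."""
    def collect(chain, key):
        return {cert[key] for cert in chain if cert.get(key)}
    shared = {}
    for key in ("issuer", "crl_url", "ocsp_url"):
        overlap = collect(chain_a, key) & collect(chain_b, key)
        if overlap:
            shared[key] = overlap
    return (not shared), shared
```

Returning the specific overlaps, not just a boolean, lets an operator see which dependency would couple the two chains' failure modes.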
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
A multiplex assay for nucleic acid detection includes a substrate, a sample, and a fluorophore-labeled oligonucleotide. The substrate has a plurality of physically separated assay locations, each of which includes a nucleotide-targeting enzyme configured to cleave nucleic acids, a guide ribonucleic acid (gRNA), and a quencher-labeled oligonucleotide. A portion of the sample is distributed to each assay location. The gRNA recognizes target nucleic acid in the sample, thereby activating the nucleotide-targeting enzyme to cleave nucleic acids, including the quencher-labeled oligonucleotide. The fluorophore-labeled oligonucleotide is subsequently added to each assay location, which facilitates identification of a presence of the target nucleic acid in the sample via detection of unquenched light emitted by the fluorophore in one or more of the plurality of assay locations.
Approaches to classifying text-based content are described herein. For example, a classification system performs operations that include receiving text-based content comprising a plurality of characters, generating a plurality of character category sequences using the plurality of characters and based on a plurality of predefined character categories, calculating a frequency distribution of the plurality of character category sequences, and classifying the text-based content based on the calculated frequency distribution. The classifying uses a machine learning model that has been trained using a plurality of examples of text-based content. Responsive to the classification, the system can take appropriate actions. For example, responsive to classifying the text-based content as unsolicited, the system can restrict distribution of the text-based content or generate an alert for the text-based content.
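The character-category sequence and frequency-distribution steps described above can be sketched as an n-gram feature extractor. The four coarse categories (letter, digit, space, punctuation) and the bigram default are illustrative assumptions; the patent's predefined categories may differ.

```python
from collections import Counter

def category_ngram_features(text, n=2):
    """Map each character to a coarse category, form the category sequence,
    and compute a normalized frequency distribution of category n-grams,
    usable as input features for a classifier."""
    def cat(ch):
        if ch.isalpha(): return "L"   # letter
        if ch.isdigit(): return "D"   # digit
        if ch.isspace(): return "S"   # whitespace
        return "P"                    # punctuation / other
    seq = "".join(cat(ch) for ch in text)
    ngrams = [seq[i:i + n] for i in range(len(seq) - n + 1)]
    counts = Counter(ngrams)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}
```

Such a distribution is insensitive to the specific words used, which is useful for detecting evasively obfuscated unsolicited content.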
Techniques for managing user presence based on an operation state of an external peripheral device connected to a computing device are described. The method determines whether the external peripheral device is being utilized when performing an activity relating to a first software application executing on the computing device. In response to determining utilization, state information for the user is updated to indicate an active state based on the activity performed on the computing device or with respect to a user identity associated with the first software application. During activity performance, a second software application obtains the state information and updates a user presence status to indicate that the user is currently in an active operation state according to the state information indicating the active state. The method enables cross-application presence management by leveraging peripheral device usage patterns to maintain accurate user availability status across multiple software applications and computing environments.
H04L 12/18 - Arrangements for providing special services to subscribers for broadcast or conference
67.
CONJOINED MEMORY SYSTEMS SUPPORTING DATA STORAGE IN LARGER MEMORY SYSTEM WHEN SMALLER MEMORY SYSTEM IS UNAVAILABLE AND WITH SMALLER MEMORY SYSTEM READ LATENCY, AND RELATED PROCESSOR-BASED SYSTEMS AND METHODS
A conjoined memory system that includes a larger memory system conjoined with a smaller memory system to support data storage in the larger memory system when the smaller memory system is unavailable, along with related methods of performing memory accesses and computer-readable media, is disclosed. The conjoined memory system is configured to selectively direct new, incoming memory write requests for incoming data (e.g., incoming data packets to be stored) through a bypass data path to be written to memory entries in the smaller memory system if available for data storage (e.g., memory entry(ies) are free). Memory access latency and dynamic power expended for such memory accesses are reduced. However, if the smaller memory system is not available for data storage (e.g., memory entries are full), the conjoined memory system can selectively direct new, incoming memory write requests instead to the larger memory system to be stored in memory entries therein.
Passive devices may be embedded into a cavity in a package substrate, with electrical contacts of the passive device on a contact surface orthogonal to a surface of the package substrate and extending through the package substrate. The electrical contacts of the passive device may be coupled to vias coupled to a power supply to provide capacitive decoupling. One or more through-hole vias (THVs), which provide current to ICs on the package substrate, may be excluded from the package substrate to accommodate the passive device. Embedding the passive devices in the cavity of the package substrate with the contact surface orthogonal to, rather than parallel to, the surface of the package substrate, reduces an area occupied by the passive device. In this manner, a number of the THVs excluded from the package substrate is reduced, which results in a smaller impact to the resistance of the power supply network.
H01L 23/50 - Arrangements for conducting electric current to or from the solid-state body in operation, e.g. leads or terminals, for integrated circuit devices
H01L 21/48 - Manufacture or treatment of parts, e.g. containers, prior to assembly of the devices, using processes not provided for in a single one of the groups or
H01L 23/528 - Layout of the interconnection structure
H01L 23/538 - Arrangements for conducting electric current within the device in operation from one component to another; the interconnection structure between a plurality of semiconductor chips being formed on, or in, insulating substrates
H01L 23/498 - Electrical connections on insulating substrates
This disclosure describes a framework for performing user-requested tasks automatically across an interactive interface using various types of machine learning models. Specifically, this disclosure outlines and describes a task execution system that utilizes a generative artificial intelligence (AI) action model and retrieval-augmented generation (RAG) to complete user-requested actions across an interactive interface. The task execution system addresses many of the current limitations of large action models (LAMs) by using a generative AI action model to determine a session plan, which includes a set of actions for accomplishing stages of the actionable task across the interactive interface, obtaining visual context information for each interactive interface segment, integrating RAG results to improve the accuracy of both the session plan and individual actions, and self-correcting when faced with unexpected obstacles.
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
70.
CONJOINED MEMORY SYSTEMS SUPPORTING DATA STORAGE IN LARGER MEMORY SYSTEM WHEN SMALLER MEMORY SYSTEM IS UNAVAILABLE AND WITH SMALLER MEMORY SYSTEM READ LATENCY, AND RELATED PROCESSOR-BASED SYSTEMS AND METHODS
A conjoined memory system that includes a larger memory system conjoined with a smaller memory system to support data storage in the larger memory system when the smaller memory system is unavailable, along with related methods of performing memory accesses and computer-readable media, is disclosed. The conjoined memory system is configured to selectively direct new, incoming memory write requests for incoming data (e.g., incoming data packets to be stored) through a bypass data path to be written to memory entries in the smaller memory system if available for data storage (e.g., memory entry(ies) are free). Memory access latency and dynamic power expended for such memory accesses are reduced. However, if the smaller memory system is not available for data storage (e.g., memory entries are full), the conjoined memory system can selectively direct new, incoming memory write requests instead to the larger memory system to be stored in memory entries therein.
A vision language model (“VLM”) generates text captions from video content. Innovations in controlling the complexity of captioning that uses a VLM are described. For example, a training tool updates a training set so that text captions are more concise, then fine-tunes a VLM using the updated training set. Or, as another example, a generative artificial intelligence model such as a VLM dynamically adjusts the probability of an end-of-sentence (“EOS”) token so that the probability of the EOS token increases in successive iterations of output token generation, which tends to make generated text captions more concise. Or, as another example, a captioning tool identifies and ranks representative units (such as keyframes) of video, then selectively applies captioning (using a VLM) to representative units of the video based on ranking information. Together or individually, the innovations can improve the computational efficiency and accuracy of captioning that uses a VLM.
Enabling efficient hash-based signature verification in processor-based devices is disclosed herein. In one exemplary embodiment, a processor-based device includes a processor device and a hash compute core circuit. The hash compute core circuit receives, from a process executing on the processor device, a digit of a plurality of digits of a message digest, a signature value corresponding to the digit, and an initialized context value. The hash compute core circuit generates a hash chain by, Y times (where Y is an integer value calculated using a value of the digit), updating the context value and performing a hash operation on the signature value. The hash compute core circuit then transmits an ending value of the hash chain to the process, which stores the ending value of the hash chain.
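The hash-chain generation described above can be sketched in software. This is a simplified model: the use of SHA-256 and the way the context is updated each step (a counter folded into the hash input) are assumptions for illustration; real hash-based signature schemes such as LMS or XMSS define the context and domain separation precisely.

```python
import hashlib

def hash_chain(signature_value: bytes, context: bytes, y: int) -> bytes:
    """Iterate a hash Y times over the signature value, updating the
    context at each step, and return the ending value of the chain."""
    value = signature_value
    for i in range(y):
        ctx = context + i.to_bytes(4, "big")   # updated context per step
        value = hashlib.sha256(ctx + value).digest()
    return value
```

Offloading exactly this loop to a dedicated compute core is what lets the processor avoid per-iteration software overhead for the many chains a full signature verification requires.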
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
73.
PERFORMING TRANSFORMER-BASED DESIGN VERIFICATION FOR COVERAGE CLOSURE IN PROCESSOR DEVICES
Performing transformer-based design verification for coverage closure in processor devices is disclosed herein. In one exemplary embodiment, a processor device trains an online decision transformer (ODT) using initial trajectories based on regression testing of a Design-Under-Test (DUT). The processor device then performs an online learning phase using the ODT by first generating a plurality of new trajectories. For each new trajectory, the processor device uses the ODT to generate a sequence of actions based on maximizing coverage, transmits the sequence of actions to a testbench environment, receives a corresponding sequence of observed states and a corresponding sequence of coverage metrics from the testbench environment, and generates the new trajectory. The processor device identifies a subset of the new trajectories having a final coverage metric that exceeds a coverage threshold, adds the subset to a replay buffer of the ODT, and retrains the ODT using the replay buffer.
The techniques disclosed herein provide a system for constructing an automated telecommunications network operation model prior to deployment in a telecommunications network for completing downstream tasks. In general, the performance of artificial intelligence agents such as large language models can degrade when applied to highly specific and/or complex domains such as telecommunications network operations, resulting in erroneous outputs and potentially leading to network outages. As such, the present techniques fine-tune a large language model using a domain-specific dataset to establish a specialized context directed to telecommunications network operations. That is, the large language model is pre-trained to establish the specialized context prior to deployment in the operation of a telecommunications network. In this way, the automated telecommunications network operation model can support a broad range of tasks within the context of a telecommunications network, such as generating network configurations and question answering, while also achieving strong performance.
A system may generate embedded subsets by projecting each subset of the subsets into a vector space to generate a corresponding embedded subset. A system may encode the embedded subsets into an encoded image using a dataset encoder including a gated spectral state space model, the gated spectral state space model being a gated neural network that includes a spectral state space model, the spectral state space model being a state space model that represents features of the input dataset using at least a spectral transformation of each embedded subset of the embedded subsets. A system may predict a classification for the input dataset using the encoded image.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
Devices, systems, and methods for reconfigurable butterfly architectures are provided. A reconfigurable butterfly operator circuit includes a single multiplier configured to receive a first variable and a twiddle factor and produce a product, first and second modular subtractors coupled to the multiplier, the first modular subtractor coupled to receive input coefficients and provide a modular difference, and the second modular subtractor coupled to receive the product, a modular adder coupled to receive the input coefficients and provide a modular sum, and multiplexers coupled to (i) provide the input coefficients to the modular adder, (ii) provide the first variable and the twiddle factor to the multiplier, and (iii) receive the modular difference from the first modular subtractor, respectively, each of the multiplexers coupled to receive a control signal that selects whether the circuit is configured as a Gentleman-Sande butterfly operator circuit or a Cooley-Tukey butterfly operator circuit.
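The two butterfly modes the control signal selects between are the standard NTT butterflies. A minimal functional sketch (not the circuit itself): Cooley-Tukey multiplies the twiddle factor before the add/subtract, while Gentleman-Sande adds/subtracts first and multiplies the difference.

```python
def butterfly(a, b, w, q, mode="ct"):
    """Reconfigurable butterfly sharing one modular multiplier, one modular
    adder, and modular subtractors; `mode` plays the role of the control
    signal selecting the configuration."""
    if mode == "ct":       # Cooley-Tukey: multiply before add/subtract
        t = (b * w) % q
        return (a + t) % q, (a - t) % q
    else:                  # Gentleman-Sande: add/subtract before multiply
        return (a + b) % q, ((a - b) * w) % q
```

Sharing a single multiplier between both dataflows is what the multiplexers enable: forward and inverse NTT passes can reuse the same arithmetic units.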
The disclosed techniques pertain to training large language models (“LLMs”) using table data. Specifically, the disclosed techniques pertain to training LLMs for table-related tasks using two models, each model reserved for different functions. A first model is reserved for generator functions and a second model is reserved for validator functions. The first model receives table data and generates training data. The training data is fed to the second model, which identifies instances of training data meeting or exceeding at least one validity threshold. Instances of training data meeting or exceeding the at least one validity threshold are output as validated training data. The validated training data is used to iteratively fine-tune the two models by increasing or decreasing one or more numeric weight parameters in each of the models that control how the models process input data and produce outputs.
Technologies are disclosed for performing imaging operations via a direct secure wireless connection to an imaging device. An imaging device, such as a printer or scanner, obtains a signed certificate defining a security policy from an identity and access management (“IAM”) service. A computing device, such as a laptop or smartphone, obtains a signed certificate from the IAM service that defines access rights associated with the computing device. The imaging device and the computing device exchange the signed certificates. The imaging device approves or denies a request from the computing device to perform imaging operations by way of a direct secure wireless communication channel between the imaging device and the computing device based on the security policy and the access rights.
A plurality of training examples is accessed, each training example comprising an image of a scene and a pose of a viewpoint from which the image was captured. A neural radiance field is trained using the training examples. A plurality of generated images is computed, by, for each of a plurality of randomly selected viewpoints, generating a color image and a depth image of the scene from the neural radiance field. A neural network is trained using the generated images.
G06V 10/774 - Generation of training pattern sets; Processing image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration and reduction, e.g. principal component analysis [PCA], independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation; Bootstrap methods, e.g. bagging or boosting
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
80.
METRICS-BASED COMPUTATIONAL METHOD SELECTION FOR THE PREDICTION OF A PHYSICAL PROPERTY
Examples are disclosed that relate to the selection of a method to compute a physical property based upon metrics obtained using a universal machine learning force field. One disclosed example provides a computing system comprising a logic subsystem, and a storage subsystem comprising instructions executable by the logic subsystem. The instructions are executable to obtain one or more metrics computed based upon an energy and a force determined for a material using a universal machine learning force field (MLFF), and based at least upon the one or more metrics, determine to use one of a first computational method or a second computational method to compute a predicted physical property value for the material.
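The selection step described above can be sketched as a threshold dispatch over MLFF-derived metrics. The metric names, thresholds, and the identities of the two computational methods ("mlff" as the fast first method, "dft" as the costlier second method) are illustrative assumptions, not details from the disclosure.

```python
def select_method(metrics, thresholds):
    """Choose between a first and second computational method for predicting
    a physical property, based on metrics computed from a universal machine
    learning force field's energy and force outputs (sketch)."""
    if all(metrics[k] <= thresholds[k] for k in thresholds):
        return "mlff"   # metrics within bounds: trust the fast method
    return "dft"        # otherwise fall back to the more expensive method
```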
G16C 10/00 - Computational theoretical chemistry, i.e. ICT specially adapted for theoretical aspects of quantum chemistry, molecular mechanics, molecular dynamics or the like
G16C 60/00 - Computational materials science, i.e. ICT specially adapted for investigating the physical or chemical properties of materials or phenomena associated with their design, synthesis, processing, characterisation or utilisation
This disclosure describes a framework for performing user-requested tasks automatically across an interactive interface using various types of machine learning models. Specifically, this disclosure outlines and describes a task execution system that utilizes a generative artificial intelligence (AI) action model and retrieval-augmented generation (RAG) to complete user-requested actions across an interactive interface. The task execution system addresses many of the current limitations of large action models (LAMs) by using a generative AI action model to determine a session plan, which includes a set of actions for accomplishing stages of the actionable task across the interactive interface, obtaining visual context information for each interactive interface segment, integrating RAG results to improve the accuracy of both the session plan and individual actions, and self-correcting when faced with unexpected obstacles.
Enabling efficient hash-based signature verification in processor-based devices is disclosed herein. In one exemplary embodiment, a processor-based device includes a processor device and a hash compute core circuit. The hash compute core circuit receives, from a process executing on the processor device, a digit of a plurality of digits of a message digest, a signature value corresponding to the digit, and an initialized context value. The hash compute core circuit generates a hash chain by, Y times (where Y is an integer value calculated using a value of the digit), updating the context value and performing a hash operation on the signature value. The hash compute core circuit then transmits an ending value of the hash chain to the process, which stores the ending value of the hash chain.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
83.
ADJUSTING PROBABILITY OF AN END-OF-SENTENCE TOKEN IN A GENERATIVE ARTIFICIAL INTELLIGENCE MODEL
A vision language model ("VLM") generates text captions from video content. Innovations in controlling the complexity of captioning that uses a VLM are described. For example, a training tool updates a training set so that text captions are more concise, then fine-tunes a VLM using the updated training set. Or, as another example, a generative artificial intelligence model such as a VLM dynamically adjusts the probability of an end-of-sentence ("EOS") token so that the probability of the EOS token increases in successive iterations of output token generation, which tends to make generated text captions more concise. Or, as another example, a captioning tool identifies and ranks representative units (such as keyframes) of video, then selectively applies captioning (using a VLM) to representative units of the video based on ranking information. Together or individually, the innovations can improve the computational efficiency and accuracy of captioning that uses a VLM.
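The dynamic EOS adjustment described above can be sketched as a per-step bias added to the EOS logit before the softmax, so that the EOS probability grows across successive decoding iterations. The linear schedule and the `alpha` parameter are assumptions for illustration, not the patented schedule.

```python
import math

def boost_eos(logits, eos_id, step, alpha=0.1):
    """Return token probabilities after adding a bias to the EOS logit
    that increases with the decoding step (linear schedule assumed)."""
    biased = list(logits)
    biased[eos_id] += alpha * step
    # Numerically stable softmax for illustration.
    m = max(biased)
    exps = [math.exp(x - m) for x in biased]
    z = sum(exps)
    return [e / z for e in exps]
```

Because the bias compounds with each step, long generations become progressively more likely to terminate, which tends to yield more concise captions without hard-truncating them.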
A method for securely erasing data on a storage drive includes transmitting a communication that initiates an erasure operation on a storage drive and receiving a drive erasure attestation generated, in association with the erasure operation, by a root-of-trust of the storage drive. The drive erasure attestation includes a first claim that contains cryptographic evidence of a measured state of the storage drive following the erasure operation. The method further includes verifying the first claim and instructing a ledger service to record the drive erasure attestation in a ledger in response to the verification. Verification of the first claim depends upon confirmation of a match between first measurement values in the first claim and a first set of stored values previously verified as corresponding to a correct implementation of the erasure operation.
G06F 21/78 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer, to ensure secure storage of data
Systems and methods are provided for automatic recovery of node resource memory devices. A platform basic input/output system (“BIOS”) of a node collects, from a node resource of the node, operational state information for memory components of a memory device, and determines whether at least one memory component is undetected. If so, the platform BIOS sends a notification of the undetected memory component(s) to a controller of the node that relays the notification to a control plane fabric (“CPF”) agent in a control plane. The CPF agent automatically determines a potential cause and a potential resolution, including memory device reset, firmware updates, etc. The CPF agent sends commands to the controller that cause the platform BIOS to initiate a recovery process for the plurality of memory components of the memory device, based on the potential resolution.
G06F 11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
Disclosed are methods for managing execution of plugins of a machine-learning based system. A plugin configuration defines inputs required by the plugin and capabilities provided by the plugin. Capabilities describe the plugin’s functionality, such as how the plugin affects the response, what type of content the plugin generates, etc. In some configurations, when responding to a prompt, a collection of relevant plugins is identified. Configurations of these plugins may be analyzed to optimize execution, including determining optimal execution order or enabling parallel execution. Plugin configurations may also be analyzed to improve security by conditionally preventing one plugin from accessing the output of another. Plugin configurations may also be used to inform a client what plugins will run and what results they may yield. This enables the client to optimize and streamline how the response is displayed.
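The analysis of plugin configurations to determine execution order, described above, can be sketched as a dependency graph built from each plugin's declared inputs and capabilities, followed by a topological sort. The configuration schema (`inputs` and `capabilities` sets keyed by plugin name) is a hypothetical simplification.

```python
from graphlib import TopologicalSorter

def plan_execution(plugins):
    """Order plugins so each runs after the plugins whose capabilities
    satisfy its inputs. `plugins` maps name -> {"inputs": set,
    "capabilities": set} (assumed schema). Plugins with no mutual
    dependencies could be grouped for parallel execution instead."""
    providers = {}
    for name, cfg in plugins.items():
        for cap in cfg["capabilities"]:
            providers.setdefault(cap, set()).add(name)
    deps = {
        name: {p for need in cfg["inputs"] for p in providers.get(need, set())}
        for name, cfg in plugins.items()
    }
    return list(TopologicalSorter(deps).static_order())
```

The same dependency map can drive the security analysis the abstract mentions: a plugin absent from another plugin's predecessor set should not be granted access to that plugin's output.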
G06F 21/51 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems, at the stage of application loading, e.g. accepting, rejecting, starting or inhibiting executable software based on integrity or reliability of the source
A computer-implemented method for compressed compact data storage and processing within a cloud-based environment is disclosed. In one aspect, the method for processing data signals includes receiving a plurality of data signals corresponding to a user, where the plurality of data signals includes a plurality of user raw records at corresponding time values, compressing the plurality of data signals using an incremental compression algorithm to form a single compressed iterative record, organizing the single compressed iterative record into hierarchical segments based on predefined time intervals using a waterfall data model, and storing the single compressed iterative record in a first cloud storage system.
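The hierarchical segmentation step described above can be sketched as bucketing timestamped records into nested time intervals. The month → day → hour hierarchy chosen here is an assumption standing in for the "predefined time intervals" of the waterfall data model; compression is omitted for clarity.

```python
from datetime import datetime, timezone

def waterfall_segments(records):
    """Organize (unix_timestamp, value) records into hierarchical segments
    keyed by month -> day -> hour (interval hierarchy assumed)."""
    tree = {}
    for ts, value in records:
        dt = datetime.fromtimestamp(ts, tz=timezone.utc)
        month = dt.strftime("%Y-%m")
        day = dt.strftime("%d")
        hour = dt.strftime("%H")
        tree.setdefault(month, {}).setdefault(day, {}).setdefault(hour, []).append(value)
    return tree
```

In the disclosed scheme, each segment would hold a slice of the single compressed iterative record rather than raw values, so older intervals can be stored at coarser granularity.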
A computer-implemented method for managing data in a computing environment. In one aspect, a method includes receiving a declarative input that indicates an outcome for data handling, identifying, from the declarative input, a predefined data filter configuration and a predefined data propagation configuration, filtering incoming data according to the predefined data filter configuration to generate filtered data, and replicating the filtered data to a data storage according to the predefined data propagation configuration.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 11/34 - Recording or statistical evaluation of computer activity, e.g. of interruptions or input-output operations
A network element has a replication process between a first instance and a second instance of the network element, such that the second instance is able to take over functioning of the network element in the event of failure of the first instance. The first instance receives a desired configuration to be applied to the network element. The second instance also receives the desired configuration. The second instance drops the desired configuration it received. The desired configuration is mapped from a declarative form to an imperative form at the first instance and the imperative form of the desired configuration is executed at the first instance such that the desired configuration is applied at the network element.
H04L 41/082 - Configuration setting characterised by the conditions triggering a change of settings, the condition being updates or upgrades of network functionality
H04L 41/0663 - Fault, event, alarm or notification management using network fault recovery by performing predefined actions through planned failover, e.g. switching to standby network elements
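The split described in the abstract above can be sketched as follows: both instances receive the same declarative configuration, the standby drops it, and only the active instance maps it to imperative steps and executes them. The command syntax and class names are illustrative assumptions, not from the application.

```python
def map_declarative_to_imperative(desired):
    """Map a desired-state description to an ordered list of imperative steps,
    e.g. {"mtu": 9000} -> ["set mtu 9000"]."""
    return [f"set {key} {value}" for key, value in desired.items()]

class Instance:
    def __init__(self, active):
        self.active = active
        self.executed = []

    def receive(self, desired):
        if not self.active:
            return  # the standby instance drops the declarative configuration
        for step in map_declarative_to_imperative(desired):
            self.executed.append(step)  # stand-in for executing the step
```

On failover, the standby can replay the declarative configuration through the same mapping, which is what makes it able to take over.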
A computing system includes memory storing a table including a plurality of entries arranged in a plurality of rows and a plurality of columns. The memory may further store a knowledge graph in which semantic data is stored. The computing system may further include a processor configured to, at a metadata inference machine learning model, generate inferred table metadata based at least in part on the entries included in the table and the semantic data included in the knowledge graph. The inferred table metadata may include one or more row type classifications of one or more respective rows or one or more column type classifications of one or more respective columns. The processor may be further configured to generate a metadata display interface element that visually represents the inferred table metadata and output the metadata display interface element for display at a graphical user interface (GUI).
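A toy stand-in for the inference step can make the data shapes concrete. The abstract describes a trained machine learning model; the majority-vote heuristic, the miniature knowledge graph, and the type labels below are illustrative assumptions only.

```python
# A toy knowledge graph mapping entity strings to semantic types.
KNOWLEDGE_GRAPH = {"Seattle": "City", "Paris": "City", "2024": "Year"}

def infer_column_types(table):
    """Classify each column by majority vote over the semantic types of its
    entries found in the knowledge graph; unmatched columns get "Unknown"."""
    n_cols = len(table[0])
    types = []
    for c in range(n_cols):
        votes = [KNOWLEDGE_GRAPH.get(row[c]) for row in table]
        votes = [v for v in votes if v is not None]
        types.append(max(set(votes), key=votes.count) if votes else "Unknown")
    return types
```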
Some storage systems are configured with VDL (valid data length) type controls that are implemented on a per cluster basis and, in some instances, on a sub-cluster basis, rather than simply a per file basis. In some instances, per-cluster VDL metadata for the storage clusters is stored and referenced at the edge data volume nodes of a distributed network for the storage system rather than, and/or without, storing or synchronizing the per-cluster VDL metadata at a master node that manages the corresponding storage clusters for the different data volume nodes. Sequence controls are also provided and managed by the master node and synchronized with the edge data volume nodes to further control access to data contained in the storage clusters.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
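The core per-cluster VDL behaviour described above reduces to a small invariant: each cluster remembers how many of its bytes are valid, and reads past that length return zeros rather than stale data. The cluster size, method names, and in-memory representation below are illustrative assumptions.

```python
CLUSTER_SIZE = 8  # illustrative cluster size in bytes

class Cluster:
    """A single storage cluster with its own valid-data-length (VDL) marker."""

    def __init__(self):
        self.data = bytearray(CLUSTER_SIZE)
        self.vdl = 0  # number of bytes valid in this cluster

    def write(self, offset, payload):
        self.data[offset:offset + len(payload)] = payload
        self.vdl = max(self.vdl, offset + len(payload))

    def read(self, offset, length):
        out = bytearray(length)
        valid = max(0, min(self.vdl - offset, length))
        out[:valid] = self.data[offset:offset + valid]
        return bytes(out)  # bytes beyond the VDL read back as zeros
```

Keeping one such marker per cluster (rather than per file) is what lets the edge data volume nodes enforce the control locally, without synchronizing it through the master node.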
A method, computer program product, and computing system for estimating noise spectrum from a target audio signal segment. An acoustic neural embedding is generated from the target audio signal segment. An augmented audio signal segment is generated with background acoustic properties of the target audio signal segment by processing an input audio signal segment with the noise spectrum and the acoustic neural embedding using a neural network.
G10L 21/0264 - Noise filtering characterised by the type of parameter measurement, e.g. correlation techniques, zero crossing techniques or predictive techniques
G10L 25/18 - Speech or voice analysis techniques not restricted to a single one of the groups, characterised by the type of extracted parameters, the extracted parameters being spectral information of each sub-band
G10L 25/30 - Speech or voice analysis techniques not restricted to a single one of the groups, characterised by the analysis technique using neural networks
According to examples, an apparatus may include a memory on which is stored machine-readable instructions that may cause a processor to determine, for each of a plurality of members in a group, a respective least privilege level for a resource and determine, based on the determined respective least privilege levels, a privilege level to be assigned to the group for the resource. The instructions may also cause the processor to assign the determined privilege level to the group for the resource and apply the assigned privilege level to the members of the group for the resource.
G06F 12/14 - Protection against unauthorised use of memory
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
G06F 9/46 - Multiprogramming arrangements
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
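The group-privilege determination in the abstract above can be sketched briefly. One plausible policy, assumed here for illustration, is to assign the group the maximum of the members' least required levels, so that every member retains the access it needs; the numeric level scale and function names are also assumptions.

```python
def group_privilege(member_least_levels):
    """Given a mapping member -> least privilege level that member needs for
    a resource (higher = more access), pick the level to assign to the group."""
    return max(member_least_levels.values())

def apply_to_group(member_least_levels):
    """Assign the determined level to the group and apply it to every member."""
    level = group_privilege(member_least_levels)
    return {member: level for member in member_least_levels}
```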
A system for providing a personalized assistant for network-based communication services utilizes one or more processors and memory to enhance user interaction through intelligent query processing. The system receives queries from computing devices and processes them using an intermediate model that analyzes communication session transcripts, user data, and session metadata alongside shared content from the communication service. The intermediate model generates prompt templates with content selection criteria to identify relevant transcript portions and shared content, constructing targeted prompts for a generative language model. The system handles various content types including files, screen sharing, and chat messages through rule-based engines, while employing transcript partitioning and rolling summary techniques for extended sessions. Advanced features include predictive follow-up query generation with response caching, role-based prompt customization, and feedback-driven learning for continuous improvement. The generative language model output is translated into personalized responses and presented to users.
G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
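The intermediate-model step in the assistant abstract — selecting relevant transcript portions and constructing a targeted prompt for the generative model — can be sketched as below. The keyword-overlap selection rule, the prompt template, and the parameter names are all illustrative assumptions; the real system applies content selection criteria produced by the intermediate model.

```python
import re

def build_prompt(query, transcript_lines, max_lines=3):
    """Keep transcript lines sharing at least one word with the query, then
    wrap them in a prompt template for a generative language model."""
    terms = set(re.findall(r"\w+", query.lower()))
    relevant = [line for line in transcript_lines
                if terms & set(re.findall(r"\w+", line.lower()))][:max_lines]
    return "Context:\n" + "\n".join(relevant) + f"\n\nQuestion: {query}\nAnswer:"
```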
95.
POLYNUCLEOTIDE ENCAPSULATION AND PRESERVATION USING SELF-ASSEMBLING MEMBRANES
Polynucleotides such as DNA are stored inside vesicles formed from self-assembling membranes. The vesicles may be protocells, liposomes, micelles, colloidosomes, proteinosomes, or coacervates. The vesicles may include surface functionalization to improve polynucleotide encapsulation and/or to bind polynucleotides having specific sequences. Encapsulation in vesicles provides protection for the polynucleotides. Additional protection is provided by addition of one or more stabilizers. The stabilizer may be nucleic-acid stabilizers that stabilize the polynucleotides or may be a protective structural layer around the vesicles such as a layer of silica. A process for stably storing polynucleotides in vesicles and a process for recovering stored polynucleotides from vesicles are both disclosed. The polynucleotides may be used for storage of digital information.
A61K 31/7088 - Compounds having three or more nucleosides or nucleotides
A61K 47/14 - Esters of carboxylic acids, e.g. fatty acid monoglycerides, medium-chain triglycerides, parabens or PEG fatty acid esters
A61K 47/18 - Amines; Amides; Ureas; Quaternary ammonium compounds; Amino acids; Oligopeptides having up to five amino acids
A61K 47/26 - Carbohydrates, e.g. polyols or sugar alcohols, aminosugars, nucleic acids, mono-, di- or oligosaccharides; Derivatives thereof, e.g. polysorbates, sorbitan fatty acid esters or glycyrrhizin
A61K 47/36 - Polysaccharides; Derivatives thereof, e.g. gums, starch, alginate, dextrin, hyaluronic acid, chitosan, inulin, agar or pectin
A61K 47/62 - Medicinal preparations characterised by the non-active ingredients used, e.g. carriers or inert additives; Targeting or modifying agents chemically bound to the active ingredient, the non-active ingredient being chemically bound to the active ingredient, e.g. polymer-drug conjugates, the non-active ingredient being a modifying agent, the modifying agent being a protein, peptide or polyamino acid
A61K 47/69 - Medicinal preparations characterised by the non-active ingredients used, e.g. carriers or inert additives; Targeting or modifying agents chemically bound to the active ingredient, the non-active ingredient being chemically bound to the active ingredient, e.g. polymer-drug conjugates, the conjugate being characterised by its physical or galenical form, e.g. emulsion, particle, inclusion complex, stent or kit
96.
REAL-TIME MULTILINGUAL INTERPRETER FOR ONLINE MEETINGS
The techniques disclosed herein provide a real-time natural language processing (NLP) system for translating a speech audio input containing multiple natural languages (e.g., English, Mandarin, and French) into a translated audio output in a specific language (e.g., English). In a real-time translation context such as online meetings, feasibility can be dependent on achieving low latency to minimize the perceptible delay between the original speaker and the translated output. As such, the proposed techniques utilize an end-to-end (E2E) model in a translation module that implements the aspects of automatic speech recognition (ASR) in one machine learning model. In this way, the size of the end-to-end model, often referred to as the model footprint, is significantly smaller than that of a cascaded system that utilizes multiple distinct machine learning models. Consequently, the computing resource consumption of the end-to-end model is likewise reduced in relation to a cascaded system.
Techniques are described herein that are capable of responding to a query in a developer tool using semantically related keywords in relevant code chunks. A user-generated query regarding a location of an element in a codebase of a software development project is received. The codebase is parsed into code chunks. Semantically related keywords, including keywords from the user-generated query and other keywords that are semantically related to the keywords, are identified. Relevant code chunks are selected from the code chunks based on satisfaction of a relevancy criterion regarding the user-generated query. Execution of an instruction is triggered, which causes a visual representation of a response to the user-generated query to be generated. The execution of the instruction causes the visual representation to include at least portions of the relevant code chunks and further causes at least a subset of the semantically related keywords to be highlighted in the portions.
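The retrieval step in the abstract above can be sketched with a small stand-in for semantic relatedness. The synonym map, the hit-count relevancy criterion, and all names below are illustrative assumptions; the described system identifies semantically related keywords rather than consulting a fixed table.

```python
# A toy relatedness table standing in for semantic keyword expansion.
RELATED = {"delete": {"remove", "drop"}, "user": {"account"}}

def expand(keywords):
    """Augment query keywords with semantically related ones."""
    out = set(keywords)
    for k in keywords:
        out |= RELATED.get(k, set())
    return out

def relevant_chunks(query, chunks, min_hits=1):
    """Rank code chunks by how many expanded keywords they contain,
    keeping only those meeting the relevancy criterion."""
    keywords = expand(query.lower().split())
    scored = []
    for chunk in chunks:
        hits = sum(1 for k in keywords if k in chunk.lower())
        if hits >= min_hits:
            scored.append((hits, chunk))
    return [chunk for _, chunk in sorted(scored, reverse=True)]
```

The expanded keywords are also what the tool would highlight in the returned chunk portions.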
A method of analysing a hollow core antiresonant optical fibre having an inner cladding comprising at least one capillary defined by a wall with a wall thickness comprises: directing light onto the fibre for interaction with at least one surface of the capillary wall; detecting a portion of the light which has interacted with the at least one surface of the capillary wall to determine a power level of the detected portion; and using the power level to deduce information regarding the wall thickness.
G01B 11/06 - Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness, for measuring thickness
C03B 37/012 - Manufacture of preforms for drawing fibres or filaments
C03B 37/027 - Fibres composed of different sorts of glass, e.g. optical fibres
Systems and methods are provided for automatic recovery of node resource memory devices. A platform basic input/output system ("BIOS") of a node collects, from a node resource of the node, operational state information for memory components of a memory device, and determines whether at least one memory component is undetected. If so, the platform BIOS sends a notification of the undetected memory component(s) to a controller of the node that relays the notification to a control plane fabric ("CPF") agent in a control plane. The CPF agent automatically determines a potential cause and a potential resolution, including memory device reset, firmware updates, etc. The CPF agent sends commands to the controller that cause the platform BIOS to initiate a recovery process for the plurality of memory components of the memory device, based on the potential resolution.
G06F 11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to an identical result