Systems and methods for generating an AR image are described herein. A physical camera is used to capture a video of a physical object in front of a physical background. The system then accesses data defining a virtual environment and selects a first position of a virtual camera in the virtual environment. While capturing the video, the system displays captured video of the physical object, such that the physical background is replaced with a view of the virtual environment from the first position of the virtual camera. In response to detecting a movement of the physical camera, the system selects a second position of the virtual camera in the virtual environment based on the detected movement. The system then displays the captured video of the physical object, wherein the view of the physical background is replaced with a view of the virtual environment from the second position of the virtual camera.
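A minimal sketch, under assumed names, of how the detected physical-camera movement described above might be mapped onto the second virtual-camera position used to render the background; the class and scaling parameter are illustrative, not taken from the abstract.

```python
# Hypothetical sketch: apply a detected physical-camera movement (optionally
# scaled) to the current virtual-camera position to obtain the second position.
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    z: float

def update_virtual_camera(current: Pose, physical_delta: Pose, scale: float = 1.0) -> Pose:
    """Select a second virtual-camera position from the first position and the
    detected movement of the physical camera."""
    return Pose(current.x + scale * physical_delta.x,
                current.y + scale * physical_delta.y,
                current.z + scale * physical_delta.z)

# Example: the physical camera moved 0.2 m to the right.
first_position = Pose(0.0, 1.6, 0.0)
second_position = update_virtual_camera(first_position, Pose(0.2, 0.0, 0.0))
print(second_position)  # Pose(x=0.2, y=1.6, z=0.0)
```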
Systems and methods are presented herein for providing a user with a notification, or access to content, based on the user's factual discourse during a conversation with other users. A first user may provide a first statement. A second user may provide a second statement. An application determines the first and the second statement are associated with first and second user profiles, respectively. The application analyzes the elements of each respective statement and determines there is a conflict between the user statements. In response to determining there is a conflict between the respective statements, the application generates a respective search query to verify each respective statement. When the application determines there is an answer that resolves the conflict between the respective statements, the application generates a notification for the users that comprises the answer that resolves the conflict and may include access to content affirming the answer.
G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of the groups, specially adapted for particular use, for comparison or discrimination
3.
SYSTEMS AND METHODS FOR RECOMMENDING CONTENT ITEMS BASED ON AN IDENTIFIED POSTURE
Systems and methods are provided for generating a content item recommendation based on an identified posture. An input associated with a content item delivery service is received at a computing device. A capture of a user is received, and a digital representation of the user is generated based on the capture of the user. A posture of the user is determined based on the digital representation of the user, and a content item genre is identified based on the determined posture. A content item recommendation that is based on the identified genre is generated and output.
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available on the internal hard disk
H04N 21/466 - Learning process for intelligent management, e.g. learning user preferences for recommending movies
4.
SYSTEMS AND METHODS FOR DISAMBIGUATING A VOICE SEARCH QUERY
Systems and methods are described herein for disambiguating a voice search query that contains a command keyword by determining whether the user spoke a quotation from a content item and whether the user mimicked or approximated the way the quotation is spoken in the content item. The voice search query is transcribed into a string, and an audio signature of the voice search query is identified. Metadata of a quotation matching the string is retrieved from a database that includes audio signature information for the string as spoken within the content item. The audio signature of the voice search query is compared with the audio signature information in the metadata to determine whether the audio signature matches the audio signature information in the quotation metadata. If a match is detected, then a search result comprising an identifier of the content item from which the quotation comes is generated.
G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
G06F 16/635 - Filtering based on additional data, e.g. user or group profiles
G06F 16/638 - Presentation of query results
G06F 16/683 - Retrieval of data characterised by the use of metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content
G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of the groups, specially adapted for particular use, for comparison or discrimination
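The signature comparison described in the abstract above can be illustrated with a small sketch. The vector representation of the signatures, the cosine-similarity test, and the threshold are assumptions for illustration, not the method claimed in the application.

```python
# Hypothetical sketch: compare a voice query's audio signature with the
# signature stored in a quotation's metadata, and build a search result
# identifying the content item when they match.
import numpy as np

def signatures_match(query_sig: np.ndarray, quote_sig: np.ndarray,
                     threshold: float = 0.9) -> bool:
    """Cosine similarity between the query signature and the signature of the
    quotation as spoken within the content item."""
    sim = float(np.dot(query_sig, quote_sig) /
                (np.linalg.norm(query_sig) * np.linalg.norm(quote_sig)))
    return sim >= threshold

quotation_metadata = {"content_id": "movie-42", "signature": np.array([0.8, 0.1, 0.3])}
query_signature = np.array([0.79, 0.12, 0.28])

if signatures_match(query_signature, quotation_metadata["signature"]):
    search_result = {"identifier": quotation_metadata["content_id"]}
    print(search_result)
```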
5.
SYSTEMS AND METHODS FOR DECENTRALIZED GENERATION OF A SUMMARY OF A VIRTUAL MEETING
Systems, methods and apparatuses are described for providing a summary associated with a virtual meeting. In response to detecting a break in presence (BIP) at a first computing device for a first user in the virtual meeting, each of one or more second computing devices participating in the virtual meeting and corresponding to at least one second user may be caused to locally monitor reactions of the corresponding at least one second user to the virtual meeting during the BIP. A server may receive one or more parameters associated with the locally monitored reactions and corresponding to a portion of the virtual meeting during the BIP. In response to determining, based on the received one or more parameters, to generate a summary associated with a corresponding portion of the virtual meeting during the BIP, the summary may be generated and provided to the first computing device.
Systems and methods for presenting user-selectable options for parental control in response to detecting a triggering action by a user are disclosed. A system generates for output a first content item on a device. The system identifies a first user and a second user in proximity to the device and determines that a first gesture is performed by the first user wherein the first gesture is covering the eyes of the second user. In response to determining that the first gesture is performed, the system presents a selectable option for a user input such as (a) skipping a portion of the first content item; (b) lowering the volume; (c) removing the video of the first content item; or (d) presenting a second content item instead of presenting the first content item. In response to receiving a user input selecting the selectable option, the system performs an action corresponding to the selectable option.
Systems and methods for generating a graphically animated audience are disclosed. Biometric data is captured via a sensor during display of content via a first device. The biometric data is stored in association with metadata for the content, and is mapped to a graphical representation. Based on the mapping of the biometric data to the graphical representation and the metadata, a graphical animation is generated for display in synchronization with displaying of the content via a second device.
G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available on the internal hard disk
H04N 21/84 - Generation or processing of descriptive data, e.g. content descriptors
8.
SUPPLEMENTAL AUDIO GENERATION SYSTEM IN AN AUDIO-ONLY MODE
Systems and methods for generating supplemental audio for an audio-only mode are disclosed. For example, a system generates for output a content item that includes video and audio. In response to determining that an audio-only mode is activated, the system determines that a portion of the content item is not suitable to play in the audio-only mode. In response to determining that the portion of the content item is not suitable to play in the audio-only mode, the system generates for output supplemental audio associated with the content item during the portion of the content item.
H04N 21/439 - Processing of audio elementary streams
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
Methods and systems are provided for streaming a media asset with an adaptive bitrate transcoder. A server receives, from a client device, a first request for a first portion of a plurality of portions of the media asset to be transcoded at a first bitrate. The server then starts to transcode the plurality of portions at the requested first bitrate to generate a plurality of corresponding transcoded portions. The server updates a header of a transcoded portion to include: 1) a transcode latency value; and 2) a count value indicating a number of available pre-transcoded portions of the media asset at the time the first request was received. The server then transmits the transcoded portion to the client device. The client device then determines a second bitrate based on the transcode latency value included in the header of the transcoded portion corresponding to the first portion.
H04N 21/462 - Content or additional data management, e.g. creating a master electronic programme guide from data received from the Internet and a head-end, or controlling the complexity of a video stream by scaling the resolution or bit rate based on the client's capabilities
H04L 65/75 - Media network packet handling
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs, involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/2662 - Controlling the complexity of the video stream, e.g. by scaling the resolution or bit rate of the video stream based on the client capabilities
H04N 21/433 - Content storage operation, e.g. storage operation in response to a pause request or caching operations
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs, involving reformatting operations of video signals for household redistribution, storage or real-time display
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
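The header fields and the client-side bitrate decision described in the transcoder abstract above can be sketched as follows. The field names, the portion duration, and the halving rule are assumptions for illustration only.

```python
# Illustrative sketch (not the claimed implementation): a transcoded-portion
# header carrying a transcode latency value and a count of pre-transcoded
# portions, plus a simple client rule that derives the second bitrate from it.
from dataclasses import dataclass

@dataclass
class PortionHeader:
    transcode_latency_ms: int   # time the server took to transcode this portion
    pre_transcoded_count: int   # portions already transcoded when the request arrived

def select_next_bitrate(current_bitrate: int, header: PortionHeader,
                        portion_duration_ms: int = 4000) -> int:
    """Hypothetical rule: if transcoding is slower than real time and there is
    no buffer of pre-transcoded portions, step the bitrate down; else keep it."""
    if header.transcode_latency_ms > portion_duration_ms and header.pre_transcoded_count == 0:
        return max(current_bitrate // 2, 500_000)
    return current_bitrate

header = PortionHeader(transcode_latency_ms=6200, pre_transcoded_count=0)
print(select_next_bitrate(3_000_000, header))  # 1500000
```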
10.
SYSTEMS AND METHODS FOR PROVIDING A SLOW MOTION VIDEO STREAM CONCURRENTLY WITH A NORMAL-SPEED VIDEO STREAM UPON DETECTION OF AN EVENT
Methods and systems for providing a video stream along with a slow motion video showing a particular event depicted in the video stream are described herein. The method includes generating a first video stream and generating a second video stream, which is a slow motion video stream, from the first video stream by modifying a playback speed of the first video stream. The method includes monitoring content of the first video stream to identify an event trigger of a predefined set of event triggers. Each event trigger indicates a presence in the first video stream of an event that is to be generated for display using the second video stream. The method includes determining, based on the identifying of the event trigger, to transmit the second video stream along with the first video stream, and simultaneously transmitting both the first video stream and the second video stream.
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs, involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
G06F 16/78 - Retrieval of data characterised by the use of metadata, e.g. metadata not derived from the content or metadata generated manually
G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
G11B 27/00 - Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
H04N 21/2365 - Multiplexing of several video streams
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring server load, available bandwidth or upstream requests
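The event-trigger decision in the slow-motion abstract above reduces to a small check. The trigger labels and stream identifiers below are assumptions used purely for illustration.

```python
# A minimal sketch of deciding when to transmit the slow-motion stream
# alongside the normal-speed stream, based on detected event triggers.
EVENT_TRIGGERS = {"goal", "crash", "photo_finish"}   # illustrative predefined set

def streams_to_transmit(detected_labels, normal_stream, slow_motion_stream):
    """Return both streams when any detected label matches an event trigger,
    otherwise only the normal-speed stream."""
    if EVENT_TRIGGERS.intersection(detected_labels):
        return [normal_stream, slow_motion_stream]
    return [normal_stream]

print(streams_to_transmit({"goal", "crowd"}, "stream_1x", "stream_0.25x"))
# ['stream_1x', 'stream_0.25x']
```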
Systems and methods are presented for filtering unwanted sounds from a media asset. Voice profiles of a first character and a second character are generated based on a first voice signal and a second voice signal received from a media device during a presentation. The user provides a selection to avoid a certain sound or voice associated with the second character. During a presentation of the media asset, a second audio segment is analyzed to determine, based on the voice profile of the second character, whether the second voice signal includes the voice of the second character. If so, output characteristics of the second voice signal are adjusted to reduce the sound.
H04N 21/439 - Processing of audio elementary streams
G06F 16/635 - Filtering based on additional data, e.g. user or group profiles
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs, involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
12.
SYSTEMS AND METHODS FOR IMPROVING MEDIA CONTENT PLAYBACK AT A PLAYBACK DEVICE
Systems and methods for improving audio playback at a playback device are described herein. In some embodiments, a system transitions between causing playback of a radio broadcast stream to causing playback of device music, such as in response to determining that a quality of the radio broadcast stream is below a threshold value. In some embodiments, a system selects songs to play based on device preferences of a plurality of different media devices. In some embodiments, a system selects a device from which to retrieve songs for playback based on one or more rules.
Systems and methods are described for generating and presenting content recommendations to new users during or immediately after the onboarding process, before any history of the new user's viewed content is available. A machine learning or other model may be trained to determine clusters of content genre values corresponding to genres of content watched by viewers. Clusters are thus associated with popular groupings of content genres viewed by many users. Clusters representing popular groupings of content genres may be selected for new users, and content corresponding to the selected clusters may be recommended to the new users as part of their onboarding process. A sufficient amount of content may be selected to fully populate any content recommendation portion of a new user onboarding page.
G06F 16/9535 - Search customisation based on user profiles and personalisation
G06F 18/23213 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions, with a fixed number of clusters, e.g. K-means clustering
H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content, or for administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies or resolving scheduling conflicts
H04N 21/466 - Learning process for intelligent management, e.g. learning user preferences for recommending movies
H04N 21/482 - End-user interface for program selection
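The clustering step in the onboarding-recommendation abstract above can be sketched with K-means (consistent with the G06F 18/23213 classification). The genre vocabulary, sample data, and cluster count are assumptions for illustration.

```python
# Hypothetical sketch: cluster viewers' genre-count vectors, then use the most
# populated cluster as the seed profile for recommendations to a new user.
import numpy as np
from sklearn.cluster import KMeans

# Rows = existing viewers, columns = watch counts per genre
# [drama, comedy, sports, sci-fi] (illustrative data).
genre_counts = np.array([
    [12, 3, 0, 1],
    [10, 4, 1, 0],
    [0, 1, 14, 2],
    [1, 0, 11, 3],
    [2, 9, 1, 8],
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(genre_counts)
cluster_sizes = np.bincount(kmeans.labels_, minlength=3)
most_popular = int(np.argmax(cluster_sizes))
print("seed genre profile for new users:", kmeans.cluster_centers_[most_popular])
```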
14.
METHODS AND SYSTEMS FOR IMPLEMENTING A LOCKED MODE FOR VIEWING MEDIA ASSETS
Methods and systems that provide an interactive media guidance application having a locked mode for viewing media assets. In the locked mode, the interactive media guidance application may provide media assets suited to a certain audience. The interactive media guidance application may determine suitable media assets for the locked mode based on media assets viewed by other users having characteristics similar to the user of the interactive media guidance application. In the locked mode, the interactive media guidance application may allow access to only certain media assets and/or limit the time period for which the media assets are presented.
H04N 21/454 - Content filtering, e.g. blocking advertisements
H04N 21/258 - Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, or processing of multiple end-users' preferences to derive collaborative data
H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content, or for administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies or resolving scheduling conflicts
H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
H04N 21/475 - End-user interface for inputting end-user data, e.g. a personal identification number [PIN] or preference data
H04N 21/482 - End-user interface for program selection
H04N 21/6543 - Transmission by server directed to the client for forcing some client operations, e.g. recording
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
15.
SYSTEMS AND METHODS TO CURATE NOTIFICATIONS FROM UNSUBSCRIBED SOCIAL MEDIA ACCOUNTS
Methods and systems for curating notifications from unfollowed accounts are described herein. The system tracks that a first account previously followed a second account and subsequently unfollowed the second account. The system identifies an interest of the first account and monitors the activities of the second account for activity that matches the interest. If there is a match between the interest and an activity of the second account, the system notifies the first account of the activity. These methods and systems provide the user with relevant information from unfollowed accounts.
G06F 16/9536 - Search customisation based on social or collaborative filtering
H04L 51/212 - Monitoring or handling of messages using filtering or selective blocking
H04L 51/224 - Monitoring or handling of messages by providing notification on incoming messages, e.g. pushed notifications of received messages
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, for supporting social networking services
16.
SYSTEMS AND METHODS FOR PROVIDING BINGE-WATCHING RECOMMENDATIONS
Systems and methods are provided for generating and presenting content series recommendations to a particular user who has just completed binge-watching a particular content series. The recommendations are based on content series consumed by other users who have also consumed the content series just completed by the user and who share behavioral attributes with the user.
H04N 21/466 - Learning process for intelligent management, e.g. learning user preferences for recommending movies
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available on the internal hard disk
H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content, or for administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies or resolving scheduling conflicts
Systems and associated methods are described for providing content recommendations. The system accesses content item consumption data for a plurality of users subscribed to a media service. Then, the system determines that a first subset of the plurality of users has unsubscribed from the media service and that a second subset of the plurality of users has not unsubscribed from the media service. The system identifies a time slot typical for the first subset of users and atypical for the second subset of users based on content item consumption data of the first subset of users and content item consumption data of the second subset of users. In response to determining that a user is consuming a first content item at the identified time slot, the system generates for display a recommendation for a second content item that is scheduled for a different time slot.
H04N 21/466 - Learning process for intelligent management, e.g. learning user preferences for recommending movies
G06Q 30/0201 - Market modelling; Market analysis; Collecting market data
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available on the internal hard disk
H04N 21/458 - Scheduling content for creating a personalised stream, e.g. by combining a locally stored advertisement with an incoming stream; Updating operations, e.g. for operating system modules
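The time-slot analysis in the abstract above (slots typical for users who unsubscribed but atypical for retained users) can be sketched as a simple set difference over per-group viewing hours. The data layout, the share threshold, and the sample values are assumptions for illustration.

```python
# Hypothetical sketch: find viewing hours common among churned users but
# uncommon among retained users; those hours can trigger a recommendation
# for content scheduled in a different time slot.
from collections import Counter

def typical_slots(consumption, min_share=0.5):
    """consumption: {user_id: [hour, hour, ...]}. Returns hours watched by at
    least `min_share` of the users in the group."""
    users_per_hour = Counter()
    for hours in consumption.values():
        for hour in set(hours):
            users_per_hour[hour] += 1
    n = len(consumption)
    return {h for h, c in users_per_hour.items() if c / n >= min_share}

churned = {"u1": [22, 23], "u2": [22], "u3": [22, 7]}
retained = {"u4": [20], "u5": [19, 20], "u6": [20, 21]}

risky_slots = typical_slots(churned) - typical_slots(retained)
print(risky_slots)  # {22}: viewing at this hour could prompt the recommendation
```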
18.
METHODS AND SYSTEMS TO PROVIDE A PLAYLIST FOR SIMULTANEOUS PRESENTATION OF A PLURALITY OF MEDIA ASSETS
Systems and methods are described herein for generating a playlist for a simultaneous presentation of a plurality of media assets. The system retrieves a user preference associated with a user profile and receives a selection of a first media asset and a second media asset from the plurality of media assets for presentation on a user device. The system parses the respective audio streams of the first media asset and the second media asset to identify one or more preferred audio segments based on the user preference and generates the playlist of the identified one or more preferred audio segments. Based on the generated audio playlist, the system generates, for presentation on the user device, the video stream for each of the first media asset and the second media asset together with the playlist of the identified one or more preferred audio segments.
H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content, or for administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies or resolving scheduling conflicts
H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
H04N 21/439 - Processing of audio elementary streams
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available on the internal hard disk
H04N 21/482 - End-user interface for program selection
H04N 21/488 - Data services, e.g. news ticker
H04N 21/84 - Generation or processing of descriptive data, e.g. content descriptors
19.
METHODS AND SYSTEMS FOR MANAGING LOCAL AND REMOTE DATA
Embodiments of the present disclosure include systems for and methods of playing media based on local and remote data. A method falling within the disclosure includes: storing, in a particular entry of a data structure stored in the local memory of the device, a media title, metadata, and an assigned identifier of the media streaming application via which a content item is received at the device; receiving a search request for the media title; searching the internet for a web page via which media associated with the media title can be played; identifying a particular entry in the data structure that comprises the media title to access the metadata stored in the data structure; comparing the metadata in the data structure and the metadata of the web page; and launching the media streaming application or opening a web application to play the media.
G06Q 30/0207 - Discounts or incentives, e.g. coupons or rebates
G06F 16/9535 - Search customisation based on user profiles and personalisation
G06K 7/14 - Methods or arrangements for sensing record carriers by electromagnetic radiation, e.g. optical sensing; Methods or arrangements for sensing record carriers by corpuscular radiation, using light without selection of wavelength, e.g. sensing reflected white light
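The local data-structure entry and the launch-or-open decision in the abstract above can be sketched as follows. The field names, metadata shape, and comparison rule are assumptions for illustration only.

```python
# Hypothetical sketch: a local entry holding title, metadata and the streaming
# app identifier, compared against a found web page to decide whether to
# launch the app or open a web application.
local_entries = [
    {"media_title": "Example Show",
     "metadata": {"year": 2021, "type": "series"},
     "app_id": "com.example.streamer"},
]

def handle_search(title, web_page):
    entry = next((e for e in local_entries if e["media_title"] == title), None)
    if entry and entry["metadata"] == web_page["metadata"]:
        return f"launch:{entry['app_id']}"     # play via the media streaming application
    return f"open_web:{web_page['url']}"       # fall back to a web application

page = {"url": "https://example.com/watch/example-show",
        "metadata": {"year": 2021, "type": "series"}}
print(handle_search("Example Show", page))  # launch:com.example.streamer
```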
20.
METHODS AND SYSTEMS FOR SELECTING A 3D OBJECT FOR DISPLAY IN AN EXTENDED REALITY ENVIRONMENT
Systems and methods are described for selecting a 3D object for display in an extended reality environment. A space in an extended reality environment is determined for placement of a 3D object. A set of space parameters is determined, comprising an amount of memory available for generating the display of the extended reality environment and an amount of computing power available for generating the display of the extended reality environment. The 3D object is selected for display in the space based on the amount of memory and the amount of computing power available.
Systems and methods are described for identifying a plurality of candidate interactive sessions for a user with a user profile to join, each candidate interactive session being associated with a plurality of user profiles. A digital representation of the user may be generated, and the digital representation of the user may be caused to join each of the plurality of candidate interactive sessions. The systems and methods may monitor, in each candidate interactive session, behavior of digital representations of each of the plurality of user profiles associated with the candidate interactive session in relation to the digital representation of the user. The systems and methods may generate, based on the monitoring, a social inclusivity score for each of the plurality of candidate interactive sessions. A recommended interactive session may be selected and provided based on the corresponding social inclusivity score for each candidate interactive session.
H04L 67/131 - Protocols for games, networked simulations or virtual reality
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
H04L 65/401 - Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time-sensitive sessions, e.g. white board sharing or spawning of a subconference
22.
Ecosystem for NFT Trading in Public Media Distribution Platforms
A computer-implemented method and an apparatus are provided for presenting an option to purchase an NFT based on a scene of a media asset to an advertiser. One example computer-implemented method includes obtaining, from a first source, a scene of a media asset, determining that the scene comprises a product, obtaining, from a second source, a non-fungible token (NFT) based on the scene, matching the NFT to an advertiser based on the product, and presenting an option to purchase the matched NFT to the advertiser.
G06V 10/774 - Generating sets of training patterns; Processing of image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA], independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation; Bootstrap methods, e.g. bagging or boosting
23.
Systems and methods for decentralized generation of a summary of a virtual meeting
Systems, methods and apparatuses are described for providing a summary associated with a virtual meeting. In response to detecting a break in presence (BIP) at a first computing device for a first user in the virtual meeting, each of one or more second computing devices participating in the virtual meeting and corresponding to at least one second user may be caused to locally monitor reactions of the corresponding at least one second user to the virtual meeting during the BIP. A server may receive one or more parameters associated with the locally monitored reactions and corresponding to a portion of the virtual meeting during the BIP. In response to determining, based on the received one or more parameters, to generate a summary associated with a corresponding portion of the virtual meeting during the BIP, the summary may be generated and provided to the first computing device.
Systems and methods are described for identifying a plurality of candidate interactive sessions for a user with a user profile to join, each candidate interactive session being associated with a plurality of user profiles. A digital representation of the user may be generated, and the digital representation of the user may be caused to join each of the plurality of candidate interactive sessions. The systems and methods may monitor, in each candidate interactive session, behavior of digital representations of each of the plurality of user profiles associated with the candidate interactive session in relation to the digital representation of the user. The systems and methods may generate, based on the monitoring, a social inclusivity score for each of the plurality of candidate interactive sessions. A recommended interactive session may be selected and provided based on the corresponding social inclusivity score for each candidate interactive session.
H04L 67/131 - Protocols for games, networked simulations or virtual reality
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
H04L 65/401 - Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time-sensitive sessions, e.g. white board sharing or spawning of a subconference
25.
SYSTEMS AND METHODS FOR ENABLING NON-FUNGIBLE TOKENS (NFTs) IN A VIRTUAL/METAVERSE ENVIRONMENT
Systems and methods are disclosed for enabling non-fungible tokens (NFTs) in a virtual or metaverse environment by connecting an NFT marketplace to the virtual or metaverse environment and providing tools to display, sell, broker, and trade NFTs based on matching of user interests. The methods generate a weighted taxonomy based on NFTs displayed in the virtual environment. A new user's interest in the NFTs is determined in multiple ways, including whether the new user owns NFTs that share characteristics with the taxonomy-associated NFTs. A match is determined based on the new user's interests in relation to the weighted taxonomy. Upon a match, guidance is provided for avatars of the new user and other NFT owners to virtually meet. Separate servers, locations, and ingress points may be determined to facilitate meetings and buy/sell discussions. If an NFT sale or trade is executed, it is recorded in the blockchain.
G06Q 20/12 - Payment architectures specially adapted for electronic shopping systems
G06Q 20/36 - Payment architectures, schemes or protocols characterised by the use of specific devices, using electronic wallets or electronic money safes
G06Q 40/04 - Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange
26.
SYSTEMS AND METHODS FOR ENHANCING GROUP MEDIA SESSION INTERACTIONS
Systems and methods are provided for enabling enhanced group media session interactions. A group session for consuming a media content item is initiated between first and second computing devices, and a portion of the media content item is received at the computing devices. A reaction of a first user is captured based at least in part on receiving the portion of the media content item. A trigger condition is identified, and it is determined that the captured reaction satisfies the trigger condition. In response to determining that the captured reaction satisfies the trigger condition, a prompt that is based on the portion of the media content item and the captured reaction is generated. A computing device is identified, and at least one of the portion of the media content item, the captured reaction, or the prompt is transmitted to the identified computing device and is generated for output.
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available on the internal hard disk
H04L 65/1089 - In-session procedures by adding media; In-session procedures by removing media
H04L 65/403 - Arrangements for multi-party communication, e.g. for conferences
27.
SYSTEMS AND METHODS FOR IMPROVING GROUPCAST MEDIA STREAMING USING METRIC INFORMATION IN DEVICE-TO-DEVICE COMMUNICATIONS
Systems and methods are provided for improving communications between computing devices. A content item is received at a first computing device, and a sidelink channel is initiated between the first computing device and a second computing device. A first portion of the content item is transmitted from the first computing device to the second computing device via the sidelink channel. Feedback is generated, based on a condition of the sidelink channel, at the second computing device, and the feedback is transmitted from the second computing device to the first computing device. An action to perform is identified based on the feedback, and the action is performed.
H04W 4/06 - Selective distribution of broadcast services, e.g. multimedia broadcast/multicast service [MBMS]; Services to user groups; One-way selective calling services
H04L 1/1812 - Hybrid protocols; Hybrid automatic repeat request [HARQ]
H04W 4/02 - Services making use of location information
H04W 76/14 - Direct-mode connection setup
28.
METHODS AND SYSTEMS FOR AMENDING SENT TEXT-BASED MESSAGES
A computer-implemented method and a system are provided for amending sent text-based messages. One example computer-implemented method includes obtaining, from a source, a text-based message and receiving, at a user device, an inquiry of a portion of the text-based message. The computer-implemented method further includes requesting, from a network, data based on the inquiry of the portion of the text-based message, amending at least the portion of the text-based message based on the data, and presenting the amended portion of the text-based message at the user device.
H04M 1/72436 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality, with interactive means for internal management of messages, for text messaging, e.g. short messaging services [SMS] or e-mails
H04L 51/224 - Monitoring or handling of messages by providing notification on incoming messages, e.g. pushed notifications of received messages
29.
MULTI-CAMERA MULTIVIEW IMAGING WITH FAST AND ACCURATE SYNCHRONIZATION
There is provided a method comprising: receiving a communication from one or more devices capable of recording content, determining, using a wireless communication transceiver, a geographical location of the one or more devices, determining an orientation of the one or more devices, receiving content capturing an event and recorded on the one or more devices, storing the content capturing an event and recorded on the one or more devices, and creating, from a collection of recordings comprising at least the stored content capturing an event and recorded on the one or more devices, a single representation of the event by combining segments of the collection of recordings.
Systems and methods are described for determining a position of a user device in a field of view of a user in an XR environment. One or more display elements are generated for display in the XR environment relative to the position of the user device in the field of view. Each display element comprises a user interface of an executable application for controlling the user device.
G06F 3/04815 - Interaction taking place within an environment based on metaphors or objects with a three-dimensional display, e.g. changing the user's viewpoint with respect to the environment or object
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
Systems and methods for encoding/decoding a 3D image are provided. The system accesses image data that comprises texture data and a depth map. The system decomposes the depth map into a plurality of component depth maps (CDMs) for a plurality of depth ranges, wherein each component depth map corresponds to a focal plane of a multiple focal plane (MFP) decomposition of the image data. The system generates a plurality of encoded CDM data streams for the plurality of depth ranges, wherein each respective CDM data stream is based at least in part on a respective CDM. The system then transmits the plurality of encoded CDM data streams to a client device to cause the client device to: (a) reconstruct the depth map, and (b) generate for display or for further processing an image based on the reconstructed depth map.
H04N 19/136 - Incoming video signal characteristics or properties
H04N 19/184 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being bits, e.g. of the compressed video stream
H04N 19/436 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation, using parallelised computational arrangements
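The depth-range decomposition described in the encoding abstract above can be sketched with a few lines of numpy. The specific depth ranges and array shapes are assumptions for illustration; the sketch also shows that summing the component depth maps recovers the original depth map, mirroring the reconstruction step on the client.

```python
# A minimal numpy sketch of decomposing a depth map into component depth maps
# (CDMs), one per depth range / focal plane, and reconstructing it.
import numpy as np

def decompose_depth_map(depth_map: np.ndarray, ranges):
    """Return one CDM per (near, far) range: depth values inside the range are
    kept, everything else is zeroed."""
    cdms = []
    for near, far in ranges:
        mask = (depth_map >= near) & (depth_map < far)
        cdms.append(np.where(mask, depth_map, 0.0))
    return cdms

depth = np.array([[0.2, 0.8], [1.5, 2.9]])
cdms = decompose_depth_map(depth, ranges=[(0.0, 1.0), (1.0, 2.0), (2.0, 3.0)])
reconstructed = sum(cdms)                 # the client can rebuild the depth map
print(np.allclose(reconstructed, depth))  # True
```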
32.
Systems and Methods to Provide Otherwise Obscured Information to a User
Systems and methods are presented for enhancing media or information consumption. An example method includes identifying a movement of a first user, identifying an object external to the first user that is referenced by the movement of the first user, and, in response to determining that the object is at least partially obstructed from a field of view of a second user, generating, for display on a display that is within the field of view of the second user, a view of the object.
B60R 1/23 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles, for viewing an area outside the vehicle, e.g. the exterior of the vehicle, with a predetermined field of view
B60R 1/28 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles, for viewing an area outside the vehicle, e.g. the exterior of the vehicle, with an adjustable field of view
Methods and systems for conversion of imagery and video for three-dimensional (3D) displays, four-dimensional experiences, next-generation user interfaces, virtual reality, augmented reality, mixed reality experiences, and interactive experiences into imagery and video suitable for a two-dimensional (2D) display. A 2D display is configured to generate a 3D-like effect. 3D images are analyzed and represented by parameters including movement, depth, motion, shadow, focus, sharpness, intensity, and color. Using the parameters, the 3D images are converted to 2D images that include the 3D-like effect. The 2D images are presented to users to generate feedback. The feedback informs changes to the conversion. Artificial intelligence systems, including neural networks, are trained for improving the conversion. Models are developed for improving the conversion. Related apparatuses, devices, techniques, and articles are also described.
H04N 13/122 - Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
H04N 21/41 - Structure of client; Structure of client peripherals
Systems and methods are disclosed for enabling non-fungible tokens (NFTs) in a virtual or metaverse environment by connecting an NFT marketplace to the virtual or metaverse environment and providing tools to display, sell, broker, and trade NFTs based on matching of user interests. The methods generate a weighted taxonomy based on NFTs displayed in the virtual environment. A new user's interest in the NFTs is determined in multiple ways, including whether the new user owns NFTs that share characteristics with the taxonomy-associated NFTs. A match is determined based on the new user's interests in relation to the weighted taxonomy. Upon a match, guidance is provided for avatars of the new user and other NFT owners to virtually meet. Separate servers, locations, and ingress points may be determined to facilitate meetings and buy/sell discussions. If an NFT sale or trade is executed, it is recorded in the blockchain.
There is provided a method comprising: receiving a communication from one or more devices capable of recording content, determining, using a wireless communication transceiver, a geographical location of the one or more devices, determining an orientation of the one or more devices, receiving content capturing an event and recorded on the one or more devices, storing the content capturing an event and recorded on the one or more devices, and creating, from a collection of recordings comprising at least the stored content capturing an event and recorded on the one or more devices, a single representation of the event by combining segments of the collection of recordings.
H04N 13/282 - Image signal generators for generating image signals corresponding to three or more geometrical viewpoints, e.g. multi-view systems
A host server of a digital platform, such as a virtual world or augmented reality platform, receives a request to graphically indicate an affiliation between a user of an avatar and an organization or other entity. The host server queries an authenticating server to authenticate the affiliation between the user and the entity. Accordingly, the host server generates for display a logo, or other indication, to indicate the authenticated affiliation. Other users of the digital platform can learn whether the user of the avatar is actually affiliated with the entity. Access to digital spaces, virtual objects and some interactions of the avatar may be controlled according to an access policy of the entity.
Systems and methods for encoding/decoding a 3D image are provided. The system decomposes a depth map of image data into a plurality of component depth maps (CDMs) for a plurality of depth ranges, wherein each component depth map corresponds to a focal plane of a multiple focal plane (MFP) decomposition of the image data. The system generates a plurality of component depth map focal planes (CDMFPs) by combining each respective CDM with the depth map. The system scales data in each CDMFP by a respective scaling factor. The system generates for transmission a plurality of encoded scaled CDMFP data streams for the plurality of depth ranges, wherein each respective scaled CDMFP data stream is based at least in part on a respective scaled CDMFP.
H04N 13/282 - Image signal generators for generating image signals corresponding to three or more geometrical viewpoints, e.g. multi-view systems
H04N 13/128 - Adjusting depth or disparity
H04N 13/161 - Encoding, multiplexing or demultiplexing different image signal components
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
Systems and methods are described for encrypting and decrypting data in a distributed storage environment. Such systems and methods for encryption may divide a data payload into slices, including a first slice and a subsequent slice, employ a content encryption key and an initialization vector, encrypt the first slice using the content encryption key and the initialization vector, generate a subsequent initialization vector for the subsequent slice based upon the initialization vector and the unencrypted content of the first slice, and encrypt the subsequent slice using the subsequent initialization vector and the content encryption key. The systems and methods may then generate a list of the encrypted slices into which the data payload has been divided and publish, to a secure storage location, the slice list, the content encryption key and the initialization vector for the first slice in the slice list, with the slices output to the distributed storage environment. Systems and methods for decryption may receive, from a secure storage location, a slice list, a content encryption key, and an initialization vector, and determine the encrypted slices to be received from the distributed storage environment. The systems and methods may receive, from the distributed storage environment, at least the encrypted first slice and the encrypted subsequent slice, decrypt the first slice using the content encryption key and the initialization vector to generate a decrypted first slice, generate a subsequent initialization vector for the subsequent slice based upon the initialization vector and the decrypted first slice, decrypt the subsequent slice using the subsequent initialization vector and the content encryption key, and combine the first slice and the subsequent slice into a data payload.
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for the network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
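The chained-IV idea in the abstract above (the next slice's initialization vector depends on the previous IV and the previous slice's unencrypted content) can be sketched using the `cryptography` package. The SHA-256-based IV derivation rule, AES-CTR mode, and all names below are assumptions for illustration, not the claimed construction.

```python
# Hypothetical sketch of slice encryption/decryption with chained IVs.
import hashlib
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_slices(slices, key, first_iv):
    encrypted, iv = [], first_iv
    for plaintext in slices:
        enc = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
        encrypted.append(enc.update(plaintext) + enc.finalize())
        # The next IV depends on the current IV and the *unencrypted* slice.
        iv = hashlib.sha256(iv + plaintext).digest()[:16]
    return encrypted

def decrypt_slices(encrypted, key, first_iv):
    plaintexts, iv = [], first_iv
    for ciphertext in encrypted:
        dec = Cipher(algorithms.AES(key), modes.CTR(iv)).decryptor()
        plaintext = dec.update(ciphertext) + dec.finalize()
        plaintexts.append(plaintext)
        iv = hashlib.sha256(iv + plaintext).digest()[:16]
    return plaintexts

key, iv0 = os.urandom(32), os.urandom(16)
payload_slices = [b"slice-one", b"slice-two", b"slice-three"]
assert decrypt_slices(encrypt_slices(payload_slices, key, iv0), key, iv0) == payload_slices
```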
Systems and methods are described for encrypting and decrypting data in a distributed storage environment. Such systems and methods for encryption may divide a data payload into slices, including a first slice and a subsequent slice, employ a content encryption key and an initialization vector, encrypt the first slice using the content encryption key and the initialization vector, generate a subsequent initialization vector for the subsequent slice based upon the initialization vector and the unencrypted content of the first slice, and encrypt the subsequent slice using the subsequent initialization vector and the content encryption key. The systems and methods may then generate a list of the encrypted slices into which the data payload has been divided and publish, to a secure storage location, the slice list, the content encryption key and the initialization vector for the first slice in the slice list, with the slices output to the distributed storage environment. Systems and methods for decryption may receive, from a secure storage location, a slice list, a content encryption key, and an initialization vector, and determine the encrypted slices to be received from the distributed storage environment. The systems and methods may receive, from the distributed storage environment, at least the encrypted first slice and the encrypted subsequent slice, decrypt the first slice using the content encryption key and the initialization vector to generate a decrypted first slice, generate a subsequent initialization vector for the subsequent slice based upon the initialization vector and the decrypted first slice, decrypt the subsequent slice using the subsequent initialization vector and the content encryption key, and combine the first slice and the subsequent slice into a data payload.
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for the network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
40.
MULTI-FORMAT REPRESENTATION AND CODING OF VISUAL INFORMATION
Systems and methods are provided for using multi-format representation and coding of visual information. The system accesses image data that comprises texture data and a depth map; decomposes the depth map into a plurality of component depth maps (CDMs); and generates multiple focal planes (MFPs) comprising a plurality of focal planes. Each respective focal plane is based on the texture data and a respective CDM of the plurality of CDMs. The system selects a data subset including one or more of: (a) the texture data, (b) the depth map, (c) the plurality of CDMs, or (d) the plurality of focal planes; generates encoded data based on the selected data subset; and transmits, over a communication network, the encoded data to a client device to cause the client device to generate, for display or for further processing, an image based on the encoded data.
Systems and methods are provided herein for detecting key words provided by ancillary devices and acquiring virtual objects based on the detected key words. This may be accomplished by a system displaying an augmented reality view to a user and detecting a received message. The system can determine whether a portion of the message corresponds to an augmented reality object. In response to detecting that the portion of the message corresponds to the augmented reality object, the system can display the augmented reality object in the augmented reality view in a first format. The first format can be based on the environment around the user.
Systems and methods are provided for enabling payments in an extended reality environment. A virtual space is mapped to a physical space at an extended reality device, and a virtual payment location is identified in the virtual space, where the virtual payment location corresponds to a location in the physical space. A collision with the virtual payment location is detected and a payment is initiated based on the collision. A payment request is transmitted from the extended reality device, and confirmation of the payment is received at the extended reality device. Confirmation of the payment is generated for output.
G06Q 20/30 - Payment architectures, schemes or protocols characterised by the use of specific devices
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06Q 20/40 - Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. checking credit lines or negative lists
Systems and methods are provided for estimating quality of experience (QoE) for a media stream. The systems and methods comprise receiving a first window of frames of the media stream, receiving a second window of frames of the media stream, measuring a plurality of metrics relating to the first and the second windows of frames, aggregating the plurality of metrics for each window of frames, and determining a window quality of experience value based on the aggregated plurality of metrics.
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring server load, available bandwidth or upstream requests
H04N 21/414 - Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
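The per-window aggregation step in the QoE abstract above can be sketched as follows. The metric names, weights, and the 0-100 scale are assumptions chosen purely for illustration.

```python
# Hypothetical sketch: aggregate per-frame metrics over one window of frames
# into a single window QoE value.
def window_qoe(frames):
    """frames: list of dicts with per-frame metrics for one window."""
    n = len(frames)
    avg_bitrate = sum(f["bitrate_kbps"] for f in frames) / n
    dropped = sum(f["dropped"] for f in frames)
    stall_ms = sum(f["stall_ms"] for f in frames)
    # Simple weighted aggregation clamped to a 0-100 score (illustrative weights).
    return max(0.0, min(100.0, avg_bitrate / 80 - 2 * dropped - stall_ms / 50))

window_1 = [{"bitrate_kbps": 4000, "dropped": 0, "stall_ms": 0}] * 30
window_2 = ([{"bitrate_kbps": 2500, "dropped": 0, "stall_ms": 0}] * 28 +
            [{"bitrate_kbps": 2500, "dropped": 1, "stall_ms": 600}] * 2)
print(window_qoe(window_1), window_qoe(window_2))
```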
44.
SYSTEMS AND METHODS FOR CONCEPTUALIZING A VIRTUAL OR LIVE OBJECT
Systems and methods for conceptualizing an initial object by using a reference object are described. The methods identify an initial object such as a virtual object or a live object. One or more characteristics of the initial object may be obtained and used in a search query to search for a reference object with which a user has interacted. In some instances, the characteristics may be obtained upon determining an interest in the initial object. Reference objects that include the characteristics and have a user interaction may be identified and scored. The reference objects, if more than one is identified, may be ranked based on the score, and a reference object may be selected for display. The display may include both the selected reference object and the initial object and provide context for the initial object such that the user may be able to relate to it based on their interactions with the reference object.
G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
G06V 20/30 - Scenes; Scene-specific elements in albums, collections or shared content, e.g. social network photos or video
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
45.
SYSTEMS AND METHODS FOR CONFIGURING ADAPTIVE STREAMING OF CONTENT ITEMS BASED ON USER COMFORT LEVELS
Systems and methods are described for configuring adaptive streaming of content items (e.g., extended reality experiences, or any videos, including 360° videos) and selecting a version of a content item based on desired content comfort rating(s), which may be determined based on monitoring discomfort trends of a user. A determination is made of whether the discomfort trend exceeds a threshold and, if so, a version of the same content item that is rated for the desired comfort rating is used, where the selected version is more comfortable for the user than the originally scheduled content item. Alternatively, the user's actual discomfort is measured during consumption of the content item and used to select a version of the content that is more comfortable. In a live setting, a specific enhancement of the content item can be selected, such as a view from a specific camera, a particular camera angle, or a zoomed-in/out image.
Systems and methods are disclosed herein for temporally predictive coding of three-dimensional (3D) dynamic point cloud attributes. A first frame and a second frame of point cloud data are accessed. The point cloud data points include 3D spatial coordinates and one or more graphic attributes. A block tree data structure comprising a plurality of blocks is generated based on a tree partitioning of the second frame of point cloud data. Matching block pairs between the first frame and the second frame are identified from the plurality of blocks based on block-wise searching. Frequency-domain projections are generated for each matching block pair via a graph Fourier transform (GFT) algorithm. A bitstream of motion-compensated residuals is generated based on differences in the frequency-domain projections for each matching block pair.
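A minimal sketch of the per-block graph Fourier transform (GFT) projection and residual step described in the point cloud coding abstract above, assuming matched block pairs are already identified, have equal point counts, and use a simple distance-threshold adjacency; none of these simplifications come from the entry itself:

```python
# Simplified per-block GFT residual for point cloud attribute coding (assumptions noted above).
import numpy as np

def gft_basis(coords: np.ndarray, radius: float = 1.5) -> np.ndarray:
    """Eigenvectors of the graph Laplacian built from pairwise proximity of block points."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    w = (d < radius).astype(float) - np.eye(len(coords))   # unweighted adjacency, no self-loops
    lap = np.diag(w.sum(axis=1)) - w                        # combinatorial Laplacian
    _, vecs = np.linalg.eigh(lap)
    return vecs

def block_residual(coords_cur, attr_cur, attr_prev_matched):
    """Frequency-domain residual between a current block and its matched previous-frame block."""
    basis = gft_basis(coords_cur)
    spec_cur = basis.T @ attr_cur
    spec_prev = basis.T @ attr_prev_matched   # predictor attributes, same point ordering assumed
    return spec_cur - spec_prev               # residual to be quantized and entropy-coded

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    coords = rng.random((8, 3)) * 2.0
    attr_cur = rng.random(8)                  # e.g., a luminance attribute per point
    attr_prev = attr_cur + rng.normal(scale=0.05, size=8)
    print(block_residual(coords, attr_cur, attr_prev).round(3))
```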
A method and an apparatus are provided for assigning users to virtual world servers based on social connectedness. One example method includes receiving a request to connect a client device to one of a plurality of virtual world servers and accessing social network connectivity data of a user account associated with the client device. The method further includes identifying a plurality of other user accounts based on the social network connectivity data, ranking the plurality of virtual world servers based on connections with devices associated with the plurality of other user accounts, and connecting the client device to a virtual world server of the plurality of virtual world servers based on the ranking.
G06Q 50/00 - Technologies de l’information et de la communication [TIC] spécialement adaptées à la mise en œuvre des procédés d’affaires d’un secteur particulier d’activité économique, p. ex. aux services d’utilité publique ou au tourisme
G06F 16/901 - Indexation; Structures de données à cet effet; Structures de stockage
G06F 16/908 - Recherche caractérisée par l’utilisation de métadonnées, p. ex. de métadonnées ne provenant pas du contenu ou de métadonnées générées manuellement utilisant des métadonnées provenant automatiquement du contenu
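A minimal sketch of ranking virtual world servers by the number of a user's social connections already present on each, as described in the entry above; the session and connectivity data structures are assumptions:

```python
# Rank candidate servers by friends already connected, then connect to the top-ranked one.
from collections import Counter

def rank_servers(friend_ids: set[str], server_sessions: dict[str, set[str]]) -> list[str]:
    """Return server ids ordered by the number of the user's friends present on each."""
    scores = Counter({srv: len(users & friend_ids) for srv, users in server_sessions.items()})
    return [srv for srv, _ in scores.most_common()]

def connect(client_id: str, friend_ids: set[str], server_sessions: dict[str, set[str]]) -> str:
    best = rank_servers(friend_ids, server_sessions)[0]
    server_sessions[best].add(client_id)
    return best

if __name__ == "__main__":
    sessions = {"eu-1": {"ann", "bob"}, "us-1": {"cat"}, "us-2": set()}
    print(connect("dave", friend_ids={"ann", "bob", "cat"}, server_sessions=sessions))  # eu-1
```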
Systems and methods are described for determining an amount of time for charging a vehicle battery and selecting one or more media content items for display on a user device based on the amount of time for charging the vehicle battery. A level of driving autonomy of a vehicle is determined. An audio and/or video setting of the media content item is adjusted based on the level of driving autonomy.
B60L 58/12 - Procédés ou agencements de circuits pour surveiller ou commander des batteries ou des piles à combustible, spécialement adaptés pour des véhicules électriques pour la surveillance et la commande des batteries en fonction de l'état de charge [SoC]
B60L 53/62 - Surveillance et commande des stations de charge en réponse à des paramètres de charge, p. ex. courant, tension ou charge électrique
H04N 21/414 - Plate-formes spécialisées de client, p. ex. récepteur au sein d'une voiture ou intégré dans un appareil mobile
Systems and methods are described for determining an amount of time for charging a vehicle battery and selecting one or more media content items for display on a user device based on the amount of time for charging the vehicle battery. A level of driving autonomy of a vehicle is determined. An audio and/or video setting of the media content item is adjusted based on the level of driving autonomy.
A virtual reality play area is defined. Movement of an object in the vicinity of the play area may then be detected. Based on the movement of the object, it may be determined whether the object is projected to enter the play area. If the object is projected to enter the play area, a representation of the object is generated for display to the user.
G08B 21/02 - Alarmes pour assurer la sécurité des personnes
G06F 3/01 - Dispositions d'entrée ou dispositions d'entrée et de sortie combinées pour l'interaction entre l'utilisateur et le calculateur
G08B 7/06 - Systèmes de signalisation selon plus d'un des groupes; Systèmes d'appel de personnes selon plus d'un des groupes utilisant une transmission électrique
G08C 17/02 - Dispositions pour transmettre des signaux caractérisées par l'utilisation d'une voie électrique sans fil utilisant une voie radio
51.
SYSTEMS AND METHODS FOR ENABLING A VIRTUAL ASSISTANT IN DIFFERENT ENVIRONMENTS
Systems and methods are provided for enabling the protection of user privacy when adding a virtual assistant to a conference. A conference is initiated between a first computing device and at least a second computing device and a virtual assistant is added to the conference. At the virtual assistant, it is identified that the virtual assistant is in the conference and a guest mode is activated in response. A query is received at the virtual assistant and based on the query and the guest mode, an action is identified. The identified action is performed via the virtual assistant.
Systems and methods are described for generating a virtual reality (VR) environment comprising an interactive object, wherein the interactive object is associated with a service provider and is generated based on a user profile associated with a current VR session in the VR environment. The systems and methods may detect user input in association with one or more options associated with the interactive object, and, based on the detecting, cause an action to be performed in association with the user profile and the service provider associated with the interactive object, wherein the action comprises accessing a service provided by the service provider, the service being external to the VR environment.
Methods and systems for video compression at scene changes provide an improved, low latency interactive experience in cloud computing environments. Exemplary use cases include all forms of cloud gaming, including cloud-enabled interactive sporting events, e-sports, fantasy sports, gaming, and enhancements. Improvements in performance and experience are achieved with at least one of an extreme low latency rate controller, an extreme low latency rate controller method, frame partitioning at scene changes, preventive (relatively early) termination of encoding at scene changes, or interactive signaling between a decoder and an encoder. Related apparatuses, devices, techniques, and articles are also described.
H04N 19/146 - Débit ou quantité de données codées à la sortie du codeur
H04N 19/172 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c.-à-d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant une zone de l'image, p. ex. un objet la zone étant une image, une trame ou un champ
Systems and methods are provided for using a Multi Focal Plane (MFP) prediction in predictive coding. The system detects a camera viewpoint change between a current frame from a current camera viewpoint to a previous frame from a previous camera viewpoint, decomposes a reconstructed previous frame to a plurality of focal planes, adjusts the plurality of focal planes from the previous camera viewpoint to correspond with the current camera viewpoint, generates an MFP prediction by summing pixel values of the adjusted plurality of focal planes along a plurality of optical axes from the current camera viewpoint, determines an MFP prediction error between the MFP prediction and the current frame, quantizes and codes the MFP prediction error, and transmits, to a receiver over a communication network, the camera viewpoint change and the coded quantized MFP prediction error for reconstruction of the current frame and display of the 3D scene.
H04N 19/597 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage prédictif spécialement adapté pour l’encodage de séquences vidéo multi-vues
H04N 13/271 - Générateurs de signaux d’images où les signaux d’images générés comprennent des cartes de profondeur ou de disparité
H04N 19/103 - Sélection du mode de codage ou du mode de prédiction
H04N 19/164 - Retour d’information en provenance du récepteur ou du canal de transmission
H04N 19/167 - Position dans une image vidéo, p. ex. région d'intérêt [ROI]
H04N 19/503 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage prédictif mettant en œuvre la prédiction temporelle
H04N 19/593 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage prédictif mettant en œuvre des techniques de prédiction spatiale
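A heavily simplified sketch of forming a multi focal plane (MFP) prediction and its residual, as described in the MFP entry above; the per-plane parallax shift model and plane depths are illustrative assumptions, not the entry's method:

```python
# Form an MFP prediction by shifting each focal plane for a camera translation and summing planes.
import numpy as np

def shift_plane(plane: np.ndarray, dx: int) -> np.ndarray:
    """Horizontally shift one focal plane by dx pixels (zero-fill at the border)."""
    out = np.zeros_like(plane)
    if dx > 0:
        out[:, dx:] = plane[:, :-dx]
    elif dx < 0:
        out[:, :dx] = plane[:, -dx:]
    else:
        out = plane.copy()
    return out

def mfp_prediction(planes: list[np.ndarray], plane_depths: list[float],
                   camera_dx: float) -> np.ndarray:
    # Nearer planes move more under a camera translation (parallax roughly ~ 1/depth).
    adjusted = [shift_plane(p, int(round(camera_dx / d))) for p, d in zip(planes, plane_depths)]
    return np.sum(adjusted, axis=0)   # sum pixel values along the optical axes

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    planes = [rng.random((4, 6)) for _ in range(3)]
    pred = mfp_prediction(planes, plane_depths=[1.0, 2.0, 4.0], camera_dx=2.0)
    current = pred + rng.normal(scale=0.01, size=pred.shape)
    residual = current - pred          # would be quantized and coded in a real codec
    print(residual.shape)
```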
Systems and methods are provided for using a Multiple Depth Plane (MDP) prediction in predictive coding. The system detects a camera viewpoint change between a current frame and a previous frame, decomposes a reconstructed depth map of the previous frame to a plurality of depth planes, adjusts the plurality of depth planes from a previous camera viewpoint to correspond with a current camera viewpoint, generates an MDP prediction by summing pixel values of the adjusted plurality of depth planes along a plurality of optical axes from the current camera viewpoint, determines an MDP prediction error between the MDP prediction and a depth map of the current frame, quantizes and codes the MDP prediction error, and transmits, to a receiver over a communication network, the camera viewpoint change and the coded quantized MDP prediction error for reconstruction of a depth map of the current frame.
H04N 19/597 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage prédictif spécialement adapté pour l’encodage de séquences vidéo multi-vues
H04N 19/136 - Caractéristiques ou propriétés du signal vidéo entrant
H04N 19/159 - Type de prédiction, p. ex. prédiction intra-trame, inter-trame ou de trame bidirectionnelle
H04N 19/166 - Retour d’information en provenance du récepteur ou du canal de transmission concernant la quantité d’erreurs de transmission, p. ex. le taux d’erreur binaire
H04N 19/172 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c.-à-d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant une zone de l'image, p. ex. un objet la zone étant une image, une trame ou un champ
H04N 19/89 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le pré-traitement ou le post-traitement spécialement adaptés pour la compression vidéo mettant en œuvre des procédés ou des dispositions de détection d'erreurs de transmission au niveau du décodeur
56.
SLANTED BURIED DIFFRACTIVE GRATINGS FOR OPTICAL ELEMENTS OF AUGMENTED REALITY AND VIRTUAL REALITY DISPLAYS
Head-mounted displays (HMDs) or other suitable optical equipment with waveguides comprising one or more slanted buried diffractive gratings, and methods for fabricating said waveguides, are described herein. In an embodiment, an HMD comprises an optical element and an image source that provides an image beam to the optical element. The optical element may comprise a first flat surface, a second flat surface, and a buried diffractive grating disposed between the first surface and the second surface. The buried diffractive grating may be positioned in a slanted arrangement at a particular angle relative to the first flat surface and the second flat surface.
Systems and methods are provided for creating enhanced VR content. The system generates, for display on a user device, a view of a virtual 3D environment. The system detects that a location in the view of the virtual 3D environment matches one or more criteria. In response to detecting that the location in the virtual 3D environment matches the one or more criteria, the system automatically stores an image of the virtual 3D environment.
Systems and methods for mitigating cybersickness caused by the display of content, such as a 360° video or a virtual reality experience, are disclosed. The methods measure biometrics of a user to determine a cybersickness score. The score is associated with a cybersickness severity level. A determination is made whether the user's cybersickness severity level exceeds a threshold, and, if so, mitigation or remedial actions are automatically performed. The mitigation options range from altering content, changing device configuration, and automating home automation devices to automating body electronics worn by the user. The type of mitigation option selected is based on the user's cybersickness severity level. The methods also determine demographics of a plurality of users who encountered cybersickness due to engagement with the content. A match between the user's demographics and the plurality of users is determined, and mitigation options are selected on the basis of the match.
A61M 21/00 - Autres dispositifs ou méthodes pour amener un changement dans l'état de conscience; Dispositifs pour provoquer ou arrêter le sommeil par des moyens mécaniques, optiques ou acoustiques, p. ex. pour mettre en état d'hypnose
A63F 13/211 - Dispositions d'entrée pour les dispositifs de jeu vidéo caractérisées par leurs capteurs, leurs finalités ou leurs types utilisant des capteurs d’inertie, p. ex. des accéléromètres ou des gyroscopes
A63F 13/212 - Dispositions d'entrée pour les dispositifs de jeu vidéo caractérisées par leurs capteurs, leurs finalités ou leurs types utilisant des capteurs portés par le joueur, p. ex. pour mesurer le rythme cardiaque ou l’activité des jambes
A63F 13/65 - Création ou modification du contenu du jeu avant ou pendant l’exécution du programme de jeu, p. ex. au moyen d’outils spécialement adaptés au développement du jeu ou d’un éditeur de niveau intégré au jeu automatiquement par des dispositifs ou des serveurs de jeu, à partir de données provenant du monde réel, p. ex. les mesures en direct dans les compétitions de course réelles
59.
SYSTEMS AND METHODS FOR EMULATING A USER DEVICE IN A VIRTUAL ENVIRONMENT
A user device associated with a user interacting with a virtual environment is identified. Using an emulation application, a virtual instance of the user device is launched. User preferences for the user device, retrieved from user data, are then applied to the virtual instance of the user device. A graphical representation of the virtual instance of the user device is then generated for presentation to the user within the virtual environment.
G06T 17/00 - Modélisation tridimensionnelle [3D] pour infographie
G06F 9/455 - Émulation; Interprétation; Simulation de logiciel, p. ex. virtualisation ou émulation des moteurs d’exécution d’applications ou de systèmes d’exploitation
Systems and methods for mitigating cybersickness caused by the display of content, such as a 360° video or a virtual reality experience, are disclosed. The methods measure biometrics of a user to determine a cybersickness score. The score is associated with a cybersickness severity level. A determination is made whether the user's cybersickness severity level exceeds a threshold, and, if so, mitigation or remedial actions are automatically performed. The mitigation options range from altering content, changing device configuration, and automating home automation devices to automating body electronics worn by the user. The type of mitigation option selected is based on the user's cybersickness severity level. The methods also determine demographics of a plurality of users who encountered cybersickness due to engagement with the content. A match between the user's demographics and the plurality of users is determined, and mitigation options are selected on the basis of the match.
A61M 21/00 - Autres dispositifs ou méthodes pour amener un changement dans l'état de conscience; Dispositifs pour provoquer ou arrêter le sommeil par des moyens mécaniques, optiques ou acoustiques, p. ex. pour mettre en état d'hypnose
A63F 13/212 - Dispositions d'entrée pour les dispositifs de jeu vidéo caractérisées par leurs capteurs, leurs finalités ou leurs types utilisant des capteurs portés par le joueur, p. ex. pour mesurer le rythme cardiaque ou l’activité des jambes
A63F 13/65 - Création ou modification du contenu du jeu avant ou pendant l’exécution du programme de jeu, p. ex. au moyen d’outils spécialement adaptés au développement du jeu ou d’un éditeur de niveau intégré au jeu automatiquement par des dispositifs ou des serveurs de jeu, à partir de données provenant du monde réel, p. ex. les mesures en direct dans les compétitions de course réelles
61.
SYSTEMS AND METHODS FOR NAVIGATING AN EXTENDED REALITY HISTORY
A plurality of snapshots from XR sessions are retrieved. A plurality of entities within the plurality of snapshots are identified. Based on the identified plurality of entities, a plurality of salient snapshots is identified. The plurality of snapshots is partitioned into contiguous clusters, with each cluster containing a salient snapshot. The salient snapshots are generated for presentation to the user and, in response to selection of a salient snapshot, a subset of the plurality of entities from within a cluster containing the selected salient snapshot is generated for presentation to the user. In response to selection of a presented entity of the presented subset of the plurality of entities, snapshots including the selected entity are generated for presentation. In response to selection of a snapshot, an XR scene corresponding to the selected snapshot is generated for presentation.
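A hypothetical sketch of identifying salient snapshots and partitioning an XR history into contiguous clusters around them, as described in the entry above; the salience heuristic (most entities per snapshot) is an assumption:

```python
# Pick the most entity-rich snapshots as salient and split the timeline into contiguous clusters.
from dataclasses import dataclass

@dataclass
class Snapshot:
    timestamp: float
    entities: set[str]

def salient_indices(snaps: list[Snapshot], top_k: int = 3) -> list[int]:
    # Heuristic: snapshots containing the most entities are treated as the most salient.
    order = sorted(range(len(snaps)), key=lambda i: len(snaps[i].entities), reverse=True)
    return sorted(order[:top_k])

def clusters(snaps: list[Snapshot], salient: list[int]) -> list[list[Snapshot]]:
    """Split the history into contiguous runs, one per salient snapshot."""
    bounds = [0] + [(a + b) // 2 + 1 for a, b in zip(salient, salient[1:])] + [len(snaps)]
    return [snaps[bounds[i]:bounds[i + 1]] for i in range(len(salient))]

if __name__ == "__main__":
    history = [Snapshot(float(t), set("abcde"[: (t % 4) + 1])) for t in range(10)]
    sal = salient_indices(history)
    print(sal, [len(c) for c in clusters(history, sal)])
```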
Systems and methods are described for providing a pace indicator in an extended reality environment. First route data of a first route is determined, wherein the first route data comprises a pace of a first user moving along the first route. Second route data of a second route is determined, wherein the second route data comprises a pace of a second user moving along the second route. A pace indicator is provided to the first user moving along the first route based on the first route data and the second route data, wherein the pace indicator comprises an avatar moving along the first route in an extended reality environment, the avatar representing the second user moving along the second route.
Systems and methods are described for modifying a media guidance application. Such systems and methods may aid a user in selecting media content for viewing which may be of particular interest to them. Such systems and methods may receive programming information from one or more program guide sources, generate a media guidance application for display based upon the received programming information, receive behavior information from at least one further source, and generate parameters for modifying the media guidance application in response to the behavior information. The systems and methods may then modify the media guidance application based upon the generated parameters and display the modified media guidance application.
H04N 21/466 - Procédé d'apprentissage pour la gestion intelligente, p. ex. apprentissage des préférences d'utilisateurs pour recommander des films
H04N 21/442 - Surveillance de procédés ou de ressources, p. ex. détection de la défaillance d'un dispositif d'enregistrement, surveillance de la bande passante sur la voie descendante, du nombre de visualisations d'un film, de l'espace de stockage disponible dans le disque dur interne
H04N 21/462 - Gestion de contenu ou de données additionnelles, p. ex. création d'un guide de programmes électronique maître à partir de données reçues par Internet et d'une tête de réseau ou contrôle de la complexité d'un flux vidéo en dimensionnant la résolution ou le débit en fonction des capacités du client
Systems, methods and apparatuses are described herein for encoding image data comprising two-dimensional (2D) perspective images that exhibit parallax for presentation on a three-dimensional (3D) display. The image data may be accessed and encoded by generating a group of pictures (GOP) that comprises the 2D perspective images and ordering the 2D perspective images within the GOP in a particular order based on a set of evaluated metrics derived from content of the plurality of 2D perspective images or based on characteristics associated with equipment used to capture the plurality of 2D perspective images. The encoded image data may be transmitted for display.
Systems and methods for bandwidth-adaptive light field video transmission on mobile and portable devices are disclosed. An upstream bandwidth is estimated. A request for a service tier for capture and transmission of light field content is received, wherein the light field content comprises an image array of a plurality of sub-aperture images. When the requested service tier is greater than the estimated upstream bandwidth, a reduced service tier is determined based on the estimated upstream bandwidth. A number of sub-aperture images comprising a reduced image array is determined based on the reduced service tier. The image array is reduced to the reduced image array based on feature saliency and adjacency of sub-aperture images. Resources corresponding to the reduced service tier are provided for capture and transmission of the reduced image array.
H04N 21/2385 - Allocation de canauxAllocation de bande passante
H04N 21/24 - Surveillance de procédés ou de ressources, p. ex. surveillance de la charge du serveur, de la bande passante disponible ou des requêtes effectuées sur la voie montante
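An illustrative sketch of reducing the sub-aperture image array to fit the estimated upstream bandwidth, as described in the entry above; the tier table and saliency scores are assumptions:

```python
# Grant the largest service tier that fits the estimated upstream bandwidth,
# then keep only the most salient sub-aperture image positions for that tier.
SERVICE_TIERS_MBPS = {1: 5, 2: 15, 3: 40}      # tier -> required upstream bandwidth (assumed)
IMAGES_PER_TIER = {1: 9, 2: 25, 3: 49}         # tier -> sub-aperture images kept (assumed)

def reduced_tier(requested: int, estimated_mbps: float) -> int:
    """Largest tier whose bandwidth requirement fits the estimate, never above the request."""
    fitting = [t for t, need in SERVICE_TIERS_MBPS.items()
               if need <= estimated_mbps and t <= requested]
    return max(fitting) if fitting else min(SERVICE_TIERS_MBPS)

def reduce_array(saliency: dict[tuple[int, int], float], tier: int) -> list[tuple[int, int]]:
    """Keep the most salient sub-aperture positions for the granted tier."""
    keep = IMAGES_PER_TIER[tier]
    return sorted(saliency, key=saliency.get, reverse=True)[:keep]

if __name__ == "__main__":
    tier = reduced_tier(requested=3, estimated_mbps=20.0)    # -> 2
    sal = {(r, c): 1.0 / (1 + abs(r - 3) + abs(c - 3)) for r in range(7) for c in range(7)}
    print(tier, len(reduce_array(sal, tier)))
```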
Systems and methods for broadcasting images identifying a destination device as it appears in an environment for content transfer are disclosed. Systems include a first device which selects a profile image of itself as it appears in its environment and embeds the profile image in its identification profile. The first device transmits the identification profile over a network during a discovery phase and the identification profile is received by a second device located in proximity to the first device. The identification profile is verified as corresponding to the first device by comparing the profile image to a real-time image of the first device. When the profile image substantially matches the real-time image of the first device, pairing between the first device and second device is initiated or content is sent over the network from the second device to the first device.
G06F 3/01 - Dispositions d'entrée ou dispositions d'entrée et de sortie combinées pour l'interaction entre l'utilisateur et le calculateur
G06F 3/0484 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] pour la commande de fonctions ou d’opérations spécifiques, p. ex. sélection ou transformation d’un objet, d’une image ou d’un élément de texte affiché, détermination d’une valeur de paramètre ou sélection d’une plage de valeurs
G06F 9/451 - Dispositions d’exécution pour interfaces utilisateur
G06V 10/74 - Appariement de motifs d’image ou de vidéoMesures de proximité dans les espaces de caractéristiques
Systems and methods are described for modifying a media guidance application. Such systems and methods may aid a user in selecting media content for viewing which may be of particular interest to them. Such systems and methods may receive programming information from one or more program guide sources, generate a media guidance application for display based upon the received programming information, receive behavior information from at least one further source, and generate parameters for modifying the media guidance application in response to the behavior information. The systems and methods may then modify the media guidance application based upon the generated parameters and display the modified media guidance application.
Improved content recommendations are generated based on a knowledge graph of a content item, which is based on an attribute of the content item, metadata regarding the content item, a viewing history, and user preferences determined by analysis and selected by a user. An option for selecting attributes of interest from a plurality of attributes is generated for display. A content recommendation based on the selected attributes is generated and displayed in a user interface, which changes as user preference selections change. As a result, a user quickly identifies and consumes a customized list of content items related to the user's favorite actor, character, title, depicted object, depicted setting, actual setting, type of action, type of interaction, genre, release date, release decade, director, MPAA rating, critical rating, plot origin point, plot end point, and the like. Related apparatuses, devices, techniques, and articles are also described.
H04N 21/466 - Procédé d'apprentissage pour la gestion intelligente, p. ex. apprentissage des préférences d'utilisateurs pour recommander des films
H04N 21/431 - Génération d'interfaces visuelles; Rendu de contenu ou données additionnelles
H04N 21/45 - Opérations de gestion réalisées par le client pour faciliter la réception de contenu ou l'interaction avec le contenu, ou pour l'administration des données liées à l'utilisateur final ou au dispositif client lui-même, p. ex. apprentissage des préférences d'utilisateurs pour recommander des films ou résolution de conflits d'ordonnancement
H04N 21/482 - Interface pour utilisateurs finaux pour la sélection de programmes
69.
FIELD OF VISION AUDIO CONTROL FOR PHYSICAL OR MIX OF PHYSICAL AND EXTENDED REALITY MEDIA DISPLAYS IN A SPATIALLY MAPPED SPACE
Systems and methods for controlling the volume of content displayed on displays, such as physical and extended reality displays, based on the pose of an extended reality (XR) headset, or the gaze therefrom, are disclosed. The methods spatially map displays and audio devices on which the content is to be outputted. The methods also monitor 6DOF of the XR headset worn by the user to consume the displayed content. Based on a user's current pose or gaze, the methods determine a field of view (FOV) from the XR headset and the displays that fall within the FOV. The volume of the displays is controlled based on where each display is located relative to the pose or gaze. The volume of a display that is within a threshold angle of the gaze is increased, and the volume of other displays is minimized or muted, and/or the content is displayed as closed captioning.
Systems and methods are described for selecting a 3D object for display in an extended reality environment. A space in an extended reality environment is determined for placement of a 3D object. A set of space parameters are determined comprising: an amount of memory available for generating the display of the extended reality environment and an amount of computing power available for generating the display of the extended reality environment. The 3D object is selected for display in the space based on the amount of memory and the amount of computing power available.
Systems and methods are provided for dynamically adjusting a personal boundary of an avatar in an XR environment. The system identifies a first avatar in an extended reality (XR) environment based on rule data stored in a storage. In response to the system detecting that the first avatar has entered a portion of the XR environment at a communicable distance from a second avatar, the system performs the following steps. The system determines an offensiveness rating of the first avatar. The system retrieves, from the storage, an offensiveness tolerance of the second avatar. The system compares the offensiveness rating of the first avatar with the offensiveness tolerance of the second avatar. In response to determining, based on the comparing, that the offensiveness rating of the first avatar exceeds the offensiveness tolerance of the second avatar, the system automatically censors one or more messages from the first avatar to the second avatar.
G06F 40/166 - Édition, p. ex. insertion ou suppression
G06T 13/40 - Animation tridimensionnelle [3D] de personnages, p. ex. d’êtres humains, d’animaux ou d’êtres virtuels
G06T 19/00 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie
G10L 15/18 - Classement ou recherche de la parole utilisant une modélisation du langage naturel
G10L 25/63 - Techniques d'analyse de la parole ou de la voix qui ne se limitent pas à un seul des groupes spécialement adaptées pour un usage particulier pour comparaison ou différentiation pour estimer un état émotionnel
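A minimal sketch of the compare-and-censor step described in the entry above; the numeric ratings, tolerances, and masking text are assumptions for illustration only:

```python
# Censor messages from avatars whose offensiveness rating exceeds the recipient's tolerance.
def deliver_message(msg: str, sender_rating: float, receiver_tolerance: float) -> str:
    """Return the message unchanged, or a masked placeholder if it breaches the boundary."""
    if sender_rating > receiver_tolerance:
        return "[message hidden by your boundary settings]"
    return msg

if __name__ == "__main__":
    print(deliver_message("hello there", sender_rating=0.8, receiver_tolerance=0.3))
    print(deliver_message("hello there", sender_rating=0.1, receiver_tolerance=0.3))
```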
72.
METHODS AND SYSTEM FOR PARAPHRASING COMMUNICATIONS
Systems and methods for paraphrasing communications are disclosed. A first communication input is received and a context of the first communication input is determined. Based on the context of the first communication input, a plurality of linguistic elements are selected and a plurality of paraphrasing pairs are identified, each pair having one of the plurality of linguistic elements and a paraphrasing candidate of the linguistic element. The paraphrasing candidate is based on an emotional state of a sender of the first communication input, and at least one of the plurality of paraphrasing pairs is displayed to the sender for selection.
An augmented reality (AR) system captures an image of a physical environment. The AR system identifies an object in the captured image to serve as an anchor point. The AR system calculates a distance between the identified object and an AR display device that comprises left and right displays. The AR system identifies a virtual object associated with the anchor point. The AR system then generates for simultaneous display: (a) a first separate image of the virtual object on the left display of the AR device, and (b) a second separate image of the virtual object on the right display of the AR device, such that the apparent distance of the virtual object in the composite image formed by the first separate image and the second separate image is set to the calculated distance between the identified object and the AR display device.
An augmented reality (AR) system captures an image of a physical environment. The AR system identifies an object in the captured image to serve as an anchor point. The AR system calculates a distance between the identified object and an AR display device that comprises left and right displays. The AR system identifies a virtual object associated with the anchor point. The AR system then generates for simultaneous display: (a) a first separate image of the virtual object on the left display of the AR device, and (b) a second separate image of the virtual object on the right display of the AR device, such that the apparent distance of the virtual object in the composite image formed by the first separate image and the second separate image is set to the calculated distance between the identified object and the AR display device.
Systems and methods are provided for generating a soundmoji for output. A content item is generated for output at a computing device, and a first input associated with the selection of a soundmoji menu is received. One or more soundmojis are generated for output, and a second input associated with the selection of a first soundmoji of the one or more soundmojis is received. A first timestamp of the content item associated with the selection of the first soundmoji is identified. An indication of a second timestamp of the content item and a second soundmoji is received, and a user interface element associated with the content item is updated to indicate the second soundmoji when the content item is being generated for output at the second timestamp.
An augmented reality (AR) system captures an image of a physical environment. The AR system identifies an object in the captured image to serve as an anchor point. The AR system calculates a distance between the identified object and an AR display device that comprises left and right displays. The AR system identifies a virtual object associated with the anchor point. The AR system then generates for simultaneous display: (a) a first separate image of the virtual object on the left display of the AR device, and (b) a second separate image of the virtual object on the right display of the AR device, such that the apparent distance of the virtual object in the composite image formed by the first separate image and the second separate image is set to the calculated distance between the identified object and the AR display device.
An AR display compensates for excessive light levels in a location in which the AR display is being used. AR objects are rendered for display on the AR display. Light levels in a location at which an AR object is being rendered for display are monitored. If the light level in the location exceeds a threshold light level, a light source in the location is identified and light emissions from the identified light source are mitigated.
Systems and methods are described herein for controlling network devices in an augmented reality environment. A user may point a second network device at a first network device to determine a network activity of the first network device. The second network device may display a user control interface to enable the user to control the network activity of the first network device (e.g., via a pinch gesture control). In response to receiving the user input, the second network device causes the modification of the network activity based on the user input.
H04L 67/131 - Protocoles pour jeux, simulations en réseau ou réalité virtuelle
H04L 41/22 - Dispositions pour la maintenance, l’administration ou la gestion des réseaux de commutation de données, p. ex. des réseaux de commutation de paquets comprenant des interfaces utilisateur graphiques spécialement adaptées [GUI]
79.
Systems and methods for media delivery in cooperative device-to-device communications
A sidelink connection is created between each device of a plurality of devices. A first device connected to a content source retrieves a manifest file for the media from the content source. The first device then notifies other devices, including a second device not connected to the content source, that the manifest file is available from the first device. Based on connection metrics of each device, it is determined which of the devices has the highest quality connection to the content source. If, for example, the first device is determined to have the highest quality connection, then the first device retrieves a segment of the media from the content source, stores the segment in a cache of the first device, and delivers the segment to other devices in response to requests for the segment received from each device.
G06F 13/00 - Interconnexion ou transfert d'information ou d'autres signaux entre mémoires, dispositifs d'entrée/sortie ou unités de traitement
H04L 65/752 - Gestion des paquets du réseau multimédia en adaptant les médias aux capacités du réseau
H04L 65/80 - Dispositions, protocoles ou services dans les réseaux de communication de paquets de données pour prendre en charge les applications en temps réel en répondant à la qualité des services [QoS]
H04N 21/24 - Surveillance de procédés ou de ressources, p. ex. surveillance de la charge du serveur, de la bande passante disponible ou des requêtes effectuées sur la voie montante
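A rough sketch of the best-connected-device behaviour described in the entry above, in which the device with the highest quality connection fetches a segment once, caches it, and serves it to peers over the sidelink; the metric and cache structures are assumptions:

```python
# Choose the fetching device by connection quality, then fetch-once and serve from cache.
def best_fetcher(connection_mbps: dict[str, float]) -> str:
    """Device with the highest-quality connection to the content source."""
    return max(connection_mbps, key=connection_mbps.get)

def serve_segment(segment_id: str, requester: str, fetcher: str,
                  caches: dict[str, dict[str, bytes]], source: dict[str, bytes]) -> bytes:
    cache = caches.setdefault(fetcher, {})
    if segment_id not in cache:                 # fetch once from the content source
        cache[segment_id] = source[segment_id]
    return cache[segment_id]                    # delivered over the sidelink to the requester

if __name__ == "__main__":
    metrics = {"phone-A": 42.0, "phone-B": 7.5, "tablet-C": 18.0}
    fetcher = best_fetcher(metrics)             # -> phone-A
    data = serve_segment("seg-001", requester="phone-B", fetcher=fetcher,
                         caches={}, source={"seg-001": b"\x00" * 10})
    print(fetcher, len(data))
```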
80.
Systems and methods for media delivery in cooperative device-to-device communications
A sidelink connection is created between each device of a plurality of devices. If the sidelink connection quality is not sufficient to transmit a first version of a segment encoded at a first quality level, a lower quality version of the segment may also be retrieved. The lower quality version of the segment may be retrieved by a different device than the device that retrieved the first version of the segment. If the segment is requested from a first device by a second device and the sidelink connection between the first device and the second device is not sufficient to transmit the first version of the segment, the first device may cause transmission of the lower quality version of the segment retrieved by a third device to the second device.
G06F 13/00 - Interconnexion ou transfert d'information ou d'autres signaux entre mémoires, dispositifs d'entrée/sortie ou unités de traitement
H04L 65/752 - Gestion des paquets du réseau multimédia en adaptant les médias aux capacités du réseau
H04L 65/80 - Dispositions, protocoles ou services dans les réseaux de communication de paquets de données pour prendre en charge les applications en temps réel en répondant à la qualité des services [QoS]
H04N 21/231 - Opération de stockage de contenu, p. ex. mise en mémoire cache de films pour stockage à court terme, réplication de données sur plusieurs serveurs, ou établissement de priorité des données pour l'effacement
H04N 21/2343 - Traitement de flux vidéo élémentaires, p. ex. raccordement de flux vidéo ou transformation de graphes de scènes du flux vidéo codé impliquant des opérations de reformatage de signaux vidéo pour la distribution ou la mise en conformité avec les requêtes des utilisateurs finaux ou les exigences des dispositifs des utilisateurs finaux
81.
Ecosystem for NFT trading in public media distribution platforms
A computer-implemented method and an apparatus are provided for presenting an option to purchase an NFT based on a scene of a media asset to an advertiser. One example computer-implemented method includes obtaining, from a first source, a scene of a media asset, determining that the scene comprises a product, obtaining, from a second source, a non-fungible token (NFT) based on the scene, matching the NFT to an advertiser based on the product, and presenting an option to purchase the matched NFT to the advertiser.
G06Q 30/0273 - Détermination des frais de publicité
G06V 10/774 - Génération d'ensembles de motifs de formation; Traitement des caractéristiques d’images ou de vidéos dans les espaces de caractéristiques; Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant l’intégration et la réduction de données, p. ex. analyse en composantes principales [PCA] ou analyse en composantes indépendantes [ICA] ou cartes auto-organisatrices [SOM]; Séparation aveugle de source; méthodes de Bootstrap, p. ex. “bagging” ou “boosting”
82.
Intelligent delivery and playout to prevent stalling in video streaming
Systems and methods are disclosed to mitigate stalling of streaming content due to rebuffering so that, e.g., the content consumer does not experience gaps in playback. In some embodiments, by buffering streaming content simultaneously at two bitrate levels—e.g., one of the lowest bitrates and a better-quality bitrate, within the bandwidth limitations—rebuffering-caused gaps in playback of a higher quality (HQ) stream may be filled with a lower quality (LQ) stream. For instance, client-side dual buffers may store n segments from the HQ stream during a given time and a multiple of n number of segments from the LQ stream, thus allowing for many of the LQ segments to be output if the HQ stream is rebuffering. If a segment of content is beginning to be played back as an LQ segment, there is no reason to buffer the same segment from the HQ stream. Moreover, after a segment of content is played back (or decoded) as either HQ or LQ, the corresponding HQ segment and/or LQ segment may be discarded from the dual buffer, e.g., to create buffer space for upcoming segments.
H04L 65/612 - Diffusion en flux de paquets multimédias pour la prise en charge des services de diffusion par flux unidirectionnel, p. ex. radio sur Internet pour monodiffusion [unicast]
H04L 65/752 - Gestion des paquets du réseau multimédia en adaptant les médias aux capacités du réseau
H04N 21/44 - Traitement de flux élémentaires vidéo, p. ex. raccordement d'un clip vidéo récupéré d'un stockage local avec un flux vidéo en entrée ou rendu de scènes selon des graphes de scène du flux vidéo codé
H04N 21/442 - Surveillance de procédés ou de ressources, p. ex. détection de la défaillance d'un dispositif d'enregistrement, surveillance de la bande passante sur la voie descendante, du nombre de visualisations d'un film, de l'espace de stockage disponible dans le disque dur interne
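A simplified sketch of the dual-buffer playout decision described in the entry above; the buffer sizes and fallback rule are illustrative assumptions rather than the entry's parameters:

```python
# Buffer every segment at LQ, buffer HQ when it arrives in time, and fall back
# to the LQ copy at playout time if the HQ copy is still rebuffering.
from collections import deque

class DualBuffer:
    def __init__(self, hq_slots: int = 4):
        self.hq = deque(maxlen=hq_slots)            # n HQ segments
        self.lq = deque(maxlen=hq_slots * 4)        # a multiple of n LQ segments

    def buffer(self, seg_index: int, hq_ready: bool) -> None:
        self.lq.append(seg_index)                   # LQ copy is always buffered
        if hq_ready:
            self.hq.append(seg_index)

    def next_to_play(self, seg_index: int) -> str:
        """Play HQ when buffered; otherwise fall back to the LQ copy to avoid a stall."""
        if seg_index in self.hq:
            # Discard the now-redundant LQ copy to free buffer space for upcoming segments.
            self.lq = deque((s for s in self.lq if s != seg_index), maxlen=self.lq.maxlen)
            return f"segment {seg_index}: HQ"
        if seg_index in self.lq:
            return f"segment {seg_index}: LQ (HQ still rebuffering)"
        return f"segment {seg_index}: stall"

if __name__ == "__main__":
    buf = DualBuffer()
    for i in range(3):
        buf.buffer(i, hq_ready=(i != 1))            # pretend segment 1's HQ copy is late
    print([buf.next_to_play(i) for i in range(3)])
```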
83.
METRIC-DRIVEN IMPROVEMENTS OF VIRTUAL REALITY EXPERIENCES
Systems and methods for obtaining metrics relating to an extended reality experience and using the obtained metrics to perform remedial actions, such as managing user motion sickness, determining user performance relating to a designed game difficulty, and performing home automation are disclosed. The methods include determining a starting and ending checkpoint in an extended reality experience. Data from a plurality of users as they navigate between the determined checkpoints is obtained and used to determine a metric, such as a median, average, or other representative data. The current user's navigation through the same checkpoints is monitored and compared with the metric. The results from the comparison are used to enhance extended reality experience, which includes customizing the experience for motion sickness, game difficulty level, and home automation.
A63F 13/79 - Aspects de sécurité ou de gestion du jeu incluant des données sur les joueurs, p. ex. leurs identités, leurs comptes, leurs préférences ou leurs historiques de jeu
A63F 13/533 - Commande des signaux de sortie en fonction de la progression du jeu incluant des informations visuelles supplémentaires fournies à la scène de jeu, p. ex. en surimpression pour simuler un affichage tête haute [HUD] ou pour afficher une visée laser dans un jeu de tir pour inciter une interaction avec le joueur, p. ex. en affichant le menu d’un jeu
A63F 13/212 - Dispositions d'entrée pour les dispositifs de jeu vidéo caractérisées par leurs capteurs, leurs finalités ou leurs types utilisant des capteurs portés par le joueur, p. ex. pour mesurer le rythme cardiaque ou l’activité des jambes
84.
System and method for preprocessing of focal planes data for rendering
Systems and methods for rendering a 3D image are provided. The system receives texture and depth data (a depth map) for an image. The system generates, based on the image data, a plurality of folded focal plane matrices. For each respective folded focal plane matrix, the system preprocesses pixel values in the respective folded focal plane matrix to generate a respective preprocessed matrix, wherein the respective preprocessed matrix clusters together pixel values of the respective folded focal plane matrix based on the depth data for the image. The system generates phase functions based on a plurality of the preprocessed matrices. The system configures a spatial light modulator (SLM) device in accordance with the generated phase functions. The system then provides the plurality of the preprocessed matrices as input to the SLM device to generate for display a 3D representation of the received image data.
H04N 13/395 - Affichages volumétriques, c.-à-d. systèmes où l’image est réalisée à partir d’éléments répartis dans un volume avec échantillonnage de la profondeur, c.-à-d. construction du volume à partir d’un ensemble ou d’une séquence de plans d’image 2D
H04N 13/312 - Reproducteurs d’images pour visionnement sans avoir recours à des lunettes spéciales, c.-à-d. utilisant des affichages autostéréoscopiques utilisant des barrières de parallaxe les barrières de parallaxe étant situées derrière l’affichage, p. ex. entre la source de rétroéclairage et le modulateur spatial de lumière [MSL]
G02B 30/52 - Systèmes ou appareils optiques pour produire des effets tridimensionnels [3D], p. ex. des effets stéréoscopiques l’image étant construite à partir d'éléments d'image répartis sur un volume 3D, p. ex. des voxels le volume 3D étant construit à partir d'une pile ou d'une séquence de plans 2D, p. ex. systèmes d'échantillonnage en profondeur
85.
Systems and methods for reducing a number of focal planes used to display three-dimensional objects
Systems and methods for reducing a number of focal planes used to display a three-dimensional object are disclosed herein. In an embodiment, data defining a three-dimensional image according to a first plurality of focal planes are received. Pixel luminance values from the first plurality of focal planes are mapped to a second plurality of focal planes comprising fewer focal planes than the first plurality of focal planes. Data is stored identifying initial focal distances of the mapped pixel luminance values in the first plurality of focal planes. The second plurality of focal planes are then displayed on a near eye device which uses the data identifying initial focal distances of the mapped pixel luminance values to adjust a wavelength of light produced by the second plurality of images to cause the pixels to appear at their original focal distances.
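A minimal sketch, under assumed data shapes, of mapping luminance from a larger set of focal planes onto fewer planes while recording each pixel's original focal distance, as described in the entry above:

```python
# Map each source focal plane to its nearest target plane and accumulate luminance there,
# keeping a per-pixel record of the original focal distance.
import numpy as np

def reduce_focal_planes(planes: np.ndarray, src_dist: np.ndarray, dst_dist: np.ndarray):
    """planes: (P, H, W) luminance; src_dist: (P,) distances; dst_dist: (Q,) with Q < P."""
    nearest = np.abs(src_dist[:, None] - dst_dist[None, :]).argmin(axis=1)  # plane -> target plane
    reduced = np.zeros((len(dst_dist),) + planes.shape[1:])
    for p, q in enumerate(nearest):
        reduced[q] += planes[p]                       # accumulate luminance on the target plane
    # Per-pixel original focal distance, taken here from the brightest source plane.
    original_distance = src_dist[planes.argmax(axis=0)]
    return reduced, original_distance

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    planes = rng.random((6, 4, 4))
    reduced, dist = reduce_focal_planes(planes, np.linspace(0.3, 3.0, 6), np.array([0.5, 1.5, 2.5]))
    print(reduced.shape, dist.shape)
```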
A content provision system is disclosed. The advent of potential interactivity in advertisements and other content items means that the time for which those advertisements absorb the attention of the user cannot be known in advance. This presents a challenge when the interactive advertisements or other content items are to be accommodated in a scheduled slot for such items. To address this challenge, the duration of interaction of each interactive content item is estimated, statistically measured or modelled in advance of the scheduled slot, and at least one of the interactive items is provided in the slot at a time which accords with the duration of interaction. Where there are a plurality of interactive content items for inclusion in the slot, the interactive content items can be ordered such that those having a longer duration of interaction are provided closer to the start of the slot.
H04N 21/262 - Ordonnancement de la distribution de contenus ou de données additionnelles, p. ex. envoi de données additionnelles en dehors des périodes de pointe, mise à jour de modules de logiciel, calcul de la fréquence de transmission de carrousel, retardement de la transmission de flux vidéo, génération de listes de reproduction
H04N 21/472 - Interface pour utilisateurs finaux pour la requête de contenu, de données additionnelles ou de services; Interface pour utilisateurs finaux pour l'interaction avec le contenu, p. ex. pour la réservation de contenu ou la mise en place de rappels, pour la requête de notification d'événement ou pour la transformation de contenus affichés
87.
SYSTEMS AND METHODS FOR COMPOSING A MEDIA FEED FOR A TARGET USER BY SELECTING MEDIA ASSETS THAT SHARE CONGRUENT OBJECTS WITH A SECONDARY CONTENT ITEM
Systems and methods for determining shared congruent objects between warming up media assets and a secondary content item are described. The warming up media assets, along with their reactions, are posted on a social media platform. A secondary content item is identified for display, and congruent objects displayed in the secondary content item are used as recommendations to identify the warming up media assets. In some embodiments, the warming up media assets do not include a display of the product or service that is the focus of the secondary content item. The congruent objects used as recommendations are determined based on selection of a reinforcement strategy, which ranges from a soft to a hard reinforcement. Once a warming up media asset is identified, instructions are transmitted for its display, which may include displaying it at the top of a social media feed and auto-playing the warming up media asset.
G06Q 30/02 - Marketing; Estimation ou détermination des prix; Collecte de fonds
G06F 16/9536 - Personnalisation de la recherche basée sur le filtrage social ou collaboratif
G06Q 50/00 - Technologies de l’information et de la communication [TIC] spécialement adaptées à la mise en œuvre des procédés d’affaires d’un secteur particulier d’activité économique, p. ex. aux services d’utilité publique ou au tourisme
G06F 16/9538 - Présentation des résultats des requêtes
88.
Systems and methods for enabling communications in an extended reality environment
Systems and methods are described herein for generating for display, at an extended reality (XR) device executing an application, an XR environment comprising an entity and associated with an XR session of a user profile. Execution of the application enables the XR device, while generating for display the XR environment, to transmit data to, and receive data from, a particular device using a first communication protocol, and transmit data to, and receive data from, the entity using a second communication protocol. Based on monitored activity of the user profile during the XR session, the XR device may be enabled to transmit data, received at the XR device from the entity using the second communication protocol, to the particular device using the first communication protocol, and transmit data, received at the XR device from the particular device using the first communication protocol, to the entity using the second communication protocol.
Systems and methods are described for generating an AR image are described herein. A physical camera is used to capture a video of a physical object in front of a physical background. The system then accesses data defining a virtual environment and selects a first position of a virtual camera in the virtual environment. While capturing the video, the system displays captured video of the physical object, such that the physical background is replaced with a view of the virtual environment from the first position of the virtual camera. In response to detecting a movement of the physical camera, the system selects a second position of the virtual camera in the virtual environment based on the detected movement. The system then displays the captured video of the physical object, wherein the view of the physical background is replaced with a view of the virtual environment from the second position of the virtual camera.
Systems and methods for reducing a number of focal planes used to display a three-dimensional object are disclosed herein. In an embodiment, data defining a three-dimensional image according to a first plurality of focal planes are received. Pixel luminance values from the first plurality of focal planes are mapped to a second plurality of focal planes comprising fewer focal planes than the first plurality of focal planes. Data is stored identifying initial focal distances of the mapped pixel luminance values in the first plurality of focal planes. The second plurality of focal planes are then displayed on a near eye device which uses the data identifying initial focal distances of the mapped pixel luminance values to adjust a wavelength of light produced by the second plurality of images to cause the pixels to appear at their original focal distances.
Systems and methods are presented herein for selectively preventing display of brand and business identifiers in content that is considered offensive, objectionable, disturbing, and/or unsettling, to prevent an unintended association by viewers of the content between the content and the brand or business. A request to display media content, comprising at least one frame with objectionable subject matter, is received. In response to determining the at least one frame comprises objectionable subject matter, a determination is made as to whether an entity identifier is visible in the at least one frame. In response to determining at least one entity identifier is visible in the at least one frame, a first context of the objectionable subject matter and a second context of the entity identifier are determined. The first context is compared to the second context and, based on the comparing, a masking action is determined to be required for the entity identifier.
H04N 21/454 - Filtrage de contenu, p. ex. blocage des publicités
H04N 21/44 - Traitement de flux élémentaires vidéo, p. ex. raccordement d'un clip vidéo récupéré d'un stockage local avec un flux vidéo en entrée ou rendu de scènes selon des graphes de scène du flux vidéo codé
H04N 21/4402 - Traitement de flux élémentaires vidéo, p. ex. raccordement d'un clip vidéo récupéré d'un stockage local avec un flux vidéo en entrée ou rendu de scènes selon des graphes de scène du flux vidéo codé impliquant des opérations de reformatage de signaux vidéo pour la redistribution domestique, le stockage ou l'affichage en temps réel
Systems and methods for generating a graphically animated audience are disclosed. Biometric data is captured via a sensor during display of content via a first device. The biometric data is stored in association with metadata for the content, and is mapped to a graphical representation. Based on the mapping of the biometric data to the graphical representation and the metadata, a graphical animation is generated for display in synchronization with displaying of the content via a second device.
G06T 13/40 - Animation tridimensionnelle [3D] de personnages, p. ex. d’êtres humains, d’animaux ou d’êtres virtuels
H04N 21/442 - Surveillance de procédés ou de ressources, p. ex. détection de la défaillance d'un dispositif d'enregistrement, surveillance de la bande passante sur la voie descendante, du nombre de visualisations d'un film, de l'espace de stockage disponible dans le disque dur interne
H04N 21/84 - Génération ou traitement de données de description, p. ex. descripteurs de contenu
Systems and methods for adaptive content streaming based on bandwidth are disclosed. According to one example method, content is requested for delivery. An indication of complexity of a plurality of media content items associated with the content is received. Based on the indication of complexity and an available bandwidth at the user device, at least one of the plurality of media content items is selected and retrieved from the media server.
H04L 12/911 - Contrôle d’admission au réseau et allocation de ressources, p.ex. allocation de bande passante ou renégociation en cours de communication
H04L 29/08 - Procédure de commande de la transmission, p.ex. procédure de commande du niveau de la liaison
H04L 65/75 - Gestion des paquets du réseau multimédia
H04L 67/60 - Ordonnancement ou organisation du service des demandes d'application, p. ex. demandes de transmission de données d'application en utilisant l'analyse et l'optimisation des ressources réseau requises
H04N 21/8549 - Création de résumés vidéo, p. ex. bande annonce
G06F 12/00 - Accès à, adressage ou affectation dans des systèmes ou des architectures de mémoires
G06F 15/173 - Communication entre processeurs utilisant un réseau d'interconnexion, p. ex. matriciel, de réarrangement, pyramidal, en étoile ou ramifié
G06F 15/16 - Associations de plusieurs calculateurs numériques comportant chacun au moins une unité arithmétique, une unité programme et un registre, p. ex. pour le traitement simultané de plusieurs programmes
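An illustrative sketch of selecting a media content item whose signalled complexity fits the available bandwidth, as described in the entry above; the complexity scale and bitrate mapping are assumptions:

```python
# Pick the highest-complexity variant that still fits the available bandwidth,
# falling back to the lowest-bitrate variant when nothing fits.
from dataclasses import dataclass

@dataclass
class Variant:
    url: str
    complexity: int        # e.g., 1 (simple) .. 5 (very complex), as signalled by the server
    bitrate_mbps: float

def select_variant(variants: list[Variant], available_mbps: float) -> Variant:
    fitting = [v for v in variants if v.bitrate_mbps <= available_mbps]
    pool = fitting or [min(variants, key=lambda v: v.bitrate_mbps)]
    return max(pool, key=lambda v: v.complexity)

if __name__ == "__main__":
    catalogue = [Variant("low.mpd", 1, 2.0), Variant("mid.mpd", 3, 6.0), Variant("high.mpd", 5, 15.0)]
    print(select_variant(catalogue, available_mbps=8.0).url)   # -> mid.mpd
```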
94.
Systems and methods for improved audio/video conferences
Systems and methods for efficient management of an audio/video conference are disclosed. The methods comprise recording voice data of a first user connected to a conference while the user is in a first state, determining the first user is talking while in the first state, and initiating playback of the recorded voice data of the first user to a plurality of other users, wherein a playback rate of the recorded voice data is variable.
Systems and methods are described for a media guidance application (e.g., implemented on a user device) that aggregates related media content, associated with media content selected by a user, by comparing keywords.
H04N 21/482 - Interface pour utilisateurs finaux pour la sélection de programmes
H04N 21/442 - Surveillance de procédés ou de ressources, p. ex. détection de la défaillance d'un dispositif d'enregistrement, surveillance de la bande passante sur la voie descendante, du nombre de visualisations d'un film, de l'espace de stockage disponible dans le disque dur interne
H04N 21/466 - Procédé d'apprentissage pour la gestion intelligente, p. ex. apprentissage des préférences d'utilisateurs pour recommander des films
H04N 21/488 - Services de données, p. ex. téléscripteur d'actualités
H04N 21/8405 - Génération ou traitement de données de description, p. ex. descripteurs de contenu représentés par des mots-clés
H04N 21/845 - Structuration du contenu, p. ex. décomposition du contenu en segments temporels
96.
Systems and methods for avoiding spoilers in presenting recording prompts relating to a competition
Systems and methods are described for a media guidance application (e.g., implemented on a user device) that avoids spoilers in presenting recording prompts relating to a competition by scheduling presentation of notifications according to the user profile.
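A minimal sketch of such scheduling is shown below; it assumes, purely for illustration, that the user profile exposes whether the user watched the competition live and a typical catch-up delay, and defers the prompt accordingly so its wording cannot spoil the outcome.

```python
from datetime import datetime, timedelta

def prompt_delivery_time(event_end: datetime,
                         user_watched_live: bool,
                         typical_catchup_delay: timedelta = timedelta(hours=24)) -> datetime:
    """Decide when a recording prompt about a competition may be shown.

    If the profile indicates the user watched the event live, the prompt (which
    may hint at the outcome) can be shown immediately; otherwise it is deferred
    until the user's typical catch-up window has passed.
    """
    if user_watched_live:
        return event_end
    return event_end + typical_catchup_delay

final_whistle = datetime(2024, 6, 1, 22, 0)
print(prompt_delivery_time(final_whistle, user_watched_live=False))  # deferred by 24 hours
```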
H04N 21/258 - Management of client or end-user data, e.g. managing client capabilities, user preferences or demographics, or processing of multiple end-user preferences to derive collaborative data
H04H 60/27 - Arrangements for recording or accumulating broadcast information or broadcast-related information
H04N 21/4722 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content, for requesting additional data associated with the content
H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
H04N 21/482 - End-user interface for program selection
H04H 20/40 - Broadcast arrangements specially adapted for accumulation-type receivers
H04H 60/65 - Arrangements for services using the result of monitoring, identification or recognition covered by the groups, or for using the result on the users' side
H04H 60/37 - Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to the broadcast space-time, e.g. for identifying broadcast stations or users, for identifying segments of broadcast information, e.g. scenes, or for extracting programme identifiers [IDs]
H04H 60/46 - Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to the broadcast space-time, e.g. for identifying broadcast stations or users, for recognising users' preferences
97.
System and method for selection of displayed objects by path tracing
Systems and methods are provided for selecting objects by tracing the paths the objects traverse on a display. An object moving across a display screen does so along a particular path. Users may trace the shape of this path, for example by outlining it with a finger or other device on a touch-sensitive screen, by moving a cursor with, e.g., a mouse, by moving a motion-sensitive screen, or the like. The display may match the shape of the user's traced path to the shape of an object's path. Objects whose paths are sufficiently similar in shape to the user's traced path may then be selected. In this manner, users may select an object by tracing the path it takes rather than directly picking or touching the object itself. This gives users an additional method for selecting displayed objects, improving the flexibility of many displays and the programs run on them.
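The shape-matching step can be illustrated with a small sketch: resample the user's trace and each object's on-screen path to the same number of points, normalize away position and scale, and compare the average point-to-point distance. The resampling scheme, normalization, and threshold below are assumptions made for the example, not the method defined by the disclosure.

```python
import math
from bisect import bisect_left

Point = tuple[float, float]

def resample(path: list[Point], n: int = 32) -> list[Point]:
    """Return n points evenly spaced along the polyline's arc length."""
    cum = [0.0]                                    # cumulative length at each vertex
    for a, b in zip(path, path[1:]):
        cum.append(cum[-1] + math.dist(a, b))
    total = cum[-1] or 1e-9
    out = []
    for k in range(n):
        target = total * k / (n - 1)
        i = min(bisect_left(cum, target), len(path) - 1)
        if i == 0:
            out.append(path[0])
            continue
        seg = (cum[i] - cum[i - 1]) or 1e-9
        t = (target - cum[i - 1]) / seg            # fraction along segment (i-1) -> i
        (x0, y0), (x1, y1) = path[i - 1], path[i]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

def normalize(path: list[Point]) -> list[Point]:
    """Translate to the centroid and scale by the larger bounding-box side,
    so absolute position and size on screen are ignored."""
    xs, ys = zip(*path)
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [((x - cx) / scale, (y - cy) / scale) for x, y in path]

def shape_distance(trace: list[Point], object_path: list[Point], n: int = 32) -> float:
    """Mean point-to-point distance between the normalised, resampled paths."""
    a, b = normalize(resample(trace, n)), normalize(resample(object_path, n))
    return sum(math.dist(p, q) for p, q in zip(a, b)) / n

def select_by_trace(trace: list[Point],
                    object_paths: dict[str, list[Point]],
                    threshold: float = 0.15):
    """Pick the object whose on-screen path is shaped most like the user's trace."""
    best = min(object_paths, key=lambda oid: shape_distance(trace, object_paths[oid]))
    return best if shape_distance(trace, object_paths[best]) <= threshold else None

# The user traces an arch somewhere on the screen; the balloon moved along a similar arch.
paths = {
    "balloon": [(0, 0), (10, 10), (20, 0)],
    "car": [(0, 5), (30, 5)],
}
trace = [(100, 200), (110, 210), (120, 200)]
print(select_by_trace(trace, paths))  # -> balloon
```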
Systems and methods for efficiently loading interactive media guide data based on user history are provided. A plurality of asset descriptions associated with a first media asset is received. A second media asset related to the first media asset is identified. An asset description is selected from the plurality of asset descriptions based on a user history of accessing the second media asset. Data for the first media asset, comprising the selected asset description, is then loaded.
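As a purely illustrative sketch of selecting among asset descriptions, the snippet below picks the candidate description that shares the most words with the description the user previously engaged with for the related asset; the word-overlap heuristic and example blurbs are invented.

```python
def tokenize(text: str) -> set[str]:
    """Lower-case words with trailing punctuation stripped."""
    return {w.lower().strip(".,!") for w in text.split()}

def pick_description(candidates: list[str], history_description: str) -> str:
    """Choose the candidate description closest to the description the user
    previously accessed for the related asset (word-overlap heuristic)."""
    history_tokens = tokenize(history_description)
    return max(candidates, key=lambda c: len(tokenize(c) & history_tokens))

candidates = [
    "A gritty crime drama set in 1970s New York.",
    "An award-winning ensemble cast delivers a slow-burn thriller.",
]
# The user previously opened the related asset from a cast-focused blurb.
print(pick_description(candidates, "Starring an award-winning ensemble cast."))
```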
H04N 5/445 - Receiver circuitry for displaying additional information
H04N 21/482 - End-user interface for program selection
H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
H04N 21/8405 - Generation or processing of descriptive data, e.g. content descriptors represented by keywords
H04N 21/45 - Management operations performed by the client for facilitating the reception of, or the interaction with, content, or for administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies or resolving scheduling conflicts
H04N 21/466 - Learning process for intelligent management, e.g. learning user preferences for recommending movies
H04N 21/433 - Content storage operation, e.g. storage operation in response to a pause request or caching operations
H04N 21/475 - End-user interface for inputting end-user data, e.g. a personal identification number [PIN] or preference data
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie is viewed or the storage space available on the internal hard disk
99.
Systems and methods to automatically join conference
Systems and methods are described to enable a device of a first user to automatically join an ongoing conference to which the device is not currently joined. A first audio signature is generated based on the voices of users already in the conference, and a second audio signature is generated based on an audio signal captured by a microphone of the first user's device while that device was not joined to the conference. The two audio signatures are compared and, in response to determining that the first audio signature matches the second audio signature, the first user's device is joined to the conference.
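The signature comparison can be sketched as follows, under the assumption (made only for this example) that a signature is a loudness-normalized distribution of energy across a handful of frequency bands and that matching is a cosine-similarity test above a fixed threshold.

```python
import numpy as np

def spectral_signature(samples: np.ndarray, frame: int = 1024, bands: int = 16) -> np.ndarray:
    """Coarse audio signature: average energy in a few frequency bands,
    normalised so overall loudness does not matter."""
    n_frames = len(samples) // frame
    spec = np.abs(np.fft.rfft(samples[: n_frames * frame].reshape(n_frames, frame), axis=1))
    # Collapse each frame's spectrum into a small number of bands, then average over time.
    band_energy = np.array([b.sum(axis=1) for b in np.array_split(spec, bands, axis=1)]).T
    signature = band_energy.mean(axis=0)
    return signature / (np.linalg.norm(signature) + 1e-9)

def signatures_match(sig_a: np.ndarray, sig_b: np.ndarray, threshold: float = 0.9) -> bool:
    """Cosine similarity above the threshold is treated as 'same conference audio'."""
    return float(np.dot(sig_a, sig_b)) >= threshold

# A microphone near the conference hears a quieter, noisier copy of the conference mix.
rate = 16000
t = np.arange(rate * 2) / rate
conference_mix = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)
device_mic = 0.3 * conference_mix + 0.01 * np.random.randn(len(t))
if signatures_match(spectral_signature(conference_mix), spectral_signature(device_mic)):
    print("join the device to the conference")
```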
Systems and methods are described for translating a non-playback command into a playback modification to encourage a recipient of the command to act on it. A media guidance application may detect a command from a first user to a second user and compare the command to a set of playback operation commands for a media asset that is currently being played back. The media guidance application may determine that the command is not contained within the set of playback operation commands. In response to that determination, the media guidance application may determine whether the second user executes the desired outcome of the command and, in response to determining that the second user does not, may determine whether to modify playback of the media asset.
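A toy sketch of the decision logic is shown below; the command vocabulary, strings, and outcome check are invented placeholders rather than the application's actual behavior.

```python
PLAYBACK_COMMANDS = {"pause", "play", "rewind", "fast-forward", "stop"}

def handle_command(command: str, desired_outcome_met: bool) -> str:
    """Decide whether to modify playback after an inter-user instruction.

    If the utterance is an ordinary playback command it is executed directly.
    Otherwise (e.g. "go do your homework"), and only if the addressed user has
    not complied, playback is paused to encourage the desired outcome.
    """
    word = command.lower().strip()
    if word in PLAYBACK_COMMANDS:
        return f"execute playback operation: {word}"
    if not desired_outcome_met:
        return "pause playback until the instruction is followed"
    return "no playback change"

print(handle_command("go do your homework", desired_outcome_met=False))
```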
H04N 21/422 - Input-only peripherals, e.g. Global Positioning System [GPS]
H04L 12/28 - Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
H04W 4/21 - Services signalling; Auxiliary data signalling, i.e. transmitting data via a non-traffic channel, for social networking applications
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie is viewed or the storage space available on the internal hard disk
H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
H04N 21/439 - Processing of audio elementary streams