An IEEE 802.11 wireless protocol which allows a non-Access Point Multi-Link Device (non-AP MLD) to access a soft AP MLD (e.g., an MLD which is not configured for Simultaneous Transmit and Receive (STR)) over a conditional link (Link2) when the primary/basic link (Link1) is occupied by another STA/MLD. Additional frame exchanges are described between MLDs to facilitate the non-AP MLD's ability to access the soft AP MLD. Utilizing the approach can increase overall network throughput in a single BSS or OBSS.
H04W 40/02 - Communication route or path selection, e.g. power-based or shortest path routing
H04W 74/0816 - Non-scheduled access, e.g. ALOHA, using carrier sensing, e.g. carrier sense multiple access [CSMA] with collision avoidance
H04W 84/12 - Wireless local area networks [WLAN]
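A minimal sketch of the link-selection rule the abstract describes, assuming a hypothetical representation of links; the names Link1/Link2 and the data structure are illustrative, not part of the IEEE 802.11 specification.

```python
# Hypothetical sketch: prefer the primary/basic link, fall back to an idle
# conditional link when the primary is occupied by another STA/MLD.
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    busy: bool          # medium currently occupied by another STA/MLD
    conditional: bool   # True for the conditional link (Link2)

def select_access_link(links: list[Link]) -> Link | None:
    """Return the link the non-AP MLD should use to access the soft AP MLD."""
    primary = next((l for l in links if not l.conditional), None)
    if primary and not primary.busy:
        return primary
    # Primary is occupied: access the soft AP MLD over an idle conditional link.
    return next((l for l in links if l.conditional and not l.busy), None)

if __name__ == "__main__":
    links = [Link("Link1", busy=True, conditional=False),
             Link("Link2", busy=False, conditional=True)]
    chosen = select_access_link(links)
    print(f"Access over: {chosen.name if chosen else 'defer'}")
```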
2.
NEW SYNTAX ELEMENTS AND OPTIMIZATION METHODS FOR ORTHOATLAS
Improvements to orthoAtlas include: new syntax elements to indicate the mapping method used by orthoAtlas; a method to adjust projection parameters to compensate for compression artifacts in encoded vertex positions; a syntax modification to efficiently encode projection parameters by taking advantage of temporal correlation; a method to reduce signaling by using the calculated bounding box to derive projection parameters; and a method to avoid bounding box estimation at the decoder side, thereby reducing decoder complexity.
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
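A short sketch of how temporally predicted projection parameters might be signalled, assuming a hypothetical per-frame parameter set; the field names (`pred_flag`, `origin_x`, etc.) are illustrative, not orthoAtlas syntax.

```python
# Illustrative encode/decode pair: signal absolute values for the first frame,
# then deltas against the previous frame to exploit temporal correlation.
def encode_projection_params(current: dict, previous: dict | None) -> dict:
    if previous is None:
        return {"pred_flag": 0, "values": dict(current)}   # intra-style signalling
    deltas = {k: current[k] - previous[k] for k in current}
    return {"pred_flag": 1, "values": deltas}              # temporally predicted

def decode_projection_params(payload: dict, previous: dict | None) -> dict:
    if payload["pred_flag"] == 0:
        return dict(payload["values"])
    return {k: previous[k] + d for k, d in payload["values"].items()}

prev = {"origin_x": 12, "origin_y": 40, "depth": 128}
curr = {"origin_x": 13, "origin_y": 40, "depth": 126}
bits = encode_projection_params(curr, prev)
assert decode_projection_params(bits, prev) == curr
```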
3.
NEW SYNTAX ELEMENTS AND OPTIMIZATION METHODS FOR ORTHOATLAS
Improvements to orthoAtlas include: new syntax elements to indicate the mapping method used by orthoAtlas; a method to adjust projection parameters to compensate for compression artifacts in encoded vertex positions; a syntax modification to efficiently encode projection parameters by taking advantage of temporal correlation; a method to reduce signaling by using the calculated bounding box to derive projection parameters; and a method to avoid bounding box estimation at the decoder side, thereby reducing decoder complexity.
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for encoding of multi-view video sequences
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
4.
SHARING-BASED CHANNEL ACCESS PROCEDURE FOR NEXT GENERATION OF WIRELESS LAN
A channel access process to meet increasingly stringent low-latency WLAN requirements by mitigating channel access contention through a cooperative TXOP sharing process between stations, while maintaining channel access flexibility for non-AP stations. When a non-AP STA obtains a Transmit Opportunity (TXOP) as the 'TXOP holder', its TXOP can be shared in part with other non-AP STAs, referred to as shared non-AP STAs, allowing the shared non-AP STAs to more efficiently transmit and utilize the channel during the shared TXOP without contending with each other for the channel. Simulations show the process significantly reduces End-to-End (E2E) delay and improves spectrum utilization by reducing overall network congestion.
H04L 5/00 - Arrangements affording multiple use of the transmission path
H04W 74/0816 - Non-scheduled access, e.g. ALOHA, using carrier sensing, e.g. carrier sense multiple access [CSMA] with collision avoidance
H04W 84/12 - Wireless local area networks [WLAN]
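A toy sketch of splitting a TXOP held by one non-AP STA among shared non-AP STAs, so that the shared stations do not contend with each other; the station identifiers, durations, and the even split are illustrative assumptions.

```python
# Divide a TXOP between the holder and the shared non-AP STAs.
def share_txop(txop_us: int, holder: str, shared_stas: list[str],
               holder_fraction: float = 0.5) -> list[tuple[str, int]]:
    """Return (station, allocated duration in microseconds) covering the TXOP."""
    holder_time = int(txop_us * holder_fraction)
    schedule = [(holder, holder_time)]
    if shared_stas:
        slice_us = (txop_us - holder_time) // len(shared_stas)
        schedule += [(sta, slice_us) for sta in shared_stas]
    return schedule

print(share_txop(4000, "STA1", ["STA2", "STA3"]))
# [('STA1', 2000), ('STA2', 1000), ('STA3', 1000)]
```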
5.
SHARING-BASED CHANNEL ACCESS PROCEDURE FOR NEXT GENERATION OF WIRELESS LAN
A channel access process to meet increasingly stringent low-latency WLAN requirements by mitigating channel access contention through a cooperative TXOP sharing process between stations, while maintaining channel access flexibility for non-AP stations. When a non-AP STA obtains a Transmit Opportunity (TXOP) as the 'TXOP holder', its TXOP can be shared in part with other non-AP STAs, referred to as shared non-AP STAs, allowing the shared non-AP STAs to more efficiently transmit and utilize the channel during the shared TXOP without contending with each other for the channel. Simulations show the process significantly reduces End-to-End (E2E) delay and improves spectrum utilization by reducing overall network congestion.
H04W 74/0816 - Non-scheduled access, e.g. ALOHA, using carrier sensing, e.g. carrier sense multiple access [CSMA] with collision avoidance
Methods, systems, and devices for enumerating the number of cells present in each event for cell-secretion-based sorting applications are described herein. The cell secretion applications use carriers to encapsulate cells and collect the biomolecules they secrete. The carriers are then sorted using flow-based particle sorting. The total number of cells present in a carrier is very important for several applications. A convolutional neural network is used to count the number of cells present from a brightfield image, and outputs the information to be used as part of the sort logic.
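A minimal PyTorch sketch of a CNN that predicts the number of cells in a brightfield image of a carrier; the architecture, image size, and maximum class count are illustrative assumptions, not the patented network.

```python
import torch
import torch.nn as nn

class CellCountCNN(nn.Module):
    """Toy classifier: brightfield carrier image -> predicted cell count."""
    def __init__(self, max_cells: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, max_cells + 1),   # classes 0..max_cells cells
        )

    def forward(self, x):
        return self.head(self.features(x))

model = CellCountCNN()
brightfield = torch.randn(1, 1, 64, 64)          # one grayscale carrier image
count = model(brightfield).argmax(dim=1).item()  # feeds into the sort logic
print(f"predicted cell count: {count}")
```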
A secretion analysis and sorting system where cell(s) are deposited into a carrier that will capture any secretions from the cell, and is small enough to be sorted using flow cytometry based cell sorting, is described herein. The use of an image classification workflow that identifies the event types present in a sample of carriers allows the user to select which event types they would like to purify, and then train a supervised classification system that will be used to make real time sort decisions to purify the carriers which contain cells that secrete the desired product. A flow cytometer is then able to be used to screen the individual carriers to see if each one has the desired secreted product.
A secretion analysis and sorting system where cell(s) are deposited into a carrier that will capture any secretions from the cell, and is small enough to be sorted using flow cytometry based cell sorting, is described herein. The use of an image classification workflow that identifies the event types present in a sample of carriers allows the user to select which event types they would like to purify, and then train a supervised classification system that will be used to make real time sort decisions to purify the carriers which contain cells that secrete the desired product. A flow cytometer is then able to be used to screen the individual carriers to see if each one has the desired secreted product.
G01N 15/1433 - Signal processing using image recognition
G01N 15/14 - Optical investigation techniques, e.g. flow cytometry
G06N 3/00 - Computing arrangements based on biological models
G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using clustering, e.g. of similar faces in social networks
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
G06V 10/774 - Generating sets of training patterns; Processing of image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using data integration and reduction, e.g. principal component analysis [PCA], independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation; Bootstrap methods, e.g. "bagging" or "boosting"
G06V 20/69 - Microscopic objects, e.g. biological cells or cellular parts
G01N 15/00 - Investigating characteristics of particles; Investigating permeability, pore volume or effective surface area of porous materials
9.
AUTOMATIC ANNOTATION OF EVENT TYPES IN IACS WORKFLOW
An automatic annotation method is implemented as part of an image-activated cell sorter. A user inputs descriptive information about the events the user is trying to purify. Unsupervised clustering is used to group events with similar image features. Once clustering is complete, the automatic annotation algorithm uses the prior information and the features extracted during clustering to predict the identity of the events in each cluster and annotate the events.
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
G06V 20/69 - Microscopic objects, e.g. biological cells or cellular parts
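An illustrative sketch of the two-stage flow described above: cluster events by image features, then annotate each cluster from user-supplied prior information. The feature values, prior descriptors, and the area-matching rule are all made up for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy image features per event: [mean intensity, object area]
features = np.array([[0.9, 120], [0.85, 115], [0.2, 30], [0.25, 28]])
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

# Prior information entered by the user about the events of interest.
priors = {"target cell": {"area": 110}, "debris": {"area": 25}}

def annotate(cluster_ids, feats, priors):
    labels = {}
    for c in np.unique(cluster_ids):
        mean_area = feats[cluster_ids == c, 1].mean()
        # Assign the prior whose expected area is closest to the cluster mean.
        labels[c] = min(priors, key=lambda k: abs(priors[k]["area"] - mean_area))
    return [labels[c] for c in cluster_ids]

print(annotate(clusters, features, priors))
```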
10.
AUTOMATIC ANNOTATION OF EVENT TYPES IN IACS WORKFLOW
An automatic annotation method is implemented as part of an image-activated cell sorter. A user inputs descriptive information about the events the user is trying to purify. Unsupervised clustering is used to group events with similar image features. Once clustering is complete, the automatic annotation algorithm uses the prior information and the features extracted during clustering to predict the identity of the events in each cluster and annotate the events.
G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using clustering, e.g. of similar faces in social networks
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
G06V 10/77 - Processing of image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using data integration and reduction, e.g. principal component analysis [PCA], independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
Methods, systems, and devices for enumerating the number of cells present in each event for cell-secretion-based sorting applications are described herein. The cell secretion applications use carriers to encapsulate cells and collect the biomolecules they secrete. The carriers are then sorted using flow-based particle sorting. The total number of cells present in a carrier is very important for several applications. A convolutional neural network is used to count the number of cells present from a brightfield image, and outputs the information to be used as part of the sort logic.
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
G06V 20/69 - Microscopic objects, e.g. biological cells or cellular parts
Controlling channel access for UHR devices performing a Coordinated-Restricted Target Wake Time (C-RTWT) to prevent interference between Service Periods (SPs) on overlapping BSSs (OBSS). The coordination mechanism creates different rules based on the type of network topology and scenarios detected. In one aspect, a back-to-back CF-End transmission is used to truncate the basic Network Allocation Vector (NAV) of the OBSS. New channel access rules control TXOP termination, overlapping quiet interval and C-RTWT SP sharing for stations across BSSs.
H04W 74/0816 - Non-scheduled access, e.g. ALOHA, using carrier sensing, e.g. carrier sense multiple access [CSMA] with collision avoidance
H04W 16/14 - Spectrum sharing arrangements
H04W 84/12 - Wireless local area networks [WLAN]
Controlling channel access for UHR devices performing a Coordinated-Restricted Target Wake Time (C-RTWT) to prevent interference between Service Periods (SPs) on overlapping BSSs (OBSS). The coordination mechanism creates different rules based on the type of network topology and scenarios detected. In one aspect, a back-to-back CF-End transmission is used to truncate the basic Network Allocation Vector (NAV) of the OBSS. New channel access rules control TXOP termination, overlapping quiet interval and C-RTWT SP sharing for stations across BSSs.
H04W 74/0816 - Non-scheduled access, e.g. ALOHA, using carrier sensing, e.g. carrier sense multiple access [CSMA] with collision avoidance
Performing non-primary channel access procedures when channel bonding or multi-channel operation is intended to be performed by a station, and yet it is determined that the intended primary channel is busy. The station may then choose a non-primary channel to act in the manner of the primary channel, to perform wideband transmissions in an immediately subsequent transmission interval.
H04W 74/0816 - Non-scheduled access, e.g. ALOHA, using carrier sensing, e.g. carrier sense multiple access [CSMA] with collision avoidance
H04W 84/12 - Wireless local area networks [WLAN]
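A minimal sketch of the fallback rule described above: if the intended primary channel is busy, pick an idle non-primary channel to act as the primary for the next wideband transmission interval. The channel numbers are illustrative.

```python
def pick_operating_channel(primary: int, non_primary: list[int],
                           busy: set[int]) -> int | None:
    """Return the channel to use for the immediately subsequent interval."""
    if primary not in busy:
        return primary
    for ch in non_primary:              # choose a non-primary channel instead
        if ch not in busy:
            return ch
    return None                         # defer: every candidate channel is busy

print(pick_operating_channel(36, [40, 44, 48], busy={36, 40}))  # -> 44
```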
15.
METHODS AND APPARATUS FOR NON-PRIMARY CHANNEL ACCESS PROCEDURE
Performing non-primary channel access procedures when channel bonding or multi-channel operation is intended to be performed by a station, and yet it is determined that the intended primary channel is busy. The station may then choose a non-primary channel to act in the manner of the primary channel, to perform wideband transmissions in an immediately subsequent transmission interval.
H04W 72/02 - Selection of wireless resources by user or terminal
H04W 74/0808 - Non-scheduled access, e.g. ALOHA, using carrier sensing, e.g. carrier sense multiple access [CSMA]
H04W 74/0816 - Non-scheduled access, e.g. ALOHA, using carrier sensing, e.g. carrier sense multiple access [CSMA] with collision avoidance
H04W 84/12 - Wireless local area networks [WLAN]
A zippering algorithm includes the following steps: find border vertices; according to the method, either derive the matching borders from a distance search or use the matches provided in the SEI message; and fuse the matched borders. A fast search algorithm ignores vertices that are already matched, ignores vertices with a distance greater than a limit, and is only utilized for border vertices.
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for encoding of multi-view video sequences
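A sketch of the fast border-matching step, assuming the borders are given as lists of 3D vertices: only border vertices are considered, vertices already matched are skipped, and candidates farther than a distance limit are ignored. The vertex data is illustrative.

```python
import math

def match_borders(border_a, border_b, limit):
    """Greedy nearest-neighbour matching between two sub-mesh borders."""
    matches, used_b = [], set()
    for ia, va in enumerate(border_a):
        best, best_d = None, limit
        for ib, vb in enumerate(border_b):
            if ib in used_b:                      # already matched: ignore
                continue
            d = math.dist(va, vb)
            if d <= best_d:                       # ignore if beyond the limit
                best, best_d = ib, d
        if best is not None:
            used_b.add(best)
            matches.append((ia, best))            # these borders will be fused
    return matches

a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
b = [(0.02, 0.0, 0.0), (1.01, 0.0, 0.0), (9.0, 9.0, 9.0)]
print(match_borders(a, b, limit=0.1))  # [(0, 0), (1, 1)]
```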
A system and method for normal and mesh detail separation for photometric tangent map creation is provided. The system acquires a base three-dimensional (3D) mesh of an object and a photometric surface normal corresponding to the object. The system computes a mesh density map based on the base 3D mesh and a base normal map based on vertex normal information included in the base 3D mesh. The system determines a correction on the photometric surface normal based on the base normal map and the mesh density map. The system generates a corrected photometric surface normal based on an application of the correction.
An electronic apparatus and method for generation of subsurface scattering texture maps for three-dimensional (3D) objects is provided. The electronic apparatus controls a set of light sources to generate a set of lighting patterns. The electronic apparatus controls a set of image capture devices to capture a set of images of an object that is illuminated by the generated set of lighting patterns. The electronic apparatus determines pixel-level offset information based on application of an image registration operation on the set of images. The electronic apparatus generates a set of corrected images based on the pixel-level offset information and the set of images. The electronic apparatus estimates subsurface scattering (SSS) parameters in a form of texture maps corresponding to a 3D mesh of the object. The estimation is performed based on a fitting of a scattering model on the set of corrected images.
A zippering algorithm includes the following steps: find border vertices; according to the method, either derive the matching borders from a distance search or use the matches provided in the SEI message; and fuse the matched borders. A fast search algorithm ignores vertices that are already matched, ignores vertices with a distance greater than a limit, and is only utilized for border vertices.
A channel access procedure (protocol) for WLAN communications in controlled Ultra-High Reliability (UHR) scenarios. An initiating station (AP or non-AP) determines a station to be idle and acquires a Transmit Opportunity (TxOP) within a specific fixed period, and may transmit for a defined portion of that period and be idle for the remaining period. Options are described in which the initiator can pause and restart, or fully terminate or truncate, transmission within that TxOP. A Global Hybrid Coordinator (GHC) paradigm is established for coordinating a group of APs belonging to different Basic Service Sets (BSSs).
Ultra High Reliability (UHR) operation under IEEE 802.11be provides an inner connection between a central controller and multiple APs or AP MLDs through a backhaul connection. Enhancements are described herein for overcoming roaming conflict issues using new link management functionalities directed to the new UHR architecture, including a Global Link ID, Global AID, Global AID Bitmap, and changes to link management negotiation and operations toward attaining seamless roaming. In addition, a Partial Virtual Bitmap in the TIM element and a Global Multi-Link Traffic Indication element support these operations.
A system and method for normal and mesh detail separation for photometric tangent map creation is provided. The system acquires a base three-dimensional (3D) mesh of an object and a photometric surface normal corresponding to the object. The system computes a mesh density map based on the base 3D mesh and a base normal map based on vertex normal information included in the base 3D mesh. The system determines a correction on the photometric surface normal based on the base normal map and the mesh density map. The system generates a corrected photometric surface normal based on an application of the correction.
An electronic apparatus and method for generation of subsurface scattering texture maps for three-dimensional (3D) objects is provided. The electronic apparatus controls a set of light sources to generate a set of lighting patterns. The electronic apparatus controls a set of image capture devices to capture a set of images of an object that is illuminated by the generated set of lighting patterns. The electronic apparatus determines pixel-level offset information based on application of an image registration operation on the set of images. The electronic apparatus generates a set of corrected images based on the pixel-level offset information and the set of images. The electronic apparatus estimates subsurface scattering (SSS) parameters in a form of texture maps corresponding to a 3D mesh of the object. The estimation is performed based on a fitting of a scattering model on the set of corrected images.
G06T 7/55 - Depth or shape recovery from multiple images
G06T 7/586 - Depth or shape recovery from multiple images from multiple light sources, e.g. photometric stereo
Ultra High Reliability (UHR) operation under IEEE 802.11be provides an inner connection between a central controller and multiple APs or AP MLDs through a backhaul connection. Enhancements are described herein for overcoming roaming conflict issues using new link management functionalities directed to the new UHR architecture, including a Global Link ID, Global AID, Global AID Bitmap, and changes to link management negotiation and operations toward attaining seamless roaming. In addition, a Partial Virtual Bitmap in the TIM element and a Global Multi-Link Traffic Indication element support these operations.
H04W 36/18 - Performing reselection for specific purposes for allowing seamless reselection, e.g. soft reselection
H04W 8/26 - Network addressing or numbering for mobility support
H04W 28/16 - Central resource management; Negotiation of resources or communication parameters, e.g. negotiating bandwidth or QoS [Quality of Service]
A channel access procedure (protocol) for WLAN communications in controlled Ultra-High Reliability (UHR) scenarios. An initiating station (AP or non-AP) determines a station to be idle and acquires a Transmit Opportunity (TxOP) within a specific fixed period, and may transmit for a defined portion of that period and be idle for the remaining period. Options are described in which the initiator can pause and restart, or fully terminate or truncate, transmission within that TxOP. A Global Hybrid Coordinator (GHC) paradigm is established for coordinating a group of APs belonging to different Basic Service Sets (BSSs).
Quantization-related parameters are removed from the lifting transform parameters structure and have an independent structure referred to as quantization parameters that will contain the quantization-related information. The modified implementation has no performance difference from a reference implementation, and the modified implementation now can address displacement quantized values, even if they were not transformed.
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for encoding of multi-view video sequences
Quantization-related parameters are removed from the lifting transform parameters structure and have an independent structure referred to as quantization parameters that will contain the quantization-related information. The modified implementation has no performance difference from a reference implementation, and the modified implementation now can address displacement quantized values, even if they were not transformed.
An electronic device and a method for signal denoising based on adaptable deep neural networks. The electronic device receives training data comprising a first bio-signal and a second bio-signal that is different from the first bio-signal. The electronic device computes a weighted sum of the first bio-signal and the second bio-signal. The electronic device generates a mixed signal based on the weighted sum. The electronic device generates an output signal based on application of a denoising neural network (DNN) on the mixed signal. Further, the electronic device computes a loss based on a comparison of the output signal with the first bio-signal and trains the DNN for a number of epochs until the computed loss is below a threshold.
An electronic device and a method for signal denoising based on adaptable deep neural networks. The electronic device receives training data comprising a first bio-signal and a second bio-signal that is different from the first bio-signal. The electronic device computes a weighted sum of the first bio-signal and the second bio-signal. The electronic device generates a mixed signal based on the weighted sum. The electronic device generates an output signal based on application of a denoising neural network (DNN) on the mixed signal. Further, the electronic device computes a loss based on a comparison of the output signal with the first bio-signal and trains the DNN for a number of epochs until the computed loss is below a threshold.
A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
A61B 5/243 - Detecting biomagnetic fields, e.g. magnetic fields produced by bioelectric currents, specially adapted for magnetocardiographic [MCG] signals
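A toy PyTorch sketch of the training loop described above: mix two bio-signals with a weighted sum, denoise the mixture, and train against the first signal until the loss drops below a threshold. The signals, weights, network size, and threshold are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
t = torch.linspace(0, 1, 256)
signal_a = torch.sin(2 * torch.pi * 3 * t)      # first bio-signal (target)
signal_b = torch.randn(256) * 0.3               # second, different bio-signal
mixed = 0.8 * signal_a + 0.2 * signal_b         # weighted-sum mixture

dnn = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 256))
optimizer = torch.optim.Adam(dnn.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for epoch in range(500):                        # train for a number of epochs
    output = dnn(mixed)
    loss = loss_fn(output, signal_a)            # compare against first signal
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if loss.item() < 1e-3:                      # stop once below the threshold
        break
print(f"stopped at epoch {epoch}, loss {loss.item():.4f}")
```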
Providing conflict resolution between beacon transmission and Restricted Target Wake Time (R-TWT) Service Periods (SPs) to enhance IEEE 802.11 protocols, especially for Extra High Throughput (EHT) Access Points (APs). After determining whether a beacon will overlap the start of a TXOP and whether the beacon or the R-TWT SP transmission has higher priority, the AP resolves the conflict, such as by starting or continuing the R-TWT TID and embedding the beacon frame within the transmitted frame, or by limiting the TXOP to provide for transmitting the beacon frame at the proper timing.
Providing conflict resolution between beacon transmission and Restricted Target Wake Time (R-TWT) Service Periods (SPs) to enhance IEEE 802.11 protocols, especially for Extra High Throughput (EHT) Access Points (APs). After determining whether a beacon will overlap the start of a TXOP and whether the beacon or the R-TWT SP transmission has higher priority, the AP resolves the conflict, such as by starting or continuing the R-TWT TID and embedding the beacon frame within the transmitted frame, or by limiting the TXOP to provide for transmitting the beacon frame at the proper timing.
H04W 74/0816 - Non-scheduled access, e.g. ALOHA, using carrier sensing, e.g. carrier sense multiple access [CSMA] with collision avoidance
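A hypothetical sketch of the conflict-resolution rule: depending on which of the beacon and the R-TWT SP traffic has priority, either embed the beacon in the ongoing frame exchange or limit the TXOP so the beacon goes out on time. The function and its inputs are illustrative.

```python
def resolve_beacon_conflict(beacon_overlaps_txop: bool,
                            beacon_has_priority: bool) -> str:
    if not beacon_overlaps_txop:
        return "no conflict: transmit both as scheduled"
    if beacon_has_priority:
        return "limit TXOP so the beacon frame is transmitted at the proper timing"
    return "continue the R-TWT TID and embed the beacon frame within the transmitted frame"

print(resolve_beacon_conflict(beacon_overlaps_txop=True, beacon_has_priority=False))
```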
A novel method of classifying point cloud data by extending the Gray-level Co-occurrence Matrix (GLCM) technique from the 2D to the sparse 3D domain is described herein. The method is able to be applied to point clouds derived from a mesh collection/meshes (such as, the Real-World Textured Things (RWTT) mesh collection). Implementations designed for multiple purposes are described herein: sampling and quantization of RWTT meshes, generation of GLCMs and corresponding texture descriptors, and the selection of potential candidate point clouds based on these extracted descriptors.
A novel method of classifying point cloud data by extending the Gray-level Co-occurrence Matrix (GLCM) technique from the 2D to the sparse 3D domain is described herein. The method is able to be applied to point clouds derived from a mesh collection/meshes (such as, the Real-World Textured Things (RWTT) mesh collection). Implementations designed for multiple purposes are described herein: sampling and quantization of RWTT meshes, generation of GLCMs and corresponding texture descriptors, and the selection of potential candidate point clouds based on these extracted descriptors.
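A minimal sketch of extending a grey-level co-occurrence matrix to a sparse 3D point set, assuming voxelised points carrying quantised grey levels and co-occurrence counted along a 3D offset; the data, offset, and contrast descriptor are illustrative.

```python
import numpy as np

def glcm_3d(points, offset, n_levels):
    """points: dict {(x, y, z): level}; offset: 3D displacement to the neighbour."""
    glcm = np.zeros((n_levels, n_levels), dtype=np.int64)
    for voxel, level in points.items():
        neighbour = tuple(v + o for v, o in zip(voxel, offset))
        if neighbour in points:                 # sparse: only occupied voxels count
            glcm[level, points[neighbour]] += 1
    return glcm

cloud = {(0, 0, 0): 1, (1, 0, 0): 2, (2, 0, 0): 2, (5, 5, 5): 0}
m = glcm_3d(cloud, offset=(1, 0, 0), n_levels=4)
contrast = sum(m[i, j] * (i - j) ** 2 for i in range(4) for j in range(4))
print(m, "contrast:", contrast)                 # descriptor used for candidate selection
```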
An electronic device and a method for generation of three-dimensional (3D) blend-shapes from 3D scans using neural network is disclosed. The electronic device acquires a set of 3D scans including a body portion of an object. The electronic device determines a set of segments of the body portion from each 3D scan. The electronic device applies a neural network model on the acquired set of 3D scans. The electronic device determines a set of vertex difference vectors. Each vector of the determined set of vertex difference vectors corresponds to a 3D blend-shape. Each segment of the determined set of segments is configured to move independently in the 3D blend-shape. The electronic device reconstructs a 3D mesh sequence. The electronic device re-trains the neural network model. The re-trained neural network model is configured to determine a set of 3D blend-shapes based on a set of input 3D scans.
A wireless protocol for providing smooth roaming when a non-Access Point (non-AP) Multi-Link Device (MLD) roams between Basic Service Sets (BSSs). One or more links of a roaming non-AP MLD can be in the process of communicating latency sensitive traffic during a R-TWT SP of a first BSS while roaming to a target BSS. Negotiation is made with the AP MLD of the target BSS so that upon completion of roaming to the target BSS, one or more links of the roaming non-AP MLD are allowed to use a predetermined/enhanced R-TWT SP of the target BSS without further negotiation.
A wireless protocol for providing smooth roaming when a non-Access Point (non-AP) Multi-Link Device (MLD) roams between Basic Service Sets (BSSs). One or more links of a roaming non-AP MLD can be in the process of communicating latency sensitive traffic during a R-TWT SP of a first BSS while roaming to a target BSS. Negotiation is made with the AP MLD of the target BSS so that upon completion of roaming to the target BSS, one or more links of the roaming non-AP MLD are allowed to use a predetermined/enhanced R-TWT SP of the target BSS without further negotiation.
H04W 74/0816 - Non-scheduled access, e.g. ALOHA, using carrier sensing, e.g. carrier sense multiple access [CSMA] with collision avoidance
H04L 5/00 - Arrangements affording multiple use of the transmission path
A coordinated form of R-TWT SP scheduling taking into account the needs of non-AP STAs in an adjacent BSS. A flag in the BSSID field is used to indicate whether the AP represented by the BSSID is a UHR AP. Mechanisms are described to support coordinated R-TWTs and the operation of UHR devices supporting these coordinated R-TWTs. The protocol for the UHR AP and UHR non-AP STAs allows for identifying possible interference with the OBSS, directly rescheduling, or performing negotiation with the UHR AP of the OBSS for scheduling the R-TWT SP.
H04W 84/12 - Wireless local area networks [WLAN]
H04W 72/541 - Allocation or scheduling criteria for wireless resources based on quality criteria, using the level of interference
38.
COORDINATED R-TWT SP SCHEDULING AMONG MULTIPLE ADJACENT BSS ACCORDING TO NON-AP STA NEEDS
A coordinated form of R-TWT SP scheduling taking into account the needs of non-AP STAs in an adjacent BSS. A flag in the BSSID field is used to indicate whether the AP represented by the BSSID is a UHR AP. Mechanisms are described to support coordinated R-TWTs and the operation of UHR devices supporting these coordinated R-TWTs. The protocol for the UHR AP and UHR non-AP STAs allows for identifying possible interference with the OBSS, directly rescheduling, or performing negotiation with the UHR AP of the OBSS for scheduling the R-TWT SP.
Described herein is a method to segment meshes into sub-meshes based on triangle properties. The triangles are first classified according to some characteristic using their respective areas. A filtering process may change the classification according to the neighboring triangles. Then connected components are generated, and neighboring connected components are merged according to certain criteria. In some embodiments, connected components that share the greatest number of edges are merged. With this technique, sub-meshes can be automatically generated without any previous knowledge of the mesh generation stage.
G06T 7/187 - Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation
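A compact sketch of the first stages described above, under simplified assumptions: classify triangles by area against a threshold, then group same-class triangles that share an edge into connected components (filtering and component merging are omitted). The mesh and threshold are illustrative.

```python
import numpy as np

def triangle_area(v0, v1, v2):
    return 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0))

def segment_by_area(vertices, faces, area_threshold):
    verts = np.asarray(vertices, dtype=float)
    labels = [triangle_area(*verts[list(f)]) >= area_threshold for f in faces]
    parent = list(range(len(faces)))          # union-find over triangles
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    edge_to_face = {}
    for fi, f in enumerate(faces):
        for e in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0])):
            key = tuple(sorted(e))
            # Union triangles that share an edge and carry the same class.
            if key in edge_to_face and labels[edge_to_face[key]] == labels[fi]:
                parent[find(fi)] = find(edge_to_face[key])
            edge_to_face[key] = fi
    return [find(i) for i in range(len(faces))]

vertices = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (5, 0, 0)]
faces = [(0, 1, 2), (1, 3, 2), (1, 4, 3)]
print(segment_by_area(vertices, faces, area_threshold=0.75))  # e.g. [0, 0, 2]
```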
A new SEI message for the V-DMC standard is described herein, the zippering SEI. The zippering SEI message can be used by the decoder for the mesh reconstruction, where in the case of multiple sub-meshes, the zippering SEI provides ways to reduce common artifacts caused by independent sub-mesh encoding, such as holes and cracks on the mesh surface.
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
H04N 19/117 - Filters, e.g. for pre-processing or post-processing
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding, characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the object or subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a picture, frame or field
Described herein is a method to segment meshes into sub-meshes based on triangle properties. The triangles are first classified according to some characteristic using their respective areas. A filtering process may change the classification according to the neighboring triangles. Then connected components are generated, and neighboring connected components are merged according to certain criteria. In some embodiments, connected components that share the greatest number of edges are merged. With this technique, sub-meshes can be automatically generated without any previous knowledge of the mesh generation stage.
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding, characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the object or subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a picture, frame or field
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
A new high-level syntax element referred to as a basemesh patch data unit allows the transmission of syntax elements related to the basemesh component of a V3C bitstream. Previously, the V3C standard did not have any additional patches targeting the basemesh only. A basemesh patch data unit works with the previously disclosed sub-patch concept to provide syntax elements to be used with the basemesh bitstream. The basemesh patch data unit allows mesh data and other types of data (e.g., point clouds) to be mixed together at the patch level to generate a richer representation of 3D objects. Furthermore, the basemesh patch data unit expands the concept of sub-meshes and patches by providing a flexible way to arrange the data in 3D (at the basemesh level) and in 2D (at the texture map level), allowing different configurations, for instance, multiple attribute images and multiple basemeshes.
A new high-level syntax element referred to as a basemesh patch data unit allows the transmission of syntax elements related to the basemesh component of a V3C bitstream. Previously, the V3C standard did not have any additional patches targeting the basemesh only. A basemesh patch data unit works with the previously disclosed sub-patch concept to provide syntax elements to be used with the basemesh bitstream. The basemesh patch data unit allows mesh data and other types of data (e.g., point clouds) to be mixed together at the patch level to generate a richer representation of 3D objects. Furthermore, the basemesh patch data unit expands the concept of sub-meshes and patches by providing a flexible way to arrange the data in 3D (at the basemesh level) and in 2D (at the texture map level), allowing different configurations, for instance, multiple attribute images and multiple basemeshes.
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for encoding of multi-view video sequences
An image-activated cell sorter (IACS) provides spatial resolution to facilitate single extracellular vesicle (EV) sorting based on fluorescence markers. The imaging is used to detect whether the fluorescence (FL) signal is located in a single spot or in multiple spots (e.g., a swarm). The images can be a single image or multiple spatially registered FL images.
G06V 20/69 - Microscopic objects, e.g. biological cells or cellular parts
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
45.
SINGLE EXTRACELLULAR VESICLE SORTING BASED ON SURFACE BIOMARKERS
An image-activated cell sorter (IACS) provides spatial resolution to facilitate single extracellular vesicle (EV) sorting based on fluorescence markers. The imaging is used to detect whether the fluorescence (FL) signal is located in a single spot or in multiple spots (e.g., a swarm). The images can be a single image or multiple spatially registered FL images.
G06V 20/69 - Microscopic objects, e.g. biological cells or cellular parts
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
The single Level-of-Detail (LoD) per block displacement packing method enables slice decoding, scalability and other video processing implementations. The sequence displacement bitrate is only minimally changed by using the LoD per block packing implementation.
H04N 19/46 - Embedding additional information in the video signal during the compression process
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding, characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the object or subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for encoding of multi-view video sequences
47.
INTER-FRAME ATTRIBUTE CODING IN GEOMETRY-BASED DYNAMIC POINT CLOUD COMPRESSION
An attribute prediction and compensation scheme for geometry-based dynamic point cloud compression is described herein. A combination of multiple reference frames is able to be used as a predictor for current frames. The method described herein improves efficiency and accuracy.
H04N 7/12 - Systems in which the television signal is transmitted via one channel or a plurality of parallel channels, the bandwidth of each channel being less than the bandwidth of the television signal
H04N 19/159 - Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding, characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the object or subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a picture, frame or field
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding, characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the object or subject of the adaptive coding, the unit being a colour or a chrominance component
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for encoding of multi-view video sequences
The single Level-of-Detail (LoD) per block displacement packing method enables slice decoding, scalability and other video processing implementations. The sequence displacement bitrate is only minimally changed by using the LoD per block packing implementation.
An electronic device and method for a variable rate compression of a point cloud geometry is provided. The electronic device stores a set of RD operation points and coding modes associated with the set of RD operation points. The electronic device receives a 3D point cloud geometry and partitions the 3D geometry into a set of blocks. After the partition, the electronic device selects a block and computes a set of loss values associated with one or more compression metrics. Such loss values correspond to a set of coding modes associated with at least a subset of the set of RD operation points. From the set of coding modes, the electronic device selects a coding mode for which a loss value of the set of loss values is below a loss threshold for that coding mode. Thereafter, the electronic device encodes the block based on the coding mode.
H04N 19/147 - Data rate or code amount at the encoder output according to rate-distortion criteria
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for encoding of multi-view video sequences
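An illustrative sketch of picking a per-block coding mode: compute a loss for each candidate mode associated with the requested RD operation point and keep the first mode whose loss is below its threshold. The mode names, loss functions, and thresholds are made up for the example.

```python
def select_coding_mode(block, modes):
    """modes: list of dicts {'name', 'loss_fn', 'threshold'} for one RD point."""
    for mode in modes:
        if mode["loss_fn"](block) < mode["threshold"]:
            return mode["name"]
    return modes[-1]["name"]            # fall back to the last (safest) mode

block = [3, 3, 3, 4]                    # toy block of quantised geometry values
modes = [
    {"name": "skip", "loss_fn": lambda b: max(b) - min(b), "threshold": 0.5},
    {"name": "lossy", "loss_fn": lambda b: (max(b) - min(b)) / 4, "threshold": 0.5},
    {"name": "lossless", "loss_fn": lambda b: 0.0, "threshold": 1.0},
]
print(select_coding_mode(block, modes))  # -> 'lossy'
```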
An electronic device and method for a variable rate compression of a point cloud geometry is provided. The electronic device stores a set of RD operation points and coding modes associated with the set of RD operation points. The electronic device receives a 3D point cloud geometry and partitions the 3D geometry into a set of blocks. After the partition, the electronic device selects a block and computes a set of loss values associated with one or more compression metrics. Such loss values correspond to a set of coding modes associated with at least a subset of the set of RD operation points. From the set of coding modes, the electronic device selects a coding mode for which a loss value of the set of loss values is below a loss threshold for that coding mode. Thereafter, the electronic device encodes the block based on the coding mode.
An attribute prediction and compensation scheme for geometry-based dynamic point cloud compression is described herein. A combination of multiple reference frames is able to be used as a predictor for current frames. The method described herein improves efficiency and accuracy.
Parts-based blendshape generation involves establishing a 4D capture sequence. A face, hands, and legs are able to be established for the meshes using a template to generate tracked and templated meshes. Specific muscle deformations are extracted. Pose-specific spatial deformations are integrated into a pose to generate specific flesh deformations. A muscle deformation method enables a muscle part-based deformation approach (e.g., deforming just the biceps when flexing the arms). The muscle deformation also enables generating a novel pose that was not captured.
Wireless protocol enhancements are provided for allowing inter-BSS stations/APs to support, negotiate, and participate in cooperative R-TWT SPs that are overlapped in time and are scheduled on different links or different channels to avoid inter-BSS interference with prioritized traffic that has been scheduled to be transmitted during the corresponding R-TWT SPs. The protocol makes use of new subfield elements toward informing, negotiating, and supporting the cooperative R-TWT SPs.
Present TID-to-link (T2L) mapping is limited to only mapping the UL and/or DL direction. Mechanisms are taught for providing T2L mapping for a TDLS link, while R-TWT features can be extended to set up R-TWT membership on a TDLS link. Using the T2L mapping for a TDLS link, different traffic with different priorities can be differentiated and directed on different TDLS direct links to reduce interference on any single link and to allocate the RTA traffic to channels which have wider bandwidths and less traffic congestion.
Present TID-to-link (T2L) mapping is limited to only mapping the UL and/or DL direction. Mechanisms are taught for providing T2L mapping for a TDLS link, while R-TWT features can be extended to set up R-TWT membership on a TDLS link. Using the T2L mapping for a TDLS link, different traffic with different priorities can be differentiated and directed on different TDLS direct links to reduce interference on any single link and to allocate the RTA traffic to channels which have wider bandwidths and less traffic congestion.
Wireless protocol enhancements are provided for allowing inter-BSS stations/APs to support, negotiate, and participate in cooperative R-TWT SPs that are overlapped in time and are scheduled on different links or different channels to avoid inter-BSS interference with prioritized traffic that has been scheduled to be transmitted during the corresponding R-TWT SPs. The protocol makes use of new subfield elements toward informing, negotiating, and supporting the cooperative R-TWT SPs.
H04W 72/541 - Allocation or scheduling criteria for wireless resources based on quality criteria, using the level of interference
H04W 72/566 - Allocation or scheduling criteria for wireless resources based on priority criteria of the information or of the information source or recipient
H04W 74/0816 - Non-scheduled access, e.g. ALOHA, using carrier sensing, e.g. carrier sense multiple access [CSMA] with collision avoidance
H04W 84/12 - Wireless local area networks [WLAN]
57.
V3C SYNTAX EXTENSION FOR MESH COMPRESSION USING SUB-PATCHES
With the concept of sub-patches, the single geometry is described at the patch level, while the multiple attributes are then described at the sub-patch level. One application of the concept is the derivation of texture coordinates using projections for several sections of a mesh surface (affecting the attributes only), whereby the geometry of the whole mesh section uses just one single syntax element.
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for encoding of multi-view video sequences
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
58.
CHANNEL ACCESS FOR REMOTE RADIO HEADS (RRHS) IN WHICH MAC AND PHY LAYERS ARE IMPLEMENTED ON DIFFERENT DEVICES
A Central Unit (CU) is connected to multiple Remote Radio Heads (RRHs) over a backhaul. The spatial separation of units creates long round-trip times, which are addressed in the present disclosure. The enhanced operation allows for counting down a BackOff (BO) on either the RRH side or on the CU side. RRHs that perform the BO procedure can resolve the issue of insufficient InterFrame Space (IFS) time for the RRH to obtain Data from the CU and to invoke a BO. In addition, RRHs that perform the BO procedures are able to reduce channel access contention when multiple RRHs are performing BOs, while utilization efficiency for the medium can also be increased by the improved synchronization of BO and Clear Channel Assessment (CCA) status between the CU and RRHs.
An electronic device and method for multiscale inter-prediction for dynamic point cloud compression is provided. The electronic device receives a set of reference point cloud frames and a current point cloud frame. The electronic device generates reference frame data comprising a feature set for each reference point cloud frame and a first set of features for the current point cloud frame. The electronic device predicts a second set of features for the current point cloud frame, using a first neural network predictor, based on the reference frame data. The electronic device computes a set of residual features based on the first set of features and the second set of features. The electronic device generates a set of quantized residual features based on the set of residual features and a bitstream of encoded point cloud data for the current 3D point cloud frame based on the set of quantized residual features.
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for encoding of multi-view video sequences
H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
H04N 19/30 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
H04N 19/503 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
H04N 19/96 - Tree coding, e.g. quad-tree coding
60.
V3C SYNTAX EXTENSION FOR MESH COMPRESSION USING SUB-PATCHES
With the concept of sub-patches, the single geometry is described at the patch level, while the multiple attributes are then described at the sub-patch level. One application of the concept is the derivation of texture coordinates using projections for several sections of a mesh surface (affecting the attributes only), whereby the geometry of the whole mesh section uses just one single syntax element.
A Central Unit (CU) is connected to multiple Remote Radio Heads (RRHs) over a backhaul. The spatial separation of units creates long round-trip times, which are addressed in the present disclosure. The enhanced operation allows for counting down a BackOff (BO) on either the RRH side or on the CU side. RRHs that perform the BO procedure can resolve the issue of insufficient InterFrame Space (IFS) time for the RRH to obtain Data from the CU and to invoke a BO. In addition, RRHs that perform the BO procedures are able to reduce channel access contention when multiple RRHs are performing BOs, while utilization efficiency for the medium can also be increased by the improved synchronization of BO and Clear Channel Assessment (CCA) status between the CU and RRHs.
An electronic device and method for detection and indication of geometry reconstruction artifacts is provided. The electronic device acquires a reference point cloud, encodes the reference point cloud to generate encoded point cloud data, and decodes the encoded point cloud data to generate a test point cloud. The electronic device further generates a first local density map and a second local density map for points of the reference point cloud and the test point cloud, respectively. The electronic device generates a final density map based on the first local density map and the second local density map, and further generates supplementary information based on the final density map. The supplementary information includes missing points data corresponding to regions of the test point cloud that include artifacts such as holes or includes descriptors for the regions that include the artifacts. The electronic device signals the supplementary information to a decoder.
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for encoding of multi-view video sequences
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
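A toy sketch of density-based artifact detection: estimate a local density for each reference point in both clouds and flag regions where the test cloud is much sparser (e.g., a hole). The point data, radius, and thresholds are illustrative assumptions.

```python
import numpy as np

def local_density(query_points, cloud, radius):
    cloud = np.asarray(cloud, dtype=float)
    return np.array([np.sum(np.linalg.norm(cloud - q, axis=1) <= radius)
                     for q in np.asarray(query_points, dtype=float)])

reference = [(x * 0.1, 0.0, 0.0) for x in range(20)]
test = [(x * 0.1, 0.0, 0.0) for x in range(20) if not 5 <= x <= 9]  # simulated hole

ref_density = local_density(reference, reference, radius=0.15)
test_density = local_density(reference, test, radius=0.15)
hole_mask = test_density < 0.5 * ref_density              # final density-map check
missing_regions = [reference[i] for i in np.where(hole_mask)[0]]
print("points in artifact regions:", missing_regions)     # candidate supplementary info
```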
63.
DETECTION AND INDICATION OF GEOMETRY RECONSTRUCTION ARTIFACTS IN POINT CLOUDS BASED ON LOCAL DENSITY
An electronic device and method for detection and indication of geometry reconstruction artifacts is provided. The electronic device acquires a reference point cloud, encodes the reference point cloud to generate encoded point cloud data, and decodes the encoded point cloud data to generate a test point cloud. The electronic device further generates a first local density map and a second local density map for points of the reference point cloud and the test point cloud, respectively. The electronic device generates a final density map based on the first local density map and the second local density map, and further generates supplementary information based on the final density map. The supplementary information includes missing points data corresponding to regions of the test point cloud that include artifacts such as holes or includes descriptors for the regions that include the artifacts. The electronic device signals the supplementary information to a decoder.
An electronic device and method for using a rate adaptive codec for dynamic point cloud compression is provided. The electronic device receives a point cloud frame (PCF) sequence that includes a set of reference PCFs and a current PCF that is to be encoded. The electronic device generates 3D feature data based on application of a neural network-based encoder on at least one of the set of reference PCFs and the current PCF. The electronic device selects from an encoding pipeline, an encoding path as an I-frame encoding path or a P-frame encoding path for the current PCF. The selected encoding path extends from an output of the neural network-based encoder and includes a first sequence of transformation operations that is applicable on the 3D feature data. The electronic device generates a bit-stream of coded PCF based on application of the first sequence of transformation operations on the 3D feature data.
An electronic device and method for using a rate adaptive codec for dynamic point cloud compression is provided. The electronic device receives a point cloud frame (PCF) sequence that includes a set of reference PCFs and a current PCF that is to be encoded. The electronic device generates 3D feature data based on application of a neural network-based encoder on at least one of the set of reference PCFs and the current PCF. The electronic device selects from an encoding pipeline, an encoding path as an I-frame encoding path or a P-frame encoding path for the current PCF. The selected encoding path extends from an output of the neural network-based encoder and includes a first sequence of transformation operations that is applicable on the 3D feature data. The electronic device generates a bit-stream of coded PCF based on application of the first sequence of transformation operations on the 3D feature data.
Digital authentication of a product using a digital wallet of a user, including: generating a blockchain for the product; receiving and matching a serial number for the product to the digital wallet of the user; and adding the generated blockchain to the digital wallet.
Digital authentication of a product using a digital wallet of a user, including: generating a blockchain for the product; receiving and matching a serial number for the product to the digital wallet of the user; and adding the generated blockchain to the digital wallet.
G06Q 20/36 - Payment architectures, schemes or protocols characterised by the use of specific devices, using electronic wallets or electronic money safes
G06Q 20/38 - Payment protocols; Payment architectures, schemes or protocols, details thereof
68.
DEEP-LEARNING BASED PEAK DETECTION IN BIOLOGICAL SIGNAL
An electronic device and a method for deep-learning-based peak detection in a biological signal. The electronic device receives a biological signal associated with a user. The electronic device detects a first peak, and a first trough associated with the first peak, from the received biological signal. The electronic device applies a local search algorithm on the received biological signal. The electronic device refines the detected first peak and the first trough based on the application of the local search algorithm on the received biological signal. The electronic device determines a health condition associated with the user based on the refined first peak and the first trough.
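A sketch of the two-step flow under simplified assumptions: a coarse peak and an associated preceding trough are detected first, then both are refined by a local search over a small window. The synthetic signal, window size, and refinement rule are illustrative.

```python
import numpy as np

def refine_peak(signal, idx, window):
    lo, hi = max(0, idx - window), min(len(signal), idx + window + 1)
    return lo + int(np.argmax(signal[lo:hi]))           # local search around the candidate

def detect_peak_and_trough(signal, window=5):
    coarse_peak = int(np.argmax(signal))                 # first-pass peak estimate
    peak = refine_peak(signal, coarse_peak, window)
    trough_region = signal[:peak] if peak > 0 else signal
    coarse_trough = int(np.argmin(trough_region))        # trough preceding the peak
    lo = max(0, coarse_trough - window)
    trough = lo + int(np.argmin(signal[lo:coarse_trough + window + 1]))
    return peak, trough

t = np.linspace(0, 1, 200)
pulse = np.exp(-((t - 0.6) ** 2) / 0.002) - 0.3 * np.exp(-((t - 0.4) ** 2) / 0.002)
noisy = pulse + 0.02 * np.random.default_rng(0).standard_normal(200)
peak, trough = detect_peak_and_trough(noisy)
print(f"peak index {peak}, trough index {trough}")       # used to assess the health condition
```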
A framework that includes one or more feature extractors (models) and a cluster component for clustering is described herein. The framework supports (1) offline image-based unsupervised clustering that replaces time-consuming manual gating; (2) online image-based single cell sorting. During training, one or multiple cell image datasets with or without ground truth are used to train one or more feature extractors, which are based on a neural network including several convolutional layers. Once trained, the feature extractors are used to extract features of cell images for unsupervised cell clustering and sorting. In addition, additional datasets may be used to further refine the feature extractors after they have been trained.
G06V 10/44 - Extraction de caractéristiques locales par analyse des parties du motif, p. ex. par détection d’arêtes, de contours, de boucles, d’angles, de barres ou d’intersections; Analyse de connectivité, p. ex. de composantes connectées
G06V 10/762 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant le regroupement, p. ex. de visages similaires sur les réseaux sociaux
G06V 10/80 - Fusion, c.-à-d. combinaison des données de diverses sources au niveau du capteur, du prétraitement, de l’extraction des caractéristiques ou de la classification
G06V 20/69 - Objets microscopiques, p. ex. cellules biologiques ou pièces cellulaires
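A minimal sketch of the offline branch of the framework above, with stand-in components (a tiny untrained convolutional encoder and k-means rather than the trained extractors and the described cluster component):

```python
# Extract features from cell images with a small conv encoder, then cluster them.
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class CellEncoder(nn.Module):
    """Tiny stand-in feature extractor: a few convolutional layers plus pooling."""
    def __init__(self, feat_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x):
        return self.net(x).flatten(1)                      # one feature vector per cell image

encoder = CellEncoder().eval()
images = torch.rand(64, 1, 32, 32)                         # toy cell image crops
with torch.no_grad():
    feats = encoder(images).numpy()
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(feats)   # replaces manual gating
print(np.bincount(labels))                                 # population size per cluster
```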
The single cell identification described herein utilizes cell image information and extracts cell features with a neural network model to subtly distinguish noise events from single cells, allowing the user to choose which types of noise events to exclude depending on the requirements of the application. The fast neural network model extracts more abundant and specific cell features than handpicked features, which gives it higher accuracy and a higher discriminative capability for distinguishing noise events and identifying single cells in real time. Utilizing a neural network model for real-time single cell identification is a novel technique that offers high discriminative capability and high accuracy compared to traditional FACS (Fluorescence-Activated Cell Sorting). The technique can be integrated with any brightfield (BF) model and fluorescence (FL) model to identify single cells for different downstream applications.
G06V 10/764 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant la classification, p. ex. des objets vidéo
G06V 10/82 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant les réseaux neuronaux
G06V 20/69 - Objets microscopiques, p. ex. cellules biologiques ou pièces cellulaires
The single cell identification described herein utilizes cell image information and extracts cell features with a neural network model to subtly distinguish noise events from single cells, allowing the user to choose which types of noise events to exclude depending on the requirements of the application. The fast neural network model extracts more abundant and specific cell features than handpicked features, which gives it higher accuracy and a higher discriminative capability for distinguishing noise events and identifying single cells in real time. Utilizing a neural network model for real-time single cell identification is a novel technique that offers high discriminative capability and high accuracy compared to traditional FACS (Fluorescence-Activated Cell Sorting). The technique can be integrated with any brightfield (BF) model and fluorescence (FL) model to identify single cells for different downstream applications.
G06V 10/44 - Extraction de caractéristiques locales par analyse des parties du motif, p. ex. par détection d’arêtes, de contours, de boucles, d’angles, de barres ou d’intersections; Analyse de connectivité, p. ex. de composantes connectées
G06V 10/82 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant les réseaux neuronaux
G06V 20/70 - Étiquetage du contenu de scène, p. ex. en tirant des représentations syntaxiques ou sémantiques
A new SEI message for the V-DMC standard is described herein, the zippering SEI. The zippering SEI message can be used by the decoder for the mesh reconstruction, where in the case of multiple sub-meshes, the zippering SEI provides ways to reduce common artifacts caused by independent sub-mesh encoding, such as holes and cracks on the mesh surface.
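A decoder-side illustration of the kind of repair the zippering SEI enables; the welding rule below (snap nearby boundary vertices within a tolerance) is a generic stand-in, not the message's actual semantics:

```python
# "Zipper" two independently decoded sub-meshes by closing small seams.
import numpy as np

def zipper(verts_a, verts_b, boundary_a, boundary_b, tol=1e-2):
    """Snap each boundary vertex of sub-mesh B onto the closest boundary vertex of A."""
    welded = verts_b.copy()
    for j in boundary_b:
        d = np.linalg.norm(verts_a[boundary_a] - verts_b[j], axis=1)
        i = int(np.argmin(d))
        if d[i] <= tol:
            welded[j] = verts_a[boundary_a[i]]             # close the crack along the shared seam
    return welded

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
b = np.array([[1.0005, 0.0, 0.0], [1.0004, 1.0, 0.0], [2.0, 0.5, 0.0]])
print(zipper(a, b, boundary_a=np.array([1, 2]), boundary_b=np.array([0, 1])))
```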
An electronic device and method for lighting control for multi-view image capture is provided. The electronic device selects, from a plurality of predefined lighting patterns, a set of lighting patterns to capture a set of images of a subject. The electronic device controls a lighting system to illuminate the subject with a first pulse and a second pulse. An illumination intensity increases from a first intensity associated with the first pulse to a second intensity associated with the second pulse. The electronic device controls the lighting system to illuminate the subject with flash pulses corresponding to the selected set of lighting patterns, based on illumination of the subject with the first pulse and the second pulse. The electronic device controls a set of multi-view image capture devices to capture the set of images of the subject, based on illumination of the subject with the selected set of flash pulses.
H04N 23/74 - Circuits de compensation de la variation de luminosité dans la scène en influençant la luminosité de la scène à l'aide de moyens d'éclairage
G03B 15/05 - Combinaisons d'appareils photographiques avec flash électroniqueFlash électronique
H04N 13/254 - Générateurs de signaux d’images utilisant des caméras à images stéréoscopiques en combinaison avec des sources de rayonnement électromagnétique pour l’éclairage du sujet
An electronic device and method for lighting control for multi-view image capture is provided. The electronic device selects, from a plurality of predefined lighting patterns, a set of lighting patterns to capture a set of images of a subject. The electronic device controls a lighting system to illuminate the subject with a first pulse and a second pulse. An illumination intensity increases from a first intensity associated with the first pulse to a second intensity associated with the second pulse. The electronic device controls the lighting system to illuminate the subject with flash pulses corresponding to the selected set of lighting patterns, based on illumination of the subject with the first pulse and the second pulse. The electronic device controls a set of multi-view image capture devices to capture the set of images of the subject, based on illumination of the subject with the selected set of flash pulses.
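A control-flow sketch only; the device interfaces, intensities, and settle time below are hypothetical stand-ins for the lighting system and the multi-view capture devices:

```python
# Two priming pulses of increasing intensity, then one flash per selected pattern
# with all cameras capturing under each flash.
import time

def capture_session(lights, cameras, patterns, first_intensity=0.3, second_intensity=0.6):
    lights.pulse(intensity=first_intensity)                    # first pulse
    lights.pulse(intensity=second_intensity)                   # second, brighter pulse
    images = []
    for pattern in patterns:                                   # selected set of lighting patterns
        lights.pulse(intensity=1.0, pattern=pattern)           # flash pulse for this pattern
        images.append([cam.capture() for cam in cameras])      # one view per capture device
        time.sleep(0.01)                                       # toy settle time between flashes
    return images

class FakeLight:                                               # stand-ins so the sketch runs anywhere
    def pulse(self, intensity, pattern=None):
        print(f"pulse {intensity:.1f} pattern={pattern}")

class FakeCamera:
    def capture(self):
        return "frame"

print(len(capture_session(FakeLight(), [FakeCamera(), FakeCamera()], patterns=["left", "right"])))
```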
In the current implementation of V-DMC, the (u, v) coordinates are generated using Microsoft UVAtlas and they, together with the 3D positions and the topology, are carried in the base mesh sub-bitstream. High level syntax structures described herein support projection-based atlas map generation, and the means to derive the (u, v) coordinates on the decoder side using V3C syntax structure extensions. In comparison with previous implementations, and in order to preserve the current V3C geometry bitstream concept, a separate sub-bitstream, referred to herein as the vertex property sub-bitstream, is used to carry displacement information.
In the current implementation of V-DMC, the (u, v) coordinates are generated using Microsoft UVAtlas and they, together with the 3D positions and the topology, are carried in the base mesh sub-bitstream. High level syntax structures described herein support projection-based atlas map generation, and the means to derive the (u, v) coordinates on the decoder side using V3C syntax structure extensions. In comparison with previous implementations, and in order to preserve the current V3C geometry bitstream concept, a separate sub-bitstream, referred to herein as the vertex property sub-bitstream, is used to carry displacement information.
H04N 19/597 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage prédictif spécialement adapté pour l’encodage de séquences vidéo multi-vues
H04N 19/70 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques caractérisés par des aspects de syntaxe liés au codage vidéo, p. ex. liés aux standards de compression
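The decoder-side derivation can be pictured with a much-simplified sketch: assume each patch signals an axis-aligned orthographic projection direction plus a 2D offset and scale (a stand-in for the actual V3C syntax extensions), and regenerate (u, v) from the decoded vertex positions:

```python
# Derive per-vertex (u, v) from signalled orthographic projection parameters.
import numpy as np

PROJECTION_AXES = {0: (1, 2), 1: (0, 2), 2: (0, 1)}            # project along x, y or z

def derive_uv(vertices, projection_axis, offset, scale):
    u_idx, v_idx = PROJECTION_AXES[projection_axis]
    uv = vertices[:, [u_idx, v_idx]]                           # drop the projection axis
    return (uv - offset) * scale                               # place the patch inside the atlas

verts = np.array([[0.1, 0.2, 0.9], [0.4, 0.8, 0.9], [0.7, 0.3, 0.9]])
params = {"projection_axis": 2, "offset": np.array([0.0, 0.0]), "scale": 1.0}   # per-patch parameters
print(derive_uv(verts, **params))
```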
An electronic device and method for multiscale inter-prediction for dynamic point cloud compression is provided. The electronic device receives a set of reference point cloud frames and a current point cloud frame. The electronic device generates reference frame data comprising a feature set for each reference point cloud frame and a first set of features for the current point cloud frame. The electronic device predicts a second set of features for the current point cloud frame, using a first neural network predictor, based on the reference frame data. The electronic device computes a set of residual features based on the first set of features and the second set of features. The electronic device generates a set of quantized residual features based on the set of residual features and a bitstream of encoded point cloud data for the current 3D point cloud frame based on the set of quantized residual features.
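A toy sketch of the inter-prediction step above, with a trivial averaging predictor standing in for the first neural network predictor; it mainly shows why coding the quantized residual is cheap when the prediction is good:

```python
# Predict current-frame features from reference features, quantize the residual.
import numpy as np

def predict_features(reference_feature_sets):
    """Stand-in predictor: average the reference feature sets."""
    return np.mean(reference_feature_sets, axis=0)

def encode_residual(current_features, reference_feature_sets, step=0.02):
    predicted = predict_features(reference_feature_sets)       # second set of features
    residual = current_features - predicted                    # set of residual features
    return np.round(residual / step).astype(np.int32)          # set of quantized residual features

rng = np.random.default_rng(3)
base = rng.normal(size=(256, 32))
refs = [base, base + 0.005 * rng.normal(size=(256, 32))]       # feature sets of the reference frames
cur = base + 0.005 * rng.normal(size=(256, 32))                # first set of features (current frame)
quantized = encode_residual(cur, refs)
print(quantized.shape, int(np.count_nonzero(quantized)), "non-zero symbols")
```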
An electronic device and method for generation of reflectance maps for relightable 3D models is disclosed. The electronic device acquires multi-view image data that includes a set of images of an object and generates a 3D mesh of the object based on the multi-view image data. The electronic device obtains a set of motion-corrected images based on a minimization of a rigid motion associated with the object between images of the set of images and generates texture maps in a UV space based on the set of motion-corrected images and the 3D mesh. The electronic device obtains specular and diffuse reflectance maps based on a separation of specular and diffuse reflectance components from the texture maps, and obtains a relightable 3D model of the object based on the specular and diffuse reflectance maps.
An electronic device and method for generation of reflectance maps for relightable 3D models is disclosed. The electronic device acquires multi-view image data that includes a set of images of an object and generates a 3D mesh of the object based on the multi-view image data. The electronic device obtains a set of motion-corrected images based on a minimization of a rigid motion associated with the object between images of the set of images and generates texture maps in a UV space based on the set of motion-corrected images and the 3D mesh. The electronic device obtains specular and diffuse reflectance maps based on a separation of specular and diffuse reflectance components from the texture maps, and obtains a relightable 3D model of the object based on the specular and diffuse reflectance maps.
G01B 11/245 - Dispositions pour la mesure caractérisées par l'utilisation de techniques optiques pour mesurer des contours ou des courbes en utilisant plusieurs transducteurs fixes fonctionnant simultanément
G01B 11/25 - Dispositions pour la mesure caractérisées par l'utilisation de techniques optiques pour mesurer des contours ou des courbes en projetant un motif, p. ex. des franges de moiré, sur l'objet
G06T 7/55 - Récupération de la profondeur ou de la forme à partir de plusieurs images
G06T 7/521 - Récupération de la profondeur ou de la forme à partir de la télémétrie laser, p. ex. par interférométrie; Récupération de la profondeur ou de la forme à partir de la projection de lumière structurée
G06T 7/586 - Récupération de la profondeur ou de la forme à partir de plusieurs images à partir de plusieurs sources de lumière, p. ex. stéréophotométrie
G06T 17/20 - Description filaire, p. ex. polygonalisation ou tessellation
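The separation step can be approximated for illustration with a deliberately simple classical rule, not the described method: under roughly white illumination, take the per-pixel minimum RGB value as an achromatic (specular-like) estimate and the remainder as the diffuse part:

```python
# Crude stand-in for specular/diffuse separation on a texture map.
import numpy as np

def split_specular_diffuse(texture_rgb):
    specular = texture_rgb.min(axis=-1, keepdims=True)         # achromatic highlight estimate
    diffuse = texture_rgb - specular                           # chromatic, view-independent remainder
    return np.repeat(specular, 3, axis=-1), diffuse

rng = np.random.default_rng(4)
texture = np.clip(rng.uniform(0, 0.7, (4, 4, 3)) + rng.uniform(0, 0.3, (4, 4, 1)), 0, 1)
spec_map, diff_map = split_specular_diffuse(texture)
print(spec_map.shape, diff_map.shape)
```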
80.
ORTHOATLAS: TEXTURE MAP GENERATION FOR DYNAMIC MESHES USING ORTHOGRAPHIC PROJECTIONS
The generation of a texture map using orthographic projections is performed in a fast and efficient manner. A method is described herein to generate texture maps that takes significantly less time and also allows the maps to exploit the correlation between the content of different frames in time. The texture mapping can be used for automatic generation of volumetric content or for more efficient compression of dynamic meshes. The texture map generation described herein includes ways to generate a texture atlas using orthographic projections. A novel stretch metric for orthographic projections is described, and a merging algorithm is devised to optimally cluster triangles into a single patch. Additionally, packing techniques can be used for mesh patches that aim to optimize size and temporal stability.
A method is disclosed to generate (u,v) coordinates at the decoder side by using parameters of orthographic projection functions, transmitted via an atlas bitstream. With the parameters for orthographic projection, the decoder is able to efficiently generate (u,v) coordinates and avoid their expensive coding.
The generation of a texture map using orthographic projections is performed in a fast and efficient manner. A method is described herein to generate texture maps that takes significantly less time and also allows the maps to exploit the correlation between the content of different frames in time. The texture mapping can be used for automatic generation of volumetric content or for more efficient compression of dynamic meshes. The texture map generation described herein includes ways to generate a texture atlas using orthographic projections. A novel stretch metric for orthographic projections is described, and a merging algorithm is devised to optimally cluster triangles into a single patch. Additionally, packing techniques can be used for mesh patches that aim to optimize size and temporal stability.
H04N 19/597 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage prédictif spécialement adapté pour l’encodage de séquences vidéo multi-vues
83.
EFFICIENT MAPPING COORDINATE CREATION AND TRANSMISSION
A method is disclosed to generate (u,v) coordinates at the decoder side by using parameters of orthographic projection functions, transmitted via an atlas bitstream. With the parameters for orthographic projection, the decoder is able to efficiently generate (u,v) coordinates and avoid their expensive coding.
H04N 19/597 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage prédictif spécialement adapté pour l’encodage de séquences vidéo multi-vues
H04N 19/70 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques caractérisés par des aspects de syntaxe liés au codage vidéo, p. ex. liés aux standards de compression
H04N 19/46 - Inclusion d’information supplémentaire dans le signal vidéo pendant le processus de compression
84.
MODULE FOR IDENTIFICATION AND CLASSIFICATION TO SORT CELLS BASED ON THE NUCLEAR TRANSLOCATION OF FLUORESCENCE SIGNALS
An Image Activated Cell Sorting (IACS) classification workflow includes: employing a neural network-based feature encoder (or extractor) to extract features of cell images; automatically clustering cells based on the extracted cell features; identifying, based on the cell images, which cluster(s) to sort; fine-tuning a classification network based on the selected cluster(s); and, once refined, using the classification network to sort cells in real-time live sorting.
G06F 18/2415 - Techniques de classification relatives au modèle de classification, p. ex. approches paramétriques ou non paramétriques basées sur des modèles paramétriques ou probabilistes, p. ex. basées sur un rapport de vraisemblance ou un taux de faux positifs par rapport à un taux de faux négatifs
An Image Activated Cell Sorting (IACS) classification workflow includes: employing a neural network-based feature encoder (or extractor) to extract features of cell images; automatically clustering cells based on the extracted cell features; identifying, based on the cell images, which cluster(s) to sort; fine-tuning a classification network based on the selected cluster(s); and, once refined, using the classification network to sort cells in real-time live sorting.
G06V 20/69 - Objets microscopiques, p. ex. cellules biologiques ou pièces cellulaires
G06V 10/774 - Génération d'ensembles de motifs de formation; Traitement des caractéristiques d’images ou de vidéos dans les espaces de caractéristiques; Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant l’intégration et la réduction de données, p. ex. analyse en composantes principales [PCA] ou analyse en composantes indépendantes [ICA] ou cartes auto-organisatrices [SOM]; Séparation aveugle de source; méthodes de Bootstrap, p. ex. "bagging" ou "boosting"
G06V 10/44 - Extraction de caractéristiques locales par analyse des parties du motif, p. ex. par détection d’arêtes, de contours, de boucles, d’angles, de barres ou d’intersections; Analyse de connectivité, p. ex. de composantes connectées
G06V 10/82 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant les réseaux neuronaux
G06V 10/94 - Architectures logicielles ou matérielles spécialement adaptées à la compréhension d’images ou de vidéos
G06V 10/771 - Sélection de caractéristiques, p. ex. sélection des caractéristiques représentatives à partir d’un espace multidimensionnel de caractéristiques
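The IACS workflow above maps naturally to a short sketch, again with stand-in models (random features, k-means, and a logistic-regression sorter in place of the trained encoder and classification network):

```python
# Cluster extracted features, pick the cluster(s) to sort, fine-tune a classifier.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
features = np.vstack([rng.normal(0, 1, (200, 16)), rng.normal(3, 1, (200, 16))])   # encoder output (stand-in)

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)   # automatic clustering
selected_cluster = 1                                                                # operator picks from the images
labels = (clusters == selected_cluster).astype(int)                                 # sort target vs. everything else

sorter = LogisticRegression(max_iter=200).fit(features, labels)                     # fine-tuned classification stage
new_cell = rng.normal(3, 1, (1, 16))
print("sort" if sorter.predict(new_cell)[0] == 1 else "discard")                    # real-time sort decision
```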
86.
ENHANCED QUALITY OF SERVICE STATUS REPORT THAT SUPPORTS LATENCY REQUIREMENTS
Protocol enhancements for IEEE 802.11 handling of Buffer Status Reports (BSRs) to provide real-time information on Quality of Service (QoS) requirements. A non-AP STA can report QoS status about portions of a buffer, which then allows the AP to schedule transmissions for those buffers toward satisfying their QoS requirements. The non-AP STA can also change parameters of the QoS characteristics element of an existing SCS to allow the AP to schedule transmission for the SCS traffic with the new parameters immediately. The non-AP STA can also report on buffers that should soon arrive to allow the AP to trigger the transmission before those buffers arrive. Additional benefits are also provided.
Protocol enhancements for IEEE 802.11 handling of Buffer Status Reports (BSRs) to provide real-time information on Quality of Service (QoS) requirements. A non-AP STA can report QoS status about portions of a buffer, which then allows the AP to schedule transmissions for those buffers toward satisfying their QoS requirements. The non-AP STA can also change parameters of the QoS characteristics element of an existing SCS to allow the AP to schedule transmission for the SCS traffic with the new parameters immediately. The non-AP STA can also report on buffers that should soon arrive to allow the AP to trigger the transmission before those buffers arrive. Additional benefits are also provided.
H04W 24/10 - Planification des comptes-rendus de mesures
H04W 28/02 - Gestion du trafic, p. ex. régulation de flux ou d'encombrement
H04W 72/566 - Critères d’affectation ou de planification des ressources sans fil sur la base de critères de priorité de l’information, de la source d’information ou du destinataire
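As a purely hypothetical data-structure sketch (the field names are ours, not 802.11 element syntax), a per-TID status entry of this kind captures what the enhanced report conveys: how much is queued, its delay bound, and traffic expected to arrive shortly:

```python
# Toy QoS buffer status entries and a trivial AP-side scheduling rule.
from dataclasses import dataclass

@dataclass
class QosBufferStatus:
    tid: int                    # traffic identifier
    queued_bytes: int           # data already buffered at the non-AP STA
    delay_bound_ms: float       # latency requirement for this portion of the buffer
    expected_bytes: int = 0     # traffic expected to arrive before the next report
    expected_in_ms: float = 0.0

def schedule_first(reports):
    """Serve the report with the tightest delay bound first."""
    return min(reports, key=lambda r: r.delay_bound_ms)

reports = [QosBufferStatus(tid=6, queued_bytes=3000, delay_bound_ms=5.0),
           QosBufferStatus(tid=0, queued_bytes=12000, delay_bound_ms=50.0,
                           expected_bytes=4000, expected_in_ms=8.0)]
print(schedule_first(reports).tid)
```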
88.
System and method to reduce an amount of sunlight and an amount of specular reflection in drone sensing
An apparatus includes one or more processors and logic encoded in one or more non-transitory media for execution by the one or more processors and when executed operable to receive sensor data from a drone that travels around a target object. The logic is further operable to generate, based on the sensor data, a first three-dimensional (3D) reconstruction of the target object. The logic is further operable to estimate a direction of sunlight and a direction of specular reflection. The logic is further operable to plan a trajectory of sensor capturing positions for the drone to capture images of the target object that reduce an amount of sunlight and an amount of specular reflection.
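A geometric sketch of the planning criterion; the scoring rule and the single dominant surface normal are our own simplifications, not the described planner:

```python
# Prefer capture positions whose view direction avoids both the sun and its mirror bounce.
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def score_position(position, target, sun_dir, surface_normal):
    view_dir = unit(target - position)                               # camera looks at the target
    specular_dir = unit(sun_dir - 2 * np.dot(sun_dir, surface_normal) * surface_normal)
    glare = max(np.dot(view_dir, -sun_dir), 0.0)                      # looking toward the sun
    reflection = max(np.dot(-view_dir, specular_dir), 0.0)            # looking into the specular bounce
    return glare + reflection                                         # lower is better

target = np.zeros(3)
sun_dir = unit(np.array([0.0, -1.0, -1.0]))                           # estimated sunlight direction
normal = np.array([0.0, 0.0, 1.0])                                    # dominant normal from the first 3D reconstruction
candidates = [np.array([5.0, 0.0, 3.0]), np.array([0.0, 5.0, 3.0]), np.array([-5.0, 0.0, 3.0])]
print(min(candidates, key=lambda p: score_position(p, target, sun_dir, normal)))   # next waypoint
```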
An electronic device and method for image component generation based on application of iterative learning on autoencoder model and transformer model is provided. The electronic device fine-tunes, based on first training data including a first set of images, an autoencoder model and a transformer model. The autoencoder model includes an encoder model, a learned codebook, a generator model, and a discriminator model. The electronic device selects a subset of images from the first training data. The electronic device applies the encoder model on the selected subset of images. The electronic device generates second training data including a second set of images, based on the application of the encoder model. The generated second training data corresponds to a quantized latent representation of the selected subset of images. The electronic device pre-trains the autoencoder model to create a next generation of the autoencoder model, based on the generated second training data.
G06V 10/774 - Génération d'ensembles de motifs de formation; Traitement des caractéristiques d’images ou de vidéos dans les espaces de caractéristiques; Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant l’intégration et la réduction de données, p. ex. analyse en composantes principales [PCA] ou analyse en composantes indépendantes [ICA] ou cartes auto-organisatrices [SOM]; Séparation aveugle de source; méthodes de Bootstrap, p. ex. "bagging" ou "boosting"
G06V 10/82 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant les réseaux neuronaux
90.
ESTIMATION OF DENSITY DISTORTION METRIC FOR PROCESSING OF POINT CLOUD GEOMETRY
An electronic device and method for estimation of density distortion metric for processing of point cloud geometry is provided. The electronic device acquires a reference point cloud, encodes the reference point cloud to generate encoded point cloud data, and decodes the encoded point cloud data to generate a test point cloud. The electronic device further generates a first local density map representing local density values at points of the reference point cloud. The electronic device determines locations in the test point cloud corresponding to locations of the points. The electronic device generates a second local density map representing local density values at the determined locations. The electronic device computes a value of density distortion metric based on the first local density map and the second local density map. The electronic device controls a display device to render a reconstruction quality of the test point cloud based on the computed value.
An electronic device and method for estimation of density distortion metric for processing of point cloud geometry is provided. The electronic device acquires a reference point cloud, encodes the reference point cloud to generate encoded point cloud data, and decodes the encoded point cloud data to generate a test point cloud. The electronic device further generates a first local density map representing local density values at points of the reference point cloud. The electronic device determines locations in the test point cloud corresponding to locations of the points. The electronic device generates a second local density map representing local density values at the determined locations. The electronic device computes a value of density distortion metric based on the first local density map and the second local density map. The electronic device controls a display device to render a reconstruction quality of the test point cloud based on the computed value.
A wireless protocol allowing an AP to allocate time within a transmit opportunity (TXOP) to non-AP stations. The allocations are made under the condition that the non-AP STA(s) will satisfy Access Class (AC) requirements communicated in a frame sent by the AP STA, such as within an MU RTS TXS Trigger frame. Upon receiving the frame, the non-AP STA responds indicating whether it can meet the conditions, such as within a Clear-To-Send (CTS) message. If it can meet the conditions, it proceeds to transmit data (to the AP, or to a peer non-AP STA). Otherwise, if it cannot meet the conditions, the AP can communicate the allocation to another non-AP STA. There are also conditions governing how the non-AP STA can aggregate data units into A-MSDUs.
H04W 74/0816 - Accès non planifié, p. ex. ALOHA utilisant une détection de porteuse, p. ex. accès multiple par détection de porteuse [CSMA] avec évitement de collision
H04W 84/12 - Réseaux locaux sans fil [WLAN Wireless Local Area Network]
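The offer/response exchange can be pictured with a simplified control-flow sketch; frame contents and names are invented for illustration and do not follow 802.11 frame formats:

```python
# AP offers part of its TXOP under an access-class condition; STA answers only if it can comply.
from dataclasses import dataclass

@dataclass
class TxsOffer:
    allotted_ms: float        # share of the TXOP offered to the non-AP STA
    required_ac: str          # Access Class the shared time must be used for

def respond_to_offer(offer, queued_traffic):
    """Return ("CTS", frames) if the condition can be met, otherwise decline."""
    eligible = [f for f in queued_traffic if f["ac"] == offer.required_ac]
    if not eligible:
        return "NO_CTS", []                     # AP may re-allocate the time to another STA
    return "CTS", eligible                      # STA transmits these during the shared TXOP

offer = TxsOffer(allotted_ms=2.0, required_ac="AC_VO")
queue = [{"ac": "AC_BE", "bytes": 1500}, {"ac": "AC_VO", "bytes": 300}]
print(respond_to_offer(offer, queue)[0])
```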
93.
WIRELESS NETWORK PROTOCOL ALLOWING THE SAME SCRAMBLING SEED TO BE UTILIZED BY MULTIPLE STAS
Using the described protocol, an AP can control multiple STAs to allow them to utilize the same scrambling seed to scramble the data in identical data units being sent on the wireless network. The AP sends a scrambling seed indication within a frame communicated to other STAs on the network. These other STAs then use this scrambling seed indication to scramble a data field of a data unit with an identical scrambling seed, and to transmit these packets using the same modulation and coding scheme (MCS).
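The property being exploited is easy to demonstrate: with the same seed (and the same MCS), identical data fields scramble to identical bit sequences. The LFSR below follows the conventional 802.11-style x^7 + x^4 + 1 scrambler; the seed value and payload are arbitrary:

```python
# Two STAs scrambling the same data unit with the AP-indicated seed produce the same bits.
def scramble(bits, seed):
    state, out = seed & 0x7F, []
    for b in bits:
        feedback = ((state >> 6) ^ (state >> 3)) & 1      # taps at positions 7 and 4
        out.append(b ^ feedback)
        state = ((state << 1) | feedback) & 0x7F
    return out

payload = [1, 0, 1, 1, 0, 0, 1, 0] * 4                     # identical data field at both STAs
seed = 0b1011101                                           # scrambling seed indicated by the AP
print(scramble(payload, seed) == scramble(payload, seed))  # True: identical scrambled fields
```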
Methods are described for overcoming contention issues during R-TWT SPs. In particular, enhanced mechanisms are defined for prioritizing transmissions of R-TWT TIDs during corresponding R-TWT SPs. In addition, enhanced access mechanisms are described for overcoming issues with stations accessing the channel after sensing it idle for a DIFS or AIFS time period.
H04W 72/566 - Critères d’affectation ou de planification des ressources sans fil sur la base de critères de priorité de l’information, de la source d’information ou du destinataire
A wireless protocol allowing an AP to allocate time within a transmit opportunity (TXOP) to non-AP stations. The allocations are made under the condition that the non-AP STA(s) will satisfy Access Class (AC) requirements communicated in a frame sent by the AP STA, such as within an MU RTS TXS Trigger frame. Upon receiving the frame, the non-AP STA responds indicating whether it can meet the conditions, such as within a Clear-To-Send (CTS) message. If it can meet the conditions, it proceeds to transmit data (to the AP, or to a peer non-AP STA). Otherwise, if it cannot meet the conditions, the AP can communicate the allocation to another non-AP STA. There are also conditions governing how the non-AP STA can aggregate data units into A-MSDUs.
H04W 72/0446 - Ressources du domaine temporel, p. ex. créneaux ou trames
H04W 72/566 - Critères d’affectation ou de planification des ressources sans fil sur la base de critères de priorité de l’information, de la source d’information ou du destinataire
H04W 84/12 - Réseaux locaux sans fil [WLAN Wireless Local Area Network]
96.
WIRELESS NETWORK PROTOCOL ALLOWING THE SAME SCRAMBLING SEED TO BE UTILIZED BY MULTIPLE STAS
Using the described protocol, an AP can control multiple STAs to allow them to utilize the same scrambling seed to scramble the data in identical data units being sent on the wireless network. The AP sends a scrambling seed indication within a frame communicated to other STAs on the network. These other STAs then use this scrambling seed indication to scramble a data field of a data unit with an identical scrambling seed, and to transmit these packets using the same modulation and coding scheme (MCS).
Enhanced wireless protocols are described for stations (STAs) which are associated with Multi-Link Devices (MLDs). One embodiment describes overcoming replay attacks (with or without a modified Sequence Number (SN)), even when traffic is protected by a Group Temporal Key (GTK) or similar. Other embodiments describe MLD operating enhancements addressing specific shortcomings that arise when a Robust Security Network Element (RSNE), an NSTR Mobile AP MLD, Target Beacon Transmission Time (TBTT) Information fields with Reduced Neighbor Reports (RNR), Quiet Channel elements in an Association/Reassociation, or PPDU end alignment are applied to MLDs.
Methods are described for overcoming contention issues during R-TWT SPs. In particular, enhanced mechanisms are defined for prioritizing transmissions of R-TWT TIDs during corresponding R-TWT SPs. In addition, enhanced access mechanisms are described for overcoming issues with stations accessing the channel after sensing it idle for a DIFS or AIFS time period.
Enhanced wireless protocols are described for stations (STAs) which are associated with Multi-Link Devices (MLDs). One embodiment describes overcoming replay attacks (with or without a modified Sequence Number (SN)), even when traffic is protected by a Group Temporal Key (GTK) or similar. Other embodiments describe MLD operating enhancements addressing specific shortcomings that arise when a Robust Security Network Element (RSNE), an NSTR Mobile AP MLD, Target Beacon Transmission Time (TBTT) Information fields with Reduced Neighbor Reports (RNR), Quiet Channel elements in an Association/Reassociation, or PPDU end alignment are applied to MLDs.
An electronic device and method for keypoints-based estimation of face bounding box is provided. The electronic device receives an image that includes an object and detects a set of facial keypoints on a face of the object in the image. The set of facial keypoints corresponds to a set of facial features of the object. The electronic device identifies a face pose of the object in the image based on the detected set of facial keypoints and computes a set of values that is associated with a size and a position of a face bounding box in the image. The computation is based on the identified face pose and locations of the set of facial keypoints in the image. The electronic device overlays the face bounding box on the image based on the computed set of values.
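A geometry-only sketch of the final computation; the keypoint set, the pose test, and the margins are invented for illustration rather than taken from the described method:

```python
# Derive a face bounding box from facial keypoints, widening it for non-frontal poses.
import numpy as np

def face_bbox(keypoints, frontal_margin=0.25, profile_margin=0.45):
    """keypoints: dict of 2D (x, y) locations, e.g. eyes, nose, mouth."""
    pts = np.array(list(keypoints.values()), dtype=float)
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    eye_span = abs(keypoints["right_eye"][0] - keypoints["left_eye"][0])
    margin = frontal_margin if eye_span > 0.5 * (x_max - x_min) else profile_margin   # crude pose check
    w, h = x_max - x_min, y_max - y_min
    return (x_min - margin * w, y_min - margin * h, w * (1 + 2 * margin), h * (1 + 2 * margin))

kps = {"left_eye": (110, 90), "right_eye": (150, 92), "nose": (130, 115), "mouth": (130, 140)}
print([round(v, 1) for v in face_bbox(kps)])               # (x, y, width, height) to overlay on the image
```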