The disclosed computer-implemented method may include applying, via a sound reproduction system, sound cancellation that reduces an amplitude of various sound signals. The method further includes identifying, among the sound signals, an external sound whose amplitude is to be reduced by the sound cancellation. The method then includes analyzing the identified external sound to determine whether the identified external sound is to be made audible to a user and, upon determining that the external sound is to be made audible to the user, the method includes modifying the sound cancellation so that the identified external sound is made audible to the user. Various other methods, systems, and computer-readable media are also disclosed.
G10K 11/178 - Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
H04R 1/40 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
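The abstract above describes selectively exempting an identified external sound from active noise cancellation. The following is a minimal sketch of that idea, assuming a frequency-domain formulation; the classifier, the pass-through classes, and all function names are hypothetical stand-ins, not the patented implementation.

```python
# Sketch of selective pass-through in an ANC loop: sounds classified as
# important are exempted from cancellation by zeroing the anti-noise in
# their frequency band.
import numpy as np

PASS_THROUGH_CLASSES = {"siren", "spoken_name", "alarm"}  # hypothetical allow-list


def classify_external_sound(band_energy: np.ndarray) -> str:
    """Placeholder classifier: a real system would use a trained model."""
    return "siren" if band_energy.max() > 0.8 else "ambient"


def anti_noise(frame: np.ndarray, pass_mask: np.ndarray) -> np.ndarray:
    """Phase-inverted signal, suppressed in the bands flagged for pass-through."""
    spectrum = np.fft.rfft(frame)
    spectrum[pass_mask] = 0.0            # do not cancel the identified sound
    return -np.fft.irfft(spectrum, n=len(frame))


def process_frame(frame: np.ndarray) -> np.ndarray:
    magnitude = np.abs(np.fft.rfft(frame))
    band_energy = magnitude / (magnitude.max() + 1e-9)
    label = classify_external_sound(band_energy)
    pass_mask = band_energy > 0.5 if label in PASS_THROUGH_CLASSES else np.zeros_like(band_energy, bool)
    # Superposition at the ear: ambient frame plus anti-noise leaves only the
    # pass-through bands audible.
    return frame + anti_noise(frame, pass_mask)
```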
A headset includes a frame, a camera, and a plurality of transducers positioned on the frame to transmit beams toward a face of a user of the headset. The plurality of transducers receive reflected beams from the face and generate sensor data that varies in response to the received reflected beams. The camera is positioned to capture images of the face of the user of the headset. An expression of the user is estimated based on the sensor data and the images captured by the camera.
Aspects of the present disclosure are directed to providing an artificial reality environment with augments and surfaces. An “augment” is a virtual container in 3D space that can include presentation data, context, and logic. An artificial reality system can use augments as the fundamental building block for displaying 2D and 3D models in the artificial reality environment. For example, augments can represent people, places, and things in an artificial reality environment and can respond to a context such as a current display mode, time of day, a type of surface the augment is on, a relationship to other augments, etc. Augments can be on a “surface” that has a layout and properties that cause augments on that surface to display in different ways. Augments and other objects (real or virtual) can also interact, where these interactions can be controlled by rules for the objects evaluated based on information from the shell.
Methods and systems are described for mitigating acoustic feedback via an open ear device. In various examples, systems or methods may receive, via a first microphone and a second microphone positioned on an open ear device, a first audio signal and a second audio signal, respectively. The first audio signal and second audio signal may be converted to a first digital audio signal and a second digital audio signal. The first digital audio signal and second digital audio signal may be independently processed by an AFC to reduce acoustic feedback. A beamformer may adjust the first digital audio signal and second digital audio signal based on a target direction to create a beamformer digital audio signal. The beamformer digital audio signal may be processed via feedforward processing to create a target audio. The target audio may be phase shifted and transmitted to a loudspeaker positioned on the open ear device.
G02C 11/00 - Non-optical accessories; Attachment thereof
H04R 1/40 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
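A hedged sketch of the two-microphone pipeline in the acoustic-feedback mitigation abstract above: the AFC, beamformer, and feedforward stages here are simplified illustrative stand-ins (an NLMS canceller, delay-and-sum steering, and a flat gain), not the patented processing chain, and the sample rate and parameters are assumptions.

```python
import numpy as np

FS = 16_000  # assumed sample rate


def afc(mic: np.ndarray, loudspeaker_ref: np.ndarray, mu: float = 0.01, taps: int = 32) -> np.ndarray:
    """Toy adaptive feedback canceller (NLMS) removing the loudspeaker echo."""
    w = np.zeros(taps)
    out = np.zeros_like(mic)
    for n in range(taps, len(mic)):
        x = loudspeaker_ref[n - taps:n][::-1]
        e = mic[n] - w @ x
        w += mu * e * x / (x @ x + 1e-9)
        out[n] = e
    return out


def delay_and_sum(ch1: np.ndarray, ch2: np.ndarray, delay_samples: int) -> np.ndarray:
    """Steer toward a target direction by delaying one channel before summing."""
    return 0.5 * (ch1 + np.roll(ch2, delay_samples))


def feedforward_gain(x: np.ndarray, gain_db: float = 6.0) -> np.ndarray:
    return x * 10 ** (gain_db / 20)


def process(mic1, mic2, loudspeaker_ref, target_delay=2):
    a1, a2 = afc(mic1, loudspeaker_ref), afc(mic2, loudspeaker_ref)   # independent AFC per channel
    beam = delay_and_sum(a1, a2, target_delay)                        # beamformer digital audio signal
    target_audio = feedforward_gain(beam)                             # feedforward processing
    return -target_audio                                              # phase shift before playback
```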
An eyewear device comprising (1) a display assembly configured to generate graphical imagery for viewing by a user, (2) an eye-tracking device at least partially integrated into the display assembly, and (3) circuitry communicatively coupled to the eye-tracking device and configured to track an eye of the user based at least in part on light detected by the eye-tracking device. Various other apparatuses, systems, and methods are also disclosed.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
H04N 13/383 - Viewer tracking for tracking of gaze, i.e. detecting the axis of vision of the viewer's eyes
6.
PASSIVE MECHANICAL GUIDING FOR PROPER DONNING OF A WRIST-WEARABLE DEVICE, AND SYSTEMS AND METHODS OF USE THEREOF
A wrist-wearable device comprises a pivoting loop slider that is coupled to a first band and configured to receive a second band. The pivoting slider is configured to pivot around the end of the first band such that, when the pivoting slider has a first position relative to the end of the first band, the pivoting slider provides a first clamping force on the second band of the wrist-wearable device. The first clamping force is configured to provide large adjustments to the tightness of the wrist-wearable device about a wrist of a user. When the pivoting slider has a second position relative to the end of the first band, the pivoting slider provides a second clamping force on the second band. The second clamping force is greater than the first clamping force and provides finer adjustments to the tightness of the wrist-wearable device about a wrist of the user.
Some individuals may have hearing impairments and/or may be in situations that include impediments to hearing. The present disclosure relates to a method (800) comprising: generating (810) an audio signal based on sound received by one or more microphones of an open-ear device; and performing (820) a hearing enhancement process in response to user input, the hearing enhancement process comprising: detecting (830) a speech component of the audio signal; applying (840) a gain function to the speech component of the audio signal based on a plurality of gain parameters to generate a modified output signal; and operating (850) one or more loudspeakers of the open-ear device to produce sound according to the modified output signal.
H04R 1/34 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by using a single transducer with sound reflecting, diffracting, directing or guiding means
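A minimal sketch of the hearing-enhancement step in the open-ear abstract above, assuming a crude energy-based speech detector and a band-limited gain; the parameter names and values are illustrative assumptions, not the disclosed gain parameters.

```python
import numpy as np


def detect_speech(frame: np.ndarray, energy_threshold: float = 0.01) -> bool:
    """Crude voice-activity decision based on frame energy."""
    return float(np.mean(frame ** 2)) > energy_threshold


def apply_gain(frame: np.ndarray, gain_params: dict) -> np.ndarray:
    """Apply a gain to the (assumed) speech band of the audio signal."""
    spectrum = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / gain_params["fs"])
    speech_band = (freqs >= gain_params["low_hz"]) & (freqs <= gain_params["high_hz"])
    spectrum[speech_band] *= 10 ** (gain_params["gain_db"] / 20)
    return np.fft.irfft(spectrum, n=len(frame))


def hearing_enhancement(frame: np.ndarray, gain_params: dict) -> np.ndarray:
    """Generate the modified output signal driving the loudspeakers."""
    return apply_gain(frame, gain_params) if detect_speech(frame) else frame


# Example gain parameters (hypothetical values):
params = {"fs": 16_000, "low_hz": 300.0, "high_hz": 3_400.0, "gain_db": 12.0}
```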
A wrist-wearable device comprises a pivoting loop slider that is coupled to a first band and configured to receive a second band. The pivoting slider is configured to pivot around the end of the first band such that, when the pivoting slider has a first position relative to the end of the first band, the pivoting slider provides a first clamping force on the second band of the wrist-wearable device. The first clamping force is configured to provide large adjustments to the tightness of the wrist-wearable device about a wrist of a user. When the pivoting slider has a second position relative to the end of the first band, the pivoting slider provides a second clamping force on the second band. The second clamping force is greater than the first clamping force and provides finer adjustments to the tightness of the wrist-wearable device about a wrist of the user.
A44B 11/12 - Buckles; Similar fasteners for interconnecting straps or the like, e.g. for safety belts, holding the strap by friction with turnable clamping devices
A44C 5/20 - Fasteners for bracelets for open bracelets
9.
Dispersion-compensated optical assembly and system
An optical assembly includes a first grating for redirecting a polychromatic light beam, e.g. for deviating the light path of an optical system away from the optical train of another optical sub-system. The first grating exhibits chromatic dispersion, resulting in imaging aberrations in the optical system relying on the first grating to deviate the light path. A second grating, with lesser chromatic dispersion but positioned closer to the imaging camera, may be used to undo the chromatic-dispersion-induced blurring of the image caused by the first grating, resulting in a sharper final image.
A disclosed computer-implemented method may include receiving, as part of a demodulation reference signal (DMRS) channel estimation operation, a frequency-domain channel estimation signal that includes a sequence of DMRS channels. The method may also include generating an augmented DMRS channel sequence that may include generating (1) based on an edge DMRS channel and an additional DMRS channel included in the sequence of DMRS channels, an extended DMRS channel sequence that includes a plurality of extended DMRS channels that extend a frequency range of the sequence of DMRS channels, and (2) based on an interpolation matrix and the extended DMRS channel sequence, an extrapolated DMRS channel sequence that includes extrapolated DMRS channels that further extend the frequency range of the sequence of DMRS channels. The method may also include interpolating a data channel based on the augmented DMRS channel sequence. Various other methods and systems are also disclosed.
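The DMRS abstract above describes augmenting a channel-estimate sequence by edge extension, matrix-based extrapolation, and data-channel interpolation. The NumPy sketch below illustrates that flow under assumed, simplified choices (linear edge extension, a caller-supplied interpolation matrix, and per-component linear interpolation); it is not the disclosed estimator.

```python
import numpy as np


def extend_edges(h_dmrs: np.ndarray, n_ext: int = 2) -> np.ndarray:
    """Extend the DMRS channel sequence at both frequency edges using the
    edge channel and its neighbouring channel (linear extension assumed)."""
    left = h_dmrs[0] + (h_dmrs[0] - h_dmrs[1]) * np.arange(n_ext, 0, -1)
    right = h_dmrs[-1] + (h_dmrs[-1] - h_dmrs[-2]) * np.arange(1, n_ext + 1)
    return np.concatenate([left, h_dmrs, right])


def extrapolate(h_extended: np.ndarray, interp_matrix: np.ndarray) -> np.ndarray:
    """Apply an interpolation matrix to produce the extrapolated DMRS channels
    (the matrix is taken as given here)."""
    return interp_matrix @ h_extended


def interpolate_data(h_augmented: np.ndarray, dmrs_pos: np.ndarray, data_pos: np.ndarray) -> np.ndarray:
    """Interpolate data-subcarrier channels from the augmented DMRS sequence,
    handling real and imaginary parts separately."""
    re = np.interp(data_pos, dmrs_pos, h_augmented.real)
    im = np.interp(data_pos, dmrs_pos, h_augmented.imag)
    return re + 1j * im
```

In this toy version the augmented sequence is simply the concatenation of the extended and extrapolated channels; any resemblance of the matrix construction to the claimed method is not implied.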
A liquid crystal display (LCD) of a near-eye display comprises a first substrate, a second substrate, a plurality of photo spacers formed on the second substrate, a plurality of column spacers formed on the first substrate, and a liquid crystal material in regions between the first substrate and the second substrate. In some embodiments, each column spacer of the plurality of column spacers includes two parts that form a recess for receiving a respective photo spacer of the plurality of photo spacers. In some embodiments, each photo spacer of the plurality of photo spacers includes two parts that form a recess for receiving a respective column spacer of the plurality of column spacers. In some embodiments, the plurality of column spacers includes different types of column spacers with different or same heights, and the plurality of photo spacers includes different types of photo spacers with different or same heights.
A system for controlling notification outputs at a head-wearable device that is in communication with one or more peripherals. The system is configured to receive a request to provide a notification, and in response to receiving the request to provide the notification: in accordance with a determination that the head wearable device and one of the one or more peripheral devices are in a first state based on first data from one or more sensors of the system, presenting a notification in a first manner that corresponds to the request to provide the notification, and in accordance with a determination that the head wearable device and one of the one or more peripheral devices are in a second state based on second data from the one or more sensors of the system, presenting a notification in a second manner that corresponds to the request to provide the notification.
The disclosed system may include a conductive mesh layer. The conductive mesh layer may include a lattice that has multiple different electrically conductive links. The electrically conductive links may be shaped according to at least one specified form. The system may also include an active dimming layer that may be configured to provide active dimming according to a control signal. The system may further include an antenna feed connected to the conductive mesh layer. The antenna feed circuitry may drive the conductive mesh layer as a radiating element of an antenna. Various other apparatuses, systems, and mobile electronic devices are also disclosed.
Methods and systems are described for mitigating acoustic feedback via an open ear device. In various examples, systems or methods may receive, via a first microphone and a second microphone positioned on an open ear device, a first audio signal and a second audio signal, respectively. The first audio signal and second audio signal may be converted to a first digital audio signal and a second digital audio signal. The first digital audio signal and second digital audio signal may be independently processed by an AFC to reduce acoustic feedback. A beamformer may adjust the first digital audio signal and second digital audio signal based on a target direction to create a beamformer digital audio signal. The beamformer digital audio signal may be processed via feedforward processing to create a target audio. The target audio may be phase shifted and transmitted to a loudspeaker positioned on the open ear device.
G10K 11/178 - Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
15.
SYSTEMS AND METHODS FOR TESTING UNIVERSAL SERIAL BUS DEVICES
As disclosed herein, an electronic adapter for facilitating testing of a Universal Serial Bus Type-C device is provided. The adapter may include a first Universal Serial Bus (USB) port for removably coupling with a host device. The adapter may include a USB connector for removably coupling with a first peripheral device. The USB connector may include a first row of pins and a second row of pins. The adapter may include at least one integrated circuit configured to digitally switch between using the first row of pins and the second row of pins to enable communication between the host device and the first peripheral device. A method, a system, and a non-transitory computer-readable storage medium are also disclosed.
A method for fabricating a deformable electronic device includes obtaining a first substrate having a plurality of circuit components, and a second substrate having a plurality of channels and a plurality of holes. The method also includes assembling the first and second substrates to form a stack, in which holes in the plurality of holes of the second substrate are aligned with circuit components in the plurality of circuit components of the first substrate. The method further includes filling the plurality of channels and the plurality of holes with a liquid metal material, thereby producing a plurality of deformable interconnects in the stack. The plurality of deformable interconnects electrically connects the plurality of circuit components to form one or more circuits.
The disclosed computer-implemented method may include determining that a sensor is to gather data for first and second sensor operations within a specified time period that is dependent on an operational frequency of the sensor. The method may also include determining that an exposure center for the first sensor operation is to be altered to allow both the first and the second sensor operations to occur within the specified time period. Still further, the method may include dynamically calculating the exposure center for the first sensor operation that will allow both the first and the second sensor operations to occur within the specified time period. The method may also include triggering the sensor to perform the first sensor operation using the dynamically calculated exposure center, where the first and second sensor operations are both performed during the specified time period. Various other methods, systems, and computer-readable media are also disclosed.
H04N 25/589 - Control of the dynamic range involving several exposures acquired sequentially, e.g. using the combination of odd and even image fields having different integration times, e.g. short and long exposures
H04N 25/78 - Readout circuits for addressed sensors, e.g. output amplifiers or A/D converters
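A small sketch of the scheduling idea in the exposure-center abstract above, under an assumed model in which the specified time period is one frame period of the sensor; the placement rule and names are hypothetical, not the disclosed calculation.

```python
def exposure_center(frame_rate_hz: float, exposure_1_s: float, duration_2_s: float) -> float:
    """Return a centre time (seconds into the frame) for the first sensor
    operation such that both operations fit within one frame period."""
    period = 1.0 / frame_rate_hz
    if exposure_1_s + duration_2_s > period:
        raise ValueError("both operations cannot fit in one frame period")
    # Place operation 1 as early as possible, leaving the tail of the frame
    # for operation 2; its exposure centre is then half its exposure time.
    return exposure_1_s / 2.0


# e.g. a 30 Hz sensor, 8 ms exposure for the first operation, 12 ms second operation
center_s = exposure_center(30.0, 0.008, 0.012)   # -> 0.004 s into the frame
```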
18.
PORTABLE SMART GLASSES CHARGING CASE CONFIGURED TO CHARGE ONE OR MORE ADDITIONAL PERIPHERAL DEVICES, AND SYSTEMS THEREOF
A portable smart glasses charging case comprises a glasses-holding section configured to hold a pair of smart glasses and deliver a charge. The portable smart glasses charging case also comprises a foldable section having: a folded state in which the foldable section is configured to cover the glasses-holding section while the pair of smart glasses continues to receive the charge; and an unfolded state in which the foldable section lies substantially flat to reveal a first section for delivering a charge to a first electronic device that is distinct from the pair of smart glasses and a second section for delivering a charge to a second electronic device that is distinct from the pair of smart glasses and the first electronic device, and the pair of smart glasses continues to receive the charge while the first and second sections are delivering respective charges to the first and second electronic devices.
H02J 50/40 - Circuit arrangements or systems for wireless supply or distribution of electric power using two or more transmitting or receiving devices
Systems and methods for Mixed Reality (MR) applications in a head-mounted display. In implementations, an ambient light sensor (ALS) or a world-side camera captures an ambient light measurement of an environment of a world-side scene. In some aspects, based on the ambient light measurement, processing logic may be configured to adjust a brightness and/or color temperature of pass-through images and virtual images to harmonize a user's viewing experience while using the head-mounted display.
G09G 3/00 - Control arrangements or circuits, of interest only to the display of information using means other than cathode-ray tubes
G09G 3/34 - Control arrangements or circuits, of interest only to the display of information using means other than cathode-ray tubes, for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, by control of light from an independent source
20.
GENERATIVE MODEL-DRIVEN SAMPLING FOR ADAPTIVE SPARSE MULTIMODAL SENSING OF USER ENVIRONMENT AND INTENT
The disclosed computer-implemented method may include (1) predicting a user state, wherein the user state is measurable via a plurality of different sensor sampling modes, (2) determining a level of uncertainty associated with the predicted user state, and (3) selecting, from the plurality of different sensor sampling modes, a sampling mode to measure the user state. Selecting the sampling mode may include selecting a first sampling mode in response to determining that the level of uncertainty is above a threshold or selecting a second sampling mode in response to determining that the level of uncertainty is below the threshold. Various other methods, systems, and computer-readable media are also disclosed.
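The abstract above reduces to a thresholded mode-selection rule on prediction uncertainty. The following hedged sketch assumes an entropy-based uncertainty measure and two hypothetical sampling modes; the names, rates, and threshold are illustrative only.

```python
import math

SAMPLING_MODES = {"dense": 200.0, "sparse": 10.0}   # hypothetical sample rates in Hz


def prediction_uncertainty(class_probs: list[float]) -> float:
    """Shannon entropy of the predicted user-state distribution."""
    return -sum(p * math.log(p) for p in class_probs if p > 0.0)


def select_sampling_mode(class_probs: list[float], threshold: float = 0.5) -> str:
    """First (dense) mode when uncertainty is above the threshold, second
    (sparse) mode when it is below."""
    return "dense" if prediction_uncertainty(class_probs) > threshold else "sparse"


print(select_sampling_mode([0.4, 0.35, 0.25]))   # high entropy -> "dense"
print(select_sampling_mode([0.97, 0.02, 0.01]))  # low entropy -> "sparse"
```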
A camera device with optical image stabilization and focusing functionalities. The camera device includes a lens assembly positioned along an optical axis, a magnetic assembly with a plurality of magnets that produce a magnetic field, and a platform comprising a plurality of stabilization coils, a plurality of focusing coils, and a sensor. The platform can move the sensor in one or more directions relative to the optical axis. Each stabilization coil is aligned to a first side of a respective magnet and supplied with respective first current that interacts with the magnetic field causing the sensor to translate in a direction orthogonal to the optical axis. Each focusing coil is aligned to a second side of the respective magnet and supplied with respective second current that interacts with the magnetic field causing the sensor to translate towards or away from the lens assembly.
G03B 5/04 - Réglage vertical de l'objectifPorte-objectifs décentrables en hauteur
G03B 13/36 - Systèmes de mise au point automatique
G03B 30/00 - Modules photographiques comprenant des objectifs et des unités d'imagerie intégrés, spécialement adaptés pour être intégrés dans d'autres dispositifs, p. ex. des téléphones mobiles ou des véhicules
22.
INTEGRATED MEMS MICROPHONE PERFORMANCE ENHANCEMENT WITH A MEMBRANE
Systems and methods for a MEMS microphone package are disclosed. The MEMS microphone package may include a first port to direct sound to a MEMS system, including a die substrate, an acoustic membrane, and one or more plates. The MEMS microphone package may include an ASIC to produce microphone output based on an electrical signal, a PCB, a lid, and a second port. The first port and the second port may define a front volume and a back volume respectively. The second port may increase the back volume to improve sensitivity and reduce the acoustic sensor's noise floor to improve signal-to-noise ratio.
The disclosed system may include a conductive mesh layer. The conductive mesh layer may include a lattice that has multiple different electrically conductive links. The electrically conductive links may be shaped according to at least one specified form. The system may also include an active dimming layer that may be configured to provide active dimming according to a control signal. The system may further include an antenna feed connected to the conductive mesh layer. The antenna feed circuitry may drive the conductive mesh layer as a radiating element of an antenna. Various other apparatuses, systems, and mobile electronic devices are also disclosed.
A method for fabricating a deformable electronic device includes obtaining a first substrate having a plurality of circuit components, and a second substrate having a plurality of channels and a plurality of holes. The method also includes assembling the first and second substrates to form a stack, in which holes in the plurality of holes of the second substrate are aligned with circuit components in the plurality of circuit components of the first substrate. The method further includes filling the plurality of channels and the plurality of holes with a liquid metal material, thereby producing a plurality of deformable interconnects in the stack. The plurality of deformable interconnects electrically connects the plurality of circuit components to form one or more circuits.
A system and method for typeahead image generation are provided. The method may include receiving, via a user interface during a prompting session, a text prompt describing an image. The method also may include generating, via a trained diffusion model, the image representative of the text prompt. The method further may include determining, via the trained diffusion model, a reconciled risk score based on a determined risk score of the text prompt and a determined risk score of the generated image. The method may further include, via the trained diffusion model and in response to the determined reconciled risk score, (i) approving the generated image in an instance in which the determined reconciled risk score meets or exceeds a predetermined threshold, or (ii) denying the generated image in an instance in which the determined reconciled risk score fails to meet the predetermined threshold.
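A minimal sketch of the reconciliation and thresholding step described above, under assumed scoring semantics: the "risk score" is treated here as a safety score in [0, 1], where higher means safer, matching the approve-when-above-threshold behaviour; the diffusion model and scorers are stubbed out, and the combination rule is an assumption.

```python
def reconcile(prompt_score: float, image_score: float) -> float:
    """Combine the prompt and image scores conservatively (take the worst)."""
    return min(prompt_score, image_score)


def decide(prompt_score: float, image_score: float, threshold: float = 0.7) -> str:
    """Approve the generated image only if the reconciled score meets the threshold."""
    reconciled = reconcile(prompt_score, image_score)
    return "approve" if reconciled >= threshold else "deny"


print(decide(prompt_score=0.9, image_score=0.8))   # approve
print(decide(prompt_score=0.9, image_score=0.4))   # deny
```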
A method is provided for detecting whether image data captured by a head-worn device is blocked by an occlusion. The method includes receiving image data captured by a camera of the head-worn device. The method includes determining that the image data indicates that an occlusion caused by the user is present in a portion of a field of view of the camera. The method includes, upon determining that the occlusion satisfies a first occlusion threshold, notifying the user that there is an occlusion to the field of view of the camera. And the method includes, when the occlusion satisfies a second occlusion threshold, (i) forgoing notifying the user that there is an occlusion to the field of view, and (ii) modifying the image data to remove or minimize the occlusion.
H04N 23/61 - Control of cameras or camera modules based on recognised objects
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
H04N 23/63 - Control of cameras or camera modules by using electronic viewfinders
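A sketch of the two-threshold occlusion handling in the abstract above. The fraction-of-view metric, the threshold values, and the median-fill repair are illustrative assumptions, not the device's actual detection or inpainting pipeline.

```python
import numpy as np


def occluded_fraction(occlusion_mask: np.ndarray) -> float:
    """Fraction of the camera's field of view covered by the detected occlusion."""
    return float(occlusion_mask.mean())


def handle_frame(frame: np.ndarray, occlusion_mask: np.ndarray,
                 notify_threshold: float = 0.25, repair_threshold: float = 0.05):
    """frame: HxWx3 image array; occlusion_mask: HxW boolean array."""
    frac = occluded_fraction(occlusion_mask)
    if frac >= notify_threshold:
        return frame, "notify_user"              # large occlusion: tell the wearer
    if frac >= repair_threshold:
        repaired = frame.copy()
        repaired[occlusion_mask] = np.median(frame[~occlusion_mask], axis=0)
        return repaired, "silently_repaired"     # small occlusion: fix it quietly
    return frame, "no_occlusion"
```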
Systems may include an eye-tracking illumination source, a waveguide optically coupled to the eye-tracking illumination source, and an optical sensor positioned to detect light from the eye-tracking illumination source. The waveguide may include at least one optical element for directing light from the eye-tracking illumination source toward an eye of a user. Various other related systems, devices, and methods are also disclosed.
Methods, systems, and apparatuses may assist with implementing hands-free sharing of files between head-mounted displays or other devices. Devices in proximity may be identified, and then eye-gaze tracking information or electromyogram information may be used to select and share files.
Apparatuses, systems, methods, and systems/methods of manufacturing a compact light projection device with one or more nonlocal metasurface space compressors, suitable for use in an eye/face tracking system of a near-eye device, are described. In one aspect, an eye/face tracking system may have a light source, a nonlocal metasurface space compressor to receive light from the light source, modify said light, and project the modified light, and a beam forming element to receive and project the modified light as the structured light onto the user's eye and/or surrounding facial tissue.
Systems and methods for a MEMS microphone package are disclosed. The MEMS microphone package may include a first port to direct sound to a MEMS system, including a die substrate, an acoustic membrane, and one or more plates. The MEMS microphone package may include an ASIC to produce microphone output based on an electrical signal, a PCB, a lid, and a second port. The first port and the second port may define a front volume and a back volume respectively. The second port may increase the back volume to improve sensitivity and reduce the acoustic sensor's noise floor to improve signal-to-noise ratio.
A method for providing seamless transitions between different user interfaces using multiple processors of a single device is described. The method includes causing, by a first processor of the device, display of a first user interface and receiving a user input requesting display of a second user interface instead of the first user interface. In accordance with determining that the user input request requires performance of an operation that does not satisfy processor performance criteria, the method includes sending, by the first processor and to a second processor, a wake-up message based on the user input. The first processor causes, based on the user input, partial display of the second user interface. In conjunction with handing off display control for the second user interface to the second processor, the method causes the second processor to display a complete second user interface.
G06F 3/14 - Digital output to display device
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range of values
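A hedged sketch of the hand-off flow described in the dual-processor abstract above: a low-power processor shows a partial second user interface immediately, wakes the main processor, and hands display control over once the main processor is ready. The thread-based processor model and message names are hypothetical.

```python
import queue
import threading


class MainProcessor(threading.Thread):
    """Stand-in for the second (higher-performance) processor."""

    def __init__(self, wake_queue: queue.Queue):
        super().__init__(daemon=True)
        self.wake_queue = wake_queue
        self.ready = threading.Event()

    def run(self):
        self.wake_queue.get()      # block until a wake-up message arrives
        # ... heavy UI composition would happen here ...
        self.ready.set()


def handle_user_input(requires_heavy_operation: bool):
    wake_queue: queue.Queue = queue.Queue()
    main = MainProcessor(wake_queue)
    main.start()
    if requires_heavy_operation:
        wake_queue.put("wake_up")          # first processor sends the wake-up message
        print("first processor: partial second UI displayed")
        main.ready.wait(timeout=1.0)       # hand off once the second processor is up
        print("second processor: complete second UI displayed")
    else:
        print("first processor: UI displayed without hand-off")


handle_user_input(requires_heavy_operation=True)
```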
32.
Coordination Between Independent Rendering Frameworks
Aspects of the present disclosure provide a “framework of frameworks” to help developers build artificial reality (XR) applications, including two-dimensional and three-dimensional content, using disparate rendering frameworks. These rendering frameworks can output data to the XR environment, but cannot talk to each other. Thus, some implementations can provide an intermediary framework to coordinate communication and rendering of content between the various systems. The intermediary framework can provide input routing by detecting an event with respect to a piece of content, and routing input data only to the system associated with that piece of content. The intermediary framework can also allow a node within an augment associated with one system to receive notifications of events in another node within the augment associated with another system.
A system and method for typeahead image generation are provided. The method may include receiving, via a user interface during a prompting session, a text prompt describing an image. The method also may include generating, via a trained diffusion model, the image representative of the text prompt. The method further may include determining, via the trained diffusion model, a reconciled risk score based on a determined risk score of the text prompt and a determined risk score of the generated image. The method may further include, via the trained diffusion model and in response to the determined reconciled risk score, (i) approving the generated image in an instance in which the determined reconciled risk score meets or exceeds a predetermined threshold, or (ii) denying the generated image in an instance in which the determined reconciled risk score fails to meet the predetermined threshold.
A head-wearable device allowing for passive cooling at a face cover of the head-wearable device is described herein. The head-wearable device comprises a housing and a passive cooling surface cover coupled to a user-facing surface of the housing such that, when the head-wearable device is worn, the passive cooling surface cover contacts a portion of a user's body. The housing includes one or more electronics components and the user-facing surface. The passive cooling surface cover is configured to (i) absorb heat generated by the one or more electronics components and transferred to the user-facing surface and (ii) evaporate a stored moisture using the heat generated by the one or more electronics components to decrease a temperature of the one or more electronics components and/or the passive cooling surface cover.
A system comprising (1) a tuner configured to tune a radio and (2) a controller communicatively coupled to the tuner, wherein the controller is configured to (1) select a tuner code to apply to the tuner based at least in part on telemetry data indicative of a certain use-case scenario and (2) cause the tuner to tune the radio by applying the tuner code to achieve a certain state of the radio. Various other apparatuses, devices, systems, and methods are also disclosed.
A polyimide-based heat pipe is described. In examples, the heat pipe may include a first substrate including raised features on a surface of the substrate. In examples, the first substrate is covered with a second substrate or cover. At least the second substrate comprises polyimide. An outer surface of the heat pipe, e.g., the second substrate, includes one or more antennas disposed thereon.
Aspects of the present disclosure are directed to native artificial reality system execution of an application using synthetic input from an external device. In traditional use cases, artificial reality system input is provided by input channels native to the system. However, utilizing native input channels can detract from usability in some scenarios, such as during application development and/or testing. Implementations include an interface manager, executing at external device(s), that provides synthetic input to an artificial reality system. The artificial reality system can execute an application using the synthetic input, generate application data via the execution, such as visual information, and stream the application data back to the interface manager at the external device(s). In some implementations, a user defines the synthetic input via interactions with the interface manager, and the interface manager displays, to the user, the application data generated via the synthetic input.
In some embodiments, a first access point may include one or more processors and a transceiver for operating in a wireless local area network (WLAN). The one or more processors may be configured to receive, via the transceiver associated with a first basic service set (BSS), information relating to a first schedule from a second access point of the WLAN associated with a second BSS different from the first BSS. The one or more processors may be configured to generate a second schedule based on the information relating to the first schedule. Each of the first schedule and the second schedule may indicate timing information for coordinating between the second access point and the first access point.
H04W 74/0816 - Non-scheduled access, e.g. ALOHA, using carrier sensing, e.g. carrier sense multiple access [CSMA] with collision avoidance
H04W 84/12 - Wireless local area networks [WLAN]
39.
DISPLAY DISPARITY SENSOR, AND SYSTEMS AND METHODS OF USE THEREOF
An artificial-reality (AR) headset including a first holographic element that projects first focused light onto a first light detector and a second holographic element that projects second focused light onto a second light detector. The AR headset includes a first display coupled to the first holographic element, wherein the first display causes display of a first image, and a second display coupled to the second holographic element, wherein the second display causes display of a second image. The headset includes at least one display engine configured to receive respective calibration data from the first light detector and the second light detector; determine, based on comparing the respective calibration data, a disparity between the first display and the second display; and, in accordance with a determination that the disparity between the first display and the second display satisfies disparity correction criteria, generate an updated first image or an updated second image.
The various implementations described herein include methods and systems for tracking user movements. In one aspect, a method includes, while a user is interacting with a virtual object, obtaining tracking information by tracking, via a sensor, a position of an arm of a user. The method also includes, in conjunction with tracking the position of the arm, obtaining wrist information by detecting, via a neuromuscular-signal sensor, a movement of a wrist of the arm. The method further includes assigning one or more motion characteristics to the virtual object in accordance with the tracking information and the wrist information.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
A variable-stiffness spring member may be integrated into a deformable optical element, such as a fluid lens. An example fluid lens may include a substrate, an actuator, a fluid layer disposed between the substrate and the actuator, and a spring member disposed between the substrate and the actuator. The spring member may include a flexure that at least partially surrounds the fluid layer and a spring stiffness of the flexure may differ at each of at least two peripheral locations around the fluid layer. Various other devices, systems, and methods are also disclosed.
G02B 3/14 - Fluid-filled or evacuated lenses of variable focal length
G02B 7/04 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses with mechanism for focusing or varying magnification
G02B 26/00 - Optical devices or arrangements for the control of light using movable or deformable optical elements
Aspects of the present disclosure are directed to a dynamic boundary that triggers operating changes at an artificial reality system. Implementations of a dynamic boundary manager can enforce dynamic boundary criteria. A dynamic boundary that corresponds to the user can be automatically generated, such as when the artificial reality system operates in a certain operating condition. User movements can trigger dynamic boundary criteria and cause one or more of: expansion of the dynamic boundary; popping of the dynamic boundary; and/or reforming of the dynamic boundary. Responsive to these triggers, the dynamic boundary manager can also cause changes to operations at the artificial reality system, such as pause(s) to executing applications and/or changes to the artificial reality environment displayed to the user (e.g., transitions to and from a virtual reality environment, a pass-through visual, etc.).
Aspects of the present disclosure are directed to operating an artificial reality system in single-handed mode. Artificial reality systems receive user input via several channels; however, conventional systems lack functionality that helps diverse users operate these systems. Some types of input, such as input that requires movement of two hands and/or two hand-held controllers, may be more challenging for some diverse individuals to provide or may not be possible in certain situations, e.g., where one controller is disabled. Implementations operate artificial reality systems in single-handed mode, such as by translating instances of single-handed input into two-handed input. For example, the translated two-handed input can cause application functionality at the artificial reality system that would otherwise pose a challenge for some diverse individuals.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range of values, for image manipulation, e.g. dragging, rotation, expansion or change of colour
G06T 19/00 - Manipulating 3D models or images for computer graphics
The disclosed system may include a user device with (1) a first module, which performs a first functionality, and (2) a second module, which performs a second functionality, (3) a physical processor, and (4) physical memory including computer-executable instructions that cause the physical processor to (i) determine that a change in a range of frequency, being used by the first module, has resulted, or will result, in interference between the first module's changed range of frequency and a range of frequency being used by the second module, and (ii) in response to the determining that the change in the range of frequency has resulted in the interference, change the range of the frequency being used by the second module to a new range of frequency that does not interfere with the first module's changed range of frequency. Various other wearable devices, apparatuses, and methods of manufacturing are also disclosed.
H04W 36/20 - Performing reselection for specific purposes for optimising the interference level
H04L 5/00 - Arrangements affording multiple use of the transmission path
H04W 36/06 - Reselecting a communication resource in the serving access point
H04W 72/0453 - Resources in frequency domain, e.g. a carrier in FDMA
H04W 72/541 - Allocation or scheduling criteria for wireless resources based on quality criteria, using the level of interference
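An illustrative sketch of the coexistence rule in the abstract above: when the first module moves to a frequency range that overlaps the second module's range, the second module is retuned to a non-interfering range. The band values, guard margin, and candidate list are assumptions for illustration only.

```python
Range = tuple[float, float]  # (low_MHz, high_MHz)

CANDIDATE_RANGES_MODULE2: list[Range] = [(2400.0, 2420.0), (2440.0, 2460.0), (5180.0, 5200.0)]


def overlaps(a: Range, b: Range, guard_mhz: float = 1.0) -> bool:
    """True if the two ranges overlap (including an assumed guard band)."""
    return a[0] - guard_mhz < b[1] and b[0] - guard_mhz < a[1]


def retune_if_needed(module1_new: Range, module2_current: Range) -> Range:
    """Return the range module 2 should use after module 1's frequency change."""
    if not overlaps(module1_new, module2_current):
        return module2_current
    for candidate in CANDIDATE_RANGES_MODULE2:
        if not overlaps(module1_new, candidate):
            return candidate                     # first non-interfering range
    raise RuntimeError("no interference-free range available for module 2")


print(retune_if_needed(module1_new=(2435.0, 2455.0), module2_current=(2440.0, 2460.0)))
# -> (2400.0, 2420.0)
```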
The disclosed system may include a user device with (1) a first module, which performs a first functionality, and (2) a second module, which performs a second functionality, (3) a physical processor, and (4) physical memory including computer-executable instructions that cause the physical processor to (i) determine that a change in a range of frequency, being used by the first module, has resulted, or will result, in interference between the first module's changed range of frequency and a range of frequency being used by the second module, and (ii) in response to the determining that the change in the range of frequency has resulted in the interference, change the range of the frequency being used by the second module to a new range of frequency that does not interfere with the first module's changed range of frequency. Various other wearable devices, apparatuses, and methods of manufacturing are also disclosed.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G11B 33/14 - Reducing influence of physical parameters, e.g. temperature change, moisture, dust
H02K 11/02 - Structural association of dynamo-electric machines with electric components or with devices for shielding, monitoring or protection, for suppression of electromagnetic interference
H04B 1/713 - Spread spectrum techniques using frequency hopping
An apparatus comprising (1) a beam splitter, (2) a plurality of light sources that are oriented to emit light toward the beam splitter from different directions relative to one another, and (3) a controllable mirror configured to (A) receive the light from the beam splitter and reflect the light back toward the beam splitter for transmission to an eye of a user via a waveguide. Various other apparatuses, systems, and methods are also disclosed.
G02B 27/10 - Beam-splitting or beam-combining systems
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
G02B 27/28 - Optical systems or apparatus not provided for by any of the groups, for polarising
G02B 26/08 - Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
47.
BODY POSE ESTIMATION USING SELF-TRACKED CONTROLLERS
In one embodiment, a computing system may determine a pose of a device held by or attached to a hand of a user based on sensor data captured by the device. The system may determine a pose of a headset worn by the user based on sensor data captured by the headset. The system may determine positions of a first set of keypoints associated with a first portion of a body of the user based on (1) one or more first images captured by one or more cameras of the device, (2) the pose of the device, (3) one or more second images captured by one or more cameras of the headset, and (4) the pose of the headset. The system may determine a body pose of the user based at least on the positions of the first set of keypoints.
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06N 3/049 - Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
In some embodiments, a first access point may include one or more processors and a transceiver for operating in a wireless local area network (WLAN). The one or more processors may be configured to receive, via the transceiver associated with a first basic service set (BSS), information relating to a first schedule from a second access point of the WLAN associated with a second BSS different from the first BSS. The one or more processors may be configured to generate a second schedule based on the information relating to the first schedule. Each of the first schedule and the second schedule may indicate timing information for coordinating between the second access point and the first access point.
A photonic integrated circuit comprising (1) a substrate, (2) a set of light sources that are each disposed on the substrate, and (3) a movable cantilevered waveguide that is at least partially disposed on the substrate and extends outward from the substrate, wherein the movable cantilevered waveguide is configured to move in at least one direction to facilitate actuating light emitted by the set of light sources. Various other apparatuses, systems, and methods are also disclosed.
G02B 6/12 - Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings, of the optical waveguide type of the integrated circuit kind
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
50.
DISPLAY DISPARITY SENSOR, AND SYSTEMS AND METHODS OF USE THEREOF
An artificial-reality (AR) headset including a first holographic element that projects first focused light onto a first light detector and a second holographic element that projects second focused light onto a second light detector. The AR headset includes a first display coupled to the first holographic element, wherein the first display causes display of a first image, and a second display coupled to the second holographic element, wherein the second display causes display of a second image. The headset includes at least one display engine configured to receive respective calibration data from the first light detector and the second light detector; determine, based on comparing the respective calibration data, a disparity between the first display and the second display; and, in accordance with a determination that the disparity between the first display and the second display satisfies disparity correction criteria, generate an updated first image or an updated second image.
The disclosure is directed to systems and methods for radio link control for latency-sensitive services using error protection encoding and RLC retransmission configurations of PDUs based on PDU types. The solutions can include a device. The device can include one or more processors configured to determine whether a protocol data unit (PDU) of an extended reality (XR) application to be transmitted via a radio link control (RLC) layer corresponds to a low-loss traffic or a loss-tolerant traffic. The device can encode the PDU for error protection, according to the determination. The device can select, from a plurality of configurations for retransmission, a configuration for retransmission of the PDU, according to the determination.
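A sketch of the per-PDU decision described above, with assumed configuration values: low-loss XR traffic gets stronger error protection and a more aggressive RLC retransmission configuration than loss-tolerant traffic. The PDU type names, class mapping, and parameter values are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class RlcConfig:
    code_rate: float       # error-protection coding rate (lower = more redundancy)
    max_retx: int          # maximum RLC retransmissions
    poll_interval_ms: int  # how often delivery status is polled


RETX_CONFIGS = {
    "low_loss": RlcConfig(code_rate=0.5, max_retx=8, poll_interval_ms=5),
    "loss_tolerant": RlcConfig(code_rate=0.9, max_retx=1, poll_interval_ms=40),
}


def classify_pdu(pdu_type: str) -> str:
    """Assumed mapping from XR PDU type to traffic class."""
    return "low_loss" if pdu_type in {"pose_update", "control"} else "loss_tolerant"


def prepare_pdu(pdu_type: str) -> RlcConfig:
    """Select the retransmission configuration for this PDU."""
    return RETX_CONFIGS[classify_pdu(pdu_type)]


print(prepare_pdu("pose_update"))   # strong protection, many retransmissions
print(prepare_pdu("video_slice"))   # light protection, tolerant of loss
```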
Aspects of the present disclosure are directed to application multitasking in a shell with a three-dimensional environment. Implementations immerse a user in a three-dimensional environment via an artificial reality system, such as a display environment for a system shell. The system shell can execute applications (e.g., system shell applications, remoted applications, etc.). An executing application can correspond to a displayed virtual object (e.g., panel). The system shell can concurrently execute two, three, or more applications and the three-dimensional environment can concurrently display two, three, or more corresponding virtual objects that display contents for the executing applications. Implementations of a mode manager can manage a mode for the three-dimensional environment/system shell. Example modes include cooperative mode and exclusive mode. Implementations of cooperative mode permit concurrent display of multiple virtual objects from different applications while the exclusive mode permits display of virtual objects only from the executing application entering exclusive mode.
A device and/or method for storing one or more sound filters in a discretized pose space. The device determines that a microphone array during a first time period is in a first discrete pose of a plurality of discrete poses, wherein the plurality of discrete poses discretizes a pose space. The pose space includes at least an orientation component and may further include a translation component. The device retrieves a sound filter associated with the first discrete pose from a memory cache (e.g., for memoization). The device generates audio content using the sound filter and presents the audio content via a transducer array.
H04R 1/40 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
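A minimal sketch of the pose-keyed filter caching (memoization) described in the abstract above, assuming a yaw/pitch-only orientation grid; the grid resolution and the stubbed filter synthesis are illustrative assumptions, not the device's filter design.

```python
import numpy as np

STEP_DEG = 15.0            # assumed grid resolution of the discretized pose space
_filter_cache: dict[tuple[int, int], np.ndarray] = {}


def discretize_pose(yaw_deg: float, pitch_deg: float) -> tuple[int, int]:
    """Map a continuous orientation to a discrete pose index."""
    return (round(yaw_deg / STEP_DEG), round(pitch_deg / STEP_DEG))


def compute_filter(pose_index: tuple[int, int], taps: int = 64) -> np.ndarray:
    """Stand-in for an expensive filter computation (e.g. beamforming weights)."""
    rng = np.random.default_rng(hash(pose_index) % (2**32))
    return rng.standard_normal(taps)


def get_filter(yaw_deg: float, pitch_deg: float) -> np.ndarray:
    """Return the cached filter for the discrete pose, computing it on a miss."""
    pose_index = discretize_pose(yaw_deg, pitch_deg)
    if pose_index not in _filter_cache:
        _filter_cache[pose_index] = compute_filter(pose_index)
    return _filter_cache[pose_index]


audio_filter = get_filter(yaw_deg=31.0, pitch_deg=-7.0)   # reused for nearby orientations
```

The cache lookup replaces recomputation whenever the array returns to a previously visited discrete pose, which is the benefit the memoization in the abstract targets.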
54.
Low-resource task-oriented semantic parsing via intrinsic modeling for assistant systems
In one embodiment, a method includes receiving training utterances associated with a domain, receiving ontology labels for the domain, wherein the ontology labels comprise one or more of an intent or a slot, generating an inventory for the domain, wherein the inventory comprises at least a respective index and respective span for each intent or slot, wherein the respective span comprises a respective descriptive label associated with the intent or slot, and wherein the respective descriptive label comprises a natural-language description of the intent or slot, generating frames for training utterances based on the training utterances and the inventory by a natural-language understanding (NLU) model, wherein each frame comprises a structural representation of the respective training utterance, wherein the structural representation is generated based on a comparison between the corresponding training utterance and the inventory, and updating the NLU model based on the frames.
A system and method for providing recommended or suggested format(s) of content are provided. The system may analyze one or more items of content associated with a user being input or captured by a user interface. The system may also implement a machine learning model including training data pre-trained, or trained in real-time, on one or more content items having one or more content formats. The system may also automatically determine at least one suggested content format applied to the one or more items of content responsive to determining that at least a subset of the one or more items of content are similar to corresponding content items of a same or similar type associated with, or within, the training data. The system may also present, by a user interface or a display device, the at least one suggested content format applied to the one or more items of content.
A system and method for generating summaries of a resource(s) are provided. The system may analyze a resource(s), associated with a user, being input/captured by a user interface. The resource(s) may be sharable among users of a group. The system may implement a machine learning model including training data pre-trained, or trained in real-time, on summaries of resources as a same or similar type as the resource(s), one or more content items associated with content of the resource(s), or user interaction historical data. The system may automatically determine a suggested summary, of the resource(s), tailored to the user in response to determining interests or focuses of the user based in part on analyzing the user interaction historical data. The system may present, by a user interface or a display device, the suggested summary of the resource(s).
A pair of smart glasses described herein includes a temple arm and a removable temple tip. The temple tip is configured to be removably attached to a distal end of the temple arm and the temple tip is configured to be removed by a wearer of the pair of smart glasses. The temple tip includes a battery and an electrical connection configured for transferring power to an electrical component of the pair of smart glasses.
Systems and methods for latency improvement may include a first endpoint which is configured to transmit, via one or more intermediary network devices to a second endpoint, first traffic generated by the first endpoint for receipt by the second endpoint. The first endpoint may receive, from a first intermediary network device of the one or more intermediary network devices, a packet generated by the first intermediary network device, the packet indicating congestion experienced by the first intermediary network device. The first endpoint may transmit, via the one or more intermediary network devices to the second endpoint, second traffic generated by the first endpoint according to the packet received from the first intermediary network device.
An optical element includes a primary electrode array having a plurality of primary electrodes extending in a first direction, a secondary electrode array having a plurality of secondary electrodes extending in a second direction, where at least one secondary electrode overlaps a portion of at least one primary electrode, and a switchable active layer disposed between the primary electrode array and the secondary electrode array, the switchable active layer being configured to modulate light transmission through the optical element in response to an applied voltage.
G02F 1/137 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour, based on liquid crystals, e.g. single liquid crystal display cells, characterised by the electro-optical or magneto-optical effect, e.g. field-induced phase transition, orientation effect, guest-host interaction or dynamic scattering
The disclosed system may include a housing dimensioned to secure various components including at least one physical processor and various sensors. The system may also include a camera mounted to the housing, as well as physical memory with computer-executable instructions that, when executed by the physical processor, cause the physical processor to: acquire images of a surrounding environment using the camera mounted to the housing, identify features of the surrounding environment from the acquired images, generate a map using the features identified from the acquired images, access sensor data generated by the sensors, and determine a current pose of the system in the surrounding environment based on the features in the generated map and the accessed sensor data. Various other methods, apparatuses, and computer-readable media are also disclosed.
H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
H04N 23/54 - Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
61.
Systems And Methods For Coordinating Operation Of A Head-Wearable Device And An Electronic Device To Assist A User In Interacting With The Electronic Device
Methods and systems for coordinating operation of a head-wearable device and an electronic device to assist a user in interacting with the electronic device are disclosed. A method includes, while a user interface is displayed on a display of an electronic device associated with a user, receiving sensor data from one or more sensors of the electronic device or from a head-wearable device worn by the user that is in communication with the electronic device. The method includes determining, based at least in part on the sensor data from the one or more sensors, whether an augmented-display criterion is satisfied for the electronic device, and, in accordance with a determination that the augmented-display criterion is satisfied, causing presentation, via a display of the head-wearable device, of an augmented representation of the user interface.
G06F 3/04815 - Interaction s’effectuant dans un environnement basé sur des métaphores ou des objets avec un affichage tridimensionnel, p. ex. modification du point de vue de l’utilisateur par rapport à l’environnement ou l’objet
G06F 3/01 - Dispositions d'entrée ou dispositions d'entrée et de sortie combinées pour l'interaction entre l'utilisateur et le calculateur
G06F 3/04842 - Sélection des objets affichés ou des éléments de texte affichés
G06F 3/04883 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] utilisant des caractéristiques spécifiques fournies par le périphérique d’entrée, p. ex. des fonctions commandées par la rotation d’une souris à deux capteurs, ou par la nature du périphérique d’entrée, p. ex. des gestes en fonction de la pression exercée enregistrée par une tablette numérique utilisant un écran tactile ou une tablette numérique, p. ex. entrée de commandes par des tracés gestuels pour l’entrée de données par calligraphie, p. ex. sous forme de gestes ou de texte
G06F 3/14 - Sortie numérique vers un dispositif de visualisation
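The sketch below illustrates, under assumptions, how an augmented-display criterion could be evaluated from sensor data before presenting the augmented representation on the head-wearable display; the specific signals (device visibility, distance, screen state) and the threshold are not taken from the abstract.

```python
# Illustrative sketch only: one way an "augmented-display criterion" could be
# evaluated from sensor data. The signals and threshold are assumptions.
from dataclasses import dataclass

@dataclass
class SensorData:
    device_in_view: bool        # electronic device detected in the headset camera view
    distance_m: float           # estimated distance between headset and device
    device_screen_on: bool

def augmented_display_criterion(s: SensorData, max_distance_m: float = 1.0) -> bool:
    # Criterion: the user is looking at a nearby, active device.
    return s.device_in_view and s.device_screen_on and s.distance_m <= max_distance_m

def maybe_present_augmented_ui(s: SensorData) -> str:
    if augmented_display_criterion(s):
        return "present augmented representation on the head-wearable display"
    return "keep the user interface on the electronic device only"

print(maybe_present_augmented_ui(SensorData(True, 0.4, True)))
```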
62.
SINGLE-STEP GRAPHICAL RENDERING OF TWO-DIMENSIONAL CONTENT FOR THREE-DIMENSIONAL SPACE
A method for single-step graphical rendering of two-dimensional (2D) content for three-dimensional (3D) display by a graphics system that includes augmented reality headsets and display devices is described. The method includes identifying 2D content of a first segment of a plurality of segments of an image frame for single-step rendering based on an overlap of the 2D content of the first segment with a primitive shape; receiving vector data for the 2D content of the first segment of the plurality of segments of the image frame; and rendering the 2D content of the first segment directly in 3D using the vector data for the 2D content. The single-step graphical rendering technique forgoes generation of a flat texture image corresponding to the 2D content that would otherwise be generated and forgoes resampling of the flat texture image for generation of corresponding 3D content.
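As a hedged illustration of the selection step described above, the sketch below tests a segment's 2D content against a primitive shape (here an axis-aligned rectangle) and chooses between the single-step path and the texture-then-resample path; the shapes and labels are assumptions, not the disclosed renderer.

```python
# Minimal sketch under assumptions: decide per segment whether its 2D content
# overlaps a primitive shape and therefore qualifies for direct single-step
# rendering from vector data instead of the texture-then-resample path.
from dataclasses import dataclass

@dataclass
class Rect:
    x0: float
    y0: float
    x1: float
    y1: float

def overlaps(a: Rect, b: Rect) -> bool:
    # Axis-aligned rectangle intersection test.
    return a.x0 < b.x1 and b.x0 < a.x1 and a.y0 < b.y1 and b.y0 < a.y1

def choose_render_path(segment_bounds: Rect, primitive: Rect) -> str:
    # Segments whose 2D content overlaps the primitive qualify for direct rendering.
    if overlaps(segment_bounds, primitive):
        return "single-step: render the vector data directly into 3D"
    return "fallback: rasterize to a flat texture, then resample into 3D"

print(choose_render_path(Rect(0, 0, 10, 10), Rect(5, 5, 20, 20)))
```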
Aspects of the present disclosure relate to more accurate and quicker localization of an artificial reality (XR) system in a real-world space (e.g., a room). If a user enters a room and localization fails, the system can locate a corner that was designated in a previous localization. The corner could have been manually selected by the user or could have been automatically recommended by the XR system. In some implementations, the user or system can identify two adjacent corners in the room for further accuracy. By later selecting the corner(s) for localization, the XR system can identify the saved room using depth sensors, as corners are more reliably identified and detected than the features used by other methods, such as walls.
Aspects of the present disclosure relate to instant boundary creation for a virtual reality (VR) experience. An artificial reality (XR) system, while rendering an augmented reality (AR) or mixed reality (MR) environment, can scan and gather visual characteristic data for a user's real-world environment in the background. When the XR system detects an intent to enter VR mode (e.g., by launching a VR application), the XR system can generate a recommendation for a boundary for the real-world space, which can include a type of interaction mode (e.g., moveable or stationary mode). Based on the user's response to the recommendation, the XR system can prompt the user to scan the real-world space further and/or manually adjust the boundary, while continuing to scan and gather visual characteristic data in the background. Some implementations can alternatively or additionally automatically relocalize a real-world space based on a generated boundary.
An optical element includes a substrate and a periodic relief structure formed on a surface of the substrate. The relief structure includes a series of alternating ridges and valleys, where the ridges are formed from a first optically anisotropic material and the valleys are filled with an optically isotropic material. The ridges have a pitch, defined by the distance between adjacent ridges, that is configured to diffract light incident on the relief structure.
A device may include a wearable frame. The device may include at least one contact transducer coupled to the wearable frame, the at least one contact transducer being configured to detect vibrations produced by vocalization of a user. The device may include a nose pad configured to attach to the wearable frame, the nose pad being dimensioned to cover the contact transducer in a manner configured to maintain contact between a nose of the user and the contact transducer. Various other devices, systems, and methods are also disclosed.
Aspects of the present disclosure are directed to generating media element(s) using a multimodal scene graph. A scene manager can process visual information, such as video, images, and/or a recorded artificial reality scene, and generate a multimodal scene graph that comprises components and metadata generated via the processing. The scene manager can utilize the multimodal scene graph to generate social media elements, such as images, video, and/or artificial reality scenes. For example, a video of a user can be converted to a multimodal scene graph, which can be used to generate one or more images (e.g., memes, animated images, stickers, etc.), such as an image that represents the user via an avatar of the user. This generated media can be shared with other social platform users, and the stored multimodal scene graph can be accessed by those users to generate variations of the media.
A liquid crystal display (LCD) of a near-eye display includes a first substrate, a second substrate, a plurality of photo spacers formed on the second substrate, a plurality of sub-spacers formed on the first substrate, and a liquid crystal material in regions between the first substrate and the second substrate. Each sub-spacer of the plurality of sub-spacers is configured to support a corresponding photo spacer of the plurality of photo spacers. A resolution of the LCD is greater than 800 pixels per inch. Each photo spacer of the plurality of photo spacers has a smaller lateral size and a larger height than the corresponding sub-spacer of the plurality of sub-spacers.
A light engine for a near-eye display includes an array of color pixels. Each color pixel of the array of color pixels includes a first input waveguide, a second input waveguide, a third input waveguide, and an output waveguide in a waveguide layer; a first light emitter configured to emit light of a first color and optically coupled to the first input waveguide; a second light emitter configured to emit light of a second color and optically coupled to the second input waveguide; a third light emitter configured to emit light of a third color and optically coupled to the third input waveguide; and one or more waveguide couplers configured to couple light from the first input waveguide, the second input waveguide, and the third input waveguide into the output waveguide to combine into light of a desired color and brightness.
A device may include a wearable frame. The device may include at least one contact transducer coupled to the wearable frame, the at least one contact transducer being configured to detect vibrations produced by vocalization of a user. The device may include a nose pad configured to attach to the wearable frame, the nose pad being dimensioned to cover the contact transducer in a manner configured to maintain contact between a nose of the user and the contact transducer. Various other devices, systems, and methods are also disclosed.
The disclosed system may include an antenna and an antenna matching network. The antenna matching network may include an aperture tuner configured to shift a frequency response of the antenna and an impedance tuner configured to dynamically change an amount of radiated power for the antenna. The antenna matching network may be positioned at least a specified minimum distance from the antenna according to various operating characteristics of the antenna. Various other apparatuses, wearable electronic devices, and methods of manufacturing are also disclosed.
H01Q 5/335 - Éléments rayonnants individuels ou couplés, chaque élément étant alimenté d’une façon non précisée utilisant des circuits ou des composants dont la réponse dépend de la fréquence, p. ex. des circuits bouchon ou des condensateurs au point d’alimentation, p. ex. aux fins d’adaptation d’impédance
H01Q 1/27 - Adaptation pour l'utilisation dans ou sur les corps mobiles
A display system includes a waveguide body extending from an input end to an output end and configured to guide light by total internal reflection from the input end to the output end, an in-coupling element located proximate to the input end and configured to direct image light into the waveguide body, an out-coupling element located proximate to the output end and configured to direct image light out of the waveguide body, and a reflective element located on a world side of the waveguide body, where the reflective element is configured to direct image light out-coupled from the waveguide body toward a user's eyes.
A method of grasp detection is described. The method includes capturing, via one or more image sensors of a wearable device, image data including a plurality of frames. The plurality of frames includes an object within a field of view of the one or more image sensors. The method further includes capturing, via one or more non-image sensors of the wearable device, sensor data including a sensed interaction between the object and a user of the wearable device, and identifying a grasp action performed by the user based on a combination of the sensor data and the image data.
G06F 3/01 - Dispositions d'entrée ou dispositions d'entrée et de sortie combinées pour l'interaction entre l'utilisateur et le calculateur
A61B 5/00 - Mesure servant à établir un diagnostic; Identification des individus
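A minimal sketch, assuming simple per-frame scores, of how image data and non-image sensor data might be combined to flag a grasp action for the abstract above; the overlap metric, thresholds, and consecutive-frame rule are illustrative, not the disclosed fusion method.

```python
# Sketch only (the fusion model is not specified in the abstract): combine an
# image-based hand/object overlap score with a non-image contact-style signal,
# and flag a grasp when both agree for several consecutive frames.
import numpy as np

def detect_grasp(hand_object_iou: np.ndarray,   # per-frame overlap score from image data
                 contact_signal: np.ndarray,    # per-frame non-image sensor reading
                 iou_thresh: float = 0.3,
                 contact_thresh: float = 0.7,
                 min_frames: int = 3) -> bool:
    agree = (hand_object_iou > iou_thresh) & (contact_signal > contact_thresh)
    # Require agreement over several consecutive frames to reject spurious contacts.
    run = 0
    for a in agree:
        run = run + 1 if a else 0
        if run >= min_frames:
            return True
    return False

print(detect_grasp(np.array([0.1, 0.4, 0.5, 0.6, 0.2]),
                   np.array([0.2, 0.8, 0.9, 0.95, 0.3])))  # True
```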
74.
ELECTRICAL STIMULATION FROM A WRISTBAND WITH AUGMENTED REALITY VISUAL EFFECTS FOR REMOTE HAPTIC SENSATIONS IN THE HAND, AND SYSTEMS AND METHODS OF USE THEREOF
A method of providing remote haptic feedback is described. The method includes applying, via a set of electrodes of a wearable device, a haptic signal to a first portion of a user. The haptic signal is configured to cause haptic feedback to be perceived at a second portion of the user that is distinct from the first portion of the user. The method further includes causing a visual indication of the haptic feedback at the second portion to be displayed to the user via a display of a head-wearable device.
Systems and methods for latency improvement may include a first endpoint which is configured to transmit, via one or more intermediary network devices to a second endpoint, first traffic generated by the first endpoint for receipt by the second endpoint. The first endpoint may receive, from a first intermediary network device of the one or more intermediary network devices, a packet generated by the first intermediary network device, the packet indicating congestion experienced by the first intermediary network device. The first endpoint may transmit, via the one or more intermediary network devices to the second endpoint, second traffic generated by the first endpoint according to the packet received from the first intermediary network device.
A system for network optimization using geo-data may access a set of geotagged data samples associated with a cell. The set of geotagged data samples may be obtained from an application associated with one or more user equipment associated with the cell, and the cell includes at least one antenna. The system may also determine a metric associated with the cell based on the set of geotagged data samples, generate one or more antenna adjustments based on the metric, and predict, based on the one or more antenna adjustments, a performance improvement associated with the cell.
H04W 16/28 - Structures des cellules utilisant l'orientation du faisceau
H04W 24/02 - Dispositions pour optimiser l'état de fonctionnement
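For the geo-data abstract above, this sketch computes a toy per-cell coverage metric from geotagged samples, proposes an antenna tilt adjustment, and predicts an improvement with an assumed linear sensitivity; the field names, thresholds, and sensitivity constant are all assumptions.

```python
# Illustrative sketch, not the disclosed algorithm: per-cell metric from
# geotagged app samples, a proposed antenna adjustment, and a toy prediction.
from dataclasses import dataclass
from statistics import mean

@dataclass
class GeoSample:
    lat: float
    lon: float
    rsrp_dbm: float        # signal strength reported by the app on the UE

def cell_metric(samples: list[GeoSample]) -> float:
    # Fraction of samples with acceptable coverage.
    return mean(1.0 if s.rsrp_dbm >= -110 else 0.0 for s in samples)

def propose_tilt_change(coverage: float) -> float:
    # Weak coverage -> uptilt slightly; otherwise leave the antenna as is.
    return -1.0 if coverage < 0.8 else 0.0   # degrees

def predict_improvement(coverage: float, tilt_change_deg: float) -> float:
    # Toy sensitivity: ~3 percentage points of coverage per degree of change.
    return min(1.0, coverage + 0.03 * abs(tilt_change_deg))

samples = [GeoSample(0, 0, -100), GeoSample(0, 0, -115), GeoSample(0, 0, -112)]
cov = cell_metric(samples)
tilt = propose_tilt_change(cov)
print(cov, tilt, predict_improvement(cov, tilt))
```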
77.
Wearable device with a band portion for determining gestures using differential neuromuscular sensors and an inertial measurement unit, and methods of use thereof
A wearable device is provided. The wearable device includes a band portion, an inertial measurement unit, and a plurality of sensors configured to contact the user while the wearable device is worn by the user. The plurality of sensors includes a first pair of sensors in the plurality of sensors forming a first differential sensor configured to contact the user above first neuromuscular pathways of the user, and a second pair of sensors in the plurality of sensors forming a second differential sensor configured to contact the user above second neuromuscular pathways of the user, distinct from the first neuromuscular pathways. The wearable device also includes instructions for obtaining neuromuscular signals from the first and second differential sensors and the inertial measurement unit, determining a gesture that the user intends to perform, and causing an operation to be performed at a computing device based on the gesture.
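The sketch below illustrates, with assumed signals and thresholds, how two differential sensor pairs and an inertial measurement unit could be combined into a coarse gesture decision; it is not the device's actual gesture model.

```python
# Minimal sketch under stated assumptions: form two differential channels from
# two sensor pairs, combine their envelopes with IMU motion energy, and map the
# result to a coarse gesture label. Features and thresholds are illustrative.
import numpy as np

def differential(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # Differential sensing rejects common-mode interference shared by the pair.
    return a - b

def classify_gesture(pair1: tuple[np.ndarray, np.ndarray],
                     pair2: tuple[np.ndarray, np.ndarray],
                     imu_accel: np.ndarray) -> str:
    e1 = np.mean(np.abs(differential(*pair1)))   # envelope of channel 1
    e2 = np.mean(np.abs(differential(*pair2)))   # envelope of channel 2
    motion = np.mean(np.abs(imu_accel))
    if motion > 2.0:
        return "arm movement (defer to IMU)"
    if e1 > 0.5 and e1 > 2 * e2:
        return "pinch"
    if e2 > 0.5 and e2 > 2 * e1:
        return "fist"
    return "rest"

t = np.linspace(0, 1, 200)
noise = 0.2 * np.sin(2 * np.pi * 60 * t)          # shared mains interference
print(classify_gesture((1.0 + noise, noise), (0.1 + noise, noise), np.zeros(200)))
```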
Systems, apparatuses and methods provide technology that identifies a first color for an area, adjusts a first color space of the first color to a second color space when a predetermined condition is met, identifies a second color of text that is to overlay the area and adjusts one or more of the first color or the second color so that a contrast ratio between the first and second colors meets a threshold contrast value. The technology further modifies an original image to include the text overlaid on the area to generate an adjusted image, where the one or more of the text and the area in the adjusted image includes the adjusted one or more of the first color or the second color, and transmits the adjusted image to a user device to be displayed.
G06T 7/90 - Détermination de caractéristiques de couleur
G06T 5/94 - Modification de la plage dynamique d'images ou de parties d'images basée sur les propriétés locales des images, p. ex. pour l'amélioration locale du contraste
G06T 7/70 - Détermination de la position ou de l'orientation des objets ou des caméras
G06V 10/56 - Extraction de caractéristiques d’images ou de vidéos relative à la couleur
G06V 10/764 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant la classification, p. ex. des objets vidéo
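Since the abstract above turns on a contrast ratio between text and background colors, the sketch below shows one standard contrast metric (the WCAG 2.x relative-luminance formula) and a toy adjustment loop; the abstract does not state which metric or adjustment strategy is used, so both are assumptions.

```python
# One standard way to measure and enforce a text/background contrast ratio
# (WCAG 2.x); the disclosed technology may use a different metric.
def _linearize(c: float) -> float:
    # sRGB channel (0..1) -> linear-light value.
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_linearize(v / 255.0) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def darken_until_contrast(fg, bg, threshold=4.5):
    # Toy adjustment: darken the text colour until the threshold is met.
    while contrast_ratio(fg, bg) < threshold and any(fg):
        fg = tuple(max(0, v - 10) for v in fg)
    return fg

print(contrast_ratio((255, 255, 255), (0, 0, 0)))   # 21.0
print(darken_until_contrast((200, 200, 200), (220, 220, 220)))
```

The 4.5 threshold here is the common WCAG AA value for body text; the abstract's "threshold contrast value" is left unspecified.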
79.
Systems for enabling quick interactions with a user's frequent contacts from within a watch-face user interface of a wrist-wearable device and methods of use thereof
Systems and methods are provided for enabling quick interactions with a user's frequent contacts from a wrist-wearable device. The method includes displaying, at a wrist-wearable device, a user interface including a plurality of representations of contacts. The plurality of representations of contacts includes a first representation of a first contact that is associated with an image-sharing application, and a second representation of a second contact that is associated with a fitness application that allows a user of the wrist-wearable device to compare performance of a physical activity against at least the second contact. The method includes, responsive to an input selecting the first representation of the first contact, causing activation of an imaging device for capturing an image. The method includes, responsive to another input, capturing an image using the imaging device and causing the image to be sent to the first contact via the image-sharing application.
The disclosed systems and methods may include (1) digitally verifying that a user of a social media application has attained an achievement, (2) in response to digitally verifying that the user has attained the achievement, providing the user with an option to digitally claim a graphical element corresponding to the achievement, where the option to digitally claim the graphical element is only provided to users who have been digitally verified as having attained the achievement, and (3) in response to receiving user input digitally claiming the graphical element, posting the graphical element to a digital user footprint corresponding to the user within a page of the social media application. Various other methods, systems, and computer-readable media are also disclosed.
G06Q 50/00 - Technologies de l’information et de la communication [TIC] spécialement adaptées à la mise en œuvre des procédés d’affaires d’un secteur particulier d’activité économique, p. ex. aux services d’utilité publique ou au tourisme
H04W 4/021 - Services concernant des domaines particuliers, p. ex. services de points d’intérêt, services sur place ou géorepères
81.
TECHNIQUES FOR EDITING THREE-DIMENSIONAL SCENES AND RELATED SYSTEMS AND METHODS
The present disclosure is generally directed to techniques for editing a portion of a 3D scene represented by a neural field model. Embodiments of the present disclosure may erase an object from a 3D scene by identifying the object in one or more images of the scene and generating mask regions around (e.g., covering) the object in these images. A neural field model that represents the scene without the object in it may be trained by relying on an image generative model configured for inpainting. When trained, this ‘background’ neural field model can be used to render the implicit background of light rays that pass through the region of 3D space represented by the mask regions, thereby producing different views of the scene with the object effectively erased from the scene.
An optical assembly is configured to receive visible scene light at the backside of the optical assembly and to direct the visible scene light on an optical path toward the eyeward side. The optical assembly includes a dimming element disposed on the optical path, where the dimming element includes a photochromic material that is configured to darken in response to exposure to a range of light wavelengths. A display element is disposed on the optical path between the eyeward side of the optical assembly and the dimming element. The display element is configured to direct visible display light toward the eyeward side and also to direct activation light to the dimming element, where the activation light is within the range of light wavelengths to activate a darkening of the photochromic material to dim the visible scene light.
G02F 1/01 - Dispositifs ou dispositions pour la commande de l'intensité, de la couleur, de la phase, de la polarisation ou de la direction de la lumière arrivant d'une source lumineuse indépendante, p. ex. commutation, ouverture de porte ou modulation; Optique non linéaire pour la commande de l'intensité, de la phase, de la polarisation ou de la couleur
G02F 1/157 - Association structurelle des cellules avec des dispositifs optiques, p. ex. des réflecteurs ou des dispositifs d’éclairage
G02F 1/163 - Fonctionnement des cellules électro-chromiques, p. ex. des cellules d'électrodéposition; Dispositions des circuits à cet effet
83.
REFLECTIVE HOLLOW SINGLET FOLDED OPTICAL LENS STRUCTURE
According to examples, an optical lens assembly for head-mounted display (HMD) devices may include an optical lens configuration with a first optical element and a second optical element. The first optical element and the second optical element may be affixed together along a peripheral edge to form a gap between the two optical elements and may compensate for chromatic dispersion. The optical lens configuration may also include a reflective polarizer layer, a quarter wave layer, and a semi-transparent mirror provided on selected surfaces of the optical elements. In some examples, the gap may be filled with air or an inert gas.
The disclosed computer-implemented method may include recognizing a last instruction of a layer from a subset of a plurality of layers of a first machine learning model during its execution. The method may also include identifying a request for executing a second machine learning model and performing a context switch to the second machine learning model after executing the last instruction of the layer. Various other methods, systems, and computer-readable media are also disclosed.
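As an illustrative sketch of the layer-boundary context switch described above: the executor below checks for a pending request for a second model only after the last instruction of a layer in a designated subset, and switches at that point. The layer/instruction representation and all names are assumptions.

```python
# Minimal sketch under assumptions: context-switch to a second model only at the
# boundary of designated layers of the first model.
from collections import deque

def run_with_layer_granular_switching(model_a_layers, switchable, pending_requests):
    """model_a_layers: list of lists of instructions (callables);
    switchable: set of layer indices at whose boundary a switch is allowed."""
    log = []
    for i, layer in enumerate(model_a_layers):
        for instr in layer:
            instr()                                  # execute every instruction of the layer
        log.append(f"finished layer {i} of model A")
        if i in switchable and pending_requests:     # after the layer's last instruction
            model_b = pending_requests.popleft()
            log.append(f"context switch -> {model_b}")
    return log

layers = [[lambda: None] * 2 for _ in range(4)]
print(run_with_layer_granular_switching(layers, switchable={1, 3},
                                        pending_requests=deque(["model B"])))
```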
The disclosed systems and methods may include (1) generating a caption for a spoken phrase in a video, (2) determining an accuracy rating for the caption, and (3) in response to determining that the accuracy rating is below an accuracy threshold, prompting a user to manually review the caption prior to publishing the caption. Various other methods, systems, and computer-readable media are also disclosed.
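A trivial sketch of the gating logic in the abstract above; the source of the accuracy rating (e.g., a speech-recognition confidence score) and the threshold value are assumptions.

```python
# Sketch only: route a generated caption to manual review when its accuracy
# rating falls below a threshold; otherwise publish it.
def route_caption(caption: str, accuracy: float, threshold: float = 0.85):
    if accuracy < threshold:
        return ("manual_review", caption)   # prompt a user to review before publishing
    return ("publish", caption)

print(route_caption("hello world", 0.62))
```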
A device worn on the human body can stimulate targeted mechanoreceptors from a distance beyond their receptive fields through use of modulated mechanical waves transmitted from an array of transducers which generate one or more specific subsurface strains at the target mechanoreceptors.
Systems, methods, and non-transitory computer readable media are configured to perform operations comprising determining a ranking of users participating in a video call; determining a selected number of videos to be included in a grid of videos associated with the video call based on the ranking; and generating the grid of videos based on a combination of the selected number of videos.
H04N 21/4402 - Traitement de flux élémentaires vidéo, p. ex. raccordement d'un clip vidéo récupéré d'un stockage local avec un flux vidéo en entrée ou rendu de scènes selon des graphes de scène du flux vidéo codé impliquant des opérations de reformatage de signaux vidéo pour la redistribution domestique, le stockage ou l'affichage en temps réel
H04N 21/4788 - Services additionnels, p. ex. affichage de l'identification d'un appelant téléphonique ou application d'achat communication avec d'autres utilisateurs, p. ex. discussion en ligne
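For the video-grid abstract above, the sketch below ranks participants by an assumed score, selects how many tiles to include, and lays them out in a near-square grid; the ranking signal and tile limit are illustrative only.

```python
# Sketch only: rank call participants by a supplied score (e.g., recent speaking
# time), pick how many tiles to show, and lay the selected videos out in a grid.
import math

def build_grid(participants: dict[str, float], max_tiles: int = 9):
    ranked = sorted(participants, key=participants.get, reverse=True)
    selected = ranked[:min(max_tiles, len(ranked))]
    cols = math.ceil(math.sqrt(len(selected)))
    return [selected[i:i + cols] for i in range(0, len(selected), cols)]

print(build_grid({"ana": 12.0, "bo": 3.5, "cy": 7.1, "di": 0.2, "ed": 9.9}))
# [['ana', 'ed', 'cy'], ['bo', 'di']]
```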
In this disclosure, a battery pack/module with an integrated pressure control device is introduced to provide more efficient packing in the system. This design reduces and decelerates fading/aging, impedance growth, and dimensional instability over the product's life. It also helps protect the battery from damage and improves reliability. This technology enables better industrial design, e.g., for XR and wearable systems.
H01M 50/242 - Montures; Boîtiers secondaires ou cadres; Bâtis, modules ou blocs; Dispositifs de suspension; Amortisseurs; Dispositifs de transport ou de manutention; Supports caractérisés par les propriétés physiques des boîtiers ou des bâtis, p. ex. dimensions adaptés pour protéger les batteries contre les vibrations, les collisions ou le gonflement
H01M 50/211 - Bâtis, modules ou blocs de multiples batteries ou de multiples cellules caractérisés par leur forme adaptés aux cellules en forme de poche
H01M 50/247 - Montures; Boîtiers secondaires ou cadres; Bâtis, modules ou blocs; Dispositifs de suspension; Amortisseurs; Dispositifs de transport ou de manutention; Supports spécialement adaptés aux dispositifs portatifs, p. ex. aux téléphones portables, aux ordinateurs, aux outils à main ou aux stimulateurs cardiaques
89.
Generating Targetable Remote Haptic Sensations using Through Body Mechanical Waves
A device worn on the human body can stimulate targeted mechanoreceptors from a distance beyond their receptive fields through use of modulated mechanical waves transmitted from an array of transducers which generate one or more specific subsurface strains at the target mechanoreceptors.
A capacitive sensor for use in a wearable or flexible input device is described. The capacitive sensor includes a dielectric knitted core comprising deformable polymer patches deposited on a top surface of the dielectric knitted core, conductive electrode layers with stretchable electrodes positioned on the top and bottom surfaces of the dielectric knitted core, and a conductive textile shielding layer on each of the conductive electrode layers. The deformable polymer patches stiffen regions of the dielectric knitted core corresponding to the stretchable electrodes to limit strain on the stretchable electrodes as a wearer of the input device moves and deforms the input device during use. Moreover, the conductive electrode layers and conductive textile shielding layers comprise openings around the stretchable electrodes that redistribute strain away from the stretchable electrodes. These features limit motion artifacts while maintaining the flexibility and comfort of the input device.
An example facial interface structure for an extended-reality headset comprises a front portion configured to couple with a head-facing portion of an extended-reality headset, and a rear portion configured to couple with a portion of the user's face, wherein the front portion is configured to move relative to the rear portion in response to a force applied by a strap around a head of a user. The example facial interface includes two side support structures coupling the front portion to the rear portion, where each of the side support structures is configured to be positioned on a different respective side of the user's face. In addition, movement of the front portion relative to the rear portion causes, via the two side support structures, the rear portion to deform, thereby conforming the rear portion to a profile of the user's head.
A head-wearable device comprising one or more displays and one or more programs. The one or more programs include instructions for, in response to a detection of an object within a field-of-view of the user, presenting a first XR augment overlaid over a first portion of the field-of-view of the user that is associated with the object. The one or more programs further include instructions for, in accordance with a determination that a first user eye movement is focused on the first XR augment for a first predetermined time, replacing the first XR augment with a second XR augment. The one or more programs further include instructions for, in accordance with a determination that a second user eye movement is focused outside a perimeter of the second XR augment for a second predetermined time, replacing the second XR augment with a third XR element.
The present disclosure relates to devices, methods, and systems for dynamically enhancing visual XR content. An example device is an XR device with a display device, a lens, and one or more programs. The programs of the XR device include instructions for determining a hardware pixels-per-degree (“HPPD”) metric for the XR device and for determining a render pixels-per-degree (“RPPD”) metric for visual XR content to be displayed by the XR device. The programs further include instructions for determining whether a difference between the HPPD metric and the RPPD metric satisfies a minimum difference threshold and, after determining that the threshold is satisfied, enhancing the visual XR content based on the difference between the metrics. For example, if the HPPD metric is greater than the RPPD metric, then enhancement may involve applying a sharpening filter to the visual XR content; otherwise, it may involve super-sampling the content.
G06T 3/4053 - Changement d'échelle d’images complètes ou de parties d’image, p. ex. agrandissement ou rétrécissement basé sur la super-résolution, c.-à-d. où la résolution de l’image obtenue est plus élevée que la résolution du capteur
G06T 5/73 - Élimination des flous; Accentuation de la netteté
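The sketch below illustrates the HPPD/RPPD decision described above with assumed values: if the difference clears the threshold, it either applies a small sharpening kernel or averages down the higher-resolution content; the kernel, threshold, and resampling factor are not taken from the disclosure.

```python
# Illustrative sketch: choose between sharpening and super-sampling based on the
# difference between hardware and render pixels-per-degree. Values are assumptions.
import numpy as np

SHARPEN = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=float)

def sharpen(img: np.ndarray) -> np.ndarray:
    out = img.copy()
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            out[y, x] = np.clip((SHARPEN * img[y-1:y+2, x-1:x+2]).sum(), 0, 255)
    return out

def supersample(img: np.ndarray, factor: int = 2) -> np.ndarray:
    # Average factor x factor blocks of higher-resolution content per display pixel.
    h, w = img.shape[0] // factor * factor, img.shape[1] // factor * factor
    return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def enhance(img: np.ndarray, hppd: float, rppd: float, min_diff: float = 2.0):
    if abs(hppd - rppd) < min_diff:
        return img                       # difference below threshold: leave content as is
    return sharpen(img) if hppd > rppd else supersample(img)

frame = np.full((4, 4), 128.0)
print(enhance(frame, hppd=30.0, rppd=20.0).shape)   # sharpened, same shape (4, 4)
print(enhance(frame, hppd=20.0, rppd=30.0).shape)   # averaged down -> (2, 2)
```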
94.
DETECTING OBJECT GRASPS WITH LOW-POWER CAMERAS AND SENSOR FUSION ON THE WRIST, AND SYSTEMS AND METHODS OF USE THEREOF
A method of grasp detection is described. The method includes capturing, via one or more image sensors of a wearable device, image data including a plurality of frames. The plurality of frames includes an object within a field of view of the one or more image sensors. The method further includes capturing, via one or more non-image sensors of the wearable device, sensor data including a sensed interaction between the object and a user of the wearable device, and identifying a grasp action performed by the user based on a combination of the sensor data and the image data.
G06V 10/143 - Détection ou éclairage à des longueurs d’onde différentes
G06V 10/82 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant les réseaux neuronaux
G06V 40/20 - Mouvements ou comportement, p. ex. reconnaissance des gestes
G06V 40/70 - Biométrique multimodale, p. ex. combinaison d’informations de modalités biométriques distinctes
95.
Electrical Stimulation From A Wristband With Augmented Reality Visual Effects For Remote Haptic Sensations In The Hand, And Systems And Methods Of Use Thereof
A method of providing remote haptic feedback is described. The method includes applying, via a set of electrodes of a wearable device, a haptic signal to a first portion of a user. The haptic signal is configured to cause haptic feedback to be perceived at a second portion of the user that is distinct from the first portion of the user. The method further includes causing a visual indication of the haptic feedback at the second portion to be displayed to the user via a display of a head-wearable device.
An optical element may include two or more zones to perform functions in a head-mounted display (HMD) device. An optical component may include a first optical zone characterized by a first sag profile to correct a refractive error of an eye of a user and a second optical zone characterized by a second sag profile to redirect a path of an illumination light beam. A transition zone located between the first optical zone and the second optical zone may provide a smooth transition between the first optical zone and the second optical zone.
The present disclosure is generally directed to techniques for editing a portion of a 3D scene represented by a neural field model. Examples of the present disclosure may erase an object from a 3D scene by identifying the object in one or more images of the scene and generating mask regions around (e.g., covering) the object in these images. A neural field model that represents the scene without the object in it may be trained by relying on an image generative model configured for inpainting. When trained, this 'background' neural field model can be used to render the implicit background of light rays that pass through the region of 3D space represented by the mask regions, thereby producing different views of the scene with the object effectively erased from the scene.
A capacitive sensor for use in a wearable or flexible input device is described. The capacitive sensor includes a dielectric knitted core comprising deformable polymer patches deposited on a top surface of the dielectric knitted core, conductive electrode layers with stretchable electrodes positioned on the top and bottom surfaces of the dielectric knitted core, and a conductive textile shielding layer on each of the conductive electrode layers. The deformable polymer patches stiffen regions of the dielectric knitted core corresponding to the stretchable electrodes to limit strain on the stretchable electrodes as a wearer of the input device moves and deforms the input device during use. Moreover, the conductive electrode layers and conductive textile shielding layers comprise openings around the stretchable electrodes that redistribute strain away from the stretchable electrodes. These features limit motion artifacts while maintaining the flexibility and comfort of the input device.
An optical element includes a layer having a structured surface formed from an organic solid crystal material. The structured surface includes a configuration of features that are arranged in a one-dimensional (1D) or two-dimensional (2D) periodic or non-periodic pattern such that the structured surface is configured to modify a property selected from the group consisting of transmission, reflection, and scattering of light interacting with the optical element.
G02B 1/00 - Éléments optiques caractérisés par la substance dont ils sont faits; Revêtements optiques pour éléments optiques
G02B 1/04 - Éléments optiques caractérisés par la substance dont ils sont faits; Revêtements optiques pour éléments optiques faits de substances organiques, p. ex. plastiques
100.
Systems and methods for improving channel estimation for 5G-NR PUSCH
A disclosed computer-implemented method may include (1) receiving a channel estimation signal comprising a plurality of frequency domain signals, each frequency domain signal corresponding to an antenna in a plurality of antennas, a port in a plurality of ports, and a demodulation reference signal (DMRS) sequence, (2) for each frequency domain signal, (A) converting the frequency domain signal to a time domain signal, and (B) determining a power level of the time domain signal, and (3) determining, for each port in the plurality of ports, (a) a sum of the power levels of the time domain signals corresponding to the port, each antenna in the plurality of antennas, and the DMRS sequence, and (b) based on the determined sum corresponding to the port, measuring (i) a pre-equalization signal-to-interference-plus-noise ratio, and (ii) a power delay profile (PDP). Various other systems and methods are also disclosed.
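As a numerical illustration of the flow in the abstract above, under assumptions about array shapes and scaling, the sketch below converts per-antenna, per-port frequency-domain estimates to the time domain, sums powers per port, and derives a simple power delay profile and a toy pre-equalization SINR; the late-tap interference split is an assumption, not the disclosed measurement.

```python
# Minimal numerical sketch, not the disclosed method: shapes, scaling, and the
# signal/interference split are assumptions made for illustration.
import numpy as np

rng = np.random.default_rng(0)
num_ants, num_ports, num_sc = 4, 2, 64          # antennas, ports, subcarriers
freq_est = (rng.standard_normal((num_ants, num_ports, num_sc))
            + 1j * rng.standard_normal((num_ants, num_ports, num_sc)))

# (A)/(B): frequency domain -> time domain, then per-signal power.
time_est = np.fft.ifft(freq_est, axis=-1)
power = np.abs(time_est) ** 2                    # (ants, ports, taps)

# (a): per-port sum of time-domain powers across antennas (and the DMRS sequence).
per_port_power = power.sum(axis=(0, 2))          # shape (num_ports,)

# (b)(ii): power delay profile per port, averaged over antennas.
pdp = power.mean(axis=0)                         # shape (num_ports, taps)

# (b)(i): toy pre-equalization SINR, treating late taps as interference plus noise.
signal = pdp[:, :8].sum(axis=1)
interference_plus_noise = pdp[:, 8:].sum(axis=1) + 1e-9
pre_eq_sinr_db = 10 * np.log10(signal / interference_plus_noise)
print(per_port_power.shape, pdp.shape, pre_eq_sinr_db)
```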