A display system aligns the location of its exit pupil with the location of a viewer's pupil by changing the location of the portion of a light source that outputs light. The light source may include an array of pixels that output light, thereby allowing an image to be displayed on the light source. The display system includes a camera that captures image(s) of the eye and negatives of the eye image(s) are displayed by the light source. In the negative image, the dark pupil of the eye is a bright spot which, when displayed by the light source, defines the exit pupil of the display system, such that image content may be presented by modulating the light source. The location of the pupil of the eye may be tracked by capturing the images of the eye.
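A rough illustration of the negative-image idea above: the sketch below (a hypothetical `exit_pupil_mask` helper, NumPy only, not the patented implementation) inverts a grayscale eye image so the dark pupil becomes the brightest region, then keeps only those pixels as the emissive portion of the light source.

```python
import numpy as np

def exit_pupil_mask(eye_image, threshold=0.8):
    """Invert a grayscale eye image so the dark pupil becomes the brightest
    region, then keep only the brightest pixels: these are the pixels of the
    light source that should emit, defining the display's exit pupil."""
    negative = 255 - eye_image                        # dark pupil -> bright spot
    return negative >= threshold * negative.max()     # bright-spot mask

eye_image = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
source_pixels = exit_pupil_mask(eye_image)            # emissive region mask
```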
Disclosed herein are systems and methods for distributed computing and/or networking for mixed reality systems. A method may include capturing an image via a camera of a head-wearable device. Inertial data may be captured via an inertial measurement unit of the head-wearable device. A position of the head-wearable device can be estimated based on the image and the inertial data via one or more processors of the head-wearable device. The image can be transmitted to a remote server. A neural network can be trained based on the image via the remote server. A trained neural network can be transmitted to the head-wearable device.
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
G06V 40/18 - Eye characteristics, e.g. of the iris
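A minimal sketch of the device/server split described in the abstract above (before the classification codes): pose estimation runs on the head-wearable device, images are uploaded, and a trained network comes back. `estimate_pose` and `RemoteServer` are hypothetical stand-ins, not APIs from the disclosure.

```python
import numpy as np

def estimate_pose(image, imu):
    """On-device stub: a real system would fuse visual features and inertial
    data into a 6-DoF pose; here we just return a placeholder 6-vector."""
    return np.zeros(6)

class RemoteServer:
    """Stand-in for the remote server that accumulates uploaded images and
    trains a network on them."""
    def __init__(self):
        self.images = []

    def upload(self, image):
        self.images.append(image)

    def train(self):
        # Real training would happen here; we return a dummy "trained
        # network" (a weight array) for illustration.
        return np.mean(self.images, axis=0)

server = RemoteServer()
frame, imu = np.zeros((480, 640)), np.zeros(6)
pose = estimate_pose(frame, imu)   # runs on the head-wearable device
server.upload(frame)               # image transmitted to the remote server
weights = server.train()           # trained network is sent back to the device
```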
A virtual reality (VR) and/or augmented reality (AR) display system is configured to control a display using control information that is embedded in or otherwise included with imagery data to be presented through the display. The control information can indicate depth plane(s) and/or color plane(s) to be used to present the imagery data, depth plane(s) and/or color plane(s) to be activated or inactivated, shift(s) of at least a portion of the imagery data (e.g., one or more pixels) laterally within a depth plane and/or longitudinally between depth planes, and/or other suitable controls.
G06T 7/579 - Depth or shape recovery from multiple images from motion
G06T 19/00 - Manipulating 3D models or images for computer graphics
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
G09G 3/20 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
H04N 13/395 - Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume with depth sampling, i.e. the volume being constructed from a stack or sequence of 2D image planes
A waveguide stack having color-selective regions on one or more waveguides. The color-selective regions are configured to absorb incident light of a first wavelength range in such a way as to reduce or prevent the incident light of the first wavelength range from coupling into a waveguide configured to transmit a light of a second wavelength range.
Systems and methods of generating a three-dimensional (3D) reconstruction of a scene or environment surrounding a user of a spatial computing system, such as a virtual reality, augmented reality or mixed reality system, using only multiview images, without the need for depth sensors or depth data from sensors. Features are extracted from a sequence of frames of RGB images and back-projected using known camera intrinsics and extrinsics into a 3D voxel volume, wherein each pixel of the voxel volume is mapped to a ray in the voxel volume. The back-projected features are fused into the 3D voxel volume. The 3D voxel volume is passed through a 3D convolutional neural network to refine the features and regress truncated signed distance function values at each voxel of the 3D voxel volume.
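A minimal sketch of the back-projection step, assuming a pinhole intrinsic matrix `K` and an extrinsic `cam_T_world` (both hypothetical names): each voxel center is projected into the image and takes the feature of the pixel whose ray it lies on.

```python
import numpy as np

def backproject_features(feat, K, cam_T_world, origin, voxel_size, dims):
    """Fill a voxel volume with 2D image features: each voxel center is
    projected through the known intrinsics (K) and extrinsics (cam_T_world)
    and takes the feature of the pixel whose ray it lies on."""
    H, W, C = feat.shape
    volume = np.zeros(dims + (C,))
    for idx in np.ndindex(dims):
        xyz_w = origin + voxel_size * np.array(idx)
        xyz_c = cam_T_world[:3, :3] @ xyz_w + cam_T_world[:3, 3]
        if xyz_c[2] <= 0:                      # voxel behind the camera
            continue
        u, v, w = K @ xyz_c
        u, v = int(u / w), int(v / w)
        if 0 <= u < W and 0 <= v < H:
            volume[idx] = feat[v, u]           # feature along this pixel ray
    return volume

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1.0]])
vol = backproject_features(np.random.rand(480, 640, 8), K, np.eye(4),
                           origin=np.array([-1.0, -1.0, 0.5]),
                           voxel_size=0.25, dims=(8, 8, 8))
# Volumes fused across frames would then be passed through a 3D CNN that
# regresses a truncated signed distance value per voxel.
```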
Images perceived to be substantially full color or multi-colored may be formed using component color images that are distributed in unequal numbers across a plurality of depth planes. The distribution of component color images across depth planes may vary based on color. In some embodiments, a display system includes a stack of waveguides that each output light of a particular color, with some colors having fewer numbers of associated waveguides than other colors. The waveguide stack may include multiple pluralities (e.g., first and second pluralities) of waveguides, each configured to produce an image by outputting light corresponding to a particular color. The total number of waveguides in the second plurality of waveguides may be less than the total number of waveguides in the first plurality of waveguides.
An eyepiece includes an optical waveguide, a transmissive input coupler at a first end of the optical waveguide, an output coupler at a second end of the optical waveguide, and a polymeric color-absorbing region along a portion of the optical waveguide between the transmissive input coupler and the output coupler. The transmissive input coupler is configured to couple incident visible light to the optical waveguide, and the color-absorbing region is configured to absorb a component of the visible light as the visible light propagates through the optical waveguide.
G02B 1/118 - Anti-reflection coatings having sub-optical wavelength surface structures designed to provide an enhanced transmittance, e.g. moth-eye structures
Systems and methods of presenting an output audio signal to a listener located at a first location in a virtual environment are disclosed. According to embodiments of a method, an input audio signal is received. A first intermediate audio signal corresponding to the input audio signal is determined, based on a location of the sound source in the virtual environment, and the first intermediate audio signal is associated with a first bus. A second intermediate audio signal is determined. The second intermediate audio signal corresponds to a reverberation of the input audio signal in the virtual environment. The second intermediate audio signal is determined based on a location of the sound source, and further based on an acoustic property of the virtual environment. The second intermediate audio signal is associated with a second bus. The output audio signal is presented to the listener via the first and second buses.
G10K 15/10 - Arrangements for producing a reverberation or echo sound using time-delay networks comprising electromechanical or electro-acoustic devices
H04R 3/04 - Circuits for transducers for correcting frequency response
H04R 3/12 - Circuits for transducers for distributing signals to two or more loudspeakers
H04R 5/033 - Headphones for stereophonic communication
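A toy two-bus rendering of the method in the abstract above: the direct bus applies a simple location-dependent (1/distance) gain, and the reverb bus applies a noise tail that decays by 60 dB over the room's RT60. All constants and the decay model are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def render_two_buses(dry, sr, distance_m, rt60_s):
    """Bus 1: the direct signal with a simple distance gain. Bus 2: a reverb
    tail shaped by the room's RT60, decaying 60 dB over rt60_s seconds.
    The listener hears the sum of the two buses."""
    direct = dry / max(distance_m, 1.0)
    t = np.arange(int(sr * rt60_s)) / sr
    tail = np.random.randn(t.size) * 10.0 ** (-3.0 * t / rt60_s)
    reverb = np.convolve(dry, tail)[: dry.size] * 0.1
    return direct + reverb

sr = 8000
out = render_two_buses(np.random.randn(sr // 2), sr, distance_m=2.0, rt60_s=0.3)
```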
A method of operating a virtual image generation system comprises allowing an end user to interact with a three-dimensional environment comprising at least one virtual object, presenting a stimulus to the end user in the context of the three-dimensional environment, sensing at least one biometric parameter of the end user in response to the presentation of the stimulus, generating biometric data for each of the sensed biometric parameter(s), determining whether the end user is in at least one specific emotional state based on the biometric data for each of the sensed biometric parameter(s), and performing an action discernible to the end user to facilitate a current objective based at least partially on whether the end user is determined to be in the specific emotional state(s).
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
A63F 13/21 - Input arrangements for video game devices characterised by their sensors, purposes or types
A63F 13/212 - Input arrangements for video game devices characterised by their sensors, purposes or types using sensors worn by the player, e.g. for measuring heart beat or leg activity
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
Systems and methods for enhanced depth determination using projection spots. An example method includes obtaining images of a real-world object, the images being obtained from image sensors positioned about the real-world object, and the images depicting projection spots projected onto the real-world object via projectors positioned about the real-world object. A projection spot map is accessed, the projection spot map including information indicative of real-world locations of projection spots based on locations of the projection spots in the obtained images. Location information is assigned to the projection spots based on the projection spot map. Generation of a three-dimensional representation of the real-world object is caused.
A method for determining a focal point depth of a user of a three-dimensional (“3D”) display device includes tracking a first gaze path of the user. The method also includes analyzing 3D data to identify one or more virtual objects along the first gaze path of the user. The method further includes, when only one virtual object intersects the first gaze path of the user, identifying a depth of the only one virtual object as the focal point depth of the user.
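A minimal sketch of the single-intersection case in the abstract above, assuming virtual objects are approximated by sphere bounds (`focal_depth` and its geometry are illustrative assumptions):

```python
import numpy as np

def focal_depth(origin, direction, objects):
    """Intersect the gaze ray with sphere bounds of virtual objects; when
    exactly one object is hit, its depth along the ray is the focal depth."""
    direction = direction / np.linalg.norm(direction)
    hits = []
    for name, (center, radius) in objects.items():
        oc = center - origin
        along = oc @ direction                       # depth of closest approach
        if along > 0 and np.linalg.norm(oc - along * direction) <= radius:
            hits.append((name, along))
    return hits[0][1] if len(hits) == 1 else None    # None: 0 or >1 objects

depth = focal_depth(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                    {"menu": (np.array([0.0, 0.0, 2.0]), 0.3)})
```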
Enhanced eye-tracking techniques for augmented or virtual reality display systems. An example method includes obtaining an image of an eye of a user of a wearable system, the image depicting glints on the eye caused by respective light emitters, wherein the image is a low dynamic range (LDR) image; generating a high dynamic range (HDR) image via computation of a forward pass of a machine learning model using the image; determining location information associated with the glints as depicted in the HDR image, wherein the location information is usable to inform an eye pose of the eye.
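A compact sketch of the pipeline in the abstract above: one forward pass of an LDR-to-HDR model, then glint localization as the brightest HDR pixels. The identity function stands in for the trained network; `glint_locations` is a hypothetical helper, not the disclosed method.

```python
import numpy as np

def glint_locations(ldr_image, hdr_model, n_glints=4):
    """Run the stand-in LDR-to-HDR model (one forward pass), then take the
    n brightest HDR pixels as glint locations; a real system would relate
    these to the emitter geometry to estimate an eye pose."""
    hdr = hdr_model(ldr_image)                        # forward pass
    flat = np.argsort(hdr, axis=None)[-n_glints:]     # brightest pixels
    return np.stack(np.unravel_index(flat, hdr.shape), axis=1)

# The identity function stands in for the trained network here.
glints = glint_locations(np.random.rand(120, 160), hdr_model=lambda x: x)
```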
Devices are described for high-accuracy displacement of tools. In particular, embodiments provide a device for adjusting a position of a tool, such as a camera. The device includes a threaded shaft having a first end, a second end, and a shaft axis extending from the first end to the second end, and a motor that actuates the threaded shaft to move in a direction of the shaft axis. In some examples, the motor is operatively coupled to the threaded shaft. The device includes a carriage coupled to the camera, and a bearing assembly coupled to the threaded shaft and the carriage. In some examples, the bearing assembly permits a movement of the carriage with respect to the threaded shaft. The movement of the carriage allows the position of the camera to be adjusted.
Systems and methods are provided for interpolation of disparate inputs. A radial basis function neural network (RBFNN) may be used to interpolate the pose of a digital character. Input parameters to the RBFNN may be separated by data type (e.g. angular vs. linear) and manipulated within the RBFNN by distance functions specific to the data type (e.g. use an angular distance function for the angular input data). A weight may be applied to each distance to compensate for input data representing different variables (e.g. clavicle vs. shoulder). The output parameters of the RBFNN may be a set of independent values, which may be combined into combination values (e.g. representing x, y, z, w angular value in SO(3) space).
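A minimal sketch of the type-specific distances described above: quaternion (angular) inputs use an angular distance, linear inputs use Euclidean distance, and per-group weights rescale each contribution before a Gaussian kernel turns the summed distance into a blend weight. Function names and constants are assumptions for illustration.

```python
import numpy as np

def angular_distance(a, b):
    """Distance between unit quaternions (angular data type), in radians."""
    return 2 * np.arccos(np.clip(abs(np.dot(a, b)), 0.0, 1.0))

def linear_distance(a, b):
    """Euclidean distance for linear data types (e.g., positions)."""
    return np.linalg.norm(np.asarray(a) - np.asarray(b))

def rbf_blend_weights(query, examples, dist_fns, group_weights, sigma=0.5):
    """For each stored example pose, sum per-group distances (each group using
    the distance function for its data type, scaled by its per-variable
    weight), then pass through a Gaussian kernel and normalize."""
    raw = []
    for ex in examples:
        d = sum(w * fn(q, e)
                for q, e, fn, w in zip(query, ex, dist_fns, group_weights))
        raw.append(np.exp(-(d / sigma) ** 2))
    raw = np.array(raw)
    return raw / raw.sum()

# One angular group (a quaternion) and one linear group (a position):
query = [np.array([1.0, 0.0, 0.0, 0.0]), np.array([0.1, 0.0, 0.0])]
examples = [[np.array([1.0, 0.0, 0.0, 0.0]), np.zeros(3)],
            [np.array([0.0, 1.0, 0.0, 0.0]), np.ones(3)]]
weights = rbf_blend_weights(query, examples,
                            [angular_distance, linear_distance], [1.0, 0.5])
```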
A method of presenting an audio signal to a user of a mixed reality environment is disclosed, the method comprising the steps of detecting a first audio signal in the mixed reality environment, where the first audio signal is a real audio signal; identifying a virtual object intersected by the first audio signal in the mixed reality environment; identifying a listener coordinate associated with the user; determining, using the virtual object and the listener coordinate, a transfer function; applying the transfer function to the first audio signal to produce a second audio signal; and presenting, to the user, the second audio signal.
Examples of the disclosure describe systems and methods for estimating acoustic properties of an environment. In an example method, a first audio signal is received via a microphone of a wearable head device. An envelope of the first audio signal is determined, and a first reverberation time is estimated based on the envelope of the first audio signal. A difference between the first reverberation time and a second reverberation time is determined. A change in the environment is determined based on the difference between the first reverberation time and the second reverberation time. A second audio signal is presented via a speaker of a wearable head device, wherein the second audio signal is based on the second reverberation time.
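A crude stand-in for the envelope-based reverberation-time estimate in the abstract above: fit a line to the log of the smoothed amplitude envelope and report the time a 60 dB drop would take. The smoothing window and synthetic test signal are assumptions.

```python
import numpy as np

def estimate_rt60(signal, sr):
    """Fit a line to the log of the signal's smoothed amplitude envelope and
    report the time a 60 dB drop would take at the fitted decay rate."""
    env = np.convolve(np.abs(signal), np.ones(256) / 256, mode="same")
    decay = np.clip(env[int(np.argmax(env)):], 1e-6, None)  # from the peak on
    t = np.arange(decay.size) / sr
    slope = np.polyfit(t, 20 * np.log10(decay), 1)[0]       # dB per second
    return -60.0 / slope if slope < 0 else float("inf")

# Synthetic test: exponentially decaying noise with an RT60 of about 1 s.
sr = 16000
t = np.arange(sr) / sr
rt60 = estimate_rt60(np.exp(-6.9 * t) * np.random.randn(sr), sr)
# A change in the environment would be flagged when successive estimates
# differ by more than some threshold.
```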
An augmented reality device includes a projector, projector optics optically coupled to the projector, and a substrate structure including a substrate having an incident surface and an opposing exit surface and a first variable thickness film coupled to the incident surface. The substrate structure can also include a first combined pupil expander coupled to the first variable thickness film, a second variable thickness film coupled to the opposing exit surface, an incoupling grating coupled to the opposing exit surface, and a second combined pupil expander coupled to the opposing exit surface.
G02B 6/13 - Integrated optical circuits characterised by the manufacturing method
F21V 8/00 - Use of light guides, e.g. fibre optic devices, in lighting devices or systems
G02B 6/12 - Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings, of the optical waveguide type of the integrated circuit kind
G02B 6/122 - Basic optical elements, e.g. light-guiding paths
G02B 26/08 - Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
18. APPARATUS FOR OPTICAL SEE-THROUGH HEAD MOUNTED DISPLAY WITH MUTUAL OCCLUSION AND OPAQUENESS CONTROL
The present invention comprises a compact optical see-through head-mounted display capable of combining a see-through image path with a virtual image path such that the opaqueness of the see-through image path can be modulated and the virtual image occludes parts of the see-through image and vice versa.
G02B 27/14 - Beam splitting or combining systems operating by reflection only
G02B 27/28 - Optical systems or apparatus not provided for by any of the groups , for polarising
G03B 37/02 - Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe, with scanning movement of lens or camera
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04N 23/45 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
H04N 23/698 - Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
19. CROSS REALITY SYSTEM WITH QUALITY INFORMATION ABOUT PERSISTENT COORDINATE FRAMES
A cross reality system that provides an immersive user experience shared by multiple user devices by providing quality information about a shared map. The quality information may be specific to individual user devices rendering virtual content specified with respect to the shared map. The quality information may be provided for persistent coordinate frames (PCFs) in the map. The quality information about a PCF may indicate positional uncertainty of virtual content, specified with respect to the PCF, when rendered on the user device. The quality information may be computed as upper bounding errors by determining error statistics for one or more steps in a process of specifying position with respect to the PCF or transforming that positional expression to a coordinate frame local to the device for rendering the virtual content. Applications running on individual user devices may adjust the rendering of virtual content based on the quality information about the shared map.
This disclosure relates to the use of variable-pitch light-emitting devices for display applications, including for displays in augmented reality, virtual reality, and mixed reality environments. In particular, it relates to small (e.g., micron-size) light emitting devices (e.g., micro-LEDs) of variable pitch to provide the advantages, e.g., of compactness, manufacturability, color rendition, as well as computational and power savings. Systems and methods for emitting multiple lights by multiple panels where a pitch of one panel is different than pitch(es) of other panels are disclosed. Each panel may comprise a respective array of light emitters. The multiple lights may be combined by a combiner.
H01L 25/075 - Assemblies consisting of a plurality of individual semiconductor or other solid-state devices, all the devices being of a type provided for in a single subclass, e.g. assemblies of rectifier diodes, the devices not having separate containers, the devices being of a type provided for in group
A head-worn sound reproduction device is provided in the form of left and right earphones, which can either be clipped to each ear or mounted on other headgear. The earphones deliver high fidelity audio to a user's eardrums from near-ear range, in a lightweight form factor that is fully “non-blocking” (allows coupling in and natural hearing of ambient sound). Each earphone has a woofer component that produces bass frequencies, and a tweeter component that produces treble frequencies. The woofer outputs the bass frequencies from a position close to the ear canal, while the tweeter outputs treble frequencies from a position that is either close to the ear canal or further away. In certain embodiments, the tweeter is significantly further from the ear canal than the woofer, leading to a more expansive perceived “sound stage”, but still with a “pure” listening experience.
H04R 1/26 - Spatial arrangement of separate transducers responsive to two or more frequency ranges
H04R 1/28 - Transducer mountings or enclosures designed for specific frequency response; Transducer enclosures modified by provision of mechanical or acoustic impedances, e.g. resonator, damping means
H04R 1/34 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by using a single transducer with sound reflecting, diffracting, directing or guiding means
A display system is configured to direct a plurality of parallactically-disparate intra-pupil images into a viewer's eye. The parallactically-disparate intra-pupil images provide different parallax views of a virtual object, and impinge on the pupil from different angles. The wavefronts of light forming the images approximate a continuous divergent wavefront and provide selectable accommodation cues for the user, depending on the amount of parallax disparity between the intra-pupil images. The images may be formed by an emissive micro-display. Each pixel formed by the micro-display may be formed by one of a group of light emitters, which are at different locations such that the emitted light takes different paths to the eye to provide different amounts of parallax disparity.
G02B 30/24 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer’s left and right eyes of the stereoscopic type involving temporal multiplexing, e.g. using sequentially activated left and right shutters
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
The disclosure relates to systems and methods for authorization of a user in a spatial 3D environment. The systems and methods can include receiving a request from an application executing on a mixed reality display system to authorize the user with a web service, displaying to the user an authorization window configured to accept user input associated with authorization by the web service and to prevent the application or other applications from receiving the user input, communicating the user input to the web service, receiving an access token from the web service, in which the access token is indicative of successful authorization by the web service, and communicating the access token to the application for authorization of the user. The authorization window can be a modal window displayed in an immersive mode by the mixed reality display system.
Diffraction gratings provide optical elements, e.g., in a head-mountable display system, that can affect light, for example by incoupling light into a waveguide, outcoupling light out of a waveguide, and/or multiplying light propagating in a waveguide. The diffraction gratings may be configured to have reduced polarization sensitivity such that light of different polarization states, or polarized and unpolarized light, is incoupled, outcoupled, multiplied, or otherwise affected with a similar level of efficiency. The reduced polarization sensitivity may be achieved through provision of a transmissive layer and a metallic layer on one or more gratings. A diffraction grating may comprise a blazed grating or other suitable configuration.
Systems and methods of disabling user control interfaces during attachment of a wearable electronic device to a portion of a user's clothing or accessory are disclosed. The wearable electronic device can include inertial measurement units (IMUs), optical sources, optical sensors, or electromagnetic sensors. Based on the information provided by these components, an electrical processing and control system can determine that the electronic device is being grasped and picked up for attachment, or is in the process of being attached, to a portion of a user's clothing or accessory, and can temporarily disable one or more user control interfaces disposed on the outside of the wearable electronic device.
A wearable display system includes a fiber scanner including an optical fiber and a scanning mechanism configured to scan a tip of the optical fiber along an emission trajectory defining an optical axis. The wearable display system also includes an eyepiece positioned in front of the tip of the optical fiber and including a planar waveguide, an incoupling diffractive optical element (DOE) coupled to the planar waveguide, and an outcoupling DOE coupled to the planar waveguide. The wearable display system further includes a collimating optical element configured to receive light reflected by the incoupling DOE and collimate and reflect light toward the eyepiece.
An augmented reality head mounted display system includes an eyepiece having a transparent emissive display. The eyepiece and transparent emissive display are positioned in an optical path of a user's eye in order to transmit light into the user's eye to form images. Due to the transparent nature of the display, the user can see an outside environment through the transparent emissive display. The transparent emissive display comprises a plurality of emitters configured to emit light into the eye of the user.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
G09G 3/3208 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix using controlled light sources using electroluminescent panels semiconductive, e.g. using light-emitting diodes [LED] organic, e.g. using organic light-emitting diodes [OLED]
An eyepiece for projecting an image light field to an eye of a viewer for forming an image of virtual content includes a waveguide, a light source configured to deliver a light beam to be incident on the waveguide, a controller coupled to the light source and configured to modulate an intensity of the light beam in a plurality of time slots, a dynamic input coupling grating (ICG) configured to, for each time slot, diffract a respective portion of the light beam into the waveguide at a respective total internal reflection (TIR) angle corresponding to a respective field angle, and an outcoupling diffractive optical element (DOE) configured to diffract each respective portion of the light beam out of the waveguide toward the eye at the respective field angle, thereby projecting the light field to the eye of the viewer.
A multiple degree of freedom hinge system is provided, which is particularly well adapted for eyewear, such as spatial computing headsets. In the context of such spatial computing headsets having an optics assembly supported by opposing temple arms, the hinge system provides protection against over-extension of the temple arms or extreme deflections that may otherwise arise from undesirable torsional loading of the temple arms. The hinge systems also allow the temple arms to splay outwardly to enable proper fit and enhanced user comfort.
Systems include three optical elements arranged along an optical axis each having a different cylinder axis and a variable cylinder refractive power. Collectively, the three elements form a compound optical element having an overall spherical refractive power (SPH), cylinder refractive power (CYL), and cylinder axis (Axis) that can be varied according to a prescription (Rx).
A method for placing content in an augmented reality system. A notification is received regarding availability of new content to display in the augmented reality system. A confirmation is received that indicates acceptance of the new content. Three-dimensional information that describes the physical environment is provided to an external computing device, to enable the external computing device to be used for selecting an assigned location in the physical environment for the new content. Location information that indicates the assigned location is received from the external computing device. Based on the location information, a display location is determined on a display system of the augmented reality system at which to display the new content so that the new content appears to the user as an overlay at the assigned location in the physical environment. The new content is displayed on the display system at the display location.
Systems and methods for reducing error from noisy data received from a high frequency sensor by fusing received input with data received from a low frequency sensor by collecting a first set of dynamic inputs from the high frequency sensor, collecting a correction input point from the low frequency sensor, and adjusting a propagation path of a second set of dynamic inputs from the high frequency sensor based on the correction input point either by full translation to the correction input point or dampened approach towards the correction input point.
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/0483 - Interaction with page-structured environments, e.g. book metaphor
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06T 3/18 - Image warping, e.g. rearranging pixels individually
G06T 7/277 - Analysis of motion involving stochastic approaches, e.g. using Kalman filters
G06T 19/00 - Manipulating 3D models or images for computer graphics
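A minimal sketch of the correction scheme in the sensor-fusion abstract above (before the classification codes): the path is dead-reckoned from high-frequency deltas, then re-anchored at the low-frequency fix, either fully or partially. `corrected_path` and `alpha` are hypothetical names for illustration.

```python
import numpy as np

def corrected_path(deltas, correction_point, alpha=1.0):
    """Dead-reckon a path from noisy high-frequency deltas, then re-anchor it
    using a low-frequency fix: alpha=1.0 translates the path fully onto the
    correction point; alpha<1.0 is the dampened approach."""
    path = np.cumsum(deltas, axis=0)           # propagation from noisy deltas
    drift = correction_point - path[-1]        # error seen at the fix
    return path + alpha * drift                # shift the propagation path

deltas = np.random.randn(100, 3) * 0.01        # e.g., high-frequency IMU steps
fix = np.array([0.5, 0.0, 0.0])                # e.g., low-frequency vision fix
snapped = corrected_path(deltas, fix, alpha=1.0)   # full translation
eased = corrected_path(deltas, fix, alpha=0.25)    # dampened approach
```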
A display system includes a waveguide assembly having a plurality of waveguides, each waveguide associated with an in-coupling optical element configured to in-couple light into the associated waveguide. A projector outputs light from one or more spatially-separated pupils, and at least one of the pupils outputs light of two different ranges of wavelengths. The in-coupling optical elements for two or more waveguides are inline, e.g. vertically aligned, with each other so that the in-coupling optical elements are in the path of light of the two different ranges of wavelengths. The in-coupling optical element of a first waveguide selectively in-couples light of one range of wavelengths into the waveguide, while the in-coupling optical element of a second waveguide selectively in-couples light of another range of wavelengths. Absorptive color filters are provided forward of an in-coupling optical element to limit the propagation of undesired wavelengths of light to that in-coupling optical element.
G02B 6/10 - Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings, of the optical waveguide type
Disclosed herein are systems and methods for sharing and synchronizing virtual content. A method may include receiving, from a host application via a wearable device comprising a transmissive display, a first data package comprising first data; identifying virtual content based on the first data; presenting a view of the virtual content via the transmissive display; receiving, via the wearable device, first user input directed at the virtual content; generating second data based on the first data and the first user input; sending, to the host application via the wearable device, a second data package comprising the second data, wherein the host application is configured to execute via one or more processors of a computer system remote to the wearable device and in communication with the wearable device.
G06F 30/12 - Geometric CAD characterised by design entry means specially adapted for CAD, e.g. graphical user interfaces [GUI] specially adapted for CAD
This disclosure is related to systems and methods for rendering audio for a mixed reality environment. Methods according to embodiments of this disclosure include receiving an input audio signal, via a wearable device in communication with a mixed reality environment, the input audio signal corresponding to a sound source originating from a real environment. In some embodiments, the system can determine one or more acoustic properties associated with the mixed reality environment. In some embodiments, the system can determine a signal modification parameter based on the one or more acoustic properties associated with the mixed reality environment. In some embodiments, the system can apply the signal modification parameter to the input audio signal to determine a second audio signal. The system can present the second audio signal to the user.
Disclosed is an improved diffraction structure for 3D display systems. The improved diffraction structure includes an intermediate layer that resides between a waveguide substrate and a top grating surface. The top grating surface comprises a first material that corresponds to a first refractive index value, the intermediate layer comprises a second material that corresponds to a second refractive index value, and the substrate comprises a third material that corresponds to a third refractive index value.
A wearable device may include a head-mounted display (HMD) for rendering a three-dimensional (3D) virtual object which appears to be located in an ambient environment of a user of the display. The relative positions of the HMD and one or more eyes of the user may not be in desired positions to receive image information outputted by the HMD. For example, the HMD-to-eye vertical alignment may be different between the left and right eyes. The wearable device may determine if the HMD is level on the user's head and may then provide the user with a left-eye alignment marker and a right-eye alignment marker. Based on user feedback, the wearable device may determine if there is any left-right vertical misalignment and may take actions to reduce or minimize the effects of any misalignment.
G09G 5/38 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of individual graphic patterns using a bit-mapped memory with means for controlling the display position
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
This disclosure describes techniques for device authentication and/or pairing. A display system can comprise a head mountable display, computer memory, and processor(s). In response to receiving a request to authenticate a connection between the display system and a companion device (e.g., controller or other computer device), first data may be determined, the first data based at least partly on biometric data associated with a user. The first data may be sent to an authentication device configured to compare the first data to second data received from the companion device, the second data based at least partly on the biometric data. Based at least partly on a correspondence between the first and second data, the authentication device can send a confirmation to the display system to permit communication between the display system and companion device.
H04M 1/60 - Substation equipment, e.g. for use by subscribers including speech amplifiers
H04W 4/80 - Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
An apparatus configured for head-worn by a user, includes: a screen configured to present graphics for the user; a camera system configured to view an environment in which the user is located; and a processing unit coupled to the camera system, the processing unit configured to: obtain a feature detection response for a first image, divide the feature detection response into a plurality of patches having a first patch and a second patch, determine a first maximum value in the first patch of the feature detection response, and identify a first set of one or more features for a first region of the first image based on a first criterion that relates to the determined first maximum value.
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
G06V 10/771 - Feature selection, e.g. selecting representative features from a multi-dimensional feature space
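A minimal sketch of the patch-based criterion in the abstract above (before the classification codes): the response map is divided into patches, each patch's maximum is found, and pixels are kept when their response clears a criterion tied to that maximum. The patch size and ratio are assumptions.

```python
import numpy as np

def patch_keypoints(response, patch=16, ratio=0.8):
    """Divide a feature-detection response map into patches, take each patch's
    maximum, and keep pixels whose response clears a criterion tied to that
    maximum (here: at least `ratio` times the patch max)."""
    H, W = response.shape
    keypoints = []
    for y0 in range(0, H, patch):
        for x0 in range(0, W, patch):
            block = response[y0:y0 + patch, x0:x0 + patch]
            ys, xs = np.nonzero(block >= ratio * block.max())
            keypoints.extend((y0 + y, x0 + x) for y, x in zip(ys, xs))
    return keypoints

pts = patch_keypoints(np.random.rand(128, 128))
```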
Several unique configurations for interferometric recording of volumetric phase diffractive elements with relatively high angle diffraction for use in waveguides are disclosed. Separate layer EPE and OPE structures produced by various methods may be integrated in side-by-side or overlaid constructs, and multiple such EPE and OPE structures may be combined or multiplexed to exhibit EPE/OPE functionality in a single, spatially-coincident layer. Multiplexed structures reduce the total number of layers of materials within a stack of eyepiece optics, each of which may be responsible for displaying a given focal depth range of a volumetric image. Volumetric phase type diffractive elements are used to offer properties including spectral bandwidth selectivity that may enable registered multi-color diffracted fields, angular multiplexing capability to facilitate tiling and field-of-view expansion without crosstalk, and all-optical, relatively simple prototyping compared to other diffractive element forms, enabling rapid design iteration.
G02B 30/24 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer’s left and right eyes of the stereoscopic type involving temporal multiplexing, e.g. using sequentially activated left and right shutters
G02B 30/26 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer’s left and right eyes of the autostereoscopic type
G02F 1/1334 - Constructional arrangements based on polymer-dispersed liquid crystals, e.g. microencapsulated liquid crystals
G03H 1/04 - Processes or apparatus for producing holograms
43. METHOD OF FABRICATING DISPLAY DEVICE HAVING PATTERNED LITHIUM-BASED TRANSITION METAL OXIDE
The present disclosure generally relates to display systems, and more particularly to augmented reality display systems and methods of fabricating the same. A method of fabricating a display device includes providing a substrate comprising a lithium (Li)-based oxide and forming an etch mask pattern exposing regions of the substrate. The method additionally includes plasma etching the exposed regions of the substrate using a gas mixture comprising CHF3 to form a diffractive optical element, wherein the diffractive optical element comprises Li-based oxide features configured to diffract visible light incident thereon.
An apparatus for providing a virtual or augmented reality experience, includes: a screen, wherein the screen is at least partially transparent for allowing a user of the apparatus to view an object in an environment surrounding the user; a surface detector configured to detect a surface of the object; an object identifier configured to obtain an orientation and/or an elevation of the surface of the object, and to make an identification for the object based on the orientation and/or the elevation of the surface of the object; and a graphic generator configured to generate an identifier indicating the identification for the object for display by the screen, wherein the screen is configured to display the identifier.
G06F 3/04812 - Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06T 7/70 - Determining position or orientation of objects or cameras
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
The disclosure describes an improved drop-on-demand, controlled volume technique for dispensing resist onto a substrate, which is then imprinted to create a patterned optical device suitable for use in optical applications such as augmented reality and/or mixed reality systems. The technique enables the dispensation of drops of resist at precise locations on the substrate, with precisely controlled drop volume corresponding to an imprint template having different zones associated with different total resist volumes. Controlled drop size and placement also provides for substantially less variation in residual layer thickness across the surface of the substrate after imprinting, compared to previously available techniques. The technique employs resist having a refractive index closer to that of the substrate index, reducing optical artifacts in the device. To ensure reliable dispensing of the higher index and higher viscosity resist in smaller drop sizes, the dispensing system can continuously circulate the resist.
F21V 8/00 - Use of light guides, e.g. fibre optic devices, in lighting devices or systems
G03F 7/00 - Photomechanical, e.g. photolithographic, production of textured or patterned surfaces, e.g. printed surfaces; Materials therefor, e.g. comprising photoresists; Apparatus specially adapted therefor
46. COMPUTATIONALLY EFFICIENT METHOD FOR COMPUTING A COMPOSITE REPRESENTATION OF A 3D ENVIRONMENT
Methods and apparatus for providing a representation of an environment, for example in an XR system or any suitable computer vision or robotics application. A representation of an environment may include one or more planar features. The representation of the environment may be provided by jointly optimizing plane parameters of the planar features and the sensor poses at which the planar features are observed. The joint optimization may be based on a reduced matrix and a reduced residual vector in lieu of the Jacobian matrix and the original residual vector.
A display system can include a head-mounted display configured to project light to an eye of a user to display virtual image content at different amounts of divergence and collimation. The display system can include an inward-facing imaging system that images the user's eye and processing electronics that are in communication with the inward-facing imaging system and that are configured to obtain an estimate of a center of rotation of the user's eye. The display system may render virtual image content with a render camera positioned at or relative to the center of rotation of the eye.
A61B 3/113 - Objective types, i.e. instruments for examining the eyes independent of the patient's perceptions or reactions for determining or recording eye movement
A61B 3/11 - Objective types, i.e. instruments for examining the eyes independent of the patient's perceptions or reactions for measuring interpupillary distance or diameter of pupils
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups ,
G02B 30/40 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images giving the observer of a single two-dimensional [2D] image a perception of depth
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
G06V 10/60 - Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
G06V 40/18 - Eye characteristics, e.g. of the iris
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
H04N 13/383 - Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
An imaging system includes a light source configured to generate a light beam. The system also includes first and second light guiding optical elements having respective first and second entry portions, and configured to propagate at least respective first and second portions of the light beam by total internal reflection. The system further includes a light distributor having a light distributor entry portion, a first exit portion, and a second exit portion. The light distributor is configured to direct the first and second portions of the light beam toward the first and second entry portions, respectively. The light distributor entry portion and the first exit portion are aligned along a first axis. The light distributor entry portion and the second exit portion are aligned along a second axis different from the first axis.
An audio system and method of spatially rendering audio signals that uses modified virtual speaker panning is disclosed. The audio system may include a fixed number F of virtual speakers, and the modified virtual speaker panning may dynamically select and use a subset P of the fixed virtual speakers. The subset P of virtual speakers may be selected using a low energy speaker detection and culling method, a source geometry-based culling method, or both. One or more processing blocks in the decoder/virtualizer may be bypassed based on the energy level of the associated audio signal or the location of the sound source relative to the user/listener, respectively. In some embodiments, a virtual speaker that is designated as an active virtual speaker at a first time, may also be designated as an active virtual speaker at a second time to ensure the processing completes.
H04S 7/00 - Indicating arrangements; Control arrangements, e.g. balance control
G10L 19/008 - Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
G10L 25/21 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the type of extracted parameters the extracted parameters being power information
H04S 3/00 - Systems employing more than two channels, e.g. quadraphonic
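A minimal sketch of the two culling passes from the abstract above (before the classification codes): of the fixed set of F virtual speakers, keep only the subset P that passes a low-energy floor and a source-geometry test. The thresholds and hemisphere test are illustrative assumptions.

```python
import numpy as np

def select_active_speakers(speaker_dirs, source_dir, energies,
                           energy_floor=1e-4, cone_cos=0.0):
    """From the fixed set of F virtual speakers, keep the subset P worth
    rendering: drop speakers whose signal energy is below a floor and
    speakers facing away from the source."""
    active = []
    for i, (d, e) in enumerate(zip(speaker_dirs, energies)):
        if e < energy_floor:                  # low-energy detection and culling
            continue
        if np.dot(d, source_dir) < cone_cos:  # source-geometry-based culling
            continue
        active.append(i)
    return active

dirs = [np.array(v, float) for v in [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, 0, 1)]]
subset = select_active_speakers(dirs, np.array([0.0, 0.0, 1.0]),
                                energies=[0.2, 0.0, 0.3, 0.4])
```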
An HMD comprises a head-mountable frame, and a light projection assembly supported by the frame. The light projection assembly comprises a micro-display supported by the frame. The micro-display has a two-dimensional array of pixels. Each of the pixels comprises a group of light emitters configured for emitting image light. The light projection assembly further comprises projection optics configured for receiving the image light at an entrance pupil from the group of light emitters of each of the array of pixels, and projecting focused image light from an exit pupil. The light projection assembly further comprises a two-dimensional array of light collimators disposed between the micro-display and the projection optics. A steerable light collimator is further configured for redirecting the emission profiles of the corresponding group of light emitters towards a center of the entrance pupil of the projection optics.
51. MAPPING OF ENVIRONMENTAL AUDIO RESPONSE ON MIXED REALITY DEVICE
This disclosure relates in general to augmented reality (AR), mixed reality (MR), or extended reality (XR) environmental mapping. Specifically, this disclosure relates to AR, MR, or XR audio mapping in an AR, MR, or XR environment. In some embodiments, the disclosed systems and methods allow the environment to be mapped based on a recording. In some embodiments, the audio mapping information is associated with voxels located in the environment.
Disclosed herein are systems and methods for presenting mixed reality audio. In an example method, audio is presented to a user of a wearable head device. A first position of the user's head at a first time is determined based on one or more sensors of the wearable head device. A second position of the user's head at a second time later than the first time is determined based on the one or more sensors. An audio signal is determined based on a difference between the first position and the second position. The audio signal is presented to the user via a speaker of the wearable head device. Determining the audio signal comprises determining an origin of the audio signal in a virtual environment. Presenting the audio signal to the user comprises presenting the audio signal as if originating from the determined origin. Determining the origin of the audio signal comprises applying an offset to a position of the user's head.
An augmented reality display having a world side and a user side includes a world side optical structure including a geometric-phase lens, an eyepiece waveguide, and a user side optical device. A dimming structure having a linear polarizer, a liquid crystal cell, and a quarter-wave plate provides attenuation. A second geometric-phase lens may be part of the user side optical device.
G02F 1/1347 - Arrangement of liquid crystal layers or cells in which the final condition of one light beam is achieved by the addition of the effects of two or more layers or cells
G02F 1/29 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the position or the direction of light beams, i.e. deflection
54. Nose pad for a head mounted audio-visual display system
Various embodiments relate to a thermal management system for an electronic device, such as an augmented reality or virtual reality device. The thermal management system can comprise an active cooling mechanism in various embodiments to dynamically or actively cool components of the device, for example by adjusting fan speeds of a fan assembly. In some embodiments, a hardware shutdown mechanism can be provided to shut down the device if software-based thermal management devices are inoperable. In some embodiments, the air flow into and/or within the electronic device can be adjusted to cool various components of the device.
G05B 15/02 - Systems controlled by a computer electric
H02H 1/00 - Details of emergency protective circuit arrangements
H02H 5/04 - Emergency protective circuit arrangements for automatic disconnection directly responsive to an undesired change from normal non-electric working conditions with or without subsequent reconnection responsive to abnormal temperature
H05K 7/20 - Modifications to facilitate cooling, ventilating, or heating
Disclosed herein are systems and methods for capturing a sound field, in particular, using a mixed reality device. In some embodiments, a method comprises: detecting, with a microphone of a first wearable head device, a sound of an environment; determining a digital audio signal based on the detected sound, the digital audio signal associated with a sphere having a position in the environment; detecting, concurrently with detecting the sound, a microphone movement with respect to the environment; and adjusting the digital audio signal, wherein the adjusting comprises adjusting the position of the sphere based on the detected microphone movement.
One embodiment is directed to a system for enabling two or more users to interact within a virtual world comprising virtual world data, comprising a computer network comprising one or more computing devices, the one or more computing devices comprising memory, processing circuitry, and software stored at least in part in the memory and executable by the processing circuitry to process at least a portion of the virtual world data; wherein at least a first portion of the virtual world data originates from a first user virtual world local to a first user, and wherein the computer network is operable to transmit the first portion to a user device for presentation to a second user, such that the second user may experience the first portion from the location of the second user, such that aspects of the first user virtual world are effectively passed to the second user.
H04L 65/401 - Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference
An eyepiece waveguide for an augmented reality display system includes an optically transmissive substrate, a first in-coupling grating (ICG) region, a second ICG region and one or more pupil expander and extraction gratings. The first ICG region can receive input beams of light corresponding to a first color component of an input image, and can couple them into the substrate. The second ICG region can receive input beams of light corresponding to a second color component of the input image, and can couple them into the substrate. The pupil expander and extraction gratings can replicate the in-coupled beams and out-couple them from the substrate. The first and second ICG regions can be provided at angularly separated locations around the substrate. The eyepiece waveguide can be capable of reducing color distortion in an output image.
G02B 30/22 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer’s left and right eyes of the stereoscopic type
G02B 30/25 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer’s left and right eyes of the stereoscopic type using polarisation techniques
G02B 30/26 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer’s left and right eyes of the autostereoscopic type
G02B 30/20 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer’s left and right eyes
G02B 30/27 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer’s left and right eyes of the autostereoscopic type involving lenticular arrays
G02B 30/28 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer’s left and right eyes of the autostereoscopic type involving lenticular arrays involving active lenticular arrays
G02B 30/34 - Stereoscopes providing a stereoscopic pair of separated images corresponding to parallactically displaced views of the same object, e.g. 3D slide viewers
An augmented reality system includes a light source configured to generate a virtual light beam. The system also includes a light guiding optical element having an entry portion, an exit portion, and a surface having a diverter disposed adjacent thereto. The light source and the light guiding optical element are configured such that the virtual light beam enters the light guiding optical element through the entry portion, propagates through the light guiding optical element by at least partially reflecting off of the surface, and exits the light guiding optical element through the exit portion. The light guiding optical element is transparent to a first real-world light beam. The diverter is configured to modify a light path of a second real-world light beam at the surface.
G02B 30/52 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images the image being built up from image elements distributed over a 3D volume, e.g. voxels the 3D volume being constructed from a stack or sequence of 2D planes, e.g. depth sampling systems
An augmented reality (AR) device can be configured to generate a virtual representation of a user's physical environment. The AR device can capture images of the user's physical environment to generate a mesh map. The AR device can project graphics at designated locations on a virtual bounding box to guide the user to capture images of the user's physical environment. The AR device can provide visual, audible, or haptic guidance to direct the user of the AR device to look toward waypoints to generate the mesh map of the user's environment.
Disclosed herein are systems and methods for presenting and annotating virtual content. According to an example method, a virtual object is presented to a first user at a first position via a transmissive display of a wearable device. A first input is received from the first user. In response to receiving the first input, a virtual annotation is presented at a first displacement from the first position. First data is transmitted to a second user, the first data associated with the virtual annotation and the first displacement. A second input is received from the second user. In response to receiving the second input, the virtual annotation is presented to the first user at a second displacement from the first position. Second data is transmitted to a remote server, the second data associated with the virtual object, the virtual annotation, the second displacement, and the first position.
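As a concrete illustration of the data flow, here is a minimal Python sketch of sharing a displacement-anchored annotation; the data model, field names, and JSON transport are assumptions for illustration, not the disclosed format.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical data model: the annotation is stored as a displacement
# (offset) from the object's position, so each receiving client can
# re-resolve the annotation's world position locally.

@dataclass
class Vec3:
    x: float
    y: float
    z: float

@dataclass
class AnnotationUpdate:
    object_id: str
    annotation_text: str
    displacement: Vec3   # offset from the object's first position

def annotation_world_position(object_position: Vec3,
                              update: AnnotationUpdate) -> Vec3:
    """Resolve the annotation's world position on the receiving device."""
    d = update.displacement
    return Vec3(object_position.x + d.x,
                object_position.y + d.y,
                object_position.z + d.z)

# Serialize for transmission to the second user or a remote server.
update = AnnotationUpdate("obj-1", "check this edge", Vec3(0.0, 0.2, 0.0))
payload = json.dumps(asdict(update))
```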
Systems and methods for compressing dynamic unstructured point clouds. A dynamic unstructured point cloud can be mapped to a skeletal system of a subject to form one or more structured point cloud representations. One or more sequences of the structured point cloud representations can be formed. The one or more sequences of structured point cloud representations can then be compressed.
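The skeletal mapping step can be illustrated with a short sketch: each point is assigned to its nearest joint and re-expressed in that joint's local frame, which is one plausible way to obtain a structured representation; the nearest-joint rule and NumPy types are assumptions, not the disclosed encoder.

```python
import numpy as np

# Points expressed relative to a moving skeleton change slowly from frame
# to frame, so a sequence of such structured frames compresses well
# (e.g., with delta coding of the local offsets).

def structure_points(points, joint_positions):
    """points: (N, 3), joint_positions: (J, 3).
    Returns (joint index per point, offset in that joint's frame)."""
    # Distance from every point to every joint.
    d = np.linalg.norm(points[:, None, :] - joint_positions[None, :, :],
                       axis=2)
    nearest = d.argmin(axis=1)                 # (N,) joint assignment
    local = points - joint_positions[nearest]  # offsets in joint frames
    return nearest, local

rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 3))
joints = rng.normal(size=(20, 3))
assign, offsets = structure_points(pts, joints)
```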
Controllable three-dimensional (3D) virtual dioramas in a rendered 3D environment, such as a virtual reality or augmented reality environment including one or more rendered objects. A 3D diorama is associated with a spatial computing content item, such as a downloadable application executable by a computing device. 3D diorama assets may include visual and/or audio content and are presented with rendered 3D environment objects in a composite view, which is presented to a user through a display of the computing device. The 3D diorama is rotatable in the composite view, and at least one 3D diorama asset at least partially occludes, or is at least partially occluded by, at least one rendered 3D environment object. The 3D diorama may depict or provide a preview of a spatial computing user experience generated by the downloadable application.
G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04N 13/279 - Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals the virtual viewpoint locations being selected by the viewers or determined by tracking
An extended reality display system includes a display subsystem configured to present an image corresponding to image data to a user. The display subsystem includes an optical component that introduces a non-uniformity to the image, a segmented illumination light source, and a spatial light modulator (SLM) configured to receive light from the segmented illumination light source. The system also includes a display controller configured to control the segmented illumination light source. The display controller includes a memory for storing non-uniformity correction information, and a processor to control the segmented illumination light source based on the non-uniformity correction information. The segmented illumination light source is configured to differentially illuminate first and second portions of the SLM using respective first and second portions of the segmented illumination light source.
65.
METHODS AND APPARATUSES FOR CASTING POLYMER PRODUCTS
In an example method of forming a waveguide part having a predetermined shape, a photocurable material is dispensed into a space between a first mold portion and a second mold portion opposite the first mold portion. A relative separation between a surface of the first mold portion with respect to a surface of the second mold portion opposing the surface of the first mold portion is adjusted to fill the space between the first and second mold portions. The photocurable material in the space is irradiated with radiation suitable for photocuring the photocurable material to form a cured waveguide film so that different portions of the cured waveguide film have different rigidity. The cured waveguide film is separated from the first and second mold portions. The waveguide part is singulated from the cured waveguide film. The waveguide part corresponds to portions of the cured waveguide film having a higher rigidity than other portions of the cured waveguide film.
A fan assembly is disclosed. The fan assembly can include a first support frame. The fan assembly can comprise a shaft assembly having a first end coupled with the first support frame and a second end disposed away from the first end. A second support frame can be coupled with the first support frame and disposed at or over the second end of the shaft assembly. An impeller can have fan blades coupled with a hub, the hub being disposed over the shaft assembly for rotation between the first and second support frames about a longitudinal axis. Transverse loading on the shaft assembly can be controlled by the first and second support frames.
A wearable ophthalmic device may include a head-mounted light field display configured to generate a physical light field comprising a beam of light. Camera(s) on or in communication with the device may receive light from the surroundings, and a light field processor may determine, based on the light, left and right numerical light field image data describing image(s) to be displayed to the left and right eyes respectively. The left and/or right numerical light field image data can be modified to computationally introduce a shift based on a determined convergence point of the eyes, and the physical light field presented to the user can be generated corresponding to the modified numerical light field image data, e.g., to correct for a convergence deficiency of the eye(s).
A61B 3/00 - Apparatus for testing the eyes; Instruments for examining the eyes
A61B 3/028 - Subjective types, i.e. testing apparatus requiring the active assistance of the patient for testing visual acuity; Subjective types, i.e. testing apparatus requiring the active assistance of the patient for determination of refraction, e.g. phoropters
A61B 3/10 - Objective types, i.e. instruments for examining the eyes independent of the patient's perceptions or reactions
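A rough sketch of the computational shift step described in the abstract above: a per-eye lateral pixel offset is derived from the determined convergence distance under a pinhole small-angle model. The formula, the IPD and focal-length values, and the integer-pixel roll are all illustrative assumptions.

```python
import numpy as np

# Shift the numerical light field image data laterally so the apparent
# vergence distance moves from the eyes' determined convergence point
# toward the intended display distance.

def lateral_shift_px(convergence_m, target_m, ipd_m=0.063,
                     focal_px=1200.0):
    """Approximate per-eye pixel shift that moves the apparent vergence
    distance from convergence_m to target_m (small-angle pinhole model)."""
    half_ipd = ipd_m / 2.0
    return focal_px * (half_ipd / convergence_m - half_ipd / target_m)

def shift_image(img, shift_px):
    """Translate image columns by an integer pixel shift."""
    return np.roll(img, int(round(shift_px)), axis=1)

img = np.zeros((480, 640), dtype=np.uint8)
shifted = shift_image(img, lateral_shift_px(0.5, 2.0))
```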
Systems and methods for displaying a virtual reticle in an augmented or virtual reality environment by a wearable device are described. The environment can include real or virtual objects that may be interacted with by the user through a variety of poses, such as, e.g., head pose, eye pose or gaze, or body pose. The user may select objects by pointing the virtual reticle toward a target object by changing pose or gaze. The wearable device can recognize that an orientation of a user's head or eyes is outside of a range of acceptable or comfortable head or eye poses and accelerate the movement of the reticle away from a default position and toward a position in the direction of the user's head or eye movement, which can reduce the amount of movement by the user to align the reticle and target.
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04812 - Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
G06V 40/18 - Eye characteristics, e.g. of the iris
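One plausible reading of the reticle-acceleration behavior described above, as a sketch: a movement gain stays at 1.0 inside an assumed comfortable yaw range and ramps up outside it, so less physical movement aligns the reticle with the target. The range bounds and gain schedule are invented illustration values.

```python
# Illustrative constants, not values from the disclosure.
COMFORT_YAW_DEG = 30.0   # assumed comfortable half-range of head yaw
MAX_GAIN = 3.0           # assumed maximum acceleration multiplier

def reticle_gain(head_yaw_deg: float) -> float:
    """Movement multiplier: 1.0 inside the comfort range, ramping up to
    MAX_GAIN over the next 20 degrees of excess yaw, then clamped."""
    excess = abs(head_yaw_deg) - COMFORT_YAW_DEG
    if excess <= 0:
        return 1.0
    return min(MAX_GAIN, 1.0 + excess / 20.0 * (MAX_GAIN - 1.0))

def update_reticle(reticle_x_deg, head_yaw_rate_deg_s, head_yaw_deg, dt):
    """Move the reticle faster when the head pose exceeds the comfort range."""
    return reticle_x_deg + head_yaw_rate_deg_s * reticle_gain(head_yaw_deg) * dt

x = update_reticle(0.0, head_yaw_rate_deg_s=50.0, head_yaw_deg=45.0, dt=0.016)
```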
69.
EYEPIECE FOR HEAD-MOUNTED DISPLAY AND METHOD FOR MAKING THE SAME
A method includes providing a wafer including a first surface grating extending over a first area of a surface of the wafer and a second surface grating extending over a second area of the surface of the wafer; de-functionalizing a portion of the surface grating in at least one of the first surface grating area and the second surface grating area; and singulating an eyepiece from the wafer, the eyepiece including a portion of the first surface grating area and a portion of the second surface grating area. The first surface grating in the eyepiece corresponds to an input coupling grating for a head-mounted display and the second surface grating corresponds to a pupil expander grating for the head-mounted display.
Disclosed herein are systems and methods for setting, accessing, and modifying user privacy settings using a distributed ledger. In an aspect, a system can search previously stored software contracts to locate an up-to-date version of a software contract associated with a user based on a request for access to user data for the particular user. Then, the system determines that the user data is permitted to be shared. The system transmits, to a data virtualization platform, instructions to extract encrypted user data from a data platform. The system can then make available, to a data verification system, a private encryption key and details associated with the software contract to verify that the private encryption key and the user data match. Then the system transmits, to the data virtualization platform, the private encryption key so that the data virtualization platform can decrypt the encrypted user data.
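A control-flow sketch of the permission check, with the ledger contract schema, the "vault", and the XOR stand-in for decryption all being illustrative assumptions rather than a real distributed-ledger or cryptography API:

```python
def latest_contract(contracts, user_id):
    """Locate the most recent software contract stored for this user."""
    mine = [c for c in contracts if c["user_id"] == user_id]
    return max(mine, key=lambda c: c["version"]) if mine else None

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher standing in for real encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def handle_access_request(contracts, user_id, fields, vault, private_key):
    contract = latest_contract(contracts, user_id)
    if contract is None or not set(fields) <= set(contract["shareable"]):
        raise PermissionError("sharing not permitted by user's contract")
    encrypted = vault[user_id]                  # extract from the data platform
    return xor_bytes(encrypted, private_key)    # decrypt after verification

contracts = [{"user_id": "u1", "version": 2, "shareable": {"age"}}]
key = b"k3y"
vault = {"u1": xor_bytes(b'{"age": 42}', key)}
print(handle_access_request(contracts, "u1", {"age"}, vault, key))
```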
Neutral avatars are neutral with reference to physical characteristics of the corresponding user, such as weight, ethnicity, gender, or even identity. Thus, neutral avatars may be desirable in various copresence environments where the user wishes to maintain privacy with respect to the above-noted characteristics. Neutral avatars may be configured to convey, in real time, actions and behaviors of the corresponding user without using literal forms of those actions and behaviors.
Systems, apparatus, and methods for double-sided imprinting are provided. An example system includes first rollers for moving a first web including a first template having a first imprinting feature, second rollers for moving a second web including a second template having a second imprinting feature, dispensers for dispensing resist, a locating system for locating reference marks on the first and second webs for aligning the first and second templates, a light source for curing the resist, such that a cured first resist has a first imprinted feature corresponding to the first imprinting feature on one side of the substrate and a cured second resist has a second imprinted feature corresponding to the second imprinting feature on the other side of the substrate, and a moving system for feeding in the substrate between the first and second templates and unloading the double-imprinted substrate from the first and second webs.
B29C 59/04 - Surface shaping, e.g. embossing; Apparatus therefor by mechanical means, e.g. pressing using rollers or endless belts
B29C 43/22 - Compression moulding, i.e. applying external pressure to flow the moulding material; Apparatus therefor of articles of indefinite length
B29C 43/28 - Compression moulding, i.e. applying external pressure to flow the moulding material; Apparatus therefor of articles of indefinite length incorporating preformed parts or layers, e.g. compression moulding around inserts or for coating articles
B29C 43/30 - Making multilayered or multicoloured articles
B29C 43/34 - Feeding the material to the mould or the compression means
B29C 51/26 - Component parts, details or accessories; Auxiliary operations
G03F 7/00 - Photomechanical, e.g. photolithographic, production of textured or patterned surfaces, e.g. printed surfaces; Materials therefor, e.g. comprising photoresists; Apparatus specially adapted therefor
73.
UV AND VISIBLE LIGHT EXIT GRATING FOR EYEPIECE FABRICATION AND OPERATION
A method of forming a waveguide for an eyepiece for a display system to reduce optical degradation of the waveguide during segmentation is disclosed herein. The method includes providing a substrate having top and bottom major surfaces and a plurality of surface features, and using a laser beam to cut out a waveguide from said substrate by cutting along a path contacting and/or proximal to said plurality of surface features. The waveguide has edges formed by the laser beam and a main region and a peripheral region surrounding the main region. The peripheral region is surrounded by the edges.
Techniques are disclosed for operating a time-of-flight (TOF) sensor. The TOF sensor may be operated in a low power mode by repeatedly performing a low power mode sequence, which may include performing a depth frame by emitting light pulses, detecting reflected light pulses, and computing a depth map based on the detected reflected light pulses. Performing the low power mode sequence may also include performing an amplitude frame at least one time by emitting a light pulse, detecting a reflected light pulse, and computing an amplitude map based on the detected reflected light pulse. In response to determining that an activation condition is satisfied, the TOF sensor may be switched to operate in a high accuracy mode by repeatedly performing a high accuracy mode sequence, which may include performing the depth frame multiple times.
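The two-mode loop can be sketched as a small state machine; the stub sensor class, the repeat count, and the activation test below are illustrative assumptions, not values from the disclosure.

```python
import random

DEPTH_REPEATS_HIGH_ACCURACY = 4  # assumed repeat count per cycle

class StubTofSensor:
    """Fake TOF sensor returning random 'maps' so the loop can run."""
    def emit_pulses(self): pass
    def detect_reflections(self): return [random.random() for _ in range(8)]
    def compute_depth_map(self, refl): return refl
    def compute_amplitude_map(self, refl): return refl

def depth_frame(sensor):
    sensor.emit_pulses()
    return sensor.compute_depth_map(sensor.detect_reflections())

def amplitude_frame(sensor):
    sensor.emit_pulses()
    return sensor.compute_amplitude_map(sensor.detect_reflections())

def run(sensor, activation_condition, cycles=100):
    high_accuracy = False
    for _ in range(cycles):
        if high_accuracy:
            # High accuracy mode: the depth frame runs multiple times.
            frames = [depth_frame(sensor)
                      for _ in range(DEPTH_REPEATS_HIGH_ACCURACY)]
        else:
            # Low power mode: one depth frame plus at least one amplitude frame.
            depth = depth_frame(sensor)
            amplitude = amplitude_frame(sensor)
            if activation_condition(depth, amplitude):
                high_accuracy = True  # switch modes on the next cycle

run(StubTofSensor(), lambda depth, amp: max(amp) > 0.99)
```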
Examples of systems and methods for rendering an avatar in a mixed reality environment are disclosed. The systems and methods may be configured to automatically select avatar characteristics that optimize gaze perception by the user, based on context parameters associated with the virtual environment.
A display assembly suitable for use with a virtual or augmented reality headset is described and includes the following: an input coupling grating; a scanning mirror configured to rotate about two or more different axes of rotation; an optical element; and optical fibers, each of which have a light emitting end disposed between the input coupling grating and the scanning mirror and oriented such that light emitted from the light emitting end is refracted through at least a portion of the optical element, reflected off the scanning mirror, refracted back through the optical element and into the input coupling grating. The scanning mirror can be built upon a MEMS type architecture.
G02B 26/08 - Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
An augmented reality (AR) system includes a handheld device comprising handheld fiducials affixed to the handheld device. The AR system also includes a wearable device comprising a display operable to display virtual content and an imaging device mounted to the wearable device and having a field of view that at least partially includes the handheld fiducials and a hand of a user. The AR system also includes a computing apparatus configured to receive hand pose data associated with the hand based on an image captured by the imaging device and receive handheld device pose data associated with the handheld device based on the image captured by the imaging device. The computing apparatus is also configured to determine a pose discrepancy between the hand pose data and the handheld device pose data and perform an operation to fuse the hand pose data with the handheld device pose data.
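A minimal sketch of the fusion step, assuming position-only poses and a simple discrepancy-gated weighted average; a real system would fuse full 6-DoF poses (e.g., with quaternions) and tuned weights.

```python
import numpy as np

def fuse_positions(hand_pos, device_pos, threshold_m=0.05, device_weight=0.8):
    """If the two estimates disagree by more than threshold_m, bias toward
    the fiducial-based device pose; otherwise average them equally."""
    hand_pos, device_pos = np.asarray(hand_pos), np.asarray(device_pos)
    discrepancy = np.linalg.norm(hand_pos - device_pos)
    w = device_weight if discrepancy > threshold_m else 0.5
    return w * device_pos + (1.0 - w) * hand_pos, discrepancy

fused, err = fuse_positions([0.10, 0.00, 0.30], [0.12, 0.01, 0.31])
```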
Disclosed herein are systems and methods for colocating virtual content. A method may include receiving first persistent coordinate data, second persistent coordinate data, and relational data. A third persistent coordinate data and a fourth persistent coordinate data may be determined based on input received via one or more sensors of a head-wearable device. It can be determined whether the first persistent coordinate data corresponds to the third persistent coordinate data. In accordance with a determination that the first persistent coordinate data corresponds to the third persistent coordinate data, it can be determined whether the second persistent coordinate data corresponds to the fourth persistent coordinate data. In accordance with a determination that the second persistent coordinate data corresponds to the fourth persistent coordinate data, a virtual object can be displayed using the relational data and the second persistent coordinate data via a display of the head-wearable device. In accordance with a determination that the second persistent coordinate data does not correspond to the fourth persistent coordinate data, the virtual object can be displayed using the relational data and the first persistent coordinate data via the display of the head-wearable device. In accordance with a determination that the first persistent coordinate data does not correspond to the third persistent coordinate data, the method may forgo displaying the virtual object via the display of the head-wearable device.
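The fallback logic lends itself to a compact sketch; representing persistent coordinate frames as plain tuples and testing correspondence with equality are simplifying assumptions.

```python
def choose_anchor(first_pcf, second_pcf, third_pcf, fourth_pcf):
    """Return the persistent coordinate frame to anchor to, or None."""
    if first_pcf != third_pcf:
        return None            # forgo displaying the virtual object
    if second_pcf == fourth_pcf:
        return second_pcf      # best case: both frames correspond
    return first_pcf           # partial match: fall back to the first frame

def display_object(anchor_pcf, relational_offset):
    """Object position = anchor origin + relational data (offset)."""
    if anchor_pcf is None:
        return None
    return tuple(a + o for a, o in zip(anchor_pcf, relational_offset))

pos = display_object(
    choose_anchor((0, 0, 0), (1, 0, 0), (0, 0, 0), (1, 0, 0)),
    (0.0, 1.5, -2.0))
```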
An electronic device is disclosed. The electronic device comprises a first clock configured to operate at a frequency. First circuitry of the electronic device is configured to synchronize with the first clock. Second circuitry is configured to determine a second clock based on the first clock. The second clock is configured to operate at the frequency of the first clock, and is further configured to operate with a phase shift with respect to the first clock. Third circuitry is configured to synchronize with the second clock.
A switchable optical assembly comprises a switchable waveplate configured to be electrically activated and deactivated to selectively alter the polarization state of light incident on the switchable waveplate. The switchable waveplate comprises first and second surfaces and a liquid crystal layer disposed between the first and second surfaces. The liquid crystal layer comprises a plurality of liquid crystal molecules. The first surface and/or the second surface may be planar. The first surface and/or the second surface may be curved. The plurality of liquid crystal molecules may vary in tilt with respect to the first and second surfaces with outward radial distance from an axis through the first and second surfaces and the liquid crystal layer in a plurality of radial directions. The switchable waveplate can include a plurality of electrodes to apply an electrical signal across the liquid crystal layer.
G02F 1/137 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour based on liquid crystals, e.g. single liquid crystal display cells characterised by the electro-optical or magneto-optical effect, e.g. field-induced phase transition, orientation effect, guest-host interaction or dynamic scattering
G02B 30/34 - Stereoscopes providing a stereoscopic pair of separated images corresponding to parallactically displaced views of the same object, e.g. 3D slide viewers
G02F 1/01 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour
G02F 1/13 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour based on liquid crystals, e.g. single liquid crystal display cells
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using depth data to update camera calibration data. In some implementations, a frame of data is captured including (i) depth data from a depth sensor of a device, and (ii) image data from a camera of the device. Selected points from the depth data are transformed, using camera calibration data for the camera, to a three-dimensional space that is based on the image data. The transformed points are projected onto the two-dimensional image data from the camera. Updated camera calibration data is generated based on differences between (i) the locations of the projected points and (ii) locations that features representing the selected points appear in the two-dimensional image data from the camera. The updated camera calibration data can be used in a simultaneous localization and mapping process.
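A simplified sketch of the reprojection comparison, assuming a pinhole camera model with intrinsics K and extrinsics (R, t); a real pipeline would minimize these residuals with nonlinear least squares rather than inspect them directly.

```python
import numpy as np

def project(points_cam, K):
    """Pinhole projection of (N, 3) camera-frame points to pixels."""
    p = points_cam @ K.T
    return p[:, :2] / p[:, 2:3]

def reprojection_residuals(depth_points, observed_px, R, t, K):
    """Transform selected depth points into the camera frame with current
    calibration, project them, and compare against observed features."""
    points_cam = depth_points @ R.T + t   # depth frame -> camera frame
    return project(points_cam, K) - observed_px

K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.array([0.01, 0.0, 0.0])   # current extrinsics guess
pts = np.array([[0.1, 0.0, 1.0], [0.0, 0.2, 2.0]])
obs = project(pts, K)                           # synthetic observed features
residuals = reprojection_residuals(pts, obs, R, t, K)
```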
An optical projection system includes a source of collimated light, a first microelectromechanical system mirror positioned to receive collimated light from the source, and an optical relay system positioned to receive collimated light from the first microelectromechanical system mirror. The optical relay system includes a single-pass relay having a first component, a second component, and a third component. The optical projection system also includes a second microelectromechanical system mirror positioned to receive collimated light from the optical relay system and an eyepiece positioned to receive light reflected from the second microelectromechanical system mirror.
An optical scanner includes a base region and a cantilevered silicon beam protruding from the base region. The optical scanner also includes a waveguide disposed on the base region and the cantilevered silicon beam and a transducer assembly comprising one or more piezoelectric actuators coupled to the cantilevered silicon beam and configured to induce motion of the cantilevered silicon beam in a scan pattern.
Methods are disclosed for fabricating molds for forming waveguides with integrated spacers for forming eyepieces. The molds are formed by etching features (e.g., 1 μm to 1000 μm deep) into a substrate comprising single crystalline material using an anisotropic wet etch. The etch masks for defining the large features may comprise a plurality of holes, wherein the size and shape of each hole at least partially determine the depth of the corresponding large feature. The holes may be aligned along a crystal axis of the substrate and the etching may automatically stop due to the crystal structure of the substrate. The patterned substrate may be utilized as a mold onto which a flowable polymer may be introduced and allowed to harden. Hardened polymer in the holes may form a waveguide with integrated spacers. The mold may also be used to fabricate a platform comprising a plurality of vertically extending microstructures of precise heights, to test the curvature or flatness of a sample, e.g., based on the amount of contact between the microstructures and the sample.
B29C 43/02 - Compression moulding, i.e. applying external pressure to flow the moulding material; Apparatus therefor of articles of definite length, i.e. discrete articles
B29C 33/42 - Moulds or cores; Details thereof or accessories therefor characterised by the shape of the moulding surface, e.g. ribs or grooves
B29C 43/40 - Moulds for making articles of definite length, i.e. discrete articles with means for cutting the article
B29L 11/00 - Optical elements, e.g. lenses, prisms
An example head-mounted display device includes a light projector and an eyepiece. The eyepiece is arranged to receive light from the light projector and direct the light to a user during use of the device. The eyepiece includes a waveguide having an edge positioned to receive light from the light projector and couple the light into the waveguide. The waveguide includes a first surface and a second surface opposite the first surface. The waveguide includes several different regions, each having different grating structures configured to diffract light according to different sets of grating vectors.
Blazed diffraction gratings provide optical elements in head-mounted display systems to, e.g., in-couple light into or out-couple light out of a waveguide. These blazed diffraction gratings may be configured to have reduced polarization sensitivity. Such gratings may, for example, in-couple or out-couple light of different polarizations with a similar level of efficiency. The blazed diffraction gratings and waveguides may be formed in a high refractive index substrate such as lithium niobate. In some implementations, the blazed diffraction gratings may include diffractive features having a feature height of 40 nm to 120 nm, for example, 80 nm. The diffractive features may be etched into the high index substrate, e.g., lithium niobate.
A display system can include a head-mounted display configured to project light to an eye of a user to display virtual image content at different amounts of divergence and collimation. The display system can include an inward-facing imaging system, possibly comprising a plurality of cameras, that images the user's eye and glints thereon, and processing electronics that are in communication with the inward-facing imaging system and that are configured to obtain an estimate of the center of the cornea of the user's eye using data derived from the glint images. The display system may use spherical and aspheric cornea models to estimate a location of the corneal center of the user's eye.
A61B 3/113 - Objective types, i.e. instruments for examining the eyes independent of the patient's perceptions or reactions for determining or recording eye movement
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
88.
CROSS REALITY SYSTEM WITH SIMPLIFIED PROGRAMMING OF VIRTUAL CONTENT
A cross reality system that renders virtual content generated by executing native mode applications may be configured to render web-based content using components that render content from native applications. The system may include a Prism manager that provides Prisms in which content from executing native applications is rendered. For rendering web-based content, a browser accessing the web-based content may be associated with a Prism and may render content into its associated Prism, creating the same immersive experience for the user as when content is generated by a native application. The user may access the web application from the same program launcher menu as native applications. The system may have tools that enable a user to access these capabilities, including creating, for a web location, an installable entity that, when processed by the system, results in an icon for the web content in a program launcher menu.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
Techniques are described for operating an optical system. In some embodiments, light associated with a world object is received at the optical system. Virtual image light is projected onto an eyepiece of the optical system. A portion of a system field of view of the optical system to be at least partially dimmed is determined based on information detected by the optical system. A plurality of spatially-resolved dimming values for the portion of the system field of view may be determined based on the detected information. The detected information may include light information, gaze information, and/or image information. A dimmer of the optical system may be adjusted to reduce an intensity of light associated with the world object in the portion of the system field of view according to the plurality of dimming values.
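One way to picture spatially-resolved dimming values, as a sketch: dim the segmented dimmer most where the virtual image is bright and near the detected gaze, so world light does not wash out the virtual content. The Gaussian gaze falloff and the weighting are illustrative choices, not the disclosed algorithm.

```python
import numpy as np

def dimming_map(virtual_luma, gaze_xy, sigma=40.0, max_dim=0.9):
    """virtual_luma: (H, W) virtual image luminance in [0, 1];
    gaze_xy: gaze point in pixel coordinates.
    Returns per-pixel dimming values: 0 = clear, max_dim = darkest."""
    h, w = virtual_luma.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gaze_falloff = np.exp(-((xs - gaze_xy[0])**2 + (ys - gaze_xy[1])**2)
                          / (2 * sigma**2))
    return max_dim * virtual_luma * gaze_falloff

dim = dimming_map(np.random.rand(480, 640), gaze_xy=(320, 240))
```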
A wearable device can present virtual content to the wearer for many applications in a healthcare setting. The wearer may be a patient or a healthcare provider (HCP). Such applications can include, but are not limited to, access, display, and modification of patient medical records and sharing patient medical records among authorized HCPs.
G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
A61B 3/00 - Apparatus for testing the eyes; Instruments for examining the eyes
A61B 3/10 - Objective types, i.e. instruments for examining the eyes independent of the patient's perceptions or reactions
A61B 3/113 - Objective types, i.e. instruments for examining the eyes independent of the patient's perceptions or reactions for determining or recording eye movement
A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
A61B 5/06 - Devices, other than using radiation, for detecting or locating foreign bodies
A61B 5/1171 - Identification of persons based on the shapes or appearances of their bodies or parts thereof
A61B 90/00 - Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B 1/00 - A61B 50/00, e.g. for luxation treatment or for protecting wound edges
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00 - G02B 33/00
G16H 30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
G16H 40/67 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
A61B 17/00 - Surgical instruments, devices or methods
A61B 34/20 - Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
A61B 90/50 - Supports for surgical instruments, e.g. articulated arms
A two-dimensional scanning micromirror device includes a base, a first platform coupled to the base by first support flexures, and a second platform including a reflector and coupled to the first platform by second support flexures. The first platform is oscillatable about a first axis and the second platform is oscillatable about a second axis orthogonal to the first axis. The first platform, the second platform, and the second support flexures together exhibit a first resonance having a first frequency, the first resonance corresponding to oscillatory motion of at least the first platform, the second platform, and the second support flexures about the first axis. The first platform, the second platform, and the second support flexures together exhibit a second resonance having a second frequency, and the second resonance corresponds to oscillatory motion of at least the second platform about the second axis.
G02B 26/08 - Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
G03B 21/00 - Projectors or projection-type viewers; Accessories therefor
Examples of systems and methods for matching a base mesh to a target mesh for a virtual avatar or object are disclosed. The systems and methods may be configured to automatically match a base mesh of an animation rig to a target mesh, which may represent a particular pose of the virtual avatar or object. Base meshes may be obtained by manipulating an avatar or object into a particular pose, while target meshes may be obtained by scanning, photographing, or otherwise obtaining information about a person or object in the particular pose. The systems and methods may automatically match a base mesh to a target mesh using rigid transformations in regions of higher error and non-rigid deformations in regions of lower error.
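A sketch of the error-driven split, assuming per-vertex correspondences: a Kabsch rigid fit is applied in the high-error region and a gentle blend stands in for the non-rigid deformation in the low-error region; the threshold and blend factor are illustrative.

```python
import numpy as np

def rigid_fit(src, dst):
    """Best-fit rotation R and translation t mapping src -> dst (Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    return R, cd - R @ cs

def match(base, target, thresh=0.05):
    """Rigid transform in high-error regions, soft pull in low-error ones."""
    err = np.linalg.norm(base - target, axis=1)
    high = err > thresh
    out = base.copy()
    if high.any():
        R, t = rigid_fit(base[high], target[high])
        out[high] = base[high] @ R.T + t
    out[~high] += 0.5 * (target[~high] - base[~high])  # gentle non-rigid pull
    return out

base = np.random.rand(100, 3)
target = base + 0.02 * np.random.rand(100, 3)
matched = match(base, target)
```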
A sensory eyewear system for a mixed reality device can facilitate the user's interactions with other people or with the environment. As one example, the sensory eyewear system can recognize and interpret a sign language, and present the translated information to a user of the mixed reality device. The wearable system can also recognize text in the user's environment, modify the text (e.g., by changing the content or display characteristics of the text), and render the modified text to occlude the original text.
A wearable computing system that includes a head-mounted display implements a gaze timer feature for enabling the user to temporarily extend the functionality of a handheld controller or other user input device. In one embodiment, when the user gazes at, or in the vicinity of, a handheld controller for a predetermined period of time, the functionality of one or more input elements (e.g., buttons) of the handheld controller is temporarily modified. For example, the function associated with a particular controller button may be modified to enable the user to open a particular menu using the button. The gaze timer feature may, for example, be used to augment the functionality of a handheld controller or other user input device during mixed reality and/or augmented reality sessions.
G06F 3/023 - Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
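The dwell behavior of the gaze timer described above can be sketched as a small timer class; the dwell time, angular window, and reset-on-look-away rule are assumptions for illustration.

```python
DWELL_S = 1.0        # assumed dwell time before remapping
WINDOW_DEG = 10.0    # assumed angular window around the controller

class GazeTimer:
    """Accumulates gaze dwell on the controller; remaps input on timeout."""
    def __init__(self):
        self.elapsed = 0.0
        self.remapped = False

    def update(self, gaze_to_controller_deg: float, dt: float) -> bool:
        if gaze_to_controller_deg <= WINDOW_DEG:
            self.elapsed += dt
        else:
            self.elapsed, self.remapped = 0.0, False  # reset on look-away
        if self.elapsed >= DWELL_S:
            self.remapped = True  # e.g., trigger button now opens a menu
        return self.remapped

timer = GazeTimer()
for _ in range(70):                                  # ~1.1 s of steady gaze
    timer.update(gaze_to_controller_deg=5.0, dt=0.016)
assert timer.remapped
```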
Examples of the disclosure describe systems and methods for presenting an audio signal to a user of a wearable head device. According to an example method, a source location corresponding to the audio signal is identified. For each of the user's left and right ears, a virtual speaker position of a virtual speaker array is determined, the virtual speaker position being collinear with the source location and with a position of the respective ear. For each ear, a head-related transfer function (HRTF) corresponding to the virtual speaker position and to the respective ear is determined, and the output audio signal is presented to the respective ear of the user via one or more speakers associated with the wearable head device. Processing the audio signal includes applying the HRTF to the audio signal.
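The per-ear geometry can be sketched directly: pick the point collinear with the ear and the source at the speaker-array radius from the ear, then snap to the nearest speaker in the array. The array layout and the placeholder HRTF lookup are assumptions.

```python
import numpy as np

def virtual_speaker_position(ear_pos, source_pos, radius=1.0):
    """Point at distance `radius` from the ear along the ear-source line,
    i.e., collinear with the ear position and the source location."""
    ear_pos = np.asarray(ear_pos, dtype=float)
    d = np.asarray(source_pos, dtype=float) - ear_pos
    return ear_pos + radius * d / np.linalg.norm(d)

def nearest_speaker(position, speaker_array):
    """Index of the virtual speaker closest to the computed position."""
    return int(np.linalg.norm(speaker_array - position, axis=1).argmin())

# A ring of 16 virtual speakers around the head (illustrative layout).
speakers = np.array([[np.cos(a), np.sin(a), 0.0]
                     for a in np.linspace(0, 2 * np.pi, 16, endpoint=False)])
left_ear = np.array([-0.09, 0.0, 0.0])
idx = nearest_speaker(virtual_speaker_position(left_ear, [1.0, 2.0, 0.0]),
                      speakers)
# hrtf = hrtf_table[("left", idx)]  # placeholder lookup, then convolve
```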
The disclosure relates to systems and methods for displaying three-dimensional (3D) content in a spatial 3D environment. The systems and methods can include receiving a request from a web domain to display 3D content of certain dimensions at a location within the spatial 3D environment, identifying whether the requested placement is within an authorized portion of the spatial 3D environment, expanding the authorized portion of the spatial 3D environment to display the 3D content based on a user authorization to resize the authorized portion, and displaying the 3D content.
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 16/954 - Navigation, e.g. using categorised browsing
H04N 13/332 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
98.
METHOD AND SYSTEM FOR PERFORMING SPATIAL FOVEATION BASED ON EYE GAZE
A method includes determining an eye gaze location of a user and generating a spatial foveation map based on the eye gaze location. The method also includes receiving an image, forming a spatially foveated image using the image and the spatial foveation map, and transmitting the spatially foveated image to a wearable device. The method further includes spatially defoveating the spatially foveated image to produce a spatially defoveated image and displaying the spatially defoveated image.
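A toy version of the foveation/defoveation round trip, assuming image dimensions divisible by the block size: full resolution is kept inside a gaze-centered disc and block-averaged elsewhere; a real codec would vary resolution smoothly per the foveation map.

```python
import numpy as np

def foveate(img, gaze, radius=80, block=4):
    """Keep full resolution within `radius` px of the gaze point; replace
    the periphery with block-averaged (coarse) values. Image dimensions
    must be divisible by `block` (a simplifying assumption)."""
    h, w = img.shape
    small = img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    coarse = np.repeat(np.repeat(small, block, 0), block, 1)  # "defoveated"
    ys, xs = np.mgrid[0:h, 0:w]
    fovea = (xs - gaze[0])**2 + (ys - gaze[1])**2 <= radius**2
    return np.where(fovea, img, coarse)

img = np.random.rand(480, 640)
fov = foveate(img, gaze=(320, 240))
```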
An eyepiece for projecting an image to a viewer includes a substrate positioned in a substrate lateral plane and a set of color filters disposed on the substrate. The set of color filters comprises a first color filter disposed at a first lateral position and operable to pass a first wavelength range, a second color filter disposed at a second lateral position and operable to pass a second wavelength range, and a third color filter disposed at a third lateral position and operable to pass a third wavelength range. The eyepiece further includes a first planar waveguide positioned in a first lateral plane adjacent the substrate lateral plane, a second planar waveguide positioned in a second lateral plane adjacent to the first lateral plane, and a third planar waveguide positioned in a third lateral plane adjacent to the second lateral plane.
A method of fabricating a fiber scanning system includes forming a set of piezoelectric elements. The method also includes coating an interior surface and an exterior surface of each of the set of piezoelectric elements with a first conductive material. The method also includes providing a fiber optic element having an actuation region and coating the actuation region of the fiber optic element with a second conductive material. The method also includes joining the interior surfaces of the set of piezoelectric elements to the actuation region of the fiber optic element and poling the set of piezoelectric elements. The method also includes forming electrical connections to the exterior surface of each of the set of piezoelectric elements and the fiber optic element.
H02N 2/00 - Electric machines in general using piezoelectric effect, electrostriction or magnetostriction
H02N 2/02 - Electric machines in general using piezoelectric effect, electrostriction or magnetostriction producing linear motion, e.g. actuators; Linear positioners
H10N 30/045 - Treatments to modify a piezoelectric or electrostrictive property, e.g. polarisation characteristics, vibration characteristics or mode tuning by polarising