Systems and methods for generating composite depth images are disclosed. Exemplary implementations may: capture, by a depth sensor, a set of depth images over a capture period of time; generate, by an inertial sensor, inertial signals that convey values of one or more inertial parameters characterizing motion of the depth sensor during the capture period of time; select a target capture position based on one or more of the capture positions of the set of depth images; generate, using the values of the one or more inertial parameters during the capture period of time, re-projected depth images; and generate a composite depth image by combining multiple depth images, such multiple depth images including a first re-projected depth image and a second re-projected depth image.
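The re-projection and compositing steps described in this abstract can be sketched in a few lines of NumPy. This is a minimal illustration only: the pinhole intrinsics matrix K, the IMU-derived pose (R, t), and the per-pixel nanmean combination rule are assumptions, not details taken from the abstract.

```python
import numpy as np

def reproject_depth(depth, K, R, t):
    """Re-project a depth image into the target capture pose.

    depth : (H, W) array of depths
    K     : (3, 3) camera intrinsics (assumed pinhole model)
    R, t  : rotation (3, 3) and translation (3,) from the source pose to
            the target pose, e.g. integrated from the inertial signals
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Back-project every pixel to a 3-D point in the source camera frame.
    pts = np.linalg.inv(K) @ np.stack([u, v, np.ones_like(u)]).reshape(3, -1)
    pts = pts * depth.reshape(1, -1)
    # Move the points into the target frame and project them again.
    pts = R @ pts + t[:, None]
    proj = K @ pts
    uv = (proj[:2] / proj[2]).round().astype(int)
    out = np.full_like(depth, np.nan)  # NaN marks pixels with no sample
    ok = (uv[0] >= 0) & (uv[0] < W) & (uv[1] >= 0) & (uv[1] < H) & (proj[2] > 0)
    out[uv[1, ok], uv[0, ok]] = proj[2, ok]
    return out

def composite(depths):
    """Combine re-projected depth images by per-pixel mean, ignoring gaps."""
    return np.nanmean(np.stack(depths), axis=0)
```

With an identity pose the re-projection is a no-op, which gives a quick sanity check before feeding in real IMU-integrated poses.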
Methods based on growth pattern models are utilized to determine patterns of reflective dots (108) in optical combiners (100, 200, 300, 400, 600, 700 and 800) or other components for augmented reality (AR), head mounted displays (HMD) and/or head up display (HUD) applications. Optical combiners (100, 200, 300, 400, 600, 700 and 800) including the reflective dots (108) arranged in the grown patterns are provided.
Optical hyperfocal reflective systems and methods are provided. One such optical hyperfocal reflective system has an optical substrate (101, 101F, 101G1, 101G2, 101G3), an optical input coupling portion (150, 150G1) configured to input couple a collimated display image to the optical substrate; and an optical hyperfocal output coupling portion integrated with said optical substrate (101, 101F, 101G1, 101G2, 101G3). The optical output coupling portion includes at least one hyperfocal reflective view port (102, 102H, 102V, 102G1, 102G2, ...) formed from a discrete optical hyperfocal reflector spot (103, 103A1, 103A2, 103B, 103M1, 103M2, 103C1-103C6, 103V, 103H, ...) integrated with the optical substrate. The discrete optical hyperfocal reflector spot is sized to form a reflected discrete optical spot beam (105, 105A1, 105A2, 105B, 105C, ...) with a diameter at a target area such that a view of a discrete virtual display image portion, as seen by a lens-detector system (400) locatable at the target area, is hyperfocused.
Optical adaptive viewport display systems and methods are provided. One such optical adaptive viewport display system (600) has an adaptive pupil device (602) which is optically coupled to an optical combiner (604). The adaptive pupil device (602) is optically couplable to an image projector (601) and is configured to select a sub-pupil (607) from the pupil of the projector. The selected sub-pupil (607) is optically relayed by relay optics (609, 604) from the adaptive pupil device (602) to an eyebox (605). The relay optics include an optical combiner (604). The sub-pupil size and position are selected by the adaptive pupil device (602) so that an optical image spot beam from the sub-pupil, reflected by the optical combiner onto the eyebox, has a diameter at the eyebox (605) such that the virtual image (109), as seen by a human eye (201) disposed at the eyebox (605), is hyperfocused.
G02B 26/08 - Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
G02B 27/09 - Beam shaping, e.g. changing the cross-sectional area, not otherwise provided for
G06T 19/00 - Manipulating 3D models or images for computer graphics
09 - Scientific and electrical apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods and services
Integrated suite of augmented reality software for workforce productivity and collaboration. Providing on-line, non-downloadable augmented reality software for workforce productivity and collaboration.
Optical combiners are provided. The optical combiner may have a see-through optically transparent substrate (105, 205) and a patterned region (107) included in the optically transparent substrate (105, 205) and disposed along a wave propagation axis (106) of the substrate. The patterned region (107) may be partially optically reflective and partially optically transparent (105, 205). The patterned region (107) may comprise a plurality of optically transparent regions (109) of the optically transparent substrate and a plurality of optically reflective regions (108) inclined relative to the wave propagation axis (106) of the optically transparent substrate. Augmented reality optical apparatus, such as a head-up display, may include the optical combiner.
G02B 23/00 - Telescopes or binoculars, e.g. field glasses; Periscopes; Instruments for viewing the inside of hollow bodies; Viewfinders; Optical aiming or sighting devices
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
A wearable computing device is provided. The wearable computing device includes at least one processor, a display element configured to display augmented reality (AR) content to a wearer, a location sensor providing location information, and a task management engine executed by the at least one processor. The task management engine is configured to receive a task event identifying a task to be performed, identify a location associated with the task event, display a first AR content item to the wearer, the first AR content item is a navigational aid associated with the location, detect that the wearable computing device is proximate the location, determine a task object associated with the task event, and display a second AR content item to the wearer using the display element, the second AR content item identifies the task object to the wearer in a field of view of the display element.
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. using automatic pilots
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
A server receives video data and location data from mobile devices. Each mobile device records a video of a target. The location data identifies a position of the corresponding mobile device relative to the target and a distance between the corresponding mobile device and the target. The location data is associated with a corresponding video frame from the video data. The server identifies video frames from the video data captured from the mobile devices. The server scales parts of the identified video frames based on the position and distance of the corresponding mobile devices to the target. The server extracts the scaled parts of the identified video frames and generates a three-dimensional model of the target based on the extracted scaled parts of the identified video frames from the plurality of mobile devices.
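The distance-based scaling step above can be illustrated with a small helper. The linear scaling model and the reference distance are assumptions for illustration; the abstract does not specify the actual scaling function the server applies.

```python
def scale_factor(distance, reference_distance=1.0):
    """A target filmed from farther away appears smaller, so frames are
    scaled up in proportion to their capture distance before extraction
    (hypothetical linear model)."""
    return distance / reference_distance

def scaled_size(size, distance, reference_distance=1.0):
    """Return the (width, height) a frame region would be resized to."""
    w, h = size
    s = scale_factor(distance, reference_distance)
    return round(w * s), round(h * s)
```

Applying this per frame brings regions captured at different distances to a common apparent scale before the three-dimensional model is assembled.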
An augmented reality device (103, 105) includes one or more sensors for imaging and/or detecting an environment (308) and a transparent display (103) for displaying virtual objects. The augmented reality device (103, 105) monitors various biometric attributes of a user (302) and determines the user's location within the environment (308). The augmented reality device (103, 105) determines a virtual path from the user's location to a selected destination within the environment using the monitored biometric attributes as one or more constraints in the pathfinding determination. The determined virtual path (304-312, 314-324) is displayed on the transparent display (103) as a virtual object such that it appears overlaid on the user's environment. The augmented reality device (103, 105) monitors the user (302) as he or she traverses the virtual path (304-312, 314-324) through the environment (308), and updates the virtual path (304-312, 314-324) in response to changes in one or more of the user's monitored biometric attributes.
G01C 21/20 - Instruments for performing navigational calculations
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
A server receives datasets from mobile devices. Each dataset identifies a task selected in an augmented reality application of a corresponding mobile device and an identification of a tool detected at the corresponding mobile device. The server identifies tools present and absent at a dedicated tool board and compares an identification of the tools present and absent at the dedicated tool board with the tools detected at the mobile devices and the tasks identified at the mobile devices to generate a tool inventory and a tool compliance. The server generates an augmented reality content dataset for each mobile device to identify at least one of a missing tool, an incorrect tool, and a valid tool based on the tool compliance.
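The comparison that yields the missing / incorrect / valid tool classification is naturally expressed as set arithmetic. A minimal sketch follows; the field names and the extra "unreturned" bucket are hypothetical, as the abstract does not define the server's data model.

```python
def tool_compliance(required, detected, absent_from_board):
    """Classify tools for one mobile device.

    required          : tools the selected task calls for
    detected          : tools detected at the mobile device
    absent_from_board : tools currently absent from the dedicated tool board
    """
    required, detected = set(required), set(detected)
    return {
        "missing": sorted(required - detected),      # needed but not present
        "incorrect": sorted(detected - required),    # present but not needed
        "valid": sorted(required & detected),        # needed and present
        "unreturned": sorted(set(absent_from_board) - detected),
    }
```

The server would run this per device and fold the results into the overall tool inventory.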
A head mounted device includes a sensor, a transparent display, and a processor comprising an augmented reality application and an alignment module. The augmented reality application generates a virtual content based on data from the sensor, and displays the virtual content in the transparent display. The alignment module detects an initialization of the augmented reality application and causes a temporary display of an alignment indicator in the transparent display in response to detecting the initialization of the augmented reality application.
A system and method for generating a dynamic sensor array for an augmented reality system is described. A head mounted device includes one or more sensors, an augmented reality (AR) application, and a sensor array module. The sensor array module identifies available sensors from other head mounted devices that are geographically located within a predefined area. A dynamic sensor array is formed based on the available sensors and the one or more sensors. The dynamic sensor array is updated based on an operational status of the available sensors and the one or more sensors. The AR application generates AR content based on data from the dynamic sensor array. A display of the head mounted device displays the AR content.
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
A device can determine a distance to an object. The device can use the determined distance to vary a focal length of a first adjustable element so that the first adjustable element directs light from the object into a first waveguide and onto a detector, and forms an image of the object at the detector. The device can produce an image, such as augmented content, on a panel. The device can direct light from the panel into a second waveguide. The device can use the determined distance to vary a focal length of a second adjustable element so that the second adjustable element directs light out of the second waveguide and forms a virtual image of the panel in a plane coincident with the object. The device can operate as an augmented reality headset. The adjustable elements can be phase modulators, or acoustically responsive material with surface acoustic wave transducers.
An augmented reality device includes a transparent optical display for displaying one or more depth-encoded images. The transparent optical display leverages an optical true time delay circuit communicatively coupled to a multi-layered optical element for displaying the one or more depth-encoded images. A light source is modified or modulated, which is then directed by the optical true time delay circuit, to create the depth-encoded images. A dynamic depth encoder determines which layers of the multi-layered optical element are to be energized, and the optical true time delay circuit is directed accordingly. In this manner, the optical true time delay circuit uses the inherent delay of transmitting the light as a controlled proxy for complex processing.
G02B 27/22 - Other optical systems; Other optical apparatus for producing stereoscopic or other three-dimensional effects
G02F 1/1335 - Structural association of cells with optical devices, e.g. polarisers or reflectors
15.
PLACEMENT OF A COMPUTER GENERATED DISPLAY WITH FOCAL PLANE AT FINITE DISTANCE USING OPTICAL DEVICES AND A SEE-THROUGH HEAD-MOUNTED DISPLAY INCORPORATING THE SAME
Fixed position optical devices (100, 200, 300, 400, 500, 600, 1000) for displaying augmented reality images are provided herein. In one embodiment an optical device includes an AIIE (106, 206, 306, 406, 1008) having a waveguide that reflects a computer generated image along a central viewing axis, the computer generated image being received from an image generator (104, 510, 512, 610, 612) optically coupled to the waveguide, and a fixed lens assembly (112, 202, 204, 302, 304, 402, 404, 1002, 1004) for coupling a background image with the computer generated image to create the augmented reality display, the fixed lens assembly including a proximal lens (114, 202, 302, 402, 1002) disposed on one side of the waveguide, the proximal lens being fixedly spaced apart from the waveguide at a first distance, and a distal lens (116, 204, 304, 404, 1004) disposed on an opposing side of the AIIE from the one side, the distal lens being fixedly spaced apart from the waveguide at a second distance.
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06T 19/00 - Manipulating 3D models or images for computer graphics
A system and method for determining a quality of contact of an electroencephalogram (EEG) electrode is described. An electrode is connected to a user to detect brainwave activity from the user. An EEG application computes a voltage of the electrode, computes a derivative of the voltage of the electrode, computes a coefficient of variation from the derivative of the voltage, and determines a quality of contact of the electrode to the user based on the coefficient of variation.
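The contact-quality metric described above reduces to a few array operations. In this sketch the use of the mean absolute derivative as the CV denominator and the 0.5 quality threshold are assumptions for illustration, not values taken from the abstract.

```python
import numpy as np

def contact_quality(voltages, threshold=0.5):
    """Return (coefficient of variation of dV/dt, good_contact flag).

    voltages : 1-D array of sampled electrode voltages
    """
    dv = np.diff(voltages)        # derivative of the electrode voltage
    mean = np.abs(dv).mean()      # mean absolute derivative (assumption)
    if mean == 0:
        return 0.0, False         # flat signal: no detectable activity
    cv = dv.std() / mean          # coefficient of variation
    return cv, bool(cv < threshold)
```

A noisy, poorly seated electrode yields a large CV; a stable contact yields a small one, so the flag serves as a simple go / no-go indicator.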
09 - Scientific and electrical apparatus and instruments
Goods and services
wearable computer peripherals; wearable computers in the nature of glasses equipped with software for mixing physical objects with perspective based digital content; computer hardware; computer hardware and peripherals for displaying and accessing data and video.
09 - Scientific and electrical apparatus and instruments
Goods and services
computer operating programs; computer operating systems; computer application software for mobile phones and handheld computers, namely, software for use in mixing physical objects with perspective based digital content; downloadable software for use in mixing physical objects with perspective based digital content.
09 - Scientific and electrical apparatus and instruments
Goods and services
Computer software platforms for vision reference; Computer application software for mobile phones and handheld computers, namely, software for use in mixing physical objects with perspective based digital content; downloadable software for use in mixing physical objects with perspective based digital content.
A head mounted device (HMD) includes a transparent display, a first set of sensors, a second set of sensors, and a processor. The first set of sensors measures first sensor data related to a user of the HMD. The second set of sensors measures second sensor data related to the HMD. The processor determines a user-based context based on the first sensor data, determines an ambient-based context based on the second sensor data, and accesses AR content based on the user-based context and the ambient-based context. The HMD displays the AR content on the transparent display.
A head mounted device (HMD) includes a transparent display, sensors to generate sensor data, and a processor. The processor identifies a threat condition based on a threat pattern and the sensor data, and generates a warning notification in response to the identified threat condition. The threat pattern includes preconfigured thresholds for the sensor data. The HMD displays AR content comprising the warning notification in the transparent display.
A system and method for an acousto-optical display for augmented reality are described. In some embodiments, a viewing device includes a transparent acousto-optical display and an augmented reality (AR) device. The transparent acousto-optical display displays virtual content. The augmented reality (AR) device dynamically adjusts optical properties of the transparent acousto-optical display and controls a depth of field of the virtual content displayed in the transparent acousto-optical display based on the optical properties.
G02F 1/11 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour, based on acousto-optical elements, e.g. using variable diffraction by sound waves or similar mechanical vibrations
A head mounted device (HMD) includes a transparent display, a first set of sensors, a second set of sensors, and a processor. The first set of sensors measures first sensor data including an identification of a user of the HMD and a biometric state of the user of the HMD. The second set of sensors measures second sensor data including a location of the HMD and ambient metrics based on the location of the HMD. The HMD determines a user-based context based on the first sensor data, determines an ambient-based context based on the second sensor data, determines an application context within an AR application implemented by the processor, identifies a virtual fictional character based on a combination of the user-based context, the ambient-based context, and the application context, and displays the virtual fictional character in the transparent display.
G09G 5/32 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators, characterised by the display of individual characters or signs using display control signals derived from coded signals representing the characters or signs, with a character code memory and with means for controlling the display position
A system and method for generating a virtual content to a physical object is described. A processor includes an augmented reality application. The augmented reality application creates virtual content at the head mounted device, and associates the virtual content with predefined conditions based on data from sensors embedded in the head mounted device at a time of creation of the virtual content. The virtual content is displayed in a display of the head mounted device in response to sensor data satisfying the predefined conditions.
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
Visual gestures in a display device allow a user to select and activate features in a display of the display device. A sensor of the display device tracks an eye gaze of a user directed at a display of the display device. A visual gesture module identifies a predefined trigger zone in the display. A virtual object application displays a virtual object in the display based on the eye gaze of the user and the predefined trigger zone.
A head mounted device includes different types of sensors for obtaining sensor data of objects in a physical environment near the head mounted device. The sensors include millimeter wave sensors disposed with the head mounted device that are automatically or manually engageable. The millimeter wave sensors may be automatically engaged based on the location of the head mounted device or when the head mounted device receives sensor data indicating an abnormality. The millimeter wave sensors may further be manually engaged based on an instruction received from a user of the head mounted device via an input device, such as a wearable device, or audio command, such as a command received from a microphone coupled with the head mounted device. The millimeter wave sensors provide millimeter wave sensor data that the head mounted device uses to construct millimeter wave sensor images.
Techniques of active surface projection correction are disclosed. In some embodiments, a computer-implemented method comprises determining that a current position of a display surface of a head-mounted display device is determined to satisfy a predetermined condition for displaying virtual content on the display surface, with the display surface being configured to be adjusted between positions that do not satisfy the predetermined condition and positions that do satisfy the predetermined condition, determining display surface position data based on the current position of the display surface, determining a display location for the virtual content based on the display surface position data, and displaying the virtual content at the display location on the display surface.
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
A head mounted device includes a helmet, an ambient light sensor, a pupil dimension sensor, a lighting element, and a dynamic lighting system. The ambient light sensor is disposed in an outside surface of the helmet and measures ambient light outside the helmet. The pupil dimension sensor is disposed in a housing of the helmet and measures a size of a pupil of a wearer of the helmet. The lighting element is disposed in the outside surface of the helmet. The dynamic lighting system controls the lighting element and adjusts an intensity of the lighting element based on the ambient light and the pupil size of the wearer of the helmet.
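The dynamic lighting rule above (brighten the lamp when ambient light is low and the wearer's pupils are dilated) can be sketched as a simple blend. The blend weights, the lux ceiling, and the pupil operating range are invented for illustration; the abstract does not give the control law.

```python
def lamp_intensity(ambient_lux, pupil_mm,
                   max_lux=10_000.0, pupil_range=(2.0, 8.0)):
    """Return a lamp drive level in [0, 1] from ambient light and pupil size."""
    # Darkness term: 1 in total darkness, 0 at or above max_lux.
    darkness = 1.0 - min(ambient_lux / max_lux, 1.0)
    # Dilation term: 0 at fully constricted, 1 at fully dilated.
    lo, hi = pupil_range
    dilation = min(max((pupil_mm - lo) / (hi - lo), 0.0), 1.0)
    # Equal-weight blend (assumption).
    return 0.5 * darkness + 0.5 * dilation
```

For example, a dark environment with a fully dilated pupil drives the lamp to full intensity, while bright daylight with a constricted pupil turns it off.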
A head mounted device includes a helmet and a substantially arc-shaped visor. The helmet has an augmented reality device disposed in a housing of the helmet. A first set of magnets is embedded and disposed along a periphery of a front portion of the helmet. The substantially arc-shaped visor has a top part and a bottom part. The top part is removably attached to the front portion of the helmet. A second set of magnets is embedded and disposed along a periphery of the top part of the visor to match the first set of magnets.
Techniques of head mounted display calibration are disclosed. In some example embodiments, corresponding intrinsic calibration procedures are performed for each component in a plurality of components of a head mounted display, with each intrinsic calibration procedure comprising determining one or more intrinsic calibration parameters for the corresponding component, and a plurality of extrinsic calibration procedures are performed among the plurality of components, with each extrinsic calibration procedure comprising determining one or more extrinsic calibration parameters. An augmented reality function of the head mounted display is configured based on the determined intrinsic calibration parameters and the determined extrinsic calibration parameters, with the configured augmented reality function being configured to cause the display of virtual content on the head mounted display using the determined intrinsic and extrinsic calibration parameters in conjunction with the plurality of components.
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
Techniques of providing digitized interactions with identified objects are disclosed. In some embodiments, sensor data of an object can be received. The sensor data may have been captured by a computing device of a user. A category of the object can be identified based on at least one characteristic of the object from the sensor data. A characterizing feature of the category of the object can be determined. Virtual content can be generated based on the characterizing feature. The virtual content can be caused to be displayed concurrently with a view of the object on a display screen of the computing device.
A system and method for real-time texture mapping for an augmented reality system are described. A viewing device includes an optical sensor to capture an image of a real-world object. A texture extraction module extracts a texture of the image of the real-world object. A recognition module identifies the real-world object based on the captured image. A texture mapping module retrieves a virtual object corresponding to the identified real-world object, maps the texture to the virtual object, dynamically updates the texture to the virtual object in real time, and generates a visualization of the virtual object in a display of the viewing device.
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
34.
SAMPLE BASED COLOR EXTRACTION FOR AUGMENTED REALITY
A system and method for sampling-based color extraction for augmented reality are described. A viewing device includes an optical sensor to capture an image of a real-world object. Color extraction software divides the captured image into multiple regions or recognizes pre-defined regions and identifies a color value for each region. A color-based augmented reality effect module retrieves a virtual content based on the color values for the regions, and delivers the virtual content in the viewing device.
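The region-splitting variant of the color extraction step above can be sketched as a grid of mean colors. The 2x2 grid size is an assumption; the abstract also allows arbitrary pre-defined regions in place of a grid.

```python
import numpy as np

def extract_colors(image, rows=2, cols=2):
    """Divide an (H, W, 3) image into a rows x cols grid and return the
    mean RGB value of each cell."""
    H, W, _ = image.shape
    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            cell = image[r * H // rows:(r + 1) * H // rows,
                         c * W // cols:(c + 1) * W // cols]
            row.append(tuple(cell.reshape(-1, 3).mean(axis=0)))
        out.append(row)
    return out
```

The per-region color values would then key the lookup of matching virtual content.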
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
A head mounted device includes a helmet with a guide, a lens frame, at least one display surface mounted to the lens frame. The guide extends from a cavity of the helmet. The lens frame is moveably connected to the guide and moves along an axis of the guide between a first position within the cavity of the helmet and a second position outside the cavity of the helmet. The display surface is transparent and configured to display augmented reality content.
A system and method for visual inertial navigation for augmented reality are described. In some embodiments, at least one camera of a wearable device generates a plurality of video frames. At least one inertial measurement unit (IMU) sensor of the wearable device generates IMU data. Features in the plurality of video frames for each camera are tracked. The plurality of video frames for each camera are synchronized and aligned based on the IMU data. A dynamic state of the wearable device is computed based on the synchronized plurality of video frames with the IMU data for each camera. Augmented reality content is generated and positioned in a display of the wearable device based on the dynamic state of the wearable device.
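The synchronization step above pairs each video frame with IMU data by timestamp. A nearest-neighbour pairing is one simple way to do this; the abstract does not specify the alignment scheme, so this rule is an assumption.

```python
import bisect

def align_frames(frame_times, imu_times):
    """For each frame timestamp, return the index of the nearest IMU
    sample (both lists assumed sorted), so downstream fusion code can
    pair the two streams."""
    paired = []
    for t in frame_times:
        i = bisect.bisect_left(imu_times, t)
        # The nearest sample is either just before or just after t.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(imu_times)]
        paired.append(min(candidates, key=lambda j: abs(imu_times[j] - t)))
    return paired
```

A real pipeline would typically interpolate or pre-integrate the IMU samples between frames rather than pick a single neighbour, but the pairing above shows the shape of the alignment.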
A system and method for extracting data for augmented reality content are described. A device identifies a sensing device using an image captured with at least one camera of the device. Visual data are extracted from the sensing device. The device generates an AR content based on the extracted visual data and maps and displays the AR content in the display to form a layer on the sensing device.
A system and method for navigating augmented reality (AR) content with a watch are described. A head mounted device identifies a watch, maps and generates a display of an AR menu in a transparent display of the head mounted device. The AR menu is displayed as a layer on the watch. The head mounted device detects a physical user interaction on the watch. The head mounted device navigates the AR menu in response to detecting the physical user interaction on the watch.
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
An application generates instructions to a wearable device to remotely activate a sensor in the wearable device and to receive sensor data from the sensor. A query related to a physical object is received. Instructions to wearable devices are generated to remotely activate at least one sensor of the wearable devices in response to the query. Sensor data is received from at least one of the wearable devices in response to that wearable device being within a range of the physical object.
A system and method for visualization of physical interactions are described. Objects in a scene are captured with a viewing device. Physical characteristics of the objects are computed using data from at least one sensor corresponding to the objects. A physics model of predicted interactions between the one or more objects is generated using the physical characteristics of the objects. An interaction visualization is generated based on the physics model of the predicted interactions between the one or more objects. An image of the one or more objects is augmented with the interaction visualization in a display of the viewing device.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 19/00 - Digital computing or data processing equipment or methods, specially adapted for specific applications (specially adapted for specific functions G06F 17/00; data processing systems or methods specially adapted for administrative, commercial, financial, managerial, supervisory or forecasting purposes G06Q; healthcare informatics G16H)
A system and method for spatial data collection are described. Sensor data related to a position and an orientation of a device are generated over time using sensors of the device. Augmented reality content is generated based on a physical object captured by the device. A path bundle data package identifying a user interaction of the device with the augmented reality content relative to the physical object is generated. The user interaction identifies a spatial path of an interaction with the augmented reality content. The path bundle data package is generated based on the sensor data using a data model comprising a data header and a data payload. The data header comprises a contextual header having data identifying the device and a user of the device. A path header includes data identifying the path of the interaction with the augmented reality content. A sensor header includes data identifying the sensors. The data payload comprises dynamically sized sampling data from the sensor data.
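The header/payload data model described above maps naturally onto a few dataclasses. Field names here are paraphrased from the abstract; the actual wire format is not specified, so this is a structural illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class ContextualHeader:
    device_id: str   # identifies the device
    user_id: str     # identifies the user of the device

@dataclass
class PathHeader:
    path_id: str     # identifies the interaction path

@dataclass
class SensorHeader:
    sensor_ids: list  # identifies the contributing sensors

@dataclass
class PathBundle:
    contextual: ContextualHeader
    path: PathHeader
    sensors: SensorHeader
    # Dynamically sized sampling data from the sensors.
    payload: list = field(default_factory=list)

    def append_sample(self, timestamp, values):
        self.payload.append((timestamp, values))
```

Keeping the payload dynamically sized lets each bundle carry however many samples the interaction produced, as the abstract requires.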
G06F 19/00 - Équipement ou méthodes de traitement de données ou de calcul numérique, spécialement adaptés à des applications spécifiques (spécialement adaptés à des fonctions spécifiques G06F 17/00;systèmes ou méthodes de traitement de données spécialement adaptés à des fins administratives, commerciales, financières, de gestion, de surveillance ou de prévision G06Q;informatique médicale G16H)
G06F 17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
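The data model described in the spatial data collection abstract above can be sketched as follows. This is an illustrative reconstruction only; the abstract specifies just the header/payload split and the three header sections, so all field names (`device_id`, `path_id`, and so on) are assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ContextualHeader:
    device_id: str   # data identifying the device
    user_id: str     # data identifying the user of the device

@dataclass
class PathHeader:
    path_id: str     # data identifying the spatial path of the interaction

@dataclass
class SensorHeader:
    sensor_ids: List[str]  # data identifying the sensors

@dataclass
class PathBundle:
    contextual: ContextualHeader
    path: PathHeader
    sensors: SensorHeader
    # Dynamically sized sampling data: (timestamp, position, orientation).
    payload: List[Tuple[float, tuple, tuple]] = field(default_factory=list)

    def add_sample(self, t, position, orientation):
        self.payload.append((t, position, orientation))

bundle = PathBundle(
    ContextualHeader("helmet-01", "user-42"),
    PathHeader("path-A"),
    SensorHeader(["imu", "camera"]),
)
bundle.add_sample(0.0, (0.0, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0))
bundle.add_sample(0.1, (0.1, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0))
```

The payload list grows one sample at a time, which is one way the "dynamically sized sampling data" of the abstract could be realized.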
A remote expert application identifies a manipulation of virtual objects displayed in a first wearable device. The virtual objects are rendered based on a physical object viewed with a second wearable device. A manipulation of the virtual objects is received from the first wearable device. A visualization of the manipulation of the virtual objects is generated for a display of the second wearable device. The visualization of the manipulation of the virtual objects is communicated to the second wearable device.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 19/00 - Digital computing or data processing equipment or methods, specially adapted for specific applications (specially adapted for specific functions G06F 17/00; data processing systems or methods specially adapted for administrative, commercial, financial, managerial, supervisory or forecasting purposes G06Q; healthcare informatics G16H)
A system and method for spatial data processing are described. Path bundle data packages from a viewing device are accessed and processed. The path bundle data packages identify a user interaction of the viewing device with augmented reality content relative to a physical object captured by the viewing device. The path bundle data packages are generated based on sensor data using a data model comprising a data header and a data payload. The data header comprises a contextual header having data identifying the viewing device and a user of the viewing device, a path header having data identifying the path of the interaction with the augmented reality content, and a sensor header having data identifying the sensors. The data payload comprises dynamically sized sampling data from the sensor data. The path bundle data packages are normalized and aggregated, and analytics computation is performed on the normalized and aggregated path bundle data packages.
A system and method for spatial data visualization are described. An analytics computation of users' interactions with augmented reality content is performed based on a physical object captured by a viewing device. The analytics computation comprises a computation of geometric paths of the users' interactions with the augmented reality content. A visualization of the analytics computation is displayed based on the computation of the geometric paths of the users' interactions with the augmented reality content.
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
A survey application generates a survey of components associated with a three-dimensional model of an object. The survey application receives video feeds, location information, and orientation information from wearable devices in proximity to the object. The three-dimensional model of the object is generated based on the video feeds, sensor data, location information, and orientation information received from the wearable devices. Analytics are performed on the video feeds to identify a manipulation of the object. The three-dimensional model of the object is updated based on the manipulation. A dynamic status related to the manipulation is generated with respect to reference data related to the object. A survey of components associated with the three-dimensional model of the object is generated.
Techniques of active parallax correction are disclosed. In some embodiments, a first gaze direction of at least one eye of a user is determined. A determination about virtual content can then be made based on the first gaze direction, and the virtual content can be caused to be presented to the user based on the determination. In some embodiments, making the determination comprises determining a first location on a display surface at which to display the virtual content. In some embodiments, the virtual content can be caused to be displayed on the display surface at the first location.
A system and method for offloading augmented reality processing is described. A sensor external to a viewing device of a user tracks a location and an orientation of the viewing device. The location and orientation are defined relative to predefined references of a physical environment local to the user. A server receives a request from the viewing device to offload at least one of a tracking process and an augmented reality rendering process. The augmented reality rendering process is based on an augmented reality database. The server generates offloaded processed data based on the request and the location and the orientation of the viewing device. The offloaded processed data is streamed to the viewing device. A visualization of the offloaded processed data is generated in the viewing device.
A system and method for assigning a virtual user interface to a physical object is described. A virtual user interface for a physical object is created at a machine. The machine is trained to associate the virtual user interface with identifiers of the physical object and tracking data related to the physical object. The virtual user interface is displayed in relation to an image of the physical object.
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
A system and method for discovering a machine using an augmented reality application in a viewing device is described. A default virtual user interface is associated with the machine. The machine broadcasts a status of the machine, the default virtual user interface associated with the machine, and tracking data related to the machine to the viewing device authenticated with the machine and in proximity to the machine. The visualization of the status of the machine and the default virtual user interface are rendered in a display of the viewing device.
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
50.
AUGMENTED REALITY CONTENT ADAPTED TO SPACE GEOMETRY
A system and method for augmented reality content adapted to changes in real world space geometry are described. A device captures an image of a local environment and maps a real world space geometry of the local environment using the image of the local environment. The device generates a visualization of a virtual object in the display relative to the mapped real world space geometry of the local environment. The content of the virtual object is adjusted to changes in the real world space geometry of the local environment.
A system and method for visualization of physical characteristics are described. A sensor coupled to an object generates live data. Physical characteristics of the object are computed using the live data. A visualization of the physical characteristics of the object is generated and communicated to a viewing device configured to capture an image of the object. The viewing device augments the image of the object with the visualization of the physical characteristics of the object.
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
Techniques of mapping gestures to virtual functions are disclosed. In some embodiments, a software application is run on a computing device. The software application may have a first virtual function configured to manipulate a virtual object of the software application in a first predefined way. First image data of a first physical content may be captured using the computing device. A first gesture over the first physical content may be mapped to the first virtual function using the first image data. The virtual object may be displayed over a view of the first physical content on a display screen of the computing device. The first gesture over the first physical content may be detected. The virtual object may be manipulated in the first predefined way in response to detecting the first gesture over the first physical content.
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
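The gesture-mapping entry above pairs a detected gesture with a virtual function that manipulates a virtual object in a predefined way. A minimal sketch of that pairing follows; gesture detection from image data is stubbed out, and the names ("swipe", `rotate_object`) are illustrative assumptions, not part of the described system.

```python
class VirtualObject:
    """A virtual object with one manipulable property."""
    def __init__(self):
        self.rotation = 0

def rotate_object(obj):
    # First virtual function: manipulate the object in a predefined way.
    obj.rotation += 90

# Mapping from a gesture over the physical content to a virtual function.
gesture_to_function = {"swipe": rotate_object}

def on_gesture(gesture, obj):
    # Called when a gesture over the physical content is detected.
    fn = gesture_to_function.get(gesture)
    if fn is not None:
        fn(obj)

obj = VirtualObject()
on_gesture("swipe", obj)  # gesture detected over the first physical content
print(obj.rotation)       # 90
```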
Techniques of providing audio based on visual content are disclosed. In some embodiments, image data of visual content is received. The image data has been captured by a computing device. Audio data is determined based on the received image data, and the corresponding audio of the audio data is then caused to be played on the computing device. Determining the audio data may comprise identifying the received image data based on a comparison with reference image data, and determining the audio data based on the identification of the received image data. The received image data can comprise video or still pictures. The audio of the audio data can comprise a song or a voice recording. The user computing device can comprise one of a smart phone, a tablet computer, a wearable computing device, a vehicle computing device, a laptop computer, and a desktop computer.
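The identify-then-lookup flow described above can be sketched as follows. A production system would use feature matching; here a simple nearest-reference pixel comparison stands in for it, and all names, the sample data, and the distance metric are illustrative assumptions.

```python
def identify(image, references):
    """Identify captured image data by comparison with reference image data.

    references maps content_id -> reference image (a flat pixel list).
    Returns the content_id of the closest reference.
    """
    def distance(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    return min(references, key=lambda cid: distance(image, references[cid]))

# Audio data associated with each identifiable piece of visual content.
audio_for_content = {"poster": "song.mp3", "postcard": "voice_note.mp3"}
references = {"poster": [10, 10, 200], "postcard": [200, 200, 10]}

captured = [12, 9, 190]               # image data captured by the device
content_id = identify(captured, references)
print(audio_for_content[content_id])  # song.mp3
```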
09 - Scientific and electric apparatus and instruments
Goods and services
(1) Wearable computer, namely, a helmet equipped with sensors, cameras, an inertial measurement unit, HD display screen, and downloadable software that creates and acts as an interface for mixing physical objects with perspective based digital content
09 - Scientific and electric apparatus and instruments
Goods and services
wearable computer, namely, a helmet equipped with sensors, cameras, an inertial measurement unit, HD display screen, and software for mixing physical objects with perspective based digital content.
09 - Scientific and electric apparatus and instruments
Goods and services
wearable computer, namely, a helmet equipped with sensors, cameras, an inertial measurement unit, HD display screen, and software for mixing physical objects with perspective based digital content.
09 - Scientific and electric apparatus and instruments
Goods and services
(1) Wearable computer, namely, a helmet equipped with sensors, cameras, an inertial measurement unit, HD display screen, and downloadable software that creates and acts as an interface for mixing physical objects with perspective based digital content
58.
MANIPULATION OF VIRTUAL OBJECT IN AUGMENTED REALITY VIA INTENT
A system and method for manipulating a virtual object based on intent is described. A reference identifier from a physical object is captured. The reference identifier is communicated via a network to a remote server. The remote server includes virtual object data associated with the reference identifier. The virtual object data is received at a computing device. A virtual image is displayed in a virtual landscape using the virtual object data. In response to relative movement between the computing device and the physical object caused by a user, the virtual image is modified. Brain activity data of the user is received. A state of the virtual object in the virtual landscape is changed based on the brain activity data.
A system and method for manipulating a virtual object based on thought is described. A reference identifier from a physical object is captured. Brain activity data of a user is received. The reference identifier and the brain activity data are communicated via a network to a remote server. The remote server selects a virtual object based on the reference identifier and the brain activity data. A communication from the server identifying the virtual object is received. The virtual object is displayed in a virtual landscape.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 19/00 - Digital computing or data processing equipment or methods, specially adapted for specific applications (specially adapted for specific functions G06F 17/00; data processing systems or methods specially adapted for administrative, commercial, financial, managerial, supervisory or forecasting purposes G06Q; healthcare informatics G16H)
60.
DATA MANIPULATION BASED ON REAL WORLD OBJECT MANIPULATION
A system and method for data manipulation based on real world object manipulation is described. A device captures an image of a physical object. The image is communicated via a network to a remote server. The remote server includes virtual object data associated with the image and a communication notification for a user of the device. The device receives the virtual object data and displays a virtual image in a virtual landscape using the virtual object data. In response to relative movement between the device and the physical object caused by the user, the virtual image is modified.
A server receives and analyzes analytics data from an application of one or more devices. The application corresponds to a content generator. The server generates, using the content generator, a visualization content dataset based on the analysis of the analytics data. The visualization content dataset comprises a set of images, along with corresponding analytics virtual object models to be engaged with an image of a physical object captured with the one or more devices and recognized in the set of images. The analytics data and the visualization content dataset may be stored in a storage device of the server.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
G06T 19/00 - Manipulating 3D models or images for computer graphics
A system and method for segmentation of content delivery is described. A virtual object model is divided into a plurality of segments. An order of the plurality of segments is arranged in a delivery queue. Each segment of the virtual object model is delivered in the order of the delivery queue to a device that is configured to recognize a physical object that is associated with the virtual object model.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
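The segmentation entry above divides a virtual object model into segments, arranges them in a delivery queue, and delivers each segment in queue order. A minimal sketch, with the segment size and all names as assumptions:

```python
from collections import deque

def segment(model_bytes, segment_size):
    # Divide the virtual object model into a plurality of segments.
    return [model_bytes[i:i + segment_size]
            for i in range(0, len(model_bytes), segment_size)]

def deliver(model_bytes, segment_size, send):
    # Arrange the segments in a delivery queue, then deliver each
    # segment in the order of the queue.
    queue = deque(segment(model_bytes, segment_size))
    while queue:
        send(queue.popleft())

received = []
deliver(b"virtual-object-model", 8, received.append)
print(b"".join(received))  # b'virtual-object-model'
```

On the receiving side, the device can reassemble the model by concatenating segments as they arrive, as the final line shows.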
A contextual local image recognition module of a device retrieves a primary content dataset from a server and then generates and updates a contextual content dataset based on an image captured with the device. The device stores the primary content dataset and the contextual content dataset. The primary content dataset comprises a first set of images and corresponding virtual object models. The contextual content dataset comprises a second set of images and corresponding virtual object models retrieved from the server.
H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
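The contextual image recognition entry above describes a two-tier store: a primary content dataset retrieved once from the server, and a contextual content dataset generated and updated on demand from captured images. A sketch under those assumptions, with recognition reduced to a dictionary lookup and all names illustrative:

```python
class ContextualRecognizer:
    def __init__(self, server):
        self.server = server
        # Primary content dataset: first set of images -> virtual object models,
        # retrieved from the server up front.
        self.primary = dict(server["primary"])
        # Contextual content dataset: second set, grown from captured images.
        self.contextual = {}

    def recognize(self, image_key):
        if image_key in self.primary:
            return self.primary[image_key]
        if image_key not in self.contextual:
            # Not in either dataset: retrieve the model from the server
            # and update the contextual content dataset.
            self.contextual[image_key] = self.server["extra"].get(image_key)
        return self.contextual[image_key]

server = {"primary": {"logo": "model-A"}, "extra": {"machine": "model-B"}}
r = ContextualRecognizer(server)
print(r.recognize("logo"))     # model-A, from the primary dataset
print(r.recognize("machine"))  # model-B, now cached in the contextual dataset
```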
A server for content creation is described. A content creation tool of the server generates an experience content dataset using a template to process a content identifier and virtual object content. An experience generator of the server provides the experience content dataset to a device that recognizes the content identifier, to generate an interactive experience with the virtual object content at the device.
H04N 21/80 - Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
65.
CAMPAIGN OPTIMIZATION FOR EXPERIENCE CONTENT DATASET
A server for campaign optimization is described. An experience content dataset is generated for an augmented reality application of a device based on analytics results. The analytics results are generated based on analytics data received from the device. The experience content dataset is provided to the device. The device recognizes a content identifier of the experience content dataset and generates an interactive experience with a presentation of virtual object content that is associated with the content identifier.
09 - Scientific and electric apparatus and instruments
28 - Games, toys, sporting articles
Goods and services
(1) Computer software that makes toys digitally interactive, in the nature of a computer application for mobile phones, portable media players, and handheld computers, namely, software that displays three-dimensional digital graphics associated with physical game pieces on mobile phones, portable media players and handheld computers; digitally interactive games featuring downloadable computer programs featuring interactive toy blocks and educational manipulative blocks sold as a unit in the field of computer games; toy blocks and educational manipulative blocks for use in interactive learning computer and board games.
09 - Scientific and electric apparatus and instruments
10 - Medical apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods and services
(1) Computer application software for mobile phones and handheld computers, namely, software that mixes physical objects with perspective based digital content to create augmented reality technology that overlays digital information on an image of something being viewed through a smart phone camera, a tablet computer camera, a computer camera, or a wearable computing headset; downloadable software that mixes physical objects with perspective based digital content to create augmented reality technology that overlays digital information on an image of something being viewed through a smart phone camera, a tablet computer camera, a computer camera, or a wearable computing headset; electroencephalography device in the nature of a wearable computing headset featuring electrodes that creates augmented reality images using brain waves; downloadable software that creates and acts as an interface for mixing physical objects with perspective based digital content for use with electroencephalography devices in the nature of a wearable computing headset featuring electrodes that creates augmented reality images using brain waves. 
(1) Application service provider (ASP) featuring software for use in mixing physical objects with perspective based digital content, to create augmented reality technology that overlays digital information on an image of something being viewed through a smart phone camera, a tablet computer camera, a computer camera, or a wearable computing headset; providing on-line, non-downloadable software for use in mixing physical objects with perspective based digital content to create augmented reality technology that overlays digital information on an image of something being viewed through a smart phone camera, a tablet computer camera, a computer camera, or a wearable computing headset for creating augmented reality applications, for creating mobile web applications, for generating custom quick response codes, for creating coupons, contests, and games, for publishing 3D augmented reality graphics and animation and mobile web content, for use in creating customized marketing campaigns using such discovery methods as quick response codes, near field communication, and object recognition technologies; application service provider featuring software for use in mixing physical objects with perspective based digital content to create augmented reality technology that overlays digital information on an image of something being viewed through a smart phone camera, a tablet computer camera, a computer camera, or a wearable computing headset for creating augmented reality applications, for creating mobile web applications, for generating custom quick response codes, for creating coupons, contests, and games, for publishing 3D augmented reality graphics and animation and mobile web content, for use in creating customized marketing campaigns using such discovery methods as quick response codes, near field communication, and object recognition technologies; providing a web hosting platform for interactive printed materials for publishers and commercial printers; design and development of computer software.
09 - Scientific and electric apparatus and instruments
10 - Medical apparatus and instruments
Goods and services
(1) Electroencephalography device in the nature of a wearable computing headset featuring electrodes that creates augmented reality images using brain waves; downloadable software that creates and acts as an interface for mixing physical objects with perspective based digital content for use with electroencephalography devices in the nature of a wearable computing headset featuring electrodes that creates augmented reality images using brain waves.
(2) Computer application software for mobile phones and handheld computers, namely, software that creates and acts as an interface for mixing physical objects with perspective based digital content to create augmented reality technology that overlays digital information on an image of something being viewed through a smart phone camera, a tablet computer camera, a computer camera, or a wearable computing headset; downloadable software that creates and acts as an interface for mixing physical objects with perspective based digital content to create augmented reality technology that overlays digital information on an image of something being viewed through a smart phone camera, a tablet computer camera, a computer camera, or a wearable computing headset.
09 - Scientific and electric apparatus and instruments
10 - Medical apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods and services
(1) Computer application software for mobile phones and handheld computers, namely, software that mixes physical objects with perspective based digital content to create augmented reality technology that overlays digital information on an image of something being viewed through a smart phone camera, a tablet computer camera, a computer camera, or a wearable computing headset; downloadable software that mixes physical objects with perspective based digital content to create augmented reality technology that overlays digital information on an image of something being viewed through a smart phone camera, a tablet computer camera, a computer camera, or a wearable computing headset; electroencephalography device in the nature of a wearable computing headset featuring electrodes that creates augmented reality images using brain waves; downloadable software that creates and acts as an interface for mixing physical objects with perspective based digital content for use with electroencephalography devices in the nature of a wearable computing headset featuring electrodes that creates augmented reality images using brain waves. 
(1) Providing on-line, non-downloadable software for use in mixing physical objects with perspective based digital content to create augmented reality technology that overlays digital information on an image of something being viewed through a smart phone camera, a tablet computer camera, a computer camera, or a wearable computing headset for creating augmented reality applications, for creating mobile web applications, for generating custom quick response codes, for creating coupons, contests, and games, for publishing 3D augmented reality graphics and animation and mobile web content, for use in creating customized marketing campaigns using such discovery methods as quick response codes, near field communication, and object recognition technologies; application service provider featuring software for use in mixing physical objects with perspective based digital content to create augmented reality technology that overlays digital information on an image of something being viewed through a smart phone camera, a tablet computer camera, a computer camera, or a wearable computing headset for creating augmented reality applications, for creating mobile web applications, for generating custom quick response codes, for creating coupons, contests, and games, for publishing 3D augmented reality graphics and animation and mobile web content, for use in creating customized marketing campaigns using such discovery methods as quick response codes, near field communication, and object recognition technologies; providing a web hosting platform for interactive printed materials for publishers and commercial printers, design and development of computer software.