Systems and methods for display position estimation, e.g., in a three-dimensional (3D) display system rendering interactive augmented reality (AR) and/or virtual reality (VR) experiences. A portable computer system may receive one or more images of a portion of the portable computer system captured via at least one camera, e.g., a camera of a portable device. The one or more images may be received via wireless communications with the portable device and/or via an input/output bus and/or peripheral bus between the portable device and the portable computer system. The portable computer system may compare the one or more images to a set of cached images of the portable computer system and determine, based on the comparison, an angle of the display relative to the base of the portable computer system.
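A minimal sketch of the image-comparison step described above, assuming a cache of reference images keyed by known hinge angles; the function names, the similarity metric, and the cache layout are all illustrative, not taken from the patent.

```python
# Hypothetical sketch: estimate the display-to-base hinge angle by comparing a
# captured image against cached reference images, one per known angle.
import numpy as np

def normalized_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation between two grayscale images."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def estimate_hinge_angle(captured: np.ndarray, cache: dict) -> float:
    """Return the cached angle whose reference image best matches the capture."""
    return max(cache, key=lambda angle: normalized_similarity(captured, cache[angle]))

# Usage with synthetic data: cache references at 10-degree steps.
rng = np.random.default_rng(0)
cache = {angle: rng.random((64, 64)) for angle in range(0, 181, 10)}
captured = cache[90] + 0.05 * rng.random((64, 64))  # noisy view near 90 degrees
print(estimate_hinge_angle(captured, cache))        # -> 90
```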
09 - Scientific and electric apparatus and instruments
Goods & Services
Computer technology, namely, computer hardware, computer styluses, sensors for determining position, and recorded and downloadable computer software for enabling two-dimensional and three-dimensional images; computer hardware; computer styluses; sensors for determining position; cameras; two-dimensional and three-dimensional audio recordings for use with two-dimensional and three-dimensional images; two-dimensional and three-dimensional graphical user interface hardware and downloadable and recorded graphical user interface software; recorded and downloadable interactive multimedia computer programs for enabling direct hands-on interaction with two-dimensional and three-dimensional images and audio recordings in the science, technology, engineering and math (STEM) and career and technical education (CTE) fields
Systems and methods for six-degree of freedom (6-DoF) pose estimation of a user input device, e.g., in a three-dimensional (3D) display system rendering interactive augmented reality (AR) and/or virtual reality (VR) experiences, include the user input device capturing, via a camera disposed at a forward-facing tip of the user input device, images in a direction that the user input device is directed and determining, via an inertial measurement unit (IMU), motion of the user input device in three-dimensional (3D) space. The user input device may then determine pose information associated with the user input device based on the images and motion of the user input device. The pose information may be determined via at least one of a neural network model, an estimation model trained on a set of unique and identifiable patterns, and/or an estimation model trained on a dataset of images.
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
G06F 3/033 - Pointing devices displaced or positioned by the user; Accessories therefor
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
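An illustrative sketch, not the patented method, of how camera-derived pose fixes can be fused with IMU motion data. It is reduced to yaw only (1-DoF) with a complementary filter so the idea stays visible; the blend factor and all names are assumptions.

```python
# Fuse fast-but-drifting gyro integration with occasional drift-free camera
# orientation fixes; a real 6-DoF tracker would fuse full position/orientation.
def fuse_orientation(gyro_rates, camera_yaws, dt=0.01, alpha=0.98):
    """gyro_rates: rad/s samples; camera_yaws: matching list of camera yaw
    estimates, or None for samples where no image fix is available."""
    yaw = 0.0
    for rate, cam_yaw in zip(gyro_rates, camera_yaws):
        yaw += rate * dt                      # integrate IMU (fast, drifts)
        if cam_yaw is not None:               # camera fix (slow, drift-free)
            yaw = alpha * yaw + (1 - alpha) * cam_yaw
    return yaw

# Usage: steady 0.5 rad/s rotation, camera fix every 10th sample.
rates = [0.5] * 100
cams = [0.5 * 0.01 * (i + 1) if i % 10 == 9 else None for i in range(100)]
print(round(fuse_orientation(rates, cams), 3))    # ~0.5 rad after 1 s
```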
Systems and methods for six-degree of freedom (6-DoF) pose estimation of a user input device, e.g., in a three-dimensional (3D) system rendering interactive augmented reality (AR) and/or virtual reality (VR) experiences, include the user input device capturing, via a camera disposed at a forward-facing tip of the user input device, images in a direction the user input device is directed and providing the images to a computer system. The user input device provides inertial measurement unit (IMU) data to the computer system as well. The computer system may then determine pose information associated with the user input device based on the images and IMU data of the user input device. The pose information may be determined via at least one of a neural network model, an estimation model trained on a set of unique and identifiable patterns, and/or an estimation model trained on a dataset of images.
Systems and methods for six-degree of freedom (6-DoF) pose estimation of a user input device, e.g., in a three-dimensional (3D) display system rendering interactive augmented reality (AR) and/or virtual reality (VR) experiences, include the user input device capturing, via a camera disposed at a forward-facing tip of the user input device, images in a direction that the user input device is directed and determining, via an inertial measurement unit (IMU), motion of the user input device in three-dimensional (3D) space. The user input device may then determine pose information associated with the user input device based on the images and motion of the user input device. The pose information may be determined via at least one of a neural network model, an estimation model trained on a set of unique and identifiable patterns, and/or an estimation model trained on a dataset of images.
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 3/038 - Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
G06T 7/70 - Determining position or orientation of objects or cameras
Systems and methods for six-degree of freedom (6-DoF) pose estimation of a user input device, e.g., in a three-dimensional (3D) system rendering interactive augmented reality (AR) and/or virtual reality (VR) experiences, include the user input device capturing, via a camera disposed at a forward-facing tip of the user input device, images in a direction the user input device is directed and providing the images to a computer system. The user input device provides inertial measurement unit (IMU) data to the computer system as well. The computer system may then determine pose information associated with the user input device based on the images and IMU data of the user input device. The pose information may be determined via at least one of a neural network model, an estimation model trained on a set of unique and identifiable patterns, and/or an estimation model trained on a dataset of images.
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
7.
Cloud-based rendering of interactive augmented/virtual reality experiences
Systems and methods for implementing cloud-based rendering of interactive augmented reality (AR) and/or virtual reality (VR) experiences. A client device may initiate execution of a content application on a server and provide information associated with the content application to the server. The client device may initialize, while awaiting a notification from the server, local systems associated with the content application and, upon receipt of the notification, provide, to the server, information associated with the local systems. Further, the client device may receive, from the server, data associated with the content application and render an AR/VR scene based on the received data. The data may be based, at least in part, on the information associated with the local systems. The providing and receiving may be performed periodically, e.g., at a rate that sustains a comfortable viewing environment for a user of the client device viewing the AR/VR scene.
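A minimal sketch of the client-side flow described above, with stub server methods standing in for the real transport. The ordering is the point: kick off the remote application, initialize local systems while waiting for the server's ready notification, then exchange tracking data and rendered frames periodically. All names and timings are assumptions.

```python
import asyncio

class StubServer:
    """Stand-in for the cloud renderer; replace with a real transport."""
    async def start_application(self, app_info): pass
    async def wait_ready(self): await asyncio.sleep(0.01)  # server warm-up
    async def send(self, payload): pass
    async def receive_frame(self): return b"frame"

async def run_client(server, app_info, frames=3, frame_period=1 / 60):
    await server.start_application(app_info)            # initiate remote execution
    ready = asyncio.ensure_future(server.wait_ready())  # server's notification
    local_systems = {"tracking": "initialized"}         # init while waiting
    await ready
    await server.send(local_systems)                    # report local systems
    for _ in range(frames):                             # periodic exchange loop
        await server.send({"pose": (0, 0, 0)})          # viewpoint / input state
        frame = await server.receive_frame()            # rendered AR/VR data
        await asyncio.sleep(frame_period)               # sustain a comfortable rate
    return frame

print(asyncio.run(run_client(StubServer(), {"app": "demo"})))
```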
Systems and methods for providing an electrical waveform to a pi-cell polarization switch. The electrical waveform may reduce/limit ion accumulation in, and/or light leakage associated with, the polarization switch. The electrical waveform may include multiple segments. For example, a first segment may drive the polarization switch to a first polarization state and may be defined by a first portion having a first voltage level and a first polarity and a second portion having the first voltage level and a second polarity opposite the first polarity. A second segment, occurring after the first segment, may drive the polarization switch to a second polarization state and may be defined by a second voltage level having the first polarity. An absolute value of the first voltage level may be greater than an absolute value of the second voltage level.
G02B 30/25 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images, by providing first and second parallax images to an observer’s left and right eyes, of the stereoscopic type, using polarisation techniques
G02F 1/01 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour
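A sketch of the two-segment drive waveform as described: the first segment applies +V1 then -V1 (DC-balanced, limiting ion accumulation), and the second segment holds a smaller +V2 of the same polarity as the first portion, with |V1| > |V2|. The voltage levels, durations, and sample rate are illustrative, not from the patent.

```python
import numpy as np

def picell_waveform(v1=10.0, v2=2.5, t_portion=1e-3, t_hold=4e-3, fs=1e6):
    assert abs(v1) > abs(v2)                    # constraint from the abstract
    n_p, n_h = int(t_portion * fs), int(t_hold * fs)
    seg1 = np.concatenate([np.full(n_p, +v1),   # first portion: +V1
                           np.full(n_p, -v1)])  # second portion: -V1
    seg2 = np.full(n_h, +v2)                    # second segment: hold +V2
    return np.concatenate([seg1, seg2])

w = picell_waveform()
print(w.max(), w.min(), round(w.mean(), 3))     # 10.0 -10.0 1.667
```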
Systems and methods for enhancing trackability of a passive stylus. A six degree of freedom (6DoF) location and orientation of a passive stylus may be tracked by a tracking system via a retroreflector system disposed on the passive stylus. Additionally, characteristic movements of a user's finger, hand, and/or wrist may be recognized by the tracking system. The passive stylus may be usable to interact with a virtual 3D scene being displayed via a 3D display. A user input via the passive stylus may be determined based on the tracked 6DoF location and orientation of the passive stylus and/or the recognized characteristic movements. The retroreflector system may include multiple patterns of retroreflectors, and one of the patterns may be a spiral pattern of retroreflectors disposed along a longitudinal axis of the passive stylus.
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
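A hypothetical geometry sketch of the spiral retroreflector pattern: dots are placed on a helix along the stylus's longitudinal axis so that some subset faces the tracking cameras at any roll angle. The pitch, radius, and dot count are made-up parameters.

```python
import math

def spiral_retroreflectors(length_mm=150.0, radius_mm=5.0, turns=3.0, count=24):
    """Return (x, y, z) dot centers; z runs along the stylus axis."""
    points = []
    for i in range(count):
        t = i / (count - 1)
        theta = 2.0 * math.pi * turns * t       # angle advances along the axis
        points.append((radius_mm * math.cos(theta),
                       radius_mm * math.sin(theta),
                       length_mm * t))
    return points

for x, y, z in spiral_retroreflectors(count=6):
    print(f"({x:6.2f}, {y:6.2f}, {z:6.1f})")
```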
15.
Intelligent stylus beam and assisted probabilistic input to element mapping in 2D and 3D graphical user interfaces
Systems and methods for implementing user selection of a virtual object in a virtual scene. A user input may be received via a user input device. The user input may be an attempt to select a virtual object from a plurality of virtual objects rendered in a virtual scene on a display of a display system. A position and orientation of the user input device may be determined in response to the user input. A probability that the user input selects each virtual object may be calculated via a probability model. Based on the position and orientation of the user input device, a ray-cast procedure and a sphere-cast procedure may be performed to determine the virtual object being selected. The probability of selection may also be considered in determining the virtual object. A virtual beam may be rendered from the user input device to the virtual object.
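A sketch of the selection logic just described: a prior probability per object is combined with ray-cast and sphere-cast hit tests computed from the device's position and orientation. The scoring weights, the sphere radius, and the point-object simplification are illustrative assumptions.

```python
import numpy as np

def pick_object(origin, direction, objects, priors, radius=0.02):
    """objects: {name: center}; priors: {name: probability}. Returns best name."""
    direction = direction / np.linalg.norm(direction)
    scores = {}
    for name, center in objects.items():
        to_obj = center - origin
        along = float(to_obj @ direction)            # distance along the ray
        perp = np.linalg.norm(to_obj - along * direction)
        ray_hit = along > 0 and perp < 1e-3          # ray-cast: essentially on-axis
        sphere_hit = along > 0 and perp < radius     # sphere-cast: within a tube
        scores[name] = priors[name] * (1.0 + 2.0 * ray_hit + 1.0 * sphere_hit)
    return max(scores, key=scores.get)

objs = {"cube": np.array([0, 0, 0.5]), "cone": np.array([0.05, 0, 0.5])}
print(pick_object(np.zeros(3), np.array([0, 0, 1.0]), objs,
                  {"cube": 0.4, "cone": 0.6}))   # ray hit favors cube over prior
```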
Systems and methods for providing an electrical waveform to a pi-cell polarization switch. The electrical waveform may reduce/limit ion accumulation in, and/or light leakage associated with, the polarization switch. The electrical waveform may include multiple segments. For example, a first segment may drive the polarization switch to a first polarization state and may be defined by a first portion having a first voltage level and a first polarity and a second portion having the first voltage level and a second polarity opposite the first polarity. A second segment, occurring after the first segment, may drive the polarization switch to a second polarization state and may be defined by a second voltage level having the first polarity. An absolute value of the first voltage level may be greater than an absolute value of the second voltage level.
G02F 1/01 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour
G02B 30/25 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images, by providing first and second parallax images to an observer’s left and right eyes, of the stereoscopic type, using polarisation techniques
17.
INTELLIGENT STYLUS BEAM AND ASSISTED PROBABILISTIC INPUT TO ELEMENT MAPPING IN 2D AND 3D GRAPHICAL USER INTERFACES
Systems and methods for implementing user selection of a virtual object in a virtual scene. A user input may be received via a user input device. The user input may be an attempt to select a virtual object from a plurality of virtual objects rendered in a virtual scene on a display of a display system. A position and orientation of the user input device may be determined in response to the user input. A probability that the user input selects each virtual object may be calculated via a probability model. Based on the position and orientation of the user input device, a ray-cast procedure and a sphere-cast procedure may be performed to determine the virtual object being selected. The probability of selection may also be considered in determining the virtual object. A virtual beam may be rendered from the user input device to the virtual object.
Systems and methods for enhancing trackability of a passive stylus. A six degree of freedom (6DoF) location and orientation of a passive stylus may be tracked by a tracking system via a retroreflector system disposed on the passive stylus. Additionally, characteristic movements of a user's finger, hand, and/or wrist may be recognized by the tracking system. The passive stylus may be usable to interact with a virtual 3D scene being displayed via a 3D display. A user input via the passive stylus may be determined based on the tracked 6DoF location and orientation of the passive stylus and/or the recognized characteristic movements. The retroreflector system may include multiple patterns of retroreflectors, and one of the patterns may be a spiral pattern of retroreflectors disposed along a longitudinal axis of the passive stylus.
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G02B 5/13 - Reflex reflectors including curved refracting surface; plural curved refracting elements forming part of a unitary body
Systems and methods for enhancing trackability of a passive stylus. A six degree of freedom (6DoF) location and orientation of a passive stylus may be tracked by a tracking system via a retroreflector system disposed on the passive stylus. Additionally, characteristic movements of a user's finger, hand, and/or wrist may be recognized by the tracking system. The passive stylus may be usable to interact with a virtual 3D scene being displayed via a 3D display. A user input via the passive stylus may be determined based on the tracked 6DoF location and orientation of the passive stylus and/or the recognized characteristic movements. The retroreflector system may include multiple patterns of retroreflectors, and one of the patterns may be a spiral pattern of retroreflectors disposed along a longitudinal axis of the passive stylus.
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
21.
Identifying replacement 3D images for 2D images via ranking criteria
Systems and methods for replacing a 2D image with an equivalent 3D image within a web page. Content of a 2D image displayed within a web page may be identified and 3D images may be identified as possible replacements of the 2D image. The 3D images may be ranked based on sets of ranking criteria. A 3D image with a highest-ranking value may be selected based on a ranking of the 3D images. The selected 3D image may be integrated into the web page, thereby replacing the 2D image with the selected 3D image. Further, a user input manipulating the 3D image within the web page may be received. The user input may include movement of a viewpoint of a user relative to a display displaying the web page and/or detection of a beam projected from an end of a user input device intersecting with the 3D image.
H04N 13/279 - Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals the virtual viewpoint locations being selected by the viewers or determined by tracking
G06F 16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
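A minimal sketch of the ranking step: each candidate 3D image is scored against weighted criteria and the highest-ranked one is selected. The criterion names and weights are illustrative, not taken from the claims.

```python
# Hypothetical ranking criteria, each scored in [0, 1] per candidate.
WEIGHTS = {"content_match": 0.5, "resolution": 0.2, "load_cost": 0.3}

def rank_candidates(candidates):
    """candidates: list of dicts with per-criterion scores; best first."""
    def score(c):
        return sum(WEIGHTS[k] * c[k] for k in WEIGHTS)
    return sorted(candidates, key=score, reverse=True)

candidates = [
    {"id": "heart_v2", "content_match": 0.9, "resolution": 0.7, "load_cost": 0.4},
    {"id": "heart_v1", "content_match": 0.8, "resolution": 0.9, "load_cost": 0.9},
]
best = rank_candidates(candidates)[0]
print(best["id"])  # heart_v1: lower match but cheaper to load at this weighting
```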
Systems and methods for replacing a 2D image with an equivalent 3D image within a web page. The 2D image displayed within a web page may be identified and a 3D image with substantially equivalent content may also be identified. The 3D image may be integrated into the web page as a replacement to the 2D image. Further, at least one user input manipulating the 3D image within the web page may be received. The at least one user input may include movement of a viewpoint (or point of view) of a user relative to a display displaying the web page and/or detection of a beam projected from an end of a user input device intersecting with the 3D image.
H04N 13/279 - Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals the virtual viewpoint locations being selected by the viewers or determined by tracking
G06F 16/9535 - Search customisation based on user profiles and personalisation
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
Systems and methods for displaying a three-dimensional (3D) workspace, including a 3D internet browser, in addition to a traditional two-dimensional (2D) workspace, for browsing the internet in a 3D/virtual reality workspace, and for transforming and/or upconverting objects and/or visual media from the 2D workspace and/or 2D webpages to the 3D workspace as 3D objects and/or stereoscopic output for display in the 3D workspace.
H04N 13/275 - Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/02 - Input arrangements using manually operated switches, e.g. using keyboards or dials
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 1/16 - Constructional details or arrangements
G06F 16/957 - Browsing optimisation, e.g. caching or content distillation
G02B 27/00 - Optical systems or apparatus not provided for by any of the other groups
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
Systems and methods for replacing a 2D image with an equivalent 3D image within a web page. The 2D image displayed within a web page may be identified and a 3D image with substantially equivalent content may also be identified. The 3D image may be integrated into the web page as a replacement to the 2D image. Further, at least one user input manipulating the 3D image within the web page may be received. The at least one user input may include movement of a viewpoint (or point of view) of a user relative to a display displaying the web page and/or detection of a beam projected from an end of a user input device intersecting with the 3D image.
H04N 13/279 - Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals the virtual viewpoint locations being selected by the viewers or determined by tracking
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
Systems and methods for replacing a 2D image with an equivalent 3D image within a web page. Content of a 2D image displayed within a web page may be identified and 3D images may be identified as possible replacements of the 2D image. The 3D images may be ranked based on sets of ranking criteria. A 3D image with a highest-ranking value may be selected based on a ranking of the 3D images. The selected 3D image may be integrated into the web page, thereby replacing the 2D image with the selected 3D image. Further, a user input manipulating the 3D image within the web page may be received. The user input may include movement of a viewpoint of a user relative to a display displaying the web page and/or detection of a beam projected from an end of a user input device intersecting with the 3D image.
H04N 13/279 - Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals the virtual viewpoint locations being selected by the viewers or determined by tracking
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
Systems and methods for capturing a two dimensional (2D) image of a portion of a three dimensional (3D) scene may include a computer rendering a 3D scene on a display from a user's point of view (POV). A camera mode may be activated in response to user input and a POV of a camera may be determined. The POV of the camera may be specified by position and orientation of a user input device coupled to the computer, and may be independent of the user's POV. A 2D frame of the 3D scene based on the POV of the camera may be determined and the 2D image based on the 2D frame may be captured in response to user input. The 2D image may be stored locally or on a server of a network.
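A sketch of deriving the capture camera's view matrix from the input device's position and orientation, independent of the user's head-tracked POV. This is the standard right-handed look-at construction; the function names and the example pose are illustrative.

```python
import numpy as np

def view_matrix(eye, forward, up=(0.0, 1.0, 0.0)):
    """Build a 4x4 world-to-camera matrix from a position and view direction."""
    f = forward / np.linalg.norm(forward)
    r = np.cross(f, up); r /= np.linalg.norm(r)
    u = np.cross(r, f)
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = r, u, -f          # camera basis rows
    m[:3, 3] = -m[:3, :3] @ np.asarray(eye)          # translate world to camera
    return m

# Stylus held at (0.1, 0.2, 0.3), pointing toward the scene origin; the 2D
# frame would be rendered with this matrix, then captured and stored.
eye = np.array([0.1, 0.2, 0.3])
print(view_matrix(eye, -eye).round(3))
```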
Systems and methods for displaying a stereoscopic three-dimensional (3D) webpage overlay. User input may be received from a user input device, and in response to determining that the user input device is interacting with the 3D content, at least one of a plurality of render properties associated with the 3D content may be modified. The at least one render property may be incrementally modified over a specified period of time, thereby animating modification of the at least one render property.
G06T 15/00 - 3D [Three Dimensional] image rendering
G09G 5/34 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators for rolling or scrolling
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
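A sketch of the incremental modification just described: a render property (opacity here) is linearly interpolated from its current value to a target over a specified duration, yielding one value per frame. The property choice, frame rate, and linear easing are assumptions.

```python
def animate_property(start, target, duration_s, fps=60):
    """Yield per-frame values that step the property toward the target."""
    frames = max(1, int(duration_s * fps))
    for i in range(1, frames + 1):
        t = i / frames
        yield start + (target - start) * t

# Fade the 3D overlay's opacity from 0.2 to 1.0 over half a second.
values = list(animate_property(0.2, 1.0, 0.5))
print(len(values), round(values[0], 3), values[-1])  # 30 0.227 1.0
```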
28.
3D User Interface—360-degree visualization of 2D webpage content
Systems and methods for displaying a three-dimensional (3D) workspace, including a 3D internet browser, in addition to a traditional two-dimensional (2D) workspace, for browsing the internet in a 3D/virtual reality workspace, and for transforming and/or upconverting objects and/or visual media from the 2D workspace and/or 2D webpages to the 3D workspace as 3D objects and/or stereoscopic output for display in the 3D workspace.
H04N 13/275 - Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 16/957 - Browsing optimisation, e.g. caching or content distillation
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
29.
Pi-cell polarization switch for a three dimensional display system
Techniques are disclosed relating to the transmission of data based on a polarization of a light signal. In some embodiments, data may include 3D video data for viewing by a user. Systems for transmitting data may include a display device and a device for switching the polarization of a video source. Systems for receiving data may include eyewear configured to present images with orthogonal polarization to each eye. In some embodiments, the rate of switching of the polarization switcher may introduce a distortion to the optical data. A Pi-cell device may be used in some embodiments to reduce distortion based on switching speed. In some embodiments, polarization switchers may introduce a distortion based on the frequency of transmitted light. In some embodiments, optical elements included in the transmitting or receiving devices may be configured to reduce distortions based on frequency.
G02F 1/139 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour based on liquid crystals, e.g. single liquid crystal display cells characterised by the electro-optical or magneto-optical effect, e.g. field-induced phase transition, orientation effect, guest-host interaction or dynamic scattering based on orientation effects in which the liquid crystal remains transparent
G02F 1/1335 - Structural association of cells with optical devices, e.g. polarisers or reflectors
G02B 27/00 - Optical systems or apparatus not provided for by any of the other groups
G02B 27/26 - Other optical systems; Other optical apparatus for producing stereoscopic or other three-dimensional effects involving polarising means
H04N 13/302 - Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
H04N 13/337 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using polarisation multiplexing
H04N 13/366 - Image reproducers using viewer tracking
G02B 27/22 - Other optical systems; Other optical apparatus for producing stereoscopic or other three-dimensional effects
G06F 1/16 - Constructional details or arrangements
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
H04N 13/383 - Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
Systems and methods for increasing dynamic contrast in a liquid crystal display (LCD) may include a segmented backlight that may include one or more segments and one or more sets of light emitting diodes (LEDs). Each set of LEDs may be configured to illuminate a corresponding segment, and each segment may include a notch(es) configured as a light barrier to reduce light leakage to non-adjacent segments. The notch(es) may be of variable length, depth, and width and may be three-dimensional, having a width that varies along the depth and length of the notch and a depth that varies along the width and length of the notch. In some embodiments, the notch(es) may be reflective, at least partially opaque, and/or blackened.
Systems and methods for displaying a three-dimensional (3D) workspace, including a 3D internet browser, in addition to a traditional two-dimensional (2D) workspace, for browsing the internet in a 3D/virtual reality workspace, and for transforming and/or upconverting objects and/or visual media from the 2D workspace and/or 2D webpages to the 3D workspace as 3D objects and/or stereoscopic output for display in the 3D workspace.
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
H04N 13/293 - Generating mixed stereoscopic images; Generating mixed monoscopic and stereoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
H04N 13/341 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using temporal multiplexing
H04N 13/279 - Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals the virtual viewpoint locations being selected by the viewers or determined by tracking
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
G06F 1/16 - Constructional details or arrangements
Systems and methods for digitally drawing on virtual 3D object surfaces using a 3D display system. A 3D drawing mode may be enabled and a display screen of the system may correspond to a zero parallax plane of a 3D scene that may present a plurality of surfaces at non-zero parallax planes. User input may be received at a location on the display screen, and in response, a surface may be specified, rendered, and displayed at the zero parallax plane. Further, additional user input on the display screen may be received specifying drawing motion across the rendered and displayed surface. The drawing motion may start at the location and continue across a boundary between the surface and another contiguous surface. Accordingly, in response to the drawing motion crossing the boundary, the contiguous surface may be rendered and displayed at the zero parallax plane along with results of the drawing motion.
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
H04N 13/305 - Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using lenticular lenses, e.g. arrangements of cylindrical lenses
H04N 13/334 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using spectral multiplexing
H04N 13/337 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using polarisation multiplexing
H04N 13/341 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using temporal multiplexing
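A state-machine sketch of the boundary-crossing behavior described above: while a stroke stays on the active surface it accumulates there; when it crosses into a contiguous surface, that surface is brought to the zero-parallax plane and the stroke continues on it. The surface layout and all names are illustrative.

```python
def draw_stroke(points, surface_of, bring_to_zero_parallax):
    """points: stroke samples; surface_of(p) -> surface id at sample p."""
    active = surface_of(points[0])
    bring_to_zero_parallax(active)                 # present surface flat on screen
    strokes = {active: [points[0]]}
    for p in points[1:]:
        s = surface_of(p)
        if s != active:                            # crossed a surface boundary
            active = s
            bring_to_zero_parallax(active)         # re-plane the new surface
            strokes.setdefault(active, [])
        strokes[active].append(p)
    return strokes

# Two faces of a virtual object meeting at x = 0.5; the stroke crosses over.
surface_of = lambda p: "left_face" if p[0] < 0.5 else "right_face"
out = draw_stroke([(0.1, 0.5), (0.4, 0.5), (0.6, 0.5)],
                  surface_of, lambda s: print("zero-parallax:", s))
print({k: len(v) for k, v in out.items()})  # {'left_face': 2, 'right_face': 1}
```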
Systems and methods for displaying a stereoscopic three-dimensional (3D) webpage overlay. In some embodiments, user input may be received from a user input device, and in response to determining that the user input device is not substantially concurrently interacting with the 3D content, the user input may be interpreted based on a 2D mode of interaction. In addition, the user input may be interpreted based on a 3D mode of interaction in response to determining that the user input device is substantially concurrently interacting with the 3D content. The 2D mode of interaction corresponds to a first visual cursor, such as a mouse cursor, and the 3D mode of interaction corresponds to a second visual cursor, such as a virtual beam rendered to extend from a tip of the user input device.
G06F 3/033 - Pointing devices displaced or positioned by the user; Accessories therefor
H04N 13/275 - Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
G06T 11/60 - Editing figures and text; Combining figures or text
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
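A sketch of the mode dispatch just described: if the stylus ray currently intersects the 3D content, input is interpreted in the 3D mode with a virtual-beam cursor; otherwise it falls back to the 2D mode with a mouse cursor. The intersection test is a trivial stand-in, and all names are assumptions.

```python
def interpret_input(event, stylus_ray, content_3d_bounds):
    """Route an input event to the 2D or 3D interaction mode."""
    if intersects(stylus_ray, content_3d_bounds):
        return {"mode": "3D", "cursor": "virtual_beam", "event": event}
    return {"mode": "2D", "cursor": "mouse", "event": event}

def intersects(ray, bounds):
    """Trivial 1D stand-in for a real ray/volume intersection test."""
    origin, direction = ray
    lo, hi = bounds
    return direction > 0 and lo <= origin <= hi

print(interpret_input("click", (0.3, 1.0), (0.0, 0.5))["mode"])  # 3D
print(interpret_input("click", (0.8, 1.0), (0.0, 0.5))["mode"])  # 2D
```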
34.
Transitioning between 2D and stereoscopic 3D webpage presentation
Systems and methods for displaying a stereoscopic three-dimensional (3D) webpage overlay. In some embodiments, user input may be received from a user input device, and in response to determining that the user input device is substantially concurrently interacting with the 3D content, at least one of a plurality of render properties associated with the 3D content may be modified. In some embodiments, the at least one render property may be incrementally modified over a specified period of time, thereby animating modification of the at least one render property.
Systems and methods for increasing dynamic contrast in a liquid crystal display (LCD) may include a segmented backlight that may include one or more segments and one or more sets of light emitting diodes (LEDs). Each set of LEDs may be configured to illuminate a corresponding segment, and each segment may include a notch(es) configured as a light barrier to reduce light leakage to non-adjacent segments. The notch(es) may be of variable length, depth, and width and may be three-dimensional, having a width that varies along the depth and length of the notch and a depth that varies along the width and length of the notch. In some embodiments, the notch(es) may be reflective, at least partially opaque, and/or blackened.
Systems and methods for displaying a three-dimensional (3D) workspace, including a 3D internet browser, in addition to a traditional two-dimensional (2D) workspace, for browsing the internet in a 3D/virtual reality workspace, and for transforming and/or upconverting objects and/or visual media from the 2D workspace and/or 2D webpages to the 3D workspace as 3D objects and/or stereoscopic output for display in the 3D workspace.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
H04N 13/341 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using temporal multiplexing
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
H04N 13/279 - Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals the virtual viewpoint locations being selected by the viewers or determined by tracking
H04N 13/293 - Generating mixed stereoscopic images; Generating mixed monoscopic and stereoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
Systems and methods for displaying a three-dimensional (3D) workspace, including a 3D internet browser, in addition to a traditional two-dimensional (2D) workspace, for browsing the internet in a 3D/virtual reality workspace, and for transforming and/or upconverting objects and/or visual media from the 2D workspace and/or 2D webpages to the 3D workspace as 3D objects and/or stereoscopic output for display in the 3D workspace.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
H04N 1/00 - Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
G06F 17/22 - Manipulating or registering by use of codes, e.g. in sequence of text characters
G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
H04N 13/356 - Image reproducers having separate monoscopic and stereoscopic modes
H04N 13/293 - Generating mixed stereoscopic images; Generating mixed monoscopic and stereoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
H04N 13/279 - Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals the virtual viewpoint locations being selected by the viewers or determined by tracking
G06F 16/957 - Browsing optimisation, e.g. caching or content distillation
38.
3D user interface—360-degree visualization of 2D webpage content
Systems and methods for displaying a three-dimensional (3D) workspace, including a 3D internet browser, in addition to a traditional two-dimensional (2D) workspace, for browsing the internet in a 3D/virtual reality workspace, and for transforming and/or upconverting objects and/or visual media from the 2D workspace and/or 2D webpages to the 3D workspace as 3D objects and/or stereoscopic output for display in the 3D workspace.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 17/30 - Information retrieval; Database structures therefor
H04N 13/275 - Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
39.
Integrating real world conditions into virtual imagery
Systems and methods for incorporating real world conditions into a three-dimensional (3D) graphics object are described herein. In some embodiments, images of a physical location of a user of a three-dimensional (3D) display system may be received from at least one camera, and a data imagery map of the physical location may be determined based at least in part on the received images. The data imagery map may capture real world conditions associated with the physical location of the user. Instructions to render a 3D graphics object may be generated, and the data imagery map may be incorporated into a virtual 3D scene comprising the 3D graphics object, thereby incorporating the real world conditions into virtual world imagery. In some embodiments, the data imagery map may include a light map, a sparse light field, and/or a depth map of the physical location.
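A sketch of one form the "data imagery map" could take: a coarse light map built by averaging luminance from camera images over a grid, which a renderer could then use to light the virtual 3D scene. The grid size and the Rec. 709 luminance weights are assumptions, not details from the patent.

```python
import numpy as np

def build_light_map(images, grid=(4, 4)):
    """images: list of HxWx3 RGB arrays in [0,1]; returns mean luminance per cell."""
    gh, gw = grid
    acc = np.zeros(grid)
    for img in images:
        lum = img @ np.array([0.2126, 0.7152, 0.0722])   # Rec. 709 luminance
        h, w = lum.shape
        for i in range(gh):
            for j in range(gw):
                cell = lum[i * h // gh:(i + 1) * h // gh,
                           j * w // gw:(j + 1) * w // gw]
                acc[i, j] += cell.mean()
    return acc / len(images)

rng = np.random.default_rng(1)
print(build_light_map([rng.random((64, 64, 3)) for _ in range(2)]).round(2))
```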
Systems and methods for interacting with a display system using a personal electronic device (PED). The display system may establish communication with and receive user input from the PED. The display system may use the received user input to generate and/or update content displayed on a display of the display system.
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
41.
Pi-cell polarization switch for a three dimensional display system
Techniques are disclosed relating to the transmission of data based on a polarization of a light signal. In some embodiments, data may include 3D video data for viewing by a user. Systems for transmitting data may include a display device and a device for switching the polarization of a video source. Systems for receiving data may include eyewear configured to present images with orthogonal polarization to each eye. In some embodiments, the rate of switching of the polarization switcher may introduce a distortion to the optical data. A Pi-cell device may be used in some embodiments to reduce distortion based on switching speed. In some embodiments, polarization switchers may introduce a distortion based on the frequency of transmitted light. In some embodiments, optical elements included in the transmitting or receiving devices may be configured to reduce distortions based on frequency.
G02F 1/139 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour based on liquid crystals, e.g. single liquid crystal display cells characterised by the electro-optical or magneto-optical effect, e.g. field-induced phase transition, orientation effect, guest-host interaction or dynamic scattering based on orientation effects in which the liquid crystal remains transparent
G02F 1/1335 - Structural association of cells with optical devices, e.g. polarisers or reflectors
G02B 27/00 - Optical systems or apparatus not provided for by any of the other groups
G02B 27/26 - Other optical systems; Other optical apparatus for producing stereoscopic or other three-dimensional effects involving polarising means
H04N 13/302 - Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
H04N 13/337 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using polarisation multiplexing
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
H04N 13/366 - Image reproducers using viewer tracking
H04N 13/383 - Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
42.
Stereoscopic display system using light field type data
Systems and methods for a head tracked stereoscopic display system that uses light field type data may include receiving light field type data corresponding to a scene. The stereoscopic display system may track a user's head. Using the received light field type data and the head tracking, the system may generate three dimensional (3D) virtual content that corresponds to a virtual representation of the scene. The stereoscopic display system may then present the 3D virtual content to a user. The stereoscopic display system may present a left eye perspective image and a right eye perspective image of the scene to the user based on the position and orientation of the user's head. The images presented to the user may be updated based on a change in the position or the orientation of the user's head or based on receiving user input.
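A sketch of driving a stereo display from light-field-type data: given a set of pre-captured views keyed by camera position, the nearest views are selected for the tracked left- and right-eye positions. A real system would interpolate between views rather than snap to the nearest one; everything here is an illustrative stand-in.

```python
import numpy as np

def nearest_view(eye_pos, views):
    """views: {position tuple: image}. Return the view captured closest to eye_pos."""
    key = min(views, key=lambda p: np.linalg.norm(np.asarray(p) - eye_pos))
    return views[key]

views = {(x / 10, 0.0, 0.0): f"view_{x}" for x in range(11)}   # 11 viewpoints
head = np.array([0.52, 0.0, 0.0])                              # tracked head
ipd = 0.064                                                    # interpupillary dist.
left = nearest_view(head - [ipd / 2, 0, 0], views)
right = nearest_view(head + [ipd / 2, 0, 0], views)
print(left, right)  # view_5 view_6 -> a distinct perspective per eye
```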
Virtual plane and use in a stylus based three dimensional (3D) stereoscopic display system. A virtual plane may be displayed in a virtual 3D space on a display of the 3D stereoscopic display system. The virtual plane may extend from a stylus of the 3D stereoscopic display system. Content may be generated in response to a geometric relationship of the virtual plane with at least one virtual object in the virtual 3D space. The generated content may indicate one or more attributes of the at least one virtual object. The content may be presented via the 3D stereoscopic display system.
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
Systems and methods for a head tracked stereoscopic display system that uses light field type data may include receiving light field type data corresponding to a scene. The stereoscopic display system may track a user's head. Using the received light field type data and the head tracking, the system may generate three dimensional (3D) virtual content that corresponds to a virtual representation of the scene. The stereoscopic display system may then present the 3D virtual content to a user. The stereoscopic display system may present a left eye perspective image and a right eye perspective image of the scene to the user based on the position and orientation of the user's head. The images presented to the user may be updated based on a change in the position or the orientation of the user's head or based on receiving user input.
In some embodiments, a system for tracking with reference to a three-dimensional display system may include a display device, an image processor, a surface including at least three emitters, at least two sensors, and a processor. The display device may image, during use, a first stereo three-dimensional image. The surface may be positionable, during use, with reference to the display device. At least two of the sensors may detect, during use, light received from at least three of the emitters as light blobs. The processor may correlate, during use, the assessed referenced positions of the detected light blobs such that a first position/orientation of the surface is assessed. The image processor may generate, during use, the first stereo three-dimensional image using the assessed first position/orientation of the surface with reference to the display. The image processor may generate, during use, a second stereo three-dimensional image using an assessed second position/orientation of the surface with reference to the display.
Systems and methods for navigating a 3D stereoscopic scene displayed via a 3D stereoscopic display system using user head tracking. A reference POV including a reference user head position and a reference user head orientation may be established. The user head POV may be tracked, including monitoring user head positional displacements and user head angular rotations relative to the reference POV. In response to the tracking, a camera POV used to render the 3D stereoscopic scene may be adjusted based on a non-linear mapping between changes in the camera POV and the user head positional displacements and user head angular rotations relative to the reference POV. The non-linear mapping may include a mapping of user head positional displacements relative to the reference POV to translational movements in the camera POV and a mapping of user head angular rotations relative to the reference POV to rotations in the camera POV.
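One way to read the non-linear mapping described above is as a gain curve from tracked head displacement to camera translation: small head motions move the camera subtly while larger ones are amplified. The cubic-plus-linear curve and clamping limit below are assumptions chosen only to make the idea concrete.

```python
# A minimal sketch of one possible non-linear head-to-camera mapping; the
# cubic gain curve and clamping limit are illustrative assumptions.
import numpy as np

def camera_offset(head_disp, gain=4.0, limit=1.0):
    """Map head displacement (m) to camera translation non-linearly."""
    mapped = gain * head_disp ** 3 + head_disp
    return np.clip(mapped, -limit, limit)

for dx in (0.01, 0.05, 0.20):
    print(dx, float(camera_offset(np.array(dx))))
```

An analogous curve could map head angular rotations to camera rotations, as the entry describes.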
Modifying perspective of stereoscopic images provided by one or more displays based on changes in user viewpoint. The one or more displays may include a first display that is provided substantially horizontal for displaying 3D horizontal perspective images and/or a second display that is provided substantially vertical for displaying text or conventional images such as 2D images, or 3D vertical perspective images. The horizontal display surface may typically be positioned directly in front of the user, at about the height of a desktop surface, so that the user has about a 45° looking angle. The vertical display surface may be positioned in front of the user, preferably behind and above the horizontal display surface.
System and methods for user interface elements for use within a 3D scene. The 3D scene may be presented by at least one display, which includes displaying at least one stereoscopic image of the 3D scene by the display(s). The 3D scene may be presented according to a first viewpoint. One or more user interface elements may be used. The 3D scene may be updated in response to the use of the user interface elements.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
Embodiments of the present invention generally relate to interacting with a virtual scene at a perspective which is independent from the perspective of the user. Methods and systems can include tracking and defining a perspective of the user based on the position and orientation of the user in the physical space, projecting a virtual scene for the user perspective to a virtual plane, tracking and defining a perspective of a freehand user input device based on the position and orientation of the freehand user input device, identifying a mark in the virtual scene which corresponds to the position and orientation of the device in the physical space, creating a virtual segment from the mark, and interacting with virtual objects in the virtual scene at the end point of the virtual segment, as controlled using the device.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
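A minimal sketch of the segment construction in the preceding entry, assuming the mark is the intersection of the device ray with the projection plane and that the virtual segment extends a fixed reach beyond the mark; both the field names and the fixed reach are hypothetical.

```python
# Sketch under stated assumptions: the "mark" is where the device ray meets
# the virtual plane, and the segment extends a fixed reach beyond it.
import numpy as np

def segment_endpoint(dev_pos, dev_dir, plane_n, plane_d, reach=0.15):
    """Project the device ray onto the projection plane to find the mark,
    then extend a virtual segment of length `reach` past it."""
    dev_dir = dev_dir / np.linalg.norm(dev_dir)
    t = (plane_d - np.dot(plane_n, dev_pos)) / np.dot(plane_n, dev_dir)
    mark = dev_pos + t * dev_dir
    return mark + reach * dev_dir             # interact with objects here

end = segment_endpoint(np.array([0.0, 0.2, 0.5]), np.array([0.0, -0.2, -1.0]),
                       plane_n=np.array([0.0, 0.0, 1.0]), plane_d=0.0)
print(end)
```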
50.
Presenting a view within a three dimensional scene
Presenting a view based on a virtual viewpoint in a three dimensional (3D) scene. The 3D scene may be presented by at least one display, which includes displaying at least one stereoscopic image of the 3D scene by the display(s). The 3D scene may be presented according to a first viewpoint. A virtual viewpoint may be determined within the 3D scene that is different than the first viewpoint. The view of the 3D scene may be presented on the display(s) according to the virtual viewpoint and/or the first viewpoint. The presentation of the view of the 3D scene is performed concurrently with presenting the 3D scene.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06T 15/00 - 3D [Three Dimensional] image rendering
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
System and method for invoking 2D and 3D operational modes of a 3D pointing device in a 3D presentation system. A 3D stereoscopic scene and a 2-dimensional (2D) scene are displayed concurrently via at least one stereoscopic display device. A current cursor position is determined based on a 6 degree of freedom 3D pointing device. The cursor is displayed concurrent with the 3D stereoscopic scene and the 2D scene, where the cursor operates in a 2D mode in response to being inside a specified volume, where, in the 2D mode, the cursor is usable to interact with the 2D scene, and where the cursor operates in a 3D mode in response to being outside the specified volume, where, in the 3D mode, the cursor is usable to interact with the 3D stereoscopic scene.
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
G06F 3/038 - Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
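The 2D/3D mode switch above reduces to a containment test against the specified volume. The sketch assumes an axis-aligned slab near the display plane; the bounds and mode names are illustrative.

```python
# Minimal sketch, assuming the "specified volume" is an axis-aligned box in
# display space; the box bounds and mode names are illustrative.
import numpy as np

VOLUME_MIN = np.array([-0.2, -0.05, -0.02])   # assumed 2D-interaction slab
VOLUME_MAX = np.array([ 0.2,  0.05,  0.02])   # hugging the display plane

def cursor_mode(cursor_pos):
    """'2d' while the 6-DoF cursor sits inside the volume, else '3d'."""
    inside = np.all((cursor_pos >= VOLUME_MIN) & (cursor_pos <= VOLUME_MAX))
    return "2d" if inside else "3d"

print(cursor_mode(np.array([0.0, 0.0, 0.0])))    # 2d
print(cursor_mode(np.array([0.0, 0.0, 0.3])))    # 3d
```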
52.
Enhancing the coupled zone of a stereoscopic display
Systems and methods for calibrating a three dimensional (3D) stereoscopic display system may include rendering a virtual model on a display of a 3D stereoscopic display system that may include a substantially horizontal display. The virtual model may be geometrically similar to a physical object placed at a location on the display. A vertex of the virtual model may be adjusted in response to user input. The adjustment may be such that the vertex of the virtual model is substantially coincident with a corresponding vertex of the physical object.
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
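Assuming a simple translation-only correction model (the entry itself only requires vertex coincidence), the user's final adjustment yields a display-space offset that can be applied to subsequent renders:

```python
# Illustrative sketch: once the user has dragged a rendered vertex onto the
# physical object's corresponding vertex, the offset gives a display-space
# correction. The single-vertex translation model is an assumption.
import numpy as np

def calibration_offset(rendered_vertex, adjusted_vertex):
    """Translation that maps the rendered vertex onto the physical one."""
    return adjusted_vertex - rendered_vertex

offset = calibration_offset(np.array([0.10, 0.05, 0.00]),
                            np.array([0.11, 0.05, 0.01]))
print(offset)   # apply to subsequent renders of the virtual model
```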
Systems and methods for capturing a two dimensional (2D) image of a portion of a three dimensional (3D) scene may include a computer rendering a 3D scene on a display from a user's point of view (POV). A camera mode may be activated in response to user input and a POV of a camera may be determined. The POV of the camera may be specified by position and orientation of a user input device coupled to the computer, and may be independent of the user's POV. A 2D frame of the 3D scene based on the POV of the camera may be determined and the 2D image based on the 2D frame may be captured in response to user input. The 2D image may be stored locally or on a server of a network.
System and method for invoking 2D and 3D operational modes of a 3D pointing device in a 3D presentation system. A 3D stereoscopic scene and a 2-dimensional (2D) scene are displayed concurrently via at least one stereoscopic display device. A current cursor position is determined based on a 6 degree of freedom 3D pointing device. The cursor is displayed concurrent with the 3D stereoscopic scene and the 2D scene, where the cursor operates in a 2D mode in response to being inside a specified volume, where, in the 2D mode, the cursor is usable to interact with the 2D scene, and where the cursor operates in a 3D mode in response to being outside the specified volume, where, in the 3D mode, the cursor is usable to interact with the 3D stereoscopic scene.
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
G06F 3/038 - Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
A voltage may be provided to a liquid crystal addressable element as part of a liquid crystal device. The provided voltage may be reduced from a driven state to a relaxed state in a time period greater than 1 μs. The reduction may further be performed in less than 20 ms. The liquid crystal device may be a polarization switch, which in some embodiments may be a multi-segment polarization switch. In one embodiment, pulses of limited duration of a light source may be provided to the polarization switch. The manner of voltage reduction may reduce optical bounce of the liquid crystal device and may allow one or more of the pulses of the light source to be shifted later in time.
G02F 1/133 - Constructional arrangements; Operation of liquid crystal cells; Circuit arrangements
G09G 3/36 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix by control of light from an independent source using liquid crystals
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
G09G 3/34 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix by control of light from an independent source
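The entry's timing window (slower than 1 μs, faster than 20 ms) can be made concrete with a sampled ramp; the linear profile and sample rate below are assumptions, since the entry does not fix a waveform shape.

```python
# A sketch of the kind of slowed ramp-down the entry describes: the voltage
# falls from the driven to the relaxed level over a period longer than 1 us
# but under 20 ms. The linear profile and sample rate are assumptions.
import numpy as np

def ramp_down(v_driven, v_relaxed, duration_s, sample_rate=1e6):
    """Sampled linear voltage ramp; duration must lie in (1e-6, 20e-3) s."""
    assert 1e-6 < duration_s < 20e-3, "ramp must be >1 us and <20 ms"
    n = max(2, int(duration_s * sample_rate))
    return np.linspace(v_driven, v_relaxed, n)

wave = ramp_down(v_driven=5.0, v_relaxed=0.0, duration_s=2e-3)
print(len(wave), wave[0], wave[-1])
```

Slowing the fall in this way is what the entry credits with reducing optical bounce and permitting later light-source pulses.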
In some embodiments, a system and/or method may include accessing three-dimensional (3D) imaging software on a remote server. The method may include accessing over a network a 3D imaging software package on a remote server using a first system. The method may include assessing, using the remote server, a capability of the first system to execute the 3D imaging software package. The method may include displaying an output of the 3D imaging software using the first system based upon the assessed capabilities of the first system. In some embodiments, the method may include executing a first portion of the 3D imaging software using the remote server based upon the assessed capabilities of the first system. In some embodiments, the method may include executing a second portion of the 3D imaging software using the first system based upon the assessed capabilities of the first system.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
G06T 15/00 - 3D [Three Dimensional] image rendering
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
H04L 29/08 - Transmission control procedure, e.g. data link level control procedure
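A hedged sketch of the capability-based split the entry describes: the server inspects a reported client capability and divides the 3D imaging work accordingly. The capability fields, threshold, and portion names are hypothetical.

```python
# Hedged sketch of the split-execution idea: the server assesses the client's
# reported capability and decides which portion of the 3D imaging work runs
# where. The capability fields and threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class ClientCapability:
    gpu_memory_mb: int
    supports_stereo: bool

def plan_execution(cap: ClientCapability):
    """Return (server_portion, client_portion) based on assessed capability."""
    if cap.gpu_memory_mb >= 2048 and cap.supports_stereo:
        return ("scene streaming", "full stereo rendering")
    return ("full rendering", "decode and display only")

print(plan_execution(ClientCapability(gpu_memory_mb=4096, supports_stereo=True)))
print(plan_execution(ClientCapability(gpu_memory_mb=512, supports_stereo=False)))
```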
In some embodiments, a system and/or method may include accessing three-dimensional (3D) imaging software on a remote server. The method may include accessing over a network a 3D imaging software package on a remote server using a first system. The method may include assessing, using the remote server, a capability of the first system to execute the 3D imaging software package. The method may include displaying an output of the 3D imaging software using the first system based upon the assessed capabilities of the first system. In some embodiments, the method may include executing a first portion of the 3D imaging software using the remote server based upon the assessed capabilities of the first system. In some embodiments, the method may include executing a second portion of the 3D imaging software using the first system based upon the assessed capabilities of the first system.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
In some embodiments, a system for tracking with reference to a three-dimensional display system may include a display device, an image processor, a surface including at least three emitters, at least two sensors, and a processor. The display device may image, during use, a first stereo three-dimensional image. The surface may be positionable, during use, with reference to the display device. At least two of the sensors may detect, during use, light received from at least three of the emitters as light blobs. The processor may correlate, during use, the assessed referenced positions of the detected light blobs such that a first position/orientation of the surface is assessed. The image processor may generate, during use, the first stereo three-dimensional image using the assessed first position/orientation of the surface with reference to the display. The image processor may generate, during use, a second stereo three-dimensional image using an assessed second position/orientation of the surface with reference to the display.
Systems and methods for enhancement of a coupled zone of a 3D stereoscopic display. The method may include determining a size and a shape of the coupled zone. The coupled zone may include a physical volume specified by the user's visual depth of field with respect to screen position of the 3D stereoscopic display and the user's point of view. Content may be displayed at a first position with a virtual 3D space and the first position may correspond to a position within the coupled zone. It may be determined that the content is not contained in the coupled zone or is within a specified distance from a boundary of the coupled zone and, in response, display of the content may be adjusted such that the content has a second position in the virtual 3D space that corresponds to another position within the coupled zone.
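If the coupled zone is approximated as a depth interval around the screen plane (a simplification; the entry defines it via the user's depth of field and point of view), the repositioning step reduces to a clamp:

```python
# Minimal sketch, assuming the coupled zone reduces to a depth interval in
# front of and behind the screen; the zone bounds and margin are illustrative.
import numpy as np

def keep_in_coupled_zone(content_z, zone_near, zone_far, margin=0.01):
    """Re-position content depth so it stays inside the coupled zone."""
    return float(np.clip(content_z, zone_near + margin, zone_far - margin))

# Content drifting past the far boundary is pulled back inside the zone.
print(keep_in_coupled_zone(0.35, zone_near=-0.10, zone_far=0.30))
```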
Systems and methods for calibrating a three dimensional (3D) stereoscopic display system may include rendering a virtual object on a display of a 3D stereoscopic display system that may include a substantially horizontal display. The virtual object may be geometrically similar to a physical object placed at a location on the display. At least one dimension of the virtual object may be adjusted in response to user input. The adjustment may be such that the at least one dimension of the virtual object is approximately the same as a corresponding at least one dimension of the physical object.
Tracking objects presented within a stereo three-dimensional (3D) scene. The user control device may include one or more visually indicated points for at least one tracking sensor to track. The user control device may also include other position determining devices, for example, an accelerometer and/or gyroscope. Precise 3D coordinates of the stylus may be determined based on location information from the tracking sensor(s) and additional information from the other position determining devices. A stereo 3D scene may be updated to reflect the determined coordinates.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 3/038 - Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
G06F 3/046 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by electromagnetic means
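The combination of tracked light blobs and accelerometer/gyroscope data in the stylus-tracking entry above invites a standard fusion illustration. The complementary filter below is a generic technique, not necessarily the patent's estimator; the blend weight is an assumption.

```python
# An illustrative complementary-filter fusion of the two information sources
# the entry names: optical blob tracking (absolute, low rate) and inertial
# data (relative, high rate). The blend weight is an assumption.
import numpy as np

def fuse_position(optical_pos, imu_pos, alpha=0.98):
    """Blend IMU dead-reckoned position with the optical absolute fix;
    the optical term corrects drift, the IMU term keeps motion smooth."""
    return alpha * imu_pos + (1.0 - alpha) * optical_pos

imu = np.array([0.102, 0.050, 0.201])     # integrated accelerometer/gyro
opt = np.array([0.100, 0.049, 0.200])     # triangulated from tracked blobs
print(fuse_position(opt, imu))
```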
62.
System and methods for using modified driving waveforms to inhibit acoustic noise during driving of a liquid crystal polarization rotator
In some embodiments, a system and/or method may operate a liquid crystal device. The method may include increasing a voltage provided to a liquid crystal addressable element of the liquid crystal device to a driven level. Said increasing may be performed over a time period greater than 1 ms. The liquid crystal addressable element may be in a driven state at the driven level. The method may include reducing the provided voltage to a relaxed level. The liquid crystal addressable element may be in a relaxed state at the relaxed level. Said increasing of the voltage over the time period to the driven level may result in reduced acoustical noise associated with the provided voltage. In some embodiments, the liquid crystal device may include a three-dimensional (3D) display.
In some embodiments, a system and/or method may include accessing three-dimensional (3D) imaging software on a remote server. The method may include accessing over a network a 3D imaging software package on a remote server using a first system. The method may include assessing, using the remote server, a capability of the first system to execute the 3D imaging software package. The method may include displaying an output of the 3D imaging software using the first system based upon the assessed capabilities of the first system. In some embodiments, the method may include executing a first portion of the 3D imaging software using the remote server based upon the assessed capabilities of the first system. In some embodiments, the method may include executing a second portion of the 3D imaging software using the first system based upon the assessed capabilities of the first system.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
H04L 29/08 - Transmission control procedure, e.g. data link level control procedure
G06F 12/00 - Accessing, addressing or allocating within memory systems or architectures
64.
Methods for automatically assessing user handedness in computer systems and the utilization of such information
In some embodiments, a system and/or method may assess handedness of a user of a system in an automated manner. The method may include displaying a 3D image on a display. The 3D image may include at least one object. The method may include tracking a position and an orientation of an input device in open space in relation to the 3D image. The method may include assessing a handedness of a user based on the position and the orientation of the input device with respect to at least one of the objects. In some embodiments, the method may include configuring at least a portion of the 3D image based upon the assessed handedness. The at least a portion of the 3D image may include interactive menus. In some embodiments, the method may include configuring at least a portion of an interactive hardware associated with the system based upon the assessed handedness.
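A toy version of the handedness assessment above, assuming right-handed users tend to approach targets from the right; the lateral-offset heuristic and its threshold are illustrative, not the patented logic.

```python
# Sketch only: inferring handedness from where the tracked input device
# tends to sit relative to objects it approaches. The lateral-offset
# heuristic is an assumption, not the patented assessment.
import numpy as np

def assess_handedness(device_positions, object_x):
    """Right-handed users tend to approach from the right of the target."""
    mean_offset = np.mean([p[0] - object_x for p in device_positions])
    return "right" if mean_offset > 0 else "left"

samples = [np.array([0.06, 0.1, 0.2]), np.array([0.04, 0.12, 0.18])]
hand = assess_handedness(samples, object_x=0.0)
print(hand)   # e.g., mirror interactive menus for left-handed users
```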
In some embodiments, a system and/or method may include accessing three-dimensional (3D) imaging software on a remote server. The method may include accessing over a network a 3D imaging software package on a remote server using a first system. The method may include assessing, using the remote server, a capability of the first system to execute the 3D imaging software package. The method may include displaying an output of the 3D imaging software using the first system based upon the assessed capabilities of the first system. In some embodiments, the method may include executing a first portion of the 3D imaging software using the remote server based upon the assessed capabilities of the first system. In some embodiments, the method may include executing a second portion of the 3D imaging software using the first system based upon the assessed capabilities of the first system.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
H04L 12/24 - Arrangements for maintenance or administration
G06F 12/00 - Accessing, addressing or allocating within memory systems or architectures
A voltage may be provided to a liquid crystal addressable element as part of a liquid crystal device. The provided voltage may be reduced from a driven state to a relaxed state in a time period greater than 1 μs. The reduction may further be performed in less than 20 ms. The liquid crystal device may be a polarization switch, which in some embodiments may be a multi-segment polarization switch. In one embodiment, pulses of limited duration of a light source may be provided to the polarization switch. The manner of voltage reduction may reduce optical bounce of the liquid crystal device and may allow one or more of the pulses of the light source to be shifted later in time.
G09G 3/36 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix by control of light from an independent source using liquid crystals
G09G 3/38 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix by control of light from an independent source using electrochromic devices
G02F 1/133 - Constructional arrangements; Operation of liquid crystal cells; Circuit arrangements
68.
Non-linear navigation of a three dimensional stereoscopic display
Systems and methods for navigating a 3D stereoscopic scene displayed via a 3D stereoscopic display system using user head tracking. A reference POV including a reference user head position and a reference user head orientation may be established. The user head POV may be tracked, including monitoring user head positional displacements and user head angular rotations relative to the reference POV. In response to the tracking, a camera POV used to render the 3D stereoscopic scene may be adjusted based on a non-linear mapping between changes in the camera POV and the user head positional displacements and user head angular rotations relative to the reference POV. The non-linear mapping may include a mapping of user head positional displacements relative to the reference POV to translational movements in the camera POV and a mapping of user head angular rotations relative to the reference POV to rotations in the camera POV.
Systems and methods for digitally drawing on virtual 3D object surfaces using a 3D display system. A 3D drawing mode may be enabled and a display screen of the system may correspond to a zero parallax plane of a 3D scene that may present a plurality of surfaces at non-zero parallax planes. User input may be received at a location on the display screen, and in response, a surface may be specified, rendered, and displayed at the zero parallax plane. Further, additional user input on the display screen may be received specifying drawing motion across the rendered and displayed surface. The drawing motion may start at the location and continue across a boundary between the surface and another contiguous surface. Accordingly, in response to the drawing motion crossing the boundary, the contiguous surface may be rendered and displayed at the zero parallax plane along with results of the drawing motion.
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
H04N 13/305 - Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using lenticular lenses, e.g. arrangements of cylindrical lenses
H04N 13/334 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using spectral multiplexing
H04N 13/337 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using polarisation multiplexing
H04N 13/341 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using temporal multiplexing
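Sketching the boundary-crossing rule from the drawing entry above: when the stroke leaves the active surface's bounds, the contiguous surface becomes the one rendered at zero parallax. Surface identifiers and the adjacency table are hypothetical.

```python
# Hedged sketch of the boundary-crossing behavior described above: when the
# drawing point leaves the active surface, the contiguous surface is
# brought to the zero parallax plane. Surface ids and adjacency are assumed.
adjacent = {"faceA": "faceB", "faceB": "faceA"}   # hypothetical topology

def on_draw(active_surface, point_on_surface, surface_bounds):
    """Return the surface to render at zero parallax after this stroke step."""
    x, y = point_on_surface
    (xmin, ymin), (xmax, ymax) = surface_bounds
    inside = xmin <= x <= xmax and ymin <= y <= ymax
    return active_surface if inside else adjacent[active_surface]

print(on_draw("faceA", (0.5, 0.5), ((0.0, 0.0), (1.0, 1.0))))  # stays on faceA
print(on_draw("faceA", (1.2, 0.5), ((0.0, 0.0), (1.0, 1.0))))  # switches faceB
```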
Presenting a view based on a virtual viewpoint in a three dimensional (3D) scene. The 3D scene may be presented by at least one display, which includes displaying at least one stereoscopic image of the 3D scene by the display(s). The 3D scene may be presented according to a first viewpoint. A virtual viewpoint may be determined within the 3D scene that is different than the first viewpoint. The view of the 3D scene may be presented on the display(s) according to the virtual viewpoint and/or the first viewpoint. The presentation of the view of the 3D scene is performed concurrently with presenting the 3D scene.
A voltage may be provided to a liquid crystal addressable element as part of a liquid crystal device. The provided voltage may be reduced from a driven state to a relaxed state in a time period greater than 1 μs. The reduction may further be performed in less than 20 ms. The liquid crystal device may be a polarization switch, which in some embodiments may be a multi-segment polarization switch. In one embodiment, pulses of limited duration of a light source may be provided to the polarization switch. The manner of voltage reduction may reduce optical bounce of the liquid crystal device and may allow one or more of the pulses of the light source to be shifted later in time.
G09G 3/36 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix by control of light from an independent source using liquid crystals
Remote collaboration of a subject and a graphics object in a same view of a 3D scene. In one embodiment, one or more cameras of a collaboration system may be configured to capture images of a subject and track the subject (e.g., head of a user, other physical object). The images may be processed and provided to another collaboration system along with a determined viewpoint of the user. The other collaboration system may be configured to render and project the captured images and a graphics object in the same view of a 3D scene.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
In some embodiments, a system for tracking with reference to a three-dimensional display system may include a display device, an image processor, a surface including at least three emitters, at least two sensors, and a processor. The display device may image, during use, a first stereo three-dimensional image. The surface may be positionable, during use, with reference to the display device. At least two of the sensors may detect, during use, light received from at least three of the emitters as light blobs. The processor may correlate, during use, the assessed referenced positions of the detected light blobs such that a first position/orientation of the surface is assessed. The image processor may generate, during use, the first stereo three-dimensional image using the assessed first position/orientation of the surface with reference to the display. The image processor may generate, during use, a second stereo three-dimensional image using an assessed second position/orientation of the surface with reference to the display.
System and method for video processing. At least one overdrive (OD) look-up table (LUT) is provided, where the at least one OD LUT is dependent on input video levels and at least one parameter indicative of at least one attribute of the system or a user of the system. Video levels for a plurality of pixels for an image are received, as well as the at least one parameter. Overdriven video levels are generated via the at least one OD LUT based on the video levels and the at least one parameter. The overdriven video levels are provided to a display device for display of the image. The reception of video levels and at least one parameter, the generation of overdriven video levels, and the provision of overdriven video levels, may be repeated one or more times in an iterative manner to display a sequence of images.
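The overdrive step in the preceding entry can be illustrated with a toy two-dimensional LUT indexed by previous and target levels. Real tables would be far larger and, per the entry, would also vary with the system or user parameter; the values below are invented for the example.

```python
# Minimal sketch of overdrive via a 2D look-up table indexed by previous and
# current video level; the 4-level LUT and its values are illustrative.
import numpy as np

# od_lut[prev_level, target_level] -> driven level (toy 4-level example)
od_lut = np.array([[0, 2, 3, 3],
                   [0, 1, 3, 3],
                   [0, 1, 2, 3],
                   [0, 0, 1, 3]])

def overdrive(prev_frame, cur_frame):
    """Per-pixel overdriven levels pushing the LC response toward the target."""
    return od_lut[prev_frame, cur_frame]

prev = np.array([[0, 3], [1, 2]])
cur  = np.array([[3, 0], [2, 2]])
print(overdrive(prev, cur))
```

The stereo variant in the following entry would apply a gamma LUT first and use separate left and right OD tables, one per eye image.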
System and method for video processing. First video levels for pixels for a left image of a stereo image pair are received from a GPU. Gamma corrected video levels (g-levels) are generated via a gamma look-up table (LUT) based on the first video levels. Outputs of the gamma LUT are constrained by minimum and/or maximum values, thereby excluding values for which corresponding post-OD display luminance values differ from static display luminance values by more than a specified error. Overdriven video levels are generated via a left OD LUT based on the g-levels. The overdriven video levels correspond to display luminance values that differ from corresponding static display luminance values by less than the error threshold, and are provided to a display device for display of the left image. This process is repeated for second video levels for a right image of the stereo image pair, using a right OD LUT.
Modifying perspective of stereoscopic images provided by one or more displays based on changes in user view, user control, and/or display status. A display system may include a housing, a display comprised in the housing, and one or more tracking sensors comprised in the housing. The one or more tracking sensors may be configured to sense user view and/or user control position and orientation information. The one or more tracking sensors may be associated with a position and orientation of the display. The user view and/or user control position and orientation information may be used in generating the rendered left and right eye images for display.
Tracking objects presented within a stereo three-dimensional (3D) scene. The user control device may include one or more visually indicated points for at least one tracking sensor to track. The user control device may also include other position determining devices, for example, an accelerometer and/or gyroscope. Precise 3D coordinates of the stylus may be determined based on location information from the tracking sensor(s) and additional information from the other position determining devices. A stereo 3D scene may be updated to reflect the determined coordinates.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 3/038 - Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
Tools for use within a 3D scene. The 3D scene may be presented by at least one display, which includes displaying at least one stereoscopic image of the 3D scene by the display(s). The 3D scene may be presented according to a first viewpoint. User input may be received to the 3D scene using one or more tools. The 3D scene may be updated in response to the use of the one or more tools.
Presenting a view based on a virtual viewpoint in a three dimensional (3D) scene. The 3D scene may be presented by at least one display, which includes displaying at least one stereoscopic image of the 3D scene by the display(s). The 3D scene may be presented according to a first viewpoint. A virtual viewpoint may be determined within the 3D scene that is different than the first viewpoint. The view of the 3D scene may be presented on the display(s) according to the virtual viewpoint and/or the first viewpoint. The presentation of the view of the 3D scene is performed concurrently with presenting the 3D scene.
Modifying perspective of stereoscopic images provided by one or more displays based on changes in user viewpoint. The one or more displays may include a first display that is provided substantially horizontal for displaying 3D horizontal perspective images and/or a second display that is provided substantially vertical for displaying text or conventional images such as 2D images, or 3D vertical perspective images. The horizontal display surface may typically be positioned directly in front of the user, at about the height of a desktop surface, so that the user has about a 45° looking angle. The vertical display surface may be positioned in front of the user, preferably behind and above the horizontal display surface.
The present invention discloses a horizontal perspective workstation comprising at least two display surfaces, one being substantially horizontal for displaying 3D horizontal perspective images, and one being substantially vertical for text or conventional images such as 2D images, or central perspective images. The horizontal display surface is typically positioned directly in front of the user, at about the height of a desktop surface, so that the user has about a 45° looking angle. The vertical display surface is also positioned in front of the user, preferably behind and above the horizontal display surface.
A method and apparatus to balance the left and right sides of the brain by using binaural beats is disclosed. The disclosed apparatus comprises an electroencephalographic (EEG) system to measure the brain's left and right electrical signals, and an audio generator to generate a binaural beat to compensate for the unbalanced EEG frequencies. The disclosed method includes measuring the brain wave frequency spectrum of the individual, selecting the frequency exhibiting imbalanced behavior, and generating a binaural beat of that frequency. The binaural beat can be continuous or intermittent.
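Generating the compensating binaural beat itself is straightforward: two pure tones whose frequencies differ by the selected imbalanced frequency, one per ear. The carrier frequency, duration, and sample rate below are arbitrary choices for illustration.

```python
# A short sketch of binaural-beat generation as the entry describes it: two
# pure tones whose frequency difference equals the target beat frequency.
# The carrier frequency, duration, and sample rate are assumptions.
import numpy as np

def binaural_beat(beat_hz, carrier_hz=220.0, seconds=5.0, rate=44100):
    """Stereo signal: left at the carrier, right offset by the beat rate."""
    t = np.arange(int(seconds * rate)) / rate
    left = np.sin(2 * np.pi * carrier_hz * t)
    right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
    return np.stack([left, right], axis=1)

signal = binaural_beat(beat_hz=10.0)      # 10 Hz alpha-band beat
print(signal.shape)
```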
The present invention discloses a multi-plane three-dimensional display system comprising at least two display surfaces, one of which displays three-dimensional horizontal perspective images. Further, the display surfaces can have a curvilinear blending display section to merge the various images. The multi-plane display system can comprise various camera eyepoints, one for the horizontal perspective images, and optionally one for the curvilinear blending display surface.