Various implementations for detection and virtualization of tangible interface object dimensions include a method that includes capturing, using a video capture device associated with a computing device, a video stream of a physical activity scene, the video stream including a first tangible interface object representing a measurement attribute, identifying, using a processor of the computing device, the measurement attribute of the first tangible interface object, determining, using the processor of the computing device, a virtual object represented by the measurement attribute of the first tangible interface object, and displaying, on a display of the computing device, a graphical user interface embodying a virtual scene, the virtual scene including the virtual object.
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/655 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition by importing photos, e.g. of the player
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
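The measurement step above could be sketched as follows, assuming a fixed camera with a known pixels-per-centimetre calibration. All names here (`PX_PER_CM`, `pick_virtual_object`, the object labels) are illustrative assumptions, not from the patent text.

```python
# Hypothetical sketch: map a detected object's pixel dimensions to a
# measurement, then choose a size-appropriate virtual object.

PX_PER_CM = 37.8  # assumed calibration constant for the capture setup

def measure_cm(bbox_px):
    """Return the longest side of a detected bounding box (x, y, w, h) in cm."""
    _, _, w, h = bbox_px
    return max(w, h) / PX_PER_CM

def pick_virtual_object(length_cm):
    """Map the measurement attribute to a virtual object for the scene."""
    if length_cm < 5.0:
        return "virtual_pebble"
    if length_cm < 15.0:
        return "virtual_branch"
    return "virtual_log"

bbox = (120, 80, 340, 60)  # detected tangible object, in pixels
print(pick_virtual_object(measure_cm(bbox)))  # virtual_branch (340 px ≈ 9.0 cm)
```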
2.
DETECTION AND VIRTUALIZATION OF HANDWRITTEN OBJECTS
Various implementations for detection and virtualization of handwritten objects include a method that includes capturing a video stream of a physical activity scene, the video stream including a handwritten object in an input area, detecting the input area and the handwritten object within the input area, classifying the handwritten object, normalizing the handwritten object by preprocessing an image frame from the video stream, identifying the handwritten object using the preprocessed image frame, generating a virtualization of the identified handwritten object, and displaying a graphical user interface embodying a virtual scene, the virtual scene including the virtualization.
G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
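The normalization step could look something like the sketch below: binarize a grayscale frame and crop it to the ink's bounding box so the classifier sees a consistent input. The frame format (lists of rows of 0-255 integers) and the threshold are assumptions; a production system would use a proper image library.

```python
# Minimal sketch of preprocessing a handwritten-object image frame.

def binarize(frame, threshold=128):
    """Mark dark pixels (ink) as 1 and light background as 0."""
    return [[1 if px < threshold else 0 for px in row] for row in frame]

def crop_to_content(mask):
    """Crop the binary mask to the smallest rectangle containing ink."""
    rows = [i for i, row in enumerate(mask) if any(row)]
    cols = [j for j in range(len(mask[0])) if any(row[j] for row in mask)]
    if not rows:
        return []  # blank input area: nothing handwritten
    return [row[min(cols):max(cols) + 1] for row in mask[min(rows):max(rows) + 1]]

frame = [
    [255, 255, 255, 255],
    [255,  10,  20, 255],
    [255,  15, 255, 255],
    [255, 255, 255, 255],
]
normalized = crop_to_content(binarize(frame))
print(normalized)  # [[1, 1], [1, 0]]
```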
Various implementations for virtualization of tangible object components include a method that includes capturing, using a video capture device associated with a computing device, a video stream of a physical activity scene including a tangible interface object, capturing, using an audio capture device associated with the computing device, an audio stream of the environment around the audio capture device, the audio stream including a pronunciation of a word by a user, comparing the captured audio stream including the pronunciation of the word to an expected sound model, and displaying a visual cue on a display screen of the computing device based on the comparison.
G10L 21/06 - Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
G10L 21/00 - Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
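One possible shape for the comparison step is sketched below: score extracted audio features against the expected sound model with cosine similarity and pick a visual cue from the result. Real systems would use richer acoustic models; the feature vectors, threshold, and cue names here are illustrative assumptions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def visual_cue(features, expected_model, threshold=0.85):
    """Return the cue to display based on how close the pronunciation is."""
    score = cosine_similarity(features, expected_model)
    return "checkmark" if score >= threshold else "retry_prompt"

expected = [0.9, 0.1, 0.4]                       # assumed model for the word
print(visual_cue([0.88, 0.12, 0.42], expected))  # checkmark
print(visual_cue([0.10, 0.90, 0.10], expected))  # retry_prompt
```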
Various implementations for detection and virtualization of tangible interface object dimensions include a method that includes capturing, using a video capture device associated with a computing device, a video stream of a physical activity scene, the video stream including a first tangible interface object representing a measurement attribute, identifying, using a processor of the computing device, the measurement attribute of the first tangible interface object, determining, using the processor of the computing device, a virtual object represented by the measurement attribute of the first tangible interface object, and displaying, on a display of the computing device, a graphical user interface embodying a virtual scene, the virtual scene including the virtual object.
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/42 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 3/0346 - Pointing devices displaced or positioned by the userAccessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
A method and system for organizing a virtual classroom session. In an example implementation, a method includes receiving first media data including a first user media stream depicting a first user and a first workspace media stream depicting a first physical activity scene that is proximate a first computing device of the first user, and second media data including a second user media stream depicting a second user and a second workspace media stream depicting a second physical activity scene that is proximate a second computing device of the second user, generating a graphical virtual meeting user interface, and providing the graphical virtual meeting user interface for display.

G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
G09B 5/06 - Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
G09B 5/14 - Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations with provision for individual teacher-student communication
H04L 12/18 - Arrangements for providing special services to substations for broadcast or conference
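The meeting-interface step might be organized as in the sketch below, pairing each participant's user stream with their workspace stream and laying the pairs out in a grid. The stream and tile dictionaries are hypothetical; the patent does not specify a data model.

```python
# Illustrative sketch of assembling a virtual-meeting UI description.

def build_meeting_ui(participants):
    """participants: list of dicts with 'user_stream' and 'workspace_stream'."""
    tiles = []
    for idx, p in enumerate(participants):
        tiles.append({
            "slot": idx,
            "user_view": p["user_stream"],
            "workspace_view": p["workspace_stream"],
        })
    return {"layout": "grid", "tiles": tiles}

ui = build_meeting_ui([
    {"user_stream": "cam:teacher", "workspace_stream": "desk:teacher"},
    {"user_stream": "cam:student", "workspace_stream": "desk:student"},
])
print(len(ui["tiles"]))  # 2
```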
An example system includes a computing device located proximate to a physical activity surface, a video capture device, and a detector. The video capture device is coupled for communication with the computing device and is adapted to capture a video stream that includes an activity scene of the physical activity surface and one or more interface objects physically interactable with by a user. The detector processes the video stream to detect the one or more interface objects included in the activity scene, to identify the one or more interface objects that are detectable, to generate one or more events describing the one or more interface objects, and to provide the one or more events to an activity application configured to render virtual information on the computing device based on the one or more events.
G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/042 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
G06F 1/16 - Constructional details or arrangements
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
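The detector-to-application event flow described above can be sketched as a simple observer pattern: the detector emits one event per detected object, and the activity application subscribes to receive them. Class and field names are illustrative assumptions.

```python
# Hypothetical sketch of the detector/activity-application event pipeline.

class Detector:
    def __init__(self):
        self._listeners = []

    def subscribe(self, callback):
        """Register an activity-application callback for object events."""
        self._listeners.append(callback)

    def process_frame(self, detections):
        """detections: list of (object_id, position) found in the frame."""
        for object_id, position in detections:
            event = {"object": object_id, "position": position}
            for callback in self._listeners:
                callback(event)

received = []
detector = Detector()
detector.subscribe(received.append)   # stands in for the activity application
detector.process_frame([("block_A", (10, 20)), ("block_B", (42, 7))])
print(received[0]["object"])  # block_A
```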
Activity scene detection, display, and enhancement implementations are described. In an example implementation, a method includes displaying an animated character on a display of a computing device, detecting a tangible interface object on a physical activity scene proximate to the computing device, rendering a virtual interface object based on the tangible interface object, determining an interaction routine between the animated character and the virtual interface object, and executing the interaction routine to animate, on the display, an interaction between the animated character and the virtual interface object.
G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
G06F 3/042 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06T 7/70 - Determining position or orientation of objects or cameras
G06T 7/254 - Analysis of motion involving subtraction of images
G09B 19/00 - Teaching not covered by other main groups of this subclass
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/42 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
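Selecting an interaction routine between the animated character and a virtual interface object could be as simple as a dispatch table keyed by object type, as in the sketch below. The routine table and names are purely illustrative.

```python
# Illustrative sketch of routine selection for character/object interactions.

ROUTINES = {
    "ball": "character_kicks_object",
    "food": "character_eats_object",
}

def pick_routine(virtual_object_type):
    """Choose an interaction routine; fall back to a generic inspection."""
    return ROUTINES.get(virtual_object_type, "character_inspects_object")

print(pick_routine("ball"))  # character_kicks_object
print(pick_routine("hat"))   # character_inspects_object
```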
An example method for virtualizing physical objects into a virtual environment includes capturing a video stream of a racing field component, displaying a graphical user interface embodying a virtual game, detecting a placement of a physical vehicle at a starting portion of a channel of the racing field component, identifying one or more characteristics of the physical vehicle at the starting portion, generating a virtual vehicle representation of the physical vehicle using the one or more characteristics of the physical vehicle, and displaying the virtual vehicle representation of the physical vehicle.
A63F 13/803 - Driving vehicles or craft, e.g. cars, airplanes, ships, robots or tanks
A63F 13/245 - Constructional details thereof, e.g. game controllers with detachable joystick handles specially adapted to a particular type of game, e.g. steering wheels
A63F 13/98 - Accessories, i.e. detachable arrangements optional for the use of the video game device, e.g. grip supports of game controllers
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
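Turning detected vehicle characteristics into a virtual vehicle representation might look like the sketch below. The characteristic-to-stat mapping is an assumption; the patent only states that characteristics are identified and used to generate the representation.

```python
# Hypothetical sketch: derive simple game stats from a physical vehicle's
# detected appearance (color and apparent length in pixels).

def make_virtual_vehicle(color, length_px):
    """Build a virtual vehicle representation from detected characteristics."""
    top_speed = 100 + length_px // 10      # assumed rule: longer car, faster
    handling = {"red": 8, "blue": 6}.get(color, 5)
    return {"color": color, "top_speed": top_speed, "handling": handling}

vehicle = make_virtual_vehicle("red", 240)
print(vehicle["top_speed"])  # 124
```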
A tangible object virtualization station including a base capable of stably resting on a surface and a head component unit connected to the base. The head component unit extends upwardly from the base. At an end of the head component unit opposite the base, the head component unit comprises a camera situated to capture a downward view of the surface proximate the base and a lighting array that directs light downward toward the surface proximate the base. The tangible object virtualization station further comprises a display interface included in the base. The display interface is configured to hold a display device in an upright position and connect the display device to the camera and the lighting array.
H04M 1/02 - Constructional features of telephone sets
H04M 1/72409 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories
A computing device is described. In an example implementation, the computing device includes a housing including a display screen on a front surface, the housing and display screen being collectively positionable in a plurality of physical orientations, an input device that includes a first selection mechanism and a second selection mechanism, the first selection mechanism being actuatable to adjust a setting of an output of an application displayed on the display screen, the second selection mechanism being actuatable to adjust the setting of the output of the application displayed on the display screen, and an orientation sensor configured to determine which physical orientation of the plurality of physical orientations the display screen is positioned in, and change a first input polarity of the first selection mechanism to correspond to the determined physical orientation of the display screen.
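The input-polarity change might work as in the sketch below: when the housing is rotated so the screen is upside down, a physical dial's "clockwise" motion should still mean "increase", so the raw delta's sign is flipped. The orientation names and delta convention are assumptions.

```python
# Illustrative sketch of orientation-dependent input polarity.

def apply_dial(value, raw_delta, orientation):
    """Adjust a setting from a dial tick, honoring display orientation."""
    polarity = -1 if orientation == "inverted" else 1
    return value + polarity * raw_delta

volume = 5
volume = apply_dial(volume, +1, "upright")    # same physical motion: 6
volume = apply_dial(volume, +1, "inverted")   # polarity flipped: back to 5
print(volume)  # 5
```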
A method for enhancing tangible content on a physical activity surface is described. In an example implementation, the method includes capturing, using a video capture device of a computing device, a video stream that includes an activity scene of a physical activity surface; detecting in the video stream, using a detector executable on the computing device, a tangible content item on the physical activity surface; recognizing, from the video stream, one or more visually instructive elements in the tangible content item; determining a tangible identifier based on the one or more visually instructive elements in the tangible content item; retrieving a digital content item using the tangible identifier; and providing the digital content item on the computing device.
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
G06F 1/16 - Constructional details or arrangements
G06F 3/042 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
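The identifier step could be sketched as below: the recognized visually instructive elements are joined into a tangible identifier that keys a content lookup. The glyph encoding and the content catalog are hypothetical.

```python
# Hypothetical sketch: build a tangible identifier from recognized visual
# elements and use it to retrieve a digital content item.

CONTENT_CATALOG = {
    "glyph-3-7-1": "story_page_12.json",  # assumed catalog entry
}

def tangible_identifier(elements):
    """Build a stable identifier from recognized visual elements."""
    return "glyph-" + "-".join(str(e) for e in elements)

def retrieve_content(elements):
    """Look up the digital content item for a tangible identifier."""
    return CONTENT_CATALOG.get(tangible_identifier(elements))

print(retrieve_content([3, 7, 1]))  # story_page_12.json
```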
A computing device is described. In one embodiment, the computing device includes a display screen located on a front surface of a housing and a support located on a back surface of the housing, the support being configured to support the display screen in a first position relative to a physical surface when situated in a first orientation and to support the display screen in a second position relative to the physical surface when situated in a second orientation. The computing device further includes a first camera located on a first peripheral side of the front surface, the first camera being configured to capture a first field of view, and a second camera located on a second peripheral side of the front surface, the second camera being configured to capture a second field of view that is different from the first field of view.
A computing device is described. In an example implementation, the computing device includes a housing including a display screen on a front surface, the housing and display screen being collectively positionable in a plurality of physical orientations, an input device that includes a first selection mechanism and a second selection mechanism, the first selection mechanism being actuatable to adjust a setting of an output of an application displayed on the display screen, the second selection mechanism being actuatable to adjust the setting of the output of the application displayed on the display screen, and an orientation sensor configured to determine which physical orientation of the plurality of physical orientations the display screen is positioned in, and change a first input polarity of the first selection mechanism to correspond to the determined physical orientation of the display screen.
G06F 16/783 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
G09B 5/06 - Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
G09B 5/12 - Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations different stations being capable of presenting different information simultaneously
G06F 1/16 - Constructional details or arrangements
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
H04R 3/04 - Circuits for transducers for correcting frequency response
G06V 30/413 - Classification of content, e.g. text, photographs or tables
Various implementations for object detection include a method that includes capturing a video stream that includes an activity object and a pointing object, identifying the activity object, displaying a graphical user interface embodying a virtual scene based on the identified activity object, determining a location of the pointing object relative to the activity object, determining a routine based on the location of the pointing object relative to the activity object, and executing the routine within the virtual scene.
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/042 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
G06F 9/451 - Execution arrangements for user interfaces
G06F 1/16 - Constructional details or arrangements
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
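The location-to-routine step described above could be a hit test of the pointing object's position against named regions of the identified activity object, as sketched below. Region coordinates and routine names are illustrative assumptions.

```python
# Hypothetical sketch: map a pointing-object location to a routine.

REGIONS = {
    "play_button": (0, 0, 50, 50),     # (x, y, width, height)
    "character":   (60, 0, 100, 80),
}

def routine_for_point(px, py):
    """Return the routine for whichever region contains the point, if any."""
    for name, (x, y, w, h) in REGIONS.items():
        if x <= px < x + w and y <= py < y + h:
            return f"run_{name}_routine"
    return None  # pointing object is outside every interactive region

print(routine_for_point(10, 10))  # run_play_button_routine
print(routine_for_point(70, 40))  # run_character_routine
```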
Various implementations for virtualization of a physical activity scene include a method that includes capturing a video stream that includes an interactive sheet including an interactive area, identifying the interactive sheet, determining a virtual page based on the identity of the interactive sheet, displaying a graphical user interface embodying a virtual template, detecting an interaction in the interactive area of the interactive sheet, generating a virtual annotation based on the interaction in the interactive area, and updating the graphical user interface to include the virtual annotation.
Activity scene detection, display, and enhancement implementations are described. In an example implementation, a method includes displaying an animated character on a display of a computing device, detecting a tangible interface object on a physical activity scene proximate to the computing device, rendering a virtual interface object based on the tangible interface object, determining an interaction routine between the animated character and the virtual interface object, and executing the interaction routine to animate, on the display, an interaction between the animated character and the virtual interface object.
G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
G06F 3/042 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06T 7/70 - Determining position or orientation of objects or cameras
G06T 7/254 - Analysis of motion involving subtraction of images
G09B 19/00 - Teaching not covered by other main groups of this subclass
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/42 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
Various implementations for virtualization of tangible object components include a method that includes capturing a video stream of a physical activity scene, the video stream including a first tangible interface object and a second tangible interface object positioned on the physical activity scene, identifying a combined position of the first tangible interface object relative to the second tangible interface object, determining a virtual object represented by the combined position of the first tangible interface object relative to the second tangible interface object, and displaying a graphical user interface embodying a virtual scene, the virtual scene including the virtual object.
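Interpreting the combined position of two tangible interface objects could reduce, in the simplest case, to their left-to-right order, as in the sketch below. The pairing table and labels are illustrative assumptions.

```python
# Hypothetical sketch: the relative arrangement of two detected objects
# selects the virtual object to display.

COMBINATIONS = {
    ("wing", "body"): "virtual_bird",
    ("body", "wing"): "virtual_plane",
}

def combined_virtual_object(first, second):
    """first/second: (label, x_position) tuples from the detector."""
    ordered = tuple(
        label for label, _ in sorted((first, second), key=lambda o: o[1])
    )
    return COMBINATIONS.get(ordered, "unknown_combination")

print(combined_virtual_object(("body", 200), ("wing", 40)))  # virtual_bird
```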
An example system includes a stand configured to position a computing device proximate to a physical activity surface. The system further includes a video capture device, a detector, and an activity application. The video capture device is coupled for communication with the computing device and is adapted to capture a video stream that includes an activity scene of the physical activity surface and one or more interface objects physically interactable with by a user. The detector is executable to detect motion in the activity scene based on processing of the video stream and, responsive to detecting the motion, process the video stream to detect one or more interface objects included in the activity scene of the physical activity surface. The activity application is executable to present virtual information on a display of the computing device based on the one or more detected interface objects.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
A63F 13/42 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
G06T 7/149 - SegmentationEdge detection involving deformable models, e.g. active contour models
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/042 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 1/16 - Constructional details or arrangements
G06K 9/52 - Extraction of features or characteristics of the image by deriving mathematical or geometrical properties from the whole image
G06K 9/62 - Methods or arrangements for recognition using electronic means
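The motion gate described above can be sketched as follows: run the (expensive) object detector only when successive frames differ enough. Frames are lists of rows of integers and the threshold is an assumed tuning constant.

```python
# Illustrative sketch of motion-triggered detection.

def frame_difference(prev, cur):
    """Sum of absolute per-pixel differences between two frames."""
    return sum(
        abs(a - b)
        for row_p, row_c in zip(prev, cur)
        for a, b in zip(row_p, row_c)
    )

def maybe_detect(prev, cur, detect, threshold=10):
    """Invoke the detector only when motion exceeds the threshold."""
    if frame_difference(prev, cur) > threshold:
        return detect(cur)
    return None  # scene is static: skip detection this frame

still = [[5, 5], [5, 5]]
moved = [[5, 5], [90, 5]]
print(maybe_detect(still, still, lambda f: "detected"))  # None
print(maybe_detect(still, moved, lambda f: "detected"))  # detected
```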
A protective cover device is described. In an example implementation, the protective cover device includes a top surface; a back surface connected to the top surface, the top surface and the back surface extending around one or more device surfaces of a computing device; and an adapter window including a portion of the top surface and a portion of the back surface, the adapter window being movable relative to the protective cover device to expose a device edge of the computing device to receive a camera adapter.
A display positioning system is described. In an example implementation, the display positioning system includes an adapter adapted to redirect a field of view of a video capture device of a computing device; and a stand adapted to situate on a surface, the stand including one or more legs that are adjustable to modify a distance between the video capture device of the computing device and the surface when the computing device is placed on the stand to adjust the field of view of the video capture device.
Various implementations for a display positioning system include a stand with a front surface and a back surface connected to form a curved bend such that the bottom edges of the front and back surfaces are spread out to support the stand, the front surface further connected to a stand lip forming a stand channel between the front surface and an extended portion of the stand lip; an insert configured to rest within the stand channel of the stand and including a front plate and a back plate that extend beyond a top portion of the insert and form an insert channel, the insert channel being configured to receive a first edge of a computing device and support the computing device in an elevated position; and an adapter with an optical element that is configured to rest within a first slot in the stand when not in use.
A tangible object virtualization station including a base capable of stably resting on a surface and a head component unit connected to the base. The head component unit extends upwardly from the base. At an end of the head component unit opposite the base, the head component unit comprises a camera situated to capture a downward view of the surface proximate the base and a lighting array that directs light downward toward the surface proximate the base. The tangible object virtualization station further comprises a display interface included in the base. The display interface is configured to hold a display device in an upright position and connect the display device to the camera and the lighting array.
Various implementations for object detection include a method that includes capturing a video stream that includes an activity object and a pointing object, identifying the activity object, displaying a graphical user interface embodying a virtual scene based on the identified activity object, determining a location of the pointing object relative to the activity object, determining a routine based on the location of the pointing object relative to the activity object, and executing the routine within the virtual scene.
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/042 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
G06F 9/451 - Execution arrangements for user interfaces
G06F 1/16 - Constructional details or arrangements
42.
DETECTION AND VISUALIZATION OF A FORMATION OF A TANGIBLE INTERFACE OBJECT
Various implementations for detection and visualization of a formation of a tangible interface object include a method that includes capturing a video stream that includes an activity object and a formation of a tangible interface object, identifying the activity object, determining a virtual object based on the identity of the activity object, displaying a graphical user interface embodying a virtual scene and including the virtual object, detecting a formation of the tangible interface object, generating a visualization based on the formation of the tangible interface object, and updating the graphical user interface to include the visualization.
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/042 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
G06F 9/451 - Execution arrangements for user interfaces
G06F 1/16 - Constructional details or arrangements
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
An example system includes a computing device located proximate to a physical activity surface, a video capture device, and a detector. The video capture device is coupled for communication with the computing device and is adapted to capture a video stream that includes an activity scene of the physical activity surface and one or more interface objects physically interactable with by a user. The detector processes the video stream to detect the one or more interface objects included in the activity scene, to identify the one or more interface objects that are detectable, to generate one or more events describing the one or more interface objects, and to provide the one or more events to an activity application configured to render virtual information on the computing device based on the one or more events.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/042 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
G06F 1/16 - Constructional details or arrangements
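The event pipeline described in the abstract above (detect interface objects in the video stream, generate events describing them, deliver the events to an activity application that renders virtual information) can be illustrated with a minimal sketch. All names here (`Detection`, `Event`, `ActivityApp`, `process_frame`) are hypothetical, chosen for illustration; the patent does not specify an API.

```python
from dataclasses import dataclass

# Illustrative sketch of the detector-to-activity-application event flow:
# each frame's detections are compared against previously known objects,
# an event is emitted per detection, and the activity application consumes
# the events to decide what virtual information to render.

@dataclass
class Detection:
    object_id: str   # identity of the recognized interface object
    bbox: tuple      # (x, y, w, h) location within the activity scene

@dataclass
class Event:
    kind: str        # e.g. "appeared" or "moved"
    detection: Detection

class ActivityApp:
    """Consumes events and records what virtual information to render."""
    def __init__(self):
        self.rendered = []

    def handle(self, event: Event):
        self.rendered.append((event.kind, event.detection.object_id))

def process_frame(frame_detections, known, app):
    """Emit one event per detection, tracking objects seen in prior frames."""
    for det in frame_detections:
        kind = "moved" if det.object_id in known else "appeared"
        known[det.object_id] = det.bbox
        app.handle(Event(kind, det))

app = ActivityApp()
known = {}
process_frame([Detection("block-A", (10, 10, 32, 32))], known, app)
process_frame([Detection("block-A", (14, 10, 32, 32))], known, app)
# app.rendered → [("appeared", "block-A"), ("moved", "block-A")]
```

Decoupling detection from rendering via events, as the abstract describes, lets the activity application react to object appearance and movement without touching the video stream itself.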
Activity surface detection, display, and enhancement implementations are described. In an example implementation, a method includes determining, using a processor of a computing device, a traceable image and presenting the traceable image in an interface on a display of the computing device; capturing, using a video capture device coupled to the computing device, a video stream of a physical activity surface proximate to the computing device; and displaying, on the display of the computing device, the captured video stream overlaid with the traceable image in the interface.
An example system includes a computing device located proximate to a physical activity surface, a video capture device, and a detector. The video capture device is coupled for communication with the computing device and is adapted to capture a video stream that includes an activity scene of the physical activity surface and one or more interface objects physically interactable with by a user. The detector processes the video stream to detect the one or more interface objects included in the activity scene, to identify the one or more interface objects that are detectable, to generate one or more events describing the one or more interface objects, and to provide the one or more events to an activity application configured to render virtual information on the computing device based on the one or more events.
Systems and methods for virtualized tangible programming are described. In an example implementation, a method includes detecting an object in image data, performing a comparison between the object and a predefined set of object definitions, recognizing the object as a visually quantified object or a visually unquantified object based on the comparison, processing a command region and a quantifier region for the visually quantified object and identifying a corresponding command, and executing a set of commands for the object.
G09B 5/02 - Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G09B 19/00 - Teaching not covered by other main groups of this subclass
A63F 13/98 - Accessories, i.e. detachable arrangements optional for the use of the video game device, e.g. grip supports of game controllers
G05B 19/042 - Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
A63F 13/63 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by the player, e.g. authoring using a level editor
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
A63F 13/655 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition by importing photos, e.g. of the player
G06F 3/042 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
G09B 1/32 - Manually- or mechanically-operated educational appliances using elements forming or bearing symbols, signs, pictures, or the like which are arranged or adapted to be arranged in one or more particular ways comprising elements to be used without a special support
G06K 9/46 - Extraction of features or characteristics of the image
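The virtualized tangible programming abstract above distinguishes "visually quantified" objects (a command region paired with a quantifier region) from "visually unquantified" objects, and expands each into executable commands. A minimal sketch of that classification and expansion step follows; the command names, dictionary structure, and function names are all hypothetical, not taken from the patent.

```python
# Illustrative sketch of the recognition flow: each detected object is
# matched against a predefined set of definitions. Quantified objects
# carry a command plus a repeat count (e.g. a "walk" tile next to a "3"
# tile); unquantified objects map to a single command.

QUANTIFIED = {"walk", "turn"}    # commands whose tiles carry a quantifier region
UNQUANTIFIED = {"jump"}          # commands executed exactly once

def recognize(obj):
    """Classify a detected object and return (command, repeat count)."""
    command = obj["command"]
    if command in QUANTIFIED:
        return command, obj.get("quantifier", 1)
    if command in UNQUANTIFIED:
        return command, 1
    raise ValueError(f"object does not match any definition: {command!r}")

def execute(objects):
    """Expand each recognized object into its set of commands, in order."""
    program = []
    for obj in objects:
        command, count = recognize(obj)
        program.extend([command] * count)
    return program

program = execute([{"command": "walk", "quantifier": 3}, {"command": "jump"}])
# program → ['walk', 'walk', 'walk', 'jump']
```

Separating recognition (object vs. definition comparison) from execution mirrors the two-phase structure the abstract describes.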
54.
ACTIVITY SURFACE DETECTION, DISPLAY AND ENHANCEMENT OF A VIRTUAL SCENE
Activity scene detection, display, and enhancement implementations are described. In an example implementation, a method includes displaying an animated character on a display of a computing device, detecting a tangible interface object on a physical activity scene proximate to the computing device, rendering a virtual interface object based on the tangible interface object, determining an animation routine between the animated character and the virtual interface object, and executing the animation routine to animate, on the display, an interaction between the animated character and the virtual interface object.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 3/033 - Pointing devices displaced or positioned by the user; Accessories therefor
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/042 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
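The abstract above selects an animation routine based on the pairing of the animated character and the virtual interface object, then executes it step by step. The sketch below illustrates one way such a lookup-and-execute step could work; the routine table, character names, and function names are purely hypothetical.

```python
# Illustrative sketch: an animation routine is chosen from a predefined
# table keyed by (character, virtual object) pairings, with a fallback
# routine for unrecognized pairings, then executed one step at a time.

ROUTINES = {
    ("cat", "ball"): ["approach", "bat", "chase"],
    ("cat", "food"): ["approach", "eat"],
}

def determine_routine(character, virtual_object):
    """Pick the interaction routine for this character/object pairing."""
    return ROUTINES.get((character, virtual_object), ["approach", "inspect"])

def execute_routine(routine):
    """Stand-in for the rendering loop: emit one animation step per entry."""
    return [f"animate:{step}" for step in routine]

steps = execute_routine(determine_routine("cat", "ball"))
# steps → ['animate:approach', 'animate:bat', 'animate:chase']
```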
An example system includes a computing device located proximate to a physical activity surface, a video capture device, and a detector. The video capture device is coupled for communication with the computing device and is adapted to capture a video stream that includes an activity scene of the physical activity surface and one or more interface objects physically interactable with by a user. The detector processes the video stream to detect the one or more interface objects included in the activity scene, to identify the one or more interface objects that are detectable, to generate one or more events describing the one or more interface objects, and to provide the one or more events to an activity application configured to render virtual information on the computing device based on the one or more events.
G06F 3/033 - Pointing devices displaced or positioned by the user; Accessories therefor
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/042 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
The present disclosure relates to technology for positioning a display for interaction and/or virtualization of tangible interface objects. According to an example embodiment, a display positioning system includes a display stand including a positioning portion having a recess, supports connected to the positioning portion, and an insert. The supports are configured to cooperatively support the positioning portion when situated on a support surface. The insert may include an elongated body configured to slidably insert into the recess, the recess may be configured to receive and removably retain the insert, the insert and recess being correspondingly shaped. The elongated body may include an upwardly facing surface having a concavity shaped to receive and removably retain at least an edge portion of a computing device display when the insert is inserted into the recess of the display stand and equipped with the computing device display.
An example system includes a stand configured to position a computing device proximate to a physical activity surface. The system further includes a video capture device, a detector, and an activity application. The video capture device is coupled for communication with the computing device and is adapted to capture a video stream that includes an activity scene of the physical activity surface and one or more interface objects physically interactable with by a user. The detector is executable to process the video stream to detect motion in the activity scene and, responsive to detecting the motion, to detect one or more interface objects included in the activity scene of the physical activity surface. The activity application is executable to present virtual information on a display of the computing device based on the one or more detected interface objects.
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/42 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
G06F 1/16 - Constructional details or arrangements
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 3/042 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06T 7/149 - Segmentation; Edge detection involving deformable models, e.g. active contour models
G06K 9/52 - Extraction of features or characteristics of the image by deriving mathematical or geometrical properties from the whole image
G06K 9/62 - Methods or arrangements for recognition using electronic means