42 - Scientific, technological and industrial services, research and design
Goods & Services
Providing temporary use of on-line non-downloadable software for allowing users to virtually try on glasses and sunglasses and choose from the models offered according to which ones suit their face, which enables automatic detection and cropping of the face; software as a service (SAAS) services featuring software for allowing users to virtually try on glasses and sunglasses and choose from the models offered according to which ones suit their face, which enables automatic detection and cropping of the face.
2.
SYSTEMS AND METHODS FOR SCALING USING ESTIMATED FACIAL FEATURES
A system and method for scaling a user's head based on estimated facial features are disclosed. In an example, a system includes a processor configured to obtain a set of images of a user's head; generate a model of the user's head based on the set of images; determine a scaling ratio based on the model of the user's head and estimated facial features; and apply the scaling ratio to the model of the user's head to obtain a scaled user's head model; and a memory coupled to the processor and configured to provide the processor with instructions.
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
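The scaling step described in the abstract above can be sketched as follows. This is a minimal illustration, not the disclosed method: it assumes the "estimated facial feature" is a population-average interpupillary distance (IPD), and the 63 mm figure, function names, and pupil landmarks are all assumptions made for demonstration.

```python
# Hypothetical sketch: scale a head model so its interpupillary
# distance matches an assumed population-average IPD.
from typing import List, Tuple

Point = Tuple[float, float, float]

ASSUMED_MEAN_IPD_MM = 63.0  # assumed "estimated facial feature"

def distance(a: Point, b: Point) -> float:
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def scaling_ratio(left_pupil: Point, right_pupil: Point,
                  estimated_ipd_mm: float = ASSUMED_MEAN_IPD_MM) -> float:
    """Ratio that maps model units onto millimetres via the estimated IPD."""
    model_ipd = distance(left_pupil, right_pupil)
    return estimated_ipd_mm / model_ipd

def apply_scaling(vertices: List[Point], ratio: float) -> List[Point]:
    """Uniformly scale every vertex of the head model."""
    return [(x * ratio, y * ratio, z * ratio) for (x, y, z) in vertices]

# A model whose pupils are 2.0 units apart is scaled so the
# pupil distance becomes 63 mm.
ratio = scaling_ratio((-1.0, 0.0, 0.0), (1.0, 0.0, 0.0))
scaled = apply_scaling([(-1.0, 0.0, 0.0), (1.0, 0.0, 0.0)], ratio)
```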
4.
GENERATION OF A 3D MODEL OF A REFERENCE OBJECT TO PERFORM SCALING OF A MODEL OF A USER'S HEAD
Methods and systems herein can include a processor configured to obtain a set of images that shows a user's head and a reference object, generate a user's head model of the user's head based at least in part on the set of images, and generate a reference object model of the reference object based at least in part on the set of images. The processor can further be configured to determine an orientation and a size of the reference object model based at least in part on a relative location of the reference object relative to the user's head in the set of images and use the reference object model, the orientation of the reference object model, the size of the reference object, and a known dimension of the reference object to determine scaling information. The processor can then be configured to apply the scaling information to the user's head model to obtain a scaled user's head model. The system can also include a memory coupled to the processor and configured to provide the processor with instructions.
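The reference-object scaling idea above can be sketched like this, assuming the reference object is a standard ID-1 card (85.60 mm wide) held near the face. The card width, function names, and model-unit values are illustrative assumptions only.

```python
# Hedged sketch: derive millimetres-per-model-unit from a reference
# object with a known real-world dimension, then scale the head model.
ID1_CARD_WIDTH_MM = 85.60  # known dimension of the assumed reference object

def scale_from_reference(measured_card_width_units: float,
                         known_width_mm: float = ID1_CARD_WIDTH_MM) -> float:
    """Millimetres per model unit, derived from the reference object model."""
    return known_width_mm / measured_card_width_units

def scale_head_model(vertices, mm_per_unit):
    """Apply the scaling information to every head-model vertex."""
    return [tuple(c * mm_per_unit for c in v) for v in vertices]

mm_per_unit = scale_from_reference(4.28)  # card spans 4.28 model units
head = scale_head_model([(0.0, 0.0, 0.0), (0.0, 5.0, 0.0)], mm_per_unit)
```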
Embodiments of the present disclosure provide a recommendation system based on a user's physical/biometric features. In various embodiments, a system includes a processor configured to determine a physical characteristic of a user based at least in part on an image of the user. The processor is further configured to determine a correlation between the physical characteristic and a product, and generate a product recommendation based at least in part on the determined correlation.
In various embodiments, a process for trying on glasses includes determining an event associated with updating a current model of a user's face and, in response to the event, using a set of historical recorded frames of the user's face to update the current model of the user's face. The process includes obtaining a newly recorded frame of the user's face, using the current model of the user's face to generate a corresponding image of a glasses frame, and presenting the image of the glasses frame over the newly recorded frame of the user's face.
A61B 3/11 - Objective types, i.e. instruments for examining the eyes independent of the patient's perceptions or reactions, for measuring interpupillary distance or diameter of pupils
G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
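The event-driven update loop described in the try-on abstract above can be sketched as follows. The "event" here is simply the first frame or every Nth frame; a real system might instead trigger on tracking drift or pose change, and every name below is invented for illustration.

```python
# Hedged sketch: keep a buffer of historical frames, refit the face
# model when an update event fires, and overlay a glasses image
# rendered from the current model on each newly recorded frame.
from collections import deque

class TryOnSession:
    def __init__(self, history_size: int = 30, update_every: int = 10):
        self.history = deque(maxlen=history_size)  # historical recorded frames
        self.update_every = update_every
        self.frame_count = 0
        self.model_version = 0

    def _update_event(self) -> bool:
        # Placeholder event: refit on the first frame and periodically after.
        return self.frame_count == 1 or self.frame_count % self.update_every == 0

    def process_frame(self, frame):
        self.frame_count += 1
        self.history.append(frame)
        if self._update_event():
            # Refit the current face model from the historical frames.
            self.model_version += 1
        # Render the glasses from the current model and overlay on the frame.
        return {"frame": frame, "overlay": f"glasses@model_v{self.model_version}"}

session = TryOnSession(update_every=3)
results = [session.process_frame(f"frame{i}") for i in range(6)]
```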
42 - Scientific, technological and industrial services, research and design
Goods & Services
providing temporary use of on-line non-downloadable software to assist third party retailers and brands in selling their products by providing virtual fittings for users to try on goods and product recommendations by suggesting products to users, and featuring augmented reality software for virtual product fittings for users to try on goods; providing temporary use of online non-downloadable computer software to provide customers with text, messaging, email, voice, audio, and video communications via wireless networks and the internet, for e-commerce use as a payment gateway that authorizes processing of credit cards or direct payments for merchants, and for managing marketing campaigns for others
42 - Scientific, technological and industrial services, research and design
Goods & Services
Providing temporary use of on-line non-downloadable augmented reality software to assist third party retailers in selling their products and for providing virtual product fittings for users to try on goods and consumer product recommendations by suggesting products to users
12.
Constructing a user's face model using particle filters
Constructing a user's face model using particle filters is disclosed, including: using a first particle filter to generate a new plurality of sets of extrinsic camera information particles corresponding to respective ones of a plurality of images based at least in part on a selected face model particle; selecting a subset of the new plurality of sets of extrinsic camera information particles corresponding to respective ones of the plurality of images; and using a second particle filter to generate a new plurality of face model particles corresponding to the plurality of images based at least in part on the selected subset of the new plurality of sets of extrinsic camera information particles.
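The alternating two-filter structure above can be illustrated with a deliberately tiny stand-in: one filter proposes per-image "camera" particles given a fixed face-model particle, the best are kept, and a second filter proposes face-model particles scored against those cameras. The 1-D state, Gaussian proposals, and scoring function are toy assumptions, not the disclosed state spaces.

```python
# Toy sketch of alternating particle filters over camera extrinsics
# and a face-model parameter; real systems use high-dimensional states.
import random

random.seed(0)

def propose(center: float, spread: float, n: int):
    """Sample n candidate particles around a center."""
    return [random.gauss(center, spread) for _ in range(n)]

def score(particle: float, observation: float) -> float:
    return -(particle - observation) ** 2  # higher is better

def best(particles, observation):
    return max(particles, key=lambda p: score(p, observation))

observations = [1.0, 1.2, 0.9]  # one per input image
face_model = 0.0                # selected face-model particle

# First filter: per-image extrinsic (camera) particles given the model.
cameras = [best(propose(face_model, 1.0, 50), obs) for obs in observations]

# Second filter: face-model particles scored against the kept cameras.
mean_cam = sum(cameras) / len(cameras)
face_model = best(propose(mean_cam, 0.5, 50), mean_cam)
```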
Rendering glasses with shadows is disclosed, including: generating a face image corresponding to an image of a set of images based at least in part on a face model, wherein the set of images is associated with a user's face; generating a face with shadows image corresponding to the image based at least in part on shadows cast by a glasses model on the face model; generating a shadow transform based at least in part on a difference determined based at least in part on the face image and the face with shadows image; generating a shadowed image based at least in part on applying the shadow transform to the image; and presenting the shadowed image including by overlaying a glasses image associated with the glasses model over the shadowed image.
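One plausible reading of the shadow-transform step above can be sketched as follows: render the face model with and without the glasses' shadows, derive a per-pixel darkening map from the two renders, and multiply it into the real photograph before overlaying the glasses image. The abstract derives the transform from a difference of the renders; this toy uses a per-pixel ratio as an illustrative simplification, and the grayscale lists stand in for real images.

```python
# Hedged sketch: per-pixel darkening factors from two synthetic renders,
# applied to a recorded frame so the glasses' shadows appear in it.
def shadow_transform(face_render, shadowed_render):
    """Per-pixel darkening factors derived from the two renders."""
    return [s / f if f else 1.0 for f, s in zip(face_render, shadowed_render)]

def apply_shadow(photo, transform):
    """Darken the real photograph wherever the renders differ."""
    return [p * t for p, t in zip(photo, transform)]

face = [200.0, 200.0, 200.0, 200.0]      # render without shadows
shadowed = [200.0, 100.0, 100.0, 200.0]  # render with glasses shadows
photo = [180.0, 180.0, 160.0, 160.0]     # recorded frame

t = shadow_transform(face, shadowed)
result = apply_shadow(photo, t)
```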
Modeling of a user's face is disclosed, including: receiving an input image of a user's face to be modeled; and generating a set of parameter values to a statistical model that corresponds to the input image by evaluating candidate parameter values using a cost function that is determined based at least in part on optical flow.
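The candidate-evaluation loop above reduces to choosing the parameter set with the lowest cost. In this sketch a quadratic distance stands in for a real optical-flow-based cost; the statistical model, candidate sets, and all names are illustrative assumptions.

```python
# Minimal sketch: fit statistical-model parameters by evaluating
# candidate values under a cost function and keeping the best.
def cost(params, target):
    # Placeholder for an optical-flow consistency cost between the
    # model rendered with `params` and the input image.
    return sum((p - t) ** 2 for p, t in zip(params, target))

def fit(candidates, target):
    """Return the candidate parameter set with the lowest cost."""
    return min(candidates, key=lambda c: cost(c, target))

candidates = [(0.0, 0.0), (0.5, 0.5), (1.0, 1.0)]
best_params = fit(candidates, target=(0.6, 0.4))
```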
Using computed facial feature points to position a product model relative to a model of a face is disclosed, comprising: obtaining a three-dimensional (3D) model of a user's face, wherein the 3D model of the user's face comprises a plurality of 3D points; determining a face normal that is normal to a plane that is determined based at least in part on a first subset of 3D points from the plurality of 3D points; determining a set of computed bridge points based at least in part on a second subset of 3D points from the plurality of 3D points and the face normal; and using the set of computed bridge points to determine an initial placement of a 3D model of a glasses frame relative to the 3D model of the user's face.
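The geometric core of the placement above can be sketched as follows: take the face normal from a plane through three landmark points (via a cross product), then place the glasses bridge at a nose-bridge point offset along that normal. The landmark choices and the offset value are assumptions for illustration.

```python
# Hedged sketch: plane normal from three 3D landmarks, used to compute
# an initial bridge placement for a glasses-frame model.
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    n = sum(x * x for x in v) ** 0.5
    return tuple(x / n for x in v)

def face_normal(p0, p1, p2):
    """Unit normal of the plane through three landmark points."""
    return normalize(cross(sub(p1, p0), sub(p2, p0)))

def bridge_placement(nose_bridge, normal, offset=2.0):
    """Initial bridge position, pushed off the face along the normal."""
    return tuple(b + offset * n for b, n in zip(nose_bridge, normal))

# Landmarks in the x-y plane -> normal along +z.
n = face_normal((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
placement = bridge_placement((0.5, 0.5, 0.0), n)
```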
Smart image enhancements are disclosed, including: obtaining a representation of a user's face associated with a set of images associated with the user's face; obtaining a set of extrinsic information corresponding to an image of the set of images; determining a modified smoothing map by modifying a model smoothing map to correspond to the representation of the user's face; and determining an enhanced image based at least in part on the set of extrinsic information corresponding to the image, the modified model smoothing map, and the image.
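The smoothing-map idea above can be sketched with a toy 1-D image: per-pixel weights (the "map") control how strongly a blurred version is blended in, so skin regions can be smoothed more than eyes or hair. The box blur and the weights below are illustrative stand-ins for the modified model smoothing map.

```python
# Toy sketch: spatially varying smoothing controlled by a per-pixel map.
def box_blur(img):
    """Simple 1-D box blur with edge clamping."""
    out = []
    for i in range(len(img)):
        lo, hi = max(0, i - 1), min(len(img), i + 2)
        out.append(sum(img[lo:hi]) / (hi - lo))
    return out

def enhance(img, smoothing_map):
    """Blend image and blurred image per-pixel by the smoothing map."""
    blurred = box_blur(img)
    return [(1 - w) * p + w * b for p, b, w in zip(img, blurred, smoothing_map)]

img = [100.0, 200.0, 100.0, 200.0]
smap = [0.0, 1.0, 1.0, 0.0]  # smooth only the middle pixels
out = enhance(img, smap)
```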
Processing a set of images is disclosed, including: determining a set of user head measurements from a set of images; and determining a fit score corresponding to a glasses frame based at least in part on comparing the set of user head measurements to glasses frame measurements associated with the glasses frame.
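A fit score of the kind described above can be sketched by comparing a few head measurements against a frame's measurements and turning the mean relative error into a 0-1 score. The measurement names and the scoring formula are assumptions for demonstration, not the disclosed method.

```python
# Illustrative sketch: 1.0 means a perfect measurement match,
# lower values mean a worse fit.
def fit_score(head_mm: dict, frame_mm: dict) -> float:
    """Score a glasses frame against user head measurements (both in mm)."""
    keys = head_mm.keys() & frame_mm.keys()
    if not keys:
        return 0.0
    rel_errors = [abs(head_mm[k] - frame_mm[k]) / head_mm[k] for k in keys]
    return max(0.0, 1.0 - sum(rel_errors) / len(rel_errors))

head = {"face_width": 140.0, "ipd": 63.0, "temple_length": 145.0}
frame = {"face_width": 133.0, "ipd": 63.0, "temple_length": 145.0}
score = fit_score(head, frame)
```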
Processing a set of images is disclosed, including: receiving a set of images; and searching for a representation of a user's face associated with the set of images and a plurality of sets of extrinsic information corresponding to respective ones of at least a subset of the set of images. Rendering a glasses frame is disclosed, including: receiving a selection associated with the glasses frame; rendering the glasses frame using at least a representation of a user's face and a set of extrinsic information corresponding to an image in a recorded set of images; and overlaying the rendered glasses frame on the image.
A61B 3/04 - Trial frames; sets of lenses for use therewith
G09G 5/377 - Details of the operation on graphic patterns for mixing or overlaying two or more graphic patterns
26.
Methods, systems, and non-transitory machine-readable medium for incorporating a series of images resident on a user device into an existing web browser session
Users desiring to associate a media object with an existing web browser session are provided with an out-of-band communication path by which to effect the association. When the media object is received at a web server involved in the session, the server creates a model of the item depicted in the media object and associates the model with the session. A projection of the resulting model is then made available for viewing (and, in some instances, manipulation) by the user during the web browser session.
A system for fitting glasses frames to a user is disclosed. The system includes an interface for receiving images of a user's head at different angles. A processor compares user head measurements determined from the images with a database of glasses frame information that includes glasses frame measurements. One or more glasses frames are selected based on the comparison and the selected glasses frames are output.