Snap Inc.

United States of America

1-100 of 6,402 for Snap Inc. and 5 subsidiaries
IP Type
        Patent 6,013
        Trademark 389
Jurisdiction
        United States 4,972
        World 1,232
        Europe 101
        Canada 97
Owner / Subsidiary
[Owner] Snap Inc. 6,368
Snapchat, Inc. 30
Bitstrips Inc. 1
Flite, Inc. 1
Scan, Inc. 1
Date
New (last 4 weeks) 66
2026 February (MTD) 38
2026 January 81
2025 December 57
2025 November 65
IPC Class
G06T 19/00 - Manipulating 3D models or images for computer graphics 988
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer 799
G02B 27/01 - Head-up displays 709
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus 484
H04L 12/58 - Message switching systems 372
NICE Class
09 - Scientific and electric apparatus and instruments 265
42 - Scientific, technological and industrial services, research and design 171
41 - Education, entertainment, sporting and cultural services 144
35 - Advertising and business services 121
38 - Telecommunications services 67
Status
Pending 1,163
Registered / In Force 5,239

1.

BENDING ESTIMATION AS A BIOMETRIC SIGNAL

      
Application Number 19370519
Status Pending
Filing Date 2025-10-27
First Publication Date 2026-02-19
Owner Snap Inc. (USA)
Inventor Katz, Sagi

Abstract

A method for generating reference biometric data based on a bending of a flexible device is described. In one aspect, a method includes forming training data that includes bending estimates of a flexible device worn by a first user, training a model based on the training data, and generating reference biometric data for the first user based on the model.

IPC Classes

  • G06F 21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
  • G06F 1/16 - Constructional details or arrangements
  • G06T 7/593 - Depth or shape recovery from multiple images from stereo images
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
  • G06V 40/19 - Sensors therefor
  • G06V 40/70 - Multimodal biometrics, e.g. combining information from different biometric modalities
  • H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays

2.

FACIAL SYNTHESIS IN OVERLAID AUGMENTED REALITY CONTENT

      
Application Number 19367308
Status Pending
Filing Date 2025-10-23
First Publication Date 2026-02-19
Owner Snap Inc. (USA)
Inventor
  • Demidov, Nikita
  • Golobokov, Roman
  • Melnyk, Alina
  • Voss, Jeremy Baker
  • Bromot, Aleksei

Abstract

The subject technology receives at least one signal from a computing device, the at least one signal comprising at least one of a current time, battery power, sensor information, or location information. The subject technology generates a digital sticker, the digital sticker including graphical content indicating information based at least in part on the at least one signal, and media content including an image of a target face, the image of the target face being modified based on at least one of sets of source pose parameters to mimic at least one of positions of a head of a source actor and at least one of facial expressions of the source actor. The subject technology provides augmented reality content for display on a computing device, the augmented reality content including the digital sticker as an overlay on at least a portion of the augmented reality content.

IPC Classes

  • H04N 5/272 - Means for inserting a foreground image in a background image, i.e. inlay, outlay
  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
  • H04L 51/21 - Monitoring or handling of messages
  • H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services

3.

REAL-TIME NEURAL LIGHT FIELD ON MOBILE DEVICES

      
Application Number 19367052
Status Pending
Filing Date 2025-10-23
First Publication Date 2026-02-19
Owner Snap Inc. (USA)
Inventor
  • Ren, Jian
  • Chemerys, Pavlo
  • Shakhrai, Vladislav
  • Hu, Ju
  • Makoviichuk, Denys
  • Tulyakov, Sergey
  • Cao, Junli

Abstract

A neural light field (NeLF), referred to as MobileR2L, runs in real time on mobile devices for neural rendering of three-dimensional (3D) scenes. The MobileR2L architecture runs efficiently on mobile devices with low latency and small size, and it achieves high-resolution generation while maintaining real-time inference for both synthetic and real-world 3D scenes on mobile devices. The MobileR2L has a network backbone including a convolutional layer embedding an input image at a resolution, residual blocks uploading the embedded image, and super-resolution modules receiving the uploaded embedded image and rendering an output image having a higher resolution than the embedded image. The convolution layer generates a number of rays equal to a number of pixels in the input image, where a partial number of the rays is uploaded to the super-resolution modules.

IPC Classes

  • G06T 3/4046 - Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
  • G06T 3/4053 - Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution

4.

OVERMOLDED TEMPLE WITH THERMAL MANAGEMENT WINDOW

      
Application Number 19367469
Status Pending
Filing Date 2025-10-23
First Publication Date 2026-02-19
Owner Snap Inc. (USA)
Inventor
  • Guang, Muye
  • Hristov, Stoyan
  • Hintermann, Mathias
  • Streets, Nicholas
  • Kraz, Mark

Abstract

An electronic eyewear device has temples that are lightweight, aesthetically pleasing, and include thermal management. The temples each have a thermally conductive stiffener with a non-thermally conductive overmolded material forming a window. The window in each temple exposes interior components, such as the stiffener, to ambient air, and allows heat generated by electronic components to be released to ambient air to cool the components through convection. The windows also allow the stiffeners to be coupled to tooling and are easily overmolded. The windows provide both an aesthetic feature and a functional feature.

IPC Classes

5.

PROVIDING SHARED CONTENT COLLECTIONS WITHIN A MESSAGING SYSTEM

      
Application Number 19363016
Status Pending
Filing Date 2025-10-20
First Publication Date 2026-02-19
Owner Snap Inc. (USA)
Inventor
  • Boyd, Nathan Kenneth
  • Chen, Sigi
  • Cook, Matthew Lee
  • Cooper, Andrew Grosvenor
  • Copping, Benedict
  • Koai, Edward
  • Liu, Tao Marvin
  • Zhan, Yiwen
  • Zhang, Mian

Abstract

Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for providing shared content collections. The program and method provide for receiving, from a first device of a first user, an indication of first user input to share a content collection between the first user and a second user selected by the first user, the content collection comprising at least one media content item, the second user corresponding to a contact of the first user; storing the content collection in association with the first user and the second user; receiving an indication of second user input to share the content collection with a third user selected by the second user, the third user corresponding to a contact of the second user; and associating the content collection with the third user.
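As a sketch of the associations the abstract describes, the collection-to-user store might look like the following (class and method names, and the contact check, are illustrative assumptions, not details from the filing):

```python
from collections import defaultdict

class CollectionShares:
    """Minimal sketch: map each content collection id to the set of associated user ids."""

    def __init__(self):
        self._shares = defaultdict(set)

    def share(self, collection_id, from_user, to_user, contacts):
        """Associate a collection with to_user, provided to_user is a contact of from_user."""
        if to_user not in contacts.get(from_user, set()):
            raise ValueError("recipient must be a contact of the sharer")
        self._shares[collection_id].update({from_user, to_user})

    def members(self, collection_id):
        """Return every user the collection is associated with."""
        return self._shares[collection_id]
```

A first user sharing with a contact, who then re-shares with one of their own contacts, simply grows the association set for that collection.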

IPC Classes

  • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
  • G06F 16/41 - Indexing; Data structures therefor; Storage structures
  • G06F 16/44 - Browsing; Visualisation therefor

6.

INFINITE-SCALE CITY SYNTHESIS

      
Application Number 19366925
Status Pending
Filing Date 2025-10-23
First Publication Date 2026-02-19
Owner Snap Inc. (USA)
Inventor
  • Chai, Menglei
  • Lee, Hsin-Ying
  • Lin, Chieh
  • Menapace, Willi
  • Siarohin, Aliaksandr
  • Tulyakov, Sergey

Abstract

An environment synthesis framework generates virtual environments from a synthesized two-dimensional (2D) satellite map of a geographic area, a three-dimensional (3D) voxel environment, and a voxel-based neural rendering framework. In an example implementation, the synthesized 2D satellite map is generated by a map synthesis generative adversarial network (GAN) which is trained using sample city datasets. The multi-stage framework lifts the 2D map into a set of 3D octrees, generates an octree-based 3D voxel environment, and then converts it into a texturized 3D virtual environment using a neural rendering GAN and a set of pseudo ground truth images. The resulting 3D virtual environment is texturized, lifelike, editable, traversable in virtual reality (VR) and augmented reality (AR) experiences, and very large in scale.

IPC Classes

  • G06T 17/05 - Geographic models
  • G06T 7/174 - Segmentation; Edge detection involving the use of two or more images
  • G06T 17/00 - 3D modelling for computer graphics

7.

GEOFENCED AI LANDMARK INFORMATION SYSTEM

      
Application Number 19298558
Status Pending
Filing Date 2025-08-13
First Publication Date 2026-02-19
Owner Snap Inc. (USA)
Inventor Shivji, Suraya

Abstract

A method is provided for delivering location-based information using artificial intelligence. The method includes automatically inferring a location of a user system at a geographic landmark based on location data, and triggering an AI assistant in response to the inferred location. The AI assistant generates information about the geographic landmark, and a graphical indication of the AI-generated information is displayed proximate to a graphical representation of the user on a map interface. Upon user selection of the graphical indication, a chat conversation with the AI assistant is initiated, the conversation including the AI-generated information about the geographic landmark.
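The geofence trigger described above can be sketched as a simple radius test against known landmark coordinates (the haversine check, the radius value, and the landmark table are illustrative assumptions, not details from the filing):

```python
import math

LANDMARK_RADIUS_M = 150.0  # hypothetical geofence radius around each landmark

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6_371_000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def infer_landmark(user_lat, user_lon, landmarks):
    """Return the first landmark whose geofence contains the user, else None."""
    for name, (lat, lon) in landmarks.items():
        if haversine_m(user_lat, user_lon, lat, lon) <= LANDMARK_RADIUS_M:
            return name  # at this point the AI assistant would be triggered
    return None
```

A production system would likely use a spatial index rather than a linear scan, but the inferred-location-then-trigger flow is the same.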

IPC Classes

  • G06F 9/451 - Execution arrangements for user interfaces

8.

CONTEXT-BASED VIRTUAL OBJECT RENDERING

      
Application Number 19371231
Status Pending
Filing Date 2025-10-28
First Publication Date 2026-02-19
Owner Snap Inc. (USA)
Inventor
  • Goodrich, Kyle
  • Hare, Samuel Edward
  • Lazarov, Maxim Maximov
  • Mathew, Tony
  • Mcphee, Andrew James
  • Moreno, Daniel
  • Shang, Wentao

Abstract

Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and a method for rendering a virtual object in a real-world environment depicted in image content based on contextual information. A virtual object template is selected. One or more stylizations for the virtual object template are determined based on contextual information associated with a computing device. A virtual object is generated by applying the one or more stylizations to the virtual object template. The virtual object is rendered within a 3D space captured within a camera feed of the computing device.

IPC Classes

  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
  • G06T 19/00 - Manipulating 3D models or images for computer graphics

9.

SOFTWARE DEVELOPMENT KIT FOR IMAGE PROCESSING

      
Application Number 19368635
Status Pending
Filing Date 2025-10-24
First Publication Date 2026-02-19
Owner Snap Inc. (USA)
Inventor
  • Charlton, Ebony James
  • Mandia, Patrick
  • Mourkogiannis, Celia Nicole
  • Sokolov, Mykhailo

Abstract

A modular image processing SDK comprises an API to receive API calls from third party software running on a portable device including a camera. SDK logic receives and processes commands and parameters received from the API that are based on the API calls received from the third party software. An annotation system performs image processing operations on a feed from the camera based on image processing instructions and parameters received by the annotation system from the SDK logic. The image processing is based at least in part on augmented reality content generator data (or AR content generators), user input and sensor data.

IPC Classes

  • G06F 9/54 - Interprogram communication
  • G06F 9/451 - Execution arrangements for user interfaces
  • G06T 11/00 - 2D [Two Dimensional] image generation

10.

GEOFENCED AI LANDMARK INFORMATION SYSTEM

      
Application Number US2025041777
Publication Number 2026/039507
Status In Force
Filing Date 2025-08-13
Publication Date 2026-02-19
Owner SNAP INC. (USA)
Inventor Shivji, Suraya

Abstract

A method is provided for delivering location-based information using artificial intelligence. The method includes automatically inferring a location of a user system at a geographic landmark based on location data, and triggering an AI assistant in response to the inferred location. The AI assistant generates information about the geographic landmark, and a graphical indication of the AI-generated information is displayed proximate to a graphical representation of the user on a map interface. Upon user selection of the graphical indication, a chat conversation with the AI assistant is initiated, the conversation including the AI-generated information about the geographic landmark.

IPC Classes

11.

Accurate and anonymized attribution

      
Application Number 17491251
Grant Number 12554885
Status In Force
Filing Date 2021-09-30
First Publication Date 2026-02-17
Grant Date 2026-02-17
Owner Snap Inc. (USA)
Inventor
  • Beebe, Matthew
  • Blackwood, John Cain
  • Chopra, Samarth
  • Datta, Amit
  • Deshpande, Apoorvaa
  • Yeganeh, Bahador

Abstract

Systems and methods herein describe accurate and anonymized attribution. The described systems and methods access a set of impression data, access a set of conversion data, and, for every predefined time period, generate a salt value, generate a hashed set of impression data by appending the salt value to at least a portion of entries in the set of impression data, generate a hashed set of conversion data by appending the salt value to at least a portion of entries in the set of conversion data, discard the generated salt value, generate an intersection set of the hashed set of impression data and the hashed set of conversion data, determine a count based on the intersection set, and store the count.
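The salted-hash intersection the abstract walks through can be sketched as follows (the function names and the SHA-256/`secrets` choices are illustrative assumptions; the filing does not specify a particular hash or salt mechanism):

```python
import hashlib
import secrets

def hash_with_salt(entries, salt):
    """Hash each entry after appending the per-period salt."""
    return {hashlib.sha256((e + salt).encode()).hexdigest() for e in entries}

def attribute_count(impressions, conversions):
    """Count entries present in both sets without retaining linkable identifiers."""
    salt = secrets.token_hex(16)  # fresh salt generated for this time period
    hashed_impressions = hash_with_salt(impressions, salt)
    hashed_conversions = hash_with_salt(conversions, salt)
    del salt  # discard the salt so the hashes cannot be recomputed or linked later
    return len(hashed_impressions & hashed_conversions)
```

Because the salt is discarded after hashing, only the count survives each period; the hashed sets cannot be joined against a future period's data.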

IPC Classes

  • H04L 29/06 - Communication control; Communication processing characterised by a protocol
  • G06F 16/22 - Indexing; Data structures therefor; Storage structures
  • G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules

12.

Brain-aware extended reality

      
Application Number 19081554
Grant Number 12554327
Status In Force
Filing Date 2025-03-17
First Publication Date 2026-02-17
Grant Date 2026-02-17
Owner Snap Inc. (USA)
Inventor
  • Barascud, Nicolas
  • Barbot, Antoine
  • Berriche, Hanna
  • El Bouri, Rasheed
  • Gentet, Enguerrand
  • Hwang, Steven
  • Kouider, Sid
  • Oustrière, Bertrand
  • Ployart, Guillaume
  • Royen, Clement
  • Steinmetz, Nelson

Abstract

An extended reality (XR) system is provided that monitors neurological signals to determine an engagement of a user with a real-world environment. The XR system continuously monitors neurological signals of a user through a processor operating in a low-power mode. The XR system generates an engagement signal by analyzing endogenous brain patterns in the neurological signals. In response to the engagement signal, the XR system activates environmental sensors to capture real-world environment data. The XR system generates contextual data from the captured environment data and determines XR content to provide to the user based on the contextual data. The XR system selectively activates XR capabilities to display the determined XR content.

IPC Classes

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer

13.

RF SHIELDING STRUCTURE FOR EXTENDED REALITY GLASSES

      
Application Number 19328618
Status Pending
Filing Date 2025-09-15
First Publication Date 2026-02-12
Owner Snap Inc. (USA)
Inventor
  • Olgun, Ugur
  • Steger, Stephen Andrew
  • Wakser, Jordan

Abstract

A support arm assembly for a head-worn device provides radio frequency (RF) shielding for a projector. A metal support arm, configured to structurally attach to a rear structural element and an optical element holder of the head-worn device, forms a rear face, a bottom face, and a top face of an enclosure. A metal front face of the enclosure attaches to the optical element holder, and defines a front aperture for permitting passage of light from an exit pupil of the projector toward an input optical element. The metal support arm forms a structural support joining the optical element holder to the rear structural element without placing mechanical load on the projector. A first side face of the enclosure and a second side face of the enclosure are electrically coupled to the metal support arm.

IPC Classes

14.

CONTEXT BASED MEDIA CURATION

      
Application Number 19359376
Status Pending
Filing Date 2025-10-15
First Publication Date 2026-02-12
Owner Snap Inc. (USA)
Inventor
  • Anvaripour, Kaveh
  • Charlton, Ebony James
  • Chen, Travis
  • Mourkogiannis, Celia Nicole
  • Tang, Kevin Dechau

Abstract

A media curation system configured to perform operations that include capturing an image at a client device, wherein the image includes a depiction of an object, identifying an object category of the object based on the depiction of the object within the image, accessing media content associated with the object category within a media repository, generating a presentation of the media content, and causing display of the presentation of the media content within the image at the client device.

IPC Classes

  • G06F 16/432 - Query formulation
  • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
  • G06F 16/2457 - Query processing with adaptation to user needs
  • G06F 16/41 - Indexing; Data structures therefor; Storage structures
  • G06F 16/438 - Presentation of query results
  • G06F 16/44 - Browsing; Visualisation therefor
  • G06F 16/48 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
  • G06K 7/14 - Methods or arrangements for sensing record carriers by electromagnetic radiation, e.g. optical sensing; Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
  • G06N 20/00 - Machine learning
  • H04L 51/10 - Multimedia information
  • H04L 51/212 - Monitoring or handling of messages using filtering or selective blocking

15.

CONTEXT-BASED SELECTION OF AUGMENTED REALITY EXPERIENCES

      
Application Number 19360184
Status Pending
Filing Date 2025-10-16
First Publication Date 2026-02-12
Owner Snap Inc. (USA)
Inventor
  • Monroy-Hernández, Andrés
  • Robinson, Ava Marie
  • Tham, Yu Jiang

Abstract

The user's experience of engaging with augmented reality (AR) technology, which permits users to interact with their environment and with each other, is enhanced by automatically selecting an AR experience that is suitable given the physical environment of the user. The physical environment of the user is the physical environment of the user's computing device and may include objects and/or conditions present in close proximity to the device, such as other humans, animals, and smart devices.

IPC Classes

  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
  • G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
  • H04L 67/131 - Protocols for games, networked simulations or virtual reality
  • H04L 67/52 - Network services specially adapted for the location of the user terminal

16.

LOCATION MAPPING FOR LARGE SCALE AUGMENTED-REALITY

      
Application Number 19364434
Status Pending
Filing Date 2025-10-21
First Publication Date 2026-02-12
Owner Snap Inc. (USA)
Inventor
  • Mccormack, Richard
  • Pan, Qi

Abstract

An augmented-reality system performs operations that include: accessing a data object that comprises image data, location data, and orientation data; applying a transformation to the data object to produce a rectified data object; generating a point cloud based on the rectified data object; assigning the point cloud to a location based on at least the location data of the data object; detecting a client device at the location; and loading the point cloud to the client device in response to the detecting the client device at the location.

IPC Classes

  • G06T 5/80 - Geometric correction
  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • H04W 4/021 - Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
  • H04W 4/029 - Location-based management or tracking services

17.

VIRTUAL MANIPULATION OF AUGMENTED AND VIRTUAL REALITY OBJECTS

      
Application Number 19366088
Status Pending
Filing Date 2025-10-22
First Publication Date 2026-02-12
Owner Snap Inc. (USA)
Inventor Spong, Mason

Abstract

Systems and methods are provided. For example, a method includes determining a position of a user's hand and identifying a manipulation gesture performed by the user targeting a virtual object. The method also includes determining a three-dimensional (3D) origin point based on the position of the user's hand when the manipulation gesture is performed, and determining a 3D end point based on a movement of the user's hand from the origin point. The method additionally includes deriving a 3D vector based on the 3D origin point and the 3D end point, and applying an action to the targeted virtual object based on the 3D vector, wherein the targeted virtual object is at a distance greater than the user's arm reach.
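The origin-to-end-point vector derivation in the abstract can be sketched in a few lines (the `gain` factor for acting on out-of-reach objects is an assumption, not stated in the filing):

```python
def derive_action_vector(origin, end):
    """3D displacement vector from the hand's origin point to its end point."""
    return tuple(e - o for o, e in zip(origin, end))

def apply_translation(object_pos, vector, gain=1.0):
    """Translate a distant virtual object along the derived vector.

    gain > 1 would let small hand movements move far-away objects; this
    amplification is a hypothetical design choice, not from the abstract."""
    return tuple(p + gain * v for p, v in zip(object_pos, vector))
```

The same derived vector could drive other actions (rotation, scaling) on the targeted object; translation is shown as the simplest case.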

IPC Classes

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
  • G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
  • G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
  • G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
  • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition

18.

QUOTABLE STORIES AND STICKERS FOR MESSAGING APPLICATIONS

      
Application Number 19366714
Status Pending
Filing Date 2025-10-23
First Publication Date 2026-02-12
Owner Snap Inc. (USA)
Inventor Heikkinen, Christie Marie

Abstract

A system includes one or more hardware processors and at least one memory storing instructions that cause the one or more hardware processors to perform operations including receiving, via a client device, one or more response messages to a question sticker or to a story, and selecting, via the client device, a response message of the one or more response messages for publication. The operations also include selecting, via the client device, a privacy setting for the response message, and publishing, via the client device, the response message.

IPC Classes

  • H04L 51/10 - Multimedia information
  • H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
  • H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services

19.

REQUEST QUEUE FOR SHARED CONTROL OF CAMERA DEVICE BY MULTIPLE DEVICES

      
Application Number 19360248
Status Pending
Filing Date 2025-10-16
First Publication Date 2026-02-12
Owner Snap Inc. (USA)
Inventor
  • Smith, Brian Anthony
  • Vaish, Rajan

Abstract

Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program, method, and user interface to facilitate a camera sharing session between two or more users. A camera sharing session is initiated based on session configuration information comprising a user identifier of a user permitted to control image capturing at a camera communicatively coupled to a first device. A trigger request is received from a second device and, in response, an image capture, which results in at least one image, is triggered at the camera and the image is transmitted to the second device.

IPC Classes

  • H04N 23/661 - Transmitting camera control signals through networks, e.g. control via the Internet
  • H04L 65/1069 - Session establishment or de-establishment
  • H04N 1/00 - Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof

20.

INCLUSIVE CAMERA

      
Application Number 19360694
Status Pending
Filing Date 2025-10-16
First Publication Date 2026-02-12
Owner Snap Inc. (USA)
Inventor Saint-Preux, Bertrand

Abstract

A method of generating a modified media content item based on a user's setting selection starts with a processor causing a camera personalization interface to be displayed by a display of a client device. The processor receives a setting selection from the client device via the camera personalization interface. The processor determines a pre-capture setting and a post-processing setting based on the setting selection. The processor calibrates a camera of the client device using the pre-capture setting. The processor receives a media content item including an image captured using the camera and generates a modified media content item by modifying the media content item using the post-processing setting. Other embodiments are described herein.

IPC Classes

  • H04N 23/62 - Control of parameters via user interfaces
  • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
  • H04N 17/00 - Diagnosis, testing or measuring for television systems or their details
  • H04N 23/63 - Control of cameras or camera modules by using electronic viewfinders
  • H04N 23/80 - Camera processing pipelines; Components thereof

21.

MESSAGING SYSTEM

      
Application Number 19360727
Status Pending
Filing Date 2025-10-16
First Publication Date 2026-02-12
Owner Snap Inc. (USA)
Inventor Voss, Jeremy

Abstract

The present invention relates to a messaging system configured to: receive a request to generate a message at a first client device; cause display of a message notification within an interface of a second client device, wherein the message was addressed to a recipient of the second client device; receive a request to un-send the message from the first client device; and remove the message notification from the interface at the second client device in response to the request to un-send the message, according to some example embodiments.

IPC Classes

  • H04L 51/42 - Mailbox-related aspects, e.g. synchronisation of mailboxes
  • H04L 51/046 - Interoperability with other network applications or services
  • H04L 51/10 - Multimedia information
  • H04L 51/212 - Monitoring or handling of messages using filtering or selective blocking
  • H04L 51/224 - Monitoring or handling of messages providing notification on incoming messages, e.g. pushed notifications of received messages
  • H04L 51/234 - Monitoring or handling of messages for tracking messages

22.

USER INTERFACE FOR POSE DRIVEN VIRTUAL EFFECTS

      
Application Number 19362041
Status Pending
Filing Date 2025-10-17
First Publication Date 2026-02-12
Owner Snap Inc. (USA)
Inventor
  • Alavi, Amir
  • Rykhliuk, Olha
  • Shi, Xintong
  • Solichin, Jonathan
  • Voronova, Olesia
  • Yagodin, Artem

Abstract

Systems and methods herein describe a method for capturing a video in real-time by an image capture device. The system provides a plurality of visual pose hints, identifies first pose information in the video while capturing the video, applies a first series of virtual effects to the video, identifies second pose information, and applies a second series of virtual effects to the video, the second series of virtual effects based on the first series of virtual effects.

IPC Classes

  • H04N 5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
  • G06T 11/00 - 2D [Two Dimensional] image generation
  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition
  • H04N 23/611 - Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
  • H04N 23/63 - Control of cameras or camera modules by using electronic viewfinders

23.

SOCIAL NETWORK POOLED POST CAPTURE

      
Application Number 19363464
Status Pending
Filing Date 2025-10-20
First Publication Date 2026-02-12
Owner Snap Inc. (USA)
Inventor
  • Al Majid, Newar Husam
  • Boyd, Nathan Kenneth

Abstract

A social network image pool system can capture one or more image data items (e.g., image, video) in a temporary persistent post pool. The post pool enables efficient capture of multiple image data items for publishing, allowing the items to be captured while preserving their editability before they are published to a social network site.

IPC Classes

  • H04L 67/141 - Setup of application sessions
  • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
  • G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
  • H04L 51/10 - Multimedia information
  • H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
  • H04L 67/143 - Termination or inactivation of sessions, e.g. event-controlled end of session

24.

PRODUCT IMAGE GENERATION BASED ON DIFFUSION MODEL

      
Application Number 19364881
Status Pending
Filing Date 2025-10-21
First Publication Date 2026-02-12
Owner Snap Inc. (USA)
Inventor
  • Assouline, Avihay
  • Berger, Itamar
  • Heimann, Jonathan

Abstract

Methods and systems are disclosed for generating an extended reality (XR) try-on experience based on an image produced by a diffusion model. The system receives an image depicting a real-world object and generates a prompt comprising a textual description of a fashion item. The system analyzes the image and the textual description of the fashion item using a generative machine learning model to generate an artificial image that depicts an artificial object that resembles the real-world object wearing an artificial fashion item matching the textual description of the fashion item. The system identifies an object comprising a real-world product image that matches visual attributes of the artificial fashion item and replaces the artificial fashion item in the artificial image with the object to generate an output image.

IPC Classes

  • G06T 11/60 - Editing figures and textCombining figures or text
  • G06F 40/20 - Natural language analysis
  • G06F 40/40 - Processing or translation of natural language
  • G06T 7/10 - SegmentationEdge detection
  • G06T 7/60 - Analysis of geometric attributes
  • G06T 11/00 - 2D [Two Dimensional] image generation
  • G06V 10/776 - ValidationPerformance evaluation
  • G06V 20/20 - ScenesScene-specific elements in augmented reality scenes

25.

EXTERNAL CONTROLLER FOR AN EYEWEAR DEVICE

      
Application Number 19366164
Status Pending
Filing Date 2025-10-22
First Publication Date 2026-02-12
Owner Snap Inc. (USA)
Inventor
  • Canberk, Ilteris Kaan
  • Hallberg, Matthew
  • Miller, William Miles
  • Tran, Lien Le Hong
  • Tucker, Michael Benson

Abstract

Systems and methods are provided for using an external controller with an AR device. The system establishes, by one or more processors of the AR device, a communication with an external client device. The system overlays, by the AR device, a first AR object on a real-world environment being viewed using the AR device. The system receives interaction data from the external client device representing one or more inputs received by the external client device and, in response, modifies the first AR object by the AR device.

IPC Classes

  • A63F 13/655 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition by importing photos, e.g. of the player
  • A63F 13/211 - Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
  • A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
  • A63F 13/573 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using trajectories of game objects, e.g. of a golf ball according to the point of impact
  • G02B 27/01 - Head-up displays
  • G06F 3/0346 - Pointing devices displaced or positioned by the userAccessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
  • G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
  • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
  • G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
  • G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
  • G06T 7/20 - Analysis of motion
  • G06V 20/20 - ScenesScene-specific elements in augmented reality scenes

26.

HAND SCALE FACTOR ESTIMATION FROM MOBILE INTERACTIONS

      
Application Number US2025040020
Publication Number 2026/035508
Status In Force
Filing Date 2025-07-31
Publication Date 2026-02-12
Owner SNAP INC. (USA)
Inventor
  • Zhou, Kai
  • Wei, Xinrong
  • Li, Xiao
  • Hu, Dunxu

Abstract

An XR system is provided that enhances user interaction within extended reality environments through precise hand scale estimation. The XR system is configured to capture tracking data of a user's hand as the user interacts with a mobile device. Concurrently, the XR system captures pose data of itself and uses the tracking data and the pose data to determine a reference line segment. This segment aids in calculating three-dimensional distances between node pairs of the user's hand. By employing these measurements, the XR system effectively calculates a hand scale factor that is used for accurately integrating the user's hands into an XR user interface.
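The core calculation — a reference line segment of known metric length anchoring node-pair distances measured in the tracker's arbitrary units — reduces to a single ratio. A minimal sketch, assuming the segment's real-world length is known (e.g., a phone's screen edge); the function names are illustrative:

```python
import math

def hand_scale_factor(ref_segment, ref_length_m, node_pairs):
    """Sketch: derive a metric scale from a reference segment of known
    length, then scale 3D distances between hand node pairs."""
    tracked_len = math.dist(*ref_segment)    # length in tracker units
    scale = ref_length_m / tracked_len
    return scale, [math.dist(a, b) * scale for a, b in node_pairs]

# Reference segment spans 2.0 tracker units but 0.16 m in the real world.
scale, dists = hand_scale_factor(
    ((0, 0, 0), (2, 0, 0)), 0.16,
    node_pairs=[((0, 0, 0), (1, 0, 0))],
)
```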

IPC Classes

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G02B 27/01 - Head-up displays

27.

HAND TOUCH DETECTION USING IMAGES

      
Application Number 18791097
Status Pending
Filing Date 2024-07-31
First Publication Date 2026-02-05
Owner Snap Inc. (USA)
Inventor Alsalka, Fayez

Abstract

An XR system is provided. This system captures images including images of a first hand of a user and a second hand of the user using one or more cameras. The XR system generates cropped images using the images, each cropped image including a surface of the first hand. The XR system detects a hand touch of the surface of the first hand by a digit of the second hand using the cropped images. The hand touch is used as an input into an XR user interface of the XR system. The surface of the first hand can be a palmar surface or a dorsal surface.

IPC Classes

  • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
  • G06V 10/774 - Generating sets of training patternsBootstrap methods, e.g. bagging or boosting
  • G06V 20/20 - ScenesScene-specific elements in augmented reality scenes
  • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestriansBody parts, e.g. hands

28.

HAND SCALE FACTOR ESTIMATION FROM MOBILE INTERACTIONS

      
Application Number 19348147
Status Pending
Filing Date 2025-10-02
First Publication Date 2026-02-05
Owner Snap Inc. (USA)
Inventor
  • Zhou, Kai
  • Wei, Xinrong
  • Li, Xiao
  • Hu, Dunxu

Abstract

An XR system is provided that enhances user interaction within extended reality environments through precise hand scale estimation. The XR system is configured to capture tracking data of a user's hand as the user interacts with a mobile device. Concurrently, the XR system captures pose data of itself and uses the tracking data and the pose data to determine a reference line segment. This segment aids in calculating three-dimensional distances between node pairs of the user's hand. By employing these measurements, the XR system effectively calculates a hand scale factor that is used for accurately integrating the user's hands into an XR user interface.

IPC Classes

  • G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures

29.

SYSTEMS AND METHODS FOR LOW POWER COMMON ELECTRODE VOLTAGE GENERATION FOR DISPLAYS

      
Application Number 19351756
Status Pending
Filing Date 2025-10-07
First Publication Date 2026-02-05
Owner Snap Inc. (USA)
Inventor Taylor, Stewart S.

Abstract

A system, circuit, and method for implementing a low power common electrode voltage for a display (e.g., LCoS display) having transistors with low to moderate breakdown voltages may include a first and a second low voltage amplifier, wherein the first amplifier generates a pixel voltage and the second amplifier generates a predetermined voltage. The circuit may include a common electrode circuit coupled to the first and second amplifier to generate a common electrode voltage. Particularly, the circuit may include a control circuit coupled to the common electrode circuit, wherein, during a first phase, the control circuit selectively controls the common electrode circuit to generate a low common electrode voltage based upon a negative value of the predetermined voltage. Further, during a second phase, the control circuit selectively controls the common electrode circuit to generate a high common electrode voltage based upon the sum of the predetermined voltage and the pixel voltage.
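The two-phase voltage relationship in the abstract is simple arithmetic and can be stated directly. A sketch with illustrative example voltages (the patent specifies the relationship, not these values):

```python
def vcom(phase, v_pre, v_pix):
    """Two-phase common-electrode voltage as described above:
    phase 1 -> negative of the predetermined voltage,
    phase 2 -> sum of the predetermined and pixel voltages."""
    return -v_pre if phase == 1 else v_pre + v_pix

low = vcom(1, v_pre=1.5, v_pix=0.8)    # low VCOM during phase 1
high = vcom(2, v_pre=1.5, v_pix=0.8)   # high VCOM during phase 2
```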

IPC Classes

  • G09G 3/36 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix by control of light from an independent source using liquid crystals

30.

BEAUTIFICATION TECHNIQUES FOR 3D DATA IN A MESSAGING SYSTEM

      
Application Number 19353066
Status Pending
Filing Date 2025-10-08
First Publication Date 2026-02-05
Owner Snap Inc. (USA)
Inventor
  • Goodrich, Kyle
  • Hare, Samuel Edward
  • Lazarov, Maxim Maximov
  • Mathew, Tony
  • Mcphee, Andrew James
  • Moreno, Daniel
  • Sagar, Dhritiman
  • Shang, Wentao

Abstract

The subject technology applies, to image data and depth data, a 3D effect including at least one beautification operation based on an augmented reality content generator, the 3D effect including a beautification operation, the beautification operation comprising modifying image data, the image data including a region corresponding to a representation of a face, the beautification operation comprising using a machine learning model for at least one of smoothing blemishes or preserving facial skin texture. The subject technology generates a depth map using at least the depth data. The subject technology generates a segmentation mask based at least on the image data. The subject technology performs background inpainting and blurring of the image data using at least the segmentation mask to generate background inpainted image data. The subject technology generates a 3D message based at least in part on the applied 3D effect including the at least one beautification operation.

IPC Classes

  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06F 3/04842 - Selection of displayed objects or displayed text elements
  • G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
  • G06N 20/00 - Machine learning
  • G06T 7/194 - SegmentationEdge detection involving foreground-background segmentation
  • G06T 7/50 - Depth or shape recovery
  • G06T 7/507 - Depth or shape recovery from shading
  • G06T 15/50 - Lighting effects
  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
  • H04L 67/131 - Protocols for games, networked simulations or virtual reality

31.

SOCIAL ACCOUNT RECOVERY

      
Application Number 19354496
Status Pending
Filing Date 2025-10-09
First Publication Date 2026-02-05
Owner Snap Inc. (USA)
Inventor
  • Pihur, Vasyl
  • He, Jianping
  • Ramsey, Luke
  • Copping, Benedict

Abstract

Systems and methods are provided for performing operations including: receiving, via a messaging application of a user device, a request to recover access to an account of a user of the messaging application; accessing a first object corresponding to a first key; receiving, from a first friend of the user on the messaging application, a second object corresponding to a first portion of a second key; receiving, from a second friend of the user on the messaging application, a third object corresponding to a second portion of the second key; deriving the second key based on the second and third objects; and recovering access to the account of the user based on the first key and the second key.
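The second key above is derived from two portions held by two friends. The abstract does not name a splitting scheme; a simple XOR split, shown here purely as an illustration, has the required property that both portions are needed to derive the key:

```python
import secrets

def split_key(key: bytes):
    """Illustrative XOR split (not the patent's stated scheme):
    neither portion alone reveals anything about the key."""
    part1 = secrets.token_bytes(len(key))
    part2 = bytes(a ^ b for a, b in zip(key, part1))
    return part1, part2

def derive_key(part1: bytes, part2: bytes) -> bytes:
    # XOR of both portions reconstructs the original key.
    return bytes(a ^ b for a, b in zip(part1, part2))

second_key = b"recovery-secret!"
p1, p2 = split_key(second_key)   # held by two different friends
recovered = derive_key(p1, p2)
```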

IPC Classes

  • G06F 21/45 - Structures or tools for the administration of authentication
  • G06F 21/40 - User authentication by quorum, i.e. whereby two or more security principals are required
  • H04L 9/08 - Key distribution
  • G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules

32.

SHARING CONTENT ITEM COLLECTIONS IN A CHAT

      
Application Number 19355112
Status Pending
Filing Date 2025-10-10
First Publication Date 2026-02-05
Owner Snap Inc. (USA)
Inventor
  • Boyd, Nathan Kenneth
  • Grippi, Daniel Vincent
  • Taitz, David Phillip
  • Xia, Xingnan
  • Moreno Cuellar, Daniel

Abstract

Methods and systems are disclosed for sharing collections of content items in chat sessions. The methods and systems receive a request to share a first content item and present a GUI comprising a first set of options and a second set of options, the first set of options being associated with adding the first content item to a collection of content items that is accessible to a plurality of recipients, the second set of options being associated with sending the first content item to individual recipients. The methods and systems determine a set of target recipients of the first content item and select a content sharing link between a first link to the collection of content items and a second link directly to the first content item. The methods and systems send, to a target recipient, the content sharing link that has been selected.

IPC Classes

  • H04N 21/4788 - Supplemental services, e.g. displaying phone caller identification or shopping application communicating with other users, e.g. chatting
  • H04N 21/231 - Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers or prioritizing data for deletion
  • H04N 21/431 - Generation of visual interfacesContent or additional data rendering

33.

SELECTIVE COLLABORATIVE OBJECT ACCESS BASED ON TIMESTAMP

      
Application Number 19356824
Status Pending
Filing Date 2025-10-13
First Publication Date 2026-02-05
Owner Snap Inc. (USA)
Inventor
  • Cho, Youjean
  • Ji, Chen
  • Liu, Fannie
  • Monroy-Hernández, Andrés
  • Tsai, Tsung-Yu
  • Vaish, Rajan

Abstract

Collaborative sessions in which access to added virtual content is selectively made available to participants/users by a collaborative system. The system receives a request from a user to join a session, and associates a timestamp with the user corresponding to receipt of the request. Users can edit the collaborative object if the timestamp is within the collaborative duration period and can view the collaborative object if the timestamp is after the collaborative duration period.
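The timestamp rule above maps directly to a small predicate: join within the collaborative duration period grants edit access, joining after grants view-only access. A minimal sketch with hypothetical names:

```python
from datetime import datetime, timedelta

def access_level(join_time, session_start, duration):
    """Sketch: editors joined within the collaborative duration
    period; later joiners may only view the collaborative object."""
    return "edit" if join_time <= session_start + duration else "view"

start = datetime(2026, 1, 1, 12, 0)
dur = timedelta(hours=1)
early = access_level(datetime(2026, 1, 1, 12, 30), start, dur)
late = access_level(datetime(2026, 1, 1, 14, 0), start, dur)
```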

IPC Classes

  • G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

34.

MONOCULAR CAMERA DEFOCUS FACE MEASURING

      
Application Number 19358057
Status Pending
Filing Date 2025-10-14
First Publication Date 2026-02-05
Owner Snap Inc. (USA)
Inventor
  • Krishnan Gorumkonda, Gurunandan
  • Nayar, Shree K.
  • Wu, Yicheng

Abstract

A device that measures a size of a user's face, referred to as face scaling, using a monocular camera. Depth is calculated from sparse feature points. A face mesh is used to improve the estimation accuracy. A processing pipeline detects face features by applying a face landmark detection algorithm to find the important face feature points such as the eyes, nose, and mouth. The processing pipeline estimates feature points depth using depth obtained through image defocus. The processing pipeline further scales the face using an estimated depth of the face features.
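Once defocus yields a depth estimate for a pair of face landmarks, their metric separation follows from the pinhole camera model. This sketch shows only that final scaling step, not the patent's full pipeline; the example numbers are hypothetical:

```python
def metric_distance(pixel_dist, depth_m, focal_px):
    """Pinhole-model sketch: metric separation of two landmarks
    from their pixel distance, estimated depth, and the focal
    length expressed in pixels."""
    return pixel_dist * depth_m / focal_px

# Eyes 200 px apart at 0.5 m depth with a 1600 px focal length.
ipd_m = metric_distance(pixel_dist=200, depth_m=0.5, focal_px=1600)
```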

IPC Classes

  • G06T 7/70 - Determining position or orientation of objects or cameras
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions

35.

HAND TOUCH DETECTION USING IMAGES

      
Application Number US2025039923
Publication Number 2026/030474
Status In Force
Filing Date 2025-07-30
Publication Date 2026-02-05
Owner SNAP INC. (USA)
Inventor Alsalka, Fayez

Abstract

An XR system is provided. This system captures images including images of a first hand of a user and a second hand of the user using one or more cameras. The XR system generates cropped images using the images, each cropped image including a surface of the first hand. The XR system detects a hand touch of the surface of the first hand by a digit of the second hand using the cropped images. The hand touch is used as an input into an XR user interface of the XR system. The surface of the first hand can be a palmar surface or a dorsal surface.

IPC Classes

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
  • G06F 3/038 - Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry

36.

3D MODELS FOR AUGMENTED REALITY (AR)

      
Application Number 19354360
Status Pending
Filing Date 2025-10-09
First Publication Date 2026-02-05
Owner Snap Inc. (USA)
Inventor
  • Aljubeh, Marwan
  • Bakker, Gregory James
  • Nersesian, Eric
  • Tanathong, Supannee
  • Zhao, Yanli

Abstract

The present disclosure provides a method for creating a 3D model of a reference surface. The method includes capturing, using a capture device, a plurality of data points on the reference surface, determining a position and an orientation of the capture device related to the capture of the plurality of data points, creating a 3D data representation of the reference surface based on the plurality of data points, creating a location tracking data representation of the reference surface based on the plurality of data points on the reference surface and the position and the orientation of the capture device, and creating the 3D model of the reference surface based on the 3D data representation and the location tracking data representation of the reference surface.

IPC Classes

  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
  • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
  • G06T 7/579 - Depth or shape recovery from multiple images from motion
  • G06T 7/70 - Determining position or orientation of objects or cameras
  • G06T 19/00 - Manipulating 3D models or images for computer graphics

37.

MACHINE LEARNING-BASED MODIFICATION OF IMAGE CONTENT

      
Application Number 19358100
Status Pending
Filing Date 2025-10-14
First Publication Date 2026-02-05
Owner Snap Inc. (USA)
Inventor Mourkogiannis, Celia Nicole

Abstract

Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for modifying a captured image. The program and method provide for displaying, by a messaging application, an image captured by a device camera; providing, by the messaging application, a user interface for selecting from among a plurality of content modifiers to modify the image, the plurality of content modifiers including a first content modifier corresponding to a machine learning model trained with a plurality of image pairs, each image pair including a first image and a second image corresponding to a modified version of the first image; receiving user selection of the first content modifier from among the plurality of content modifiers; determining, in response to receiving the user selection, a modified version of the image based on output from the machine learning model; and displaying the modified version of the image.

IPC Classes

  • G06T 11/00 - 2D [Two Dimensional] image generation
  • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
  • G06N 20/00 - Machine learning
  • H04L 51/10 - Multimedia information
  • H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services

38.

EYEWEAR HAVING CURRENT CONSUMPTION OPTIMIZATION OF WIRELESS SYSTEM INTERFACE

      
Application Number 19358280
Status Pending
Filing Date 2025-10-14
First Publication Date 2026-02-05
Owner Snap Inc. (USA)
Inventor Vadivelu, Praveen Babu

Abstract

Eyewear having a high-speed wireless transceiver, including a processor having a high-speed interface for communicating high-speed communications and a low-speed interface for communicating low-speed communications. The high-speed interface is disabled to have no standby current, and only the low-speed interface is used when only low-speed communications are needed, to save current. When communications are received via the high-speed wireless transceiver, only the low-speed interface is initially used, and the high-speed interface is enabled later if necessary. The high-speed interface can be a high-speed universal serial bus (USB) interface, and the low-speed interface can be a universal asynchronous receiver-transmitter (UART) interface. A USB hub is controlled by the processor to selectively enable the USB interface.

IPC Classes

39.

PROCESSING AND TRANSMITTING ACTIVE REGIONS OF DISPLAY FOR IMPROVED PERFORMANCE

      
Application Number 19270225
Status Pending
Filing Date 2025-07-15
First Publication Date 2026-01-29
Owner Snap Inc. (USA)
Inventor Boyce, Aaron L.

Abstract

A system is disclosed, including a display, a processor and a memory. The memory stores instructions that, when executed by the processor, configure the system to perform operations. Active region data is generated that includes, for each of one or more active regions, active region location data and active region content. The active region data is transmitted to a display having a display area. For each active region, the active region content is displayed at an active region location of the display area based on the active region location data, the active region content being displayed at a higher spatiotemporal information density than content displayed in the display area outside of the active regions.
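The bandwidth motivation above is easy to quantify: transmitting only active-region content and its location data is far cheaper than sending the full display area. A minimal sketch with hypothetical structures:

```python
from dataclasses import dataclass

@dataclass
class ActiveRegion:
    # Hypothetical active region: location data plus content.
    x: int
    y: int
    width: int
    height: int
    content: bytes

def bytes_to_transmit(regions, full_w, full_h, bytes_per_px=3):
    """Sketch: compare transmitting only active regions against
    a full frame at the same pixel depth."""
    active = sum(r.width * r.height * bytes_per_px for r in regions)
    full = full_w * full_h * bytes_per_px
    return active, full

# One 100x50 active region on a 1920x1080 display.
active, full = bytes_to_transmit(
    [ActiveRegion(0, 0, 100, 50, b"")], 1920, 1080
)
```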

IPC Classes

  • G06F 3/14 - Digital output to display device
  • G06T 11/00 - 2D [Two Dimensional] image generation

40.

REAL-TIME FASHION ITEM TRANSFER SYSTEM

      
Application Number 19291479
Status Pending
Filing Date 2025-08-05
First Publication Date 2026-01-29
Owner Snap Inc. (USA)
Inventor
  • Zhou, Kai
  • Luidolt, Laura Rosalia
  • Tam, Himmy
  • Guler, Riza Alp
  • Kokkinos, Iason
  • Assouline, Avihay

Abstract

Methods and systems are disclosed for transferring garments from a real-world object to a virtual object. The system receives, by a client device, an image that includes a depiction of a real-world object having a fashion item in a real-world environment. The system accesses a three-dimensional (3D) avatar model of a human and generates a graphic item corresponding to the fashion item being worn by the real-world object depicted in the image. The system modifies the 3D avatar model of the human based on the graphic item and presents the 3D avatar model that has been modified based on the graphic item within a view of the real-world environment on the client device.

IPC Classes

  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings

41.

FINGER GESTURE RECOGNITION VIA ACOUSTIC-OPTIC SENSOR FUSION

      
Application Number 19345158
Status Pending
Filing Date 2025-09-30
First Publication Date 2026-01-29
Owner Snap Inc. (USA)
Inventor
  • Krishnan Gorumkonda, Gurunandan
  • Nayar, Shree K.
  • Xu, Chenhan
  • Zhou, Bing

Abstract

A finger gesture recognition system is provided. The finger gesture recognition system includes one or more audio sensors and one or more optic sensors. The finger gesture recognition system captures, using the one or more audio sensors, audio signal data of a finger gesture being made by a user, and captures, using the one or more optic sensors, optic signal data of the finger gesture. The finger gesture recognition system recognizes the finger gesture based on the audio signal data and the optic signal data and communicates finger gesture data of the recognized finger gesture to an Augmented Reality/Combined Reality/Virtual Reality (XR) application.
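The abstract describes recognizing a gesture from both audio and optic signal data but does not specify how the modalities are combined. One common approach, shown here only as an illustration, is late fusion: weight per-gesture confidence scores from each modality and take the argmax. All names and weights are hypothetical:

```python
def fuse_scores(audio_scores, optic_scores, w_audio=0.5):
    """Illustrative late fusion of per-gesture confidence scores
    from the audio and optic sensors; returns the best gesture."""
    fused = {
        g: w_audio * audio_scores[g] + (1 - w_audio) * optic_scores[g]
        for g in audio_scores
    }
    return max(fused, key=fused.get)

audio = {"tap": 0.8, "swipe": 0.2}   # scores from audio model
optic = {"tap": 0.5, "swipe": 0.6}   # scores from optic model
gesture = fuse_scores(audio, optic)
```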

IPC Classes

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06F 3/16 - Sound inputSound output
  • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestriansBody parts, e.g. hands
  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition

42.

SOCIAL MEDIA POST SUBSCRIBE REQUESTS FOR BUFFER USER ACCOUNTS

      
Application Number 19345373
Status Pending
Filing Date 2025-09-30
First Publication Date 2026-01-29
Owner Snap Inc. (USA)
Inventor
  • Allen, Nicholas R.
  • Burfitt, Joseph

Abstract

An approach for publishing posts on a social network through one or more user accounts with different levels of attribution is disclosed. A secure user account publishes a post through a programmatically linked buffer user account. The secure user account and the buffer user account are programmatically linked. Posts published via the buffer user account can be modified to add attribution image data or other visual indicators of the original post creator.

IPC Classes

  • G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
  • H04L 9/40 - Network security protocols
  • H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
  • H04L 67/306 - User profiles

43.

WEB DOCUMENT ENHANCEMENT

      
Application Number 19345895
Status Pending
Filing Date 2025-09-30
First Publication Date 2026-01-29
Owner Snap Inc. (USA)
Inventor
  • Rotem, Efrat
  • Krieger, Ariel
  • Merali, Emmanuel

Abstract

A method for enhancing a presentation of a network document by a client terminal with real time social media content. The method comprises analyzing a content in a web document to identify a relation to a first of a plurality of multi participant events documented in an event dataset, each of the plurality of multi participant events is held in a geographical venue which hosts an audience of a plurality of participants, matching a plurality of event indicating tags of each of a plurality of user uploaded media content files with at least one feature of the first multi participant event to identify a group of user uploaded media content files selected from the plurality of user uploaded media content files, and forwarding at least some members of the group to a simultaneous presentation on a browser running on a client terminal and presenting the web document.

IPC Classes

  • G06F 16/9536 - Search customisation based on social or collaborative filtering
  • G06F 16/28 - Databases characterised by their database models, e.g. relational or object models
  • G06F 16/951 - IndexingWeb crawling techniques
  • G06F 16/9535 - Search customisation based on user profiles and personalisation
  • G06F 16/9538 - Presentation of query results
  • G06F 16/958 - Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
  • G06F 40/20 - Natural language analysis
  • G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
  • H04L 67/02 - Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]

44.

PROVIDING A TEMPLATE FOR MEDIA CONTENT GENERATION

      
Application Number 19347377
Status Pending
Filing Date 2025-10-01
First Publication Date 2026-01-29
Owner Snap Inc. (USA)
Inventor
  • Mahar, Matthew
  • Kapil, Vineet
  • Anvaripour, Kaveh
  • Lankage, Ranidu
  • Shevchenko, Anton
  • Su, Xin
  • Lin, Benjamin
  • Lerner, Noam
  • Manandhar, Rasana
  • Tare, Prasad
  • Kendall, Dustin

Abstract

Aspects of the present disclosure involve a system comprising a storage medium storing a program and method for providing a template for media content generation. The program and method provide for receiving, from a first device of a first user, an indication of first user input setting properties to create a template for combining user-selected media with an audio track, the properties specifying the audio track, a sequence of media slots, a duration for each media slot, and predefined edits for applying to the media slots; causing display of a user interface on a second device of a second user, the user interface for assigning a respective video or photo to each media slot; and receiving, from the second device, a media content item generated based on second user input provided via the user interface, the second user input assigning the respective video or photo to each media slot.
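The template described above — an audio track, a sequence of media slots with per-slot durations and predefined edits, filled by a second user — maps naturally onto a small data structure. A sketch with hypothetical names:

```python
from dataclasses import dataclass

@dataclass
class MediaSlot:
    duration_s: float
    edits: tuple = ()   # predefined edits applied to assigned media

@dataclass
class Template:
    audio_track: str
    slots: list

def render(template, assignments):
    """Sketch: a second user assigns a video or photo to each slot;
    the result pairs each assignment with its slot's duration/edits."""
    if len(assignments) != len(template.slots):
        raise ValueError("every media slot needs an assignment")
    return [(media, slot.duration_s, slot.edits)
            for media, slot in zip(assignments, template.slots)]

tmpl = Template("track.mp3", [MediaSlot(2.0, ("zoom",)), MediaSlot(3.5)])
item = render(tmpl, ["clip1.mp4", "photo1.jpg"])
```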

IPC Classes

  • H04N 23/63 - Control of cameras or camera modules by using electronic viewfinders
  • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus

45.

AUGMENTED REALITY EXPERIENCES OF COLOR PALETTES IN A MESSAGING SYSTEM

      
Application Number 19347483
Status Pending
Filing Date 2025-10-01
First Publication Date 2026-01-29
Owner Snap Inc. (USA)
Inventor
  • Luo, Jean
  • Mourkogiannis, Celia Nicole

Abstract

The subject technology receives image data including a representation of a physical item. The subject technology analyzes the image data to determine an object corresponding to the physical item. The subject technology identifies a set of colors corresponding to a set of regions of the determined object. The subject technology analyzes second image data to detect a second object corresponding to a representation of a particular body part of a user. The subject technology generates augmented reality content based at least in part on the identified set of colors and the detected second object. The subject technology causes display, at a client device, the augmented reality content applied to the detected second object.

IPC Classes

  • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
  • G06T 7/40 - Analysis of texture
  • G06T 11/00 - 2D [Two Dimensional] image generation
  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06V 10/56 - Extraction of image or video features relating to colour
  • G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition
  • H04L 51/10 - Multimedia information

46.

RECREATING PERIPHERAL VISION ON A WEARABLE DEVICE

      
Application Number 19347924
Status Pending
Filing Date 2025-10-02
First Publication Date 2026-01-29
Owner Snap Inc. (USA)
Inventor
  • Zare Seisan, Farid
  • Loenngren, Ulf Oscar Michel

Abstract

A head-wearable apparatus includes a frame having a front piece configured to hold left and right lenses. A left temple and a right temple are each coupled to the front piece. A camera system includes one or more cameras coupled to the front piece, one or more left peripheral cameras coupled to an outside surface of the frame, and one or more right peripheral cameras coupled to an outside surface of the frame. A left peripheral display is coupled to an inside surface of the frame. The left peripheral display is configured to receive and display input from the one or more left peripheral cameras. A right peripheral display is coupled to an inside surface of the frame. The right peripheral display is configured to receive and display input from the one or more right peripheral cameras.

IPC Classes

  • G02B 27/01 - Head-up displays
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer

47.

CURSOR FUNCTIONALITY FOR AUGMENTED REALITY CONTENT IN MESSAGING SYSTEMS

      
Application Number 19348204
Status Pending
Filing Date 2025-10-02
First Publication Date 2026-01-29
Owner Snap Inc. (USA)
Inventor
  • Goodrich, Kyle
  • Lazarov, Maxim Maximov
  • Mcphee, Andrew James
  • Moreno, Daniel

Abstract

The subject technology detects a location and a position of a representation of a finger in a set of frames captured by a camera of a client device. The subject technology generates a first virtual object based at least in part on the location and the position of the representation of the finger. The subject technology renders the first virtual object within a first scene. The subject technology detects a first collision event corresponding to a first collider of the first virtual object intersecting with a second collider of a second virtual object. The subject technology modifies a set of dimensions of the second virtual object to a second set of dimensions. The subject technology renders the second virtual object based on the second set of dimensions within a second scene. The subject technology provides for display the rendered second virtual object within the second scene.
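The collision-event step in this abstract — a first collider intersecting a second collider — can be sketched with a standard sphere-collider overlap test (an illustrative technique, not Snap's implementation; the names and radii here are hypothetical):

```python
import math

def spheres_collide(c1, r1, c2, r2):
    """A collision event fires when two sphere colliders overlap,
    i.e. the distance between centres is at most the sum of the radii."""
    return math.dist(c1, c2) <= r1 + r2

# Fingertip collider touching (and then clearing) a virtual object's collider
touching = spheres_collide((0, 0, 0), 0.05, (0.08, 0, 0), 0.05)   # True
separated = spheres_collide((0, 0, 0), 0.05, (0.2, 0, 0), 0.05)   # False
```

On a detected collision, the abstract's next step would resize the second object (swap its dimension set) before re-rendering the scene.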

IPC Classes

  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06T 7/20 - Analysis of motion
  • G06T 7/70 - Determining position or orientation of objects or cameras
  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
  • G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition

48.

AR GLASSES AS IOT REMOTE CONTROL

      
Application Number 19349680
Status Pending
Filing Date 2025-10-03
First Publication Date 2026-01-29
Owner Snap Inc. (USA)
Inventor
  • Moll, Sharon
  • Gurgul, Piotr

Abstract

AR-enabled wearable electronic devices such as smart glasses are adapted for use as an Internet of Things (IoT) remote control device, where the user can control a pointer on a television screen, computer screen, or other IoT-enabled device to select items by looking at them and making selections using gestures. Built-in six-degrees-of-freedom (6DoF) tracking capabilities are used to move the pointer on the screen to facilitate navigation. The display screen is tracked in real-world coordinates to determine the point of intersection of the user's view with the screen using raycasting techniques. Hand and head gesture detection are used to allow the user to execute a variety of control actions by performing different gestures. The techniques are particularly useful for smart displays that offer AR-enhanced content that can be viewed in the displays of the AR-enabled wearable electronic devices.
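The raycasting step the abstract describes — intersecting the user's view ray with the tracked screen plane — reduces to a standard ray-plane intersection (a minimal sketch in world coordinates, not the patented implementation):

```python
import numpy as np

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Return the point where a view ray hits the screen plane, or None
    if the ray is parallel to the plane or the plane is behind the viewer."""
    direction = direction / np.linalg.norm(direction)
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        return None  # view ray parallel to the screen plane
    t = np.dot(plane_normal, plane_point - origin) / denom
    if t < 0:
        return None  # screen is behind the viewer
    return origin + t * direction

# Head pose at the origin looking down -Z at a screen 2 m away
hit = ray_plane_intersection(
    origin=np.array([0.0, 0.0, 0.0]),
    direction=np.array([0.0, 0.0, -1.0]),
    plane_point=np.array([0.0, 0.0, -2.0]),
    plane_normal=np.array([0.0, 0.0, 1.0]),
)  # -> [0, 0, -2]
```

The hit point, expressed in the screen's local frame, would then drive the on-screen pointer position.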

IPC Classes

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G02B 27/01 - Head-up displays
  • G06F 3/0346 - Pointing devices displaced or positioned by the userAccessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
  • G06K 7/14 - Methods or arrangements for sensing record carriers by electromagnetic radiation, e.g. optical sensing; Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
  • G06T 7/50 - Depth or shape recovery
  • G06T 7/70 - Determining position or orientation of objects or cameras
  • G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
  • G16Y 40/30 - Control

49.

LOCATION VISUALIZATION ON MAP

      
Application Number 18785090
Status Pending
Filing Date 2024-07-26
First Publication Date 2026-01-29
Owner Snap Inc. (USA)
Inventor
  • Boyd, Nathan Kenneth
  • Camper, Brett
  • Grigsby, Travis M.
  • Kreiser, Kevin
  • Li, Mengyao
  • Samaranayake, Suraj Vindana
  • Thornberry, Kevin Joseph
  • Young, Patrick

Abstract

Described is a system for generating location visualization on a map interface by identifying a current location of a user that is initiating an interaction function of an interaction client; identifying a map corresponding to the current location of the user; identifying one or more map tiles associated with the map; receiving historical location data of the user that is associated with the current location of the user; converting the historical location data into an overall polygon comprising a plurality of polygons based on the identified one or more map tiles; and displaying the map with the plurality of polygons on a user interface.
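The tile-to-polygon conversion described here can be sketched by snapping each historical location sample to a map-tile grid and emitting one square polygon per visited cell (a toy sketch with a hypothetical tile size in degrees, not the patented conversion):

```python
def location_history_to_tile_polygons(points, tile=0.01):
    """Snap each historical (lat, lng) sample to a tile-grid cell and emit
    one square polygon per visited cell (corners listed in order)."""
    cells = {(int(lat // tile), int(lng // tile)) for lat, lng in points}
    polygons = []
    for row, col in sorted(cells):
        lat0, lng0 = row * tile, col * tile
        polygons.append([
            (lat0, lng0), (lat0, lng0 + tile),
            (lat0 + tile, lng0 + tile), (lat0 + tile, lng0),
        ])
    return polygons

# Three samples: two fall in one tile cell, the third in another
polys = location_history_to_tile_polygons(
    [(40.7128, -74.0060), (40.7130, -74.0058), (40.7306, -73.9866)]
)  # -> two square polygons
```

The union of these squares forms the "overall polygon" the abstract refers to, ready for overlay on the map interface.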

IPC Classes

  • G06T 11/20 - Drawing from basic elements, e.g. lines or circles

50.

SEARCHING SOCIAL MEDIA CONTENT

      
Application Number 19344052
Status Pending
Filing Date 2025-09-29
First Publication Date 2026-01-29
Owner Snap Inc. (USA)
Inventor
  • Al Majid, Newar Husam
  • Dakka, Wisam
  • Giovannini, Donald
  • Madeira, Andre
  • Damian, Andrei
  • Mir Ghaderi, Seyed Reza
  • Lin, Yaming
  • Kunal, Ranveer
  • Cai, Congxing
  • Araujo, Robson
  • Fernandes, Guilherme
  • Ahn, Jungho

Abstract

Various embodiments provide for systems, methods, and computer-readable storage media that improve media content search functionality and curation of media content. For instance, various embodiments described in this document provide features that can present media content items in the form of a dynamic collection of media content items upon a user typing into a search bar. In another instance, various embodiments described herein improve media content search functionality by ranking user-facing search features using input signals.

IPC Classes

  • G06F 16/951 - Indexing; Web crawling techniques
  • G06F 16/14 - Details of searching files based on file metadata
  • G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism

51.

AUGMENTED REALITY SHARED SCREEN SPACE

      
Application Number 19346863
Status Pending
Filing Date 2025-10-01
First Publication Date 2026-01-29
Owner Snap Inc. (USA)
Inventor
  • Canberk, Ilteris Kaan
  • Jung, Bernhard
  • Kang, Shin Hwun
  • Skrypnyk, Daria

Abstract

Systems, methods, and computer readable media for an augmented reality (AR) shared screen space. Examples relate to a host augmented reality (AR) device sharing a screen and a relative location of the AR device to the screen with guest AR devices, where the guest AR devices share a relative location of the guest AR devices to a copy of the screen displayed on the display of the guest AR devices, and where the users of the AR devices may see each other's location with the use of avatars around the shared screen and add augmentations to the shared screen. The yaw, roll, and pitch of the head of each avatar tracks the movement of the head of the user of the AR wearable device.

IPC Classes

  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • H04L 67/131 - Protocols for games, networked simulations or virtual reality

52.

HAND SURFACE NORMAL ESTIMATION

      
Application Number 19347470
Status Pending
Filing Date 2025-10-01
First Publication Date 2026-01-29
Owner Snap Inc. (USA)
Inventor
  • Guler, Riza Alp
  • Kulon, Dominik
  • Tam, Himmy
  • Wang, Haoyang

Abstract

A system for augmenting images using hand surface normal estimation is provided. In a model training phase, 3D models of hands are generated using 3D data of hands in a variety of positions. Target normal training data is generated that includes normals of surfaces of the 3D models and synthetic 2D image training data corresponding to the 3D models and the normals. The target normal training data and the synthetic image training data are used to train a normal estimation model. The normal estimation model is used by an interactive application to generate augmentations that are applied to hand image data.
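The "normals of surfaces of the 3D models" used as training targets can be computed with the standard cross-product technique for triangle meshes (a generic mesh routine, not Snap's training pipeline):

```python
import numpy as np

def vertex_normals(vertices, faces):
    """Per-vertex unit normals: accumulate each face's cross-product
    normal onto its three vertices, then normalise."""
    normals = np.zeros_like(vertices, dtype=float)
    for i, j, k in faces:
        # Face normal via the cross product of two edge vectors
        n = np.cross(vertices[j] - vertices[i], vertices[k] - vertices[i])
        normals[i] += n
        normals[j] += n
        normals[k] += n
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.where(lengths == 0, 1, lengths)

# A single triangle lying in the XY plane -> all normals point along +Z
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
norms = vertex_normals(verts, [(0, 1, 2)])
```

Rendering these normals from the same camera poses as the synthetic 2D images would yield paired (image, normal-map) training examples of the kind the abstract describes.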

IPC Classes

  • G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation
  • G06T 7/40 - Analysis of texture
  • G06T 11/00 - 2D [Two Dimensional] image generation

53.

ADAPTIVE IMAGE PROCESSING FOR AUGMENTED REALITY DEVICE

      
Application Number 19348749
Status Pending
Filing Date 2025-10-02
First Publication Date 2026-01-29
Owner Snap Inc. (USA)
Inventor
  • Muttenthaler, Thomas
  • Zhou, Kai

Abstract

Examples describe adaptive image processing for an augmented reality (AR) device. An input image is captured by a camera of the AR device, and a region of interest of the input image is determined. The region of interest is associated with an object that is being tracked using an object tracking system. A crop-and-scale order of an image processing operation directed at the region of interest is determined for the input image. One or more object tracking parameters may be used to determine the crop-and-scale order. The crop-and-scale order is dynamically adjustable between a first order and a second order. An output image is generated from the input image by performing the image processing operation according to the determined crop-and-scale order for the particular input image. The output image can be accessed by the object tracking system to track the object.
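The dynamically adjustable crop-and-scale order can be sketched as two interchangeable pipelines over the same region of interest — cropping first touches fewer pixels, scaling first can preserve more context (a toy nearest-neighbour sketch, not the device's image pipeline):

```python
import numpy as np

def crop(img, roi):
    y0, y1, x0, x1 = roi
    return img[y0:y1, x0:x1]

def scale(img, factor):
    # Nearest-neighbour resize, enough to illustrate the ordering
    h, w = img.shape[:2]
    ys = (np.arange(int(h * factor)) / factor).astype(int)
    xs = (np.arange(int(w * factor)) / factor).astype(int)
    return img[ys][:, xs]

def process(img, roi, factor, crop_first):
    """Apply crop and scale in either order; the order is the
    dynamically adjustable parameter described in the abstract."""
    if crop_first:
        return scale(crop(img, roi), factor)
    # Scale first, then crop the ROI rescaled into the new coordinates
    y0, y1, x0, x1 = (int(v * factor) for v in roi)
    return crop(scale(img, factor), (y0, y1, x0, x1))

img = np.arange(100 * 100).reshape(100, 100)
a = process(img, (10, 50, 20, 60), 0.5, crop_first=True)
b = process(img, (10, 50, 20, 60), 0.5, crop_first=False)
# Same output geometry either way; only the cost/quality trade-off differs
```

Object-tracking parameters (e.g. how fast or small the tracked object is) would pick which order to use per input image.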

IPC Classes

  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G02B 27/01 - Head-up displays
  • G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
  • G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]

54.

PROCESSING AND TRANSMITTING ACTIVE REGIONS OF DISPLAY FOR IMPROVED PERFORMANCE

      
Application Number US2025038825
Publication Number 2026/024820
Status In Force
Filing Date 2025-07-23
Publication Date 2026-01-29
Owner SNAP INC. (USA)
Inventor Boyce, Aaron L.

Abstract

A system is disclosed, including a display, a processor and a memory. The memory stores instructions that, when executed by the processor, configure the system to perform operations. Active region data is generated that includes, for each of one or more active regions, active region location data and active region content. The active region data is transmitted to a display having a display area. For each active region, the active region content is displayed at an active region location of the display area based on the active region location data, the active region content being displayed at a higher spatiotemporal information density than content displayed in the display area outside of the active regions.

IPC Classes

  • G09G 5/02 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
  • G09G 5/14 - Display of multiple viewports
  • G09G 5/38 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of individual graphic patterns using a bit-mapped memory with means for controlling the display position

55.

Eyewear with IR transparent frame

      
Application Number 18216711
Grant Number 12535697
Status In Force
Filing Date 2023-06-30
First Publication Date 2026-01-27
Grant Date 2026-01-27
Owner Snap Inc. (USA)
Inventor
  • Hintermann, Mathias
  • Streets, Nicholas
  • You, Choonshin
  • Zhang, Bo Ya

Abstract

An eyewear device having a frame formed of an infrared (IR) transmissive material in a continuous piece without discrete openings. IR cameras are placed behind areas of the frame designated as optical surfaces. In an example, the optical surfaces are positioned at lower corners of the frame and are angled inwardly to prevent obstructions, e.g., due to scratches and smudges. The optical surfaces have uniform thickness, are smooth, and are angled such that IR light communicated to the cameras passes at an angle through the optical surfaces. The IR cameras are angled downwardly to view objects forward and below a user wearing the eyewear device, e.g., to detect hand gestures.

IPC Classes

  • H04N 23/50 - Constructional details
  • G02C 5/00 - Constructions of non-optical parts
  • G02C 11/00 - Non-optical adjuncts; Attachment thereof
  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition
  • H04N 23/20 - Cameras or camera modules comprising electronic image sensorsControl thereof for generating image signals from infrared radiation only

56.

Terraced battery system for wearable electronic device

      
Application Number 18196353
Grant Number 12535855
Status In Force
Filing Date 2023-05-11
First Publication Date 2026-01-27
Grant Date 2026-01-27
Owner Snap Inc. (USA)
Inventor
  • Hristov, Stoyan
  • Nilles, Gerald

Abstract

A terraced battery system is provided that may enhance the battery packaging efficiency within an organic shape of a wearable electronic device such as an electronic eyewear device. The terraced battery includes several stacked cells of different geometries. The terraced battery geometries are selected to better accommodate organic (non-trapezoidal and non-cylindrical) shapes of the battery housing in the wearable electronic device. In an example, the terraced battery geometry is adapted to accommodate the organic shape of a battery housing in the temples of an augmented reality electronic eyewear device. As the number of the battery cells or terraces increases, the battery packaging efficiency can be further improved within an organic shape of the battery housing. The increased packaging efficiency for the battery enables increased battery life within organically shaped enclosures.

IPC Classes

  • G06F 1/16 - Constructional details or arrangements
  • G02C 11/00 - Non-optical adjuncts; Attachment thereof
  • H01M 10/04 - Construction or manufacture in general

57.

Display screen or portion thereof with a graphical user interface

      
Application Number 29963286
Grant Number D1110367
Status In Force
Filing Date 2024-09-16
First Publication Date 2026-01-27
Grant Date 2026-01-27
Owner SNAP INC. (USA)
Inventor
  • Hwang, Viktoria
  • Stolzenberg, Karen
  • Vignau, Mathieu Emmanuel

58.

3D PAINTING ON AN EYEWEAR DEVICE

      
Application Number 19340401
Status Pending
Filing Date 2025-09-25
First Publication Date 2026-01-22
Owner Snap Inc. (USA)
Inventor
  • Goodrich, Kyle
  • Mcphee, Andrew James
  • Moreno, Daniel

Abstract

Systems and methods are provided for performing operations comprising: displaying a plurality of augmented reality painting options; detecting, by a touch input interface of the eyewear device, a first touch input comprising a single finger touching the touch input interface; selecting a first augmented reality painting option of the plurality of augmented reality painting options in response to the first touch input; while continuing to detect continuous touch between the single finger and the touch input interface following selection of the first augmented reality painting option, displaying a second augmented reality painting option related to the first augmented reality painting option; and performing a selection associated with the second augmented reality painting option in response to detecting, by the touch input interface, movement of the single finger along the touch input interface while continuing to detect the continuous touch.

IPC Classes

  • G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
  • G02B 27/01 - Head-up displays
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
  • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
  • G06F 3/04842 - Selection of displayed objects or displayed text elements
  • G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
  • G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

59.

EMBEDDINGS REPRESENTING VISUAL AUGMENTATIONS

      
Application Number 19340574
Status Pending
Filing Date 2025-09-25
First Publication Date 2026-01-22
Owner Snap Inc. (USA)
Inventor
  • Zhou, Zhenpeng
  • Poirson, Patrick
  • Gusarov, Maksim
  • Wang, Chen
  • Tovstyi, Oleg

Abstract

An input video item that includes a target visual augmentation is accessed. A machine learning model uses the input video item to generate an embedding. The embedding may comprise a vector representation of a visual effect of the target visual augmentation. The machine learning model is trained, in an unsupervised training phase, to minimize loss between training video representations generated within each of a plurality of training sets. Each training set comprises a plurality of different training video items that each include a predefined visual augmentation. Based on the generation of the embedding of the input video item, the target visual augmentation is mapped to an augmentation identifier.
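The unsupervised objective — minimizing loss between representations of videos that share a visual augmentation — has the flavour of pulling each training set's embeddings toward a common point. A minimal stand-in (not the patented loss) is the mean squared distance to each set's centroid:

```python
import numpy as np

def intra_set_loss(embeddings_per_set):
    """Mean squared distance of each embedding to its set's centroid;
    videos sharing an augmentation should embed close together."""
    loss = 0.0
    for embs in embeddings_per_set:
        embs = np.asarray(embs, dtype=float)
        centroid = embs.mean(axis=0)
        loss += np.mean(np.sum((embs - centroid) ** 2, axis=1))
    return loss / len(embeddings_per_set)

# Two training sets with tightly clustered embeddings score lower
tight = intra_set_loss([[[1.0, 0.0], [1.0, 0.1]], [[0.0, 1.0], [0.0, 0.9]]])
# The same embeddings scattered across sets score higher
loose = intra_set_loss([[[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]]])
```

A practical objective would also push different sets apart (a contrastive term), which this sketch omits for brevity.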

IPC Classes

  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06T 1/00 - General purpose image data processing
  • G06V 10/74 - Image or video pattern matchingProximity measures in feature spaces
  • H04N 5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects

60.

REAL-TIME ANONYMIZATION OF IMAGES AND AUDIO

      
Application Number 19340037
Status Pending
Filing Date 2025-09-25
First Publication Date 2026-01-22
Owner Snap Inc. (USA)
Inventor Mathur, Dheeresh Pratap

Abstract

Systems and methods are provided. A system includes a display and a camera. The system additionally includes a secure data vault system. The secure data vault system includes a sandbox system operatively coupled to the camera and configured to receive camera data from the camera, wherein in operation of the sandbox system, the camera only sends camera data to the sandbox system, and wherein the sandbox system comprises an execution environment configured to restrict execution of instructions to a predefined memory address range. The secure data vault system additionally includes a display and rendering system operatively coupled to the sandbox system and configured to render an image based on the camera data processed via the instructions and to display the image via the display, wherein the display and rendering system is configured to blur sections of the image based on private information derived from the image.
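The blur-sections step — redacting only the regions that carry private information while leaving the rest of the frame intact — can be sketched with a toy box blur over listed regions (illustrative only, not the device's rendering pipeline):

```python
import numpy as np

def blur_regions(image, regions, k=5):
    """Box-blur only the listed (y0, y1, x0, x1) regions of a grayscale
    frame, leaving all pixels outside those regions untouched."""
    out = image.astype(float).copy()
    for y0, y1, x0, x1 in regions:
        patch = out[y0:y1, x0:x1]
        # Average each pixel over a k x k neighbourhood via edge padding
        padded = np.pad(patch, k // 2, mode="edge")
        blurred = np.zeros_like(patch)
        for dy in range(k):
            for dx in range(k):
                blurred += padded[dy:dy + patch.shape[0], dx:dx + patch.shape[1]]
        out[y0:y1, x0:x1] = blurred / (k * k)
    return out

frame = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)
redacted = blur_regions(frame, [(10, 30, 10, 30)])  # e.g. a detected face box
```

In the described system, the region list would come from detectors running inside the sandbox, and only the redacted frame would reach the display and rendering system.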

IPC Classes

  • G06F 21/84 - Protecting input, output or interconnection devices output devices, e.g. displays or monitors
  • G06F 3/16 - Sound input; Sound output
  • G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
  • G06T 5/70 - Denoising; Smoothing
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
  • G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
  • G06V 30/40 - Document-oriented image-based pattern recognition

61.

SECURING OF AUGMENTED REALITY (AR) SYSTEMS

      
Application Number 19340160
Status Pending
Filing Date 2025-09-25
First Publication Date 2026-01-22
Owner Snap Inc. (USA)
Inventor Mathur, Dheeresh Pratap

Abstract

An augmented reality (AR) system includes a display, a camera, and a secure data vault system. The secure data vault system includes a sandbox system operatively coupled to the camera and configured to receive camera data from the camera, wherein in operation of the AR system, the camera only sends camera data to the sandbox system, and wherein the sandbox system comprises an execution environment configured to restrict execution of instructions to a predefined memory address range. The secure data vault system additionally includes a display and rendering system operatively coupled to the sandbox system and configured to render an image based on the camera data processed via the instructions and to display the image via the display, wherein the display is configured to show both the image and a real-world environment surrounding the AR system.

IPC Classes

  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06F 21/44 - Program or device authentication
  • G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
  • G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
  • G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules

62.

CONTEXTUAL VISUAL AND VOICE SEARCH FROM ELECTRONIC EYEWEAR DEVICE

      
Application Number 19340232
Status Pending
Filing Date 2025-09-25
First Publication Date 2026-01-22
Owner Snap Inc. (USA)
Inventor
  • Meisenholder, David
  • Sheffield, Kameron
  • Fortier, Joseph Timothy
  • Zeng, Raymond
  • Rybin, Andrei
  • Geddes, Jonathan

Abstract

Augmented reality features are selected for presentation to a display of an electronic eyewear device by using a camera of the electronic eyewear device to capture a scan image and processing the scan image to extract contextual signals. Simultaneously, voice data from the user is captured by a microphone of the electronic eyewear device and voice-to-text conversion of the captured voice data is performed to identify keywords in the voice data. The extracted contextual signals and the identified keywords are then used to select at least one augmented reality feature that matches the extracted contextual signals and the identified keywords, and the selected augmented reality feature is presented to the display for user selection. The contextual information thus refines the search results to provide the augmented reality feature best suited for the context of the scan image captured by the electronic eyewear device.
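The matching step — selecting the AR feature that best fits both the scene's contextual signals and the spoken keywords — can be sketched as a simple overlap score over tagged candidates (a hypothetical scoring stand-in; feature names and tags here are invented for illustration):

```python
def rank_ar_features(features, context_signals, keywords):
    """Score each candidate AR feature by how many contextual signals
    and spoken keywords its tags match, then return names best-first."""
    context_signals, keywords = set(context_signals), set(keywords)
    scored = [
        (len(tags & context_signals) + len(tags & keywords), name)
        for name, tags in features.items()
    ]
    return [name for score, name in sorted(scored, reverse=True)]

features = {
    "dog_ears": {"dog", "pet", "outdoor"},
    "rain_overlay": {"rain", "outdoor"},
}
# Scan image yields {"dog", "outdoor"}; voice-to-text yields {"pet"}
ranking = rank_ar_features(features, {"dog", "outdoor"}, {"pet"})
```

A production system would likely use learned embeddings rather than literal tag overlap, but the refinement idea — context narrowing the voice-driven search — is the same.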

IPC Classes

  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
  • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
  • G06F 3/16 - Sound inputSound output
  • G06V 10/40 - Extraction of image or video features
  • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
  • G06V 10/74 - Image or video pattern matchingProximity measures in feature spaces
  • G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
  • G06V 20/60 - Type of objects
  • G10L 15/08 - Speech classification or search
  • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialog

63.

ROBOTIC LEARNING OF ASSEMBLY TASKS USING AUGMENTED REALITY

      
Application Number 19340334
Status Pending
Filing Date 2025-09-25
First Publication Date 2026-01-22
Owner Snap Inc. (USA)
Inventor
  • Zhou, Kai
  • Schoisengeier, Adrian

Abstract

A method for programming a robotic system by demonstration is described. In one aspect, the method includes displaying a first virtual object in a display of an augmented reality (AR) device, the first virtual object corresponding to a first physical object in a physical environment of the AR device, tracking, using the AR device, a manipulation of the first virtual object by a user of the AR device, identifying an initial state and a final state of the first virtual object based on the tracking, the initial state corresponding to an initial pose of the first virtual object, the final state corresponding to a final pose of the first virtual object, and programming by demonstration a robotic system using the tracking of the manipulation of the first virtual object, the initial state of the first virtual object, and the final state of the first virtual object.

IPC Classes

  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • B25J 13/08 - Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06T 17/00 - 3D modelling for computer graphics
  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
  • G06V 20/64 - Three-dimensional objects

64.

BUTTON-SWITCH ASSEMBLY FOR AR-VR DEVICE

      
Application Number 19340457
Status Pending
Filing Date 2025-09-25
First Publication Date 2026-01-22
Owner Snap Inc. (USA)
Inventor
  • Chen, Chao
  • Kraz, Mark
  • Streets, Nicholas

Abstract

A button-switch assembly provides a preloaded force design with an enhanced tactile feel while also providing a non-wobbly (stabilized) configuration and water/dust protection functions. Features of the button-switch assembly include excellent tactile feel through a stack up of a soft rubber layer of a deflection web and a hard PET film shim layer, a consistent pre-loaded push force through use of an angled deflection web, a button flange that minimizes rotation of the button while providing a consistent tactile feel even when the edge of the button is depressed, double sided sealing adhesive layers that seal off the opening in the housing for accepting the button to prevent water/dust from entering the opening, and gluing the button to the rubber deflection web in variable thicknesses to provide a stable tension force to minimize wobble of the button when depressed.

IPC Classes

  • H01H 13/14 - Operating parts, e.g. push-button
  • H01H 13/06 - Dustproof, splashproof, drip-proof, waterproof, or flameproof casings
  • H01H 13/705 - Switches having rectilinearly-movable operating part or parts adapted for pushing or pulling in one direction only, e.g. push-button switch having a plurality of operating members associated with different sets of contacts, e.g. keyboard with contacts carried by or formed from layers in a multilayer structure, e.g. membrane switches characterised by construction, mounting or arrangement of operating parts, e.g. push-buttons or keys

65.

Augmented reality (AR) content-sharing with AR-deprived audience

      
Application Number 18950505
Grant Number 12530075
Status In Force
Filing Date 2024-11-18
First Publication Date 2026-01-20
Grant Date 2026-01-20
Owner Snap Inc. (USA)
Inventor
  • Evangelidis, Georgios
  • Konjahhin, Vsevolod
  • Luidolt, Laura Rosalia
  • Schreiberhuber, Simon
  • Siryk, Dmytro
  • Tymchyshyn, Ihor
  • Zhou, Kai

Abstract

A head-worn device system equipped with cameras, display devices, and processors uses stored instructions to facilitate interactions between an augmented reality (AR) device and an external display system. When executed, these instructions establish a communications link between the AR device and the external display. The system identifies the pose of the external display and receives user inputs from the AR device user, relating to interactions with virtual objects in a real-world setting. It also identifies the positions of viewers watching the external display. Based on the external display's pose and the viewer locations, the system generates display data for the virtual object, ensuring it appears correctly on the external display. This display data is then transmitted to the external display system, completing the interaction loop and enhancing the viewing experience for the audience.

IPC Classes

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G02B 27/01 - Head-up displays
  • G06T 7/20 - Analysis of motion
  • G06T 7/579 - Depth or shape recovery from multiple images from motion
  • G06T 7/70 - Determining position or orientation of objects or cameras
  • G06T 19/00 - Manipulating 3D models or images for computer graphics

66.

Location-based social media search mechanism with dynamically variable search period

      
Application Number 15965756
Grant Number 12530408
Status In Force
Filing Date 2018-04-27
First Publication Date 2026-01-20
Grant Date 2026-01-20
Owner Snap Inc. (USA)
Inventor
  • Amitay, Daniel
  • Brody, Jonathan
  • Garcia, Timothy Jordan
  • Gorkin, Leonid
  • Lin, Andrew
  • Lin, Walton
  • Spiegel, Evan

Abstract

A social media platform provides a map-based graphical user interface (GUI) for accessing social media content submitted for public accessibility via the social media platform. The GUI includes a map providing interactive location-based searching: selecting a target location in the GUI, such as by tapping or clicking at that location, triggers a search for social media content having geo-tag data indicating geographic locations within a geographical search area centered on the target location. The search period for which content is returned is dynamically variable based on the duration for which the tap or click is held.
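
The duration-dependent search period can be sketched as a simple mapping from press duration to time window. The linear ramp, the bounds, and the function name below are illustrative assumptions, not the claimed formula:

```python
def search_period_hours(press_duration_s: float,
                        min_hours: float = 12.0,
                        max_hours: float = 48.0,
                        max_press_s: float = 2.0) -> float:
    """Map how long the user holds the tap/click to a search window.

    A short tap returns the minimum period; holding the press extends
    the window linearly up to `max_hours`. All parameter values and the
    linear ramp are hypothetical.
    """
    t = min(max(press_duration_s, 0.0), max_press_s)  # clamp the hold time
    return min_hours + (max_hours - min_hours) * (t / max_press_s)
```

A quick tap (`0 s`) yields the default 12-hour window, while a 2-second hold widens the search to 48 hours.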

IPC Classes  ?

  • G06F 3/04842 - Selection of displayed objects or displayed text elements
  • G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance; using icons
  • G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
  • G06F 16/29 - Geographical information databases
  • G06F 16/903 - Querying
  • G06F 16/9038 - Presentation of query results
  • G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism

67.

MANUAL AND AUTOMATIC MAP TILT CONSOLIDATOR

      
Application Number 18767717
Status Pending
Filing Date 2024-07-09
First Publication Date 2026-01-15
Owner Snap Inc (USA)
Inventor
  • Boyd, Nathan Kenneth
  • Camper, Brett
  • Gu, Yuanlong
  • Lin, Robert Derui
  • Rakhamimov, Daniel
  • Samaranayake, Suraj Vindana
  • Zuanović, Luka

Abstract

Zoom and tilt operations are performed on a map display based on a predetermined relationship between zooming and tilting that is defined by a plurality of zoom-tilt transition points. If user zoom input is received after a user has tilted the map representation to a user-selected tilt angle, the next zoom-tilt transition point in the specified relationship, in the zoom direction, is determined. The display of the map representation is then updated at a rate of tilt and zoom based at least in part on the user zoom input and on the difference between the next zoom-tilt transition point and both the initial zoom level and the user-selected tilt angle.
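
The transition-point lookup described above can be sketched as follows; the specific zoom-tilt pairs and the rate formula are illustrative assumptions, since the abstract only requires that some piecewise relationship exist:

```python
# Hypothetical zoom-tilt transition points: (zoom level, tilt in degrees).
TRANSITIONS = [(10.0, 0.0), (14.0, 30.0), (17.0, 60.0)]

def next_transition_point(zoom: float, zooming_in: bool, transitions=TRANSITIONS):
    """Return the next transition point in the direction of the zoom input."""
    if zooming_in:
        for z, t in transitions:
            if z > zoom:
                return (z, t)
        return transitions[-1]
    for z, t in reversed(transitions):
        if z < zoom:
            return (z, t)
    return transitions[0]

def tilt_rate(zoom: float, user_tilt: float, zooming_in: bool,
              transitions=TRANSITIONS) -> float:
    """Degrees of tilt change per unit of zoom needed to reach the next
    transition point's tilt from the user-selected tilt angle."""
    z_next, t_next = next_transition_point(zoom, zooming_in, transitions)
    dz = z_next - zoom
    return (t_next - user_tilt) / dz if dz else 0.0
```

For example, zooming in from level 12 with a user-selected tilt of 10° heads toward the (14, 30°) point, so tilt advances 10° per zoom level.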

IPC Classes  ?

  • G06F 16/29 - Geographical information databases
  • G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour

68.

IMAGE-TO-TEXT LARGE LANGUAGE MODELS (LLM)

      
Application Number 19323734
Status Pending
Filing Date 2025-09-09
First Publication Date 2026-01-15
Owner Snap Inc. (USA)
Inventor
  • Mikhailiuk, Aliaksei
  • Gao, Tianxiang
  • Smetanin, Sergey
  • Savchenkov, Pavel
  • Kim, Hee Hun
  • Yadav, Neha
  • L, Bingqian

Abstract

Described is a system for generating a textual response from a received image by determining participation in an interaction function by a first user of an interaction system, identifying an image associated with the participation, processing data associated with the image using a first machine learning model to identify one or more features within the image, and generating a prompt based on the identified one or more features. The system then identifies instructions for a second machine learning model, processes the prompt and the instructions using the second machine learning model to generate a textual response to the image, and causes display of the textual response within the interaction function to the first user.

IPC Classes  ?

  • G06F 40/40 - Processing or translation of natural language
  • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
  • G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
  • G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
  • G06V 20/40 - Scenes; Scene-specific elements in video content
  • G06V 20/50 - Context or environment of the image

69.

INNER SPEECH ITERATIVE LEARNING LOOP

      
Application Number 19330498
Status Pending
Filing Date 2025-09-16
First Publication Date 2026-01-15
Owner Snap Inc. (USA)
Inventor
  • Jimenez, Marcos
  • Meshulam, Meir
  • Ziv, Assif

Abstract

Methods and systems are disclosed for iteratively training a user and an ML model to produce accurate inner speech outputs. The methods and systems access an ML model and perform a first training iteration in which EMG data corresponding to inner speech is processed by the machine learning model to decode the EMG data into a set of predicted phonemes, phoneme sounds, words or phrases. The methods and systems present the set of predicted phonemes, phoneme sounds, words or phrases to the user and form a first set of training data comprising the set of predicted phonemes, phoneme sounds, words or phrases, the EMG data, and the set of specified phonemes, phoneme sounds, words or phrases as ground truth information. The methods and systems update parameters of the ML model based on the first set of training data prior to starting a second training iteration.
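
The iteration structure above can be sketched as a generic human-in-the-loop training pass; `decode` and `update` are caller-supplied stand-ins for the ML components, not the patented model:

```python
def iterative_training_loop(model, decode, update, sessions):
    """One pass per session: decode EMG into predicted phonemes, record
    the prediction alongside the user's intended phonemes as ground
    truth, then update the model before the next iteration.

    `model` is any state object; `decode(model, emg)` returns a
    prediction; `update(model, training_data)` returns new state.
    """
    training_data = []
    for emg, intended in sessions:
        predicted = decode(model, emg)
        training_data.append({"emg": emg, "predicted": predicted,
                              "truth": intended})
        model = update(model, training_data)  # retrain before next round
    return model, training_data
```

With a toy dictionary model, the second iteration over the same EMG sample already benefits from the first update.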

IPC Classes  ?

  • G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
  • G10L 15/02 - Feature extraction for speech recognition; Selection of recognition unit
  • G10L 15/16 - Speech classification or search using artificial neural networks

70.

ALIGNMENT OF AUGMENTED REALITY COMPONENTS WITH THE PHYSICAL WORLD

      
Application Number 19331598
Status Pending
Filing Date 2025-09-17
First Publication Date 2026-01-15
Owner Snap Inc. (USA)
Inventor
  • Tran, Lien Le Hong
  • Borys, Olha
  • Canberk, Ilteris Kaan
  • Maier, Tobias
  • Zillner, Jakob

Abstract

A system is disclosed, including a processor and a memory. The memory stores instructions that, when executed by the processor, configure the system to perform operations. Surface plane information is obtained, defining a surface plane passing through a surface location and oriented according to a surface normal. An edge is detected in an image. Virtual content is presented, having a virtual position based on an orientation of the edge and the surface plane information.
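
Positioning content on a plane defined by a surface location and normal reduces to standard point-plane projection. A minimal sketch with plain tuples (a hypothetical helper, not the patented method):

```python
def project_onto_plane(point, surface_location, surface_normal):
    """Project a 3-D point onto the plane passing through
    `surface_location` with unit normal `surface_normal`.

    Subtracts the point's signed distance along the normal, which is
    the usual way to snap a virtual object onto a detected surface.
    """
    # Signed distance from the point to the plane along the normal.
    d = sum((p - s) * n for p, s, n in
            zip(point, surface_location, surface_normal))
    return tuple(p - d * n for p, n in zip(point, surface_normal))
```

For example, a point floating above a floor plane (normal pointing up) lands directly beneath itself on the floor.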

IPC Classes  ?

  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
  • G06T 5/20 - Image enhancement or restoration using local operators
  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
  • G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces

71.

MULTI-SOC HAND-TRACKING PLATFORM

      
Application Number 19332707
Status Pending
Filing Date 2025-09-18
First Publication Date 2026-01-15
Owner Snap Inc. (USA)
Inventor
  • Coconu, Liviu Marius
  • Colascione, Daniel
  • Zare Seisan, Farid
  • Harris, Daniel
  • Pounds, Jennica

Abstract

A multi-System on Chip (SoC) hand-tracking platform is provided. The multi-SoC hand-tracking platform includes a computer vision SoC and one or more application SoCs. The computer vision SoC hosts a hand-tracking input pipeline. The one or more application SoCs host one or more applications that are consumers of input event data generated by the hand-tracking input pipeline. The applications communicate with some components of the hand-tracking input pipeline through a shared-memory buffer and with other components through Inter-Process Communication (IPC) method calls.

IPC Classes  ?

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06T 11/00 - 2D [Two Dimensional] image generation
  • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
  • G06V 10/94 - Hardware or software architectures specially adapted for image or video understanding
  • G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition

72.

MONOLITHIC RGB MICROLED ARRAY

      
Application Number 19333019
Status Pending
Filing Date 2025-09-18
First Publication Date 2026-01-15
Owner Snap Inc. (USA)
Inventor
  • Feng, Peng
  • Haggar, Jack
  • Lee, Kean Boon
  • Poyiatzis, Nicolas
  • Tian, Ye
  • Yu, Xiang

Abstract

A light emitting diode (LED) pixel array and method of fabrication thereof. A semiconductor wafer template includes a successively stacked lower n-GaN layer, lower MQW layer, lower p-GaN layer, upper n-GaN layer, and dielectric layer. A plurality of apertures is formed through the dielectric layer, extending to the upper n-GaN layer. A plurality of mesas is formed by forming, within each aperture, a mesa n-GaN layer, a mesa MQW layer above each mesa n-GaN layer, and a mesa p-GaN layer above each mesa MQW layer. The mesa n-GaN layer, mesa MQW layer, and mesa p-GaN layer of each mesa form a respective mesa LED. The lower n-GaN layer, lower MQW layer, and lower p-GaN layer form a lower LED.

IPC Classes  ?

  • H10H 29/10 - Integrated devices comprising at least one light-emitting semiconductor component covered by group
  • H10H 20/01 - Manufacture or treatment
  • H10H 20/812 - Bodies having quantum effect structures or superlattices, e.g. tunnel junctions within the light-emitting regions, e.g. having quantum confinement structures
  • H10H 20/821 - Bodies characterised by their shape, e.g. curved or truncated substrates of the light-emitting regions, e.g. non-planar junctions
  • H10H 20/825 - Materials of the light-emitting regions comprising only Group III-V materials, e.g. GaP containing nitrogen, e.g. GaN
  • H10H 20/832 - Electrodes characterised by their material
  • H10H 20/833 - Transparent materials

73.

ESTABLISHING CRYPTOGRAPHIC KEY FOR APPLICATIONS

      
Application Number 19334342
Status Pending
Filing Date 2025-09-19
First Publication Date 2026-01-15
Owner Snap Inc. (USA)
Inventor Naveed, Muhammad

Abstract

Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and a method for performing operations comprising: accessing, by a first application implemented on a client device, data collected from one or more entropy sources; causing a second application implemented on the client device to access the data collected from the one or more entropy sources; generating a shared cryptographic key using the data collected from one or more entropy sources; establishing a communication channel between the first application and the second application; and exchanging, over the communication channel between the first application and the second application, one or more messages that have been encrypted using the shared cryptographic key.
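
Because both applications read the same on-device entropy sources, each can derive the same symmetric key locally without ever transmitting it. A minimal HKDF-style extract-then-expand sketch (the patent does not specify the KDF; the salt and context labels here are assumptions):

```python
import hashlib
import hmac

def derive_shared_key(entropy_samples: list, app_context: bytes) -> bytes:
    """Derive a 32-byte shared key from entropy both apps can observe.

    `entropy_samples` is a list of byte strings read from the shared
    entropy sources; `app_context` binds the key to one app pairing.
    """
    ikm = b"".join(entropy_samples)
    # Extract: condense the raw entropy into a pseudorandom key.
    prk = hmac.new(b"entropy-kdf-salt", ikm, hashlib.sha256).digest()
    # Expand: bind the key to the application context.
    return hmac.new(prk, app_context + b"\x01", hashlib.sha256).digest()
```

Two applications feeding in identical samples and context obtain identical keys; changing either input yields an unrelated key.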

IPC Classes  ?

74.

CONTENT NAVIGATION WITH AUTOMATED CURATION

      
Application Number 19335387
Status Pending
Filing Date 2025-09-22
First Publication Date 2026-01-15
Owner Snap Inc. (USA)
Inventor
  • Yang, Jianchao
  • Zhu, Yuke
  • Xu, Ning
  • Tang, Kevin Dechau
  • Li, Jia

Abstract

Systems, devices, methods, media, and instructions for automated image processing and content curation are described. In one embodiment, a server computer system communicates at least a portion of a first content collection to a first client device, and receives a first selection communication in response, the first selection communication identifying a first piece of content of the first plurality of pieces of content. The server analyzes the first piece of content to identify a set of context values for the first piece of content, and accesses a second content collection comprising pieces of content sharing at least a portion of the set of context values of the first piece of content. In various embodiments, different content values, image processing operations, and content selection operations are used to curate the content collections.

IPC Classes  ?

  • G06F 16/55 - Clustering; Classification
  • G06F 16/22 - Indexing; Data structures therefor; Storage structures
  • G06F 16/2457 - Query processing with adaptation to user needs
  • G06F 16/51 - Indexing; Data structures therefor; Storage structures
  • G06F 16/583 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
  • G06F 16/951 - Indexing; Web crawling techniques
  • G06F 16/9535 - Search customisation based on user profiles and personalisation
  • G06F 16/954 - Navigation, e.g. using categorised browsing
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06N 3/08 - Learning methods
  • G06N 20/00 - Machine learning
  • G06T 7/00 - Image analysis
  • G06V 20/00 - Scenes; Scene-specific elements
  • H04L 51/10 - Multimedia information
  • H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
  • H04L 67/55 - Push-based network services

75.

CIRCUITS AND METHODS FOR WEARABLE DEVICE CHARGING AND WIRED CONTROL

      
Application Number 19335471
Status Pending
Filing Date 2025-09-22
First Publication Date 2026-01-15
Owner Snap Inc. (USA)
Inventor
  • Patton, Russell Douglas
  • Rodriguez, II, Jonathan M
  • Steger, Stephen Andrew

Abstract

Methods and devices for wired charging and communication with a wearable device are described. In one embodiment, a symmetrical contact interface comprises a first contact pad and a second contact pad, and particular wired circuitry is coupled to the first and second contact pads to enable charging as well as to receive and transmit communications via the contact pads as part of various device states.

IPC Classes  ?

  • H02J 7/00 - Circuit arrangements for charging or depolarising batteries or for supplying loads from batteries
  • G02C 1/00 - Assemblies of lenses with bridges or browbars
  • G02C 5/14 - Side-members
  • G02C 11/00 - Non-optical adjuncts; Attachment thereof
  • H01R 13/62 - Means for facilitating engagement or disengagement of coupling parts or for holding them in engagement
  • H02J 7/04 - Regulation of the charging current or voltage
  • H02J 7/34 - Parallel operation in networks using both storage and other DC sources, e.g. providing buffering
  • H03K 19/0185 - Coupling arrangements; Interface arrangements using field-effect transistors only
  • H04B 3/54 - Systems for transmission via power distribution lines
  • H04B 3/56 - Circuits for coupling, blocking, or by-passing of signals
  • H10D 89/60 - Integrated devices comprising arrangements for electrical or thermal protection, e.g. protection circuits against electrostatic discharge [ESD]

76.

TECHNIQUES FOR USING 3-D AVATARS IN AUGMENTED REALITY MESSAGING

      
Application Number 19336972
Status Pending
Filing Date 2025-09-23
First Publication Date 2026-01-15
Owner Snap Inc. (USA)
Inventor Tran, Lien Le Hong

Abstract

Described herein is a messaging application that executes on a wearable augmented reality device. The messaging application facilitates the anchoring or pinning of a 3-D avatar representing another end-user. An end-user wearing the AR device facilitates messaging with the other end-user via interactions with the 3-D avatar representing the other end-user. As such, the AR device processes various sensor inputs to detect when the end-user wearing the AR device is “targeting” the 3-D avatar, and enables an audio recording device to record an audible message for communicating to the other end-user.

IPC Classes  ?

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06T 13/20 - 3D [Three Dimensional] animation
  • G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialog
  • G10L 15/26 - Speech to text systems
  • H04L 51/04 - Real-time or near real-time messaging, e.g. instant messaging [IM]
  • H04L 51/043 - Real-time or near real-time messaging, e.g. instant messaging [IM] using or handling presence information
  • H04L 51/046 - Interoperability with other network applications or services
  • H04L 51/10 - Multimedia information

77.

FIELD CALIBRATION OF AN AUGMENTED REALITY DEVICE

      
Application Number 19337207
Status Pending
Filing Date 2025-09-23
First Publication Date 2026-01-15
Owner Snap Inc. (USA)
Inventor
  • Birklbauer, Clemens
  • Halmetschlager-Funek, Georg
  • Kalkgruber, Matthias
  • Pereira Torres, Tiago Miguel
  • Schreiberhuber, Simon

Abstract

A method for recalibrating an augmented reality (AR) device includes generating and storing a ground truth map of a real-world environment when the AR device is operating with a high likelihood of having an accurate factory calibration. During operation of the AR device, new map data is generated for the real-world environment. The new map data is compared to the ground truth map to detect potential calibration errors. If calibration errors are detected, a recalibration procedure is executed by determining an optimal path through the real-world environment that allows for observing parameters requiring recalibration. Visual cues are generated to guide a user of the AR device through the optimal path. As the user follows the visual cues, calibration parameters are iteratively adjusted to eliminate detected calibration errors. The recalibration procedure may be presented as an interactive game to improve user engagement, with rewards provided for accurately following guidance.
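
The comparison of newly mapped landmarks against the stored ground truth map can be sketched as a mean-displacement check; the landmark representation, tolerance value, and function name are illustrative assumptions:

```python
import math

def needs_recalibration(ground_truth, observed, tolerance_m=0.05):
    """Flag recalibration when the mean displacement between matching
    landmarks exceeds a tolerance.

    Both arguments are dicts mapping landmark ids to (x, y, z)
    positions in metres; only landmarks present in both maps are
    compared.
    """
    common = ground_truth.keys() & observed.keys()
    if not common:
        return False  # nothing to compare yet
    mean_err = sum(math.dist(ground_truth[k], observed[k])
                   for k in common) / len(common)
    return mean_err > tolerance_m
```

An identical map passes; a map whose landmarks have drifted by 10 cm trips the 5 cm tolerance and would trigger the guided recalibration path.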

IPC Classes  ?

  • G01C 25/00 - Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
  • G01C 21/00 - Navigation; Navigational instruments not provided for in groups
  • G01C 21/16 - Navigation; Navigational instruments not provided for in groups by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
  • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
  • G06T 19/00 - Manipulating 3D models or images for computer graphics

78.

IDENTIFICATION OF PHYSICAL PRODUCTS FOR AUGMENTED REALITY EXPERIENCES IN A MESSAGING SYSTEM

      
Application Number 19338717
Status Pending
Filing Date 2025-09-24
First Publication Date 2026-01-15
Owner Snap Inc. (USA)
Inventor
  • Luo, Jean
  • Mourkogiannis, Celia Nicole

Abstract

The subject technology receives image data including a representation of a physical item. The subject technology analyzes the image data to determine an object corresponding to the physical item. The subject technology extracts product metadata based on the determined object. The subject technology sends, to a server, the product metadata to determine second product metadata associated with the product metadata. The subject technology receives, from the server, the second product metadata, the second product metadata including additional information related to the physical item. The subject technology causes display, at a client device, of the additional information related to the physical item based at least in part on the second product metadata.

IPC Classes  ?

  • G06Q 30/0601 - Electronic shopping [e-shopping]
  • G06F 16/583 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
  • G06K 7/14 - Methods or arrangements for sensing record carriers by electromagnetic radiation, e.g. optical sensingMethods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
  • G06Q 20/32 - Payment architectures, schemes or protocols characterised by the use of specific devices using wireless devices

79.

FINGERNAIL SEGMENTATION AND TRACKING

      
Application Number 19338931
Status Pending
Filing Date 2025-09-24
First Publication Date 2026-01-15
Owner Snap Inc. (USA)
Inventor
  • Bekuzarov, Maksym
  • Didkivskyi, Andrii
  • Korolev, Sergei
  • Tymchyshyn, Ihor

Abstract

An extended Reality (XR) system provides methodologies for displaying virtual objects in a hand-centric XR experience. The XR system provides an XR user interface of an XR system to a user. The XR system captures video frame data of a hand of the user and detects the hand of the user based on the video frame data and a hand-detecting model. The XR system generates a cropping boundary box based on the detection of the hand and the video frame data and generates cropped video frame data based on the cropping boundary box and the video frame data. The XR system generates a 3D model of a portion of the hand of the user based on the cropped video frame data and a virtual object based on the 3D model of the portion of the hand of the user and a 3D texture. The XR system displays the virtual object in the XR user interface.
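
A common way to form such a cropping boundary box is an axis-aligned box around the detected hand landmarks, padded by a margin and clamped to the frame. The margin fraction and default frame size below are illustrative assumptions:

```python
def cropping_box(landmarks, margin=0.1, frame_w=640, frame_h=480):
    """Axis-aligned crop box around 2-D hand landmarks.

    `landmarks` is a list of (x, y) pixel coordinates. The box is
    padded by `margin` (a fraction of the landmark extent) and clamped
    to the frame. Returns (x0, y0, x1, y1).
    """
    xs = [x for x, _ in landmarks]
    ys = [y for _, y in landmarks]
    pad_x = (max(xs) - min(xs)) * margin
    pad_y = (max(ys) - min(ys)) * margin
    x0 = max(0, int(min(xs) - pad_x))
    y0 = max(0, int(min(ys) - pad_y))
    x1 = min(frame_w, int(max(xs) + pad_x))
    y1 = min(frame_h, int(max(ys) + pad_y))
    return x0, y0, x1, y1
```

Cropping to this box before running the 3D reconstruction keeps the downstream model focused on the hand region.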

IPC Classes  ?

  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
  • G06T 19/00 - Manipulating 3D models or images for computer graphics

80.

AVATAR BASED IDEOGRAM GENERATION

      
Application Number 19277254
Status Pending
Filing Date 2025-07-22
First Publication Date 2026-01-15
Owner Snap Inc. (USA)
Inventor
  • Bondich, Artem
  • Maltsev, Vladimir

Abstract

Systems, devices, media, and methods are presented for generating ideograms from a set of images received in an image stream. The systems and methods detect at least a portion of a face within the image and identify a set of facial landmarks within the portion of the face. The systems and methods determine one or more characteristics representing the portion of the face, in response to detecting the portion of the face. Based on the one or more characteristics and the set of facial landmarks, the systems and methods generate a graphical model of the face. The systems and methods position one or more graphical elements proximate to the graphical model of the face and generate an ideogram from the graphical model and the one or more graphical elements.

IPC Classes  ?

  • G06T 11/60 - Editing figures and textCombining figures or text
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions

81.

SECURING OF SANDBOXED GENERATIVE AI MODELS

      
Application Number 19327798
Status Pending
Filing Date 2025-09-12
First Publication Date 2026-01-15
Owner Snap Inc. (USA)
Inventor Mathur, Dheeresh Pratap

Abstract

A generative artificial intelligence (AI) system includes a generative AI model configured to generate outputs based on a training data set. The AI system also includes a secure data vault system comprising a sandbox system that stores the generative AI model and is operatively coupled to the generative AI model to send inputs and receive the generated outputs, wherein the sandbox system comprises an execution environment configured to restrict execution of the generative AI model to a predefined memory address range. The secure data vault system further includes a secure network service communicatively coupled to the sandbox system and configured to authenticate a connection to an external system and to download from the external system an update package for the generative AI model when the connection is authenticated.

IPC Classes  ?

  • G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
  • G06N 3/0475 - Generative networks

82.

LOCATION-BASED TIMELINE MEDIA CONTENT SYSTEM

      
Application Number 19329022
Status Pending
Filing Date 2025-09-15
First Publication Date 2026-01-15
Owner Snap Inc (USA)
Inventor
  • Collins, Alexander
  • Vodovoz, Alexander

Abstract

Systems and methods for receiving a set of media content items, each including a geohash defining a captured time and a captured location of the media content item, identifying a first subset of media content items from the set that comprise a geohash equal to a precision level threshold, and identifying a second subset of media content items from the set that include a geohash that exceeds the precision level threshold. The system also generates a timeline media content item collection including the second subset of media content items, each including a geohash that exceeds the precision level threshold, and causes display of a media content collection interface including the timeline media content item collection.
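
Since geohash precision corresponds to string length (longer hashes pinpoint smaller cells), the subset selection can be sketched as a partition on geohash length; the item structure and helper name are illustrative assumptions:

```python
def partition_by_precision(items, threshold):
    """Split media items by geohash precision (character count).

    `items` is a list of dicts with a 'geohash' key. Returns the items
    whose precision equals the threshold and those whose precision
    exceeds it; the latter would populate the timeline collection.
    """
    at_threshold = [m for m in items if len(m["geohash"]) == threshold]
    above = [m for m in items if len(m["geohash"]) > threshold]
    return at_threshold, above
```

For example, with a threshold of 5 characters (roughly a 5 km cell), a 6-character geohash exceeds the precision threshold and is kept for the timeline.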

IPC Classes  ?

  • G06F 16/44 - Browsing; Visualisation therefor
  • G06F 16/48 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
  • G06F 16/487 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
  • H04L 51/10 - Multimedia information
  • H04W 4/029 - Location-based management or tracking services

83.

TIMELAPSE RE-EXPERIENCING SYSTEM

      
Application Number 19331520
Status Pending
Filing Date 2025-09-17
First Publication Date 2026-01-15
Owner Snap Inc. (USA)
Inventor
  • Vaish, Rajan
  • Kratz, Sven
  • Monroy-Hernández, Andrés
  • Smith, Brian Anthony

Abstract

A system captures, via one or more sensors of a computing device, data of an environment observed by the one or more sensors at a first timeslot, and stores the data in a data store as a first portion of a timelapse memory experience. The system also captures, via the one or more sensors, data of the environment observed at a second timeslot, and stores the data as a second portion of the timelapse memory experience. The system additionally associates the timelapse memory experience with a memory experience trigger that can initiate a presentation of the timelapse memory experience.

IPC Classes  ?

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06T 19/00 - Manipulating 3D models or images for computer graphics

84.

MACHINE LEARNING MODEL CONTINUOUS TRAINING SYSTEM

      
Application Number 19332706
Status Pending
Filing Date 2025-09-18
First Publication Date 2026-01-15
Owner Snap Inc. (USA)
Inventor
  • Dela Rosa, Kevin Sarabia
  • Hu, Hao
  • Li, Yanjia

Abstract

Described is a system for performing a set of machine learning model training operations that include: accessing media content items associated with interaction functions initiated by users of an interaction system, generating training data including labels for the media content items, extracting features from a media content item of the media content items, identifying additional media content items to include in the training data based on the extracted features, processing the training data using a machine learning model to generate a media content item output, and updating one or more parameters of the machine learning model based on the media content item output. The system checks whether retraining criteria have been met, and repeats the set of machine learning model training operations to retrain the machine learning model.
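
The check-and-repeat structure can be sketched as a bounded retraining loop; all callables are caller-supplied stand-ins (not the patented pipeline), and `max_rounds` is just a safety bound:

```python
def continuous_training(train_step, should_retrain, get_new_items,
                        model, max_rounds=10):
    """Repeat the training pass whenever the retraining criteria hold.

    `should_retrain(model)` encodes the criteria (e.g. enough new
    labelled items accumulated); `train_step(model, items)` returns an
    updated model; `get_new_items()` supplies fresh training items.
    """
    rounds = 0
    while should_retrain(model) and rounds < max_rounds:
        model = train_step(model, get_new_items())
        rounds += 1
    return model, rounds
```

With a toy integer "model" the loop runs exactly until the criteria stop holding.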

IPC Classes  ?

  • H04N 21/25 - Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication or learning user preferences for recommending movies
  • G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
  • G06V 20/40 - Scenes; Scene-specific elements in video content
  • H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs

85.

AUTOMATIC TECHNIQUES FOR CONSTRUCTING AN EVOLVING INTEREST TAXONOMY FROM USER-GENERATED CONTENT

      
Application Number 19337224
Status Pending
Filing Date 2025-09-23
First Publication Date 2026-01-15
Owner Snap Inc. (USA)
Inventor
  • Brewer, Jason
  • Han, Shuo
  • Huang, Chang Kuang
  • Li, James
  • Ma, Yiwei
  • Malik, Manish
  • Na, Yinan
  • Xie, Dan
  • Ye, Jinchao
  • Zhang, Lili
  • Zhang, Mingtao
  • Zhang, Yining
  • Zhao, Hangqi
  • Zhou, Ding
  • Zhou, Yang

Abstract

Techniques for creating an interest graph include obtaining content items from multiple content sources and applying tailored (e.g., source-specific) preprocessing to the content items based on their respective content source. Text is extracted and salient keywords and key phrases are identified using unsupervised machine learning models. The keywords and key phrases become nodes in an interest graph, each node comprising an embedding of a keyword or key phrase in a common embedding space, with edges representing semantic similarity based on embeddings or co-engagement patterns. The graph provides an expansive, granular, and dynamic taxonomy easily adaptable to emerging interests. The interest graph overcomes limitations of conventional taxonomies that lack depth, fail to capture niche interests, and cannot adapt to reflect evolving user preferences. The described techniques construct a rich interest graph from diverse content for improved content understanding.
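
The node-and-edge construction can be sketched with cosine similarity over keyword embeddings; the toy 2-D embeddings and the 0.8 threshold stand in for the unsupervised models and co-engagement signals described above:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def build_interest_graph(embeddings, threshold=0.8):
    """Build an interest graph from keyword embeddings.

    `embeddings` maps each keyword/phrase to its vector in a common
    embedding space. Nodes are the keywords; an edge joins two nodes
    whose cosine similarity meets the threshold.
    """
    nodes = list(embeddings)
    edges = []
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            sim = cosine(embeddings[a], embeddings[b])
            if sim >= threshold:
                edges.append((a, b, sim))
    return {"nodes": nodes, "edges": edges}
```

Semantically close interests ("skiing", "snowboarding") end up connected while unrelated ones ("baking") remain separate nodes, giving the granular, extensible taxonomy the abstract describes.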

IPC Classes  ?

86.

EYEWEAR WITH STRAIN GAUGE ESTIMATION

      
Application Number 19338581
Status Pending
Filing Date 2025-09-24
First Publication Date 2026-01-15
Owner Snap Inc. (USA)
Inventor
  • Heger, Jason
  • Patton, Russell

Abstract

Eyewear including a sensor integrated into the frame of the eyewear. In one example, the sensor comprises a strain gauge, such as a metallic foil gauge, configured to sense and measure distortion of the frame when worn by a user under different force profiles by measuring strain in the frame when it is bent. The strain measured by the strain gauge is read by a processor, which performs dynamic calibration of image processing based on the measured strain. The processor uses the distortion measured by the strain gauge to correct the calibration of the cameras and the displays.

IPC Classes

  • H04N 13/246 - Calibration of cameras
  • G01B 7/16 - Measuring arrangements characterised by the use of electric or magnetic techniques for measuring the deformation in a solid, e.g. by resistance strain gauge
  • G01B 11/22 - Measuring arrangements characterised by the use of optical techniques for measuring depth
  • H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
  • H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
  • H04N 13/327 - Calibration thereof
  • H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
  • H04N 13/38 - Image reproducers using viewer tracking for tracking vertical translational head movements
  • H04N 13/383 - Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes

87.

STYLIZED IMAGE PAINTING

      
Application Number 19338917
Status Pending
Filing Date 2025-09-24
First Publication Date 2026-01-15
Owner Snap Inc. (USA)
Inventor Katz, Sagi

Abstract

A photo filter (e.g., artistic/stylized painting) light field effect system includes an eyewear device having a frame, a temple connected to a lateral side of the frame, and a depth-capturing camera. Execution of programming by a processor configures the stylized image painting effect system to apply a photo filter selection to: (i) a left raw image or a left processed image to create a left photo filter image, and (ii) a right raw image or a right processed image to create a right photo filter image. The stylized image painting effect system generates a photo filter stylized painting effect image with an appearance of a spatial rotation or movement, by blending together the left photo filter image and the right photo filter image based on a left image disparity map and a right image disparity map.

IPC Classes

  • G06T 15/02 - Non-photorealistic rendering
  • G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
  • H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
  • H04N 13/111 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation

88.

OPTICAL FLOW LATENT SPACE SMOOTHING

      
Application Number US2025033379
Publication Number 2026/015242
Status In Force
Filing Date 2025-06-12
Publication Date 2026-01-15
Owner SNAP INC. (USA)
Inventor
  • Malbin, Nir
  • Heimann, Jonathan
  • Assouline, Avihay
  • Berger, Itamar

Abstract

Methods and systems are disclosed for using machine learning models to perform smoothing in latent space using optical flow information. The methods and systems access a first frame of a video depicting an object and a second frame of the video, the second frame corresponding to a later time period in the video than the first frame. The methods and systems generate optical flow information based on the first frame and the second frame, the optical flow information describing movement of the object from the first frame to the second frame. The methods and systems smooth a latent space generated by one or more neural network encoders of a machine learning model using the optical flow information and process the smoothed latent space by one or more neural network decoders to generate a result of the machine learning model.
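One plausible reading of the smoothing step is a backward warp of the previous frame's latent by the optical flow, blended with the current frame's latent. The nearest-neighbour warp, the blend weight `alpha`, and the 2-D latent shape below are assumptions for illustration; the encoders, decoders, and flow estimator are outside this sketch:

```python
# Sketch: flow-guided latent smoothing, assuming integer flow that maps
# each latent cell of frame t back to its source cell in frame t-1.
import numpy as np

def warp(prev_latent, flow):
    """Nearest-neighbour backward warp of an (H, W) latent by integer flow."""
    h, w = prev_latent.shape
    out = np.empty_like(prev_latent)
    for y in range(h):
        for x in range(w):
            dy, dx = flow[y, x]
            sy = min(max(y - dy, 0), h - 1)  # clamp to the latent bounds
            sx = min(max(x - dx, 0), w - 1)
            out[y, x] = prev_latent[sy, sx]
    return out

def smooth_latent(prev_latent, cur_latent, flow, alpha=0.5):
    """Blend the current latent with the flow-warped previous latent."""
    return alpha * cur_latent + (1 - alpha) * warp(prev_latent, flow)

# A single feature moves one cell to the right between frames; the flow
# at its new location points back to the old one.
prev = np.array([[1.0, 0.0], [0.0, 0.0]])
cur = np.array([[0.0, 1.0], [0.0, 0.0]])
flow = np.zeros((2, 2, 2), dtype=int)
flow[0, 1] = (0, 1)  # cell (0, 1) came from (0, 0)
smoothed = smooth_latent(prev, cur, flow)
```

Because the warped previous latent and the current latent agree at the moving feature's new location, blending there preserves the feature instead of averaging it away.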

IPC Classes

  • G06V 10/40 - Extraction of image or video features
  • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
  • G06V 10/62 - Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
  • G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
  • G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
  • G06V 20/40 - Scenes; Scene-specific elements in video content
  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition
  • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

89.

User calibration of EMG speech signal detection

      
Application Number 18135287
Grant Number 12525240
Status In Force
Filing Date 2023-04-17
First Publication Date 2026-01-13
Grant Date 2026-01-13
Owner SNAP INC. (USA)
Inventor
  • Kliger, Mark
  • Laufer, Yaron
  • Meshulam, Meir
  • Ziv, Assif

Abstract

Methods and systems are disclosed for training a user-specific machine learning (ML) model to detect inner speech. The system accesses the ML model trained to detect inner speech based on a general population dataset. The system collects, by an electromyograph (EMG) communication device, a set of EMG signals generated based on an individual user of the EMG communication device. The system updates parameters of the ML model based on the set of EMG signals associated with the individual user. The system detects inner speech of the individual user by applying the ML model with the updated parameters to a new set of EMG signals received from the EMG communication device.

IPC Classes

  • G10L 15/24 - Speech recognition using non-acoustical features
  • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
  • A61B 5/394 - Electromyography [EMG] specially adapted for electroglottography or electropalatography
  • A61B 5/397 - Analysis of electromyograms
  • G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
  • G10L 15/07 - Adaptation to the speaker
  • G10L 15/16 - Speech classification or search using artificial neural networks
  • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialog
  • G10L 25/78 - Detection of presence or absence of voice signals
  • H04N 23/60 - Control of cameras or camera modules

90.

OPTICAL FLOW LATENT SPACE SMOOTHING

      
Application Number 18766059
Status Pending
Filing Date 2024-07-08
First Publication Date 2026-01-08
Owner Snap Inc (USA)
Inventor
  • Malbin, Nir
  • Heimann, Jonathan
  • Assouline, Avihay
  • Berger, Itamar

Abstract

Methods and systems are disclosed for using machine learning models to perform smoothing in latent space using optical flow information. The methods and systems access a first frame of a video depicting an object and a second frame of the video, the second frame corresponding to a later time period in the video than the first frame. The methods and systems generate optical flow information based on the first frame and the second frame, the optical flow information describing movement of the object from the first frame to the second frame. The methods and systems smooth a latent space generated by one or more neural network encoders of a machine learning model using the optical flow information and process the smoothed latent space by one or more neural network decoders to generate a result of the machine learning model.

IPC Classes

  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06T 5/70 - Denoising; Smoothing

91.

HIGH DYNAMIC RANGE FOR DUAL PIXEL SENSORS

      
Application Number 19180932
Status Pending
Filing Date 2025-04-16
First Publication Date 2026-01-08
Owner Snap Inc. (USA)
Inventor
  • Katz, Sagi
  • Kligler, Netanel
  • Refael, Gilad

Abstract

A method for increasing a dynamic range of a dual-pixel image sensor is described. The method includes detecting an intensity level of a full pixel from a plurality of pixels of an optical sensor, where one or more full pixels of the plurality of pixels include at least two sub-pixels; detecting an intensity level of one or more sub-pixels; detecting that the intensity level of the full pixel of the optical sensor has reached a saturation level of the full pixel; and, in response to detecting that the intensity level of the full pixel has reached the saturation level, computing an extrapolated intensity level of the full pixel based on the intensity level of the one or more sub-pixels.
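The extrapolation idea can be illustrated with toy numbers: read the full pixel as the sum of its two sub-pixels, and once that sum clips at the sensor's saturation level, recover an unclipped estimate from a sub-pixel that has not yet saturated. The 2x factor (roughly equal light on both halves) and the saturation constants are illustrative assumptions, not the claimed method:

```python
# Sketch: extending dynamic range of a dual-pixel sensor by
# extrapolating a clipped full-pixel value from its sub-pixels.
FULL_SATURATION = 255  # toy 8-bit saturation levels
SUB_SATURATION = 255

def extended_intensity(left_sub, right_sub):
    """Estimate the true full-pixel intensity from two sub-pixel reads."""
    full = min(left_sub + right_sub, FULL_SATURATION)
    if full < FULL_SATURATION:
        return full  # no clipping: trust the full-pixel reading
    # Full pixel clipped: extrapolate from the less-exposed sub-pixel,
    # assuming the two halves see roughly equal light.
    usable = min(left_sub, right_sub)
    if usable < SUB_SATURATION:
        return 2 * usable
    return 2 * SUB_SATURATION  # both halves clipped: best available bound
```

For example, sub-pixel reads of 150 and 150 would clip the full pixel at 255, but the extrapolated estimate recovers 300.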

IPC Classes

92.

CUSTOM CODES FOR DATABASE-DRIVEN OFFER REDEMPTION

      
Application Number 19324666
Status Pending
Filing Date 2025-09-10
First Publication Date 2026-01-08
Owner Snap Inc. (USA)
Inventor Jayaram, Krish

Abstract

A system receives an offer code from a device associated with a user account. The offer code is obtained from an optical code featuring a custom graphic. The system identifies an offer from a merchant in a database based on the offer code, with the database specifying one or more parameters for the offer. An association between the offer and the user account is stored. The system receives a purchase code from the device and determines that the purchase code is associated with the merchant's offer. Upon authorizing a transaction linked to the purchase code, the system applies an offer benefit to the transaction, based on the one or more parameters, and presents a notification of the applied offer benefit at the device.

IPC Classes

  • G06Q 20/12 - Payment architectures specially adapted for electronic shopping systems
  • G06Q 20/24 - Credit schemes, i.e. "pay after"
  • G06Q 20/32 - Payment architectures, schemes or protocols characterised by the use of specific devices using wireless devices
  • G06Q 30/0207 - Discounts or incentives, e.g. coupons or rebates
  • G06Q 30/0234 - Rebates after completed purchase
  • G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
  • H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services

93.

SCALING A 3D VOLUME IN EXTENDED REALITY

      
Application Number 19325384
Status Pending
Filing Date 2025-09-10
First Publication Date 2026-01-08
Owner Snap Inc. (USA)
Inventor Spong, Mason

Abstract

An Extended Reality (XR) system provides methodologies for scaling a virtual object in an XR user interface of the XR system. The methodologies include providing to a user an XR user interface of an XR system, where the XR user interface includes a virtual object displayed to the user. The XR system determines a pinch location of a pinch hand pose being made by the user and scales the virtual object based on the pinch location and a virtual object center point of the virtual object. The XR system redisplays the scaled virtual object to the user in the XR user interface.
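One plausible reading of the scaling step is to take the scale factor as the ratio of the current pinch-to-center distance to the initial pinch-to-center distance, and to scale each vertex about the object's center point. The function names and the exact mapping are assumptions, not the claimed method:

```python
# Sketch: scaling a virtual object about its center point from a pinch
# gesture, assuming the distance-ratio interpretation described above.
import math

def dist(a, b):
    """Euclidean distance between two 3-D points."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def scale_about_center(vertices, center, pinch_start, pinch_now):
    """Scale vertices about `center` by the ratio of pinch distances."""
    factor = dist(pinch_now, center) / dist(pinch_start, center)
    scaled = [
        tuple(c + factor * (v - c) for v, c in zip(vert, center))
        for vert in vertices
    ]
    return scaled, factor
```

Scaling about the center point (rather than the world origin) keeps the object anchored in place while it grows or shrinks under the gesture.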

IPC Classes

  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06T 19/00 - Manipulating 3D models or images for computer graphics

94.

SPATIAL SCANNING FOR EXTENDED REALITY

      
Application Number 19325387
Status Pending
Filing Date 2025-09-10
First Publication Date 2026-01-08
Owner Snap Inc. (USA)
Inventor Zakrzewski, Tomasz

Abstract

An Extended Reality (XR) system provides services for determining 3D data of physical objects in a real-world scene. The XR system receives a request from an application to initiate a spatial scan of a real-world scene. In response, the XR system captures video frame data of the real-world scene and captures a pose of the XR system. The XR system determines a physical object in the real-world scene and determines a 2D position of the physical object, using the video frame data. The XR system determines a depth of the physical object using the 2D position and determines a 3D position of the physical object in the real-world scene using the 2D position of the physical object, the depth of the physical object, and the pose of the XR system. The XR system communicates the 3D position data to the application.
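The final 3D-position step reads like a standard pinhole unprojection followed by a pose transform; the sketch below assumes that model, with toy intrinsic values. The detector and depth estimator are outside this sketch, and none of the numbers are from the patent:

```python
# Sketch: lifting a 2-D detection (u, v) with an estimated depth into
# world coordinates via pinhole intrinsics and the device pose (R, t).
import numpy as np

def to_world(u, v, depth, fx, fy, cx, cy, R, t):
    """Unproject a pixel at the given depth, then apply the device pose."""
    p_cam = np.array([(u - cx) / fx * depth,   # camera-space ray scaled
                      (v - cy) / fy * depth,   # to the estimated depth
                      depth])
    return R @ p_cam + t

# Toy pinhole: principal point at (50, 50), focal length 100 px.
R = np.eye(3)                   # device pose: no rotation...
t = np.array([1.0, 0.0, 0.0])   # ...translated 1 unit along world x
p = to_world(50, 50, 2.0, 100, 100, 50, 50, R, t)
```

A detection at the principal point thus lands directly in front of the camera at the estimated depth, shifted by the device's world translation.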

IPC Classes

  • G06T 7/70 - Determining position or orientation of objects or cameras
  • G06T 7/50 - Depth or shape recovery
  • G06T 19/00 - Manipulating 3D models or images for computer graphics

95.

LENS STUDIO

      
Application Number 1895671
Status Registered
Filing Date 2025-11-03
Registration Date 2025-11-03
Owner Snap Inc. (USA)
NICE Classes 42 - Scientific, technological and industrial services, research and design

Goods & Services

Providing online non-downloadable computer software for creating augmented reality experiences; providing online non-downloadable computer software for creating video games; providing online non-downloadable computer software for modifying images, video, audio, and audio-visual content with digital filters and augmented reality effects, namely, text, graphics, animations, and links; providing online non-downloadable computer software, namely, software for integrating electronic data with real world environments for the purpose of experiencing, viewing, capturing, recording, manipulating, and editing augmented real-time views, images, videos, audio, and sensory content; providing online non-downloadable computer software, namely, augmented reality software for object segmentation, object tracking, tracking of dwellings and structures, and hand tracking; providing online non-downloadable augmented reality software for modifying images, video, audio and audio-visual content that are used in the fields of entertainment, utilities, and communication; providing online non-downloadable software for creating augmented reality software.

96.

EGOCENTRIC HUMAN BODY POSE TRACKING

      
Application Number 19180873
Status Pending
Filing Date 2025-04-16
First Publication Date 2026-01-08
Owner Snap Inc. (USA)
Inventor
  • Arakawa, Riku
  • Krishnan Gorumkonda, Gurunandan
  • Nayar, Shree K.
  • Zhou, Bing

Abstract

A pose tracking system is provided. The pose tracking system includes an EMF tracking system having a user-worn head-mounted EMF source and one or more user-worn EMF tracking sensors attached to the wrists of the user. The EMF source is associated with a VIO tracking system such as AR glasses or the like. The pose tracking system determines a pose of the user's head and a ground plane using the VIO tracking system and a pose of the user's hands using the EMF tracking system to determine a full-body pose for the user. Metal interference with the EMF tracking system is minimized using an IMU mounted with the EMF tracking sensors. Long-term drift in the IMU and the VIO tracking system is minimized using the EMF tracking system.

IPC Classes

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
  • G06T 17/00 - 3D modelling for computer graphics

97.

VIRTUAL SELFIE STICK

      
Application Number 19181056
Status Pending
Filing Date 2025-04-16
First Publication Date 2026-01-08
Owner Snap Inc. (USA)
Inventor
  • Zhou, Kai
  • Micusik, Branislav

Abstract

A method for generating a virtual selfie stick image is described. In one aspect, the method includes generating, at a device, an original self-portrait image with an optical sensor of the device, the optical sensor directed at a face of a user of the device, the device being held at an arm length from the face of the user, displaying, on a display of the device, an instruction guiding the user to move the device at the arm length about the face of the user within a limited range at a plurality of poses, accessing, at the device, image data generated by the optical sensor at the plurality of poses, and generating a virtual selfie stick self-portrait image based on the original self-portrait image and the image data.

IPC Classes

  • G06T 7/194 - Segmentation; Edge detection involving foreground-background segmentation
  • G06T 3/4007 - Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
  • G06T 7/11 - Region-based segmentation
  • G06T 7/50 - Depth or shape recovery
  • G06T 7/70 - Determining position or orientation of objects or cameras
  • H04N 5/265 - Mixing
  • H04N 23/60 - Control of cameras or camera modules
  • H04N 23/63 - Control of cameras or camera modules by using electronic viewfinders

98.

LIGHT ESTIMATION METHOD FOR THREE-DIMENSIONAL (3D) RENDERED OBJECTS

      
Application Number 19181068
Status Pending
Filing Date 2025-04-16
First Publication Date 2026-01-08
Owner Snap Inc. (USA)
Inventor
  • Chai, Menglei
  • Demyanov, Sergey
  • Hu, Yunqing
  • Marton, Istvan
  • Ostashev, Daniil
  • Podkin, Aleksei

Abstract

A method is disclosed comprising accessing an image; identifying a virtual object corresponding to a physical object depicted in the image; and determining shading parameters for the virtual object based on a machine learning model. The model is trained by generating a synthetic face image using a first renderer; predicting lighting parameters from the synthetic face image with a neural network; generating a predicted sphere image using a second renderer based on the predicted lighting parameters; generating a synthetic sphere image using a third renderer; comparing the predicted sphere image with the synthetic sphere image; and training the neural network based on the comparison. The method further comprises generating a shaded virtual object by applying the shading parameters to the virtual object and displaying the shaded virtual object as a layer over the image.

IPC Classes

  • G06T 15/50 - Lighting effects
  • G06T 15/80 - Shading
  • G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

99.

AUGMENTED REALITY ERGONOMICS EVALUATION SYSTEM

      
Application Number 19181075
Status Pending
Filing Date 2025-04-16
First Publication Date 2026-01-08
Owner Snap Inc. (USA)
Inventor
  • Xi, Yubin
  • Zhou, Kai

Abstract

An ergonomics evaluation system is described. In one aspect, a method includes identifying a placement location of a user interface element of an augmented reality application, identifying simulated user interactions of the augmented reality application based on the placement location of the user interface element, applying a computer vision algorithm to the simulated user interactions, identifying a joint angle of a user based on an output of the computer vision algorithm, and identifying an ergonomic risk level of the joint angle.

IPC Classes

  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition

100.

DEVICE-TO-DEVICE COLLOCATED AR USING HAND TRACKING

      
Application Number 19181086
Status Pending
Filing Date 2025-04-16
First Publication Date 2026-01-08
Owner Snap (USA)
Inventor
  • Ajanohoun, Jordy Innocentius
  • Diem, Markus
  • Evangelidis, Georgios
  • Penney, Matthew

Abstract

A method for aligning coordinate systems between devices is described. The method comprises accessing pose data from a first device that defines a first coordinate system and receiving pose data from a second device that defines a second coordinate system. The method further includes identifying a coordinate transformation that relates the first device, a hand joint of a user holding the second device, and the second device, and then aligning the first coordinate system with the second based on the pose data and the identified transformation.
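The alignment can be illustrated as a chain of 4x4 rigid transforms, with the hand-joint transform (the first device's tracker observing the hand holding the second device) linking the two devices' pose chains. All matrices, offsets, and frame names here are illustrative assumptions, not the patent's stated math:

```python
# Sketch: composing rigid transforms to relate two devices' world
# frames, where T_a_b maps coordinates in frame b into frame a.
import numpy as np

def translation(x, y, z):
    """4x4 homogeneous transform with a pure translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# Pose of device 1 in its own world frame, the hand joint as seen by
# device 1, device 2 relative to the hand joint, and device 2's pose
# in its own world frame (all toy values).
T_w1_d1 = translation(0, 0, 0)
T_d1_hand = translation(1, 0, 0)
T_hand_d2 = translation(0, 0.1, 0)
T_w2_d2 = translation(5, 0, 0)

# world1 <- world2: chain world1 -> d1 -> hand -> d2, then invert
# device 2's own pose to reach its world frame.
T_w1_w2 = T_w1_d1 @ T_d1_hand @ T_hand_d2 @ np.linalg.inv(T_w2_d2)
```

With this transform in hand, any point expressed in the second device's coordinate system can be mapped into the first device's frame, which is the practical meaning of "aligning" the two systems.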

IPC Classes

  • G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06T 7/70 - Determining position or orientation of objects or cameras
  • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration