Adobe Inc.

United States of America


1-100 of 7,452 for Adobe Inc. and 1 subsidiary
Aggregations
IP Type
        Patent 6,915
        Trademark 537
Jurisdiction
        United States 7,084
        Europe 158
        World 117
        Canada 93
Owner / Subsidiary
[Owner] Adobe Inc. 7,261
Adobe Systems Incorporated 191
Date
New (last 4 weeks) 39
2025 December (MTD) 14
2025 November 30
2025 October 50
2025 September 29
IPC Class
        G06T 11/60 - Editing figures and text; Combining figures or text 531
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints 528
G06K 9/62 - Methods or arrangements for recognition using electronic means 504
G06N 3/08 - Learning methods 487
G06F 17/30 - Information retrieval; Database structures therefor 441
NICE Class
09 - Scientific and electric apparatus and instruments 396
42 - Scientific, technological and industrial services, research and design 266
35 - Advertising and business services 103
16 - Paper, cardboard and goods made from these materials 86
41 - Education, entertainment, sporting and cultural services 63
Status
Pending 804
Registered / In Force 6,648

1.

SYSTEMS AND TECHNIQUES TO PERFORM 4D-GUIDED VIDEO GENERATION WITH DIFFUSION MODELS

      
Application Number 18738823
Status Pending
Filing Date 2024-06-10
First Publication Date 2025-12-11
Owner Adobe Inc. (USA)
Inventor
  • Cai, Shengqu
  • Ceylan Aksit, Duygu
  • Gadelha, Matheus
  • Huang, Chun-Hao
  • Wang, Yangtuanfeng

Abstract

Embodiments include systems and techniques for receiving a prompt and an input mesh to generate a four-dimensional (4D) video and generating keyframes from a depth map and a UV coordinate map of the input mesh. Embodiments further include extracting features from the keyframes processed through a diffusion model, generating frames of the 4D video based on the prompt, UV-guided noise initialization of each object, and injecting the features extracted from each of the keyframes into the diffusion model and the prompt during a regeneration process.

IPC Classes

  • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
  • G06T 7/40 - Analysis of texture
  • G06T 7/50 - Depth or shape recovery
  • G06V 10/771 - Feature selection, e.g. selecting representative features from a multi-dimensional feature space

2.

CAPTIONING FOR IMAGE PERSONALIZATION

      
Application Number 18736289
Status Pending
Filing Date 2024-06-06
First Publication Date 2025-12-11
Owner ADOBE INC. (USA)
Inventor
  • Agarwal, Dhwanit
  • Kolkin, Nicholas Isaac
  • Revanur, Ambareesh
  • Harikumar, Midhun
  • Agrawal, Shradha
  • Kale, Ajinkya Gorakhnath
  • Shechtman, Elya
  • Munshi, Jalansh Saumil

Abstract

A method, apparatus, non-transitory computer readable medium, and system for image processing include obtaining a plurality of images and a plurality of tags, wherein each of the plurality of tags represents a corresponding element of at least one of the plurality of images, computing a plurality of image-tag similarity scores, wherein each of the plurality of image-tag similarity scores indicates a similarity between one of the plurality of images and one of the plurality of tags, computing a plurality of classification scores corresponding to the plurality of tags, respectively, by averaging a subset of the plurality of image-tag similarity scores corresponding to each of the plurality of tags, and selecting a representative tag for the plurality of images based on the representative tag having a highest classification score among the plurality of classification scores.
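The scoring scheme this abstract describes (per image-tag similarity scores, averaged per tag, then an argmax) can be sketched as follows; the NumPy embedding arrays, normalization, and function name are illustrative assumptions, not the claimed implementation:

```python
import numpy as np

def select_representative_tag(image_embs, tag_embs, tags):
    """Pick the tag whose average similarity across the image set is highest."""
    # Normalize rows so dot products become cosine similarities.
    imgs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    tgs = tag_embs / np.linalg.norm(tag_embs, axis=1, keepdims=True)
    sims = imgs @ tgs.T                  # image-tag similarity scores
    class_scores = sims.mean(axis=0)     # classification score: average per tag
    return tags[int(np.argmax(class_scores))]
```

Any encoder that maps images and tags into a shared embedding space would fill the same role.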

IPC Classes

  • G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations
  • G06F 40/166 - Editing, e.g. inserting or deleting
  • G06F 40/279 - Recognition of textual entities
  • G06F 40/40 - Processing or translation of natural language
  • G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
  • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
  • G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting

3.

Token Pruning for Image Generation

      
Application Number 18736340
Status Pending
Filing Date 2024-06-06
First Publication Date 2025-12-11
Owner ADOBE INC. (USA)
Inventor
  • Liu, Yuchen
  • Wang, Hongjie
  • Liu, Difan
  • Kang, Yan
  • Li, Yijun
  • Lin, Zhe

Abstract

A method, apparatus, non-transitory computer readable medium, and system for image processing include obtaining an input prompt; generating a plurality of tokens for an attention layer of a generative machine learning model based on an intermediate noise map; generating, using the attention layer, an attention map based on the plurality of tokens; pruning the plurality of tokens based on the attention map to obtain a pruned set of tokens; denoising the intermediate noise map based on the pruned set of tokens to obtain a denoised map; and generating a synthetic image based on the denoised map.
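A minimal sketch of the attention-based pruning step described above; the attention-map shape, importance measure, and `keep_ratio` parameter are assumptions for illustration, not the claimed method:

```python
import numpy as np

def prune_tokens(tokens, attn, keep_ratio=0.5):
    """Keep the tokens that receive the most attention; drop the rest."""
    # attn: (num_queries, num_tokens) attention map from one layer.
    importance = attn.sum(axis=0)                  # total attention mass per token
    k = max(1, int(len(tokens) * keep_ratio))
    keep = np.sort(np.argsort(importance)[-k:])    # top-k indices, original order
    return tokens[keep], keep
```

The denoising step would then operate only on the surviving tokens, reducing compute per diffusion step.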

IPC Classes

4.

DOCUMENT-BASED PRESENTATION GENERATION

      
Application Number 18675451
Status Pending
Filing Date 2024-05-28
First Publication Date 2025-12-04
Owner ADOBE INC. (USA)
Inventor
  • Mondal, Ishani
  • Somasundaram, Shwetha
  • Natarajan, Anandha Velu
  • Garimella, Aparna
  • Bandyopadhyay, Sambaran

Abstract

A method, apparatus, non-transitory computer readable medium, and system for natural language processing include obtaining a source document and a user characteristic that indicates a complexity preference of a user. A topic description is generated, using a language generation model, based on the source document and the user characteristic. The language generation model is trained based on an objective function that measures a complexity of the topic description.

IPC Classes

  • G06F 16/438 - Presentation of query results
  • G06F 16/33 - Querying
  • G06F 16/35 - Clustering; Classification
  • G06V 30/416 - Extracting the logical structure, e.g. chapters, sections or page numbers; Identifying elements of the document, e.g. authors

5.

DATA EXPLORATION USING NATURAL LANGUAGE WITH DATA SAMPLING

      
Application Number 18675930
Status Pending
Filing Date 2024-05-28
First Publication Date 2025-12-04
Owner Adobe Inc. (USA)
Inventor
  • Mitra, Subrata
  • Agarwal, Shubham
  • Chan, Yeuk-Yin
  • Garg, Shaddy
  • Yu, Tong

Abstract

In various examples, an exploratory data analytics tool obtains a natural language query and generates a structured data query for execution on a sample of a dataset based on the natural language query. In an example, an intent is determined for the query and the intent is used, at least in part, to determine the most appropriate sample. In addition, the intent, in some examples, is used to generate recommended queries. A user interface of the exploratory data analytics tool, for example, can display the recommended queries and/or the results of the structured data query on the sample.

IPC Classes

6.

BLIND FACE RESTORATION WITH CONSTRAINED GENERATIVE PRIOR

      
Application Number 18678221
Status Pending
Filing Date 2024-05-30
First Publication Date 2025-12-04
Owner ADOBE INC. (USA)
Inventor
  • Ding, Zheng
  • Zhang, Xuaner
  • Xia, Zhihao

Abstract

A method, apparatus, non-transitory computer readable medium, and system for image processing include obtaining an input image depicting an entity and having a first quality level, adding noise to the input image based on the first quality level to obtain an intermediate noise image, and generating a restored image depicting the entity by denoising the intermediate noise image, where the restored image has a second quality level higher than the first quality level.
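The quality-dependent noising step can be illustrated with a small sketch; the linear quality-to-noise mapping and the variance-preserving mix are hypothetical choices, not the claimed formulation:

```python
import numpy as np

def add_quality_scaled_noise(image, quality, rng=None):
    """Add noise inversely proportional to input quality: worse input, more noise."""
    rng = rng or np.random.default_rng(0)
    sigma = 1.0 - quality                     # hypothetical mapping, quality in [0, 1]
    noise = rng.normal(0.0, 1.0, image.shape)
    # Variance-preserving mix, as in diffusion forward processes.
    return np.sqrt(1.0 - sigma**2) * image + sigma * noise
```

A denoiser started from this intermediate image would then reconstruct more detail for lower-quality inputs, since more of the degraded signal has been replaced by noise.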

IPC Classes

  • G06T 5/60 - Image enhancement or restoration using machine learning, e.g. neural networks
  • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
  • G06T 5/70 - Denoising; Smoothing

7.

PATTERN DATA GENERATION

      
Application Number 18678757
Status Pending
Filing Date 2024-05-30
First Publication Date 2025-12-04
Owner ADOBE INC. (USA)
Inventor
  • Rai, Abhishek
  • Bit, Indranil
  • Dey, Arup
  • Karnam, Sai Keerthana
  • Tejaswini, R
  • Dhingra, Sumit
  • Phogat, Ankit
  • Batra, Vineet
  • Aggarwal, Pranav Vineet
  • Kale, Ajinkya Gorakhnath

Abstract

A method, apparatus, non-transitory computer readable medium, and system for generating pattern data include obtaining an input image including a pattern element. Then, embodiments generate a pattern image including the pattern element based on the input image. The pattern image includes a plurality of versions of the pattern element. Subsequently, embodiments generate a pattern caption based on the pattern image. Embodiments then utilize the pattern image and the pattern caption for training an image generation model to generate pattern images based on a text prompt.

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06T 7/10 - Segmentation; Edge detection
  • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
  • G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations

8.

Human-body-aware visual SLAM in metric scale

      
Application Number 18679225
Status Pending
Filing Date 2024-05-30
First Publication Date 2025-12-04
Owner Adobe Inc. (USA)
Inventor
  • Huang, Chun-Hao
  • Zhao, Yizhou
  • Wang, Yangtuanfeng
  • Yang, Jimei
  • Aksit, Duygu Ceylan

Abstract

In implementation of techniques for scene reconstruction from digital video of moving humans, a computing device implements a scene reconstruction system to receive a digital video depicting a scene including a human and an object. The scene reconstruction system then determines a depth of the human and a depth of the object in the digital video and generates a human mesh modeled from the human in the digital video. Using a machine learning model, the scene reconstruction system determines a size of the object by comparing the depth of the human, the depth of the object, and an estimated dimension of the human mesh. The scene reconstruction system then generates a scene reconstruction including the human mesh and a three-dimensional representation of the object based on the size of the object.

IPC Classes

  • G06T 7/579 - Depth or shape recovery from multiple images from motion
  • G06T 7/215 - Motion-based segmentation
  • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
  • G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation

9.

MUSIC GENERATION WITH TIME VARYING CONTROLS

      
Application Number 18680091
Status Pending
Filing Date 2024-05-31
First Publication Date 2025-12-04
Owner Adobe Inc. (USA)
Inventor
  • Wu, Shih-Lun
  • Bryan, Nicholas J.

Abstract

Embodiments are disclosed for music generation. The method may include receiving a music prompt and one or more time-varying controls. A text-to-music generative model may generate a representation of music. The text-to-music generative model comprises a pretrained conditional generative model and an adapter control branch. The text-to-music generative model has been fine-tuned to generate the representation of music based on the music prompt and the one or more time-varying controls. The representation of music is converted to music audio and the music audio is output.

IPC Classes

  • G10H 1/00 - Details of electrophonic musical instruments
  • G06F 40/40 - Processing or translation of natural language

10.

CONTENT AWARE BACKGROUND GENERATION

      
Application Number 18680351
Status Pending
Filing Date 2024-05-31
First Publication Date 2025-12-04
Owner Adobe Inc. (USA)
Inventor
  • Agarwal, Rishav
  • Jain, Sanyam
  • Bhatt, Harshit
  • Rompilli, Jnaneswara Rao
  • Sethi, Garvit
  • Singhal, Shreya

Abstract

Content aware background generation techniques are described. In one or more examples, a background generation system forms a mask from a digital image and receives an input specifying one or more parameters. The background generation system then generates a background using a machine-learning model and generative artificial intelligence by predicting pixel values based on the digital image, the one or more parameters, and the mask using a loss function. The background is then applied to the digital image and presented for display in a user interface.

IPC Classes

11.

ATTRIBUTION OF DECOMPOSED PARAGRAPHS TO SUPPORTING DOCUMENTS

      
Application Number 18680983
Status Pending
Filing Date 2024-05-31
First Publication Date 2025-12-04
Owner Adobe Inc. (USA)
Inventor
  • Sancheti, Abhilasha
  • Goswami, Koustava
  • Srinivasan, Balaji Vasan

Abstract

In accordance with the described techniques, a processing device receives one or more documents and one or more paragraphs formulated from content of the one or more documents. Using a text decomposition model, the processing device decomposes the one or more paragraphs into a plurality of statements. Using a natural language inference model, the processing device attributes a statement of the plurality of statements to one or more sentences of the one or more documents. Further, the processing device generates one or more annotated documents including at least one visual indication associating the statement with the one or more sentences.

IPC Classes

  • G06F 40/169 - Annotation, e.g. comment data or footnotes
  • G06F 40/289 - Phrasal analysis, e.g. finite state techniques or chunking

12.

TECHNIQUES FOR JOINT CONTEXT QUERY REWRITE AND INTENT DETECTION

      
Application Number 18679973
Status Pending
Filing Date 2024-05-31
First Publication Date 2025-12-04
Owner Adobe Inc. (USA)
Inventor
  • Chen, Xiang
  • Bhattacharya, Uttaran
  • Yu, Tong
  • Kim, Sungchul
  • Kobeissi, Said
  • Rossi, Ryan Anthony
  • Sinha, Ritwik
  • Balan, Razvan-Alexandru
  • Bhutani, Prithvi
  • Tanjim, Md Mehrab
  • Walker, Jordan
  • Mooso, Brandon Galen
  • Zugravu, Andrei
  • Trivedi, Abhisek

Abstract

Artificial intelligence techniques for query management are described. A method comprises generating, by a context detection module, context information for a first query comprising natural language information to request a result from one of a plurality of machine learning models, modifying, by a query modification module, the first query based on the context information to form a first modified query, determining, by an intent module, an intent type for the first modified query, selecting, by a routing module, a machine learning model from the plurality of machine learning models based on the intent type, and routing, by the routing module, the first modified query to the selected machine learning model. Other embodiments are described and claimed.
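The module pipeline this abstract enumerates (context detection, query rewrite, intent detection, routing) can be sketched as a plain function; the module interfaces and names here are illustrative assumptions:

```python
def route_query(query, detect_context, rewrite_query, detect_intent, models):
    """Rewrite a query using detected context, classify intent, then route."""
    context = detect_context(query)            # context detection module
    modified = rewrite_query(query, context)   # query modification module
    intent = detect_intent(modified)           # intent module
    return models[intent], modified            # routing module picks a model
```

Each stage could be backed by anything from a rule set to its own model; only the intent type needs to match a key in the model registry.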

IPC Classes

13.

ROBUST AND CONSISTENT VIDEO INSTANCE SEGMENTATION

      
Application Number 18680579
Status Pending
Filing Date 2024-05-31
First Publication Date 2025-12-04
Owner Adobe Inc. (USA)
Inventor
  • Lee, Joon-Young
  • Oh, Seoung Wug
  • Heo, Miran

Abstract

Embodiments are disclosed for performing video instance segmentation to mask objects across frames of a video. The method may include obtaining a frame of a video sequence where the frame depicts an object. The method further includes determining a calibrated feature of the frame using temporal information associated with a past frame. The method further includes determining a pixel embedding using the calibrated feature. The method further includes determining an object token using a past object token associated with the past frame and the pixel embedding. The method further includes generating a masked frame using the object token and the pixel embedding. The masked frame includes a masked object corresponding to the object.

IPC Classes

  • G06V 20/40 - Scenes; Scene-specific elements in video content
  • G06V 10/62 - Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
  • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning

14.

Visually Similar Variable Font Custom Instance Extraction using Differentiable Rasterizer

      
Application Number 18680687
Status Pending
Filing Date 2024-05-31
First Publication Date 2025-12-04
Owner Adobe Inc. (USA)
Inventor
  • Jindal, Nipun
  • Wang, Zhaowen
  • Brdiczka, Oliver

Abstract

Variable font visual similarity search techniques are described. In an implementation, a query is received referencing an input font for performing a visual similarity search. A search result is generated specifying at least one variable font that is visually similar to the input font by searching a plurality of variable fonts based on the query. The search includes forming a plurality of instances for the at least one variable font, respectively, by adjusting a plurality of axes usable to change an appearance of the at least one variable font and identifying the at least one variable font by comparing the plurality of instances with the input font using a machine-learning model. The search result is presented for display in a user interface.
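The axis-adjustment search described above amounts to enumerating candidate instances and ranking them by a model-based distance; a minimal sketch, assuming some embedding function stands in for the machine-learning comparison (the axis names, candidate values, and brute-force enumeration are illustrative assumptions):

```python
import itertools
import numpy as np

def closest_instance(input_emb, embed, axes):
    """Enumerate axis settings of a variable font and return the visually closest."""
    best, best_dist = None, float("inf")
    # axes maps axis name -> candidate values, e.g. {"wght": [300, 400, 700]}
    for combo in itertools.product(*axes.values()):
        instance = dict(zip(axes.keys(), combo))
        dist = np.linalg.norm(embed(instance) - input_emb)  # model-based distance
        if dist < best_dist:
            best, best_dist = instance, dist
    return best
```

In practice a differentiable rasterizer, as the title suggests, would let the axis values be optimized by gradient descent rather than enumerated.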

IPC Classes

  • G06T 11/20 - Drawing from basic elements, e.g. lines or circles
  • G06F 40/109 - Font handling; Temporal or kinetic typography

15.

GENERATING HOW-TO GUIDES GROUNDED IN ELEMENTS OF IN-USE USER INTERFACES VIA VIRTUAL ASSISTANTS

      
Application Number 18670398
Status Pending
Filing Date 2024-05-21
First Publication Date 2025-11-27
Owner Adobe Inc. (USA)
Inventor
  • Bursztyn, Victor Soares
  • Kim, Minsoo
  • Guo, Shunan
  • Koh, Eunyee

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that generate instructions for performing a next action of a task. For instance, in some cases, the disclosed systems receive, from a client device interacting with a software application, a query for performing a task via a user interface of the application. The disclosed systems generate a lookahead prompt having an execution example corresponding to the task, the execution example including an example task and an example action sequence for the example task. The disclosed systems also generate, from the lookahead prompt using a large language model, an estimated lookahead plan describing one or more actions for performing the task. The disclosed systems also use one or more large language models to generate, from the estimated lookahead plan, instructions to perform a next action for the task via user interaction with an interactive element of the user interface.

IPC Classes

  • G06F 9/451 - Execution arrangements for user interfaces
  • G06F 16/9538 - Presentation of query results
  • G06F 16/957 - Browsing optimisation, e.g. caching or content distillation
  • G06F 40/20 - Natural language analysis

16.

DIRECT MANIPULATION OF IMPLICITLY DEFINED DIGITAL 3D SHAPES

      
Application Number 18672648
Status Pending
Filing Date 2024-05-23
First Publication Date 2025-11-27
Owner Adobe Inc. (USA)
Inventor
  • Michel, Élie Louis Simon
  • Deschaintre, Valentin
  • Gaillard, Mathieu Kevin Pascal
  • Paris, Axel Florent Jacques
  • Kaiser, Adrien
  • Riso, Marzia

Abstract

Techniques are disclosed for direct manipulation of implicitly defined digital three-dimensional (3D) shapes. In an example method, a computing device renders a 3D shape based on an implicit definition including one or more parameters. The computing device receives an indication of an input indicating a modification to the 3D shape at a point. The computing device determines an alternative representation of the point. The computing device determines a position of the point based on the alternative representation. The computing device determines a transformation that relates the position to the one or more parameters. The computing device determines a change in at least one parameter based on the transformation and the input. The computing device re-renders the 3D shape based on the implicit definition and the change in the at least one parameter. The re-rendered 3D shape includes the modification indicated by the input.

IPC Classes

  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

17.

CHAIN-OF-THOUGHT MACHINE-LEARNING MODEL DEBIASING

      
Application Number 18673547
Status Pending
Filing Date 2024-05-24
First Publication Date 2025-11-27
Owner Adobe Inc. (USA)
Inventor
  • Wang, Haoliang
  • Chen, Xiang
  • Yu, Tong
  • Kim, Sungchul
  • Rossi, Ryan A.
  • Wu, Junda
  • Rao, Anup Bandigadi

Abstract

Chain-of-thought machine-learning model debiasing techniques and systems are described. A query is received and context data is produced based on the query, e.g., from an external source. A prompt is generated that includes the context data, the query, and a chain-of-thought prompt, which is processed by a machine-learning model. A candidate result is generated based on processing of the prompt using the machine-learning model. The candidate result includes a candidate answer and a chain-of-thought result describing reasoning indicated by the machine-learning model as used in generating the candidate answer.

IPC Classes

18.

AUTOMATED GENERATION OF GOVERNING LABEL RECOMMENDATIONS

      
Application Number 18674253
Status Pending
Filing Date 2024-05-24
First Publication Date 2025-11-27
Owner Adobe Inc. (USA)
Inventor
  • Roy, Tathagato
  • Manjrekar, Nikhil
  • Mukherjee, Koyel
  • Tyagi, Atharv
  • Shah, Raunak

Abstract

Methods and systems are provided for facilitating generation and/or presentation of governing label recommendations for data. In embodiments described herein, a representation of a data schema associated with a dataset having a plurality of attributes is obtained. A governing label for a particular attribute of the plurality of attributes is identified, via a machine learning model, based on the representation of the data schema associated with the dataset. Thereafter, a recommendation to assign the governing label to the particular attribute in the dataset is presented.

IPC Classes

  • G06F 16/35 - Clustering; Classification
  • G06F 16/383 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content

19.

CONTENT RELEVANCE BASED TABLE QUERY ANSWERING

      
Application Number 18674598
Status Pending
Filing Date 2024-05-24
First Publication Date 2025-11-27
Owner Adobe Inc. (USA)
Inventor
  • Kumar, Yaman
  • Bhatia, Sumit
  • Aggarwal, Milan
  • Krishnamurthy, Balaji
  • Patnaik, Sohan
  • Changwal, Heril

Abstract

Content relevance based table query answering is described. In one or more examples, a query and a table are received. The table includes a plurality of cells. A plurality of scores corresponding to the plurality of cells is calculated based on the query. One or more machine-learning models are then leveraged to generate a search result from the query, table, and scores, which is presented in a user interface for display.
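The per-cell relevance scoring can be sketched as a cosine-similarity ranking; the embedding inputs and top-k selection are illustrative assumptions standing in for whatever scoring model the claims cover:

```python
import numpy as np

def top_cells(query_emb, cell_embs, cells, k=3):
    """Score each table cell's relevance to the query and return the k best."""
    q = query_emb / np.linalg.norm(query_emb)
    c = cell_embs / np.linalg.norm(cell_embs, axis=1, keepdims=True)
    scores = c @ q                           # cosine relevance per cell
    order = np.argsort(scores)[::-1][:k]     # highest scores first
    return [cells[i] for i in order], scores
```

The highest-scoring cells (or all cells plus their scores) would then be passed to the answer-generating model.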

IPC Classes

20.

GENERATING ASSISTIVE GUIDES OF CANDIDATE PATHS IN AN IMAGE FOR USER TRACING INPUTS

      
Application Number 18670383
Status Pending
Filing Date 2024-05-21
First Publication Date 2025-11-27
Owner Adobe Inc. (USA)
Inventor
  • Kumar, Harish
  • Kumar, Apurva
  • Nellutla, Aditya

Abstract

The present disclosure is directed toward systems, methods, and non-transitory computer readable media that provide assistive guides for path tracing of raster images. In particular, in one or more implementations, the disclosed systems determine a set of outlines corresponding to boundaries of a set of segments within a raster image. The disclosed systems select, from the set of outlines, an outline corresponding to a segment in response to a client device input indicating point(s) located within a threshold distance of the outline. The disclosed systems provide, for display within a graphical user interface of a client device, a highlighted indication of the outline corresponding to the segment. The disclosed systems generate, within a vector image, a vector path based on the outline corresponding to the segment in response to a selection of the outline via the graphical user interface.

IPC Classes

  • G06T 11/20 - Drawing from basic elements, e.g. lines or circles
  • G06T 5/30 - Erosion or dilatation, e.g. thinning
  • G06T 7/11 - Region-based segmentation
  • G06T 7/12 - Edge-based segmentation
  • G06T 7/136 - Segmentation; Edge detection involving thresholding

21.

DETERMINING LARGE LANGUAGE MODEL EFFECTIVENESS UTILIZING DEEP LEARNING

      
Application Number 18671265
Status Pending
Filing Date 2024-05-22
First Publication Date 2025-11-27
Owner Adobe Inc. (USA)
Inventor
  • Shekhar, Shivanshu
  • Dubey, Tanishq
  • Mukherjee, Koyel
  • Saxena, Apoorv Umang
  • Tyagi, Atharv
  • Kotla, Nishanth

Abstract

The present disclosure relates to systems, non-transitory computer-readable media, and methods for predicting summary quality scores and determining summary generation costs of large language models to generate a digital document summary. In particular, in one or more embodiments, the disclosed systems extract one or more text segments from a digital document. Further, the disclosed systems generate, utilizing a quality prediction neural network, a predicted summary quality score for each of a plurality of large language models for the one or more text segments. Furthermore, the disclosed systems select a large language model from the plurality of large language models based on the predicted summary quality scores. Moreover, the disclosed systems generate, utilizing the selected large language model, a summary of the digital document.
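The selection step (pick the LLM with the best predicted summary quality, with cost as a consideration) reduces to a small decision rule; the optional cost budget and the callable interfaces are hypothetical simplifications of the claimed system:

```python
def select_model(models, predict_quality, cost, budget=None):
    """Choose the LLM with highest predicted summary quality, optionally under a cost budget."""
    # predict_quality stands in for the quality prediction neural network.
    candidates = [m for m in models if budget is None or cost[m] <= budget]
    return max(candidates, key=predict_quality)
```

The selected model is then the one actually invoked to produce the document summary.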

IPC Classes

22.

EVALUATING EDGES OF COLLAPSED IDENTITY GRAPHS FOR IDENTITY RESOLUTION

      
Application Number 18672990
Status Pending
Filing Date 2024-05-23
First Publication Date 2025-11-27
Owner ADOBE INC. (USA)
Inventor
  • Ido, Shota
  • Kadiyala, Sai Venkatesh
  • Biswas, Rahul
  • Anand, Mansi
  • Gandhi, Mandeep
  • Reddy, Kiran Kumar
  • Bhaskaran, Harikrishnan

Abstract

Methods and systems are provided for evaluating edges of collapsed identity graphs for identity resolution. In embodiments described herein, a collapsed state of identity graphs, such as based on an identity namespace limit being exceeded by the identity graphs, is determined by applying an identity node and edge of an incoming record to the identity graphs. A temporary state of the identity graphs is determined by pruning edges of the collapsed state. A non-collapsed state of the identity graphs that includes the edge of the incoming record is determined by applying the edge of the incoming record to the temporary state. A different edge is determined to be pruned from the non-collapsed state because, when the different edge is applied to the temporary state together with the edge of the incoming record, the temporary state collapses into the collapsed state. An identity graph is updated based on the non-collapsed state.

IPC Classes

  • G06Q 30/0201 - Market modellingMarket analysisCollecting market data

23.

DOCUMENT AGNOSTIC PERFORMANCE ENHANCEMENT THROUGH MULTI-STAGE MIXED LEVEL OF DETAIL RENDERING

      
Application Number 18673861
Status Pending
Filing Date 2024-05-24
First Publication Date 2025-11-27
Owner Adobe Inc. (USA)
Inventor
  • Sharma, Deepak Kumar
  • Gautam, Ankur Krishna
  • Gupta, Angad Kumar

Abstract

The present disclosure relates to systems, non-transitory computer-readable media, and methods for providing a mixed level-of-detail rendering of a vector graphics document for display and modification via a client device while downloading and rendering the full vector graphics document. In particular, the disclosed systems download, in response to a request to load a vector graphics document at a client device, a raster image of the vector graphics document. Moreover, the disclosed systems select and download a vector graphic subunit of the vector graphics document, for example, by selecting a priority graphic design layout boundary. Furthermore, the disclosed systems provide, for display via the client device, a mixed level-of-detail rendering comprising the raster image of the vector graphics document and the vector graphic subunit as an overlay of the raster image such that the client device can modify the vector graphic subunit while downloading an additional vector graphic subunit.

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction

24.

GENERATIVE INPAINTING UTILIZING SOURCE INPUTS WITH INTELLIGENT BOUNDS

      
Application Number 18920088
Status Pending
Filing Date 2024-10-18
First Publication Date 2025-11-27
Owner Adobe Inc. (USA)
Inventor
  • Erickson, Alan L.
  • Zhou, Yuqian

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that intelligently resize fill regions when generating content for a digital image. For instance, in one or more embodiments, the disclosed systems identify a fill region for a digital image. The disclosed systems intelligently derive source image bounds based on one or more parameters of a generative model. Furthermore, the disclosed systems generate, utilizing the generative model, a content fill from the source image bounds and the digital image. The disclosed systems resize the content fill and generate a modified digital image including the resized content fill in a location of the fill region of the digital image.

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
  • G06T 11/40 - Filling a planar surface by adding surface attributes, e.g. colour or texture

25.

ADOBE FIREFLY

      
Application Number 019279771
Status Pending
Filing Date 2025-11-20
Owner Adobe Inc. (USA)
NICE Classes 09 - Scientific and electric apparatus and instruments

Goods & Services

Downloadable software for using artificial intelligence models for content generation and management; downloadable software for using artificial intelligence models for content generation and management, namely, image, video, sound, audio, and music generation from user prompts, image editing, and for generating translations; downloadable application programming interface (API) software.

26.

Vector Object Generation from Raster Objects using Semantic Vectorization

      
Application Number 19282906
Status Pending
Filing Date 2025-07-28
First Publication Date 2025-11-20
Owner Adobe Inc. (USA)
Inventor Tailang, Nikhil

Abstract

Semantic vectorization techniques are described that support generating and editing of vector objects from raster objects. A raster object, for instance, is received as an input by a semantic vectorization system. The raster object is utilized by the semantic vectorization system to generate a semantic classification for the raster object. The semantic classification identifies semantic objects in the raster image. The semantic vectorization system leverages the semantic classification to generate vector objects. As a result, the vector objects resemble the semantic objects in the raster object.

IPC Classes

  • G06T 11/20 - Drawing from basic elements, e.g. lines or circles
  • G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
  • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
  • G06F 18/22 - Matching criteria, e.g. proximity measures
  • G06F 18/23 - Clustering techniques
  • G06F 18/2431 - Multiple classes
  • G06T 7/10 - Segmentation; Edge detection
  • G06V 30/19 - Recognition using electronic means
  • G06V 30/262 - Techniques for post-processing, e.g. correcting the recognition result using context analysis, e.g. lexical, syntactic or semantic context

27.

ALIGNED VISION-LANGUAGE MODEL FOR TEXT-RICH IMAGE UNDERSTANDING

      
Application Number 18666519
Status Pending
Filing Date 2024-05-16
First Publication Date 2025-11-20
Owner Adobe Inc. (USA)
Inventor
  • Zhang, Ruiyi
  • Gu, Jiuxiang
  • Zhou, Yufan
  • Lipka, Nedim
  • Zhang, Yanzhe
  • Sun, Tong

Abstract

The present disclosure relates to systems, non-transitory computer-readable media, and methods for generating and implementing a vision-language model that identifies and understands text-rich content depicted in digital images. For example, the disclosed systems determine, from among a plurality of digital images with at least a threshold probability of depicting text-rich content, a subset of digital images corresponding to a set of text-rich image classifications. In some embodiments, the disclosed systems generate a ground truth text phrase utilizing an optical character recognition model to process a digital image from the subset of digital images. In certain embodiments, the disclosed systems also generate a predicted text phrase utilizing a vision-language model and compare the ground truth text phrase with the predicted text phrase. In some embodiments, the disclosed systems modify parameters of the vision-language model based on comparing the ground truth text phrase and the predicted text phrase.

IPC Classes

  • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
  • G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
  • G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
  • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
  • G06V 30/10 - Character recognition

28.

RESPONDING TO A USER QUERY USING MACHINE LEARNING

      
Application Number 18667690
Status Pending
Filing Date 2024-05-17
First Publication Date 2025-11-20
Owner ADOBE INC. (USA)
Inventor
  • Saad-Falcon, Jon
  • Barrow, Joseph D.
  • Manjunatha, Varun
  • Prakash, Anusha
  • Rossi, Ryan A.
  • Dernoncourt, Franck
  • Siu, Alexa F.
  • Nenkova, Ani
  • Yoon, Seunghyun

Abstract

A method, apparatus, non-transitory computer readable medium, and system for data processing include obtaining a query relating to a document and identifying metadata for the document based on the query, where the metadata describes a structure including a plurality of portions of the document. Some embodiments include generating, using a machine learning model, a retrieval command based on the query and the metadata, selectively retrieving at least one of the plurality of portions of the document based on the retrieval command, and generating, using the machine learning model, a response to the query based on the at least one of the plurality of portions of the document.
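
A toy version of this selective-retrieval loop, with a keyword-overlap stand-in for the machine learning model and an assumed `RETRIEVE <id>` command format:

```python
# Sketch only: the command grammar, metadata shape, and scoring model are
# illustrative assumptions, not the disclosed system.

def generate_retrieval_command(query, metadata):
    """Stand-in for the ML model: score each portion's title against the
    query and emit a retrieval command for the best match."""
    def score(title):
        return len(set(query.lower().split()) & set(title.lower().split()))
    best = max(metadata["portions"], key=lambda p: score(p["title"]))
    return f"RETRIEVE {best['id']}"

def retrieve(document, command):
    # Selectively fetch only the portion named by the command.
    portion_id = command.split()[-1]
    return document[portion_id]

metadata = {"portions": [{"id": "s1", "title": "Introduction"},
                         {"id": "s2", "title": "Quarterly revenue results"}]}
document = {"s1": "Intro text...", "s2": "Revenue grew 12% year over year."}
command = generate_retrieval_command("summarize the revenue results", metadata)
context = retrieve(document, command)
```

Only the retrieved portion, rather than the whole document, would then be passed back to the model to generate the response.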

IPC Classes

  • G06F 16/14 - Details of searching files based on file metadata

29.

NEURAL BASED GEOMETRY IN BOUNDING VOLUME HIERARCHY

      
Application Number 18669509
Status Pending
Filing Date 2024-05-20
First Publication Date 2025-11-20
Owner Adobe Inc. (USA)
Inventor
  • Michel, Élie Louis Simon
  • Boubekeur, Tamy
  • Thiery, Jean Marc Christian Marie
  • Georgiev, Iliyan Atanasov
  • Weier, Philip

Abstract

Techniques for neural based geometry in bounding volume hierarchies are described for enabling identification of properties of geometric objects of a scene. In an example, a processing device is operable to receive a bounding volume hierarchy that partitions geometric objects of a three-dimensional scene into bounding volumes individually assigned to respective nodes. At least one said node includes a neural representation encoding neural network information representing a respective said geometric object. The processing device is further operable to render the scene using the bounding volume hierarchy by constructing the respective said geometric object using the neural representation. The processing device is further operable to present the rendered scene for display in a user interface.

IPC Classes

30.

MODULARIZED AND EXTENSIBLE FRAMEWORK FOR VISUALIZATION-TO-CAPTION GENERATION

      
Application Number 18663531
Status Pending
Filing Date 2024-05-14
First Publication Date 2025-11-20
Owner Adobe Inc. (USA)
Inventor
  • Guo, Shunan
  • Chan, Yeuk-Yin
  • Zhang, Wei
  • Soares Bursztyn, Victor
  • Bhutani, Prithvi
  • Hoffswell, Jane Elizabeth
  • Koh, Eunyee

Abstract

Some aspects relate to technologies providing a framework for generating captions from chart visualizations. In accordance with some aspects, input data for a chart is received that includes an indication of the chart type and chart data for the chart. Using the chart data, insight data is determined for each of a number of insight types defined for the chart type. The insight data can be generated using a rule set defined for each insight type. Using the insight data, a caption is generated with natural language text for each insight type. A user interface is provided that includes the chart and at least one of the captions.
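
One way to read "a rule set defined for each insight type" is a table of insight rules keyed by chart type; the insight names, rules, and caption templates below are invented for illustration:

```python
# Hypothetical sketch of per-insight-type rules and templated captions.

INSIGHT_RULES = {
    "bar": {
        "maximum": lambda d: max(d, key=d.get),   # category with largest value
        "minimum": lambda d: min(d, key=d.get),   # category with smallest value
    },
}

TEMPLATES = {
    "maximum": "{key} has the highest value ({value}).",
    "minimum": "{key} has the lowest value ({value}).",
}

def captions_for_chart(chart_type, chart_data):
    """Apply every rule defined for the chart type, then render each insight
    as natural language text."""
    out = []
    for insight_type, rule in INSIGHT_RULES[chart_type].items():
        key = rule(chart_data)
        out.append(TEMPLATES[insight_type].format(key=key, value=chart_data[key]))
    return out

caps = captions_for_chart("bar", {"Q1": 10, "Q2": 25, "Q3": 18})
```

New insight types can be added by extending the rule and template tables, which is the modularity the title refers to.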

IPC Classes

31.

LOCALIZED ATTENTION-GUIDED SAMPLING FOR IMAGE GENERATION

      
Application Number 18664600
Status Pending
Filing Date 2024-05-15
First Publication Date 2025-11-20
Owner ADOBE INC. (USA)
Inventor
  • Ham, Cusuh
  • Fisher, Matthew David
  • Kolkin, Nicholas Isaac
  • Liu, Yuchen
  • Zhang, Richard
  • Hinz, Tobias

Abstract

A method, apparatus, non-transitory computer readable medium, and system for image generation include obtaining an input prompt. A customized residual is added to a base parameter of an image generation model based on an element of the input prompt to obtain an updated parameter. The customized residual is determined based on the element of the input prompt. A synthesized image is generated using the image generation model with the updated parameter. The synthesized image depicts the element based on the input prompt.

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
  • G06T 7/194 - Segmentation; Edge detection involving foreground-background segmentation

32.

AUTOMATED MANAGEMENT OF BRAND REPRESENTATION USING ARTIFICIAL INTELLIGENCE

      
Application Number 18665157
Status Pending
Filing Date 2024-05-15
First Publication Date 2025-11-20
Owner Adobe Inc. (USA)
Inventor
  • Tripathi, Anubhav
  • Gupta, Rahul
  • Bothra, Prerna
  • Gupta, Mayank
  • Jakhar, Kailash Chand
  • Goel, Ishika

Abstract

Methods, computer systems, computer storage media, and graphical user interfaces are provided for facilitating management of brand representations. In one implementation, a set of brand guidelines associated with various guideline categories (e.g., text- and image-based guidelines) is obtained. Thereafter, a set of actionable guidelines is identified for the various guideline categories using an artificial intelligence model(s) (e.g., LLM). In accordance with obtaining brand-inclusive content associated with a brand, brand conformity data is generated, via the artificial intelligence model(s), to indicate an extent of conformity of the brand-inclusive content to at least one actionable guideline. Such brand conformity data can be provided for display to convey brand conformance of the brand-inclusive content.

IPC Classes

33.

RESOURCE-AWARE MODEL-DRIVEN LATENCY PREDICTION FOR MODEL SERVING

      
Application Number 18669193
Status Pending
Filing Date 2024-05-20
First Publication Date 2025-11-20
Owner Adobe Inc. (USA)
Inventor
  • Liang, Qianlin
  • Wang, Haoliang

Abstract

Some aspects relate to technologies for using machine learning models to predict latency for executing neural networks on various hardware configurations. In accordance with some aspects, a neural network representation for a target neural network having a plurality of layers is received. A first machine learning model groups layers of the target neural network to provide a plurality of layer groups based on the neural network representation, with at least one layer group comprising multiple layers from the target neural network that can be executed by a single operation. A second machine learning model generates a latency prediction for executing the target neural network on a target hardware configuration based on the layer groups.
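
A minimal sketch of the two-stage idea, assuming a hard-coded fusion rule in place of the first model and a constant per-group cost in place of the second:

```python
# Illustrative only: real systems learn both the grouping and the latency
# predictor; the fusion pairs and cost below are made-up stand-ins.

FUSABLE = {("conv", "relu"), ("linear", "relu")}

def group_layers(layers):
    """Greedily merge adjacent layers that can run as a single fused op."""
    groups, current = [], [layers[0]]
    for nxt in layers[1:]:
        if (current[-1], nxt) in FUSABLE:
            current.append(nxt)       # executed as one fused operation
        else:
            groups.append(current)
            current = [nxt]
    groups.append(current)
    return groups

def predict_latency(groups, cost_per_group=1.5):
    # Stand-in for the second model: one fixed cost per group, regardless of
    # how many layers were fused into it.
    return cost_per_group * len(groups)

groups = group_layers(["conv", "relu", "conv", "relu", "linear"])
latency = predict_latency(groups)
```

The point of grouping first is that a fused conv+relu costs roughly one kernel launch, so predicting per-layer and summing would overestimate latency.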

IPC Classes

  • G06N 3/0895 - Weakly supervised learning, e.g. semi-supervised or self-supervised learning
  • G06N 3/04 - Architecture, e.g. interconnection topology

34.

ADOBE FIREFLY

      
Application Number 243752000
Status Pending
Filing Date 2025-11-14
Owner Adobe Inc. (USA)
NICE Classes 09 - Scientific and electric apparatus and instruments

Goods & Services

(1) Downloadable software for using artificial intelligence models for content generation and management, namely, image, video, sound, audio, and music generation from user prompts, image editing, and for generating translations; downloadable software for using artificial intelligence models for content generation and management; downloadable application programming interface (API) software.

35.

PERSONALIZED FORM ERROR CORRECTION PROPAGATION

      
Application Number 19272806
Status Pending
Filing Date 2025-07-17
First Publication Date 2025-11-13
Owner Adobe Inc. (USA)
Inventor
  • Singh, Silky
  • Jandial, Surgan
  • Deshmukh, Shripad Vilasrao
  • Aggarwal, Milan
  • Sarkar, Mausoom
  • Krishnamurthy, Balaji
  • Jain, Arneh
  • Java, Abhinav

Abstract

A corrective noise system receives an electronic version of a fillable form generated by a segmentation network and receives a correction to a segmentation error in the electronic version of the fillable form. The corrective noise system is trained to generate noise that represents the correction and superimpose the noise on the fillable form. The corrective noise system is further trained to identify regions in a corpus of forms that are semantically similar to a region that was subject to the correction. The generated noise is propagated to the semantically similar regions in the corpus of forms and the noisy corpus of forms is provided as input to the segmentation network. The noise causes the segmentation network to accurately identify fillable regions in the corpus of forms and output a segmented version of the corpus of forms having improved fidelity without retraining or otherwise modifying the segmentation network.

IPC Classes

  • G06V 30/262 - Techniques for post-processing, e.g. correcting the recognition result using context analysis, e.g. lexical, syntactic or semantic context
  • G06V 30/14 - Image acquisition
  • G06V 30/19 - Recognition using electronic means
  • G06V 30/414 - Extracting the geometrical structure, e.g. layout tree; Block segmentation, e.g. bounding boxes for graphics or text

36.

RELATIONAL LOSS FOR ENHANCING TEXT-BASED STYLE TRANSFER

      
Application Number 18657567
Status Pending
Filing Date 2024-05-07
First Publication Date 2025-11-13
Owner Adobe Inc. (USA)
Inventor
  • Jandial, Surgan
  • Shahid, Simra
  • Krishnamurthy, Balaji
  • Java, Abhinav
  • Deshmukh, Shripad

Abstract

An image generation system accesses an input image displayed on a user interface. The image generation system receives, via the user interface, a target style text defining a target style for a stylized image to be generated based on the input image and a request to generate the stylized image. The image generation system generates the stylized image. Generating the stylized image includes applying a text-guided image generation model to the input image and the target style text, wherein the text-guided image generation model minimizes a loss between a first relationship between the generated stylized image and a set of style templates and a second relationship between the target style text and the set of style templates. The image generation system displays, via the user interface responsive to receiving the request, the generated stylized image.
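
The relational loss described here can be illustrated with toy embeddings: compare the image-to-template similarity vector against the text-to-template similarity vector and penalize the gap. The vectors and the squared-error form below are assumptions for illustration:

```python
import math

# Sketch of a relational loss: match the *relationship* of the stylized image
# to a set of style templates against the relationship of the target style
# text to the same templates.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def relational_loss(image_emb, text_emb, template_embs):
    # First relationship: stylized image vs. each style template.
    r_image = [cosine(image_emb, t) for t in template_embs]
    # Second relationship: target style text vs. each style template.
    r_text = [cosine(text_emb, t) for t in template_embs]
    # Minimize the gap between the two relationship vectors.
    return sum((a - b) ** 2 for a, b in zip(r_image, r_text)) / len(template_embs)

templates = [[1.0, 0.0], [0.0, 1.0]]
loss_aligned = relational_loss([0.6, 0.8], [0.6, 0.8], templates)
loss_mismatch = relational_loss([1.0, 0.0], [0.0, 1.0], templates)
```

Comparing relationships rather than embeddings directly lets the image and text live in different embedding spaces, as long as both can be scored against the shared templates.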

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06F 40/47 - Machine-assisted translation, e.g. using translation memory

37.

GENERATING VISUAL DEPICTIONS OF CORRELATIONS BETWEEN IMAGE LAYER MASKS WITH SUGGESTIVE BOUNDARIES

      
Application Number 18659329
Status Pending
Filing Date 2024-05-09
First Publication Date 2025-11-13
Owner Adobe Inc. (USA)
Inventor
  • Garg, Aasma
  • Green, Peter

Abstract

Methods, systems, and non-transitory computer readable storage media are disclosed for generating visualizations of mask correlations for a layer of a digital image. The disclosed system determines one or more bounding boxes corresponding to one or more hidden areas or one or more visible areas of a layer of a digital image according to a raster mask or a vector mask corresponding to the layer. The disclosed system determines display attributes for the one or more bounding boxes in response to determining that the one or more bounding boxes correspond to the one or more hidden areas or the one or more visible areas. The disclosed system generates, for display with the layer within a graphical user interface, one or more boundary highlights representing the one or more bounding boxes with the display attributes.

IPC Classes

  • G06T 11/00 - 2D [Two Dimensional] image generation
  • G06T 7/12 - Edge-based segmentation
  • G06T 7/136 - Segmentation; Edge detection involving thresholding
  • G06T 7/90 - Determination of colour characteristics

38.

TECHNIQUES TO PREDICT INTERACTIONS UTILIZING HIDDEN MARKOV MODELS

      
Application Number 18662025
Status Pending
Filing Date 2024-05-13
First Publication Date 2025-11-13
Owner Adobe Inc. (USA)
Inventor
  • Ju, Wangqian
  • Lou, Hsin-Ya
  • Zhou, Tian
  • Chen, Yuting

Abstract

Embodiments include a method, apparatus, system, and computer-readable medium for generating a set of input features based on user account data associated with a user account, generating a hidden Markov model based on the set of input features, generating a predicted subscription probability matrix comprising probability values representing potential account interactions between the user account and a set of computing applications, modifying one or more probability values of the predicted subscription probability matrix to form a modified predicted subscription probability matrix, and determining a predicted account interaction metric for the user account based on the modified predicted subscription probability matrix. Other embodiments are described and claimed.
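
A schematic of this pipeline with made-up states and probabilities; the HMM fitting step is replaced by a stand-in, and the "modification" step renormalizes the affected row so it stays a valid distribution:

```python
# Toy sketch: features -> transition matrix -> manual adjustment -> metric.
# States, probabilities, and the metric definition are illustrative assumptions.

STATES = ["free", "trial", "subscribed"]

def predict_matrix(features):
    # Stand-in for fitting a hidden Markov model: engaged accounts get a
    # higher trial -> subscribed transition probability.
    p = 0.6 if features["engaged"] else 0.2
    return {
        "free":       {"free": 0.7, "trial": 0.3, "subscribed": 0.0},
        "trial":      {"free": 1.0 - p - 0.2, "trial": 0.2, "subscribed": p},
        "subscribed": {"free": 0.1, "trial": 0.0, "subscribed": 0.9},
    }

def modify(matrix, state, target, value):
    """Override one probability and renormalize that row."""
    row = dict(matrix[state])
    row[target] = value
    total = sum(row.values())
    matrix[state] = {k: v / total for k, v in row.items()}
    return matrix

matrix = predict_matrix({"engaged": True})
matrix = modify(matrix, "trial", "subscribed", 0.8)
# Predicted interaction metric: probability of subscribing next step from "trial".
metric = matrix["trial"]["subscribed"]
```

Renormalizing after the override keeps each row summing to one, which any downstream probabilistic computation implicitly relies on.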

IPC Classes

  • G06Q 30/0202 - Market predictions or forecasting for commercial activities
  • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
  • G06Q 30/0201 - Market modelling; Market analysis; Collecting market data

39.

Type font

      
Application Number 29935941
Grant Number D1101843
Status In Force
Filing Date 2024-04-04
First Publication Date 2025-11-11
Grant Date 2025-11-11
Owner Adobe Inc. (USA)
Inventor Slimbach, Robert Joseph

40.

UTILIZING MACHINE LEARNING MODELS FOR PATCH RETRIEVAL AND DEFORMATION IN COMPLETING THREE-DIMENSIONAL DIGITAL SHAPES

      
Application Number 19270209
Status Pending
Filing Date 2025-07-15
First Publication Date 2025-11-06
Owner Adobe Inc. (USA)
Inventor
  • Chaudhuri, Siddhartha
  • Sun, Bo
  • Kim, Vladimir
  • Aigerman, Noam

Abstract

Methods, systems, and non-transitory computer readable storage media are disclosed that utilize machine learning models for patch retrieval and deformation in completing three-dimensional digital shapes. In particular, in one or more implementations, the disclosed systems utilize a machine learning model to predict a coarse completion shape from an incomplete 3D digital shape. The disclosed systems sample coarse 3D patches from the coarse 3D digital shape and learn a shape distance function to retrieve detailed 3D shape patches in the input shape. Moreover, the disclosed systems learn a deformation for each retrieved patch and blending weights to integrate the retrieved patches into a continuous surface.

IPC Classes

  • G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation
  • G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
  • G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries

41.

IDENTIFYING AND ALIGNING VIDEO CLIPS FROM LARGE-SCALE VIDEO DATASETS

      
Application Number 18653577
Status Pending
Filing Date 2024-05-02
First Publication Date 2025-11-06
Owner Adobe Inc. (USA)
Inventor
  • Jenni, Simon
  • Dave, Ishan Rajendrakumar
  • Heilbron, Fabian David Caba

Abstract

Embodiments are disclosed for retrieving videos for a semantic and temporal alignment between a pair of video clips. The method may include receiving a query video clip. The method may further include determining alignment ratios between the query video clip and one or more candidate video clips. The method may further include identifying an alignable video clip from the one or more candidate video clips based on the alignment ratios. The method may further include aligning the alignable video clip with the query video clip.

IPC Classes

  • G06V 20/40 - ScenesScene-specific elements in video content
  • G06F 16/735 - Filtering based on additional data, e.g. user or group profiles
  • G11B 27/036 - Insert-editing
  • G11B 27/10 - Indexing; Addressing; Timing or synchronising; Measuring tape travel

42.

EDITING DIGITAL IMAGES USING EXECUTABLE CODE GENERATED BY LARGE LANGUAGE MODELS FROM NATURAL LANGUAGE INPUT

      
Application Number 18654904
Status Pending
Filing Date 2024-05-03
First Publication Date 2025-11-06
Owner Adobe Inc. (USA)
Inventor
  • Zhao, Handong
  • Wu, Qiucheng
  • Bui, Trung
  • Yoon, Seunghyun
  • Tran, Quan
  • Shi, Jing

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that perform text-to-image editing using executable code generated from natural language text input. For instance, in one or more embodiments, the disclosed systems receive, from a client device, a digital image and natural language text input providing instructions for modifying the digital image. The disclosed systems also generate, using a large language model, executable action code for modifying the digital image in accordance with the instructions of the natural language text input, the executable action code being compatible with an editing application. The disclosed systems further modify the digital image by executing the executable action code via the editing application and provide the modified digital image for display via a graphical user interface of the client device.
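
The generate-then-execute loop might look like the following sketch, where a canned reply stands in for the large language model and a tiny dictionary stands in for the editing application's API:

```python
# Hypothetical sketch: every name here (EDITING_API, the 'brightness' action,
# the generated snippet) is invented for illustration.

EDITING_API = {
    # Brighten an image represented as a nested list of pixel values.
    "brightness": lambda img, v: [[min(255, p + v) for p in row] for row in img],
}

def mock_llm(instruction):
    # A real system would prompt a large language model with the instruction
    # and the API's documentation; here we return a fixed compatible snippet.
    return "result = EDITING_API['brightness'](image, 40)"

def edit(image, instruction):
    code = mock_llm(instruction)
    scope = {"EDITING_API": EDITING_API, "image": image}
    exec(code, scope)          # run the generated action code
    return scope["result"]

out = edit([[10, 250]], "make the image brighter")
```

Generating application-compatible code, instead of pixels, means the edit stays inspectable and reversible inside the editing application; a production system would also sandbox the executed code.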

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06T 7/11 - Region-based segmentation

43.

USING SHAPLEY VALUES TO EVALUATE PROMPT GENERATION PARAMETERS

      
Application Number 18655047
Status Pending
Filing Date 2024-05-03
First Publication Date 2025-11-06
Owner ADOBE INC. (USA)
Inventor
  • Venkitachalam, Shankar
  • M Y, Meghanath
  • Pai, Deepak
  • Basu, Debraj Debashish
  • Narang, Anish

Abstract

Methods and systems are provided for using Shapley values to evaluate prompt generation parameters. In embodiments described herein, a selection of prompt parameters are accessed. A plurality of prompts are generated as a function of a combination of the prompt parameters. A corresponding quality metric is determined for each of the prompts. Prompt parameter contribution metrics are determined using a Shapley-value-based determination corresponding to a contribution of each of the prompt parameters to the corresponding content quality metric for each of the prompts. The prompt parameter contribution metrics are then displayed.
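
With only a few prompt parameters, Shapley contributions can be computed exactly by averaging each parameter's marginal gain over all orderings; the quality function below is a made-up stand-in for the content quality metric:

```python
from itertools import permutations

# Exact Shapley values over a toy set of prompt parameters.

def quality(params):
    # Invented metric: "style" adds 0.3, "detail" adds 0.2, and the two
    # interact for an extra 0.1 when used together.
    q = 0.0
    if "style" in params:
        q += 0.3
    if "detail" in params:
        q += 0.2
    if "style" in params and "detail" in params:
        q += 0.1
    return q

def shapley(players, value_fn):
    """Average each player's marginal contribution over every ordering."""
    contrib = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        seen = set()
        for p in order:
            before = value_fn(seen)
            seen.add(p)
            contrib[p] += value_fn(seen) - before
    return {p: c / len(perms) for p, c in contrib.items()}

values = shapley(["style", "detail"], quality)
```

The contributions sum to the full-set quality (here 0.35 + 0.25 = 0.6), which is the efficiency property that makes Shapley values attractive for attributing a quality metric across parameters.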

IPC Classes

44.

Computer display screen with icon

      
Application Number 29926615
Grant Number D1100980
Status In Force
Filing Date 2024-01-31
First Publication Date 2025-11-04
Grant Date 2025-11-04
Owner ADOBE INC. (USA)
Inventor
  • Walter, Julia
  • Reinemann, Bettina

45.

SAMPLING LIGHT DIRECTIONS ON NEURAL MATERIALS

      
Application Number 18644322
Status Pending
Filing Date 2024-04-24
First Publication Date 2025-10-30
Owner
  • Adobe Inc. (USA)
  • THE REGENTS OF THE UNIVERSITY OF CALIFORNIA (USA)
Inventor
  • Luan, Fujun
  • Xu, Zexiang
  • Hašan, Miloš
  • Georgiev, Iliyan Atanasov
  • Wu, Liwen
  • Xu, Bing
  • Ramamoorthi, Ravi

Abstract

In implementation of techniques for sampling light directions on neural materials, a computing device implements a light direction system to receive neural features of a material and an indication of a view direction toward the material. Using a mixture of analytical lobes, a normalizing flow, or a histogram prediction, the light direction system predicts a probability density function (PDF). The light direction system then samples the PDF, calculates prominence values for each of a plurality of candidate light directions based on the PDF, and determines a light direction based on the prominence values.

IPC Classes

46.

EDITING SHADOWS IN DIGITAL IMAGES UTILIZING MACHINE LEARNING MODELS

      
Application Number 18651176
Status Pending
Filing Date 2024-04-30
First Publication Date 2025-10-30
Owner Adobe Inc. (USA)
Inventor
  • Shu, Zhixin
  • Hou, Andrew
  • Zhang, He
  • Zhang, Xuaner
  • Hold-Geoffroy, Yannick
  • Yoon, Jae Shin

Abstract

The present disclosure relates to systems, non-transitory computer-readable media, and methods for editing shadows in digital images. In particular, in some embodiments, the disclosed systems determine, utilizing a lighting estimation network, an environment map for a digital image, the environment map comprising a dominant light. In addition, in some embodiments, the disclosed systems generate, utilizing a lighting diffusion network, a diffused image from the digital image, the diffused image comprising smoothed shading. Moreover, in some embodiments, the disclosed systems generate, utilizing a shadow synthesis network, a shadowed image from the diffused image and a modified environment map comprising a modified dominant light. Furthermore, in some embodiments, the disclosed systems generate, from the diffused image and the shadowed image, a modified digital image comprising an edited shadow.

IPC Classes

  • G06T 11/00 - 2D [Two Dimensional] image generation

47.

THREE-DIMENSIONAL RECONSTRUCTIONS BASED ON GAUSSIAN PRIMITIVES

      
Application Number 18646503
Status Pending
Filing Date 2024-04-25
First Publication Date 2025-10-30
Owner Adobe Inc. (USA)
Inventor
  • Zhang, Kai
  • Tan, Hao
  • Bi, Sai
  • Xu, Zexiang
  • Zhao, Nanxuan
  • Sunkavalli, Kalyan Krishna

Abstract

In implementation of techniques for three-dimensional reconstructions based on Gaussian primitives, a computing device implements a reconstruction system to receive a first digital image depicting an object from a first angle and a second digital image depicting the object from a second angle. The reconstruction system segments the first digital image and the second digital image into patches. The reconstruction system then generates, using a machine learning model, three-dimensional Gaussian primitives that predict parameters of points of the object in a three-dimensional space that correspond on a per-pixel basis to pixels of the patches. The reconstruction system then forms a three-dimensional reconstruction of the object for display in a user interface by merging the three-dimensional Gaussian primitives.

IPC Classes

  • G06T 17/10 - Volume description, e.g. cylinders, cubes or using CSG [Constructive Solid Geometry]
  • G06T 7/11 - Region-based segmentation
  • G06T 7/55 - Depth or shape recovery from multiple images

48.

UPSCALING AI-GENERATED DIGITAL CONTENT WITHIN DIGITAL IMAGES VIA TILE-BASED SUPER RESOLUTION

      
Application Number 18646543
Status Pending
Filing Date 2024-04-25
First Publication Date 2025-10-30
Owner Adobe Inc. (USA)
Inventor
  • Barnes, Connelly
  • Lin, Zhe
  • Liu, Xiaoyang
  • Amirghodsi, Sohrab
  • Liu, Qing

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that upscale AI-generated digital content via tile-based super resolution. For instance, in one or more embodiments, the disclosed systems determine a first set of tiles from a digital image having a set of pixels to be replaced with a generated content portion. The disclosed systems further determine a second set of tiles from a first modified digital image that corresponds to the digital image and includes the generated content portion at a first resolution. Based on the first set of tiles and the second set of tiles, the disclosed systems use a super resolution neural network to generate a second modified digital image that corresponds to the digital image and includes the generated content portion at a second resolution that is higher than the first resolution.
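
The tiling step can be sketched as covering the generated-content region with fixed-size, overlapping tiles for a super-resolution network to process one at a time; the 512-pixel tile, 64-pixel overlap, and function name are assumed values:

```python
# Illustrative sketch of tile selection for tile-based super resolution.

def tiles_covering(region, image_size, tile=512, overlap=64):
    """Return fixed-size tiles that cover `region`, overlapping by `overlap`
    pixels and clamped so every tile stays inside the image."""
    x0, y0, x1, y1 = region
    w, h = image_size
    step = tile - overlap
    xs = range(max(0, x0 - overlap), min(x1, w), step)
    ys = range(max(0, y0 - overlap), min(y1, h), step)
    out = []
    for ty in ys:
        for tx in xs:
            bx = min(tx, w - tile)   # clamp to the right/bottom edges
            by = min(ty, h - tile)
            out.append((bx, by, bx + tile, by + tile))
    return out

ts = tiles_covering((900, 900, 1400, 1200), (2048, 2048))
```

The overlap exists so that neighboring upscaled tiles can be blended, hiding seams at tile boundaries when the results are stitched back into the full image.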

IPC Classes

  • G06T 3/4053 - Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
  • G06T 3/4046 - Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks

49.

Interactive Network for Selecting, Ranking, Summarizing, and Exploring Data Insights

      
Application Number 18649468
Status Pending
Filing Date 2024-04-29
First Publication Date 2025-10-30
Owner Adobe Inc. (USA)
Inventor
  • Hoffswell, Jane Elizabeth
  • Zhang, Wei
  • Soares Bursztyn, Victor
  • Guo, Shunan
  • Bhutani, Prithvi
  • Martinez, Jesse
  • Koh, Eunyee
  • Trivedi, Abhisek

Abstract

Insight summary and prompt generation techniques are described. In one or more examples, a plurality of insights is generated from data extracted from digital content. A network representation is produced having a plurality of nodes based on the plurality of insights and a plurality of connections between corresponding insights. A selection is received of a subset of nodes from the plurality of nodes. A prompt is formed by grouping respective insights from the subset of nodes. An insight summary of the digital content is generated based on the prompt using generative artificial intelligence as implemented using one or more machine-learning models. The insight summary is then presented for output in a user interface.

IPC Classes

  • G06T 11/20 - Drawing from basic elements, e.g. lines or circles
  • G06F 40/40 - Processing or translation of natural language

50.

JOINT FRAMEWORK FOR OBJECT-CENTERED SHADOW DETECTION, REMOVAL, AND SYNTHESIS

      
Application Number 18651376
Status Pending
Filing Date 2024-04-30
First Publication Date 2025-10-30
Owner Adobe Inc. (USA)
Inventor
  • Wang, Tianyu
  • Kim, Soo Ye
  • Figueroa, Luis
  • Zheng, Haitian
  • Zhang, Jianming
  • Ding, Zhihong
  • Cohen, Scott
  • Lin, Zhe
  • Xiong, Wei

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that detect shadows, remove shadows, and synthesize shadows in a joint framework. In particular, the disclosed systems access an object mask of an object and a digital image depicting the object and a shadow of the object. Furthermore, the disclosed systems perform object-centered shadow detection and removal to generate a modified digital image without the shadow by utilizing a shadow analyzer model. Moreover, the disclosed systems receive a user interaction to manipulate an object and generate a modified shadow utilizing a shadow synthesis model, where the shadow synthesis model is conditioned on a shadow mask generated by the shadow analyzer model.

IPC Classes

  • G06T 11/40 - Filling a planar surface by adding surface attributes, e.g. colour or texture
  • G06T 5/60 - Image enhancement or restoration using machine learning, e.g. neural networks
  • G06T 5/77 - Retouching; Inpainting; Scratch removal

51.

Cross-lingual meta-transfer learning adaptation to natural language understanding

      
Application Number 17655395
Grant Number 12455911
Status In Force
Filing Date 2022-03-18
First Publication Date 2025-10-28
Grant Date 2025-10-28
Owner ADOBE INC. (USA)
Inventor
  • M'Hamdi, Meryem
  • Kim, Doo Soon
  • Dernoncourt, Franck
  • Bui, Trung Huu

Abstract

Systems and methods for natural language processing are described. Embodiments of the present disclosure identify a task set including a plurality of pseudo tasks, wherein each of the plurality of pseudo tasks includes a support set corresponding to a first natural language processing (NLP) task and a query set corresponding to a second NLP task; update a machine learning model in an inner loop based on the support set; update the machine learning model in an outer loop based on the query set; and perform the second NLP task using the machine learning model.

IPC Classes

52.

Type font

      
Application Number 29935942
Grant Number D1100037
Status In Force
Filing Date 2024-04-04
First Publication Date 2025-10-28
Grant Date 2025-10-28
Owner Adobe Inc. (USA)
Inventor Slimbach, Robert Joseph

53.

REMOVING OBJECTS AT IMAGE CAPTURE TIME

      
Application Number 19255350
Status Pending
Filing Date 2025-06-30
First Publication Date 2025-10-23
Owner Adobe Inc. (USA)
Inventor
  • Shukla, Sankalp
  • Gupta, Angad Kumar
  • Gupta, Sourabh

Abstract

The present disclosure relates to systems, non-transitory computer-readable media, and methods for removing objects from an image stream at capture time of a digital image. For example, the disclosed system contemporaneously detects and segments objects from a digital image stream being previewed in a camera viewfinder graphical user interface of a client device. The disclosed system removes selected objects from the image stream and fills a hole left by the removed object with a content aware fill. Moreover, the disclosed system displays the image stream with the removed object and content fill as the image stream is previewed by a user prior to capturing a digital image from the image stream.

IPC Classes

  • H04N 5/272 - Means for inserting a foreground image in a background image, i.e. inlay, outlay
  • H04N 5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects
  • H04N 23/61 - Control of cameras or camera modules based on recognised objects
  • H04N 23/63 - Control of cameras or camera modules by using electronic viewfinders

54.

CUSTOMIZATION ASSISTANT FOR TEXT-TO-IMAGE GENERATION

      
Application Number 18637914
Status Pending
Filing Date 2024-04-17
First Publication Date 2025-10-23
Owner ADOBE INC. (USA)
Inventor
  • Zhou, Yufan
  • Zhang, Ruiyi
  • Gu, Jiuxiang
  • Sun, Tong

Abstract

A method, apparatus, non-transitory computer readable medium, and system for image processing include obtaining an input image and a text prompt including an image modification request, generating a text response based on the input image and the text prompt, where the text response describes a modification to the input image corresponding to the image modification request, and generating a synthetic image based on the input image and an output embedding of a language generation model, where the synthetic image depicts the modification to the input image.

IPC Classes

55.

ONE-STEP DIFFUSION WITH DISTRIBUTION MATCHING DISTILLATION

      
Application Number 18639301
Status Pending
Filing Date 2024-04-18
First Publication Date 2025-10-23
Owner ADOBE INC. (USA)
Inventor
  • Yin, Tianwei
  • Gharbi, Michaël
  • Zhang, Richard
  • Shechtman, Elya
  • Park, Taesung

Abstract

A method, apparatus, non-transitory computer readable medium, and system for image generation include obtaining a text prompt and a noise input, and then generating a synthetic image based on the text prompt and the noise input by performing a single pass with an image generation model. The image generation model is trained based on a multi-term loss comprising a positive term based on an output of a pre-trained model, and a negative term based on an output of a jointly-trained model.

IPC Classes

  • G06T 11/00 - 2D [Two Dimensional] image generation
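The positive/negative two-term loss in the abstract above can be pictured in one dimension, where both score models are Gaussians in closed form. This is only a toy illustration of distribution matching, not the patented training recipe: the "positive" term comes from a frozen pre-trained score model and the "negative" term from a model that tracks the generator.

```python
import random

# Toy distribution-matching sketch: the generator is a single pass
# x = theta + noise, and its update is the fake score minus the real score.
MU_REAL = 2.0                      # the frozen "pre-trained" model is N(2, 1)

def score_real(x):                 # score of N(MU_REAL, 1): d/dx log p(x)
    return MU_REAL - x

def make_score_fake(theta):        # the jointly-trained model tracks theta
    return lambda x: theta - x

random.seed(0)
theta, lr = -3.0, 0.1
for _ in range(300):
    noise = random.gauss(0.0, 1.0)
    x = theta + noise                        # single-pass generation
    s_fake = make_score_fake(theta)
    grad = s_fake(x) - score_real(x)         # negative term minus positive term
    theta -= lr * grad
print(round(theta, 1))  # -> 2.0: the generator's mean matches the teacher's
```

Here the gradient collapses to theta - MU_REAL, so the one-step generator converges to the pre-trained model's distribution, which is the intuition behind the multi-term loss.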

56.

GENERATING DIGITAL IMAGES UTILIZING A DIFFUSION-BASED NETWORK CONDITIONED ON LIGHTING-AWARE FEATURE REPRESENTATIONS

      
Application Number 18640429
Status Pending
Filing Date 2024-04-19
First Publication Date 2025-10-23
Owner Adobe Inc. (USA)
Inventor
  • Ren, Mengwei
  • Zhang, He
  • Xiong, Wei
  • Shu, Zhixin
  • Yoon, Jae Shin
  • Zhang, Jianming

Abstract

Methods, systems, and non-transitory computer readable storage media are disclosed for generating digital images with a diffusion-based generative neural network conditioned on background-extracted lighting features. The disclosed system determines, in response to a request to generate a digital image, a target background image for inserting a foreground object into the target background image. The disclosed system generates, from the target background image and utilizing a lighting conditioning neural network, a lighting feature representation indicating one or more lighting parameters of the target background image. Additionally, the disclosed system generates, utilizing a diffusion-based generative neural network conditioned on the lighting feature representation, the digital image including the foreground object inserted into the target background image based on a composite image comprising the foreground object and the target background image with a foreground mask corresponding to the foreground object.

IPC Classes

  • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
  • G06T 5/77 - Retouching; Inpainting; Scratch removal
  • G06V 10/56 - Extraction of image or video features relating to colour
  • G06V 10/60 - Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
  • G06V 10/771 - Feature selection, e.g. selecting representative features from a multi-dimensional feature space
  • H04N 5/272 - Means for inserting a foreground image in a background image, i.e. inlay, outlay

57.

Techniques for Triangle-level Rejection Sampling in Three-dimensional Object Meshes

      
Application Number 18643369
Status Pending
Filing Date 2024-04-23
First Publication Date 2025-10-23
Owner Adobe Inc. (USA)
Inventor
  • Boubekeur, Tamy
  • Thonat, Theo
  • Schertzer, Jérémie

Abstract

A graphics generation computing device applies triangle-level rejection sampling to generate a set of surface mesh point samples. A highly parallelized processor included in the graphics generation computing device generates a triangle-level sampling array that includes triangle-level sampling data for each triangle included in a 3D object mesh. Based on the data in the triangle-level sampling array, the highly parallelized processor determines a quantity of point samples in each triangle. The highly parallelized processor calculates, for each point sample, point sample location data that indicates a location of the point sample on a triangle. The highly parallelized processor modifies a set of point samples to include the location data. In some cases, the set of point samples is used to generate digital fibers or other structure data objects at the point sample locations indicated by the set of point samples.

IPC Classes

  • G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation
  • G06T 1/20 - Processor architectures; Processor configuration, e.g. pipelining
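The per-triangle sampling described in the abstract above can be illustrated with a textbook rejection-sampling loop (sequential Python rather than the patent's highly parallelized scheme, and with a made-up two-triangle mesh): pick a candidate triangle uniformly, accept it with probability proportional to its area, then draw a uniform barycentric point inside it.

```python
import random

# Uniform surface sampling over a 2D triangle mesh via triangle-level
# rejection sampling. Generic textbook sketch, not the patented method.

def tri_area(a, b, c):
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    return abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) / 2.0

def sample_points(tris, n, rng=random):
    areas = [tri_area(*t) for t in tris]
    max_area = max(areas)
    points = []
    while len(points) < n:
        i = rng.randrange(len(tris))
        if rng.random() * max_area > areas[i]:
            continue                        # reject: triangle too small
        a, b, c = tris[i]
        u, v = rng.random(), rng.random()
        if u + v > 1.0:                     # fold the unit square sample
            u, v = 1.0 - u, 1.0 - v        # back into the triangle
        w = 1.0 - u - v
        points.append((w * a[0] + u * b[0] + v * c[0],
                       w * a[1] + u * b[1] + v * c[1]))
    return points

# Two triangles forming the unit square; samples must stay inside it.
random.seed(7)
tris = [((0, 0), (1, 0), (1, 1)), ((0, 0), (1, 1), (0, 1))]
pts = sample_points(tris, 1000)
print(len(pts), all(0 <= x <= 1 and 0 <= y <= 1 for x, y in pts))  # -> 1000 True
```

The accepted point locations are exactly the kind of "point sample location data" the abstract describes attaching to the sample set before generating fibers or other structures at those locations.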

58.

MASKED LATENT DECODER FOR IMAGE INPAINTING

      
Application Number 18957817
Status Pending
Filing Date 2024-11-24
First Publication Date 2025-10-23
Owner ADOBE INC. (USA)
Inventor
  • Zheng, Haitian
  • Zhang, Zhifei
  • Lin, Zhe
  • Zhou, Yuqian

Abstract

A method, apparatus, non-transitory computer readable medium, and system for image processing include obtaining an input image and an input mask, where the input image depicts a scene and the input mask indicates an inpainting region of the input image. A latent code is generated, using a generator network of an image generation model, based on the input image and the input mask. The latent code includes synthesized content in the inpainting region. A synthetic image is generated, using a decoder network of the image generation model, based on the latent code and the input image. The synthetic image depicts the scene from the input image outside the inpainting region and includes the synthesized content within the inpainting region, and the synthetic image comprises a seamless transition across a boundary of the inpainting region.

IPC Classes

  • G06T 5/77 - Retouching; Inpainting; Scratch removal
  • G06T 5/60 - Image enhancement or restoration using machine learning, e.g. neural networks

59.

CONTRIBUTION DATA CALIBRATION

      
Application Number 18637718
Status Pending
Filing Date 2024-04-17
First Publication Date 2025-10-23
Owner ADOBE INC. (USA)
Inventor
  • Huang, Bei
  • Yuan, Yuan
  • Xu, Yiming
  • Yuan, Qilong
  • Xu, Jin
  • Wang, Lijing
  • Wang, Bowen
  • Li, Yancheng
  • Yan, Zhenyu

Abstract

A method, non-transitory computer readable medium, apparatus, and system for data processing include obtaining, by a multi-touch attribution model, individual-level user interaction data from a digital content channel, and computing, using the multi-touch attribution model, channel contribution data based on the individual-level user interaction data. Some embodiments include training, using a training component, an aggregate attribution model based on the channel contribution data. Some embodiments include generating, using a calibration component, an individual channel contribution value for the digital content channel based on the channel contribution data and the aggregate attribution model.

IPC Classes

60.

OBJECT-CENTRIC CONTACT MODELING AND HAND GRASP GENERATION

      
Application Number 18638487
Status Pending
Filing Date 2024-04-17
First Publication Date 2025-10-23
Owner Adobe Inc. (USA)
Inventor
  • Zhou, Yang
  • Liu, Shaowei
  • Yang, Jimei

Abstract

In some embodiments, a computing system receives a representation of an object from a client device. The computing system generates a contact representation for hand-object interaction based on the representation of the object. The object-centric contact representation includes a contact map indicating contact points on the representation of the object, a hand part map indicating hand parts contacting the object, and a direction map comprising contact directions of the hand parts contacting the object. The computing system generates a hand grasp representation with respect to the object based on the contact representation using a model-based optimization algorithm. The computing system provides the hand grasp representation to the client device.

IPC Classes

  • G06F 30/23 - Design optimisation, verification or simulation using finite element methods [FEM] or finite difference methods [FDM]
  • G06T 17/10 - Volume description, e.g. cylinders, cubes or using CSG [Constructive Solid Geometry]

61.

Relightable Scene Reconstructions Using Radiance Guided Material Extraction

      
Application Number 18639346
Status Pending
Filing Date 2024-04-18
First Publication Date 2025-10-23
Owner Adobe Inc. (USA)
Inventor
  • Michel, Élie Louis Simon
  • Philip, Julien Olivier Victor
  • Gomez Mijangos, Diego Andre
  • Kaiser, Adrien Michel Paul

Abstract

Techniques for relightable scene reconstructions using radiance guided material extraction are described to accurately render 3D scenes under different lighting conditions and perspectives than original source images from which the scenes are constructed. In an example, a processing device is operable to receive a plurality of digital images that depict a scene from multiple perspectives, determine a view-independent radiance of the scene based on the plurality of digital images, and determine a view-dependent radiance of the scene based on the plurality of digital images. The processing device is further operable to determine a set of lighting conditions associated with an input perspective, generate a synthesized image having a reconstruction of the scene based on the set of lighting conditions using the view-independent radiance and the view-dependent radiance, and output the synthesized image.

IPC Classes

62.

GENERATING AND MODIFYING DIGITAL IMAGE DATABASES THROUGH FAIRNESS DEDUPLICATION

      
Application Number 18639568
Status Pending
Filing Date 2024-04-18
First Publication Date 2025-10-23
Owner Adobe Inc. (USA)
Inventor
  • Slyman, Eric
  • Kafle, Kushal
  • Cohen, Scott

Abstract

The present disclosure relates to systems, non-transitory computer-readable media, and methods for generating and modifying databases using a fairness deduplication algorithm. In particular, in one or more embodiments, the disclosed systems generate, within an embedding space, semantic embeddings from a plurality of digital images stored in a database. In some embodiments, the disclosed systems identify, from among the semantic embeddings in the embedding space, a preservable embedding according to a preservation prototype indicating a semantic concept to preserve within the database. In one or more embodiments, the disclosed systems generate a modified database by pruning one or more digital images corresponding to semantic embeddings other than the preservable embedding from the database.

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
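The prune-unless-preservable logic in the abstract above can be sketched with cosine similarity over toy 2-D embeddings. The thresholds and greedy pass here are assumptions for illustration, not Adobe's algorithm: near-duplicate embeddings are dropped, except those close to the preservation prototype, which are always kept.

```python
import math

# Fairness-style deduplication sketch: prune near-duplicates, but never
# prune embeddings that match the preservation prototype.

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.dist(u, (0,) * len(u)) * math.dist(v, (0,) * len(v)))

def fair_dedup(embeddings, prototype, dup_thresh=0.98, preserve_thresh=0.9):
    kept = []
    for e in embeddings:
        if cos(e, prototype) >= preserve_thresh:
            kept.append(e)                  # preservable: never pruned
        elif all(cos(e, k) < dup_thresh for k in kept):
            kept.append(e)                  # novel enough to keep
    return kept

proto = (1.0, 0.0)                          # semantic concept to preserve
embs = [(0.99, 0.1), (1.0, 0.05),           # both near the prototype: kept
        (0.0, 1.0), (0.01, 1.0),            # near-duplicates: one survives
        (0.5, 0.5)]                         # distinct: kept
print(len(fair_dedup(embs, proto)))  # -> 4
```

Note that the two prototype-matching embeddings are themselves nearly duplicates, yet both survive, which is the point of the preservation rule.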

63.

GENERATING VISUALLY AWARE DESIGN LAYOUTS USING A MULTI-DOMAIN DIFFUSION NEURAL NETWORK

      
Application Number 18641137
Status Pending
Filing Date 2024-04-19
First Publication Date 2025-10-23
Owner Adobe Inc. (USA)
Inventor
  • Wang, Zhaowen
  • Zhao, Nanxuan
  • Yang, Jimei
  • Liu, Difan
  • Shabani, Mohammad Amin

Abstract

The present disclosure relates to systems, methods, and non-transitory computer readable media that generate layouts for digital designs from image elements via multi-domain diffusion. For instance, in some embodiments, the disclosed systems receive, from a client device, a plurality of image elements for generating a digital design. The disclosed systems generate, using an encoder of a multi-domain diffusion neural network, embeddings representing visual characteristics and bounding box characteristics of the plurality of image elements. The disclosed systems further generate, using the multi-domain diffusion neural network, a layout for the digital design from the visual characteristics and bounding box characteristics of the embeddings. Additionally, the disclosed systems provide the layout for display on the client device.

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text

64.

GENERATIVE ARTIFICIAL INTELLIGENCE VISUAL EFFECTS

      
Application Number 18677874
Status Pending
Filing Date 2024-05-30
First Publication Date 2025-10-23
Owner Adobe Inc. (USA)
Inventor
  • Agarwal, Rishav
  • Iyer, Siddharth Srinivasan
  • Yadav, Shubbham
  • Jain, Sanyam
  • Brdiczka, Oliver
  • Katakol, Sudeep Siddheshwar
  • Bourgin, David Davenport
  • Darabi, Aliakbar

Abstract

Generative artificial intelligence visual effect techniques are described. A prompt, for example, is received. The prompt includes text specifying a visual effect and text specifying a shape. A mask is formed defining a portion of digital content based on an object selected from digital content. The visual effect is generated using generative artificial intelligence by one or more machine-learning models based on the text specifying the visual effect, the text specifying the shape, and the mask. The digital content is presented as having the visual effect applied to the portion of the digital content for display in a user interface.

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06T 11/00 - 2D [Two Dimensional] image generation

65.

PROXY-GUIDED IMAGE EDITING

      
Application Number 18956284
Status Pending
Filing Date 2024-11-22
First Publication Date 2025-10-23
Owner ADOBE INC. (USA)
Inventor
  • Zhou, Yuqian
  • Singh, Krishna Kumar
  • Lin, Zhe

Abstract

A method, apparatus, non-transitory computer readable medium, and system for image processing include obtaining an input image and an input mask, wherein the input mask indicates a region of the input image to be modified and generating, using a first image generation model, an intermediate result based on the input image and the input mask, wherein the intermediate result modifies the region of the input image indicated by the input mask. A second image generation model generates a synthetic image based on the input image and the intermediate result, wherein the synthetic image depicts the input image with content from the modified region at a higher level of detail than the intermediate result.

IPC Classes

  • G06T 5/77 - Retouching; Inpainting; Scratch removal
  • G06T 5/70 - Denoising; Smoothing
  • G06T 7/11 - Region-based segmentation
  • G06T 11/00 - 2D [Two Dimensional] image generation
  • G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting

66.

ACTIVELY-LEARNED CONTEXT MODELING FOR IMAGE COMPRESSION

      
Application Number 19044293
Status Pending
Filing Date 2025-02-03
First Publication Date 2025-10-16
Owner Adobe Inc. (USA)
Inventor
  • Wu, Gang
  • Li, Yang
  • Petrangeli, Stefano
  • Swaminathan, Viswanathan
  • Wang, Haoliang
  • Rossi, Ryan A.
  • Song, Zhao

Abstract

Embodiments described herein provide methods and systems for facilitating actively-learned context modeling. In one embodiment, a subset of data is selected from a training dataset corresponding with an image to be compressed, the subset of data corresponding with a subset of data of pixels of the image. A context model is generated using the selected subset of data. The context model is generally in the form of a decision tree having a set of leaf nodes. Entropy values corresponding with each leaf node of the set of leaf nodes are determined. Each entropy value indicates an extent of diversity of context associated with the corresponding leaf node. Additional data from the training dataset is selected based on the entropy values corresponding with the leaf nodes. The updated subset of data is used to generate an updated context model for use in performing compression of the image.

IPC Classes

  • H04N 19/96 - Tree coding, e.g. quad-tree coding
  • G06N 20/00 - Machine learning
  • H04N 19/182 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
  • H04N 19/184 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being bits, e.g. of the compressed video stream
  • H04N 19/50 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
  • H04N 19/91 - Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
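The entropy test that drives the active-learning loop in the abstract above can be shown on toy leaves (a generic illustration of the idea, not the patented procedure): each decision-tree leaf holds the symbols observed in its context, and the leaves whose symbol distributions have the highest Shannon entropy are the ones selected for additional training data.

```python
import math

# Entropy-guided active selection: rank context-model leaves by the
# diversity (Shannon entropy) of the symbols they have observed.

def leaf_entropy(symbols):
    n = len(symbols)
    counts = {s: symbols.count(s) for s in set(symbols)}
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def pick_leaves_to_refine(leaves, budget):
    # leaves: {leaf_id: observed symbols}; refine the most diverse leaves
    ranked = sorted(leaves, key=lambda k: leaf_entropy(leaves[k]), reverse=True)
    return ranked[:budget]

leaves = {
    "smooth": [0, 0, 0, 0, 0, 0, 0, 1],     # near-deterministic: ~0.54 bits
    "edge":   [0, 1, 2, 3, 0, 1, 2, 3],     # four symbols: 2 bits
    "noise":  [0, 1, 0, 1, 1, 0, 1, 0],     # two symbols: 1 bit
}
print(pick_leaves_to_refine(leaves, 2))  # -> ['edge', 'noise']
```

High-entropy leaves are precisely where the current context model predicts pixels poorly, so spending the labeling budget there improves compression fastest.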

67.

COLLABORATION CONTROLS FOR DOCUMENT SECTIONS

      
Application Number 19192026
Status Pending
Filing Date 2025-04-28
First Publication Date 2025-10-16
Owner Adobe Inc. (USA)
Inventor
  • Bansal, Ayush
  • Sinha, Deep

Abstract

Methods and systems are provided for facilitating document collaboration in accordance with collaboration controls. In embodiments, an indication of a collaboration control for a collaborator of a document is obtained. The collaboration control generally indicates an edit permission for a document section of the document in relation to the collaborator. Thereafter, a set of collaboration control data for the document is generated. In embodiments, the set of collaboration control data includes the collaboration control indicating the edit permission for the document section of the document in relation to the collaborator. Based on an input (e.g., edit) by the collaborator to the document section of the document, a determination is made, using the set of collaboration control data, as to whether to enable an edit to the document section of the document.

IPC Classes

  • H04L 65/401 - Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference
  • G06F 40/166 - Editing, e.g. inserting or deleting
  • H04L 9/40 - Network security protocols

68.

GENERATING SCALABLE VECTOR TEXT EFFECTS

      
Application Number 18631521
Status Pending
Filing Date 2024-04-10
First Publication Date 2025-10-16
Owner ADOBE INC. (USA)
Inventor
  • Ungureanu-Contes, Adrian-Stefan
  • Lupascu, Marian
  • Lungu-Stan, Vlad-Constantin
  • Mironică, Ionuț

Abstract

A method, apparatus, non-transitory computer readable medium, and system for image processing include obtaining a pattern prompt and a text image, where the pattern prompt describes a visual pattern and the text image depicts text, generating a pattern image based on the pattern prompt, where the pattern image depicts the visual pattern, and generating a patterned text image based on the pattern image and the pattern prompt.

IPC Classes

  • G06T 11/00 - 2D [Two Dimensional] image generation

69.

INTERMEDIATE NOISE RETRIEVAL FOR IMAGE GENERATION

      
Application Number 18637024
Status Pending
Filing Date 2024-04-16
First Publication Date 2025-10-16
Owner ADOBE INC. (USA)
Inventor
  • Agarwal, Shubham
  • Mitra, Subrata
  • Karanam, Srikrishna
  • Mukherjee, Koyel
  • Saini, Shiv Kumar

Abstract

A method, apparatus, non-transitory computer readable medium, and system for image processing include obtaining an input prompt and retrieving an intermediate noise state based on a similarity between the input prompt and a candidate prompt corresponding to the intermediate noise state. An image generation model generates a synthetic image based on the input prompt and the intermediate noise state.

IPC Classes

  • G06T 11/00 - 2D [Two Dimensional] image generation
  • G06T 1/60 - Memory management
  • G06T 5/60 - Image enhancement or restoration using machine learning, e.g. neural networks
  • G06T 5/70 - Denoising; Smoothing
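The retrieval step in the abstract above can be pictured with a toy cache. Jaccard word overlap stands in for whatever prompt-similarity measure the system actually uses, and the stored states are placeholder strings: given a new prompt, denoising can resume from the cached intermediate noise state of the most similar candidate prompt instead of starting from pure noise.

```python
# Prompt-similarity retrieval of a cached intermediate noise state
# (illustrative stand-in similarity and placeholder states).

def jaccard(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def retrieve_noise_state(prompt, cache, min_sim=0.3):
    best = max(cache, key=lambda p: jaccard(prompt, p))
    if jaccard(prompt, best) < min_sim:
        return None                       # no close match: start from scratch
    return cache[best]                    # resume denoising from this state

cache = {
    "a red sports car on a highway": "noise_state_A",
    "a bowl of fruit on a table": "noise_state_B",
}
print(retrieve_noise_state("a blue sports car on a highway", cache))
# -> noise_state_A
```

Resuming from a partially denoised state for a similar prompt is what saves the early diffusion steps, which is the memory/compute trade the G06T 1/60 class above hints at.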

70.

TEXTURE BASED CONSISTENCY FOR GENERATIVE AI ASSETS, EFFECTS AND ANIMATIONS

      
Application Number 18665130
Status Pending
Filing Date 2024-05-15
First Publication Date 2025-10-16
Owner ADOBE INC. (USA)
Inventor
  • Lungu-Stan, Vlad-Constantin
  • Ungureanu-Contes, Adrian-Stefan
  • Mironica, Ionuţ

Abstract

A method, apparatus, non-transitory computer readable medium, and system for image processing include obtaining an input texture image and a plurality of image masks, generating a plurality of image assets corresponding to the plurality of image masks based on the input texture image, and generating a combined asset including the plurality of image assets. The plurality of image assets have a consistent texture based on the input texture image.

IPC Classes

  • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
  • G06T 7/40 - Analysis of texture
  • G06T 13/80 - 2D animation, e.g. using sprites

71.

MULTI-MODAL RETRIEVAL USING AN INTERMEDIATE NOISE STATE

      
Application Number 18632414
Status Pending
Filing Date 2024-04-11
First Publication Date 2025-10-16
Owner ADOBE INC. (USA)
Inventor
  • Tanjim, Md Mehrab
  • Lu, Chen-Yi
  • Mahadik, Kanak
  • Rao, Anup Bandigadi

Abstract

A method, apparatus, non-transitory computer readable medium, and system for data processing include obtaining a text prompt and generating a first intermediate noise state based on the text prompt, retrieving a second intermediate noise state based on the text prompt and the first intermediate noise state, and generating a synthetic image based on the text prompt and the second intermediate noise state.

IPC Classes

72.

GENERATING HIERARCHICAL ENTITY SEGMENTATIONS UTILIZING SELF-SUPERVISED MACHINE LEARNING MODELS

      
Application Number 18632933
Status Pending
Filing Date 2024-04-11
First Publication Date 2025-10-16
Owner Adobe Inc. (USA)
Inventor
  • Gu, Jiuxiang
  • Kuen, Jason Wen Yong
  • Tan, Hao
  • Zhang, Ruiyi
  • Zhao, Handong
  • Nenkova, Ani
  • Sun, Tong
  • Cao, Shengcao

Abstract

The present disclosure relates to systems, non-transitory computer-readable media, and methods for hierarchical entity segmentation. In particular, in one or more embodiments, the disclosed systems receive a digital image comprising a plurality of object entities. In addition, in some embodiments, the disclosed systems generate, utilizing a segmentation model comprising parameters generated according to pseudo-labels indicating hierarchies of segmentation masks for a set of training digital images, a hierarchical segmentation indicating hierarchical relations of the plurality of object entities of the digital image. Moreover, in some embodiments, the disclosed systems generate, for the digital image, a segmentation map from the hierarchical segmentation of the plurality of object entities.

IPC Classes

  • G06T 7/12 - Edge-based segmentation
  • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
  • G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
  • G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations

73.

GENERATING DIGITAL CONTENT CONSISTENT WITH CONTEXT-SPECIFIC GUIDELINES UTILIZING PROMPT AUGMENTATION AND MODEL TUNING

      
Application Number 18634240
Status Pending
Filing Date 2024-04-12
First Publication Date 2025-10-16
Owner Adobe Inc. (USA)
Inventor
  • Sankar, Varsha
  • Venkitachalam, Shankar
  • Yadagiri, Meghanath Macha
  • Moosaei, Maryam
  • Pai, Deepak
  • Basu, Debraj Debashish

Abstract

The present disclosure is directed toward systems, methods, and non-transitory computer readable media that provide a contextual content generation system that trains and implements a unique machine learning architecture to generate context-specific digital content items based on a digital guideline document. In particular, the disclosed systems select a content generation method from among prompt engineering and/or updating one or more machine learning models to generate digital content. For example, the disclosed systems utilize machine learning models to extract key elements from a digital guideline document comprising context-specific guidelines for digital content. Further, the disclosed systems generate an augmented prompt comprising indications of key elements from the digital guideline document. In addition, the disclosed systems select a content generation method from among prompt engineering and/or updating machine learning models to generate the digital content item which incorporates digital content corresponding to the context-specific guidelines based on the augmented prompt.

IPC Classes

74.

ENHANCING LIGHT TEXT IN SCANNED DOCUMENTS WHILE PRESERVING DOCUMENT FIDELITY

      
Application Number 18931424
Status Pending
Filing Date 2024-10-30
First Publication Date 2025-10-16
Owner Adobe Inc. (USA)
Inventor
  • Mondal, Prasenjit
  • Soni, Sachin

Abstract

The present disclosure relates to systems, non-transitory computer-readable media, and methods that implement an image filter for enhancing light text and removing document shadows. In particular embodiments, the disclosed systems use a modified adaptive thresholding approach that relies on image gradients to efficiently guide the thresholding process. In addition, the disclosed systems use a machine-learning model to generate a document shadow map. The document shadow map can include text reflections. Accordingly, the disclosed systems remove text reflections from the document shadow map (e.g., by using an interpolated shadow intensity value of neighboring shadow map pixels). In turn, the disclosed systems use the document text mask and the document shadow map cleaned of text reflections to remove shadows from the digital image. Further, the disclosed systems enhance text in the shadow-removed digital image based on contrast stretching.

IPC Classes

  • G06T 5/80 - Geometric correction
  • G06T 5/40 - Image enhancement or restoration using histogram techniques
  • G06T 5/60 - Image enhancement or restoration using machine learning, e.g. neural networks
  • G06T 5/92 - Dynamic range modification of images or parts thereof based on global image properties
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
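The base thresholding step underlying the abstract above can be shown with plain local-mean adaptive thresholding (the patent's gradient guidance, shadow map, and contrast stretching are not reproduced here): a pixel is marked as text if it is darker than the mean of its neighborhood by more than a constant offset, which catches light strokes a single global threshold would miss.

```python
# Local-mean adaptive thresholding on a grayscale image stored as a 2D list.
# Illustrative baseline only; the patented method adds gradient guidance.

def adaptive_threshold(img, radius=1, offset=10):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ys = range(max(0, y - radius), min(h, y + radius + 1))
            xs = range(max(0, x - radius), min(w, x + radius + 1))
            vals = [img[j][i] for j in ys for i in xs]
            mean = sum(vals) / len(vals)
            out[y][x] = 1 if img[y][x] < mean - offset else 0
    return out

# A light stroke (value 150) on a brighter page (200): a global threshold at
# 128 would miss it entirely, but the local mean still separates it.
img = [[200, 200, 200, 200],
       [200, 150, 150, 200],
       [200, 200, 200, 200]]
print(adaptive_threshold(img))
# -> [[0, 0, 0, 0], [0, 1, 1, 0], [0, 0, 0, 0]]
```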

75.

STYLE KITS GENERATION AND CUSTOMIZATION

      
Application Number 18958842
Status Pending
Filing Date 2024-11-25
First Publication Date 2025-10-16
Owner ADOBE INC. (USA)
Inventor
  • Hurlburt, Kelly
  • Hopper, Brooke
  • Vuong, Minh-Anh
  • Tall, Tidjane

Abstract

A method, apparatus, non-transitory computer readable medium, and system for image processing include obtaining a style kit including a first image generation input indicating a first image attribute, a second image generation input indicating a second image attribute, and a selectability parameter indicating that the second image generation input is selectable. A third image generation input is received from a user based on the selectability parameter, wherein the third image generation input indicates a third image attribute different from the second image attribute of the second image generation input. An image generation model generates a synthetic image based on the style kit, the first image generation input, and the third image generation input, wherein the synthetic image has the first image attribute and the third image attribute.

IPC Classes

  • G06T 11/00 - 2D [Two Dimensional] image generation

76.

Using reinforcement learning to recommend data visualizations

      
Application Number 18668888
Grant Number 12443621
Status In Force
Filing Date 2024-05-20
First Publication Date 2025-10-14
Grant Date 2025-10-14
Owner ADOBE INC. (USA)
Inventor
  • Porwal, Vibhor
  • Mitra, Subrata
  • Agarwal, Shubham
  • Rossi, Ryan A
  • Ahmad, Ghazi Shazan
  • Doshi, Manav Ketan
  • Kumar Paila, Syam Manoj

Abstract

Methods and systems are provided for using reinforcement learning to recommend data visualizations. In embodiments described herein, statistical features for each sample of corresponding samples of a dataset are determined by applying each sample of the dataset to a data visualization recommendation model. The computational cost of each of the statistical features for each of the samples is determined via a regression model. Recommended statistical features are determined by sequentially applying each sample to a reinforcement learning model with a computational budget and with the corresponding computational costs of the statistical features of each sample. A data visualization is then displayed that is generated by applying the dataset and the recommended statistical features to the data visualization recommendation model.

IPC Classes

  • G06F 16/26 - Visual data mining; Browsing structured data

77.

SELF-SUPERVISED AUDIO-VISUAL LEARNING FOR CORRELATING MUSIC AND VIDEO

      
Application Number 19246631
Status Pending
Filing Date 2025-06-23
First Publication Date 2025-10-09
Owner Adobe Inc. (USA)
Inventor
  • Salamon, Justin
  • Russell, Bryan
  • Suris Coll-Vinent, Didac

Abstract

Embodiments are disclosed for correlating video sequences and audio sequences by a media recommendation system using a trained encoder network. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving a training input including a media sequence, including a video sequence paired with an audio sequence, segmenting the media sequence into a set of video sequence segments and a set of audio sequence segments, extracting visual features for each video sequence segment and audio features for each audio sequence segment, generating, by transformer networks, contextualized visual features from the extracted visual features and contextualized audio features from the extracted audio features, the transformer networks including a visual transformer and an audio transformer, generating predicted video and audio sequence segment pairings based on the contextualized visual and audio features, and training the visual transformer and the audio transformer to generate the contextualized visual and audio features.

IPC Classes

  • G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
  • G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
  • G06V 20/40 - Scenes; Scene-specific elements in video content
  • G10L 25/03 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the type of extracted parameters
  • G10L 25/57 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination for processing of video signals

78.

GROUP PORTRAIT PHOTO EDITING

      
Application Number 18625461
Status Pending
Filing Date 2024-04-03
First Publication Date 2025-10-09
Owner ADOBE INC. (USA)
Inventor
  • Jiang, Yuming
  • Zhao, Nanxuan
  • Liu, Qing
  • Singh, Krishna Kumar

Abstract

A method, apparatus, non-transitory computer readable medium, and system for image generation includes obtaining an input image depicting an entity and a skeleton map depicting a pose of the entity and performing a cross-attention mechanism between image features of the input image and entity features representing the pose to obtain modified image features. An output image is generated based on the modified image features that depicts the entity with the pose.
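
The cross-attention mechanism between image features (queries) and pose features (keys/values) can be illustrated with a minimal scaled dot-product sketch. The feature values and function names are hypothetical; real implementations use learned projection matrices and multiple heads, omitted here.

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def cross_attention(image_feats, pose_feats):
    """Each image feature (query) attends over pose features (keys/values)."""
    out = []
    for q in image_feats:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(len(q))
                  for k in pose_feats]
        w = softmax(scores)
        # Weighted sum of the pose features produces the modified image feature.
        out.append([sum(wi * v[d] for wi, v in zip(w, pose_feats))
                    for d in range(len(pose_feats[0]))])
    return out

# Toy features: each image feature attends most to the aligned pose feature.
image = [[1.0, 0.0], [0.0, 1.0]]
pose = [[2.0, 0.0], [0.0, 2.0]]
modified = cross_attention(image, pose)
```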

IPC Classes


  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06T 5/77 - Retouching; Inpainting; Scratch removal

79.

GENERATIVE ARTIFICIAL INTELLIGENCE (AI) CONTENT STRATEGY

      
Application Number 18625484
Status Pending
Filing Date 2024-04-03
First Publication Date 2025-10-09
Owner Adobe Inc. (USA)
Inventor
  • Xiao, Chang
  • Courtois, Zeus
  • Surange, Sonali
  • Hanson-Regalado, Jacob Benjamin
  • Koh, Eunyee
  • Miller, Gavin Stuart Peter

Abstract

Generative artificial intelligence (AI) content strategy techniques are described. In one or more examples, a content brief is received describing a goal to be achieved in controlling digital content output. Content brief data is extracted from the content brief and a content strategy is generated based on the content brief data using generative artificial intelligence implemented using one or more machine-learning models.

IPC Classes

  • G06Q 30/0207 - Discounts or incentives, e.g. coupons or rebates
  • G06Q 30/0204 - Market segmentation
  • G06Q 30/0226 - Incentive systems for frequent usage, e.g. frequent flyer miles programs or point systems

80.

INJECTIVE 3D DEFORMATIONS BASED ON 2D MESH DEFORMATIONS

      
Application Number 18630007
Status Pending
Filing Date 2024-04-09
First Publication Date 2025-10-09
Owner Adobe Inc. (USA)
Inventor
  • Sun, Bo
  • Groueix, Thibault
  • Aigerman, Noam

Abstract

Aspects and features of the present disclosure relate to providing injective three-dimensional (3D) deformations based on two-dimensional (2D) mesh deformations. For example, a method involves defining at least one 2D mesh deformation based on a designated position of an object represented by an input neural radiance field (NeRF). The method also involves applying the 2D mesh deformation(s) to a 3D piecewise-linear map that operates over a plane and preserves a normal direction to produce prismatic maps. The method further involves composing a 3D deformation for the object from layers defined by the prismatic maps, and parameterizing the 3D piecewise-linear map. The method additionally involves storing or rendering, using the 3D piecewise-linear map, a deformed NeRF injectively representing the object in the designated position. Aspects also include computer systems, apparatus, and computer programs configured to perform the method.

IPC Classes

  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
  • G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation

81.

Enhancing artificial intelligence responses with contextual usage insights

      
Application Number 18626551
Grant Number 12468746
Status In Force
Filing Date 2024-04-04
First Publication Date 2025-10-09
Grant Date 2025-11-11
Owner ADOBE INC. (USA)
Inventor
  • Maharaj, Akash Vivek
  • Muppala, Vaishnavi
  • Vaithyanathan, Shivakumar
  • Garg, Manas
  • Russell, Kenneth George
  • Dasgupta, Ishita
  • Rao, Anup Bandigadi
  • Pejcic, Aleksander

Abstract

Some aspects relate to technologies for an artificial intelligence (AI) system that, among other things, enhances responses to concepts questions for an application with contextual usage insights. In accordance with some aspects, a user query is determined to comprise a concepts question regarding an application. Responsive to determining the user query comprises the concepts question, documentation regarding the application relevant to the user query is identified. A generative model generates text for a response to the concepts question using the documentation regarding the application. Additionally, a determination is made to add contextual usage insights to the response. Responsive to determining to add contextual usage insights to the response, usage data relevant to the user query and/or the response is retrieved. The generative model generates text for a final response using the response and the usage data, and the final response is provided to a user device for presentation.

IPC Classes

82.

GENERATING CUSTOMIZED ARROW HEADS UTILIZING DEEP LEARNING

      
Application Number 18628250
Status Pending
Filing Date 2024-04-05
First Publication Date 2025-10-09
Owner Adobe Inc. (USA)
Inventor
  • Gehlaut, Tarun
  • Jain, Stuti

Abstract

The present disclosure is directed toward systems, methods, and non-transitory computer readable media that provide a digital design interface for intuitively creating custom arrows that demonstrate both visual consistency and inherent directionality within vector-based design applications. In particular, in one or more implementations, the disclosed systems receive a request to create a custom arrow from a digital object and a path segment. In addition, the disclosed systems detect that the digital object is within a threshold distance of the path segment and combine the digital object with the path segment to create a custom arrow object. Specifically, the disclosed systems utilize a bilateral segmentation machine-learning model to segment the digital object and a symmetry axis detection model to determine an axis of symmetry of the digital object. Moreover, the disclosed systems attach the digital object to an endpoint of the path segment at the axis of symmetry.
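
The geometric part of the attachment step, snapping a head shape to a path endpoint once it lies within a threshold distance, might look like the sketch below. The centroid-based "symmetry axis" and every name here are crude stand-ins for the bilateral segmentation and symmetry-axis-detection models in the abstract.

```python
import math

def within_threshold(point_a, point_b, threshold):
    """Is the dragged object close enough to the path endpoint to snap?"""
    return math.dist(point_a, point_b) <= threshold

def attach_arrow_head(polygon, endpoint, threshold=10.0):
    """Translate the head so its (assumed) symmetry point meets the endpoint."""
    cx = sum(x for x, _ in polygon) / len(polygon)  # stand-in symmetry axis x
    cy = sum(y for _, y in polygon) / len(polygon)
    if not within_threshold((cx, cy), endpoint, threshold):
        return None  # too far from the path segment; no custom arrow created
    dx, dy = endpoint[0] - cx, endpoint[1] - cy
    return [(x + dx, y + dy) for x, y in polygon]

# Triangle head dropped near the end of a horizontal path segment.
head = [(0.0, 0.0), (2.0, 1.0), (0.0, 2.0)]
attached = attach_arrow_head(head, endpoint=(5.0, 1.0))
```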

IPC Classes

83.

Miscellaneous Design

      
Application Number 019256499
Status Pending
Filing Date 2025-10-03
Owner Adobe Inc. (USA)
NICE Classes

  • 41 - Education, entertainment, sporting and cultural services

Goods & Services

Educational and training services; educational and training services in the form of classroom training, online training, web based training, and video training in the fields of computer software, cloud computing, desktop publishing, digital publishing, electronic publishing, graphic design, marketing, advertising, analytics, e-commerce, digital asset management, data management, business management, business process management, business document and forms creation, and automation of business document and forms processing and workflow; educational services; educational services in the form of arranging professional workshops and training courses, conducting classes, seminars, conferences, and workshops in the fields of computer software, cloud computing, desktop publishing, digital publishing, electronic publishing, graphic design, marketing, advertising, analytics, e-commerce, digital asset management, data management, business management, business process management, business document and forms creation, and automation of business document and forms processing and workflow; educational and training sessions in the field of organization and business matters relating to creative professionals.

84.

META-LEARNING FOR ADAPTIVE FILTERS

      
Application Number 19239430
Status Pending
Filing Date 2025-06-16
First Publication Date 2025-10-02
Owner Adobe Inc. (USA)
Inventor
  • Casebeer, Jonah
  • Bryan, Nicholas J.

Abstract

Embodiments are disclosed for using a neural network to optimize filter weights of an adaptive filter. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving, by a filter, an input audio signal, wherein the input audio signal is a far-end audio signal, the filter including a transfer function with adaptable filter weights, generating a response audio signal modeling the input audio signal passing through an acoustic environment, receiving a target response signal, including the input audio signal and near-end audio signals, calculating an adaptive filter loss, generating, by a trained recurrent neural network, a filter weight update using the calculated adaptive filter loss, updating the adaptable filter weights of the transfer function to create an updated transfer function, generating an updated response audio signal based on the updated transfer function, and providing the updated response audio signal as an output audio signal.
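
The adaptation loop reads like classical adaptive filtering with the hand-designed update rule swapped for a learned one. The sketch below uses a plain LMS update as a stand-in for the trained recurrent network's weight update; the toy echo path, step size, and all names are assumptions.

```python
def adaptive_filter_step(weights, far_end, target, step):
    """One adaptation step. The abstract replaces the fixed LMS rule below
    with a weight update predicted by a trained recurrent neural network."""
    # Filter response: the far-end frame passed through the current weights.
    y = sum(w * x for w, x in zip(weights, far_end))
    err = target - y  # adaptive filter loss signal
    # LMS-style update (stand-in for the learned weight update).
    return [w + step * err * x for w, x in zip(weights, far_end)], err

# Toy acoustic path: the target is the far-end frame filtered by [0.5, 0.25].
true_w = [0.5, 0.25]
weights = [0.0, 0.0]
frames = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]] * 50
for x in frames:
    t = sum(w * xi for w, xi in zip(true_w, x))
    weights, err = adaptive_filter_step(weights, x, t, step=0.2)
```

After the loop the weights should have converged near the true echo path, which is the behavior a learned update is trained to reach in fewer steps.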

IPC Classes

  • G10L 21/0232 - Processing in the frequency domain
  • G10L 21/0208 - Noise filtering
  • G10L 21/0224 - Processing in the time domain
  • G10L 25/18 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band
  • G10L 25/30 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the analysis technique using neural networks

85.

EDITING DIGITAL IMAGES WITH LOCAL REFINEMENT VIA SELECTIVE FEATURE TRIMMING

      
Application Number 18617032
Status Pending
Filing Date 2024-03-26
First Publication Date 2025-10-02
Owner Adobe Inc. (USA)
Inventor
  • Nitzan, Yotam
  • Wu, Zongze
  • Park, Taesung
  • Zhang, Richard
  • Gharbi, Michael
  • Shechtman, Elya

Abstract

Methods, systems, and non-transitory computer readable storage media are disclosed for modifying digital images via a generative neural network with local refinement. The disclosed system generates, utilizing an encoder neural network, a latent feature vector of a digital image by encoding global context information of the digital image into the latent feature vector. The disclosed system also determines a modified latent feature vector by trimming the latent feature vector to a feature subset corresponding to a masked portion of the digital image. Additionally, the disclosed system generates, utilizing a generative decoder neural network on the modified latent feature vector, digital image data corresponding to the masked portion of the digital image. The disclosed system also generates a modified digital image including the digital image data corresponding to the masked portion combined with additional portions of the digital image.
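
The trimming idea, encoding the whole image for global context but decoding only the latent features under the mask and then compositing, can be shown on a 1-D toy. The stand-in encoder/decoder and all names below are illustrative assumptions, not the disclosed networks.

```python
def trim_latent(latent, mask_indices):
    """Keep only the latent features whose positions fall inside the mask."""
    return [latent[i] for i in mask_indices]

def composite(image, generated, mask_indices):
    """Paste decoded values at the masked positions; keep the rest as-is."""
    out = list(image)
    for pos, val in zip(mask_indices, generated):
        out[pos] = val
    return out

# Toy 1-D "image": positions 2 and 3 are masked and regenerated.
image = [10, 20, 30, 40, 50]
latent = [v * 0.1 for v in image]             # stand-in for the encoder
masked = [2, 3]
subset = trim_latent(latent, masked)          # the trimmed latent subset
decoded = [round(f * 10) + 1 for f in subset]  # stand-in generative decoder
result = composite(image, decoded, masked)
```

Only the masked positions change in `result`; the unmasked portions of the image pass through untouched, which is the point of trimming before decoding.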

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06T 5/60 - Image enhancement or restoration using machine learning, e.g. neural networks

86.

GENERATING A DIGITAL POSTER INCLUDING MULTIMODAL CONTENT EXTRACTED FROM A SOURCE DOCUMENT

      
Application Number 18619667
Status Pending
Filing Date 2024-03-28
First Publication Date 2025-10-02
Owner Adobe Inc. (USA)
Inventor
  • Jaisankar, Vijay
  • Chaitanya, Varre Suman
  • Vyas, Kalp Sachinkumar
  • Bandyopadhyay, Sambaran
  • Somasundaram, Shwetha

Abstract

The present disclosure relates to systems, non-transitory computer-readable media, and methods for generating digital posters from digital documents with multimodal content using a deep submodular function. Specifically, the disclosed systems generate embedding vectors representing multimodal content of a digital document comprising text and images. Further, disclosed systems determine, utilizing a deep submodular function on the embedding vectors, a content subset comprising one or more digital images aligned with one or more text segments representative of the digital document. Moreover, the disclosed systems generate, utilizing a large language model, a summary of the multimodal content of the digital document from a prompt based on the content subset. Additionally, the disclosed systems generate, for display at a client device, a digital poster comprising the summary of the multimodal content generated via the large language model.
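
The subset-selection step can be illustrated with greedy maximization of a facility-location coverage objective, a classic submodular function, over a toy similarity matrix. The deep submodular function in the abstract is learned; everything below is an assumed simplification.

```python
def facility_location_gain(selected, candidate, sims):
    """Marginal coverage gain of adding `candidate` to `selected`."""
    def cover(sel):
        return sum(max((sims[i][j] for j in sel), default=0.0)
                   for i in range(len(sims)))
    return cover(selected + [candidate]) - cover(selected)

def greedy_select(sims, k):
    """Greedy maximization of the submodular coverage objective."""
    selected = []
    for _ in range(k):
        best = max((c for c in range(len(sims)) if c not in selected),
                   key=lambda c: facility_location_gain(selected, c, sims))
        selected.append(best)
    return selected

# Toy similarity matrix over 4 content items (text segments / images):
# items 0 and 1 are near-duplicates, 2 is distinct, 3 is an outlier.
sims = [
    [1.0, 0.9, 0.1, 0.0],
    [0.9, 1.0, 0.1, 0.0],
    [0.1, 0.1, 1.0, 0.2],
    [0.0, 0.0, 0.2, 1.0],
]
subset = greedy_select(sims, 2)
```

Submodularity is what makes greedy selection a good fit here: adding a near-duplicate of an already-selected item yields almost no marginal gain, so the chosen subset stays diverse and representative.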

IPC Classes

  • G06N 3/0455 - Auto-encoder networks; Encoder-decoder networks

87.

IMAGE RELIGHTING USING MACHINE LEARNING

      
Application Number 18949023
Status Pending
Filing Date 2024-11-15
First Publication Date 2025-10-02
Owner ADOBE INC. (USA)
Inventor
  • Revanur, Ambareesh
  • Kolkin, Nicholas Isaac
  • Agarwal, Dhwanit
  • Agrawal, Shradha
  • Zhang, He
  • Harikumar, Midhun
  • Shechtman, Elya

Abstract

A method, apparatus, non-transitory computer readable medium, and system for image generation includes obtaining an input image and an input prompt, where the input image depicts an object and the input prompt describes a lighting condition for the object, generating relighted image features based on the input image and the input prompt, where the relighted image features represent the object with the lighting condition, and generating a synthetic image based on the relighted image features, where the synthetic image depicts the object with the lighting condition.

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06T 11/00 - 2D [Two Dimensional] image generation

88.

DOCUMENT BOUNDARY DETECTION USING THE CURVATURE OF TEXT LINES

      
Application Number 18617279
Status Pending
Filing Date 2024-03-26
First Publication Date 2025-10-02
Owner Adobe Inc. (USA)
Inventor
  • Mondal, Prasenjit
  • Soni, Sachin

Abstract

Embodiments are disclosed for using the curvature of text lines to detect a document boundary. The method may include receiving a warped image depicting a page of a document having an incomplete document boundary, the page including a plurality of text lines. A complete document boundary may be identified based on the incomplete document boundary and the plurality of text lines. A dewarped image corresponding to the warped image may be determined using the complete document boundary. The dewarped image may then be provided for display on a client device.

IPC Classes

89.

DIFFUSION WATERMARKING FOR CAUSAL ATTRIBUTION

      
Application Number 18617969
Status Pending
Filing Date 2024-03-27
First Publication Date 2025-10-02
Owner ADOBE INC. (USA)
Inventor
  • Agarwal, Shruti
  • Collomosse, John
  • Asnani, Vishal

Abstract

A method, apparatus, non-transitory computer readable medium, and system for image processing include obtaining an input prompt describing an image element, generating, using an image generation model, an output image depicting the image element and including a watermark, and identifying a training image as a source of the output image based on the watermark. The image generation model is trained using the training image, which includes the image element and the watermark.

IPC Classes

  • G06T 1/00 - General purpose image data processing
  • G06T 11/60 - Editing figures and text; Combining figures or text

90.

ADAPTIVE DYNAMIC GUIDANCE IN DATA ANALYSIS TOOLS

      
Application Number 18618638
Status Pending
Filing Date 2024-03-27
First Publication Date 2025-10-02
Owner Adobe Inc. (USA)
Inventor
  • Narechania, Arpit Ajay
  • Hoffswell, Jane
  • Guo, Shunan
  • Koh, Eunyee
  • Bhutani, Prithvi

Abstract

In one aspect, a computer-implemented method includes accessing, by a guidance module of an analysis application executing on a processor, wildcard data associated with data in a data repository. The method further includes displaying, by the guidance module based on the wildcard data, one or more wildcard elements in a graphical user interface (GUI). The method further includes receiving, by the analysis application, selection of a first wildcard element of the one or more wildcard elements. The method further includes displaying, by the guidance module, a suggestion based on the selection of the first wildcard element.

IPC Classes

  • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
  • G06F 9/451 - Execution arrangements for user interfaces

91.

GENERATING SYNTHETIC DIGITAL IMAGES UTILIZING A TEXT-TO-IMAGE GENERATION NEURAL NETWORK WITH LOCALIZED CONSTRAINTS

      
Application Number 18619587
Status Pending
Filing Date 2024-03-28
First Publication Date 2025-10-02
Owner Adobe Inc. (USA)
Inventor
  • Feng, Weixi
  • Li, Yijun
  • Bui, Trung
  • Hinz, Tobias
  • Cohen, Scott
  • Tran, Quan
  • Zhang, Jianming
  • Zhao, Handong
  • Dernoncourt, Franck

Abstract

Methods, systems, and non-transitory computer readable storage media are disclosed for generating digital images via a generative neural network with localized constraints. The disclosed system generates, utilizing one or more encoder neural networks, a sequence of embeddings comprising a prompt embedding representing a text prompt and an object text embedding representing a phrase indicating an object in the text prompt. The disclosed system generates, utilizing the one or more encoder neural networks, a visual embedding representing an object image corresponding to the object. The disclosed system determines a modified sequence of embeddings by replacing the object text embedding with the visual embedding in the sequence of embeddings. The disclosed system also generates, utilizing a generative neural network, a synthetic digital image from the modified sequence of embeddings comprising the visual embedding.
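
The embedding-replacement step is mechanically simple: swap the object's text embedding for its visual embedding at the same position in the conditioning sequence. The toy vectors and names below are hypothetical; the encoder networks and generative network are omitted.

```python
def replace_object_embedding(sequence, object_token_index, visual_embedding):
    """Swap the object's text embedding for the visual embedding, preserving
    the sequence length and every other position."""
    modified = list(sequence)
    modified[object_token_index] = visual_embedding
    return modified

# Toy sequence: a prompt embedding followed by per-token embeddings;
# position 2 is the object phrase (say, "cat").
prompt_seq = [[0.5, 0.5], [0.1, 0.2], [0.9, 0.9], [0.3, 0.1]]
visual_cat = [0.7, 0.4]  # stand-in encoder output for the object image
conditioned = replace_object_embedding(prompt_seq, 2, visual_cat)
```

The generative network then conditions on `conditioned` exactly as it would on the original sequence, which is why the localized constraint needs no architectural change.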

IPC Classes

  • G06N 3/0455 - Auto-encoder networks; Encoder-decoder networks
  • G06T 11/00 - 2D [Two Dimensional] image generation

92.

VECTORIZING DIGITAL IMAGES WITH SUB-PIXEL ACCURACY USING DYNAMIC UPSCALING

      
Application Number 18619610
Status Pending
Filing Date 2024-03-28
First Publication Date 2025-10-02
Owner Adobe Inc. (USA)
Inventor
  • Harpavat, Keerti
  • Chakraborty, Souymodip
  • Gharbi, Michael
  • Fisher, Matthew
  • Ranawat, Jaswant Singh
  • Phogat, Ankit
  • Batra, Vineet

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that selectively utilize an image super-resolution model to upscale image patches corresponding to high-frequency portions. In particular, the disclosed systems select a set of image patches corresponding to high-frequency portions of a digital image at a first resolution. Furthermore, the disclosed systems utilize an image super-resolution model to upscale the set of image patches corresponding to the high-frequency portions to a second resolution higher than the first resolution according to an upscaling factor of at least two. The disclosed systems generate a segmentation map of the digital image based on the upscaled image patches and an upscaled segmentation corresponding to low-frequency portions of the digital image. Further, the disclosed systems generate a vectorized digital image for the digital image according to the segmentation map.
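
Patch selection by frequency content can be sketched with intensity variance as a crude high-frequency proxy, and nearest-neighbour upscaling standing in for the super-resolution model. The threshold, the 1-D patches, and all names are assumptions for illustration.

```python
def patch_variance(patch):
    """Simple high-frequency proxy: intensity variance within the patch."""
    n = len(patch)
    mean = sum(patch) / n
    return sum((p - mean) ** 2 for p in patch) / n

def select_high_frequency(patches, threshold):
    """Indices of patches busy enough to warrant super-resolution upscaling."""
    return [i for i, p in enumerate(patches) if patch_variance(p) > threshold]

def upscale_nearest(patch, factor=2):
    """Stand-in for the super-resolution model: nearest-neighbour upscaling."""
    return [v for v in patch for _ in range(factor)]

# Flat patch (low frequency) vs. edge patch (high frequency) vs. near-flat.
patches = [[5, 5, 5, 5], [0, 0, 9, 9], [3, 3, 3, 4]]
chosen = select_high_frequency(patches, threshold=1.0)
upscaled = {i: upscale_nearest(patches[i]) for i in chosen}
```

Only the edge-bearing patch is routed through the expensive upscaler; the flat patches would follow the cheaper low-frequency path before segmentation.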

IPC Classes

  • G06T 3/4053 - Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
  • G06T 5/70 - Denoising; Smoothing
  • G06T 7/11 - Region-based segmentation
  • G06T 7/13 - Edge detection
  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06V 10/56 - Extraction of image or video features relating to colour

93.

THREE DIMENSIONAL AWARE VIDEO COMPOSITING

      
Application Number 18623377
Status Pending
Filing Date 2024-04-01
First Publication Date 2025-10-02
Owner Adobe Inc. (USA)
Inventor
  • Xu, Zhan
  • Pimmel, Kim P.
  • Yang, Jimei

Abstract

Three dimensional aware video compositing techniques are described. In one or more examples, subject data is produced that defines a subject depicted in frames of a subject video and viewpoint data describing movement of a viewpoint with respect to the frames of the subject video. Three-dimensional data is formed that defines a three-dimensional representation of an environment depicted in frames of an environment video. A composited video is generated by aligning the environment with the movement of the viewpoint of the subject based on the subject data and the three-dimensional data, which is then rendered, e.g., presented for display in a user interface.

IPC Classes

  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
  • G06T 7/11 - Region-based segmentation
  • G06T 7/215 - Motion-based segmentation
  • G06T 7/579 - Depth or shape recovery from multiple images from motion
  • G06T 15/20 - Perspective computation

94.

REFERENCE IMAGE STRUCTURE MATCH USING DIFFUSION MODELS

      
Application Number 18947959
Status Pending
Filing Date 2024-11-14
First Publication Date 2025-10-02
Owner ADOBE INC. (USA)
Inventor
  • Kelkar, Sachin Madhav
  • Chen, Fengbin
  • Ravi, Hareesh
  • Zhang, Zhifei
  • Kale, Ajinkya Gorakhnath
  • Lin, Zhe

Abstract

A method, apparatus, non-transitory computer readable medium, and system for image processing include obtaining a structural input indicating a target spatial structure, encoding, using a condition encoder, the structural input to obtain a structural encoding representing the target spatial structure, and generating, using an image generation model, a synthetic image based on the structural encoding, where the synthetic image depicts an object having the target spatial structure.

IPC Classes

  • G06T 11/00 - 2D [Two Dimensional] image generation

95.

TEXT TO COLOR PALETTE GENERATION USING DIFFUSION MODELS

      
Application Number 18609102
Status Pending
Filing Date 2024-03-19
First Publication Date 2025-09-25
Owner ADOBE INC. (USA)
Inventor
  • Aggarwal, Pranav Vineet
  • Kale, Ajinkya Gorakhnath

Abstract

A method, apparatus, non-transitory computer readable medium, and system for text-to-color palette generation include encoding a text prompt to obtain a text embedding. A color embedding is generated based on the text embedding by performing a diffusion process. Then a color palette is generated based on the color embedding. The color palette includes a plurality of colors corresponding to the text prompt.

IPC Classes

  • G06T 11/00 - 2D [Two Dimensional] image generation
  • G06F 40/284 - Lexical analysis, e.g. tokenisation or collocates
  • G06F 40/40 - Processing or translation of natural language
  • G06T 5/70 - Denoising; Smoothing

96.

CLUSTERING USERS ACCORDING TO CAUSAL RELATIONSHIPS AMONG USER DATA

      
Application Number 18609625
Status Pending
Filing Date 2024-03-19
First Publication Date 2025-09-25
Owner ADOBE INC. (USA)
Inventor
  • Porwal, Vibhor
  • Chopra, Harshita
  • Sinha, Atanu R.
  • Modanwal, Sharda Kriti
  • Narayanaswamy, Chetan Reddy
  • Niaz, Zainul

Abstract

Methods, non-transitory computer readable media, apparatuses, and systems for data processing include obtaining, by a machine learning model, a user cluster and interaction data for users in the user cluster, where the interaction data relates to interactions between the users and a digital platform. Some embodiments further include generating, by the machine learning model, a directed graph based on the user cluster and the interaction data, where the directed graph represents causal relationships among the interactions. Some embodiments further include updating, by the machine learning model, the user cluster based on the directed graph. Some embodiments further include providing, by a content component, customized content to a user via the digital platform based on the updated user cluster.

IPC Classes

97.

SELECTIVE OBJECT-LEVEL UNDO

      
Application Number 18610628
Status Pending
Filing Date 2024-03-20
First Publication Date 2025-09-25
Owner Adobe Inc. (USA)
Inventor
  • Soni, Nikita
  • Bui, Trung
  • Smith, Kevin Gary

Abstract

The present disclosure relates to systems, non-transitory computer-readable media, and methods for modifying a digital design by performing a selective object-level undo operation. In one or more embodiments, the disclosed systems generate a modified object by performing a series of operations on an object depicted within the digital design. In some embodiments, the disclosed systems receive a request to perform a selective object-level undo operation on the modified object, wherein the request specifies an operation to undo from among the series of operations performed on the object. In one or more embodiments, the disclosed systems modify the modified object by performing the selective object-level undo operation on the modified object to undo the operation from among the series of operations. In some embodiments, the disclosed systems provide an updated digital design depicting the modified object reflecting modifications from the series of operations excluding the operation undone by the selective object-level undo operation.
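
Selective object-level undo can be modeled as replaying the operation series with the undone operation skipped, so edits made after it in the series survive. The dict-based object state and all names below are illustrative assumptions, not the disclosed implementation.

```python
def apply_op(obj, op):
    """Apply a single edit operation to a toy object-state dict."""
    kind, value = op
    new = dict(obj)
    new[kind] = value
    return new

def selective_undo(base_object, operations, undo_index):
    """Rebuild the object by replaying every operation except the undone one,
    preserving the effects of later operations in the series."""
    obj = base_object
    for i, op in enumerate(operations):
        if i != undo_index:
            obj = apply_op(obj, op)
    return obj

base = {"color": "black", "size": 10, "rotation": 0}
ops = [("color", "red"), ("size", 20), ("rotation", 45)]
# Undo only the resize (operation 1); the recolor and the rotation survive.
result = selective_undo(base, ops, undo_index=1)
```

Replaying from a base state sidesteps the usual problem with stack-based undo, where reverting one operation would also discard everything performed after it.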

IPC Classes

  • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
  • G06F 3/04842 - Selection of displayed objects or displayed text elements
  • G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
  • G06F 30/12 - Geometric CAD characterised by design entry means specially adapted for CAD, e.g. graphical user interfaces [GUI] specially adapted for CAD

98.

DIGITAL IMAGE VISUAL AESTHETIC SCORE GENERATION

      
Application Number 18611886
Status Pending
Filing Date 2024-03-21
First Publication Date 2025-09-25
Owner Adobe Inc. (USA)
Inventor
  • Jenni, Simon
  • Wang, Zhaowen
  • Collomosse, John Philip

Abstract

Digital image visual aesthetic score generation techniques are described. In one or more examples, these techniques are implemented by a system including a training data collection module implemented by a processing device to collect training data including training digital images and user interaction data describing user interaction with the training digital images. A training module is configured to train a machine-learning model using the training data to generate an aesthetic score based on an input digital image. The aesthetic score is configured to specify an amount of visual aesthetics exhibited by the input digital image.

IPC Classes

  • G06T 7/00 - Image analysis
  • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
  • G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations

99.

CONTROLLABLE VISUAL TEXT GENERATION WITH ADAPTER-ENHANCED DIFFUSION MODELS

      
Application Number 18612100
Status Pending
Filing Date 2024-03-21
First Publication Date 2025-09-25
Owner ADOBE INC. (USA)
Inventor
  • Ji, Jiabao
  • Wang, Zhaowen
  • Zhang, Zhifei
  • Price, Brian Lynn

Abstract

A method, apparatus, non-transitory computer readable medium, and system for image generation include obtaining a text content image and a text style image. The text content image is encoded to obtain content guidance information and the text style image is encoded to obtain style guidance information. Then a synthesized image is generated based on the content guidance information and the style guidance information. The synthesized image includes text from the text content image having a text style from the text style image.

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06F 40/109 - Font handling; Temporal or kinetic typography
  • G06V 20/62 - Text, e.g. of license plates, overlay texts or captions on TV images
  • G06V 30/10 - Character recognition

100.

REDUCING HALLUCINATIONS FOR GENERATIVE TEXT RESPONSES USING A MACHINE LEARNING PROMPT ENSEMBLE

      
Application Number 18612566
Status Pending
Filing Date 2024-03-21
First Publication Date 2025-09-25
Owner Adobe Inc. (USA)
Inventor
  • Yu, Tong
  • Chen, Xiang
  • Bursztyn, Victor Soares
  • Kim, Sungchul
  • Rossi, Ryan A
  • Zhang, Ruiyi
  • Wang, Rui

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that iteratively generate, utilizing a machine learning model, text responses to reduce hallucinated content. In particular, in some embodiments, the disclosed systems receive a digital query and select one or more supporting digital documents for the digital query. Furthermore, in some embodiments the disclosed systems generate a first text response from a first text prompt generated by using the digital query. Moreover, in some embodiments the disclosed systems extract a misalignment portion of the first text response by comparing the first text response and the one or more supporting digital documents. Additionally, from the misalignment portion of the first text response and the digital query, the disclosed systems generate a second text response.
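
The misalignment-extraction step can be approximated with a crude lexical grounding check that flags response sentences whose words are mostly absent from the supporting documents. Real systems would use an entailment model or LLM judge; every name and threshold here is an assumption.

```python
def supported(sentence, documents):
    """Crude grounding check: a sentence is supported if most of its words
    appear in at least one supporting document."""
    words = set(sentence.lower().split())
    return any(len(words & set(d.lower().split())) >= len(words) * 0.6
               for d in documents)

def find_misalignment(response_sentences, documents):
    """Sentences in the response with no support in the retrieved documents."""
    return [s for s in response_sentences if not supported(s, documents)]

docs = ["the moon orbits the earth every 27 days"]
response = ["the moon orbits the earth", "the moon is made of cheese"]
misaligned = find_misalignment(response, docs)
# A second prompt would then be built from `misaligned` plus the original
# query so the model can regenerate a corrected second response.
```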

IPC Classes

  • G06F 16/33 - Querying
  • G06F 16/383 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content