Adobe Inc.

United States of America

Back to owner

1-100 of 7,232 for Adobe Inc. Sort by
Text Search
Excluding subsidiaries
Refine by
IP Type
        Patent 6,696
        Trademark 536
Jurisdiction
        United States 6,969
        Europe 157
        Canada 92
        International 14
Date
New (last 4 weeks) 43
November 2025 5
October 2025 50
September 2025 29
August 2025 28
See more
IPC Class
G06T 11/60 - Editing figures and text; Combining figures or text 519
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints 517
G06K 9/62 - Methods or arrangements for recognition using electronic means 501
G06N 3/08 - Learning methods 488
G06F 17/30 - Information retrieval; Database structures therefor 416
See more
NICE Class
09 - Scientific and electrical apparatus and instruments 395
42 - Scientific, technological and industrial services, research and design 266
35 - Advertising; Business affairs 103
16 - Paper, cardboard and goods made from these materials 86
41 - Education, entertainment, sporting and cultural activities 63
See more
Status
Pending 807
Registered / In Force 6,425
  1     2     3     ...     73        Next page

1.

UTILIZING MACHINE LEARNING MODELS FOR PATCH RETRIEVAL AND DEFORMATION IN COMPLETING THREE-DIMENSIONAL DIGITAL SHAPES

      
Application Number 19270209
Status Pending
Filing Date 2025-07-15
Date of First Publication 2025-11-06
Owner Adobe Inc. (USA)
Inventor(s)
  • Chaudhuri, Siddhartha
  • Sun, Bo
  • Kim, Vladimir
  • Aigerman, Noam

Abstract

Methods, systems, and non-transitory computer readable storage media are disclosed that utilize machine learning models for patch retrieval and deformation in completing three-dimensional digital shapes. In particular, in one or more implementations, the disclosed systems utilize a machine learning model to predict a coarse completion shape from an incomplete 3D digital shape. The disclosed systems sample coarse 3D patches from the coarse completion shape and learn a shape distance function to retrieve detailed 3D shape patches from the input shape. Moreover, the disclosed systems learn a deformation for each retrieved patch, along with blending weights, to integrate the retrieved patches into a continuous surface.
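As a hedged illustration of the retrieval step this abstract describes (nothing below appears in the filing; the patent learns a shape distance function, whereas this sketch substitutes the standard Chamfer distance between point sets), a minimal patch-retrieval routine might look like:

```python
import numpy as np

def chamfer(a, b):
    """Symmetric Chamfer distance between two point sets (N x 3 and M x 3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def retrieve_patches(coarse_patches, detailed_patches):
    """For each coarse patch, return the index of the closest detailed patch."""
    return [min(range(len(detailed_patches)),
                key=lambda j: chamfer(patch, detailed_patches[j]))
            for patch in coarse_patches]
```

A learned distance would replace `chamfer`; the per-patch deformation and blending-weight steps are separate stages not sketched here.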

IPC Classes

  • G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation
  • G06V 10/22 - Image preprocessing by selecting a specific region containing or referencing a pattern; Locating or processing of specific regions to guide detection or recognition
  • G06V 10/75 - Organisation of the matching process, e.g. simultaneous or sequential comparisons of image or video features; Coarse-to-fine approaches, e.g. multi-scale approaches; Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries

2.

IDENTIFYING AND ALIGNING VIDEO CLIPS FROM LARGE-SCALE VIDEO DATASETS

      
Application Number 18653577
Status Pending
Filing Date 2024-05-02
Date of First Publication 2025-11-06
Owner Adobe Inc. (USA)
Inventor(s)
  • Jenni, Simon
  • Dave, Ishan Rajendrakumar
  • Heilbron, Fabian David Caba

Abstract

Embodiments are disclosed for retrieving videos for semantic and temporal alignment between a pair of video clips. The method may include receiving a query video clip. The method may further include determining alignment ratios between the query video clip and one or more candidate video clips. The method may further include identifying an alignable video clip from the one or more candidate video clips based on the alignment ratios. The method may further include aligning the alignable video clip with the query video clip.
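The filing does not define "alignment ratio" in this abstract, so the sketch below is a hypothetical reading, not the patented method: score a candidate clip by the fraction of query frames whose embeddings match a candidate frame under the best temporal offset.

```python
import numpy as np

def alignment_ratio(query, candidate, sim_threshold=0.9):
    """Fraction of query frames whose embedding matches a candidate frame
    (cosine similarity >= threshold) under the best temporal offset."""
    q = query / np.linalg.norm(query, axis=1, keepdims=True)
    c = candidate / np.linalg.norm(candidate, axis=1, keepdims=True)
    best = 0.0
    for offset in range(-(len(c) - 1), len(q)):
        matched = sum(
            1 for i in range(len(q))
            if 0 <= i - offset < len(c) and q[i] @ c[i - offset] >= sim_threshold
        )
        best = max(best, matched / len(q))
    return best
```

A candidate whose best ratio exceeds some cutoff would count as "alignable", and the best offset gives the alignment itself.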

IPC Classes

  • G06V 20/40 - Scenes; Scene-specific elements in video content
  • G06F 16/735 - Filtering based on additional data, e.g. user or group profiles
  • G11B 27/036 - Insert editing
  • G11B 27/10 - Indexing; Addressing; Timing or synchronising; Measuring tape travel

3.

USING SHAPLEY VALUES TO EVALUATE PROMPT GENERATION PARAMETERS

      
Application Number 18655047
Status Pending
Filing Date 2024-05-03
Date of First Publication 2025-11-06
Owner ADOBE INC. (USA)
Inventor(s)
  • Venkitachalam, Shankar
  • M Y, Meghanath
  • Pai, Deepak
  • Basu, Debraj Debashish
  • Narang, Anish

Abstract

Methods and systems are provided for using Shapley values to evaluate prompt generation parameters. In embodiments described herein, a selection of prompt parameters is accessed. A plurality of prompts are generated as a function of a combination of the prompt parameters. A corresponding quality metric is determined for each of the prompts. Prompt parameter contribution metrics are determined using a Shapley-value-based determination corresponding to the contribution of each of the prompt parameters to the corresponding quality metric for each of the prompts. The prompt parameter contribution metrics are then displayed.
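The Shapley-value computation described above can be sketched exactly for small parameter sets by enumerating coalitions; this is a minimal textbook implementation, not code from the filing, and the quality function passed in is a labeled assumption.

```python
from itertools import combinations
from math import factorial

def shapley_values(params, quality):
    """Exact Shapley value of each prompt parameter.

    `quality` maps a frozenset of parameters to the quality metric of a
    prompt generated from that combination of parameters.
    """
    n = len(params)
    values = {}
    for p in params:
        others = [q for q in params if q != p]
        total = 0.0
        for size in range(n):
            for coalition in combinations(others, size):
                s = frozenset(coalition)
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                total += weight * (quality(s | {p}) - quality(s))
        values[p] = total
    return values
```

With an additive toy metric, each parameter's Shapley value reduces to its own marginal gain; in practice the metric would come from scoring the generated prompts, and sampling-based approximations replace full enumeration as the parameter count grows.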

IPC Classes

4.

EDITING DIGITAL IMAGES USING EXECUTABLE CODE GENERATED BY LARGE LANGUAGE MODELS FROM NATURAL LANGUAGE INPUT

      
Application Number 18654904
Status Pending
Filing Date 2024-05-03
Date of First Publication 2025-11-06
Owner Adobe Inc. (USA)
Inventor(s)
  • Zhao, Handong
  • Wu, Qiucheng
  • Bui, Trung
  • Yoon, Seunghyun
  • Tran, Quan
  • Shi, Jing

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that perform text-to-image editing using executable code generated from natural language text input. For instance, in one or more embodiments, the disclosed systems receive, from a client device, a digital image and natural language text input providing instructions for modifying the digital image. The disclosed systems also generate, using a large language model, executable action code for modifying the digital image in accordance with the instructions of the natural language text input, the executable action code being compatible with an editing application. The disclosed systems further modify the digital image by executing the executable action code via the editing application and provide the modified digital image for display via a graphical user interface of the client device.

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06T 7/11 - Region-based segmentation

5.

Computer display screen with icon

      
Application Number 29926615
Patent Number D1100980
Status Granted - In Force
Filing Date 2024-01-31
Date of First Publication 2025-11-04
Grant Date 2025-11-04
Owner ADOBE INC. (USA)
Inventor(s)
  • Walter, Julia
  • Reinemann, Bettina

6.

SAMPLING LIGHT DIRECTIONS ON NEURAL MATERIALS

      
Application Number 18644322
Status Pending
Filing Date 2024-04-24
Date of First Publication 2025-10-30
Owner
  • Adobe Inc. (USA)
  • THE REGENTS OF THE UNIVERSITY OF CALIFORNIA (USA)
Inventor(s)
  • Luan, Fujun
  • Xu, Zexiang
  • Hašan, Miloš
  • Georgiev, Iliyan Atanasov
  • Wu, Liwen
  • Xu, Bing
  • Ramamoorthi, Ravi

Abstract

In implementation of techniques for sampling light directions on neural materials, a computing device implements a light direction system to receive neural features of a material and an indication of a view direction toward the material. Using a mixture of analytical lobes, a normalizing flow, or a histogram prediction, the light direction system predicts a probability density function (PDF). The light direction system then samples the PDF, calculates prominence values for each of a plurality of candidate light directions based on the PDF, and determines a light direction based on the prominence values.
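For the histogram-prediction branch mentioned above, one standard mechanism for sampling from a predicted distribution over discretised light directions is inverse-CDF sampling; this sketch is an assumption about that mechanic, not code from the filing, and the prominence calculation is omitted.

```python
import numpy as np

def sample_light_direction_bin(histogram, rng=None):
    """Draw a direction-bin index from a predicted histogram over
    discretised light directions via inverse-CDF sampling."""
    rng = rng or np.random.default_rng(0)
    pdf = np.asarray(histogram, dtype=float)
    pdf = pdf / pdf.sum()
    cdf = np.cumsum(pdf)
    # side="right" skips leading zero-probability bins when the draw is 0.0
    return int(np.searchsorted(cdf, rng.random(), side="right"))
```

The sampled bin index would map back to a direction on the sphere; mixture-of-lobes and normalizing-flow variants sample continuously instead of over bins.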

IPC Classes

7.

EDITING SHADOWS IN DIGITAL IMAGES UTILIZING MACHINE LEARNING MODELS

      
Application Number 18651176
Status Pending
Filing Date 2024-04-30
Date of First Publication 2025-10-30
Owner Adobe Inc. (USA)
Inventor(s)
  • Shu, Zhixin
  • Hou, Andrew
  • Zhang, He
  • Zhang, Xuaner
  • Hold-Geoffroy, Yannick
  • Yoon, Jae Shin

Abstract

The present disclosure relates to systems, non-transitory computer-readable media, and methods for editing shadows in digital images. In particular, in some embodiments, the disclosed systems determine, utilizing a lighting estimation network, an environment map for a digital image, the environment map comprising a dominant light. In addition, in some embodiments, the disclosed systems generate, utilizing a lighting diffusion network, a diffused image from the digital image, the diffused image comprising smoothed shading. Moreover, in some embodiments, the disclosed systems generate, utilizing a shadow synthesis network, a shadowed image from the diffused image and a modified environment map comprising a modified dominant light. Furthermore, in some embodiments, the disclosed systems generate, from the diffused image and the shadowed image, a modified digital image comprising an edited shadow.

IPC Classes

  • G06T 11/00 - 2D [two-dimensional] image generation

8.

THREE-DIMENSIONAL RECONSTRUCTIONS BASED ON GAUSSIAN PRIMITIVES

      
Application Number 18646503
Status Pending
Filing Date 2024-04-25
Date of First Publication 2025-10-30
Owner Adobe Inc. (USA)
Inventor(s)
  • Zhang, Kai
  • Tan, Hao
  • Bi, Sai
  • Xu, Zexiang
  • Zhao, Nanxuan
  • Sunkavalli, Kalyan Krishna

Abstract

In implementation of techniques for three-dimensional reconstructions based on Gaussian primitives, a computing device implements a reconstruction system to receive a first digital image depicting an object from a first angle and a second digital image depicting the object from a second angle. The reconstruction system segments the first digital image and the second digital image into patches. The reconstruction system then generates, using a machine learning model, three-dimensional Gaussian primitives that predict parameters of points of the object in a three-dimensional space that correspond on a per-pixel basis to pixels of the patches. The reconstruction system then forms a three-dimensional reconstruction of the object for display in a user interface by merging the three-dimensional Gaussian primitives.

IPC Classes

  • G06T 17/10 - Volume description, e.g. cylinders, cubes or using CSG [constructive solid geometry]
  • G06T 7/11 - Region-based segmentation
  • G06T 7/55 - Depth or shape recovery from multiple images

9.

UPSCALING AI-GENERATED DIGITAL CONTENT WITHIN DIGITAL IMAGES VIA TILE-BASED SUPER RESOLUTION

      
Application Number 18646543
Status Pending
Filing Date 2024-04-25
Date of First Publication 2025-10-30
Owner Adobe Inc. (USA)
Inventor(s)
  • Barnes, Connelly
  • Lin, Zhe
  • Liu, Xiaoyang
  • Amirghodsi, Sohrab
  • Liu, Qing

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that upscale AI-generated digital content via tile-based super resolution. For instance, in one or more embodiments, the disclosed systems determine a first set of tiles from a digital image having a set of pixels to be replaced with a generated content portion. The disclosed systems further determine a second set of tiles from a first modified digital image that corresponds to the digital image and includes the generated content portion at a first resolution. Based on the first set of tiles and the second set of tiles, the disclosed systems use a super resolution neural network to generate a second modified digital image that corresponds to the digital image and includes the generated content portion at a second resolution that is higher than the first resolution.
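The tile-determination step above implies splitting an image into overlapping tiles that a per-tile super-resolution network can process; the helper below is a hypothetical sketch of that tiling geometry (the function name and parameters are assumptions, not from the filing).

```python
def tile_grid(height, width, tile=512, overlap=32):
    """Top-left/bottom-right boxes of overlapping tiles covering an image.

    Assumes tile <= height and tile <= width. The final row/column is
    clamped to the image border so every pixel is covered by at least one
    tile, and the overlap gives the model context for blending seams.
    """
    stride = tile - overlap
    ys = list(range(0, height - tile + 1, stride))
    xs = list(range(0, width - tile + 1, stride))
    if ys[-1] + tile < height:
        ys.append(height - tile)
    if xs[-1] + tile < width:
        xs.append(width - tile)
    return [(y, x, y + tile, x + tile) for y in ys for x in xs]
```

Corresponding tiles from the original image and the first modified image would then be fed jointly to the super-resolution network, and overlapping outputs blended back together.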

IPC Classes

  • G06T 3/4053 - Scaling of whole images or parts thereof, e.g. expanding or contracting, based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
  • G06T 3/4046 - Scaling of whole images or parts thereof, e.g. expanding or contracting, using neural networks

10.

Interactive Network for Selecting, Ranking, Summarizing, and Exploring Data Insights

      
Application Number 18649468
Status Pending
Filing Date 2024-04-29
Date of First Publication 2025-10-30
Owner Adobe Inc. (USA)
Inventor(s)
  • Hoffswell, Jane Elizabeth
  • Zhang, Wei
  • Soares Bursztyn, Victor
  • Guo, Shunan
  • Bhutani, Prithvi
  • Martinez, Jesse
  • Koh, Eunyee
  • Trivedi, Abhisek

Abstract

Insight summary and prompt generation techniques are described. In one or more examples, a plurality of insights is generated from data extracted from digital content. A network representation is produced having a plurality of nodes based on the plurality of insights and a plurality of connections between corresponding insights. A selection is received of a subset of nodes from the plurality of nodes. A prompt is formed by grouping respective insights from the subset of nodes. An insight summary of the digital content is generated based on the prompt using generative artificial intelligence as implemented using one or more machine-learning models. The insight summary is then presented for output in a user interface.

IPC Classes

  • G06T 11/20 - Drawing from basic elements, e.g. lines or circles
  • G06F 40/40 - Processing or translation of natural language

11.

JOINT FRAMEWORK FOR OBJECT-CENTERED SHADOW DETECTION, REMOVAL, AND SYNTHESIS

      
Application Number 18651376
Status Pending
Filing Date 2024-04-30
Date of First Publication 2025-10-30
Owner Adobe Inc. (USA)
Inventor(s)
  • Wang, Tianyu
  • Kim, Soo Ye
  • Figueroa, Luis
  • Zheng, Haitian
  • Zhang, Jianming
  • Ding, Zhihong
  • Cohen, Scott
  • Lin, Zhe
  • Xiong, Wei

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that detect, remove, and synthesize shadows in a joint framework. In particular, the disclosed systems access an object mask of an object and a digital image depicting the object and its shadow. Furthermore, the disclosed systems perform object-centered shadow detection and removal to generate a modified digital image without the shadow by utilizing a shadow analyzer model. Moreover, the disclosed systems receive a user interaction to manipulate the object and generate a modified shadow utilizing a shadow synthesis model, where the shadow synthesis model is conditioned on a shadow mask generated by the shadow analyzer model.

IPC Classes

  • G06T 11/40 - Filling a planar surface by adding surface attributes, e.g. colour or texture
  • G06T 5/60 - Image enhancement or restoration using machine learning, e.g. neural networks
  • G06T 5/77 - Retouching; Inpainting; Scratch removal

12.

Cross-lingual meta-transfer learning adaptation to natural language understanding

      
Application Number 17655395
Patent Number 12455911
Status Granted - In Force
Filing Date 2022-03-18
Date of First Publication 2025-10-28
Grant Date 2025-10-28
Owner ADOBE INC. (USA)
Inventor(s)
  • M'Hamdi, Meryem
  • Kim, Doo Soon
  • Dernoncourt, Franck
  • Bui, Trung Huu

Abstract

Systems and methods for natural language processing are described. Embodiments of the present disclosure identify a task set including a plurality of pseudo tasks, wherein each of the plurality of pseudo tasks includes a support set corresponding to a first natural language processing (NLP) task and a query set corresponding to a second NLP task; update a machine learning model in an inner loop based on the support set; update the machine learning model in an outer loop based on the query set; and perform the second NLP task using the machine learning model.
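The inner/outer update structure described above follows the MAML family of meta-learning algorithms. The sketch below collapses the patent's support/query NLP tasks to a toy linear-regression model so the two loops are visible; it is a first-order illustration under that stated simplification, not the patented method.

```python
import numpy as np

def inner_outer_step(w, support, query, inner_lr=0.1, outer_lr=0.01):
    """One meta-learning step on a linear model with squared-error loss.

    Inner loop: adapt the weights with one gradient step on the support set.
    Outer loop: update the original weights using the query-set gradient
    evaluated at the adapted weights (first-order approximation).
    """
    xs, ys = support
    xq, yq = query
    grad_support = 2 * xs.T @ (xs @ w - ys) / len(ys)
    w_adapted = w - inner_lr * grad_support          # inner loop
    grad_query = 2 * xq.T @ (xq @ w_adapted - yq) / len(yq)
    return w - outer_lr * grad_query                 # outer loop
```

In the cross-lingual setting, the support and query sets would come from different pseudo tasks (e.g. different languages or NLP tasks), so the outer update rewards adaptations that transfer.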

IPC Classes

13.

Type font

      
Application Number 29935942
Patent Number D1100037
Status Granted - In Force
Filing Date 2024-04-04
Date of First Publication 2025-10-28
Grant Date 2025-10-28
Owner Adobe Inc. (USA)
Inventor(s) Slimbach, Robert Joseph

14.

CUSTOMIZATION ASSISTANT FOR TEXT-TO-IMAGE GENERATION

      
Application Number 18637914
Status Pending
Filing Date 2024-04-17
Date of First Publication 2025-10-23
Owner ADOBE INC. (USA)
Inventor(s)
  • Zhou, Yufan
  • Zhang, Ruiyi
  • Gu, Jiuxiang
  • Sun, Tong

Abstract

A method, apparatus, non-transitory computer readable medium, and system for image processing include obtaining an input image and a text prompt including an image modification request, generating a text response based on the input image and the text prompt, where the text response describes a modification to the input image corresponding to the image modification request, and generating a synthetic image based on the input image and an output embedding of a language generation model, where the synthetic image depicts the modification to the input image.

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06T 9/00 - Image coding

15.

ONE-STEP DIFFUSION WITH DISTRIBUTION MATCHING DISTILLATION

      
Application Number 18639301
Status Pending
Filing Date 2024-04-18
Date of First Publication 2025-10-23
Owner ADOBE INC. (USA)
Inventor(s)
  • Yin, Tianwei
  • Gharbi, Michaël
  • Zhang, Richard
  • Shechtman, Elya
  • Park, Taesung

Abstract

A method, apparatus, non-transitory computer readable medium, and system for image generation include obtaining a text prompt and a noise input, and then generating a synthetic image based on the text prompt and the noise input by performing a single pass with an image generation model. The image generation model is trained based on a multi-term loss comprising a positive term based on an output of a pre-trained model, and a negative term based on an output of a jointly-trained model.

IPC Classes

  • G06T 11/00 - 2D [two-dimensional] image generation

16.

GENERATING DIGITAL IMAGES UTILIZING A DIFFUSION-BASED NETWORK CONDITIONED ON LIGHTING-AWARE FEATURE REPRESENTATIONS

      
Application Number 18640429
Status Pending
Filing Date 2024-04-19
Date of First Publication 2025-10-23
Owner Adobe Inc. (USA)
Inventor(s)
  • Ren, Mengwei
  • Zhang, He
  • Xiong, Wei
  • Shu, Zhixin
  • Yoon, Jae Shin
  • Zhang, Jianming

Abstract

Methods, systems, and non-transitory computer readable storage media are disclosed for generating digital images with a diffusion-based generative neural network conditioned on background-extracted lighting features. The disclosed system determines, in response to a request to generate a digital image, a target background image for inserting a foreground object into the target background image. The disclosed system generates, from the target background image and utilizing a lighting conditioning neural network, a lighting feature representation indicating one or more lighting parameters of the target background image. Additionally, the disclosed system generates, utilizing a diffusion-based generative neural network conditioned on the lighting feature representation, the digital image including the foreground object inserted into the target background image based on a composite image comprising the foreground object and the target background image with a foreground mask corresponding to the foreground object.

IPC Classes

  • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
  • G06T 5/77 - Retouching; Inpainting; Scratch removal
  • G06V 10/56 - Extraction of image or video features relating to colour
  • G06V 10/60 - Extraction of image or video features relating to luminescence properties, e.g. using a reflectance or lighting model
  • G06V 10/771 - Feature selection, e.g. selecting representative features from a multi-dimensional feature space
  • H04N 5/272 - Means for inserting a foreground image into a background image, i.e. inlay, outlay

17.

Techniques for Triangle-level Rejection Sampling in Three-dimensional Object Meshes

      
Application Number 18643369
Status Pending
Filing Date 2024-04-23
Date of First Publication 2025-10-23
Owner Adobe Inc. (USA)
Inventor(s)
  • Boubekeur, Tamy
  • Thonat, Theo
  • Schertzer, Jèrémie

Abstract

A graphics generation computing device applies triangle-level rejection sampling to generate a set of surface mesh point samples. A highly parallelized processor included in the graphics generation computing device generates a triangle-level sampling array that includes triangle-level sampling data for each triangle included in a 3D object mesh. Based on the data in the triangle-level sampling array, the highly parallelized processor determines a quantity of point samples in each triangle. The highly parallelized processor calculates, for each point sample, point sample location data that indicates a location of the point sample on a triangle. The highly parallelized processor modifies a set of point samples to include the location data. In some cases, the set of point samples is used to generate digital fibers or other structure data objects at the point sample locations indicated by the set of point samples.
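The filing's per-triangle rejection-sampling and parallelization details are not reproduced here. As a simpler stand-in for the same two outputs the abstract names (a sample count per triangle and a location per sample), this sketch allocates counts in proportion to triangle area and places each sample at uniform barycentric coordinates:

```python
import numpy as np

def sample_mesh_points(vertices, triangles, n_samples, rng=None):
    """Area-proportional point samples on a triangle mesh.

    Allocates a per-triangle sample count from the area distribution,
    then places each sample at uniform barycentric coordinates.
    """
    rng = rng or np.random.default_rng(0)
    v0, v1, v2 = (vertices[triangles[:, k]] for k in range(3))
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    counts = rng.multinomial(n_samples, areas / areas.sum())
    points = []
    for t, c in enumerate(counts):
        u, v = rng.random((c, 1)), rng.random((c, 1))
        flip = (u + v) > 1.0                  # reflect to stay inside the triangle
        u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
        points.append(v0[t] + u * (v1[t] - v0[t]) + v * (v2[t] - v0[t]))
    return np.concatenate(points)
```

The resulting point samples could then seed digital fibers or other structures at the sampled surface locations, as the abstract describes.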

IPC Classes

  • G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation
  • G06T 1/20 - Processor architectures; Processor configuration, e.g. pipelining

18.

MASKED LATENT DECODER FOR IMAGE INPAINTING

      
Application Number 18957817
Status Pending
Filing Date 2024-11-24
Date of First Publication 2025-10-23
Owner ADOBE INC. (USA)
Inventor(s)
  • Zheng, Haitian
  • Zhang, Zhifei
  • Lin, Zhe
  • Zhou, Yuqian

Abstract

A method, apparatus, non-transitory computer readable medium, and system for image processing include obtaining an input image and an input mask, where the input image depicts a scene and the input mask indicates an inpainting region of the input image. A latent code is generated, using a generator network of an image generation model, based on the input image and the input mask. The latent code includes synthesized content in the inpainting region. A synthetic image is generated, using a decoder network of the image generation model, based on the latent code and the input image. The synthetic image depicts the scene from the input image outside the inpainting region and includes the synthesized content within the inpainting region, and the synthetic image comprises a seamless transition across a boundary of the inpainting region.

IPC Classes

  • G06T 5/77 - Retouching; Inpainting; Scratch removal
  • G06T 5/60 - Image enhancement or restoration using machine learning, e.g. neural networks

19.

REMOVING OBJECTS AT IMAGE CAPTURE TIME

      
Application Number 19255350
Status Pending
Filing Date 2025-06-30
Date of First Publication 2025-10-23
Owner Adobe Inc. (USA)
Inventor(s)
  • Shukla, Sankalp
  • Gupta, Angad Kumar
  • Gupta, Sourabh

Abstract

The present disclosure relates to systems, non-transitory computer-readable media, and methods for removing objects from an image stream at capture time of a digital image. For example, the disclosed system contemporaneously detects and segments objects from a digital image stream being previewed in a camera viewfinder graphical user interface of a client device. The disclosed system removes selected objects from the image stream and fills a hole left by the removed object with a content aware fill. Moreover, the disclosed system displays the image stream with the removed object and content fill as the image stream is previewed by a user prior to capturing a digital image from the image stream.

IPC Classes

  • H04N 5/272 - Means for inserting a foreground image into a background image, i.e. inlay, outlay
  • H04N 5/262 - Studio circuits, e.g. for mixing, switching over, changing the character of the image, or other special effects
  • H04N 23/61 - Control of cameras or camera modules based on recognised objects
  • H04N 23/63 - Control of cameras or camera modules using electronic viewfinders

20.

CONTRIBUTION DATA CALIBRATION

      
Application Number 18637718
Status Pending
Filing Date 2024-04-17
Date of First Publication 2025-10-23
Owner ADOBE INC. (USA)
Inventor(s)
  • Huang, Bei
  • Yuan, Yuan
  • Xu, Yiming
  • Yuan, Qilong
  • Xu, Jin
  • Wang, Lijing
  • Wang, Bowen
  • Li, Yancheng
  • Yan, Zhenyu

Abstract

A method, non-transitory computer readable medium, apparatus, and system for data processing include obtaining, by a multi-touch attribution model, individual-level user interaction data from a digital content channel, and computing, using the multi-touch attribution model, channel contribution data based on the individual-level user interaction data. Some embodiments include training, using a training component, an aggregate attribution model based on the channel contribution data. Some embodiments include generating, using a calibration component, an individual channel contribution value for the digital content channel based on the channel contribution data and the aggregate attribution model.

IPC Classes

21.

OBJECT-CENTRIC CONTACT MODELING AND HAND GRASP GENERATION

      
Application Number 18638487
Status Pending
Filing Date 2024-04-17
Date of First Publication 2025-10-23
Owner Adobe Inc. (USA)
Inventor(s)
  • Zhou, Yang
  • Liu, Shaowei
  • Yang, Jimei

Abstract

In some embodiments, a computing system receives a representation of an object from a client device. The computing system generates an object-centric contact representation for hand-object interaction based on the representation of the object. The contact representation includes a contact map indicating contact points on the representation of the object, a hand part map indicating hand parts contacting the object, and a direction map comprising contact directions of the hand parts contacting the object. The computing system generates a hand grasp representation with respect to the object based on the contact representation using a model-based optimization algorithm. The computing system provides the hand grasp representation to the client device.

IPC Classes

  • G06F 30/23 - Optimisation, verification or simulation of the designed object using finite element methods [FEM] or finite difference methods [FDM]
  • G06T 17/10 - Volume description, e.g. cylinders, cubes or using CSG [constructive solid geometry]

22.

Relightable Scene Reconstructions Using Radiance Guided Material Extraction

      
Application Number 18639346
Status Pending
Filing Date 2024-04-18
Date of First Publication 2025-10-23
Owner Adobe Inc. (USA)
Inventor(s)
  • Michel, Élie Louis Simon
  • Philip, Julien Olivier Victor
  • Gomez Mijangos, Diego Andre
  • Kaiser, Adrien Michel Paul

Abstract

Techniques for relightable scene reconstructions using radiance guided material extraction are described to accurately render 3D scenes under different lighting conditions and perspectives than original source images from which the scenes are constructed. In an example, a processing device is operable to receive a plurality of digital images that depict a scene from multiple perspectives, determine a view-independent radiance of the scene based on the plurality of digital images, and determine a view-dependent radiance of the scene based on the plurality of digital images. The processing device is further operable to determine a set of lighting conditions associated with an input perspective, generate a synthesized image having a reconstruction of the scene based on the set of lighting conditions using the view-independent radiance and the view-dependent radiance, and output the synthesized image.

IPC Classes

23.

GENERATING AND MODIFYING DIGITAL IMAGE DATABASES THROUGH FAIRNESS DEDUPLICATION

      
Application Number 18639568
Status Pending
Filing Date 2024-04-18
Date of First Publication 2025-10-23
Owner Adobe Inc. (USA)
Inventor(s)
  • Slyman, Eric
  • Kafle, Kushal
  • Cohen, Scott

Abstract

The present disclosure relates to systems, non-transitory computer-readable media, and methods for generating and modifying databases using a fairness deduplication algorithm. In particular, in one or more embodiments, the disclosed systems generate, within an embedding space, semantic embeddings from a plurality of digital images stored in a database. In some embodiments, the disclosed systems identify, from among the semantic embeddings in the embedding space, a preservable embedding according to a preservation prototype indicating a semantic concept to preserve within the database. In one or more embodiments, the disclosed systems generate a modified database by pruning one or more digital images corresponding to semantic embeddings other than the preservable embedding from the database.
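The pruning logic above can be illustrated with a greedy cosine-similarity sweep: near-duplicate embeddings are dropped unless they sit close to the preservation prototype. The thresholds and function name are hypothetical, and the patent's embedding model and prototype construction are not shown.

```python
import numpy as np

def fairness_dedup(embeddings, prototype, dup_threshold=0.95, preserve_threshold=0.8):
    """Indices of embeddings to keep: near-duplicates are pruned unless the
    embedding is close (cosine similarity) to the preservation prototype."""
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    proto = prototype / np.linalg.norm(prototype)
    preservable = emb @ proto >= preserve_threshold
    keep, kept_vecs = [], []
    for i, e in enumerate(emb):
        if preservable[i] or not any(e @ k >= dup_threshold for k in kept_vecs):
            keep.append(i)
            kept_vecs.append(e)
    return keep
```

The images whose indices survive the sweep form the modified database; everything else is pruned as a semantic near-duplicate.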

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using clustering, e.g. of similar faces in social networks
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks

24.

GENERATING VISUALLY AWARE DESIGN LAYOUTS USING A MULTI-DOMAIN DIFFUSION NEURAL NETWORK

      
Application Number 18641137
Status Pending
Filing Date 2024-04-19
Date of First Publication 2025-10-23
Owner Adobe Inc. (USA)
Inventor(s)
  • Wang, Zhaowen
  • Zhao, Nanxuan
  • Yang, Jimei
  • Liu, Difan
  • Shabani, Mohammad Amin

Abstract

The present disclosure relates to systems, methods, and non-transitory computer readable media that generate layouts for digital designs from image elements via multi-domain diffusion. For instance, in some embodiments, the disclosed systems receive, from a client device, a plurality of image elements for generating a digital design. The disclosed systems generate, using an encoder of a multi-domain diffusion neural network, embeddings representing visual characteristics and bounding box characteristics of the plurality of image elements. The disclosed systems further generate, using the multi-domain diffusion neural network, a layout for the digital design from the visual characteristics and bounding box characteristics of the embeddings. Additionally, the disclosed systems provide the layout for display on the client device.

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text

25.

GENERATIVE ARTIFICIAL INTELLIGENCE VISUAL EFFECTS

      
Application Number 18677874
Status Pending
Filing Date 2024-05-30
Date of First Publication 2025-10-23
Owner Adobe Inc. (USA)
Inventor(s)
  • Agarwal, Rishav
  • Iyer, Siddharth Srinivasan
  • Yadav, Shubbham
  • Jain, Sanyam
  • Brdiczka, Oliver
  • Katakol, Sudeep Siddheshwar
  • Bourgin, David Davenport
  • Darabi, Aliakbar

Abrégé

Generative artificial intelligence visual effect techniques are described. A prompt, for example, is received. The prompt includes text specifying a visual effect and text specifying a shape. A mask is formed defining a portion of digital content based on an object selected from digital content. The visual effect is generated using generative artificial intelligence by one or more machine-learning models based on the text specifying the visual effect, the text specifying the shape, and the mask. The digital content is presented as having the visual effect applied to the portion of the digital content for display in a user interface.

Classes IPC

  • G06T 11/60 - Édition de figures et de texte; Combinaison de figures ou de texte
  • G06T 11/00 - Génération d'images bidimensionnelles [2D]

26.

PROXY-GUIDED IMAGE EDITING

      
Numéro d'application 18956284
Statut En instance
Date de dépôt 2024-11-22
Date de la première publication 2025-10-23
Propriétaire ADOBE INC. (USA)
Inventeur(s)
  • Zhou, Yuqian
  • Singh, Krishna Kumar
  • Lin, Zhe

Abrégé

A method, apparatus, non-transitory computer readable medium, and system for image processing include obtaining an input image and an input mask, wherein the input mask indicates a region of the input image to be modified and generating, using a first image generation model, an intermediate result based on the input image and the input mask, wherein the intermediate result modifies the region of the input image indicated by the input mask. A second image generation model generates a synthetic image based on the input image and the intermediate result, wherein the synthetic image depicts the input image with content from the modified region at a higher level of detail than the intermediate result.

Classes IPC

  • G06T 5/77 - Retouche; Restauration; Suppression des rayures
  • G06T 5/70 - Débruitage; Lissage
  • G06T 7/11 - Découpage basé sur les zones
  • G06T 11/00 - Génération d'images bidimensionnelles [2D]
  • G06V 10/774 - Génération d'ensembles de motifs de formation; Traitement des caractéristiques d’images ou de vidéos dans les espaces de caractéristiques; Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant l’intégration et la réduction de données, p. ex. analyse en composantes principales [PCA] ou analyse en composantes indépendantes [ICA] ou cartes auto-organisatrices [SOM]; Séparation aveugle de source; méthodes de bootstrap, p. ex. “bagging” ou “boosting”

27.

GENERATING SCALABLE VECTOR TEXT EFFECTS

      
Numéro d'application 18631521
Statut En instance
Date de dépôt 2024-04-10
Date de la première publication 2025-10-16
Propriétaire ADOBE INC. (USA)
Inventeur(s)
  • Ungureanu-Contes, Adrian-Stefan
  • Lupascu, Marian
  • Lungu-Stan, Vlad-Constantin
  • Mironică, Ionuț

Abrégé

A method, apparatus, non-transitory computer readable medium, and system for image processing include obtaining a pattern prompt and a text image, where the pattern prompt describes a visual pattern and the text image depicts text, generating a pattern image based on the pattern prompt, where the pattern image depicts the visual pattern, and generating a patterned text image based on the pattern image and the pattern prompt.

Classes IPC

  • G06T 11/00 - Génération d'images bidimensionnelles [2D]

28.

INTERMEDIATE NOISE RETRIEVAL FOR IMAGE GENERATION

      
Numéro d'application 18637024
Statut En instance
Date de dépôt 2024-04-16
Date de la première publication 2025-10-16
Propriétaire ADOBE INC. (USA)
Inventeur(s)
  • Agarwal, Shubham
  • Mitra, Subrata
  • Karanam, Srikrishna
  • Mukherjee, Koyel
  • Saini, Shiv Kumar

Abrégé

A method, apparatus, non-transitory computer readable medium, and system for image processing include obtaining an input prompt and retrieving an intermediate noise state based on a similarity between the input prompt and a candidate prompt corresponding to the intermediate noise state. An image generation model generates a synthetic image based on the input prompt and the intermediate noise state.
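The retrieval step hinges on prompt similarity. A minimal sketch, assuming a cache of (prompt, noise state) entries and a bag-of-words cosine similarity; the names and the similarity measure are illustrative stand-ins (a production system would compare learned prompt embeddings):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two prompts as bag-of-words vectors."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_noise_state(cache, prompt, min_sim=0.5):
    """Return the cached intermediate noise state of the most similar
    candidate prompt, or None (i.e. fall back to denoising from scratch)."""
    best = max(cache, key=lambda entry: cosine(entry["prompt"], prompt))
    return best["noise_state"] if cosine(best["prompt"], prompt) >= min_sim else None

cache = [
    {"prompt": "a red car on a road", "noise_state": "s1"},
    {"prompt": "a cat sleeping", "noise_state": "s2"},
]
print(retrieve_noise_state(cache, "a red car on a highway"))  # s1
```

Reusing a cached intermediate state lets the diffusion process skip the early denoising steps that similar prompts share.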

Classes IPC

  • G06T 11/00 - Génération d'images bidimensionnelles [2D]
  • G06T 1/60 - Gestion de mémoire
  • G06T 5/60 - Amélioration ou restauration d'image utilisant l’apprentissage automatique, p. ex. les réseaux neuronaux
  • G06T 5/70 - Débruitage; Lissage

29.

TEXTURE BASED CONSISTENCY FOR GENERATIVE AI ASSETS, EFFECTS AND ANIMATIONS

      
Numéro d'application 18665130
Statut En instance
Date de dépôt 2024-05-15
Date de la première publication 2025-10-16
Propriétaire ADOBE INC. (USA)
Inventeur(s)
  • Lungu-Stan, Vlad-Constantin
  • Ungureanu-Contes, Adrian-Stefan
  • Mironica, Ionuţ

Abrégé

A method, apparatus, non-transitory computer readable medium, and system for image processing include obtaining an input texture image and a plurality of image masks, generating a plurality of image assets corresponding to the plurality of image masks based on the input texture image, and generating a combined asset including the plurality of image assets. The plurality of image assets have a consistent texture based on the input texture image.

Classes IPC

  • G06T 5/50 - Amélioration ou restauration d'image utilisant plusieurs images, p. ex. moyenne ou soustraction
  • G06T 7/40 - Analyse de la texture
  • G06T 13/80 - Animation bidimensionnelle [2D], p. ex. utilisant des motifs graphiques programmables

30.

ACTIVELY-LEARNED CONTEXT MODELING FOR IMAGE COMPRESSION

      
Numéro d'application 19044293
Statut En instance
Date de dépôt 2025-02-03
Date de la première publication 2025-10-16
Propriétaire Adobe Inc. (USA)
Inventeur(s)
  • Wu, Gang
  • Li, Yang
  • Petrangeli, Stefano
  • Swaminathan, Viswanathan
  • Wang, Haoliang
  • Rossi, Ryan A.
  • Song, Zhao

Abrégé

Embodiments described herein provide methods and systems for facilitating actively-learned context modeling. In one embodiment, a subset of data is selected from a training dataset corresponding with an image to be compressed, the subset of data corresponding with a subset of data of pixels of the image. A context model is generated using the selected subset of data. The context model is generally in the form of a decision tree having a set of leaf nodes. Entropy values corresponding with each leaf node of the set of leaf nodes are determined. Each entropy value indicates an extent of diversity of context associated with the corresponding leaf node. Additional data from the training dataset is selected based on the entropy values corresponding with the leaf nodes. The updated subset of data is used to generate an updated context model for use in performing compression of the image.
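The entropy-driven selection step can be illustrated with a toy sketch. It assumes each leaf of the decision-tree context model tracks symbol counts and a pool of not-yet-selected pixels; the dict layout and names are hypothetical, not the patent's data structures:

```python
import math

def entropy(counts):
    """Shannon entropy (bits) of a symbol-count distribution."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values() if c)

def select_next_samples(leaf_pools, budget):
    """Spend the extra sampling budget on the leaves whose context is
    most diverse (highest entropy), i.e. least well modeled so far."""
    ranked = sorted(leaf_pools, key=lambda leaf: entropy(leaf["counts"]), reverse=True)
    picked = []
    for leaf in ranked:
        take = min(budget - len(picked), len(leaf["unlabeled"]))
        picked.extend(leaf["unlabeled"][:take])
        if len(picked) == budget:
            break
    return picked

# Toy run: leaf B sees a diverse symbol mix, leaf A is nearly pure,
# so the additional samples should come from B first.
leaves = [
    {"counts": {0: 98, 1: 2}, "unlabeled": ["a1", "a2", "a3"]},
    {"counts": {0: 30, 1: 35, 2: 35}, "unlabeled": ["b1", "b2", "b3"]},
]
print(select_next_samples(leaves, 2))  # ['b1', 'b2']
```

The updated subset would then retrain the context model, iterating until the budget or a convergence criterion is reached.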

Classes IPC

  • H04N 19/96 - Codage au moyen d'une arborescence, p. ex. codage au moyen d'une arborescence quadratique
  • G06N 20/00 - Apprentissage automatique
  • H04N 19/182 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c.-à-d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant un pixel
  • H04N 19/184 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c.-à-d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant des bits, p. ex. de flux vidéo compressé
  • H04N 19/50 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage prédictif
  • H04N 19/91 - Codage entropique, p. ex. codage à longueur variable ou codage arithmétique

31.

COLLABORATION CONTROLS FOR DOCUMENT SECTIONS

      
Numéro d'application 19192026
Statut En instance
Date de dépôt 2025-04-28
Date de la première publication 2025-10-16
Propriétaire Adobe Inc. (USA)
Inventeur(s)
  • Bansal, Ayush
  • Sinha, Deep

Abrégé

Methods and systems are provided for facilitating document collaboration in accordance with collaboration controls. In embodiments, an indication of a collaboration control for a collaborator of a document is obtained. The collaboration control generally indicates an edit permission for a document section of the document in relation to the collaborator. Thereafter, a set of collaboration control data for the document is generated. In embodiments, the set of collaboration control data includes the collaboration control indicating the edit permission for the document section of the document in relation to the collaborator. Based on an input (e.g., edit) by the collaborator to the document section of the document, a determination is made, using the set of collaboration control data, as to whether to enable an edit to the document section of the document.
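The permission check itself reduces to a lookup over the collaboration-control data. A sketch under two assumptions that are mine, not the disclosure's: control records are a flat list, and sections without any control fall back to ordinary document-wide editing while controlled sections require an explicit edit grant:

```python
def can_edit(controls, collaborator, section):
    """Decide whether an edit to `section` by `collaborator` is enabled.

    Assumed policy: uncontrolled sections are freely editable; a section
    with any control record requires an explicit 'edit' permission.
    """
    section_controls = [c for c in controls if c["section"] == section]
    if not section_controls:
        return True  # no control on this section: normal collaboration applies
    return any(
        c["collaborator"] == collaborator and c["permission"] == "edit"
        for c in section_controls
    )

controls = [
    {"section": "intro", "collaborator": "ana", "permission": "edit"},
    {"section": "intro", "collaborator": "ben", "permission": "view"},
]
print(can_edit(controls, "ben", "intro"))  # False
```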

Classes IPC

  • H04L 65/401 - Prise en charge des services ou des applications dans laquelle les services impliquent une session principale en temps réel et une ou plusieurs sessions parallèles additionnelles en temps réel ou sensibles au temps, p. ex. accès partagé à un tableau blanc ou mise en place d’une sous-conférence
  • G06F 40/166 - Édition, p. ex. insertion ou suppression
  • H04L 9/40 - Protocoles réseaux de sécurité

32.

MULTI-MODAL RETRIEVAL USING AN INTERMEDIATE NOISE STATE

      
Numéro d'application 18632414
Statut En instance
Date de dépôt 2024-04-11
Date de la première publication 2025-10-16
Propriétaire ADOBE INC. (USA)
Inventeur(s)
  • Tanjim, Md Mehrab
  • Lu, Chen-Yi
  • Mahadik, Kanak
  • Rao, Anup Bandigadi

Abrégé

A method, apparatus, non-transitory computer readable medium, and system for data processing include obtaining a text prompt and generating a first intermediate noise state based on the text prompt, retrieving a second intermediate noise state based on the text prompt and the first intermediate noise state, and generating a synthetic image based on the text prompt and the second intermediate noise state.

Classes IPC

33.

GENERATING HIERARCHICAL ENTITY SEGMENTATIONS UTILIZING SELF-SUPERVISED MACHINE LEARNING MODELS

      
Numéro d'application 18632933
Statut En instance
Date de dépôt 2024-04-11
Date de la première publication 2025-10-16
Propriétaire Adobe Inc. (USA)
Inventeur(s)
  • Gu, Jiuxiang
  • Kuen, Jason Wen Yong
  • Tan, Hao
  • Zhang, Ruiyi
  • Zhao, Handong
  • Nenkova, Ani
  • Sun, Tong
  • Cao, Shengcao

Abrégé

The present disclosure relates to systems, non-transitory computer-readable media, and methods for hierarchical entity segmentation. In particular, in one or more embodiments, the disclosed systems receive a digital image comprising a plurality of object entities. In addition, in some embodiments, the disclosed systems generate, utilizing a segmentation model comprising parameters generated according to pseudo-labels indicating hierarchies of segmentation masks for a set of training digital images, a hierarchical segmentation indicating hierarchical relations of the plurality of object entities of the digital image. Moreover, in some embodiments, the disclosed systems generate, for the digital image, a segmentation map from the hierarchical segmentation of the plurality of object entities.
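One concrete way to realize "hierarchies of segmentation masks" is containment between binary masks: an entity whose mask lies (almost) entirely inside another's is its child. The sketch below is illustrative only (the tolerance and the greedy parent choice are assumptions, not the patent's pseudo-label procedure):

```python
def build_hierarchy(masks):
    """Map each mask index to its parent index (None for roots),
    where a parent is the smallest larger mask that contains it."""
    def area(m):
        return sum(sum(row) for row in m)

    def contains(a, b, tol=0.95):
        inter = sum(x * y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
        return inter / area(b) >= tol

    order = sorted(range(len(masks)), key=lambda i: area(masks[i]), reverse=True)
    parent = {i: None for i in range(len(masks))}
    for pos, i in enumerate(order):
        # walk larger masks from smallest to largest: first container wins
        for j in reversed(order[:pos]):
            if contains(masks[j], masks[i]):
                parent[i] = j
                break
    return parent
```

For example, a face mask nested in a person mask nested in the whole-image mask yields a three-level chain.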

Classes IPC

  • G06T 7/12 - Découpage basé sur les bords
  • G06V 10/44 - Extraction de caractéristiques locales par analyse des parties du motif, p. ex. par détection d’arêtes, de contours, de boucles, d’angles, de barres ou d’intersections; Analyse de connectivité, p. ex. de composantes connectées
  • G06V 10/762 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant le regroupement, p. ex. de visages similaires sur les réseaux sociaux
  • G06V 20/70 - Étiquetage du contenu de scène, p. ex. en tirant des représentations syntaxiques ou sémantiques

34.

GENERATING DIGITAL CONTENT CONSISTENT WITH CONTEXT-SPECIFIC GUIDELINES UTILIZING PROMPT AUGMENTATION AND MODEL TUNING

      
Numéro d'application 18634240
Statut En instance
Date de dépôt 2024-04-12
Date de la première publication 2025-10-16
Propriétaire Adobe Inc. (USA)
Inventeur(s)
  • Sankar, Varsha
  • Venkitachalam, Shankar
  • Yadagiri, Meghanath Macha
  • Moosaei, Maryam
  • Pai, Deepak
  • Basu, Debraj Debashish

Abrégé

The present disclosure is directed toward systems, methods, and non-transitory computer readable media that provide a contextual content generation system that trains and implements a unique machine learning architecture to generate context-specific digital content items based on a digital guideline document. In particular, the disclosed systems select a content generation method from among prompt engineering and/or updating one or more machine learning models to generate digital content. For example, the disclosed systems utilize machine learning models to extract key elements from a digital guideline document comprising context-specific guidelines for digital content. Further, the disclosed systems generate an augmented prompt comprising indications of key elements from the digital guideline document. In addition, the disclosed systems select a content generation method from among prompt engineering and/or updating machine learning models to generate the digital content item which incorporates digital content corresponding to the context-specific guidelines based on the augmented prompt.

Classes IPC

35.

ENHANCING LIGHT TEXT IN SCANNED DOCUMENTS WHILE PRESERVING DOCUMENT FIDELITY

      
Numéro d'application 18931424
Statut En instance
Date de dépôt 2024-10-30
Date de la première publication 2025-10-16
Propriétaire Adobe Inc. (USA)
Inventeur(s)
  • Mondal, Prasenjit
  • Soni, Sachin

Abrégé

The present disclosure relates to systems, non-transitory computer-readable media, and methods that implement an image filter for enhancing light text and removing document shadows. In particular embodiments, the disclosed systems use a modified adaptive thresholding approach that relies on image gradients to efficiently guide the thresholding process. In addition, the disclosed systems use a machine-learning model to generate a document shadow map. The document shadow map can include text reflections. Accordingly, the disclosed systems remove text reflections from the document shadow map (e.g., by using an interpolated shadow intensity value of neighboring shadow map pixels). In turn, the disclosed systems use the document text mask and the document shadow map cleaned of text reflections to remove shadows from the digital image. Further, the disclosed systems enhance text in the shadow-removed digital image based on contrast stretching.
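The gradient-guided thresholding idea can be sketched on a plain 2-D list of grayscale values: only pixels whose forward-difference gradient suggests a stroke edge are compared against the local mean, so flat paper and uniform shadow regions never binarize to text. Window size, offset, and the gradient floor below are illustrative parameters, not values from the disclosure:

```python
def grad_mag(img, x, y):
    """Forward-difference gradient magnitude (a rough edge indicator)."""
    h, w = len(img), len(img[0])
    gx = img[y][min(x + 1, w - 1)] - img[y][x]
    gy = img[min(y + 1, h - 1)][x] - img[y][x]
    return abs(gx) + abs(gy)

def adaptive_binarize(img, win=1, offset=4, grad_floor=2):
    """Return a mask (0 = text, 1 = background) via gradient-guided
    adaptive mean thresholding on a grayscale image (list of lists)."""
    h, w = len(img), len(img[0])
    out = [[1] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if grad_mag(img, x, y) < grad_floor:
                continue  # flat region: skip the thresholding test entirely
            ys = range(max(0, y - win), min(h, y + win + 1))
            xs = range(max(0, x - win), min(w, x + win + 1))
            vals = [img[j][i] for j in ys for i in xs]
            local_mean = sum(vals) / len(vals)
            if img[y][x] < local_mean - offset:
                out[y][x] = 0  # darker than its neighbourhood: text
    return out
```

Skipping flat regions is what keeps a smooth shadow gradient from being mistaken for light text.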

Classes IPC

  • G06T 5/80 - Correction géométrique
  • G06T 5/40 - Amélioration ou restauration d'image utilisant des techniques d'histogrammes
  • G06T 5/60 - Amélioration ou restauration d'image utilisant l’apprentissage automatique, p. ex. les réseaux neuronaux
  • G06T 5/92 - Modification de la plage dynamique d'images ou de parties d'images basée sur les propriétés globales des images
  • G06V 10/82 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant les réseaux neuronaux

36.

STYLE KITS GENERATION AND CUSTOMIZATION

      
Numéro d'application 18958842
Statut En instance
Date de dépôt 2024-11-25
Date de la première publication 2025-10-16
Propriétaire ADOBE INC. (USA)
Inventeur(s)
  • Hurlburt, Kelly
  • Hopper, Brooke
  • Vuong, Minh-Anh
  • Tall, Tidjane

Abrégé

A method, apparatus, non-transitory computer readable medium, and system for image processing include obtaining a style kit including a first image generation input indicating a first image attribute, a second image generation input indicating a second image attribute, and a selectability parameter indicating that the second image generation input is selectable. A third image generation input is received from a user based on the selectability parameter, wherein the third image generation input indicates a third image attribute different from the second image attribute of the second image generation input. An image generation model generates a synthetic image based on the style kit, the first image generation input, and the third image generation input, wherein the synthetic image has the first image attribute and the third image attribute.

Classes IPC

  • G06T 11/00 - Génération d'images bidimensionnelles [2D]

37.

Using reinforcement learning to recommend data visualizations

      
Numéro d'application 18668888
Numéro de brevet 12443621
Statut Délivré - en vigueur
Date de dépôt 2024-05-20
Date de la première publication 2025-10-14
Date d'octroi 2025-10-14
Propriétaire ADOBE INC. (USA)
Inventeur(s)
  • Porwal, Vibhor
  • Mitra, Subrata
  • Agarwal, Shubham
  • Rossi, Ryan A
  • Ahmad, Ghazi Shazan
  • Doshi, Manav Ketan
  • Kumar Paila, Syam Manoj

Abrégé

Methods and systems are provided for using reinforcement learning to recommend data visualizations. In embodiments described herein, statistical features for each sample of corresponding samples of a dataset are determined by applying each sample of the dataset to a data visualization recommendation model. The computational cost of each of the statistical features for each of the samples is determined via a regression model. Recommended statistical features are determined by sequentially applying each sample to a reinforcement learning model with a computational budget and with the corresponding computational costs of the statistical features of each sample. A data visualization is then displayed that is generated by applying the dataset and the recommended statistical features to the data visualization recommendation model.
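The budget-constrained selection can be pictured with a greedy utility-per-cost stand-in for the learned policy; the reinforcement learning model would learn this trade-off from feedback rather than use a fixed ratio, and the record layout here is illustrative:

```python
def pick_features(features, budget):
    """Greedily choose statistical features with the best estimated
    utility per unit of predicted computational cost, within a budget."""
    ranked = sorted(features, key=lambda f: f["utility"] / f["cost"], reverse=True)
    chosen, spent = [], 0.0
    for f in ranked:
        if spent + f["cost"] <= budget:
            chosen.append(f["name"])
            spent += f["cost"]
    return chosen

features = [
    {"name": "mean", "cost": 1.0, "utility": 3.0},
    {"name": "corr", "cost": 5.0, "utility": 10.0},
    {"name": "skew", "cost": 2.0, "utility": 2.0},
]
print(pick_features(features, 6.0))  # ['mean', 'corr']
```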

Classes IPC

  • G06F 16/26 - Exploration de données visuelles; Navigation dans des données structurées

38.

GROUP PORTRAIT PHOTO EDITING

      
Numéro d'application 18625461
Statut En instance
Date de dépôt 2024-04-03
Date de la première publication 2025-10-09
Propriétaire ADOBE INC. (USA)
Inventeur(s)
  • Jiang, Yuming
  • Zhao, Nanxuan
  • Liu, Qing
  • Singh, Krishna Kumar

Abrégé

A method, apparatus, non-transitory computer readable medium, and system for image generation includes obtaining an input image depicting an entity and a skeleton map depicting a pose of the entity and performing a cross-attention mechanism between image features of the input image and entity features representing the pose to obtain modified image features. An output image is generated based on the modified image features that depicts the entity with the pose.
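The cross-attention step, image-feature queries attending over pose features, is standard scaled dot-product attention. A single-head sketch without the learned projection matrices (which the real model would include):

```python
import math

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: each image-feature query is
    replaced by a softmax-weighted mix of the pose-feature values."""
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        m = max(scores)                       # stabilize the softmax
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

A query that aligns strongly with one pose key pulls its output feature toward that key's value, which is how the pose conditions the modified image features.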

Classes IPC

  • G06T 11/60 - Édition de figures et de texte; Combinaison de figures ou de texte
  • G06T 5/77 - Retouche; Restauration; Suppression des rayures

39.

GENERATIVE ARTIFICIAL INTELLIGENCE (AI) CONTENT STRATEGY

      
Numéro d'application 18625484
Statut En instance
Date de dépôt 2024-04-03
Date de la première publication 2025-10-09
Propriétaire Adobe Inc. (USA)
Inventeur(s)
  • Xiao, Chang
  • Courtois, Zeus
  • Surange, Sonali
  • Hanson-Regalado, Jacob Benjamin
  • Koh, Eunyee
  • Miller, Gavin Stuart Peter

Abrégé

Generative artificial intelligence (AI) content strategy techniques are described. In one or more examples, a content brief is received describing a goal to be achieved in controlling digital content output. Content brief data is extracted from the content brief and a content strategy is generated based on the content brief data using generative artificial intelligence implemented using one or more machine-learning models.

Classes IPC

  • G06Q 30/0207 - Remises ou incitations, p. ex. coupons ou rabais
  • G06Q 30/0204 - Segmentation du marché
  • G06Q 30/0226 - Systèmes d’incitation à un usage fréquent, p. ex. programmes de miles pour voyageurs fréquents ou systèmes de points

40.

INJECTIVE 3D DEFORMATIONS BASED ON 2D MESH DEFORMATIONS

      
Numéro d'application 18630007
Statut En instance
Date de dépôt 2024-04-09
Date de la première publication 2025-10-09
Propriétaire Adobe Inc. (USA)
Inventeur(s)
  • Sun, Bo
  • Groueix, Thibault
  • Aigerman, Noam

Abrégé

Aspects and features of the present disclosure relate to providing injective three-dimensional (3D) deformations based on two-dimensional (2D) mesh deformations. For example, a method involves defining at least one 2D mesh deformation based on a designated position of an object represented by an input neural radiance field (NeRF). The method also involves applying the 2D mesh deformation(s) to a 3D piecewise-linear map that operates over a plane and preserves a normal direction to produce prismatic maps. The method further involves composing a 3D deformation for the object from layers defined by the prismatic maps, and parameterizing the 3D piecewise-linear map. The method additionally involves storing or rendering, using the 3D piecewise-linear map, a deformed NeRF injectively representing the object in the designated position. Aspects also include computer systems, apparatus, and computer programs configured to perform the method.

Classes IPC

  • G06T 19/20 - Édition d'images tridimensionnelles [3D], p. ex. modification de formes ou de couleurs, alignement d'objets ou positionnements de parties
  • G06T 17/20 - Description filaire, p. ex. polygonalisation ou tessellation

41.

SELF-SUPERVISED AUDIO-VISUAL LEARNING FOR CORRELATING MUSIC AND VIDEO

      
Numéro d'application 19246631
Statut En instance
Date de dépôt 2025-06-23
Date de la première publication 2025-10-09
Propriétaire Adobe Inc. (USA)
Inventeur(s)
  • Salamon, Justin
  • Russell, Bryan
  • Suris Coll-Vinent, Didac

Abrégé

Embodiments are disclosed for correlating video sequences and audio sequences by a media recommendation system using a trained encoder network. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving a training input including a media sequence, including a video sequence paired with an audio sequence, segmenting the media sequence into a set of video sequence segments and a set of audio sequence segments, extracting visual features for each video sequence segment and audio features for each audio sequence segment, generating, by transformer networks, contextualized visual features from the extracted visual features and contextualized audio features from the extracted audio features, the transformer networks including a visual transformer and an audio transformer, generating predicted video and audio sequence segment pairings based on the contextualized visual and audio features, and training the visual transformer and the audio transformer to generate the contextualized visual and audio features.

Classes IPC

  • G06V 10/774 - Génération d'ensembles de motifs de formation; Traitement des caractéristiques d’images ou de vidéos dans les espaces de caractéristiques; Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant l’intégration et la réduction de données, p. ex. analyse en composantes principales [PCA] ou analyse en composantes indépendantes [ICA] ou cartes auto-organisatrices [SOM]; Séparation aveugle de source; méthodes de bootstrap, p. ex. “bagging” ou “boosting”
  • G06V 10/74 - Appariement de motifs d’image ou de vidéo; Mesures de proximité dans les espaces de caractéristiques
  • G06V 20/40 - Scènes; Éléments spécifiques à la scène dans le contenu vidéo
  • G10L 25/03 - Techniques d'analyse de la parole ou de la voix qui ne se limitent pas à un seul des groupes caractérisées par le type de paramètres extraits
  • G10L 25/57 - Techniques d'analyse de la parole ou de la voix qui ne se limitent pas à un seul des groupes spécialement adaptées pour un usage particulier pour comparaison ou différentiation pour le traitement des signaux vidéo

42.

ENHANCING ARTIFICIAL INTELLIGENCE RESPONSES WITH CONTEXTUAL USAGE INSIGHTS

      
Numéro d'application 18626551
Statut En instance
Date de dépôt 2024-04-04
Date de la première publication 2025-10-09
Propriétaire Adobe Inc. (USA)
Inventeur(s)
  • Maharaj, Akash Vivek
  • Muppala, Vaishnavi
  • Vaithyanathan, Shivakumar
  • Garg, Manas
  • Russell, Kenneth George
  • Dasgupta, Ishita
  • Rao, Anup Bandigadi
  • Pejcic, Aleksander

Abrégé

Some aspects relate to technologies for an artificial intelligence (AI) system that, among other things, enhances responses to concepts questions for an application with contextual usage insights. In accordance with some aspects, a user query is determined to comprise a concepts question regarding an application. Responsive to determining the user query comprises the concepts question, documentation regarding the application relevant to the user query is identified. A generative model generates text for a response to the concepts question using the documentation regarding the application. Additionally, a determination is made to add contextual usage insights to the response. Responsive to determining to add contextual usage insights to the response, usage data relevant to the user query and/or the response is retrieved. The generative model generates text for a final response using the response and the usage data, and the final response is provided to a user device for presentation.

Classes IPC

43.

GENERATING CUSTOMIZED ARROW HEADS UTILIZING DEEP LEARNING

      
Numéro d'application 18628250
Statut En instance
Date de dépôt 2024-04-05
Date de la première publication 2025-10-09
Propriétaire Adobe Inc. (USA)
Inventeur(s)
  • Gehlaut, Tarun
  • Jain, Stuti

Abrégé

The present disclosure is directed toward systems, methods, and non-transitory computer readable media that provide a digital design interface for intuitively creating custom arrows that demonstrate both visual consistency and inherent directionality within vector-based design applications. In particular, in one or more implementations, the disclosed systems receive a request to create a custom arrow from a digital object and a path segment. In addition, the disclosed systems detect that the digital object is within a threshold distance of the path segment and combine the digital object with the path segment to create a custom arrow object. In particular, the disclosed systems utilize a bilateral segmentation machine-learning model to segment the digital object and a symmetry axis detection model to determine an axis of symmetry of the digital object. Moreover, the disclosed systems attach the digital object to an endpoint of the path segment at the axis of symmetry.

Classes IPC

  • G06T 7/13 - Détection de bords
  • G06T 3/403 - Changement d'échelle guidé par les bords; Changement d'échelle basé sur les bords
  • G06T 5/70 - Débruitage; Lissage
  • G06T 11/00 - Génération d'images bidimensionnelles [2D]

44.

Miscellaneous Design

      
Numéro d'application 019256499
Statut En instance
Date de dépôt 2025-10-03
Propriétaire Adobe Inc. (USA)
Classes de Nice 41 - Éducation, divertissements, activités sportives et culturelles

Produits et services

Educational and training services; educational and training services in the form of classroom training, online training, web based training, and video training in the fields of computer software, cloud computing, desktop publishing, digital publishing, electronic publishing, graphic design, marketing, advertising, analytics, e-commerce, digital asset management, data management, business management, business process management, business document and forms creation, and automation of business document and forms processing and workflow; educational services; educational services in the form of arranging professional workshops and training courses, conducting classes, seminars, conferences, and workshops in the fields of computer software, cloud computing, desktop publishing, digital publishing, electronic publishing, graphic design, marketing, advertising, analytics, e-commerce, digital asset management, data management, business management, business process management, business document and forms creation, and automation of business document and forms processing and workflow; educational and training sessions in the field of organization and business matters relating to creative professionals.

45.

EDITING DIGITAL IMAGES WITH LOCAL REFINEMENT VIA SELECTIVE FEATURE TRIMMING

      
Numéro d'application 18617032
Statut En instance
Date de dépôt 2024-03-26
Date de la première publication 2025-10-02
Propriétaire Adobe Inc. (USA)
Inventeur(s)
  • Nitzan, Yotam
  • Wu, Zongze
  • Park, Taesung
  • Zhang, Richard
  • Gharbi, Michael
  • Shechtman, Elya

Abrégé

Methods, systems, and non-transitory computer readable storage media are disclosed for modifying digital images via a generative neural network with local refinement. The disclosed system generates, utilizing an encoder neural network, a latent feature vector of a digital image by encoding global context information of the digital image into the latent feature vector. The disclosed system also determines a modified latent feature vector by trimming the latent feature vector to a feature subset corresponding to a masked portion of the digital image. Additionally, the disclosed system generates, utilizing a generative decoder neural network on the modified latent feature vector, digital image data corresponding to the masked portion of the digital image. The disclosed system also generates a modified digital image including the digital image data corresponding to the masked portion combined with additional portions of the digital image.

Classes IPC

  • G06T 11/60 - Édition de figures et de texte; Combinaison de figures ou de texte
  • G06T 5/60 - Amélioration ou restauration d'image utilisant l’apprentissage automatique, p. ex. les réseaux neuronaux

46.

GENERATING A DIGITAL POSTER INCLUDING MULTIMODAL CONTENT EXTRACTED FROM A SOURCE DOCUMENT

      
Numéro d'application 18619667
Statut En instance
Date de dépôt 2024-03-28
Date de la première publication 2025-10-02
Propriétaire Adobe Inc. (USA)
Inventeur(s)
  • Jaisankar, Vijay
  • Chaitanya, Varre Suman
  • Vyas, Kalp Sachinkumar
  • Bandyopadhyay, Sambaran
  • Somasundaram, Shwetha

Abrégé

The present disclosure relates to systems, non-transitory computer-readable media, and methods for generating digital posters from digital documents with multimodal content using a deep submodular function. Specifically, the disclosed systems generate embedding vectors representing multimodal content of a digital document comprising text and images. Further, disclosed systems determine, utilizing a deep submodular function on the embedding vectors, a content subset comprising one or more digital images aligned with one or more text segments representative of the digital document. Moreover, the disclosed systems generate, utilizing a large language model, a summary of the multimodal content of the digital document from a prompt based on the content subset. Additionally, the disclosed systems generate, for display at a client device, a digital poster comprising the summary of the multimodal content generated via the large language model.

Classes IPC

  • G06N 3/0455 - Réseaux auto-encodeursRéseaux encodeurs-décodeurs

47.

IMAGE RELIGHTING USING MACHINE LEARNING

      
Numéro d'application 18949023
Statut En instance
Date de dépôt 2024-11-15
Date de la première publication 2025-10-02
Propriétaire ADOBE INC. (USA)
Inventeur(s)
  • Revanur, Ambareesh
  • Kolkin, Nicholas Isaac
  • Agarwal, Dhwanit
  • Agrawal, Shradha
  • Zhang, He
  • Harikumar, Midhun
  • Shechtman, Elya

Abrégé

A method, apparatus, non-transitory computer readable medium, and system for image generation includes obtaining an input image and an input prompt, where the input image depicts an object and the input prompt describes a lighting condition for the object, generating relighted image features based on the input image and the input prompt, where the relighted image features represent the object with the lighting condition, and generating a synthetic image based on the relighted image features, where the synthetic image depicts the object with the lighting condition.

IPC classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06T 11/00 - 2D [Two Dimensional] image generation

48.

META-LEARNING FOR ADAPTIVE FILTERS

      
Application number 19239430
Status Pending
Filing date 2025-06-16
First publication date 2025-10-02
Owner Adobe Inc. (USA)
Inventor(s)
  • Casebeer, Jonah
  • Bryan, Nicholas J.

Abstract

Embodiments are disclosed for using a neural network to optimize filter weights of an adaptive filter. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving, by a filter, an input audio signal, wherein the input audio signal is a far-end audio signal and the filter includes a transfer function with adaptable filter weights; generating a response audio signal modeling the input audio signal passing through the acoustic environment; receiving a target response signal including the input audio signal and near-end audio signals; calculating an adaptive filter loss; generating, by a trained recurrent neural network, a filter weight update using the calculated adaptive filter loss; updating the adaptable filter weights of the transfer function to create an updated transfer function; generating an updated response audio signal based on the updated transfer function; and providing the updated response audio signal as an output audio signal.
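
For context, the conventional baseline this abstract improves upon is the normalized LMS update, which adapts the same kind of filter weights from an error signal with a fixed, hand-derived rule instead of a learned recurrent-network update; the function below is an illustrative sketch, not the claimed method.

```python
import numpy as np

def nlms_echo_canceller(far_end, target, num_taps=16, step=0.5, eps=1e-8):
    """Normalized LMS adaptive filter: estimate the echo of the far-end
    signal through an unknown transfer function and subtract it from the
    target (echo + near-end) signal, updating weights from the error."""
    w = np.zeros(num_taps)                         # adaptable filter weights
    out = np.zeros(len(target))
    for n in range(num_taps - 1, len(far_end)):
        x = far_end[n - num_taps + 1:n + 1][::-1]  # newest sample first
        echo_est = w @ x                           # filtered response
        err = target[n] - echo_est                 # adaptive filter error
        w += step * err * x / (x @ x + eps)        # normalized weight update
        out[n] = err                               # echo-cancelled output
    return out, w
```

On a stationary white far-end signal this converges quickly; the patent's meta-learned update aims to replace the fixed `step * err * x / (x @ x)` rule with one produced by a trained recurrent network.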

IPC classes

  • G10L 21/0232 - Processing in the frequency domain
  • G10L 21/0208 - Noise filtering
  • G10L 21/0224 - Processing in the time domain
  • G10L 25/18 - Speech or voice analysis techniques not restricted to a single one of the groups, characterised by the type of extracted parameters, the extracted parameters being spectral information of each sub-band
  • G10L 25/30 - Speech or voice analysis techniques not restricted to a single one of the groups, characterised by the analysis technique using neural networks

49.

DOCUMENT BOUNDARY DETECTION USING THE CURVATURE OF TEXT LINES

      
Application number 18617279
Status Pending
Filing date 2024-03-26
First publication date 2025-10-02
Owner Adobe Inc. (USA)
Inventor(s)
  • Mondal, Prasenjit
  • Soni, Sachin

Abstract

Embodiments are disclosed for using the curvature of text lines to detect a document boundary. The method may include receiving a warped image depicting a page of a document having an incomplete document boundary, the page including a plurality of text lines. A complete document boundary may be identified based on the incomplete document boundary and the plurality of text lines. A dewarped image corresponding to the warped image may be determined using the complete document boundary. The dewarped image may then be provided for display on a client device.

50.

DIFFUSION WATERMARKING FOR CAUSAL ATTRIBUTION

      
Application number 18617969
Status Pending
Filing date 2024-03-27
First publication date 2025-10-02
Owner ADOBE INC. (USA)
Inventor(s)
  • Agarwal, Shruti
  • Collomosse, John
  • Asnani, Vishal

Abstract

A method, apparatus, non-transitory computer readable medium, and system for image processing include obtaining an input prompt describing an image element, generating, using an image generation model, an output image depicting the image element and including a watermark, and identifying a training image as a source of the output image based on the watermark. The image generation model is trained using the training image, which includes the image element and the watermark.

IPC classes

  • G06T 1/00 - General purpose image data processing
  • G06T 11/60 - Editing figures and text; Combining figures or text

51.

ADAPTIVE DYNAMIC GUIDANCE IN DATA ANALYSIS TOOLS

      
Application number 18618638
Status Pending
Filing date 2024-03-27
First publication date 2025-10-02
Owner Adobe Inc. (USA)
Inventor(s)
  • Narechania, Arpit Ajay
  • Hoffswell, Jane
  • Guo, Shunan
  • Koh, Eunyee
  • Bhutani, Prithvi

Abstract

In one aspect, a computer-implemented method includes accessing, by a guidance module of an analysis application executing on a processor, wildcard data associated with data in a data repository. The method further includes displaying, by the guidance module based on the wildcard data, one or more wildcard elements in a graphical user interface (GUI). The method further includes receiving, by the analysis application, selection of a first wildcard element of the one or more wildcard elements. The method further includes displaying, by the guidance module, a suggestion based on the selection of the first wildcard element.

IPC classes

  • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
  • G06F 9/451 - Execution arrangements for user interfaces

52.

GENERATING SYNTHETIC DIGITAL IMAGES UTILIZING A TEXT-TO-IMAGE GENERATION NEURAL NETWORK WITH LOCALIZED CONSTRAINTS

      
Application number 18619587
Status Pending
Filing date 2024-03-28
First publication date 2025-10-02
Owner Adobe Inc. (USA)
Inventor(s)
  • Feng, Weixi
  • Li, Yijun
  • Bui, Trung
  • Hinz, Tobias
  • Cohen, Scott
  • Tran, Quan
  • Zhang, Jianming
  • Zhao, Handong
  • Dernoncourt, Franck

Abstract

Methods, systems, and non-transitory computer readable storage media are disclosed for generating digital images via a generative neural network with localized constraints. The disclosed system generates, utilizing one or more encoder neural networks, a sequence of embeddings comprising a prompt embedding representing a text prompt and an object text embedding representing a phrase indicating an object in the text prompt. The disclosed system generates, utilizing the one or more encoder neural networks, a visual embedding representing an object image corresponding to the object. The disclosed system determines a modified sequence of embeddings by replacing the object text embedding with the visual embedding in the sequence of embeddings. The disclosed system also generates, utilizing a generative neural network, a synthetic digital image from the modified sequence of embeddings comprising the visual embedding.
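
The embedding-splicing step this abstract describes (swapping a phrase's text embedding for a visual embedding) can be sketched with plain array operations; the function name, shapes, and the choice to replace a multi-token span with one row are illustrative assumptions.

```python
import numpy as np

def splice_visual_embedding(sequence, phrase_span, visual_embedding):
    """Build a modified sequence of embeddings by replacing the token
    embeddings of an object phrase (rows start..end) with a single
    visual embedding of the reference object image."""
    start, end = phrase_span
    return np.concatenate(
        [sequence[:start], visual_embedding[None, :], sequence[end:]], axis=0
    )
```

The modified sequence would then be fed to the generative network in place of the original prompt embeddings.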

IPC classes

  • G06N 3/0455 - Auto-encoder networks; Encoder-decoder networks
  • G06T 11/00 - 2D [Two Dimensional] image generation

53.

VECTORIZING DIGITAL IMAGES WITH SUB-PIXEL ACCURACY USING DYNAMIC UPSCALING

      
Application number 18619610
Status Pending
Filing date 2024-03-28
First publication date 2025-10-02
Owner Adobe Inc. (USA)
Inventor(s)
  • Harpavat, Keerti
  • Chakraborty, Souymodip
  • Gharbi, Michael
  • Fisher, Matthew
  • Ranawat, Jaswant Singh
  • Phogat, Ankit
  • Batra, Vineet

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that selectively utilize an image super-resolution model to upscale image patches corresponding to high-frequency portions. In particular, the disclosed systems select a set of image patches corresponding to high-frequency portions of a digital image at a first resolution. Furthermore, the disclosed systems utilize an image super-resolution model to upscale the selected image patches to a second resolution higher than the first resolution, according to an upscaling factor of at least two. The disclosed systems generate a segmentation map of the digital image based on the upscaled image patches and an upscaled segmentation corresponding to low-frequency portions of the digital image. Further, the disclosed systems generate a vectorized digital image for the digital image according to the segmentation map.
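
A common way to select the high-frequency patches this abstract refers to is to rank patches by the variance of a discrete Laplacian response; the abstract does not specify its selection criterion, so the scoring rule, patch size, and function name below are illustrative assumptions.

```python
import numpy as np

def high_frequency_patches(image, patch=32, keep_fraction=0.25):
    """Rank non-overlapping patches by the variance of a discrete
    Laplacian response (a simple high-frequency proxy) and return
    the (row, col) coordinates of the top-scoring patches."""
    lap = (-4.0 * image
           + np.roll(image, 1, axis=0) + np.roll(image, -1, axis=0)
           + np.roll(image, 1, axis=1) + np.roll(image, -1, axis=1))
    coords, scores = [], []
    h, w = image.shape
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            coords.append((y, x))
            scores.append(lap[y:y + patch, x:x + patch].var())
    keep = max(1, int(len(coords) * keep_fraction))
    best = np.argsort(scores)[::-1][:keep]
    return [coords[i] for i in best]
```

Only the returned patches would then be routed through the expensive super-resolution model, while low-frequency regions are upscaled cheaply.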

IPC classes

  • G06T 3/4053 - Scaling of whole images or parts thereof, e.g. expanding or contracting, based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
  • G06T 5/70 - Denoising; Smoothing
  • G06T 7/11 - Region-based segmentation
  • G06T 7/13 - Edge detection
  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06V 10/56 - Extraction of image or video features relating to colour

54.

THREE DIMENSIONAL AWARE VIDEO COMPOSITING

      
Application number 18623377
Status Pending
Filing date 2024-04-01
First publication date 2025-10-02
Owner Adobe Inc. (USA)
Inventor(s)
  • Xu, Zhan
  • Pimmel, Kim P.
  • Yang, Jimei

Abstract

Three dimensional aware video compositing techniques are described. In one or more examples, subject data is produced that defines a subject depicted in frames of a subject video and viewpoint data describing movement of a viewpoint with respect to the frames of the subject video. Three-dimensional data is formed that defines a three-dimensional representation of an environment depicted in frames of an environment video. A composited video is generated by aligning the environment with the movement of the viewpoint of the subject based on the subject data and the three-dimensional data, which is then rendered, e.g., presented for display in a user interface.

IPC classes

  • G06T 19/20 - Editing of three-dimensional [3D] images, e.g. changing shapes or colours, aligning objects or positioning parts
  • G06T 7/11 - Region-based segmentation
  • G06T 7/215 - Motion-based segmentation
  • G06T 7/579 - Depth or shape recovery from multiple images, from motion
  • G06T 15/20 - Perspective computation

55.

REFERENCE IMAGE STRUCTURE MATCH USING DIFFUSION MODELS

      
Application number 18947959
Status Pending
Filing date 2024-11-14
First publication date 2025-10-02
Owner ADOBE INC. (USA)
Inventor(s)
  • Kelkar, Sachin Madhav
  • Chen, Fengbin
  • Ravi, Hareesh
  • Zhang, Zhifei
  • Kale, Ajinkya Gorakhnath
  • Lin, Zhe

Abstract

A method, apparatus, non-transitory computer readable medium, and system for image processing include obtaining a structural input indicating a target spatial structure, encoding, using a condition encoder, the structural input to obtain a structural encoding representing the target spatial structure, and generating, using an image generation model, a synthetic image based on the structural encoding, where the synthetic image depicts an object having the target spatial structure.

IPC classes

  • G06T 11/00 - 2D [Two Dimensional] image generation

56.

TEXT TO COLOR PALETTE GENERATION USING DIFFUSION MODELS

      
Application number 18609102
Status Pending
Filing date 2024-03-19
First publication date 2025-09-25
Owner ADOBE INC. (USA)
Inventor(s)
  • Aggarwal, Pranav Vineet
  • Kale, Ajinkya Gorakhnath

Abstract

A method, apparatus, non-transitory computer readable medium, and system for text-to-color palette generation include encoding a text prompt to obtain a text embedding. A color embedding is generated based on the text embedding by performing a diffusion process. Then a color palette is generated based on the color embedding. The color palette includes a plurality of colors corresponding to the text prompt.

IPC classes

  • G06T 11/00 - 2D [Two Dimensional] image generation
  • G06F 40/284 - Lexical analysis, e.g. tokenisation or co-occurrence
  • G06F 40/40 - Processing or translation of natural language
  • G06T 5/70 - Denoising; Smoothing

57.

CLUSTERING USERS ACCORDING TO CAUSAL RELATIONSHIPS AMONG USER DATA

      
Application number 18609625
Status Pending
Filing date 2024-03-19
First publication date 2025-09-25
Owner ADOBE INC. (USA)
Inventor(s)
  • Porwal, Vibhor
  • Chopra, Harshita
  • Sinha, Atanu R.
  • Modanwal, Sharda Kriti
  • Narayanaswamy, Chetan Reddy
  • Niaz, Zainul

Abstract

Methods, non-transitory computer readable media, apparatuses, and systems for data processing include obtaining, by a machine learning model, a user cluster and interaction data for users in the user cluster, where the interaction data relates to interactions between the users and a digital platform. Some embodiments further include generating, by the machine learning model, a directed graph based on the user cluster and the interaction data, where the directed graph represents causal relationships among the interactions. Some embodiments further include updating, by the machine learning model, the user cluster based on the directed graph. Some embodiments further include providing, by a content component, customized content to a user via the digital platform based on the updated user cluster.

58.

SELECTIVE OBJECT-LEVEL UNDO

      
Application number 18610628
Status Pending
Filing date 2024-03-20
First publication date 2025-09-25
Owner Adobe Inc. (USA)
Inventor(s)
  • Soni, Nikita
  • Bui, Trung
  • Smith, Kevin Gary

Abstract

The present disclosure relates to systems, non-transitory computer-readable media, and methods for modifying a digital design by performing a selective object-level undo operation. In one or more embodiments, the disclosed systems generate a modified object by performing a series of operations on an object depicted within the digital design. In some embodiments, the disclosed systems receive a request for a selective object-level undo operation on the modified object, wherein the request specifies an operation to undo from among the series of operations performed on the object. In one or more embodiments, the disclosed systems modify the modified object by performing the selective object-level undo operation on the modified object to undo the operation from among the series of operations. In some embodiments, the disclosed systems provide an updated digital design depicting the modified object reflecting modifications from the series of operations excluding the operation undone by the selective object-level undo operation.
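
One way to realize selective (rather than last-in-first-out) undo is to keep a per-object operation log and replay it with the undone operation removed; this sketch assumes replayable, order-preserving operations, and the class and method names are illustrative, not taken from the patent.

```python
class EditableObject:
    """Per-object operation log supporting selective undo: rather than
    popping the most recent edit, replay the full history with one
    named operation removed."""

    def __init__(self, base):
        self.base = base
        self.ops = []                  # (name, function) pairs, in order

    def apply(self, name, fn):
        self.ops.append((name, fn))

    def state(self, skip=None):
        """Current state, optionally previewing removal of one operation."""
        value = self.base
        for name, fn in self.ops:
            if name != skip:
                value = fn(value)
        return value

    def selective_undo(self, name):
        """Permanently drop the named operation from the history."""
        self.ops = [(n, f) for n, f in self.ops if n != name]
```

Replay keeps the remaining operations' effects intact, which is the behavior the abstract describes for an updated digital design that excludes only the undone operation.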

IPC classes

  • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements such as windows or icons, or assisted by a cursor's changing behaviour or appearance
  • G06F 3/04842 - Selection of displayed objects or displayed text elements
  • G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
  • G06F 30/12 - Geometric CAD characterised by design entry means specially adapted for CAD, e.g. graphical user interfaces [GUI] specially adapted for CAD

59.

DIGITAL IMAGE VISUAL AESTHETIC SCORE GENERATION

      
Application number 18611886
Status Pending
Filing date 2024-03-21
First publication date 2025-09-25
Owner Adobe Inc. (USA)
Inventor(s)
  • Jenni, Simon
  • Wang, Zhaowen
  • Collomosse, John Philip

Abstract

Digital image visual aesthetic score generation techniques are described. In one or more examples, these techniques are implemented by a system including a training data collection module implemented by a processing device to collect training data including training digital images and user interaction data describing user interaction with the training digital images, respectively. A training module is configured to train a machine-learning model using the training data to generate an aesthetic score based on an input digital image. The aesthetic score is configured to specify an amount of visual aesthetics exhibited by the input digital image.

IPC classes

  • G06T 7/00 - Image analysis
  • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
  • G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations

60.

CONTROLLABLE VISUAL TEXT GENERATION WITH ADAPTER-ENHANCED DIFFUSION MODELS

      
Application number 18612100
Status Pending
Filing date 2024-03-21
First publication date 2025-09-25
Owner ADOBE INC. (USA)
Inventor(s)
  • Ji, Jiabao
  • Wang, Zhaowen
  • Zhang, Zhifei
  • Price, Brian Lynn

Abstract

A method, apparatus, non-transitory computer readable medium, and system for image generation include obtaining a text content image and a text style image. The text content image is encoded to obtain content guidance information and the text style image is encoded to obtain style guidance information. Then a synthesized image is generated based on the content guidance information and the style guidance information. The synthesized image includes text from the text content image having a text style from the text style image.

IPC classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06F 40/109 - Font handling; Temporal or kinetic typography
  • G06V 20/62 - Text, e.g. licence plates, overlay texts or captions on television images
  • G06V 30/10 - Character recognition

61.

REDUCING HALLUCINATIONS FOR GENERATIVE TEXT RESPONSES USING A MACHINE LEARNING PROMPT ENSEMBLE

      
Application number 18612566
Status Pending
Filing date 2024-03-21
First publication date 2025-09-25
Owner Adobe Inc. (USA)
Inventor(s)
  • Yu, Tong
  • Chen, Xiang
  • Bursztyn, Victor Soares
  • Kim, Sungchul
  • Rossi, Ryan A
  • Zhang, Ruiyi
  • Wang, Rui

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that iteratively generate, utilizing a machine learning model, text responses to reduce hallucinated content. In particular, in some embodiments, the disclosed systems receive a digital query and select one or more supporting digital documents for the digital query. Furthermore, in some embodiments, the disclosed systems generate a first text response from a first text prompt generated using the digital query. Moreover, in some embodiments, the disclosed systems extract a misalignment portion of the first text response by comparing the first text response and the one or more supporting digital documents. Additionally, from the misalignment portion of the first text response and the digital query, the disclosed systems generate a second text response.
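
The misalignment-extraction step can be approximated very crudely by flagging response sentences with low lexical overlap against the supporting documents; the abstract does not disclose its comparison method, so the overlap heuristic, threshold, and function name below are illustrative assumptions.

```python
def misaligned_sentences(response, documents, threshold=0.5):
    """Flag response sentences whose word overlap with the supporting
    documents falls below a threshold (a toy stand-in for the
    alignment comparison described in the abstract)."""
    doc_words = set(w.lower() for d in documents for w in d.split())
    flagged = []
    for sent in response.split(". "):
        words = [w.lower().strip(".,") for w in sent.split()]
        if not words:
            continue
        overlap = sum(w in doc_words for w in words) / len(words)
        if overlap < threshold:
            flagged.append(sent)
    return flagged
```

The flagged sentences would then seed the follow-up prompt used to generate the corrected second response.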

IPC classes

  • G06F 16/33 - Querying
  • G06F 16/383 - Retrieval characterised by the use of metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content

62.

PROMPT PERSONALIZATION FOR GENERATIVE MODELS

      
Application number 18958801
Status Pending
Filing date 2024-11-25
First publication date 2025-09-25
Owner ADOBE INC. (USA)
Inventor(s)
  • Khanna, Anand
  • Tandon, Abhishek
  • Vlassis, Nikolaos
  • Brdiczka, Oliver

Abstract

A method, apparatus, non-transitory computer readable medium, and system for generating suggested prompts include obtaining a sequence of text prompts associated with a user and determining a session concept for the user based on the sequence of text prompts. Embodiments then generate, using a prompt generation model, an image generation prompt based on the sequence of text prompts and the session concept. Subsequently, embodiments generate, using an image generation model, a synthetic image based on the image generation prompt.

63.

PDF SPACES

      
Serial number 99403187
Status Pending
Filing date 2025-09-19
Owner Adobe Inc.
Nice classes
  • 09 - Scientific and electric apparatus and instruments
  • 42 - Scientific, technological and industrial services, research and design

Goods and services

Downloadable software using artificial intelligence for collecting, compiling, converting, organizing, consolidating, collaborating on, sharing, and editing files, links, notes, and documents and for creating an information hub; downloadable assistant and chatbot software using artificial intelligence for preparing insights, notes, and citations based on document content and user input and for collaborating on or sharing the same with other users; downloadable software using artificial intelligence for content generation and management.

Software as a service (SAAS) services featuring software using artificial intelligence for collecting, compiling, converting, organizing, consolidating, collaborating on, sharing, and editing files, links, notes, and documents and for creating an information hub; software as a service (SAAS) services featuring assistant and chatbot software using artificial intelligence for preparing insights, notes, and citations based on document content and user input and for collaborating on or sharing the same with other users; software as a service (SAAS) services featuring software using artificial intelligence for content generation and management.

64.

COMPLETING TEMPORAL KNOWLEDGE GRAPHS BASED ON ENHANCED ENTITY REPRESENTATION AND WEIGHTED FREQUENCY-BASED SAMPLING

      
Application number 18602753
Status Pending
Filing date 2024-03-12
First publication date 2025-09-18
Owner Adobe Inc. (USA)
Inventor(s)
  • Rossi, Ryan A
  • Mirtaheri Feijani, Mehrnoosh Sadat
  • Kim, Sungchul
  • Mahadik, Kanak
  • Yu, Tong
  • Chen, Xiang

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that generate predicted relationships for entities of a temporal knowledge graph using enhanced entity representations. For instance, in one or more embodiments, the disclosed systems generate a query for predicting a relationship for a subject entity represented within a temporal knowledge graph. The disclosed systems further determine an enhanced entity representation generated for the subject entity by an enhancement layer of a temporal knowledge graph completion model, the enhanced entity representation including a combination of a connection-based similarity for the subject entity and a relationship-based similarity for the subject entity. Using the temporal knowledge graph completion model and based on the enhanced entity representation of the subject entity, the disclosed systems generate a predicted relationship for the subject entity.

IPC classes

  • G06N 5/02 - Knowledge representation; Symbolic representation

65.

UTILIZING MACHINE LEARNING MODELS TO GENERATE IMAGE EDITING DIRECTIONS IN A LATENT SPACE

      
Application number 19224170
Status Pending
Filing date 2025-05-30
First publication date 2025-09-18
Owner Adobe Inc. (USA)
Inventor(s)
  • Li, Yijun
  • Zhang, Richard
  • Singh, Krishna Kumar
  • Lu, Jingwan
  • Parmar, Gaurav
  • Zhu, Jun-Yan

Abstract

The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing machine learning models to generate modified digital images. In particular, in some embodiments, the disclosed systems generate image editing directions between textual identifiers of two visual features utilizing a language prediction machine learning model and a text encoder. In some embodiments, the disclosed systems generate an inversion of a digital image utilizing a regularized inversion model to guide forward diffusion of the digital image. In some embodiments, the disclosed systems utilize cross-attention guidance to preserve structural details of a source digital image when generating a modified digital image with a diffusion neural network.

IPC classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06F 40/56 - Natural language generation
  • G06T 1/00 - General purpose image data processing
  • G06T 5/70 - Denoising; Smoothing
  • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
  • G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations

66.

HUMAN INPAINTING UTILIZING A SEGMENTATION BRANCH FOR GENERATING AN INFILL SEGMENTATION MAP

      
Application number 19224477
Status Pending
Filing date 2025-05-30
First publication date 2025-09-18
Owner Adobe Inc. (USA)
Inventor(s)
  • Singh, Krishna Kumar
  • Li, Yijun
  • Lu, Jingwan
  • Aksit, Duygu Ceylan
  • Wang, Yangtuanfeng
  • Yang, Jimei
  • Hinz, Tobias
  • Liu, Qing
  • Zhang, Jianming
  • Lin, Zhe

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For example, in one or more embodiments the disclosed systems utilize generative machine learning models to create modified digital images portraying human subjects. In particular, the disclosed systems generate modified digital images by performing infill modifications to complete a digital image or human inpainting for portions of a digital image that portrays a human. Moreover, in some embodiments, the disclosed systems perform reposing of subjects portrayed within a digital image to generate modified digital images. In addition, the disclosed systems in some embodiments perform facial expression transfer and facial expression animations to generate modified digital images or animations.

IPC classes

  • G06T 5/77 - Retouching; Inpainting; Scratch removal
  • G06T 7/11 - Region-based segmentation
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
  • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

67.

INTERACTIVE TREE REPRESENTING ATTRIBUTE QUALITY OR CONSUMPTION METRICS FOR DATA INGESTION AND OTHER APPLICATIONS

      
Application number 19224516
Status Pending
Filing date 2025-05-30
First publication date 2025-09-18
Owner Adobe Inc. (USA)
Inventor(s)
  • Narechania, Arpit Ajay
  • Du, Fan
  • Sinha, Atanu R.
  • Rossi, Ryan A.
  • Hoffswell, Jane Elizabeth
  • Guo, Shunan
  • Koh, Eunyee
  • Anderson, John
  • Surange, Sonali
  • Mahapatra, Saurabh
  • Holtcamp, Vasanthi

Abstract

Embodiments provide systems, methods, and computer storage media for management, assessment, navigation, and/or discovery of data based on data quality, consumption, and/or utility metrics. Data may be assessed using attribute-level and/or record-level metrics that quantify its "quality" (the condition of the data, e.g. the presence of incorrect or incomplete values), its "consumption" (the tracked usage of the data in downstream applications, e.g. the utilization of attributes in dashboard widgets or customer segmentation rules), and/or its "utility" (a quantifiable impact resulting from the consumption of the data, e.g. revenue or the number of visits resulting from marketing campaigns that use particular datasets, or the storage costs of the data). This data assessment may be performed at different stages of a data intake, preparation, and/or modeling lifecycle. For example, an interactive tree view may visually represent a nested attribute schema and attribute quality or consumption metrics to facilitate discovery of bad data before it is ingested into a data lake.
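
The simplest attribute-level quality metric of the kind described above is the fraction of records in which an attribute is present and valid; the function below is an illustrative minimal stand-in for the richer metrics in the abstract, and its name and validity check are assumptions.

```python
def attribute_quality(records, attribute, valid=lambda v: v not in (None, "")):
    """Attribute-level quality score: fraction of records where the
    attribute is present and passes a validity check."""
    values = [r.get(attribute) for r in records]
    return sum(1 for v in values if valid(v)) / len(values)
```

Scores like this, computed per attribute of a nested schema, are what an interactive tree view would surface to flag bad data before ingestion.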

IPC classes

  • G06Q 10/0639 - Performance analysis of employees; Performance analysis of enterprise or organisation operations
  • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
  • G06F 16/215 - Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors

68.

COLLABORATIVE EDITING OF VECTOR GRAPHICS

      
Application number 19226722
Status Pending
Filing date 2025-06-03
First publication date 2025-09-18
Owner Adobe Inc. (USA)
Inventor(s)
  • Jain, Vishwas
  • Batra, Vineet
  • Chakraborty, Souymodip
  • Wampler, Kevin
  • Phogat, Ankit

Abstract

Certain aspects and features of this disclosure relate to providing a vector graphics entity component system that supports collaborative editing in real time or near real time. Graphical constructs are efficiently described by integer-based identifiers, and graphical constructs of the same type are stored in a definitional component. Each client maintains both a pending state representation and a synchronized state representation of the graphical design to independently track the state of the representation at a live editing server. The use of integer-based identifiers for graphical constructs provides an efficient change representation that can be communicated with minimal network traffic. All copies of the graphical design represented among clients reach a consistent state quickly even when multiple users are making changes to the same vector path, eliminating the need to track changes manually or to move large files.
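
The core bookkeeping described here (integer-identified constructs, plus a synchronized state alongside pending local edits) can be sketched as follows; the class and method names, and the `(entity_id, component, value)` change-record shape, are illustrative assumptions rather than the disclosed protocol.

```python
class VectorGraphicsStore:
    """Minimal entity-component sketch: graphical constructs get integer
    ids, and a client keeps a synchronized state (last server-acknowledged)
    plus pending local edits expressed as small change records."""

    def __init__(self):
        self.next_id = 0
        self.synchronized = {}   # entity_id -> {component: value}
        self.pending = []        # (entity_id, component, value) records

    def create(self, **components):
        eid = self.next_id
        self.next_id += 1
        self.synchronized[eid] = dict(components)
        return eid

    def edit(self, eid, component, value):
        self.pending.append((eid, component, value))

    def view(self):
        """Pending state: synchronized state with local edits applied."""
        state = {e: dict(c) for e, c in self.synchronized.items()}
        for eid, comp, val in self.pending:
            state[eid][comp] = val
        return state

    def acknowledge(self):
        """Server accepted our edits: fold pending into synchronized."""
        self.synchronized = self.view()
        self.pending = []
```

Because each change record is a few integers and a value, transmitting `pending` to the live editing server is far cheaper than shipping the whole design file.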

IPC classes

  • G06T 11/60 - Editing figures and text; Combining figures or text

69.

MULTI-COMPONENT LATENT PYRAMID SPACE FOR GENERATIVE MODELS

      
Application number 18607813
Status Pending
Filing date 2024-03-18
First publication date 2025-09-18
Owner ADOBE INC. (USA)
Inventor(s)
  • Zhang, Jianming
  • Zheng, Haitian
  • Zhang, Zhifei
  • Lin, Zhe

Abstract

A method, apparatus, non-transitory computer readable medium, and system for image processing include obtaining a text prompt; generating, using a generator of an image generation model, a feature embedding based on the text prompt, wherein the feature embedding includes a first set of channels that encodes a first value of an image characteristic and a second set of channels that encodes a residual between the first value of the image characteristic and a second value of the image characteristic; and generating, using a decoder of the image generation model, a synthetic image corresponding to the second value of the image characteristic based on the feature embedding.
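
The base-plus-residual channel layout the abstract describes can be illustrated with a toy decoding, where the second value of the characteristic is recovered by adding the residual channels to the base channels; the function name, equal channel split, and direct addition are all simplifying assumptions for illustration.

```python
import numpy as np

def decode_characteristic_values(feature_embedding, base_channels):
    """Toy decoding of a two-part embedding: the first `base_channels`
    entries encode a first value of an image characteristic; the
    remaining entries encode the residual to a second value."""
    base = feature_embedding[:base_channels]
    residual = feature_embedding[base_channels:]
    return base, base + residual   # first value, second value
```

Encoding residuals instead of two independent values lets the decoder interpolate between characteristic levels from one embedding.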

70.

REAL-TIME TEXT-BASED DISENTANGLED REAL IMAGE EDITING

      
Application number 18925290
Status Pending
Filing date 2024-10-24
First publication date 2025-09-18
Owner ADOBE INC. (USA)
Inventor(s)
  • Wu, Zongze
  • Kolkin, Nicholas Isaac
  • Zhang, Richard
  • Shechtman, Elya

Abstract

A method, apparatus, non-transitory computer readable medium, and system for image processing include obtaining an input image depicting a first element, a text description of the input image, and a modification prompt describing a second element different from the first element, generating an intermediate output based on the input image and the text description, where the intermediate output represents the first element, and generating a synthetic image based on the intermediate output and the modification prompt, where the synthetic image replaces the first element from the input image with the second element from the modification prompt.

IPC classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
  • G06T 5/70 - Denoising; Smoothing
  • G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations

71.

GENERATING ADAPTIVE THREE-DIMENSIONAL MESHES OF TWO-DIMENSIONAL IMAGES

      
Application number 19224445
Status Pending
Filing date 2025-05-30
First publication date 2025-09-18
Owner Adobe Inc. (USA)
Inventor(s)
  • Gadelha, Matheus
  • Mech, Radomir

Abstract

Methods, systems, and non-transitory computer readable storage media are disclosed for generating three-dimensional meshes representing two-dimensional images for editing the two-dimensional images. The disclosed system utilizes a first neural network to determine density values of pixels of a two-dimensional image based on estimated disparity. The disclosed system samples points in the two-dimensional image according to the density values and generates a tessellation based on the sampled points. The disclosed system utilizes a second neural network to estimate camera parameters and modify the three-dimensional mesh based on the estimated camera parameters of the pixels of the two-dimensional image. In one or more additional embodiments, the disclosed system generates a three-dimensional mesh to modify a two-dimensional image according to a displacement input. Specifically, the disclosed system maps the three-dimensional mesh to the two-dimensional image, modifies the three-dimensional mesh in response to a displacement input, and updates the two-dimensional image.

IPC classes

  • G06T 7/55 - Depth or shape recovery from multiple images
  • H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof

72.

ADOBE FIREFLY

      
Serial number 99389499
Status Pending
Filing date 2025-09-12
Owner Adobe Inc.
Nice classes 09 - Scientific and electric apparatus and instruments

Goods and services

Downloadable software for using artificial intelligence models for content generation and management, namely, image, video, sound, audio, and music generation from user prompts, image editing, and for generating translations; downloadable software for using artificial intelligence models for content generation and management; downloadable application programming interface (API) software

73.

SCENE RE-LIGHTING USING DIRECT SHADING CONTROL

      
Application number 18601435
Status Pending
Filing date 2024-03-11
First publication date 2025-09-11
Owner ADOBE INC. (USA)
Inventor(s)
  • Kocsis, Peter Koppany
  • Hold-Geoffroy, Yannick
  • Philip, Julien Olivier Victor
  • Sunkavalli, Kalyan K.

Abstract

A method, apparatus, non-transitory computer readable medium, and system for scene re-lighting using direct shading control include obtaining an input image and a lighting direction indicator that describes a lighting direction. A direct shading map is generated based on the input image and the lighting direction indicator, and a shaded image is generated depicting an object from the input image with shading consistent with the lighting direction based on the shading map.

IPC classes

74.

RADIANCE FIELD GRADIENT SCALING FOR UNBIASED NEAR-CAMERA TRAINING

      
Application number 19214775
Status Pending
Filing date 2025-05-21
First publication date 2025-09-11
Owner Adobe Inc. (USA)
Inventor(s)
  • Philip, Julien
  • Deschaintre, Valentin

Abstract

Methods and systems disclosed herein relate generally to radiance field gradient scaling for unbiased near-camera training. In a method, a computing system receives information about a 3D environment. The computing system receives a camera location and a camera direction. The computing system determines, using a machine learning model, multiple densities and colors of the 3D environment from the perspective of the camera location at a number of respective points sampled along a first projected ray cast from the camera location in the camera direction. The computing system aggregates the multiple densities and colors of the 3D environment to generate an output pixel comprising an integrated color that represents the 3D environment.
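The per-ray aggregation step can be illustrated with the standard emission-absorption compositing commonly used for radiance fields; this is a sketch under the assumption that the patent's aggregation follows that usual formulation, with illustrative function and parameter names:

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Aggregate per-sample densities (N,) and colors (N, 3) along one ray
    into a single pixel color, with `deltas` the distances between samples.
    Uses standard emission-absorption volume rendering weights."""
    alphas = 1.0 - np.exp(-densities * deltas)                   # opacity of each sample
    transmittance = np.cumprod(
        np.concatenate([[1.0], 1.0 - alphas[:-1]]))              # light surviving to each sample
    weights = alphas * transmittance
    return (weights[:, None] * colors).sum(axis=0)               # integrated pixel color
```

For example, a fully opaque first sample should dominate the output color regardless of the samples behind it, since its transmittance term zeroes out later contributions.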

IPC classes

  • G06T 15/06 - Ray-tracing
  • G06T 3/40 - Scaling of whole images or parts of an image, e.g. expanding or contracting
  • G06T 7/70 - Determining position or orientation of objects or cameras
  • G06T 7/90 - Determination of colour characteristics
  • G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces

75.

FI

      
Serial number 99388480
Status Pending
Filing date 2025-09-11
Owner Adobe Inc.
Nice classes
  • 09 - Scientific and electric apparatus and instruments
  • 42 - Scientific, technological and industrial services, research and design

Goods and services

Downloadable software for using artificial intelligence models for content generation and management, namely, image, video, sound, audio, and music generation and editing from user prompts, image editing, and for generating translations; downloadable software for using artificial intelligence models for content generation and management; downloadable application programming interface (API) software; Software as a service (SAAS) services featuring software for using artificial intelligence models for content generation and management, namely, image, video, sound, audio, and music generation and editing from user prompts and for generating translations; Software as a service (SAAS) services featuring software for using artificial intelligence models for content generation and management

76.

ANONYMIZING DIGITAL IMAGES UTILIZING A GENERATIVE NEURAL NETWORK

      
Application number 19217725
Status Pending
Filing date 2025-05-23
First publication date 2025-09-11
Owner Adobe Inc. (USA)
Inventor(s)
  • Khodadadeh, Siavash
  • Kalarot, Ratheesh
  • Ghadar, Shabnam
  • Hold-Geoffroy, Yannick

Abstract

The present disclosure relates to systems, methods, and non-transitory computer readable media for generating anonymized digital images utilizing a face anonymization neural network. In some embodiments, the disclosed systems utilize a face anonymization neural network to extract or encode a face anonymization guide that encodes face attribute features, such as gender, ethnicity, age, and expression. In some cases, the disclosed systems utilize the face anonymization guide to inform the face anonymization neural network in generating synthetic face pixels for anonymizing a digital image while retaining attributes, such as gender, ethnicity, age, and expression. The disclosed systems learn parameters for a face anonymization neural network for preserving face attributes, accounting for multiple faces in digital images, and generating synthetic face pixels for faces in profile poses.

IPC classes

  • G06F 21/62 - Protecting access to data via a platform, e.g. using keys or rules to control access
  • G06N 3/0455 - Auto-encoder networks; Encoder-decoder networks
  • G06N 3/0475 - Generative networks

77.

CUSTOM IMAGE AND CONCEPT COMBINER USING DIFFUSION MODELS

      
Application number US2024025853
Publication number 2025/188330
Status Granted - in force
Filing date 2024-04-23
Publication date 2025-09-11
Owner ADOBE, INC. (USA)
Inventor(s)
  • Aggarwal, Pranav
  • Harikumar, Midhun
  • Ravi, Hareesh
  • Kale, Ajinkya

Abstract

Techniques for generation of images based on a variety of input conditions or modalities are described, whereby one or more processing devices (300) receive a plurality of input modalities comprising multiple images (302) and a text input in a natural language (308). The processing devices (300) generate image embeddings for the multiple images (306) and a text embedding for the text input (312). The processing devices, using a machine learning model (132), generate an output image (322) based on the image embeddings (306) and the text embedding (312). The output image includes portions of the multiple images.

IPC classes

78.

TECHNIQUES TO PERSONALIZE CONTENT USING MACHINE LEARNING

      
Application number 18592044
Status Pending
Filing date 2024-02-29
First publication date 2025-09-04
Owner Adobe Inc. (USA)
Inventor(s) Unnikrishnan, Soumya

Abstract

Techniques for personalizing multimedia content based on a knowledge graph are described. In one embodiment, a method includes receiving activity data associated with a user from a device, generating a touchpoint embedding and a decision embedding using a graph neural network (GNN) model based on the activity data, the GNN model trained using a knowledge graph, predicting a touchpoint using a first classifier based on the touchpoint embedding, predicting a decision stage using a second classifier based on the decision embedding, and generating personalized content for the touchpoint based on the decision stage using a large language model (LLM). Other embodiments are described and claimed.

IPC classes

79.

APPLYING AND BLENDING NEW TEXTURES TO SURFACES ACROSS FRAMES OF A VIDEO SEQUENCE

      
Application number 18592313
Status Pending
Filing date 2024-02-29
First publication date 2025-09-04
Owner Adobe Inc. (USA)
Inventor(s)
  • Huang, Jiahui
  • Lee, Joon-Young

Abstract

Embodiments are disclosed for a process of applying and blending new textures to surfaces across frames of a video sequence. The method may include obtaining a new texture for a selected region of a first video frame of a video sequence. The method may further comprise generating a mesh for the selected region of the first video frame that includes a plurality of control points. The method may further comprise determining control point location data for each of the plurality of control points for additional video frames of the video sequence and using the control point location data to generate a plurality of warped video frames by applying the new texture to the additional video frames. The method may further comprise generating blended video frames by blending the new texture in the warped video frames and providing a modified version of the video sequence using the generated blended video frames.

IPC classes

  • G06T 11/00 - 2D [Two Dimensional] image generation
  • G06T 3/18 - Image warping, e.g. rearranging pixels individually
  • G06T 7/11 - Region-based segmentation
  • G06T 7/20 - Analysis of motion
  • G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
  • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods

80.

DETECTING AND PROCESSING CURVED TEXT IN DOCUMENTS

      
Application number 18593684
Status Pending
Filing date 2024-03-01
First publication date 2025-09-04
Owner Adobe Inc. (USA)
Inventor(s)
  • Sachdeva, Paridhi
  • Patel, Parth
  • Gupta, Mohit

Abstract

Embodiments are disclosed for a process of detecting and processing curved text in a document using a digital design system. The method may include identifying, by a page segmentation model, a plurality of paragraph objects in a document. The disclosed systems and methods further comprise determining that a paragraph object of the plurality of paragraph objects includes curved text in view of positions of baselines of text runs in the paragraph object. The processing of the curved text in the paragraph object can include determining spacing data for text runs of the curved text in the paragraph object. The disclosed systems and methods further comprise presenting output data representing the curved text using the spacing data for the text runs of the curved text.
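The baseline-position test for curved text can be sketched as a straight-line fit over the text-run baseline points: if the points deviate from the best-fitting line by more than a tolerance, the paragraph is flagged as curved. This is a simplified stand-in for the disclosed positional analysis; the least-squares fit and pixel tolerance are assumptions, not details from the patent:

```python
import numpy as np

def is_curved(baseline_points, tol=1.0):
    """Return True when the text-run baseline points (x, y) deviate from
    the best straight line by more than `tol` pixels."""
    xs = np.array([p[0] for p in baseline_points], dtype=float)
    ys = np.array([p[1] for p in baseline_points], dtype=float)
    slope, intercept = np.polyfit(xs, ys, 1)        # least-squares line fit
    residuals = np.abs(ys - (slope * xs + intercept))
    return float(residuals.max()) > tol
```

Baselines lying on any straight line, including a slanted one, pass the test; an arched arrangement of text runs is flagged as curved.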

IPC classes

  • G06F 40/103 - Formatting, i.e. changing of presentation of documents
  • G06V 30/414 - Extracting geometrical structure, e.g. layout tree; Block segmentation, e.g. bounding boxes for graphics or text

81.

SUMMARY PAGE GENERATION USING DOCUMENTS

      
Application number 18593834
Status Pending
Filing date 2024-03-01
First publication date 2025-09-04
Owner Adobe Inc. (USA)
Inventor(s)
  • Nair, Inderjeet
  • Kumar, Varinder
  • Bandyopadhyay, Sambaran
  • Chhaya, Niyati Himanshu
  • Saxena, Apoorv

Abstract

Embodiments are disclosed for summary page generation using a document. The method may include receiving a text document. The method may further include generating a text summary based on the text document and a structured representation of the text summary using a document summarization model. The method may further include generating an image generation prompt based on the text summary and the structured representation of the text summary using a prompt generator. The method may further include generating a multimedia summary document corresponding to the text document using a diffusion model and the image generation prompt. The multimedia summary document includes generated background imagery based on the text summary. The multimedia summary document includes at least a portion of the text summary, which is placed within the multimedia summary document based on the structured representation of the text summary.

IPC classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06F 40/103 - Formatting, i.e. changing of presentation of documents
  • G06F 40/40 - Processing or translation of natural language
  • G06T 11/20 - Drawing from basic elements, e.g. lines or circles

82.

DIGITAL CONTENT GENERATION FROM A TEXT-BASED INPUT

      
Application number 18594376
Status Pending
Filing date 2024-03-04
First publication date 2025-09-04
Owner Adobe Inc. (USA)
Inventor(s)
  • Chan, Yeuk-Yin
  • Zhou, Tongyu

Abstract

Digital content generation techniques are described that are performed using a text-based input. A text-based input is received and asset recommendation data is generated based on the text-based input using a machine-learning model, e.g., a large language model (LLM). A selection of a plurality of assets is received from the asset recommendation data, and a selection is also received of at least one interaction from a plurality of interactions for the plurality of assets. The digital content is then generated with the selected interaction applied between the selected assets.

IPC classes

  • G06F 16/9035 - Filtering based on additional data, e.g. user or group profiles

83.

CUSTOM IMAGE AND CONCEPT COMBINER USING DIFFUSION MODELS

      
Application number 18595096
Status Pending
Filing date 2024-03-04
First publication date 2025-09-04
Owner Adobe Inc. (USA)
Inventor(s)
  • Aggarwal, Pranav
  • Harikumar, Midhun
  • Ravi, Hareesh
  • Kale, Ajinkya

Abstract

Techniques for generation of images based on a variety of input conditions or modalities are described. In one embodiment, one or more processing devices receive a plurality of input modalities comprising multiple images and a text input in a natural language. The processing devices generate image embeddings for the multiple images and a text embedding for the text input. The processing devices, using a machine learning model, generate an output image based on the image embeddings and the text embedding. The output image includes portions of the multiple images.

IPC classes

  • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction

84.

Typeface

      
Application number 29896415
Patent number D1091687
Status Granted - in force
Filing date 2023-07-03
First publication date 2025-09-02
Grant date 2025-09-02
Owner Adobe Inc. (USA)
Inventor(s) Slimbach, Robert

85.

MASK-FREE COMPOSITE IMAGE GENERATION

      
Application number 18584022
Status Pending
Filing date 2024-02-22
First publication date 2025-08-28
Owner ADOBE INC. (USA)
Inventor(s)
  • Canet Tarrés, Gemma
  • Kim, Soo Ye
  • Lin, Zhe
  • Zhang, Zhifei
  • Zhang, Jianming
  • Song, Yizhi

Abstract

A method, apparatus, non-transitory computer readable medium, and system include obtaining a first image depicting a background scene and a second image depicting a foreground element, generating a guidance embedding based on the second image, and generating, using an image generation model, a synthetic image depicting the foreground element and the background scene based on the first image and the guidance embedding, wherein the image generation model determines a location of the foreground element within the synthetic image in light of the background scene.

IPC classes

  • G06T 5/77 - Retouching; Inpainting; Scratch removal
  • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
  • G06T 5/75 - Unsharp masking
  • G06T 7/149 - Segmentation; Edge detection involving deformable models, e.g. active contour models

86.

IDENTIFICATION OF OBJECTS IN DIGITAL IMAGE

      
Application number 18586491
Status Pending
Filing date 2024-02-25
First publication date 2025-08-28
Owner Adobe Inc. (USA)
Inventor(s)
  • Soni, Sachin
  • Mondal, Prasenjit

Abstract

Object identification techniques from a digital image are described. In an implementation, edges of an object are determined by analyzing gradients from a digital image. A structure of the object is computed by detecting line segments from the digital image. A boundary of the object is defined based on the edges and the structure. A display of the object is edited in a user interface based on the boundary using an edit operation.

IPC classes

  • G06V 10/94 - Hardware or software architectures specially adapted for image or video understanding
  • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
  • G06T 7/12 - Edge-based segmentation
  • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
  • G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
  • G06V 10/36 - Applying a local operator, i.e. means to operate on image points situated in the vicinity of a given point; Non-linear local filtering operations, e.g. median filtering
  • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
  • G06V 10/77 - Processing of image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
  • G06V 10/776 - Validation; Performance evaluation
  • G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations

87.

MANAGING DIGITAL ASSETS STORED AS COMPONENTS AND PACKAGED FILES

      
Application number 19205479
Status Pending
Filing date 2025-05-12
First publication date 2025-08-28
Owner Adobe Inc. (USA)
Inventor(s)
  • Switzer, Stanley J
  • Horns, Roey
  • Goldman, Oliver I
  • Vitrano, Michael
  • Wixson, Julian R

Abstract

The present disclosure relates to a digital asset synchronization system that provides improved digital asset management and synchronization of a digital asset stored either within a component database or a packaged file. For example, the digital asset synchronization system enables a set of components that makes up a digital asset to appear as a singular packaged file, while also maintaining the benefits of having the digital asset made up of the components. In this manner, the digital asset synchronization system provides a bridge between a digital asset stored in a packaged file format and conventional file formats. In addition, the digital asset synchronization system also provides digital asset management and improved synchronization between a client device and a cloud storage system.

IPC classes

  • G06F 3/06 - Digital input from, or digital output to, record carriers
  • G06F 16/182 - Distributed file systems
  • G06F 16/22 - Indexing; Data structures therefor; Storage structures
  • G06F 16/25 - Integrating or interfacing systems involving database management systems
  • H04L 67/10 - Protocols in which an application is distributed across nodes in the network
  • H04L 67/1095 - Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
  • H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]

88.

CONTOUR-BASED SNAPPING OF GRAPHICAL OBJECTS

      
Application number 18585602
Status Pending
Filing date 2024-02-23
First publication date 2025-08-28
Owner Adobe Inc. (USA)
Inventor(s)
  • Singh, Siddharth J.
  • Agrawal, Vivek
  • Mishra, Nilesh
  • Chakravarti, Deepanjana

Abstract

In some embodiments, a computing system provides a graphical interface that displays one or more graphical objects including a moving object and a static object. The computing system generates an impact contour for the moving object that has a predefined distance from a first boundary of the moving object. Based on detecting that the impact contour of the moving object intersects a second boundary of the static object, the computing system determines a first snapping point on the first boundary of the moving object and a second snapping point on the second boundary of the static object. The computing system updates the graphical interface to execute a snapping operation by translating the moving object to a location where the first snapping point and the second snapping point touch each other.
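The impact-contour snapping described above can be sketched for a simplified axis-aligned case, in which the contour is the moving object's bounding box inflated by the predefined distance and only a right-edge-to-left-edge snap is handled. All names, the box representation, and the single snap direction are assumptions for illustration, not the patent's general contour mechanism:

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: float
    y: float
    w: float
    h: float

def impact_contour(box, d):
    # Inflate the moving object's boundary by the predefined distance d.
    return Box(box.x - d, box.y - d, box.w + 2 * d, box.h + 2 * d)

def intersects(a, b):
    # Axis-aligned overlap test between two boxes.
    return a.x < b.x + b.w and b.x < a.x + a.w and a.y < b.y + b.h and b.y < a.y + a.h

def snap_right_edge(moving, static, d):
    """When the inflated contour touches the static object, translate the
    moving object so its right edge (first snapping point) meets the static
    object's left edge (second snapping point); otherwise leave it alone."""
    if intersects(impact_contour(moving, d), static):
        return Box(static.x - moving.w, moving.y, moving.w, moving.h)
    return moving
```

For example, a 10-wide object at x=0 approaching a static object at x=12 with d=3 triggers the snap, since the inflated contour reaches x=13; the moving object is translated so the two boundaries touch.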

IPC classes

  • G06T 7/13 - Edge detection
  • G06T 7/11 - Region-based segmentation
  • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
  • G06T 11/00 - 2D [Two Dimensional] image generation

89.

GENERATING AND EXECUTING ACTION PLANS INVOLVING SOFTWARE TOOLS VIA A LARGE LANGUAGE MODEL

      
Application number 18589065
Status Pending
Filing date 2024-02-27
First publication date 2025-08-28
Owner Adobe Inc. (USA)
Inventor(s)
  • Zhuang, Yuchen
  • Chen, Xiang
  • Bursztyn, Victor Soares
  • Yu, Tong
  • Sarkhel, Somdeb
  • Mitra, Saayan
  • Rossi, Ryan A

Abstract

Methods, systems, and non-transitory computer readable storage media are disclosed for generating action plans utilizing a large language model with a best-first search model. The disclosed system determines a request to utilize a large language model to generate an action plan via one or more software tools. The disclosed system generates the action plan by traversing a decision tree comprising an action space involving the one or more software tools by iteratively: selecting, utilizing a best-first search model, an action from a set of possible actions in the action space of the decision tree; and expanding, utilizing the best-first search model, the action space of the decision tree to include an additional set of possible actions. The disclosed system also executes the action plan via one or more interactions with the one or more software tools according to the action plan.
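The iterative select-and-expand loop over the decision tree can be sketched as a generic best-first search; a minimal sketch in which `score` stands in for the large language model's value estimate of an action and `expand` returns the additional possible actions after a partial plan. All function names and the toy tool actions are hypothetical:

```python
import heapq

def best_first_plan(initial_actions, expand, score, is_goal, max_steps=100):
    """Best-first search over a decision tree of tool actions: repeatedly pop
    the highest-scoring partial plan, and either return it (goal reached) or
    expand the action space with its successor actions."""
    # Negate scores because heapq is a min-heap.
    frontier = [(-score(a), [a]) for a in initial_actions]
    heapq.heapify(frontier)
    for _ in range(max_steps):
        if not frontier:
            break
        _, plan = heapq.heappop(frontier)
        if is_goal(plan):
            return plan
        for action in expand(plan):
            heapq.heappush(frontier, (-score(action), plan + [action]))
    return None
```

With toy scores, the search explores the highest-valued tool action first and only expands cheaper branches if that one fails to reach the goal.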

IPC classes

  • G06N 3/0455 - Auto-encoder networks; Encoder-decoder networks

90.

REFERENCE IMAGE BASED MATERIAL RETRIEVAL

      
Application number 18589075
Status Pending
Filing date 2024-02-27
First publication date 2025-08-28
Owner Adobe Inc. (USA)
Inventor(s)
  • Deschaintre, Valentin Mathieu
  • Martin, Rosalie Noémie Raphaëlle

Abstract

Techniques for reference image based material retrieval are described that support identification of procedural materials based on visual features of input images. A processing device, for instance, receives an input image that has a particular visual appearance. The processing device generates a histogram representation of the input image that represents a color prominence of the input image and generates a color distribution based on the color prominence. The processing device leverages a vision language model to filter candidate procedural materials by a semantic similarity to the input image. The processing device then identifies a procedural material that has a visual similarity to the particular visual appearance by comparing the color distribution for the input image to color distributions associated with the filtered candidate procedural materials. In this way, the techniques described herein support efficient retrieval of procedural materials based on color and on semantic features of the input image.
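The colour-distribution comparison can be sketched with per-channel histograms and histogram intersection; a simplified stand-in for the disclosed retrieval step, assuming the vision-language filtering has already produced the candidate list (the function names, bin count, and similarity measure are illustrative assumptions):

```python
import numpy as np

def color_histogram(pixels, bins=8):
    # Normalized per-channel histogram of an (N, 3) array of RGB pixels,
    # representing the colour prominence of an image.
    hists = [np.histogram(pixels[:, c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def histogram_similarity(h1, h2):
    # Histogram intersection: 1.0 for identical colour distributions.
    return float(np.minimum(h1, h2).sum())

def retrieve_material(query_hist, candidates):
    # candidates: (name, histogram) pairs, assumed already filtered for
    # semantic similarity by the vision-language model upstream.
    return max(candidates,
               key=lambda c: histogram_similarity(query_hist, c[1]))[0]
```

A query image dominated by red pixels would retrieve a red candidate material over a blue one, since their colour distributions intersect more strongly.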

IPC classes

  • G06T 5/40 - Image enhancement or restoration using histogram techniques
  • G06T 7/90 - Determination of colour characteristics
  • G06T 17/00 - 3D [Three Dimensional] modelling for computer graphics
  • G06V 10/56 - Extraction of image or video features relating to colour
  • G06V 10/60 - Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
  • G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces

91.

SELF ATTENTION REFERENCE FOR IMPROVED DIFFUSION PERSONALIZATION

      
Application number 18817915
Status Pending
Filing date 2024-08-28
First publication date 2025-08-28
Owner ADOBE INC. (USA)
Inventor(s)
  • Kolkin, Nicholas Isaac
  • Misraa, Aashish Kumar
  • Harikumar, Midhun
  • Shechtman, Elya

Abstract

A method, apparatus, non-transitory computer readable medium, and system for image processing include obtaining a reference image and an input prompt describing an image element, identifying an object from the reference image, generating, using an image generation model, image features representing the object based on the reference image, and generating, using the image generation model, a synthetic image depicting the image element and the object based on the input prompt and the image features from the reference image.

IPC classes

  • G06T 11/00 - 2D [Two Dimensional] image generation
  • G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

92.

MODIFYING DIGITAL IMAGES VIA MULTI-LAYERED SCENE COMPLETION FACILITATED BY ARTIFICIAL INTELLIGENCE

      
Application number 19206944
Status Pending
Filing date 2025-05-13
First publication date 2025-08-28
Owner Adobe Inc. (USA)
Inventor(s)
  • Pakhomov, Daniil
  • Liu, Qing
  • Ding, Zhihong
  • Cohen, Scott
  • Lin, Zhe
  • Zhang, Jianming
  • Zhang, Zhifei
  • Dibua, Ohiremen
  • Souppe, Mariette
  • Singh, Krishna Kumar
  • Brandt, Jonathan

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via multi-layered scene completion techniques facilitated by artificial intelligence. For instance, in some embodiments, the disclosed systems receive a digital image portraying a first object and a second object against a background, where the first object occludes a portion of the second object. Additionally, the disclosed systems pre-process the digital image to generate a first content fill for the portion of the second object occluded by the first object and a second content fill for a portion of the background occluded by the second object. After pre-processing, the disclosed systems detect one or more user interactions to move or delete the first object from the digital image. The disclosed systems further modify the digital image by moving or deleting the first object and exposing the first content fill for the portion of the second object.

IPC classes

  • G06T 5/77 - Retouching; Inpainting; Scratch removal
  • G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range, for image manipulation, e.g. dragging, rotation, expansion or change of colour
  • G06T 5/70 - Denoising; Smoothing
  • G06T 7/11 - Region-based segmentation
  • G06T 7/194 - Segmentation; Edge detection involving foreground-background segmentation
  • G06T 7/70 - Determining position or orientation of objects or cameras

93.

Typeface

      
Application number 29896424
Patent number D1090689
Status Granted - in force
Filing date 2023-07-03
First publication date 2025-08-26
Grant date 2025-08-26
Owner Adobe Inc. (USA)
Inventor(s) Slimbach, Robert

94.

OFFSETTING CAMERA FILTER SHIFT

      
Application number 18443667
Status Pending
Filing date 2024-02-16
First publication date 2025-08-21
Owner Adobe Inc. (USA)
Inventor(s)
  • Kaiser, Adrien Michel Paul
  • Hold-Geoffroy, Yannick
  • Deschaintre, Valentin Mathieu
  • Derel, Jerome Eric Christophe
  • Bennaceur, Adel

Abstract

In implementation of techniques for offsetting camera filter shift, a computing device implements an offset system to capture a first digital image using a filter at a first position relative to an image capture device and to capture a second digital image using the filter at a second position relative to the image capture device resulting from movement of the filter between the first position and the second position. The offset system determines a filter shift resulting from the movement by comparing the first and second digital images. The offset system then controls an offset of a portion of the image capture device based on the filter shift.

IPC classes

  • H04N 23/55 - Optical parts specially adapted for electronic image sensors; Mounting thereof
  • G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
  • G06T 11/60 - Editing figures and text; Combining figures or text

95.

BUILDING VISION-LANGUAGE MODELS USING MASKED DISTILLATION FROM FOUNDATION MODELS

      
Application number 18443808
Status Pending
Filing date 2024-02-16
First publication date 2025-08-21
Owner Adobe Inc. (USA)
Inventor(s)
  • Jenni, Simon
  • Sameni, Sepehr
  • Kafle, Kushal
  • Tan, Hao

Abstract

The present disclosure relates to systems, non-transitory computer-readable media, and methods for training and implementing a vision-language model using masked distillation and contrastive image-text training. In particular, in one or more embodiments, the disclosed systems generate, utilizing a vision encoder, an image embedding from a masked digital image comprising a digital image with one or more masked patches. In some embodiments, the disclosed systems generate, utilizing a text encoder, a text embedding from a masked text phrase. In one or more embodiments, the disclosed systems generate, utilizing the vision-language model from the image embedding and the text embedding, a predicted text reconstruction of the text description and a predicted image reconstruction of the digital image. In some embodiments, the disclosed systems modify parameters of the vision-language model according to a masked distillation loss between the predicted text reconstruction and a text reconstruction generated by a pretrained large language model.
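
The masked distillation objective described above can be sketched in a few lines: the student's reconstruction embeddings are penalised against a frozen teacher's outputs only at masked positions. This is a hedged NumPy illustration, not the disclosed training code; all shapes, names, and the toy data are assumptions for the example.

```python
import numpy as np

def masked_distillation_loss(student_recon, teacher_recon, mask):
    """Mean-squared distillation loss computed only over masked positions.

    student_recon, teacher_recon: (seq_len, dim) reconstruction embeddings
    mask: boolean (seq_len,) array, True where the input token was masked
    """
    diff = student_recon[mask] - teacher_recon[mask]
    return float(np.mean(diff ** 2))

rng = np.random.default_rng(0)
teacher = rng.normal(size=(8, 16))                    # stand-in for the frozen LLM output
student = teacher + rng.normal(scale=0.1, size=(8, 16))  # imperfect student prediction
mask = np.array([True, False, True, False, False, True, False, False])
loss = masked_distillation_loss(student, teacher, mask)
```

In the described system an analogous term would also exist on the image side, and the loss gradient would update only the student's parameters.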

IPC classes

  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
  • G06F 40/40 - Processing or translation of natural language
  • G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
  • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
  • G06V 10/774 - Generating sets of training patterns; Processing image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using data integration or reduction, e.g. principal component analysis [PCA], independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation; Bootstrap methods, e.g. bagging or boosting
  • G06V 10/776 - Validation; Performance evaluation
  • G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations

96.

ADOBE FIREFLY

      
Application number 019235685
Status Pending
Filing date 2025-08-21
Owner Adobe Inc. (USA)
Nice classes 42 - Scientific, technological and industrial services, research and design

Goods and services

Software as a service (SAAS) services featuring software for using artificial intelligence models for content generation and management, all relating to sound, audio, and music generation from user prompts and for generating translations; software as a service (SAAS) services featuring software for using artificial intelligence models for content generation and management.

97.

CHANNEL INCREMENTALITY MEASUREMENT USING CAUSAL FOREST

      
Application number 18443489
Status Pending
Filing date 2024-02-16
First publication date 2025-08-21
Owner ADOBE INC. (USA)
Inventor(s)
  • Gao, Michael
  • Huang, Bei
  • Yuan, Qilong

Abstract

One or more aspects of the method, apparatus, and non-transitory computer readable medium include obtaining content presentation data; generating, using a machine learning model, predicted user interaction data by computing a plurality of decision tree regressors, wherein nodes of the decision tree regressors are trained to infer a causal relationship between a user interaction variable and a treatment variable; and presenting content to the user based on the predicted user interaction data. The causal relationship is based on maximizing a difference in the relationship between a user interaction variable and a treatment variable across the splits of a tree.
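
The split criterion summarised above, choosing the split that maximises the difference in estimated treatment effect between child nodes, can be sketched on a single feature. This is an illustrative toy, not the disclosed implementation; the naive difference-in-means effect estimator and the synthetic channel data are assumptions.

```python
import numpy as np

def treatment_effect(y, t):
    """Naive node-level effect: mean treated outcome minus mean control outcome."""
    if t.sum() == 0 or t.sum() == len(t):
        return 0.0  # effect is unidentified without both groups present
    return y[t == 1].mean() - y[t == 0].mean()

def best_causal_split(x, y, t):
    """Scan thresholds on one feature; keep the split that maximises the
    difference in estimated treatment effect between the two children."""
    best_thr, best_gap = None, -np.inf
    for thr in np.unique(x)[:-1]:
        left = x <= thr
        gap = abs(treatment_effect(y[left], t[left]) -
                  treatment_effect(y[~left], t[~left]))
        if gap > best_gap:
            best_thr, best_gap = float(thr), float(gap)
    return best_thr, best_gap

# Synthetic data: the treatment lifts outcomes only when x > 0.5
rng = np.random.default_rng(0)
x = rng.random(400)
t = rng.integers(0, 2, 400)
y = np.where(x > 0.5, 2.0 * t, 0.0) + rng.normal(scale=0.1, size=400)
thr, gap = best_causal_split(x, y, t)  # threshold should land near 0.5
```

A causal forest repeats this kind of split search recursively over many randomised trees and averages the per-leaf effect estimates.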

IPC classes

  • G06Q 30/0242 - Determination of advertisement effectiveness
  • G06Q 30/0201 - Market modelling; Market analysis; Collecting market data
  • G06Q 30/0272 - Period of advertisement exposure

98.

ARTIFICIAL INTELLIGENT (AI) AGENT CONTROL AND PROGRESS INDICATOR

      
Application number 18581762
Status Pending
Filing date 2024-02-20
First publication date 2025-08-21
Owner Adobe Inc. (USA)
Inventor(s)
  • Saad, Michele
  • Mejia, Irgelkha
  • Jain, Ajay

Abstract

Artificial intelligence (AI) agent control and progress indicator techniques are described. A search-query type of a search query, for instance, is detected using a machine-learning model. Responsive to detecting that the search-query type is a first type, the search query is communicated for processing by an algorithmic search engine to generate a search result. Responsive to detecting that the search-query type is a second type, the search query is communicated for processing by an artificial intelligence (AI) search assistant implemented using a large language model (LLM) to generate the search result.
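
The routing logic reads as a simple dispatch on a predicted query type. A minimal sketch follows, with a toy word-count classifier standing in for the machine-learning model; all names and stand-in functions are hypothetical, not the disclosed system.

```python
def route_query(query, classify, keyword_search, llm_assistant):
    """Route a search query to an algorithmic engine or an LLM assistant
    based on a predicted query type."""
    query_type = classify(query)
    if query_type == "navigational":
        return keyword_search(query)   # first type: classic index lookup
    return llm_assistant(query)        # second type: conversational answer

# Toy stand-ins: short queries treated as navigational, long ones as conversational
classify = lambda q: "navigational" if len(q.split()) <= 3 else "conversational"
keyword_search = lambda q: f"[index results for '{q}']"
llm_assistant = lambda q: f"[LLM answer for '{q}']"

print(route_query("adobe firefly", classify, keyword_search, llm_assistant))
print(route_query("how do I remove noise from a scanned photo",
                  classify, keyword_search, llm_assistant))
```

The progress-indicator aspect would sit around the LLM branch, since that path is typically much slower than the index lookup.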

IPC classes

  • G06F 16/2457 - Query processing with adaptation to user needs
  • G06F 16/248 - Presentation of query results
  • G06F 16/28 - Databases characterised by their database models, e.g. relational or object models

99.

DIGITAL VIDEO EDITING BASED ON A TARGET DIGITAL IMAGE

      
Application number 18583067
Status Pending
Filing date 2024-02-21
First publication date 2025-08-21
Owner Adobe Inc. (USA)
Inventor(s)
  • Sree Harsha, Sai
  • Agarwal, Dhwanit
  • Revanur, Ambareesh
  • Agrawal, Shradha

Abstract

Digital video editing techniques are described that are based on a target digital image. In one or more implementations, inputs are received. The inputs include a target text prompt, a target digital image depicting a target object, and a source digital video having a plurality of frames depicting a source object. Regions-of-interest are identified in the plurality of frames of the source digital video, respectively, based on the target text prompt and the target digital image using a machine-learning model, e.g., a diffusion model. A plurality of frames of a target digital video are generated as having the target object using a generative machine-learning model. The generating is based on the regions-of-interest, the target digital image, the source digital video, and a source text prompt describing the source digital video.
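
Once regions-of-interest and generated target-object patches are available for each frame, the final assembly reduces to compositing each patch into its frame. The sketch below illustrates only that last step, with rectangular ROIs as a simplifying assumption; the application itself relies on generative models for the region identification and synthesis.

```python
import numpy as np

def paste_generated_patches(frames, patches, rois):
    """Replace each frame's region-of-interest with a generated patch.

    frames:  list of (H, W, 3) uint8 arrays (the source video)
    patches: list of (h, w, 3) uint8 arrays (generated target-object crops)
    rois:    list of (top, left) offsets locating each patch in its frame
    """
    edited = []
    for frame, patch, (top, left) in zip(frames, patches, rois):
        out = frame.copy()
        h, w = patch.shape[:2]
        out[top:top + h, left:left + w] = patch
        edited.append(out)
    return edited

# Two black 8x8 frames, each receiving a white 2x2 patch at a different ROI
frames = [np.zeros((8, 8, 3), dtype=np.uint8) for _ in range(2)]
patches = [np.full((2, 2, 3), 255, dtype=np.uint8) for _ in range(2)]
edited = paste_generated_patches(frames, patches, [(1, 1), (3, 4)])
```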

IPC classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06T 5/70 - Denoising; Smoothing

100.

Splatting-based Digital Image Synthesis

      
Application number 19185046
Status Pending
Filing date 2025-04-21
First publication date 2025-08-21
Owner Adobe Inc. (USA)
Inventor(s)
  • Niklaus, Simon
  • Chen, Jiawen

Abstract

Digital image synthesis techniques are described that leverage splatting, i.e., forward warping. In one example, a first digital image and a first optical flow are received by a digital image synthesis system. A first splat metric and a first merge metric are constructed by the digital image synthesis system that define a weighted map of respective pixels. From this, the digital image synthesis system produces a first warped optical flow and a first warp merge metric corresponding to an interpolation instant by forward warping the first optical flow based on the splat metric and the merge metric. A first warped digital image corresponding to the interpolation instant is formed by the digital image synthesis system by backward warping the first digital image based on the first warped optical flow.
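
Forward warping (splatting) with a merge metric can be sketched directly: each source pixel pushes its value to the target location given by the flow, weighted by a splat metric, and overlapping contributions are normalised. The following is a minimal nearest-pixel illustration, not the disclosed implementation; names and shapes are assumptions.

```python
import numpy as np

def splat_forward(values, flow, weights, shape):
    """Forward-warp per-pixel values along an optical flow using
    weighted splatting, returning the warped map and its coverage.

    values:  (H, W) quantity to warp (e.g. a flow component or colour channel)
    flow:    (H, W, 2) per-pixel (dy, dx) displacement
    weights: (H, W) splat metric (importance of each source pixel)
    """
    H, W = shape
    acc = np.zeros(shape)
    norm = np.zeros(shape)
    for y in range(values.shape[0]):
        for x in range(values.shape[1]):
            ty = int(np.rint(y + flow[y, x, 0]))  # nearest target row
            tx = int(np.rint(x + flow[y, x, 1]))  # nearest target column
            if 0 <= ty < H and 0 <= tx < W:
                acc[ty, tx] += weights[y, x] * values[y, x]
                norm[ty, tx] += weights[y, x]
    warped = np.where(norm > 0, acc / np.maximum(norm, 1e-12), 0.0)
    return warped, norm

# Example: a uniform flow of (0, +1) shifts every pixel one column right
vals = np.arange(16, dtype=float).reshape(4, 4)
flow = np.zeros((4, 4, 2))
flow[..., 1] = 1.0
warped, coverage = splat_forward(vals, flow, np.ones((4, 4)), (4, 4))
```

In the described pipeline, the quantity being splatted is the optical flow itself; the warped flow then drives a backward warp of the image, which avoids holes in the final output.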

IPC classes

  • G06T 7/269 - Motion analysis using gradient-based methods
  • G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
  • G06T 11/00 - 2D [Two Dimensional] image generation