Unity Technologies SF

United States of America


1-100 of 222 for Unity Technologies SF
Refine by
IP Type
        Patent 205
        Trademark 17
Jurisdiction
        United States 207
        International 7
        Canada 5
        Europe 3
Date
April 2025 1
2025 (year to date) 3
2024 11
2023 28
2022 98
IPC Class
G06T 13/40 - Three-dimensional [3D] animation of characters, e.g. humans, animals or virtual beings 49
G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation 29
G06T 19/20 - Editing of three-dimensional [3D] images, e.g. changing shapes or colours, aligning objects or positioning parts 18
G06T 5/00 - Image enhancement or restoration 18
H04N 5/222 - Studio circuitry; studio devices; studio equipment 17
Nice Class
42 - Scientific, technological and industrial services, research and design 16
09 - Scientific and electric apparatus and instruments 13
35 - Advertising; Business affairs 5
41 - Education, entertainment, sporting and cultural activities 5
28 - Games, toys, sporting articles 2
Status
Pending 23
Registered / In force 199

1.

DEFORMING POINTS IN SPACE USING A CURVE DEFORMER

      
Application Number 18990263
Status Pending
Filing Date 2024-12-20
First Publication Date 2025-04-24
Owner Unity Technologies SF (USA)
Inventor(s) Singh, Karan

Abstract

Disclosed is a method to deform points in space using a curve. A processor obtains the curve and a point on the curve. The processor computes a tangent to the curve at the point and determines a first vector different from the tangent to the curve. The processor constructs a plane at the point on the curve based on the tangent and the first vector. The processor defines first and second sides of the curve, where the first and second sides include first and second pluralities of points on opposite sides of the plane, respectively. The processor defines a deformation associated with the curve and influencing the points in space, where a magnitude of the deformation affecting a second point among the points in space depends on whether the second point is on the first or second side. The processor applies the deformation to the second point.
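
As a rough illustration of the side-dependent deformation this abstract describes, the following minimal Python sketch classifies a point against a plane built from the curve tangent and a chosen first vector, then scales the offset by side; the sampled-polyline curve, the fixed up vector and the falloff gains are illustrative assumptions, not details taken from the application.

```python
import numpy as np

def deform_point(curve, point, up=np.array([0.0, 0.0, 1.0]),
                 push=0.5, side_gain=(1.0, 0.25)):
    """Offset `point` along the plane normal; the magnitude depends on
    which side of the plane (tangent x up) the point falls on."""
    # Closest curve sample and a finite-difference tangent at that sample.
    i = int(np.argmin(np.linalg.norm(curve - point, axis=1)))
    j = min(i + 1, len(curve) - 1)
    tangent = curve[j] - curve[max(i - 1, 0)]
    tangent = tangent / np.linalg.norm(tangent)

    # Plane through curve[i] spanned by the tangent and the chosen first vector.
    normal = np.cross(tangent, up)
    normal = normal / np.linalg.norm(normal)

    # The sign of the distance to the plane decides the side, and so the gain.
    side = np.dot(point - curve[i], normal)
    gain = side_gain[0] if side >= 0 else side_gain[1]
    return point + gain * push * normal

# Example: a straight curve along x; points on opposite sides get different offsets.
curve = np.stack([np.linspace(0, 1, 11), np.zeros(11), np.zeros(11)], axis=1)
print(deform_point(curve, np.array([0.5, 0.3, 0.0])))
print(deform_point(curve, np.array([0.5, -0.3, 0.0])))
```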

IPC Classes

  • G06T 19/20 - Editing of three-dimensional [3D] images, e.g. changing shapes or colours, aligning objects or positioning parts
  • G06T 7/70 - Determining the position or orientation of objects or cameras
  • G06T 13/40 - Three-dimensional [3D] animation of characters, e.g. humans, animals or virtual beings
  • G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation

2.

SYSTEM AND METHODS FOR APPLICATION INTERACTION WITH POLYSPATIAL APPLICATION GRAPHS

      
Application Number 18777319
Status Pending
Filing Date 2024-07-18
First Publication Date 2025-01-23
Owner Unity Technologies SF (USA)
Inventor(s)
  • Condon, Timothy S.
  • Vukicevic, Vladimir
  • Goodrich, Ryan

Abstract

A system and method for application interaction and/or communication, the system maintaining a polyspatial input/output (I/O) graph specifying how applications can interact within a unified logical space. In some embodiments, the polyspatial graph specifies an application hierarchy comprising at least a host application, one or more hosted applications to be executed within the host application, one of the hosted applications corresponding to an intermediate host application for an additional application. The host application and the one or more hosted applications are executed, the executing comprising: receiving, at the host application, input to be transmitted to the one or more hosted applications; coordinating, by the host application, interactions among the one or more hosted applications; generating, by the host application, of an aggregated output based on outputs of the hosted applications and comprising a scene graph; and displaying, by the host application, of a display based on the generated aggregated output.

IPC Classes

  • G06F 9/54 - Interprogram communication
  • G06F 3/04815 - Interaction taking place in an environment based on metaphors or objects with a three-dimensional display, e.g. changing the user's viewpoint with respect to the environment or the object

3.

UNIVERSAL SERVER AND HOST SYSTEM

      
Application Number 18777330
Status Pending
Filing Date 2024-07-18
First Publication Date 2025-01-23
Owner Unity Technologies SF (USA)
Inventor(s)
  • Condon, Timothy S.
  • Vukicevic, Vladimir

Abstract

A universal server and host system and method, the system configuring a universal server to transmit, to one or more universal hosts, asset and scene information to generate one or more local scene graphs, each local scene graph replicating a scene graph associated with a simulation running at the universal server and being associated with a local simulation running at a universal host. Upon receiving input from the one or more universal hosts, the universal server updates an internal state based on the received input, generates commands encoding changes to an output state, and transmits the commands to the one or more universal hosts for updating each local scene graph at its respective universal host, at least one local scene graph to be rendered at a local device associated with its respective universal host. The universal server and the one or more universal hosts are applications.
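
As a loose illustration of the command-based replication this abstract describes, here is a minimal Python sketch in which a server applies each input to its own scene graph, encodes the change as a command, and broadcasts it to hosts that replay it locally; the Command and SceneGraph structures are placeholders invented for the example, not Unity data types.

```python
from dataclasses import dataclass, field

@dataclass
class Command:                     # one encoded change to the output state
    node: str
    prop: str
    value: object

@dataclass
class SceneGraph:                  # a flat stand-in for a real scene graph
    nodes: dict = field(default_factory=dict)
    def apply(self, cmd: Command):
        self.nodes.setdefault(cmd.node, {})[cmd.prop] = cmd.value

class UniversalServer:
    def __init__(self):
        self.state = SceneGraph()
        self.hosts = []
    def register(self, host: SceneGraph):
        host.nodes = {k: dict(v) for k, v in self.state.nodes.items()}  # initial replica
        self.hosts.append(host)
    def handle_input(self, node, prop, value):
        cmd = Command(node, prop, value)   # encode the change
        self.state.apply(cmd)              # update the server's internal state
        for host in self.hosts:            # hosts replay the command locally
            host.apply(cmd)

server = UniversalServer()
local_graph = SceneGraph()
server.register(local_graph)
server.handle_input("player", "position", (1.0, 0.0, 2.0))
print(local_graph.nodes)   # the local replica mirrors the server's scene graph
```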

IPC Classes

  • A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers turning an evolving game scene into an encoded video stream for transmitting to a mobile phone or a thin client
  • A63F 13/358 - Adapting the game course according to the network or server load, e.g. to reduce latency due to different connection speeds between clients
  • A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
  • G06T 15/20 - Perspective computation
  • G06T 17/00 - Three-dimensional [3D] modelling for computer graphics

4.

MIXED REALITY VOLUME CAMERAS

      
Application Number 18671901
Status Pending
Filing Date 2024-05-22
First Publication Date 2024-11-28
Owner Unity Technologies SF (USA)
Inventor(s)
  • Condon, Timothy S.
  • Vukicevic, Vladimir

Abstract

A method of rendering 3D content in a mixed reality environment is disclosed. A volume camera is defined within a first virtual environment. The volume camera is configured to capture a 3D volume of space. The defining includes setting parameters for a position or an orientation to be captured by the volume camera. A volume view transform is applied to the volume camera to establish a perspective of the volume camera within the 3D volume of space. The volume view transform is configured to define the position or the orientation relative to the 3D content in the 3D volume of space. Selected 3D content is processed within the 3D volume of space based on a configuration of the volume camera. The selected 3D content is rendered into a second virtual environment based on the configuration. The rendering includes adapting to a perspective of a user within the second virtual environment.

IPC Classes

  • G06T 15/20 - Perspective computation
  • G06T 19/00 - Transformation of three-dimensional [3D] models or images for computer graphics

5.

AI-ASSISTED TEXTURE GENERATION

      
Application Number 18669489
Status Pending
Filing Date 2024-05-20
First Publication Date 2024-11-21
Owner Unity Technologies SF (USA)
Inventor(s) Bocquelet, Florent Benjamin

Abstract

An AI-assisted texture generation system and method that generates a texture for an object such as a 3D mesh. At a given iteration, the system renders, from a viewpoint, one or more of an object depth and a cumulative texture associated with a cumulative pixel quality map; determines an estimated pixel quality map associated with the viewpoint; generates, at the viewpoint, a new texture associated with the object based on one or more of the object depth, the cumulative texture, and a refine mask; updates the cumulative texture based on the new texture; and updates the cumulative pixel quality map based on the estimated pixel quality map. In some examples, the refine mask is computed based on the cumulative pixel quality map and an estimated pixel quality map. In some examples, the new texture is generated using an augmented inpainting diffusion model for image generation.
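
A minimal sketch of one iteration of the cumulative update described above, assuming the refine mask is simply "where the new viewpoint sees the surface better than anything accumulated so far"; the array shapes and the mask rule are illustrative, and the diffusion model is replaced by a random stand-in.

```python
import numpy as np

def update_texture(cum_tex, cum_quality, new_tex, est_quality):
    """One iteration of the cumulative update: refine only where the new
    viewpoint is expected to see the surface better than before."""
    refine_mask = est_quality > cum_quality          # illustrative mask rule
    cum_tex = np.where(refine_mask[..., None], new_tex, cum_tex)
    cum_quality = np.maximum(cum_quality, est_quality)
    return cum_tex, cum_quality, refine_mask

H, W = 64, 64
cum_tex = np.zeros((H, W, 3)); cum_quality = np.zeros((H, W))
new_tex = np.random.rand(H, W, 3)       # stand-in for the diffusion model output
est_quality = np.random.rand(H, W)      # e.g. derived from view angle and depth
cum_tex, cum_quality, mask = update_texture(cum_tex, cum_quality, new_tex, est_quality)
```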

IPC Classes

6.

SINGLE-SUBJECT IMAGE GENERATION

      
Application Number 18645333
Status Pending
Filing Date 2024-04-24
First Publication Date 2024-10-24
Owner UNITY TECHNOLOGIES SF (USA)
Inventor(s) Reid, Alexander Robert

Abstract

A method of generating an image is disclosed. A mask and descriptive text associated with a subject are received. The descriptive text comprises a text prompt. The mask is resized to fit within a predefined bounding box and the resized mask is centered on a background image. The centered mask is filled with noise. Output of an image of the subject on a solid background is received from a generative AI model in response to a passing of a request to the generative AI model. The request includes the noise-filled mask and the descriptive text.
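
The mask preparation step lends itself to a short sketch. The following Python (NumPy only) resizes a binary subject mask into a fixed bounding box, centres it on a background image, and fills the masked area with noise, which is what would then be sent to the generative model together with the text prompt; the box size and nearest-neighbour resize are assumptions made for the example.

```python
import numpy as np

def prepare_request(mask, background, box=(256, 256)):
    """Resize the subject mask into a bounding box, centre it on the background,
    and fill the masked area with noise before calling the generative model."""
    bh, bw = box
    ys, xs = np.nonzero(mask)
    crop = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # Nearest-neighbour resize of the cropped mask into the bounding box.
    yy = np.linspace(0, crop.shape[0] - 1, bh).astype(int)
    xx = np.linspace(0, crop.shape[1] - 1, bw).astype(int)
    resized = crop[yy][:, xx]
    canvas = background.copy()
    h, w = background.shape[:2]
    y0, x0 = (h - bh) // 2, (w - bw) // 2
    region = canvas[y0:y0 + bh, x0:x0 + bw]
    noise = np.random.rand(*region.shape)
    canvas[y0:y0 + bh, x0:x0 + bw] = np.where(resized[..., None] > 0, noise, region)
    return canvas

mask = np.zeros((100, 80)); mask[20:70, 10:60] = 1
background = np.ones((512, 512, 3)) * 0.5
request_image = prepare_request(mask, background)
```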

IPC Classes

  • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
  • G06T 7/194 - Segmentation; edge detection involving foreground-background segmentation
  • G06T 13/80 - Two-dimensional [2D] animation, e.g. using programmable graphical patterns

7.

CROSS-PLATFORM COMPUTATIONAL GRAPH IMPORTING, CUSTOMIZATION AND EXECUTION

      
Application Number 18636172
Status Pending
Filing Date 2024-04-15
First Publication Date 2024-10-17
Owner Unity Technologies SF (USA)
Inventor(s)
  • Ribard, Alexandre
  • Coope, Giles
  • Sharpe, Tracy
  • Drouin, Sylvio Herve
  • Riccitiello, John Salvatore

Abstract

System and methods for importing, converting, optimizing and/or executing a computational graph or AST at an endpoint target. The system includes accessing an input computational graph corresponding to a trained machine-learning (ML) model; converting the input computational graph into an internal computational graph; based on determined characteristics of the internal computational graph, optimizing the internal computational graph to generate an optimized computational graph by applying one or more of at least a graph element reordering operation, a graph element fusing operation, or a graph element creation operation; converting the optimized computational graph to executable instructions enabled to be executed on an endpoint associated with a backend and a platform; generating associated scheduling instructions; and executing the executable instructions on the endpoint based on the scheduling instructions. The executable instructions can forgo references to the input, internal or optimized computational graphs, and/or be reused by other systems, engines or applications.
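
As an illustration of the kind of graph-element fusing pass the abstract mentions, here is a toy rewrite that merges a MatMul followed by an Add into a single FusedGemm node; the node dictionary format and op names are invented for the example and do not reflect the product's internal graph representation, and the pass assumes the MatMul has no other consumers.

```python
def fuse_matmul_add(nodes):
    """nodes: list of dicts {'name', 'op', 'inputs': [names]} in topological order."""
    out, by_name = [], {}
    for n in nodes:
        if n["op"] == "Add" and any(
            by_name.get(i, {}).get("op") == "MatMul" for i in n["inputs"]
        ):
            mm = next(by_name[i] for i in n["inputs"]
                      if by_name.get(i, {}).get("op") == "MatMul")
            other = [i for i in n["inputs"] if i != mm["name"]]
            fused = {"name": n["name"], "op": "FusedGemm",
                     "inputs": mm["inputs"] + other}
            out.remove(mm)                 # the fused node replaces both originals
            out.append(fused); by_name[n["name"]] = fused
        else:
            out.append(n); by_name[n["name"]] = n
    return out

graph = [
    {"name": "mm", "op": "MatMul", "inputs": ["x", "w"]},
    {"name": "y",  "op": "Add",    "inputs": ["mm", "b"]},
]
print(fuse_matmul_add(graph))  # [{'name': 'y', 'op': 'FusedGemm', 'inputs': ['x', 'w', 'b']}]
```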

IPC Classes

  • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
  • G06F 9/48 - Program initiating; program switching, e.g. by interrupt

8.

EFFICIENTLY PROVIDING DATA IN A CLOUD PIPELINE TO GEOGRAPHICALLY DISTRIBUTED USERS

      
Application Number 18160938
Status Pending
Filing Date 2023-01-27
First Publication Date 2024-08-01
Owner Unity Technologies SF (USA)
Inventor(s)
  • Barzel, Ronen
  • Marks, Joseph W.
  • Thurston, Iii, Kimball D.
  • Hillman, Peter M.
  • Demmocks, Graeme
  • Major, Gary
  • Sullivan, Carter Bart

Abstract

The system efficiently distributes data in a cloud pipeline to geographically distributed users. To enable efficient distribution of resources to multiple users, the production pipeline can be represented as two flow graphs: a flow graph of tasks as well as a flow graph of data. The flow graph of tasks can indicate a sequence of tasks, departments, and users in a production pipeline. The flow graph of data can indicate location of data needed for each task in the flow graph of tasks, and location of the users assigned to the particular task. Based on the next task that needs to be executed in the flow graph of tasks, the system can proactively gather the data from the various locations and deliver the data to the users assigned to executing the next task.
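
A small sketch of how the two flow graphs could be combined to pre-stage data for the next task: look up the assets the task needs and the regions of the users assigned to it, then emit the transfers to perform ahead of time. The graph structures, region names and assignment table are invented for the illustration.

```python
task_graph = {"layout": ["animation"], "animation": ["lighting"], "lighting": []}
data_graph = {"animation": {"rig.usd": "us-west", "layout.usd": "eu-central"}}
assignments = {"animation": {"alice": "eu-central", "bob": "ap-south"}}

def plan_transfers(completed_task):
    """For each task downstream of `completed_task`, list (task, asset, source,
    destination) moves so the data is already local when the assigned users start."""
    moves = []
    for task in task_graph.get(completed_task, []):
        for asset, src in data_graph.get(task, {}).items():
            for user, region in assignments.get(task, {}).items():
                if region != src:
                    moves.append((task, asset, src, region))
    return moves

print(plan_transfers("layout"))
```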

IPC Classes

  • G06Q 10/0631 - Planning, allocation, distribution or scheduling of enterprise or organisation resources
  • G06Q 10/0633 - Workflow analysis
  • G06Q 10/1093 - Calendar-based scheduling for persons or groups

9.

System and method for progressive enhancement of in-app augmented reality advertising

      
Application Number 17738879
Patent Number 12029982
Status Granted - In force
Filing Date 2022-05-06
First Publication Date 2024-07-09
Grant Date 2024-07-09
Owner Unity Technologies SF (USA)
Inventor(s)
  • Parisi, Anthony Salvatore
  • Stagg, Katrina Rose

Abstract

A method of performing operations for executing a progressive interactive augmented reality (AR) advertisement display during execution of a running application is disclosed. Based on a receiving of a first trigger event from the running application, a first aspect of the progressive AR advertisement is executed within a display window while the application is executing. The first aspect includes an interactive advertisement. Based on a receiving of a second trigger event from within the first aspect of the progressive AR advertisement, a second aspect of the progressive AR advertisement is executed in the display window while the application is executing. The second aspect includes an interactive AR advertisement.

IPC Classes

  • A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
  • A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
  • A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. as an overlay to simulate a head-up display [HUD] or to display a laser sight in a shooting game, for prompting player interaction, e.g. by displaying a game menu
  • A63F 13/61 - Generating or modifying game content before or while executing the game program, e.g. using tools specially adapted for game development or a game-integrated level editor, using advertising information
  • G06Q 30/0241 - Advertisements

10.

Systems and Methods for Anatomically-Driven 3D Facial Animation

      
Application Number 18058687
Status Pending
Filing Date 2022-11-23
First Publication Date 2024-05-23
Owner Unity Technologies SF (USA)
Inventor(s)
  • Singh, Karan
  • Choi, Byungkuk

Abstract

Embodiments described herein provide a three-dimensional (3D) facial processing system that can be used for animator-centric and anatomically-driven 3D facial modeling, animation and transfer. Specifically, a collection of muscle fiber curves may be considered as an anatomic basis, whose contraction and relaxation are defined as a fine-grained parameterization of human facial expression. An end-to-end modular deformation architecture may then be built using this representation to implement automatic optimization of the parameters of a specific face from high-quality dynamic facial scans; face animation driven by performance capture, keyframes, or dynamic simulation; interactive and direct manipulation of facial expression; and animation transfer from an actor to a character.

IPC Classes

  • G06T 13/40 - Three-dimensional [3D] animation of characters, e.g. humans, animals or virtual beings
  • G06T 11/60 - Editing of figures and text; combining figures or text
  • G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
  • G06V 40/18 - Eye characteristics, e.g. of the iris

11.

Speaker separation based on real-time latent speaker state characterization

      
Application Number 18368459
Patent Number 12315516
Status Granted - In force
Filing Date 2023-09-14
First Publication Date 2024-05-09
Grant Date 2025-05-27
Owner Unity Technologies SF (USA)
Inventor(s)
  • Perret, Valentin Alain Jean
  • Kedves, Nándor
  • Perony, Nicolas Lucien

Abstract

Systems, methods, and non-transitory computer-readable media can obtain a stream of audio waveform data that represents speech involving a plurality of speakers. As the stream of audio waveform data is obtained, a plurality of audio chunks can be determined. An audio chunk can be associated with one or more identity embeddings. The stream of audio waveform data can be segmented into a plurality of segments based on the plurality of audio chunks and respective identity embeddings associated with the plurality of audio chunks. A segment can be associated with a speaker included in the plurality of speakers. Information describing the plurality of segments associated with the stream of audio waveform data can be provided.
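
As a rough sketch of turning per-chunk identity embeddings into speaker segments, the following groups consecutive chunks and starts a new segment when the embedding drifts away from the running segment mean; the cosine threshold and the greedy rule are assumptions made for the example, not the patented method.

```python
import numpy as np

def segment_by_embedding(chunk_embeddings, threshold=0.8):
    """Group consecutive audio chunks into segments: start a new segment
    whenever the identity embedding drifts away from the running segment mean."""
    segments, current = [], [0]
    mean = chunk_embeddings[0]
    for i in range(1, len(chunk_embeddings)):
        e = chunk_embeddings[i]
        cos = float(np.dot(mean, e) / (np.linalg.norm(mean) * np.linalg.norm(e)))
        if cos < threshold:                      # likely a different speaker
            segments.append(current); current = [i]; mean = e
        else:
            current.append(i)
            mean = (mean * (len(current) - 1) + e) / len(current)
    segments.append(current)
    return segments

emb = np.random.rand(20, 64)    # stand-in for per-chunk identity embeddings
print(segment_by_embedding(emb))
```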

IPC Classes

  • G10L 17/18 - Artificial neural networks; connectionist approaches
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06N 3/045 - Combinations of networks
  • G06N 3/049 - Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
  • G06N 3/08 - Learning methods
  • G10L 17/02 - Preprocessing operations, e.g. segment selection; pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; feature selection or extraction
  • G10L 17/04 - Training, enrolment or model building
  • G10L 17/06 - Decision making techniques; pattern matching strategies
  • G10L 17/08 - Use of a distortion measure or a particular distance between the probe pattern and reference templates
  • G10L 21/0272 - Voice signal separation

12.

Low-latency, peer-to-peer streaming video

      
Application Number 17201310
Patent Number 11979571
Status Granted - In force
Filing Date 2021-03-15
First Publication Date 2024-05-07
Grant Date 2024-05-07
Owner UNITY TECHNOLOGIES SF (USA)
Inventor(s)
  • Dickson, Christopher
  • Boxer, Benjamin
  • Dickson, Jamie
  • Applegate, Daniel
  • Lazaroff, Jake
  • Nygren, Sten Erik Persson

Abstract

Peer-to-peer video streaming software that takes video from a host's Graphical Processing Unit (GPU), moves it to the encoder, manages the color conversions, moves the video into the networking software, and in turn manages the video on a client. This software also handles the inputs from the client and tells the host how to react to the client's inputs. In executing the video streaming, the host broadcasts and renders the video that is or will be streamed. It encodes the video frame with a hardware encoder that is part of the GPU to eliminate latency associated with transferring frames to the central processing unit (CPU). The host transmits compressed video and audio data to the client, and the client uses its hardware decoder and the video streaming software to render the video. The client sends mouse, keyboard, joystick, and controller signals to the host, which varies the bit rate based on the network connection.

IPC Classes

  • G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for simultaneous processing of several programs
  • H04L 61/256 - NAT traversal
  • H04L 65/1066 - Session management
  • H04L 65/60 - Streaming of multimedia packets
  • H04L 65/612 - Streaming of multimedia packets for the support of one-way streaming services, e.g. Internet radio, for unicast
  • H04L 67/104 - Peer-to-peer [P2P] networks
  • H04L 69/164 - Adaptation or special uses of the UDP protocol
  • H04N 19/124 - Quantisation

13.

METHOD AND SYSTEM FOR GENERATING ANONYMIZED DIGITAL HUMANS

      
Application Number 18241211
Status Pending
Filing Date 2023-08-31
First Publication Date 2024-02-29
Owner UNITY TECHNOLOGIES SF (USA)
Inventor(s)
  • Crespi, Adam Richard
  • Metze, Iii, Charles Levi
  • Laidlaw, Cera Kayli
  • Warren, James Neilson
  • Page, Iii, William Nathaniel
  • Sun, Cameron
  • Thaman, Alex

Abstract

A method of populating a digital environment with an anonymous humanoid digital character is disclosed. One or more values are randomly determined for one or more target human properties. An asset pool is searched for assets that are compatible with the one or more values. The assets include a human body mesh. A blend of height and weight vertex animation textures (VATs) is applied to the human body mesh. The blended human body mesh is placed into the digital environment.
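
A compact sketch of the population step: draw random target properties, filter the asset pool for compatible body meshes, and derive normalised blend weights for the height and weight vertex animation textures. The pool layout, property ranges and blend rule are assumptions made for the illustration.

```python
import random

ASSET_POOL = [
    {"mesh": "body_a", "height": (150, 175), "weight": (45, 80)},
    {"mesh": "body_b", "height": (170, 200), "weight": (60, 110)},
]

def spawn_anonymous_character():
    # Randomly determined values for the target human properties.
    height = random.uniform(150, 200)
    weight = random.uniform(45, 110)
    # Search the asset pool for meshes compatible with those values.
    candidates = [a for a in ASSET_POOL
                  if a["height"][0] <= height <= a["height"][1]
                  and a["weight"][0] <= weight <= a["weight"][1]]
    asset = random.choice(candidates or ASSET_POOL)
    # Normalised blend weights for the height/weight VATs applied to the mesh.
    h0, h1 = asset["height"]; w0, w1 = asset["weight"]
    blend = {"height_vat": min(max((height - h0) / (h1 - h0), 0.0), 1.0),
             "weight_vat": min(max((weight - w0) / (w1 - w0), 0.0), 1.0)}
    return asset["mesh"], blend

print(spawn_anonymous_character())
```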

IPC Classes

  • G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation

14.

PARSEC

      
Serial Number 98427008
Status Registered
Filing Date 2024-02-29
Registration Date 2024-12-24
Owner Unity Technologies SF
Nice Classes
  • 09 - Scientific and electric apparatus and instruments
  • 42 - Scientific, technological and industrial services, research and design

Goods and Services

Downloadable computer software for enabling collaborative gaming, game software development, and project management. Providing a website featuring technology that enables collaborative gaming, game software development, and project management.

15.

SYSTEM AND METHOD FOR AI ASSISTED CHARACTER POSE AUTHORING

      
Application Number 18197669
Status Pending
Filing Date 2023-05-15
First Publication Date 2023-11-16
Owner Unity Technologies SF (USA)
Inventor(s)
  • Bocquelet, Florent Benjamin
  • Laflamme, Dominic
  • Oreshkin, Boris
  • Harvey, Félix Gingras

Abstract

A method of optimizing a pose of a character is disclosed. An input is received. The input defines one or more effectors. A pose is generated for the character using a learned inverse kinematics (LIK) machine-learning (ML) component. The LIK ML component is trained using a motion dataset. The generating of the pose is based on one or more criteria. The one or more criteria include explicit intent expressed as the one or more effectors. The generated pose is adjusted using an ordinary inverse kinematics (OIK) component. The OIK component solves an output from the LIK ML component to increase an accuracy at which the explicit intent is reached. A final pose is generated from the adjusted pose. The generating of the final pose includes applying a physics engine (PE) to an output from the OIK component to increase a physics accuracy of the pose.
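
The three-stage refinement reads naturally as a function pipeline; the sketch below only illustrates the data flow (learned IK proposal, ordinary IK correction, physics pass), with trivial stand-ins for the actual models, which are not described in this listing.

```python
def author_pose(effectors, learned_ik, ordinary_ik, physics):
    pose = learned_ik(effectors)            # ML prior over plausible poses
    pose = ordinary_ik(pose, effectors)     # tighten accuracy at the effectors
    pose = physics(pose)                    # enforce physical plausibility
    return pose

# Example with trivial stand-in stages.
pose = author_pose(
    effectors={"hand_r": (0.4, 1.2, 0.1)},
    learned_ik=lambda eff: {"joints": {"shoulder_r": 0.3}, "targets": eff},
    ordinary_ik=lambda p, eff: {**p, "targets": eff},
    physics=lambda p: p,
)
print(pose)
```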

IPC Classes

16.

SAMPLE-EFFICIENT REPRESENTATION LEARNING FOR REAL-TIME LATENT SPEAKER STATE CHARACTERISATION

      
Application Number 18129789
Status Pending
Filing Date 2023-03-31
First Publication Date 2023-11-02
Owner UNITY TECHNOLOGIES SF (USA)
Inventor(s)
  • Perret, Valentin Alain Jean
  • Perony, Nicolas Lucien
  • Kedves, Nándor

Abstract

Systems, methods, and non-transitory computer-readable media can provide audio waveform data that corresponds to a voice sample to a temporal convolutional network for evaluation. The temporal convolutional network can pre-process the audio waveform data and can output an identity embedding associated with the audio waveform data. The identity embedding associated with the voice sample can be obtained from the temporal convolutional network. Information describing a speaker associated with the voice sample can be determined based at least in part on the identity embedding.

IPC Classes

  • G10L 17/18 - Artificial neural networks; connectionist approaches
  • G10L 17/02 - Preprocessing operations, e.g. segment selection; pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; feature selection or extraction
  • G06N 3/049 - Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
  • G06N 3/08 - Learning methods
  • G06N 3/045 - Combinations of networks
  • G06N 3/048 - Activation functions

17.

Generating a facial rig for use in animating a computer-generated character based on facial scans and muscle models of multiple live actors

      
Application Number 17381114
Patent Number 11715247
Status Granted - In force
Filing Date 2021-07-20
First Publication Date 2023-08-01
Grant Date 2023-08-01
Owner UNITY TECHNOLOGIES SF (USA)
Inventor(s)
  • Choi, Byung Kuk
  • Klár, Gergely

Abstract

An animation system in which scanned facial expressions are processed to form muscle models based on live actors. The system combines muscle models over a plurality of live actors to form a facial rig usable for generating expressions based on the specification of a strain vector and a control vector of a muscle model, for varying characters corresponding to live actors.

IPC Classes

  • G06T 13/40 - Three-dimensional [3D] animation of characters, e.g. humans, animals or virtual beings
  • G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
  • G06N 3/045 - Combinations of networks

18.

ACTIVE MARKER RELAY SYSTEM FOR PERFORMANCE CAPTURE

      
Application Number 18101507
Status Pending
Filing Date 2023-01-25
First Publication Date 2023-07-27
Owner Unity Technologies SF (USA)
Inventor(s)
  • Momcilovic, Dejan
  • Botting, Jake

Abstract

An active marker relay system is provided to operate responsive active markers coupled to an object in a live action scene for performance capture, via a trigger unit that relays energy pulse information to responsive active markers. Using simple sensors, the responsive active markers sense control energy pulses projected from the trigger unit. In return, the responsive active markers produce energy pulses that emulate at least one characteristic of the control energy pulses, such as a particular pulse rate or wavelength of energy. The reactivity of the responsive active markers to control energy pulses enables simple control of the responsive active markers through the trigger unit.

IPC Classes

  • G01S 17/42 - Simultaneous measurement of distance and other coordinates
  • G01S 17/894 - 3D imaging with simultaneous measurement of time-of-flight over a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
  • G01S 17/50 - Measurement systems based on relative movement of the target
  • G01S 7/497 - Means for monitoring or calibrating
  • H04N 5/222 - Studio circuitry; studio devices; studio equipment

19.

UNITY SENTIS

      
Serial Number 98079287
Status Pending
Filing Date 2023-07-11
Owner Unity Technologies SF
Nice Classes
  • 09 - Scientific and electric apparatus and instruments
  • 42 - Scientific, technological and industrial services, research and design

Goods and Services

Downloadable computer software using artificial intelligence for creating, enhancing, enabling, operating, and displaying computer applications, computer games, online games, online computer applications, multi-player computer games, interactive 3D game and non-game worlds, mixed 3D and 2D content, films, television programs, videos, and digital media; Downloadable computer software using artificial intelligence for creating, enhancing, enabling, operating, and displaying virtual, augmented, mixed reality, and extended reality experiences; Downloadable computer software using artificial intelligence for input, output, and display of live-linked interactive 3D renderings, virtual reality, augmented reality, and mixed reality for collaborative design; Downloadable software development kits (SDK); Downloadable computer software for integrating and embedding digital virtual agents, machine learning, deep neural networks, intelligence agents, predictive systems, cognitive process automation, visual computing applications, knowledge virtualization, robotics, natural language processing, genetic and deep learning algorithms, reinforcement learning, semantic ontologies, pattern recognition, knowledge modeling technologies, cognitive enhancement in respect of experience and productivity, accelerating processes, automation, and autonomy in the field of artificial intelligence Providing temporary use of on-line non-downloadable software and applications using artificial intelligence for creating, enhancing, enabling, operating, and displaying computer applications, computer games, online games, online computer applications, multi-player computer games, interactive 3D game and non-game worlds, mixed 3D and 2D content, films, television programs, videos, and digital media; Providing temporary use of on-line non-downloadable software and applications using artificial intelligence for creating, enhancing, enabling, operating, and displaying virtual, augmented, mixed reality, and extended reality experiences; Providing temporary use of on-line non-downloadable software and applications using artificial intelligence for input, output, and display of live-linked interactive 3D renderings, virtual reality, augmented reality, and mixed reality for collaborative design; Providing temporary use of on-line non-downloadable software for integrating and embedding digital virtual agents, machine learning, deep neural networks, intelligence agents, predictive systems, cognitive process automation, visual computing applications, knowledge virtualization, robotics, natural language processing, genetic and deep learning algorithms, reinforcement learning, semantic ontologies, pattern recognition, knowledge modeling technologies, cognitive enhancement in respect of experience and productivity, accelerating processes, automation, and autonomy in the field of artificial intelligence

20.

Computer graphics interface using visual indicator representing object global volume and/or global volume changes and method therefore

      
Application Number 18069950
Patent Number 12205224
Status Granted - In force
Filing Date 2022-12-21
First Publication Date 2023-07-06
Grant Date 2025-01-21
Owner Unity Technologies SF (USA)
Inventor(s)
  • Singh, Karan
  • Cullingford, Stephen
  • Choi, Byung Kuk
  • Eom, Haekwang

Abstract

A computer-implemented method and interface provides visualization for a volume and/or a change in a volume of a virtual object, the representation usable in a user interface, comprising a first representation of the virtual object represented by a surface positioned in a three-dimensional (3D) virtual space, a bounding object input that defines a bounding object relative to the 3D virtual space, a plurality of local volumes of the virtual object, wherein a local volume of the plurality of local volumes is computed for geometry based on a bounding object feature associated with the bounding object and based on at least some vertices of the corresponding face, wherein at least some of the plurality of local volumes are aggregated to determine a global volume for the set of polygonal faces, and providing for display of a first visual indicator representing a value of the global volume.
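
A minimal sketch of aggregating per-face local volumes into a global volume for a closed triangle mesh, using the signed-tetrahedron trick against a reference point (here the centroid, standing in for the bounding-object feature); the mesh layout and the choice of reference point are illustrative assumptions.

```python
import numpy as np

def global_volume(vertices, faces, origin):
    """Sum per-face local volumes (signed tetrahedra against a reference point)
    to obtain the global volume of a closed, consistently oriented triangle mesh."""
    total = 0.0
    for a, b, c in faces:
        va, vb, vc = vertices[a] - origin, vertices[b] - origin, vertices[c] - origin
        total += np.dot(va, np.cross(vb, vc)) / 6.0   # signed tetrahedron volume
    return abs(total)

# A single tetrahedron with outward-facing triangles; its volume is 1/6.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
faces = [(0, 2, 1), (0, 1, 3), (1, 2, 3), (0, 3, 2)]
print(global_volume(verts, faces, origin=verts.mean(axis=0)))  # ~0.1667
```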

IPC Classes

  • G06T 17/10 - Volume description, e.g. cylinders, cubes, or using CSG [constructive solid geometry]
  • G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation

21.

Performance capture marker housing

      
Application Number 29752630
Patent Number D0991056
Status Granted - In force
Filing Date 2020-09-28
First Publication Date 2023-07-04
Grant Date 2023-07-04
Owner UNITY TECHNOLOGIES SF (USA)
Inventor(s)
  • Momcilovich, Dejan
  • Botting, Jake

22.

Method and Apparatus for Processing Reference Inputs for Video Compositing with Replacement

      
Application Number 18171289
Status Pending
Filing Date 2023-02-17
First Publication Date 2023-06-29
Owner Unity Technologies SF (USA)
Inventor(s) Hillman, Peter

Abstract

A compositor generates a representation of a gradient image corresponding to a replaceable background object in a scene. The representation can be generated from a user-specified set of reference points in the scene.

IPC Classes

  • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
  • G06T 3/40 - Scaling of whole images or parts of an image, e.g. enlarging or shrinking
  • G06T 7/90 - Determination of colour characteristics
  • G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods

23.

Deforming points in space using a curve deformer

      
Application Number 18074008
Patent Number 12211165
Status Granted - In force
Filing Date 2022-12-02
First Publication Date 2023-06-22
Grant Date 2025-01-28
Owner Unity Technologies SF (USA)
Inventor(s) Singh, Karan

Abstract

Disclosed is a method to determine influence of a deformation of a curve on a surface. A processor obtains deformations of the curve and desired deformations of the surface. The processor obtains a first point on the surface and determines the influence of the deformation of the curve on the surface by determining a second point on the curve and iterating over a desired deformation of the surface and a corresponding curve. To iterate, the processor determines a desired position of the first point of the desired deformation, determines a position of the first point based on a position of the second point on the corresponding curve and a parameter indicating the influence, and calculates a difference between the desired position of the first point and the determined position of the first point. Based on the difference, the processor adjusts the parameter, and determines the influence based on the parameter.

IPC Classes

  • G06T 15/60 - Shadow generation
  • G06T 7/507 - Recovering depth or shape from shading
  • G06T 7/70 - Determining the position or orientation of objects or cameras
  • G06T 13/40 - Three-dimensional [3D] animation of characters, e.g. humans, animals or virtual beings
  • G06T 15/20 - Perspective computation
  • G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation
  • G06T 19/20 - Editing of three-dimensional [3D] images, e.g. changing shapes or colours, aligning objects or positioning parts

24.

Object deformation with bindings and deformers interpolated from key poses

      
Application Number 18061397
Patent Number 12307620
Status Granted - In force
Filing Date 2022-12-02
First Publication Date 2023-06-22
Grant Date 2025-05-20
Owner Unity Technologies SF (USA)
Inventor(s)
  • Singh, Karan
  • Choi, Byung Kuk

Abstract

An image generation system defines a set of deformation handles having an associated set of one or more control parameters, obtains a set of object points representative of a virtual object, maps the set of object points to a plurality of key pose states, wherein the key pose state is represented by a key pose state data structure corresponding to a key pose that specifies control parameters to place the set of the deformation handles to coincide with the set of object points, determines corresponding key pose state data structures, receives a set of deformations to be applied to the set of deformation handles, interpolates poses among an interpolated key pose state set to form a current pose state based on the set of deformations, and adjusts the virtual object based on the interpolated key pose state set.

IPC Classes

  • G06T 19/20 - Editing of three-dimensional [3D] images, e.g. changing shapes or colours, aligning objects or positioning parts
  • G06T 7/70 - Determining the position or orientation of objects or cameras
  • G06T 13/40 - Three-dimensional [3D] animation of characters, e.g. humans, animals or virtual beings
  • G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation

25.

METHOD FOR IMAGE PROCESSING OF IMAGE DATA FOR A TWO-DIMENSIONAL DISPLAY WALL WITH THREE-DIMENSIONAL OBJECTS

      
Application Number US2022081314
Publication Number 2023/108151
Status Granted - In force
Filing Date 2022-12-09
Publication Date 2023-06-15
Owner UNITY TECHNOLOGIES SF (USA)
Inventor(s)
  • Thurston Iii, Kimball D.
  • Hillman, Peter M.
  • Marks, Joseph W.
  • Fascione, Luca
  • Maier, Millicent Lillian
  • Gimpelson, Kenneth
  • Momcilovic, Dejan
  • Miller, Keith F.

Abstract

A captured scene captured of a live action scene while a display wall is positioned to be part of the live action scene may be processed. To perform the processing, stereoscopic image data of the live action scene is received, and display wall metadata of the precursor image is determined. Further, a first portion of the stereoscopic image data comprising the stage element in the live action scene is determined based on the stereoscopic image data and the display wall metadata. A second portion of the stereoscopic image data comprising the display wall in the live action scene with the display wall displaying the precursor image is also determined. Thereafter, an image matte for the stereoscopic image data is generated based on the first portion and the second portion.

IPC Classes

  • G06T 15/20 - Perspective computation
  • G06T 15/50 - Lighting effects
  • G06T 7/285 - Analysis of motion using a sequence of stereo image pairs

26.

CAMERA LENS FEED TO DRIVE DEFOCUS

      
Application Number US2022081325
Publication Number 2023/108161
Status Granted - In force
Filing Date 2022-12-09
Publication Date 2023-06-15
Owner UNITY TECHNOLOGIES SF (USA)
Inventor(s)
  • Thurston Iii, Kimball D.
  • Marks, Joseph W.
  • Fascione, Luca
  • Maier, Millicent
  • Gimpelson, Kenneth
  • Momcilovic, Dejan
  • Miller, Keith F.
  • Hillman, Peter M.

Abstract

A virtual scene can be generated for use in a captured scene with focus settings that take into account camera position. Virtual images displayed in a virtual scene that is presented on a display wall and captured in a scene can be presented in the virtual scene with a focus or defocus that is dependent on a virtual object position in the virtual scene and a position of a camera relative to the display wall. Defocusing of virtual objects can be such that an eventual defocus when captured by the camera corresponds to what would be a defocus of an object distant from the camera by a distance that represents a first distance from the camera to the display wall and a second distance being a virtual distance in the virtual scene from the virtual object to a virtual camera plane of the virtual scene.
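
A small worked example of the distance composition described above: the virtual object should be defocused as if it sat at the camera-to-wall distance plus its virtual depth behind the wall. The thin-lens circle-of-confusion formula and the lens parameters below are generic illustrations, not values taken from the application.

```python
def circle_of_confusion(focus_dist, subject_dist, focal_len=0.05, f_number=2.8):
    """Thin-lens circle-of-confusion diameter (metres) for a subject at
    `subject_dist` when the lens is focused at `focus_dist`."""
    aperture = focal_len / f_number
    return abs(aperture * focal_len * (subject_dist - focus_dist)
               / (subject_dist * (focus_dist - focal_len)))

camera_to_wall = 4.0    # metres from the physical camera to the LED wall
virtual_depth = 6.0     # metres from the virtual object to the virtual camera plane
effective_dist = camera_to_wall + virtual_depth

# Defocus the virtual object as if it sat 10 m away, even though its pixels are on the wall.
print(circle_of_confusion(focus_dist=camera_to_wall, subject_dist=effective_dist))
```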

IPC Classes

  • G06T 19/00 - Transformation of three-dimensional [3D] models or images for computer graphics
  • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
  • H04N 13/243 - Image signal generators using stereoscopic image cameras using at least three 2D image sensors

27.

Method for image processing of image data for varying image quality levels on a two-dimensional display wall

      
Application Number 17548120
Patent Number 11677928
Status Granted - In force
Filing Date 2021-12-10
First Publication Date 2023-06-13
Grant Date 2023-06-13
Owner Unity Technologies SF (USA)
Inventor(s)
  • Thurston, Iii, Kimball D.
  • Hillman, Peter M.
  • Marks, Joseph W.
  • Fascione, Luca
  • Maier, Millicent Lillian
  • Gimpelson, Kenneth
  • Momcilovic, Dejan
  • Miller, Keith F.

Abstract

A captured scene captured of a live action scene while a display wall is positioned to be part of the live action scene may be processed. To perform the processing, image data of the live action scene having a live actor and the display wall displaying a first rendering of a precursor image is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall is determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall. Image quality levels for display wall portions of the display wall in the image data are determined, and pixels associated with the display wall in the image data are adjusted to the image quality levels.

IPC Classes

  • H04N 13/133 - Equalising the characteristics of different image components, e.g. their average brightness or colour balance
  • H04N 13/15 - Image signal aspects relating to colour
  • H04N 13/293 - Generating mixed stereoscopic images; generating mixed stereoscopic and monoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background
  • H04N 13/178 - Metadata, e.g. disparity information
  • H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors whose relative position is equal to or corresponds to the interocular distance
  • H04N 13/363 - Image reproducers using projection screens
  • H04N 13/271 - Image signal generators wherein the generated image signals comprise depth maps or disparity maps
  • H04N 13/00 - Stereoscopic video systems; multi-view video systems; details thereof

28.

Method for image processing of image data for high-resolution images on a two-dimensional display wall

      
Application Number 17548103
Patent Number 11677923
Status Granted - In force
Filing Date 2021-12-10
First Publication Date 2023-06-13
Grant Date 2023-06-13
Owner Unity Technologies SF (USA)
Inventor(s)
  • Thurston, Iii, Kimball D.
  • Hillman, Peter M.
  • Marks, Joseph W.
  • Fascione, Luca
  • Maier, Millicent Lillian
  • Gimpelson, Kenneth
  • Momcilovic, Dejan
  • Miller, Keith F.

Abstract

A captured scene captured of a live action scene while a display wall is positioned to be part of the live action scene may be processed. To perform the processing, image data of the live action scene having a live actor and the display wall displaying a first rendering of a precursor image is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall is determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall in the live action scene. Pixel display values for a replacement wall image of higher resolution than the precursor image are determined, and the image data of the captured scene is adjusted using the pixel display values and the image matte.

IPC Classes

  • H04N 13/133 - Equalising the characteristics of different image components, e.g. their average brightness or colour balance
  • H04N 13/15 - Image signal aspects relating to colour
  • G06F 3/14 - Digital output to a display device
  • H04N 13/204 - Image signal generators using stereoscopic image cameras
  • H04N 13/189 - Recording image signals; reproducing recorded image signals

29.

INCREASING DYNAMIC RANGE OF A VIRTUAL PRODUCTION DISPLAY

      
Application Number 17856395
Status Pending
Filing Date 2022-07-01
First Publication Date 2023-06-01
Owner Unity Technologies SF (USA)
Inventor(s)
  • Marks, Joseph W.
  • Fascione, Luca
  • Thurston, Iii, Kimball D.
  • Maier, Millie
  • Gimpelson, Kenneth
  • Momcilovic, Dejan
  • Miller, Keith F.
  • Hillman, Peter M.
  • Swartz, Jonathan S.

Abstract

A processor performing postprocessing obtains an input image containing both bright and dark regions. The processor obtains a threshold between a first pixel value of the virtual production display and a second pixel value of the virtual production display. The processor modifies the region according to predetermined steps producing a pattern unlikely to occur within the input image, where the pattern corresponds to a difference between the original pixel value and the threshold. The processor can replace the region of the input image with the pattern to obtain a modified image. The virtual production display can present the modified image. A processor performing postprocessing detects the pattern within the modified image displayed on the virtual production display. The processor calculates the original pixel value of the region by reversing the predetermined steps. The processor replaces the pattern in the modified image with the original pixel value.
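
As a toy version of the round trip described above, the sketch below folds out-of-range pixel values into an in-range pattern that encodes the overshoot and then reverses the steps in postprocessing; the linear encoding is an assumption, and for brevity the decoder is handed the affected region instead of detecting the pattern itself.

```python
import numpy as np

THRESHOLD = 1.0
SCALE = 0.25          # overshoot is compressed into a small in-range band

def encode(image):
    over = image > THRESHOLD
    out = image.copy()
    out[over] = THRESHOLD - SCALE * (image[over] - THRESHOLD)   # reversible pattern
    return out, over

def decode(displayed, over):
    out = displayed.copy()
    out[over] = THRESHOLD + (THRESHOLD - displayed[over]) / SCALE
    return out

img = np.array([0.2, 0.9, 1.8, 3.0])
shown, mask = encode(img)
print(shown, decode(shown, mask))   # round-trips back to the original values
```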

IPC Classes

  • H04N 5/235 - Circuitry for compensating for variations in the brightness of the object
  • H04N 5/222 - Studio circuitry; studio devices; studio equipment
  • G06T 5/00 - Image enhancement or restoration
  • G06V 10/60 - Extraction of image or video features relating to luminance properties, e.g. using a reflectance or lighting model
  • G06T 7/174 - Segmentation; edge detection involving the use of two or more images
  • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06T 7/70 - Determining the position or orientation of objects or cameras
  • G09G 5/10 - Intensity circuits
  • H04N 5/232 - Devices for controlling television cameras, e.g. remote control
  • H04N 5/92 - Transformation of the television signal for recording, e.g. modulation, frequency changing; inverse transformation for playback
  • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
  • G06T 9/00 - Image coding
  • G06T 1/60 - Memory management
  • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
  • H04N 5/91 - Television signal processing for recording
  • H04N 5/04 - Synchronisation
  • G06V 10/22 - Image preprocessing by selecting a specific region containing or referencing a pattern; locating or processing specific regions to guide detection or recognition
  • G06T 7/10 - Segmentation; edge detection
  • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

30.

INCREASING RESOLUTION AND LUMINANCE OF A DISPLAY

      
Application Number US2022051218
Publication Number 2023/097109
Status Granted - In force
Filing Date 2022-11-29
Publication Date 2023-06-01
Owner UNITY TECHNOLOGIES SF (USA)
Inventor(s)
  • Marks, Joseph W.
  • Fascione, Luca
  • Thurston Iii, Kimball D.
  • Maier, Millie
  • Gimpelson, Kenneth
  • Momcilovic, Dejan
  • Miller, Keith F.
  • Hillman, Peter M.
  • Swartz, Jonathan S.
  • Sullivan, Carter Bart

Abstract

Disclosed here is a system and method to increase resolution of a display, such as an LED wall. Some examples describe increasing resolution during postprocessing. In some examples, methods and systems are described that can modify luminance of a display. In further examples, the display may operate at a predetermined frequency by displaying a first image at a first time and a second image at a second time. A selective screen disposed between the display and the light receiver can include multiple light transmitting elements, such as pixel masks. A light transmitting element A can redirect a first portion of light transmitted by the display. A light transmitting element B can allow a second portion of light transmitted by the display to reach the light receiver. The selective screen can increase the resolution of the display by operating at the predetermined frequency and causing a first portion of the first image to be shown at the first time, and a second portion of the second image to be shown at the second time, where the first portion of the first image and the second portion of the second image are different. The predetermined frequency enables the light receiver to form an image based on the first portion of the first image, and the second portion of the second image.

IPC Classes

  • G09G 3/00 - Control arrangements or circuits, of interest only for displays using visualisation means other than cathode-ray tubes
  • G06F 3/14 - Digital output to a display device
  • G09G 3/20 - Control arrangements or circuits, of interest only for displays using visualisation means other than cathode-ray tubes, for presenting an assembly of several characters, e.g. a page, by composing the assembly from a combination of individual elements arranged in a matrix

31.

LEVELPLAY

      
Application Number 225473500
Status Pending
Filing Date 2023-04-27
Owner Unity Technologies SF (USA)
Nice Classes
  • 09 - Scientific and electric apparatus and instruments
  • 35 - Advertising; Business affairs
  • 42 - Scientific, technological and industrial services, research and design

Goods and Services

(1) Downloadable and recorded computer software for enabling service providers to optimize user monetization, marketing, and customer satisfaction; Downloadable and recorded computer software for creating, developing, managing, monitoring, improving, monetizing, marketing, advertising, and selling digital games; Downloadable and recorded computer software for developing, launching, displaying, editing, managing, measuring, tracking and analyzing advertisements and sales and advertising campaigns; Downloadable and recorded computer software for enabling software developers to provide advertising content; Downloadable and recorded computer software for use in engaging in online advertising exchange, direct deals and cross promotion activities with other publishers and providers of online mobile applications; Downloadable and recorded computer software for tracking, analyzing and managing advertising, sales and ecommerce activities within mobile computer games and applications; Downloadable software development kits (SDK) (1) Business monitoring and consulting services, namely, analyzing digital games to provide strategy and insight regarding digital game creation, design, development, management, monitoring, improvement, monetization, marketing, advertising, and sales for business purposes; Advertising, marketing and promotion services for others provided via mobile and online communications networks; Market research services; Marketing analytics services, namely, providing Internet traffic metrics, statistics, user activity data, audience measurement and marketing and advertising reports; Advertising, marketing and business services, namely, assisting others in developing, launching, displaying, editing, managing, measuring, tracking and analyzing advertisements and sales and advertising campaigns; Providing an online network and online marketplace for others to engage in online advertising exchange, direct deals and cross-promotion with other publishers and providers of online mobile applications; Promoting the goods and services of others, namely, connecting software developers and advertisers in order to distribute advertising content to end users; Providing information through electronic networks about products and services for advertising and sales purposes; Business management services, namely, tracking, analyzing and managing advertising, sales and ecommerce activities within mobile computer games and applications; Business data analysis services, namely, providing predictive data analysis for developers and publishers; Database management in the field of advertisers, advertising content, computer games, software developers, and software applications; Advertisement via mobile phone networks; Digital advertising services; Dissemination of advertising for others via public and private wireless networks for display on mobile devices; Mobile advertising services for others in the field of ad monetization; Online advertising and marketing services in the field of ad monetization (2) Providing temporary use of non-downloadable software for enabling service providers to optimize user monetization, marketing, and customer satisfaction; Software as a service (SAAS) services featuring software for creating, developing, managing, monitoring, improving, monetizing, marketing, advertising, and selling digital games; Platform as a service (PAAS) featuring computer software platforms for creating, developing, managing, monitoring, improving, monetizing, marketing, advertising, and selling digital 
games; Providing temporary use of non-downloadable computer software for developing, launching, displaying, editing, managing, measuring, tracking and analyzing advertisements and sales and advertising campaigns; Providing temporary use of on-line non-downloadable software and applications for enabling software developers to provide advertising content; Providing temporary use of non-downloadable computer software for use in engaging in online advertising exchange, direct deals and cross promotion activities with other publishers and providers of online mobile applications; Providing temporary use of non-downloadable computer software for tracking, analyzing and managing advertising, sales and ecommerce activities within mobile computer games and applications; Application service provider, namely, hosting, developing, managing, analyzing, tracking and maintaining applications, software and online networks of others in the fields of advertising and marketing

32.

LEVELPLAY

      
Application Number 018868082
Status Registered
Filing Date 2023-04-27
Registration Date 2023-09-27
Owner Unity Technologies SF (USA)
Nice Classes
  • 09 - Scientific and electric apparatus and instruments
  • 35 - Advertising; Business affairs
  • 42 - Scientific, technological and industrial services, research and design

Goods and Services

Downloadable and recorded computer software for enabling service providers to optimize user monetization, marketing, and customer satisfaction; Downloadable and recorded computer software for creating, developing, managing, monitoring, improving, monetizing, marketing, advertising, and selling digital games; Downloadable and recorded computer software for developing, launching, displaying, editing, managing, measuring, tracking and analyzing advertisements and sales and advertising campaigns; Downloadable and recorded computer software for enabling software developers to provide advertising content; Downloadable and recorded computer software for use in engaging in online advertising exchange, direct deals and cross promotion activities with other publishers and providers of online mobile applications; Downloadable and recorded computer software for tracking, analyzing and managing advertising, sales and ecommerce activities within mobile computer games and applications; Downloadable software development kits (SDK). Business monitoring and consulting services, namely, analyzing digital games to provide strategy and insight regarding digital game creation, design, development, management, monitoring, improvement, monetization, marketing, advertising, and sales for business purposes; Advertising, marketing and promotion services for others provided via mobile and online communications networks; Market research services; Marketing analytics services, namely, providing Internet traffic metrics, statistics, user activity data, audience measurement and marketing and advertising reports; Advertising, marketing and business services, namely, assisting others in developing, launching, displaying, editing, managing, measuring, tracking and analyzing advertisements and sales and advertising campaigns; Providing an online network and online marketplace for others to engage in online advertising exchange, direct deals and cross-promotion with other publishers and providers of online mobile applications; Promoting the goods and services of others, namely, connecting software developers and advertisers in order to distribute advertising content to end users; Providing information through electronic networks about products and services for advertising and sales purposes; Business management services, namely, tracking, analyzing and managing advertising, sales and ecommerce activities within mobile computer games and applications; Business data analysis services, namely, providing predictive data analysis for developers and publishers; Database management in the field of advertisers, advertising content, computer games, software developers, and software applications; Advertisement via mobile phone networks; Digital advertising services; Dissemination of advertising for others via public and private wireless networks for display on mobile devices; Mobile advertising services for others in the field of ad monetization; Online advertising and marketing services in the field of ad monetization. 
Providing temporary use of non-downloadable software for enabling service providers to optimize user monetization, marketing, and customer satisfaction; Software as a service (SAAS) services featuring software for creating, developing, managing, monitoring, improving, monetizing, marketing, advertising, and selling digital games; Platform as a service (PAAS) featuring computer software platforms for creating, developing, managing, monitoring, improving, monetizing, marketing, advertising, and selling digital games; Providing temporary use of non-downloadable computer software for developing, launching, displaying, editing, managing, measuring, tracking and analyzing advertisements and sales and advertising campaigns; Providing temporary use of on-line non-downloadable software and applications for enabling software developers to provide advertising content; Providing temporary use of non-downloadable computer software for use in engaging in online advertising exchange, direct deals and cross promotion activities with other publishers and providers of online mobile applications; Providing temporary use of non-downloadable computer software for tracking, analyzing and managing advertising, sales and ecommerce activities within mobile computer games and applications; Application service provider, namely, hosting, developing, managing, analyzing, tracking and maintaining applications, software and online networks of others in the fields of advertising and marketing.

33.

LEVELPLAY

      
Numéro de série 97909042
Statut En instance
Date de dépôt 2023-04-26
Propriétaire Unity Technologies SF ()
Classes de Nice  ?
  • 09 - Appareils et instruments scientifiques et électriques
  • 35 - Publicité; Affaires commerciales
  • 42 - Services scientifiques, technologiques et industriels, recherche et conception

Produits et services

Downloadable and recorded computer software for enabling service providers to optimize user monetization, marketing, and customer satisfaction; Downloadable and recorded computer software for creating, developing, managing, monitoring, improving, monetizing, marketing, advertising, and selling digital games; Downloadable and recorded computer software for developing, launching, displaying, editing, managing, measuring, tracking and analyzing advertisements and sales and advertising campaigns; Downloadable and recorded computer software for enabling software developers to provide advertising content; Downloadable and recorded computer software for use in engaging in online advertising exchange, direct deals and cross promotion activities with other publishers and providers of online mobile applications; Downloadable and recorded computer software for tracking, analyzing and managing advertising, sales and ecommerce activities within mobile computer games and applications; Downloadable software development kits (SDK) Business monitoring and consulting services, namely, analyzing digital games to provide strategy and insight regarding digital game creation, design, development, management, monitoring, improvement, monetization, marketing, advertising, and sales for business purposes; Advertising, marketing and promotion services for others provided via mobile and online communications networks; Market research services; Marketing analytics services, namely, providing Internet traffic metrics, statistics, user activity data, audience measurement and marketing and advertising reports; Advertising, marketing and business services, namely, assisting others in developing, launching, displaying, editing, managing, measuring, tracking and analyzing advertisements and sales and advertising campaigns; Providing an online network and online marketplace for others to engage in online advertising exchange, direct deals and cross-promotion with other publishers and providers of online mobile applications; Promoting the goods and services of others, namely, connecting software developers and advertisers in order to distribute advertising content to end users; Providing information through electronic networks about products and services for advertising and sales purposes; Business management services, namely, tracking, analyzing and managing advertising, sales and ecommerce activities within mobile computer games and applications; Business data analysis services, namely, providing predictive data analysis for developers and publishers; Database management in the field of advertisers, advertising content, computer games, software developers, and software applications; Advertisement via mobile phone networks; Digital advertising services; Dissemination of advertising for others via public and private wireless networks for display on mobile devices; Mobile advertising services for others in the field of ad monetization; Online advertising and marketing services in the field of ad monetization Providing temporary use of non-downloadable software for enabling service providers to optimize user monetization, marketing, and customer satisfaction; Software as a service (SAAS) services featuring software for creating, developing, managing, monitoring, improving, monetizing, marketing, advertising, and selling digital games; Platform as a service (PAAS) featuring computer software platforms for creating, developing, managing, monitoring, improving, monetizing, marketing, advertising, and selling digital games; 
Providing temporary use of non-downloadable computer software for developing, launching, displaying, editing, managing, measuring, tracking and analyzing advertisements and sales and advertising campaigns; Providing temporary use of on-line non-downloadable software and applications for enabling software developers to provide advertising content; Providing temporary use of non-downloadable computer software for use in engaging in online advertising exchange, direct deals and cross promotion activities with other publishers and providers of online mobile applications; Providing temporary use of non-downloadable computer software for tracking, analyzing and managing advertising, sales and ecommerce activities within mobile computer games and applications; Application service provider, namely, hosting, developing, managing, analyzing, tracking and maintaining applications, software and online networks of others in the fields of advertising and marketing

34.

Method for image processing of image data for a two-dimensional display wall with three-dimensional objects

      
Numéro d'application 17548071
Numéro de brevet 11627297
Statut Délivré - en vigueur
Date de dépôt 2021-12-10
Date de la première publication 2023-04-11
Date d'octroi 2023-04-11
Propriétaire Unity Technologies SF (USA)
Inventeur(s)
  • Thurston, III, Kimball D.
  • Hillman, Peter M.
  • Marks, Joseph W.
  • Fascione, Luca
  • Maier, Millicent Lillian
  • Gimpelson, Kenneth
  • Momcilovic, Dejan
  • Miller, Keith F.

Abrégé

A scene captured of a live action scene, while a display wall is positioned to be part of the live action scene, may be processed. To perform the processing, stereoscopic image data of the live action scene is received, and display wall metadata of the precursor image is determined. Further, a first portion of the stereoscopic image data comprising the stage element in the live action scene is determined based on the stereoscopic image data and the display wall metadata. A second portion of the stereoscopic image data comprising the display wall in the live action scene with the display wall displaying the precursor image is also determined. Thereafter, an image matte for the stereoscopic image data is generated based on the first portion and the second portion.
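
As a rough illustration of the matte idea above (not the claimed method), the following sketch separates stage elements from the display wall using a stereo-derived depth map and a wall depth taken from display wall metadata; the function name, the depth inputs and the tolerance are assumptions made for the example.

    import numpy as np

    def display_wall_matte(depth_map, wall_depth, tolerance=0.05):
        """Return a matte: 1.0 for stage elements in front of the wall,
        0.0 for pixels that lie on the display wall itself."""
        # Pixels noticeably closer to the camera than the wall are foreground.
        foreground = depth_map < wall_depth * (1.0 - tolerance)
        return foreground.astype(np.float32)

    # Depth from a stereo solver; wall depth from stage metadata (in metres).
    depth = np.full((4, 4), 6.0)
    depth[1:3, 1:3] = 2.5                      # an actor standing 2.5 m away
    matte = display_wall_matte(depth, wall_depth=6.0)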

Classes IPC  ?

  • H04N 13/133 - Égalisation des caractéristiques de différentes composantes d’images, p. ex. de la luminance moyenne ou de l’équilibre des couleurs
  • H04N 13/15 - Aspects des signaux d’images en rapport avec la couleur
  • H04N 13/178 - Métadonnées, p. ex. informations sur la disparité
  • H04N 5/272 - Moyens pour insérer une image de premier plan dans une image d'arrière plan, c.-à-d. incrustation, effet inverse
  • H04N 13/239 - Générateurs de signaux d’images utilisant des caméras à images stéréoscopiques utilisant deux capteurs d’images 2D dont la position relative est égale ou en correspondance à l’intervalle oculaire
  • H04N 13/111 - Transformation de signaux d’images correspondant à des points de vue virtuels, p. ex. interpolation spatiale de l’image

35.

Increasing resolution and luminance of a display

      
Numéro d'application 17883396
Numéro de brevet 11615755
Statut Délivré - en vigueur
Date de dépôt 2022-08-08
Date de la première publication 2023-03-28
Date d'octroi 2023-03-28
Propriétaire Unity Technologies SF (USA)
Inventeur(s)
  • Marks, Joseph W.
  • Fascione, Luca
  • Thurston, III, Kimball D.
  • Maier, Millie
  • Gimpelson, Kenneth
  • Momcilovic, Dejan
  • Miller, Keith F.
  • Hillman, Peter M.
  • Swartz, Jonathan S.
  • Sullivan, Carter Bart

Abrégé

The disclosed system modifies luminance of a display associated with a selective screen. The display provides a camera with an image having resolution higher than the resolution of the display by presenting multiple images while the selective screen enables light from different portions of the multiple images to reach the camera. The resulting luminance of the recorded image is lower than a combination of luminance values of the multiple images. The processor obtains a criterion indicating a property of the input image where image detail is unnecessary. The processor detects a region of the input image satisfying the criterion, and determines a region of the selective screen corresponding to the region of the input image. The processor increases the luminance of the display by disabling the region of the selective screen corresponding to the region of the input image.

Classes IPC  ?

  • G09G 3/34 - Dispositions ou circuits de commande présentant un intérêt uniquement pour l'affichage utilisant des moyens de visualisation autres que les tubes à rayons cathodiques pour la présentation d'un ensemble de plusieurs caractères, p. ex. d'une page, en composant l'ensemble par combinaison d'éléments individuels disposés en matrice en commandant la lumière provenant d'une source indépendante
  • G06T 7/10 - DécoupageDétection de bords
  • G06V 10/22 - Prétraitement de l’image par la sélection d’une région spécifique contenant ou référençant une formeLocalisation ou traitement de régions spécifiques visant à guider la détection ou la reconnaissance
  • G09G 3/20 - Dispositions ou circuits de commande présentant un intérêt uniquement pour l'affichage utilisant des moyens de visualisation autres que les tubes à rayons cathodiques pour la présentation d'un ensemble de plusieurs caractères, p. ex. d'une page, en composant l'ensemble par combinaison d'éléments individuels disposés en matrice
  • G06T 7/70 - Détermination de la position ou de l'orientation des objets ou des caméras
  • G06T 5/50 - Amélioration ou restauration d'image utilisant plusieurs images, p. ex. moyenne ou soustraction
  • H04N 5/265 - Mélange
  • H04N 5/76 - Enregistrement du signal de télévision
  • H04N 7/01 - Conversion des normes
  • H04N 9/64 - Circuits pour le traitement de signaux de couleur

36.

Method and apparatus for processing reference inputs for video compositing with replacement

      
Numéro d'application 17193204
Numéro de brevet 11605171
Statut Délivré - en vigueur
Date de dépôt 2021-03-05
Date de la première publication 2023-03-14
Date d'octroi 2023-03-14
Propriétaire Unity Technologies SF (USA)
Inventeur(s) Hillman, Peter

Abrégé

A compositor generates a representation of a gradient image corresponding to a replaceable background object in a scene. The representation can be generated from a user-specified set of reference points in the scene.
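
A minimal sketch of one way such a gradient representation could be built, assuming a per-channel linear ramp fitted to the user's reference points by least squares, follows; the function name and the linear model are illustrative assumptions, not the patented compositor.

    import numpy as np

    def gradient_from_references(points, colors, height, width):
        """Fit color = a*x + b*y + c per channel and evaluate over the frame."""
        pts = np.asarray(points, dtype=np.float64)        # (N, 2) pixel positions
        cols = np.asarray(colors, dtype=np.float64)       # (N, 3) sampled colors
        A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
        coeffs, *_ = np.linalg.lstsq(A, cols, rcond=None)    # (3, 3) ramp coefficients
        ys, xs = np.mgrid[0:height, 0:width]
        grid = np.stack([xs, ys, np.ones_like(xs)], axis=-1).astype(np.float64)
        return grid @ coeffs                               # (H, W, 3) gradient image

    ramp = gradient_from_references(
        points=[(0, 0), (639, 0), (0, 479)],
        colors=[(0.10, 0.30, 0.60), (0.20, 0.40, 0.70), (0.05, 0.20, 0.50)],
        height=480, width=640)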

Classes IPC  ?

  • G06T 7/00 - Analyse d'image
  • G06T 7/246 - Analyse du mouvement utilisant des procédés basés sur les caractéristiques, p. ex. le suivi des coins ou des segments
  • G06T 3/40 - Changement d'échelle d’images complètes ou de parties d’image, p. ex. agrandissement ou rétrécissement
  • G06T 7/90 - Détermination de caractéristiques de couleur
  • G06T 7/33 - Détermination des paramètres de transformation pour l'alignement des images, c.-à-d. recalage des images utilisant des procédés basés sur les caractéristiques

37.

Method for associating production elements with a production approach

      
Numéro d'application 17970399
Numéro de brevet 11720233
Statut Délivré - en vigueur
Date de dépôt 2022-10-20
Date de la première publication 2023-03-02
Date d'octroi 2023-08-08
Propriétaire Unity Technologies SF (USA)
Inventeur(s)
  • Letteri, Joseph Bruce
  • Snelling, Geo H.
  • Penman, Matt J.
  • Guarina, Rebs Senga

Abrégé

A computer-implemented method for managing associations between production elements and production approaches includes displaying, within a breakdown panel, a representation of at least one scene obtained from a script, the representation including a plurality of production elements; displaying, within an approaches panel, a representation of at least some of the plurality of production elements displayed within the breakdown panel, and associated respective user selection indicators; and displaying, within the approaches panel, a representation of at least one production approach. The method further includes receiving a user selection of at least one production element displayed within the approaches panel; receiving a user selection of the at least one production approach. The selected at least one production element is associated with the selected at least one production approach.

Classes IPC  ?

  • G06F 3/0482 - Interaction avec des listes d’éléments sélectionnables, p. ex. des menus
  • G06F 40/205 - Analyse syntaxique
  • G06F 16/26 - Exploration de données visuellesNavigation dans des données structurées
  • G06Q 10/10 - BureautiqueGestion du temps
  • G06F 3/0484 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] pour la commande de fonctions ou d’opérations spécifiques, p. ex. sélection ou transformation d’un objet, d’une image ou d’un élément de texte affiché, détermination d’une valeur de paramètre ou sélection d’une plage de valeurs

38.

Method for image processing of image data for image and visual effects on a two-dimensional display wall

      
Numéro d'application 17548131
Numéro de brevet 11593993
Statut Délivré - en vigueur
Date de dépôt 2021-12-10
Date de la première publication 2023-02-28
Date d'octroi 2023-02-28
Propriétaire UNITY TECHNOLOGIES SF (USA)
Inventeur(s)
  • Thurston, III, Kimball D.
  • Hillman, Peter M.
  • Marks, Joseph W.
  • Fascione, Luca
  • Maier, Millicent Lillian
  • Gimpelson, Kenneth
  • Momcilovic, Dejan
  • Miller, Keith F.

Abrégé

A scene captured of a live action scene, while a display wall is positioned to be part of the live action scene, may be processed. To perform the processing, image data of the live action scene having a live actor and the display wall displaying a first rendering of a precursor image is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall are determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall in the live action scene. Pixel display values to add or modify an image effect or a visual effect are determined, and the image data is adjusted using the pixel display values and the image matte.

Classes IPC  ?

  • G06T 19/00 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie
  • G06T 7/194 - DécoupageDétection de bords impliquant une segmentation premier plan-arrière-plan
  • G06T 7/593 - Récupération de la profondeur ou de la forme à partir de plusieurs images à partir d’images stéréo
  • G06F 3/14 - Sortie numérique vers un dispositif de visualisation

39.

Method for controlling digital feather generations through a user interface in a computer modeling system

      
Numéro d'application 17827522
Numéro de brevet 11783516
Statut Délivré - en vigueur
Date de dépôt 2022-05-27
Date de la première publication 2023-02-23
Date d'octroi 2023-10-10
Propriétaire Unity Technologies SF (USA)
Inventeur(s)
  • Sprenger, Christoph
  • Baas, Matthias

Abrégé

A realistic feather generation may be represented via a user interface in a modeling system. To perform the feather generation, a curve defining a spine of a feather is received from input to the user interface of the modeling system. The feather is created in the user interface based at least on the curve, where the feather includes barbs extending away from the spine of the feather in one or more directions. The feather is deformed based on one or more deformation parameters. A feather contour of the barbs along at least one of the one or more directions is generated based on a contour parameter defining the feather contour. A barb density for a number of the barbs along the spine of the feather is established based on a barb density parameter. Thereafter, a data structure representing the feather is generated.
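
For orientation only, here is a small sketch of the construction described above, under the assumptions of a 2D polyline spine, straight barbs on both sides, and a sine-shaped contour; the parameter names echo the abstract but the implementation details are invented for the example.

    import numpy as np

    def build_feather(spine, barb_density=20, max_barb_length=1.0):
        """spine: (M, 2) polyline. Returns a list of (start, end) barb segments."""
        spine = np.asarray(spine, dtype=np.float64)
        seg = np.diff(spine, axis=0)
        seg_len = np.linalg.norm(seg, axis=1)
        cum_len = np.cumsum(seg_len)
        total = cum_len[-1]
        barbs = []
        for i in range(barb_density):
            t = (i + 0.5) / barb_density               # parameter along the spine
            d = t * total
            j = min(np.searchsorted(cum_len, d), len(seg) - 1)
            local = (d - (cum_len[j] - seg_len[j])) / seg_len[j]
            point = spine[j] + local * seg[j]
            tangent = seg[j] / seg_len[j]
            normal = np.array([-tangent[1], tangent[0]])
            contour = max_barb_length * np.sin(np.pi * t)   # taper at both ends
            for side in (+1.0, -1.0):                  # barbs on both sides of the spine
                barbs.append((point, point + side * contour * normal))
        return barbs

    feather = build_feather([(0, 0), (0, 1), (0, 2), (0, 3)], barb_density=30)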

Classes IPC  ?

  • G06T 11/00 - Génération d'images bidimensionnelles [2D]

40.

METHOD FOR CONTROLLING DIGITAL OBJECT GENERATIONS THROUGH A USER INTERFACE IN A COMPUTER MODELING SYSTEM

      
Numéro d'application US2022041107
Numéro de publication 2023/023404
Statut Délivré - en vigueur
Date de dépôt 2022-08-22
Date de publication 2023-02-23
Propriétaire UNITY TECHNOLOGIES SF (USA)
Inventeur(s)
  • Sprenger, Christoph
  • Baas, Matthias

Abrégé

A realistic feather generation may be represented via a user interface in a modeling system. To perform the object generation, such as surface object generation (e.g., feather generation), a curve defining a line of an object (e.g., the spine of a feather) is received from input to the user interface of the modeling system. The object is created in the user interface based at least on the curve, where the object includes structures (e.g., barbs of a feather) extending away from the line of the object in one or more directions. The object is deformed based on one or more deformation parameters. An object contour of the structures along at least one of the one or more directions is generated based on a contour parameter defining the object contour. A structure density for a number of the structures along the line of the object is established based on a structure density parameter. Thereafter, a data structure representing the object (e.g., feather) is generated.

Classes IPC  ?

  • G06T 17/30 - Description de surfaces, p. ex. description de surfaces polynomiales
  • G06T 19/20 - Édition d'images tridimensionnelles [3D], p. ex. modification de formes ou de couleurs, alignement d'objets ou positionnements de parties

41.

Systems and methods for computer animation of an artificial character using facial poses from a live actor

      
Numéro d'application 17403730
Numéro de brevet 11587278
Statut Délivré - en vigueur
Date de dépôt 2021-08-16
Date de la première publication 2023-02-16
Date d'octroi 2023-02-21
Propriétaire UNITY TECHNOLOGIES SF (USA)
Inventeur(s)
  • Ma, Wan-Duo Kurt
  • Ghifary, Muhammad

Abrégé

Embodiments described herein provide an approach for animating a character face of an artificial character based on facial poses performed by a live actor. Geometric characteristics of the facial surface corresponding to each facial pose performed by the live actor may be learnt by a machine learning system, which in turn builds a mesh of a facial rig with an array of controllable elements applicable to the character face of the artificial character.

Classes IPC  ?

  • G06T 13/40 - Animation tridimensionnelle [3D] de personnages, p. ex. d’êtres humains, d’animaux ou d’êtres virtuels
  • G06T 7/73 - Détermination de la position ou de l'orientation des objets ou des caméras utilisant des procédés basés sur les caractéristiques
  • G06N 3/08 - Méthodes d'apprentissage

42.

Computing illumination of an elongated shape having a noncircular cross section

      
Numéro d'application 17545343
Numéro de brevet 11600041
Statut Délivré - en vigueur
Date de dépôt 2021-12-08
Date de la première publication 2023-01-19
Date d'octroi 2023-03-07
Propriétaire Unity Technologies SF (USA)
Inventeur(s) Weidlich, Andrea

Abrégé

The system obtains an indication of a shape of a cross-section of an elongated shape, and an orientation of the shape. Based on the shape of the cross-section of the elongated shape and the orientation of the shape, the system creates a nonuniform distribution of random numbers mapping uniformly distributed input values to multiple points on the surface of the elongated shape. The system provides an input value randomly selected from a uniform distribution of random numbers to the nonuniform distribution of random numbers to obtain a point among the multiple sample points on the surface of the elongated shape. The system applies a function to the input value to obtain an indication of a normal associated with the sample point among the multiple sample points. Finally, the system computes an illumination of the elongated shape using the normal.
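
A hedged numerical sketch of this sampling idea for an elliptical cross-section follows: a tabulated inverse CDF maps uniform inputs to boundary points in proportion to arc length, a closed-form function gives the normal at each point, and a Lambertian term is averaged for the illumination. The ellipse, light setup and function names are assumptions, not the claimed system.

    import numpy as np

    def ellipse_inverse_cdf(a, b, resolution=2048):
        """Nonuniform distribution: uniform u in [0, 1] -> boundary angle theta."""
        theta = np.linspace(0.0, 2.0 * np.pi, resolution)
        # Arc-length density of the ellipse x = a*cos(t), y = b*sin(t).
        ds = np.sqrt((a * np.sin(theta)) ** 2 + (b * np.cos(theta)) ** 2)
        cdf = np.cumsum(ds)
        cdf /= cdf[-1]
        return lambda u: np.interp(u, cdf, theta)

    def illuminate(a, b, light_dir, n_samples=10000, seed=0):
        rng = np.random.default_rng(seed)
        theta = ellipse_inverse_cdf(a, b)(rng.uniform(0.0, 1.0, n_samples))
        # Function from input value to normal: outward normal is (b*cos t, a*sin t).
        normals = np.stack([b * np.cos(theta), a * np.sin(theta)], axis=1)
        normals /= np.linalg.norm(normals, axis=1, keepdims=True)
        light = np.asarray(light_dir, dtype=np.float64)
        light /= np.linalg.norm(light)
        return np.maximum(normals @ light, 0.0).mean()   # average Lambertian term

    avg_irradiance = illuminate(a=1.0, b=0.3, light_dir=(0.0, 1.0))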

Classes IPC  ?

43.

Method for simulating combustion in digital imagery with equilibrium and non-equilibrium conditions

      
Numéro d'application 17805412
Numéro de brevet 11756252
Statut Délivré - en vigueur
Date de dépôt 2022-06-03
Date de la première publication 2022-12-08
Date d'octroi 2023-09-12
Propriétaire Unity Technologies SF (USA)
Inventeur(s)
  • Stomakhin, Alexey
  • Museth, Ken

Abrégé

A combustion simulation system is provided. The combustion simulation can be performed using a computing device operated by a computer user or artist. A computer-implemented method of generating one or more visual representations of a combustion event is provided. The method includes simulating the combustion event, which transforms combustion reactants into combustion products, the combustion event occurring at a reference pressure; automatically determining values of combustion properties, the values of the combustion properties being calculated as a function of a nonzero pressure field; and generating the one or more visual representations of the combustion event based on the values of the combustion properties.

Classes IPC  ?

  • G06T 13/60 - Animation tridimensionnelle [3D] de phénomènes naturels, p. ex. la pluie, la neige, l'eau ou les plantes
  • G06F 30/28 - Optimisation, vérification ou simulation de l’objet conçu utilisant la dynamique des fluides, p. ex. les équations de Navier-Stokes ou la dynamique des fluides numérique [DFN]
  • G06F 17/11 - Opérations mathématiques complexes pour la résolution d'équations
  • G05B 19/406 - Commande numérique [CN], c.-à-d. machines fonctionnant automatiquement, en particulier machines-outils, p. ex. dans un milieu de fabrication industriel, afin d'effectuer un positionnement, un mouvement ou des actions coordonnées au moyen de données d'un programme sous forme numérique caractérisée par le contrôle ou la sécurité
  • G06T 11/00 - Génération d'images bidimensionnelles [2D]
  • H04N 5/222 - Circuits de studioDispositifs de studioÉquipements de studio
  • G06N 3/10 - Interfaces, langages de programmation ou boîtes à outils de développement logiciel, p. ex. pour la simulation de réseaux neuronaux

44.

Calibrating focus/defocus operations of a virtual display based on camera settings

      
Numéro d'application 17547162
Numéro de brevet 11514654
Statut Délivré - en vigueur
Date de dépôt 2021-12-09
Date de la première publication 2022-11-29
Date d'octroi 2022-11-29
Propriétaire Unity Technologies SF (USA)
Inventeur(s)
  • Thurston, III, Kimball D.
  • Marks, Joseph W.
  • Fascione, Luca
  • Maier, Millicent
  • Gimpelson, Kenneth
  • Momcilovic, Dejan
  • Miller, Keith F.
  • Hillman, Peter M.

Abrégé

Methods and systems are presented for determining a virtual focus model for a camera apparatus, the camera apparatus comprising one or more image capture elements and one or more optics devices through which light in an optical path passes from a stage environment to at least one of the one or more image capture elements, the stage environment including a virtual scene display for displaying a virtual scene.

Classes IPC  ?

  • G06T 19/00 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie
  • G06T 7/80 - Analyse des images capturées pour déterminer les paramètres de caméra intrinsèques ou extrinsèques, c.-à-d. étalonnage de caméra
  • H04N 13/243 - Générateurs de signaux d’images utilisant des caméras à images stéréoscopiques utilisant au moins trois capteurs d’images 2D

45.

Method for controlling digital feather growth between two manifolds in a computer simulated creature

      
Numéro d'application 17666430
Numéro de brevet 11682156
Statut Délivré - en vigueur
Date de dépôt 2022-02-07
Date de la première publication 2022-11-17
Date d'octroi 2023-06-20
Propriétaire Unity Technologies SF (USA)
Inventeur(s) Sprenger, Christoph

Abrégé

A realistic feather growth may be represented between two surface manifolds in a modeling system. To perform the feather growth, a feather groom for a plurality of feathers between an inner shell of a creature and an outer shell of the creature is received. An inner manifold for the inner shell and an outer manifold for the outer shell are determined with a plurality of follicle points and a plurality of tip points. A first surface contour definition for the inner manifold and a second surface contour definition for the outer manifold are determined and used to determine a volumetric vector field between the inner manifold and the outer manifold. Thereafter, the plurality of feathers is generated between the inner manifold and the outer manifold using the follicle points, the tip points, and the volumetric vector field.

Classes IPC  ?

  • G06T 13/40 - Animation tridimensionnelle [3D] de personnages, p. ex. d’êtres humains, d’animaux ou d’êtres virtuels
  • G06T 15/04 - Mappage de texture

46.

Increasing dynamic range of a virtual production display

      
Numéro d'application 17538614
Numéro de brevet 11503224
Statut Délivré - en vigueur
Date de dépôt 2021-11-30
Date de la première publication 2022-11-15
Date d'octroi 2022-11-15
Propriétaire Unity Technologies SF (USA)
Inventeur(s)
  • Marks, Joseph W.
  • Fascione, Luca
  • Thurston, III, Kimball D.
  • Maier, Millie
  • Gimpelson, Kenneth
  • Momcilovic, Dejan
  • Miller, Keith F.
  • Hillman, Peter M.
  • Swartz, Jonathan S.

Abrégé

The processor obtains a third pixel value and a second pixel value of the display. The processor determines a desired pixel value range that exceeds the second pixel value of the display. The processor obtains a threshold between the third pixel value of the display and the second pixel value of the display. The processor obtains a function mapping the desired pixel value range to a range between the threshold and the second pixel value. The processor applies the function to an input image prior to displaying the input image on the display. The display presents the image. Upon recording the presented image, the processor determines a region within the recorded image having a pixel value between the threshold and the second pixel value. The processor increases dynamic range of the recorded image by applying an inverse of the function to the pixel value of the region.
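
The mapping and its inverse could look something like the following simplified sketch, which compresses a desired range that exceeds the display maximum into the band between the threshold and the maximum, and expands that band again on the recorded image; the linear curve and the constants are assumptions for illustration.

    import numpy as np

    DISPLAY_MAX = 1.0        # the brightest pixel value the display can show
    THRESHOLD = 0.8          # values above this band carry the compressed range
    DESIRED_MAX = 4.0        # desired pixel value range exceeding the display

    def compress(img):
        out = img.copy()
        hi = img > THRESHOLD
        # Map [THRESHOLD, DESIRED_MAX] linearly into [THRESHOLD, DISPLAY_MAX].
        scale = (DISPLAY_MAX - THRESHOLD) / (DESIRED_MAX - THRESHOLD)
        out[hi] = THRESHOLD + (img[hi] - THRESHOLD) * scale
        return out

    def expand(recorded):
        out = recorded.copy()
        hi = recorded > THRESHOLD
        scale = (DESIRED_MAX - THRESHOLD) / (DISPLAY_MAX - THRESHOLD)
        out[hi] = THRESHOLD + (recorded[hi] - THRESHOLD) * scale
        return out

    frame = np.array([0.2, 0.9, 3.5])
    restored = expand(compress(frame))       # approximately recovers `frame`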

Classes IPC  ?

  • H04N 5/235 - Circuits pour la compensation des variations de la luminance de l'objet
  • G06V 10/22 - Prétraitement de l’image par la sélection d’une région spécifique contenant ou référençant une formeLocalisation ou traitement de régions spécifiques visant à guider la détection ou la reconnaissance
  • G06T 5/00 - Amélioration ou restauration d'image
  • H04N 5/222 - Circuits de studioDispositifs de studioÉquipements de studio
  • H04N 5/92 - Transformation du signal de télévision pour l'enregistrement, p. ex. modulation, changement de fréquenceTransformation inverse pour le surjeu
  • H04N 5/265 - Mélange

47.

Method for associating production elements with a production approach

      
Numéro d'application 17544476
Numéro de brevet 11487408
Statut Délivré - en vigueur
Date de dépôt 2021-12-07
Date de la première publication 2022-11-01
Date d'octroi 2022-11-01
Propriétaire Unity Technologies SF (USA)
Inventeur(s)
  • Letteri, Joseph Bruce
  • Snelling, Geo H.
  • Penman, Matt J.
  • Guarina, Rebs Senga

Abrégé

A computer-implemented method for managing associations between production elements and production approaches includes displaying, within a breakdown panel, a representation of at least one scene obtained from a script, the representation including a plurality of production elements; displaying, within an approaches panel, a representation of at least some of the plurality of production elements displayed within the breakdown panel, and associated respective user selection indicators; and displaying, within the approaches panel, a representation of at least one production approach. The method further includes receiving a user selection of at least one production element displayed within the approaches panel; receiving a user selection of the at least one production approach. The selected at least one production element is associated with the selected at least one production approach.

Classes IPC  ?

  • G06F 3/0482 - Interaction avec des listes d’éléments sélectionnables, p. ex. des menus
  • G06F 3/0484 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] pour la commande de fonctions ou d’opérations spécifiques, p. ex. sélection ou transformation d’un objet, d’une image ou d’un élément de texte affiché, détermination d’une valeur de paramètre ou sélection d’une plage de valeurs
  • G06F 16/26 - Exploration de données visuellesNavigation dans des données structurées
  • G06F 40/205 - Analyse syntaxique
  • G06Q 10/10 - BureautiqueGestion du temps

48.

Increasing resolution and luminance of a display

      
Numéro d'application 17537821
Numéro de brevet 11490034
Statut Délivré - en vigueur
Date de dépôt 2021-11-30
Date de la première publication 2022-11-01
Date d'octroi 2022-11-01
Propriétaire Unity Technologies SF (USA)
Inventeur(s)
  • Marks, Joseph W.
  • Fascione, Luca
  • Thurston, III, Kimball D.
  • Maier, Millie
  • Gimpelson, Kenneth
  • Momcilovic, Dejan
  • Miller, Keith F.
  • Hillman, Peter M.
  • Swartz, Jonathan S.
  • Sullivan, Carter Bart

Abrégé

The disclosed system and method can increase resolution of a display in postprocessing. The processor can obtain multiple images presented on a display, where the display is configured to present the multiple images at a first frame rate higher than a frame rate needed to form a perception of motion. The processor can obtain a mask corresponding to one or more images among the multiple images, where the mask indicates a portion of the one or more images among the multiple images to include in an output image. The processor can increase resolution of the display in proportion to the number of multiple images presented to the display by combining, based on the mask, the one or more images among the multiple images to obtain the output image.
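
As an illustration of the mask-based combination (not the disclosed system itself), the sketch below interleaves four displayed sub-frames into one image with twice the resolution in each direction, using a fixed 2x2 phase per frame as the mask; the interleave pattern and function name are assumptions.

    import numpy as np

    def combine_frames(frames):
        """frames: list of 4 (H, W) images; returns a (2H, 2W) output image."""
        h, w = frames[0].shape
        out = np.zeros((2 * h, 2 * w), dtype=frames[0].dtype)
        offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]     # mask = one 2x2 phase per frame
        for frame, (dy, dx) in zip(frames, offsets):
            out[dy::2, dx::2] = frame
        return out

    subframes = [np.full((2, 2), v, dtype=np.float32) for v in (0.1, 0.2, 0.3, 0.4)]
    high_res = combine_frames(subframes)    # 4x4 output assembled from four 2x2 frames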

Classes IPC  ?

  • H04N 5/265 - Mélange
  • G06V 10/22 - Prétraitement de l’image par la sélection d’une région spécifique contenant ou référençant une formeLocalisation ou traitement de régions spécifiques visant à guider la détection ou la reconnaissance
  • H04N 9/64 - Circuits pour le traitement de signaux de couleur
  • G06T 7/70 - Détermination de la position ou de l'orientation des objets ou des caméras
  • H04N 5/76 - Enregistrement du signal de télévision
  • H04N 7/01 - Conversion des normes
  • G06T 5/50 - Amélioration ou restauration d'image utilisant plusieurs images, p. ex. moyenne ou soustraction

49.

Computing virtual screen imagery based on a stage environment, camera position, and/or camera settings

      
Numéro d'application 17547173
Numéro de brevet 11488348
Statut Délivré - en vigueur
Date de dépôt 2021-12-09
Date de la première publication 2022-11-01
Date d'octroi 2022-11-01
Propriétaire Unity Technologies SF (USA)
Inventeur(s)
  • Thurston, III, Kimball D.
  • Marks, Joseph W.
  • Fascione, Luca
  • Maier, Millicent
  • Gimpelson, Kenneth
  • Momcilovic, Dejan
  • Miller, Keith F.
  • Hillman, Peter M.

Abrégé

Methods and systems are presented for generating a virtual scene rendering usable in a captured scene based on a camera position of a camera in a stage environment, a mapping of a plurality of subregions of a virtual scene display in the stage environment to corresponding positions in the stage environment, and details of a virtual scene element. The details might include a subregion of the plurality of subregions for the virtual scene element where on the virtual scene display the given virtual scene element would, at least in part, appear, and stage subregion depth values. A blur factor for a corresponding subregion might be determined based at least in part on the stage subregion depth value and the virtual subregion depth value. Rendering the virtual scene might take into account the blur factor for the given virtual scene element.

Classes IPC  ?

  • G06T 15/20 - Calcul de perspectives
  • G06T 15/50 - Effets de lumière
  • G06T 5/00 - Amélioration ou restauration d'image
  • G06T 7/13 - Détection de bords
  • G06T 7/536 - Récupération de la profondeur ou de la forme à partir des effets de perspective, p. ex. en utilisant des points de fuite
  • G06T 7/90 - Détermination de caractéristiques de couleur
  • G06T 7/73 - Détermination de la position ou de l'orientation des objets ou des caméras utilisant des procédés basés sur les caractéristiques

50.

Efficiently determining an absorption coefficient of a virtual volume in 3D computer graphics

      
Numéro d'application 17521746
Numéro de brevet 11481959
Statut Délivré - en vigueur
Date de dépôt 2021-11-08
Date de la première publication 2022-10-25
Date d'octroi 2022-10-25
Propriétaire
  • UNITY SOFTWARE INC. (USA)
  • UNITY TECHNOLOGIES SF (USA)
Inventeur(s) Weidlich, Andrea

Abrégé

Disclosed is a method to derive the absorption coefficient, transparency, and/or the scattering coefficient from the user-specified parameters including roughness, phase function, index of refraction (IOR), and color by performing the simulation once, and storing the results of the simulation in an easy to retrieve representation, such as a lookup table, or an analytic function. To create the analytic function, one or more analytic functions can be fitted to the results of the simulation for the multiple parameters including roughness, phase function, IOR, and color. The lookup table can be combined with the analytic representation. For example, the lookup table can be used to represent the color, roughness, and phase function, while the IOR can be represented by an analytic function. For example, when the IOR is above 2, the lookup table becomes three-dimensional and the IOR is calculated using the analytic function.
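
The "simulate once, store, then look up" idea might be sketched as below, with a toy stand-in for the expensive simulation, a small (roughness, IOR) table queried by bilinear interpolation, and an analytic branch for IOR above 2 as the abstract mentions; every formula here is a placeholder assumption.

    import numpy as np

    ROUGHNESS = np.linspace(0.0, 1.0, 16)
    IOR = np.linspace(1.0, 2.0, 16)

    def expensive_simulation(r, ior):
        return np.exp(-3.0 * r) / ior          # placeholder for a light-transport run

    TABLE = np.array([[expensive_simulation(r, n) for n in IOR] for r in ROUGHNESS])

    def absorption(r, ior):
        if ior > 2.0:
            return np.exp(-3.0 * r) / ior       # analytic branch for high IOR
        i = np.clip(np.searchsorted(ROUGHNESS, r) - 1, 0, len(ROUGHNESS) - 2)
        j = np.clip(np.searchsorted(IOR, ior) - 1, 0, len(IOR) - 2)
        tr = (r - ROUGHNESS[i]) / (ROUGHNESS[i + 1] - ROUGHNESS[i])
        tn = (ior - IOR[j]) / (IOR[j + 1] - IOR[j])
        top = TABLE[i, j] * (1 - tn) + TABLE[i, j + 1] * tn
        bot = TABLE[i + 1, j] * (1 - tn) + TABLE[i + 1, j + 1] * tn
        return top * (1 - tr) + bot * tr        # bilinear lookup in the table

    value = absorption(0.35, 1.45)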

Classes IPC  ?

51.

Hierarchies to generate animation control rigs

      
Numéro d'application 17855340
Numéro de brevet 11715249
Statut Délivré - en vigueur
Date de dépôt 2022-06-30
Date de la première publication 2022-10-20
Date d'octroi 2023-08-01
Propriétaire UNITY TECHNOLOGIES SF (USA)
Inventeur(s)
  • Stevenson, Thomas
  • Sun, Edward

Abrégé

An animation system is provided for generating an animation control rig for character development, configured to manipulate a skeleton of an animated character. A hierarchical representation of puppets includes groups of functions related in a hierarchy according to character specialization; the groups used for creating the animation rig are derived using base functions of a core component node. The hierarchical nodes may include an archetype node, at least one appendage node, and at least one feature node. In some implementations, portions of a hierarchical node, including the functions from the core component node, may be shared to generate different animation rigs for a variety of characters. In some implementations, portions of a hierarchical node, including the component node functions, may be reused to build similar appendages of a same animation rig.
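
A structural sketch of that hierarchy, assuming nothing about the actual rigging framework, might look like the following: a core component node supplies a base build function, and archetype, appendage and feature nodes specialize it and can be reused to assemble similar appendages or different rigs. All class and function names are invented for the example.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class CoreComponentNode:
        name: str
        children: List["CoreComponentNode"] = field(default_factory=list)

        def build(self) -> list:
            """Base function shared by all specializations: emit rig controls."""
            controls = [f"{self.name}_ctrl"]
            for child in self.children:
                controls.extend(child.build())
            return controls

    class ArchetypeNode(CoreComponentNode): pass     # e.g. "biped", "quadruped"
    class AppendageNode(CoreComponentNode): pass     # e.g. "arm", "leg"
    class FeatureNode(CoreComponentNode): pass       # e.g. "finger", "toe"

    # Reuse the same appendage definition twice to build mirrored limbs.
    def make_arm(side: str) -> AppendageNode:
        return AppendageNode(f"{side}_arm", [FeatureNode(f"{side}_hand")])

    biped = ArchetypeNode("biped", [make_arm("L"), make_arm("R")])
    print(biped.build())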

Classes IPC  ?

  • G06T 13/40 - Animation tridimensionnelle [3D] de personnages, p. ex. d’êtres humains, d’animaux ou d’êtres virtuels
  • G06T 17/00 - Modélisation tridimensionnelle [3D] pour infographie
  • G06F 8/20 - Conception de logiciels
  • G06F 8/36 - Réutilisation de logiciel
  • G06F 8/41 - Compilation
  • G06F 8/658 - Mises à jour par incrémentMises à jour différentielles
  • G06F 8/72 - Refonte de code
  • G06T 19/20 - Édition d'images tridimensionnelles [3D], p. ex. modification de formes ou de couleurs, alignement d'objets ou positionnements de parties

52.

WETA TOOLS

      
Numéro de série 97628156
Statut En instance
Date de dépôt 2022-10-11
Propriétaire Unity Technologies SF ()
Classes de Nice  ?
  • 09 - Appareils et instruments scientifiques et électriques
  • 42 - Services scientifiques, technologiques et industriels, recherche et conception

Produits et services

Downloadable computer software for use in creating digital animation, computer generated effects, computer generated scenes, special effects, computer-generated imagery, visual effects, moving images, motion capture, computer-generated graphics, interactive computer graphics, computer modeled humans and animals avatars; downloadable computer software for creating, scanning, displaying, analyzing, editing, processing, recording, transmitting, and printing digital imagery, 3D models, graphics, text, audio and computer-generated effects; downloadable computer software for the integration of text, audio, graphics, and still images; downloadable computer software for the integration of digital imagery and 3D models into an interactive delivery for multimedia applications; downloadable computer software for use in manipulating graphic images and photographic images; downloadable computer software for use in creating interactive 3D content; downloadable computer software that enables the input, output, and display of interactive 3D renderings; none of the foregoing for physical effects production, including prosthetics and makeup services, costumes, models, miniatures, weaponry armor, and set and prop construction; none of the foregoing for professional services consisting of visual effects artist services in connection with audio-visual productions for movie, television, and other production studios Industrial analysis and industrial research services in the field of computer graphics creation, graphics processing, 3D modeling, 3D rendering and virtual computer simulations; computer graphics services; Computer graphics design services, namely, creating 3D models and renderings, animations, virtual characters and avatars; providing temporary use of online, non-downloadable software for 3D-visualization, 3D modeling and 3D rendering; providing temporary use of online, non-downloadable software for image editing and the simulation, visualization and rendering of digital objects and images; providing temporary use of online, non-downloadable computer software for the integration of digital imagery and 3D models into an interactive delivery for multimedia applications; providing temporary use of online, non-downloadable software for graphics processing; providing temporary use of online, non-downloadable cloud-based software for use in creating digital animation, computer-generated effects, computer generated scenes, special effects, computer-generated imagery, visual effects, moving images, motion capture, film making, computer-generated graphics, interactive computer graphics, computer modeled humans and animals avatars; providing temporary use of online, non-downloadable cloud-based computer software for creating, scanning, displaying, analyzing, editing, processing, recording, transmitting, and printing digital imagery, 3D models, graphics, text, audio and computer-generated effects; providing temporary use of online, non-downloadable cloud-based software for the integration of text, audio, graphics, and still images; providing temporary use of online, non-downloadable computer software for use in manipulating graphic images and photographic images; providing temporary use of online, non-downloadable software that enables the input, output, and display of interactive 3D renderings; design, development and maintenance of software, software applications and software application programming interfaces, including in the fields of animation, computer generated effects, computer generated scenes, 
special effects, computer-generated imagery, computer-generated graphics and interactive computer graphics; providing temporary use of online, non-downloadable software and web-based software applications for use in creating digital animation, computer generated effects, computer generated scenes, special effects, computer-generated imagery, computer-generated graphics and interactive computer graphics; providing temporary use of online, non-downloadable software and web-based software applications for use in creating digital animation, computer generated effects, computer generated scenes, special effects, computer-generated imagery, visual effects, moving images, motion capture, computer-generated graphics and interactive computer graphics; software-as-a service (SAAS) services and platform-as-a-service (PAAS) featuring computer software platforms, both featuring software for use in creating animation, computer generated effects, computer generated scenes, special effects, computer-generated imagery, visual effects, moving images, motion capture, computer-generated graphics and interactive computer graphics; video and computer game software design and development services; providing temporary use of on-line non-downloadable software development tools for the design and development of electronic games; providing temporary use of on-line non-downloadable software and web-based applications for accessing streaming audio and video files, games, social networks, text files and multimedia files; application service provider services, namely, hosting, managing, developing, and maintaining applications, software, and web sites of others in the fields of animation, computer generated effects, computer generated scenes, special effects, computer-generated imagery, visual effects, moving image production, motion capture, computer-generated graphics and interactive computer graphics; cloud computing featuring software for use in creating animation, computer generated effects, computer generated scenes, special effects, computer-generated imagery, visual effects, moving images, motion capture, computer-generated graphics and interactive computer graphics; technical support services, namely, troubleshooting of computer software problems; cloud computing featuring software for use in computer graphics creation, 3D modeling, 3D rendering and virtual simulations; platform as a service (PAAS) featuring computer software platforms for editing images, sound, video, and digital media; Platform as a service (PAAS) featuring computer software platforms for computer graphics creation, 3D modeling, 3D rendering, virtual simulations, animation, computer generated effects, computer generated scenes, special effects, computer-generated imagery, visual effects, moving-image production, motion capture, computer-generated graphics and interactive computer graphics; information, advisory and consultancy services relating to the design of animation, computer generated effects, computer generated scenes, special effects, computer-generated imagery, computer-generated graphics and interactive computer graphics; none of the foregoing for physical effects production, including prosthetics and makeup services, costumes, models, miniatures, weaponry armor, and set and prop construction; none of the foregoing for professional services consisting of visual effects artist services in connection with audio-visual productions for movie, television, and other production studios

53.

Increasing dynamic range of a virtual production display

      
Numéro d'application 17538861
Numéro de brevet 11468546
Statut Délivré - en vigueur
Date de dépôt 2021-11-30
Date de la première publication 2022-10-11
Date d'octroi 2022-10-11
Propriétaire Unity Technologies SF (USA)
Inventeur(s)
  • Marks, Joseph W.
  • Fascione, Luca
  • Thurston, III, Kimball D.
  • Maier, Millie
  • Gimpelson, Kenneth
  • Momcilovic, Dejan
  • Miller, Keith F.
  • Hillman, Peter M.
  • Swartz, Jonathan S.

Abrégé

A processor performing postprocessing obtains an input image containing both bright and dark regions. The processor obtains a threshold between a first pixel value of the virtual production display and a second pixel value of the virtual production display. The processor modifies the region according to predetermined steps producing a pattern unlikely to occur within the input image, where the pattern corresponds to a difference between the original pixel value and the threshold. The processor can replace the region of the input image with the pattern to obtain a modified image. The virtual production display can present the modified image. A processor performing postprocessing detects the pattern within the modified image displayed on the virtual production display. The processor calculates the original pixel value of the region by reversing the predetermined steps. The processor replaces the pattern in the modified image with the original pixel value.

Classes IPC  ?

  • G06T 5/00 - Amélioration ou restauration d'image
  • G06V 10/22 - Prétraitement de l’image par la sélection d’une région spécifique contenant ou référençant une formeLocalisation ou traitement de régions spécifiques visant à guider la détection ou la reconnaissance
  • G06T 9/00 - Codage d'image
  • G06K 9/62 - Méthodes ou dispositions pour la reconnaissance utilisant des moyens électroniques

54.

Method for scattering points in a uniform arbitrary distribution across a target mesh for a computer animated creature

      
Numéro d'application 17834727
Numéro de brevet 11710270
Statut Délivré - en vigueur
Date de dépôt 2022-06-07
Date de la première publication 2022-09-22
Date d'octroi 2023-07-25
Propriétaire Unity Technologies SF (USA)
Inventeur(s) Hunter, Philip

Abrégé

A programmatic arbitrary distribution of items in a modeling system may be provided. To perform the distribution, a surface may be received, and a point count of application points associated with locations on the surface may be determined. A density map may be applied over the surface to assign a density to portions of the surface for the point count. Application points are then assigned to locations on the surface according to the density map and a scattering function of the point count, where the scattering function is based on one or more repulsion forces between neighboring points. The one or more repulsion forces are treated as pushing each of the neighboring points apart. Thereafter, the surface may be provided having the application points scattered across the surface based on the one or more repulsion forces.
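
A condensed sketch of density-guided scattering with pairwise repulsion follows, assuming a unit-square parameter domain, an inverse-distance force and a callable density map; none of these choices are taken from the patent, they simply illustrate the mechanism.

    import numpy as np

    def scatter(point_count, density, iters=50, step=0.002, seed=1):
        """density: callable (x, y) -> value in (0, 1]. Returns (N, 2) points."""
        rng = np.random.default_rng(seed)
        pts = rng.uniform(0.0, 1.0, (point_count, 2))
        for _ in range(iters):
            delta = pts[:, None, :] - pts[None, :, :]        # (N, N, 2) pair offsets
            dist2 = (delta ** 2).sum(-1) + 1e-9
            np.fill_diagonal(dist2, np.inf)                  # no self-repulsion
            force = (delta / dist2[..., None]).sum(axis=1)   # inverse-distance push
            local_density = density(pts[:, 0], pts[:, 1])
            pts += step * force / local_density[:, None]     # smaller steps where dense
            pts = np.clip(pts, 0.0, 1.0)
        return pts

    points = scatter(200, density=lambda x, y: 0.25 + 0.75 * x)  # denser toward x = 1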

Classes IPC  ?

  • G06T 15/04 - Mappage de texture
  • G06T 17/20 - Description filaire, p. ex. polygonalisation ou tessellation
  • G06T 11/00 - Génération d'images bidimensionnelles [2D]
  • G06T 15/00 - Rendu d'images tridimensionnelles [3D]

55.

Increasing dynamic range of a virtual production display

      
Numéro d'application 17539072
Numéro de brevet 11451708
Statut Délivré - en vigueur
Date de dépôt 2021-11-30
Date de la première publication 2022-09-20
Date d'octroi 2022-09-20
Propriétaire Unity Technologies SF (USA)
Inventeur(s)
  • Marks, Joseph W.
  • Fascione, Luca
  • Thurston, III, Kimball D.
  • Maier, Millie
  • Gimpelson, Kenneth
  • Momcilovic, Dejan
  • Miller, Keith F.
  • Hillman, Peter M.
  • Swartz, Jonathan S.

Abrégé

A processor calibrates the camera by presenting an input image on the display to obtain a presented image. The camera, arbitrarily positioned relative to the display, records the presented image. The processor obtains the input image via a channel different from the display. The processor obtains an indication of a display region associated with the display. The processor determines an input image region corresponding to the display region, and a recorded image region corresponding to the display region. The processor obtains a first pixel value associated with the input image region and a second pixel value associated with the recorded image region. The processor determines a mapping between the first pixel value and the second pixel value, where applying the mapping to the second pixel value substantially produces the first pixel value. The processor stores an identifier associated with the recorded image region and the mapping.
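
One simple way to realize the stored per-region mapping, assuming an affine (gain and offset) model fitted by least squares to paired input and recorded pixel values, is sketched below; the model and names are assumptions, not the calibration actually claimed.

    import numpy as np

    def fit_region_mapping(input_values, recorded_values):
        """Least-squares fit of input ≈ gain * recorded + offset for one region."""
        rec = np.asarray(recorded_values, dtype=np.float64)
        inp = np.asarray(input_values, dtype=np.float64)
        A = np.column_stack([rec, np.ones_like(rec)])
        (gain, offset), *_ = np.linalg.lstsq(A, inp, rcond=None)
        return gain, offset

    # Store the mapping under an identifier for the recorded image region.
    calibration = {}
    calibration["region_03"] = fit_region_mapping([0.1, 0.5, 0.9], [0.08, 0.42, 0.77])
    gain, offset = calibration["region_03"]
    corrected = gain * 0.42 + offset          # ≈ 0.5, the original input pixel value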

Classes IPC  ?

  • H04N 5/225 - Caméras de télévision
  • H04N 5/232 - Dispositifs pour la commande des caméras de télévision, p.ex. commande à distance
  • G06T 7/80 - Analyse des images capturées pour déterminer les paramètres de caméra intrinsèques ou extrinsèques, c.-à-d. étalonnage de caméra

56.

Increasing dynamic range of a virtual production display

      
Numéro d'application 17538921
Numéro de brevet 11451709
Statut Délivré - en vigueur
Date de dépôt 2021-11-30
Date de la première publication 2022-09-20
Date d'octroi 2022-09-20
Propriétaire Unity Technologies SF (USA)
Inventeur(s)
  • Marks, Joseph W.
  • Fascione, Luca
  • Thurston, III, Kimball D.
  • Maier, Millie
  • Gimpelson, Kenneth
  • Momcilovic, Dejan
  • Miller, Keith F.
  • Hillman, Peter M.
  • Swartz, Jonathan S.

Abrégé

A processor performing postprocessing obtains an input image containing both bright and dark regions. The processor obtains a threshold between a first pixel value of the virtual production display and a second pixel value of the virtual production display. The processor modifies the region according to predetermined steps producing a pattern unlikely to occur within the input image, where the pattern corresponds to a difference between the original pixel value and the threshold. The processor can replace the region of the input image with the pattern to obtain a modified image. The virtual production display can present the modified image. A processor performing postprocessing detects the pattern within the modified image displayed on the virtual production display. The processor calculates the original pixel value of the region by reversing the predetermined steps. The processor replaces the pattern in the modified image with the original pixel value.

Classes IPC  ?

  • G06V 20/00 - ScènesÉléments spécifiques à la scène
  • H04N 5/232 - Dispositifs pour la commande des caméras de télévision, p.ex. commande à distance
  • H04N 5/235 - Circuits pour la compensation des variations de la luminance de l'objet
  • G06T 5/50 - Amélioration ou restauration d'image utilisant plusieurs images, p. ex. moyenne ou soustraction
  • G06T 5/00 - Amélioration ou restauration d'image

57.

Method for processing image data to provide for soft shadow effects using shadow depth information

      
Numéro d'application 17583192
Numéro de brevet 11810248
Statut Délivré - en vigueur
Date de dépôt 2022-01-24
Date de la première publication 2022-09-15
Date d'octroi 2023-11-07
Propriétaire Unity Technologies SF (USA)
Inventeur(s) Hillman, Peter M.

Abrégé

An image dataset is processed with a shadow map generated from objects in a virtual scene that can cast shadows, and the scene is rendered independently of the shadows. The shadow might be edited separately, and then applied to a post-render image of the scene to form a shadowed image. Light factor values for pixels of the shadow map might be stored as summed-area table values.
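
The summed-area-table detail can be illustrated with the short sketch below: cumulative sums of per-pixel light factors let the average light factor over any rectangle (a cheap soft-shadow blur) be read back with four lookups. The data and the query window are illustrative assumptions.

    import numpy as np

    def summed_area_table(light_factors):
        return light_factors.cumsum(axis=0).cumsum(axis=1)

    def box_average(sat, y0, x0, y1, x1):
        """Mean of light factors over the inclusive rectangle (y0, x0)..(y1, x1)."""
        total = sat[y1, x1]
        if y0 > 0:
            total -= sat[y0 - 1, x1]
        if x0 > 0:
            total -= sat[y1, x0 - 1]
        if y0 > 0 and x0 > 0:
            total += sat[y0 - 1, x0 - 1]
        return total / ((y1 - y0 + 1) * (x1 - x0 + 1))

    shadow_map = np.random.default_rng(2).uniform(0.0, 1.0, (64, 64))
    sat = summed_area_table(shadow_map)
    soft_factor = box_average(sat, 10, 10, 14, 14)   # 5x5 softened shadow sample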

Classes IPC  ?

  • G06T 15/60 - Génération d'ombres
  • G06T 15/04 - Mappage de texture
  • G06T 15/50 - Effets de lumière
  • G06T 19/20 - Édition d'images tridimensionnelles [3D], p. ex. modification de formes ou de couleurs, alignement d'objets ou positionnements de parties

58.

Increasing resolution and luminance of a display

      
Numéro d'application 17537908
Numéro de brevet 11443676
Statut Délivré - en vigueur
Date de dépôt 2021-11-30
Date de la première publication 2022-09-13
Date d'octroi 2022-09-13
Propriétaire Unity Technologies SF (USA)
Inventeur(s)
  • Marks, Joseph W.
  • Fascione, Luca
  • Thurston, III, Kimball D.
  • Maier, Millie
  • Gimpelson, Kenneth
  • Momcilovic, Dejan
  • Miller, Keith F.
  • Hillman, Peter M.
  • Swartz, Jonathan S.
  • Sullivan, Carter Bart

Abrégé

The disclosed system modifies luminance of a display associated with a selective screen. The display provides a camera with an image having resolution higher than the resolution of the display by presenting multiple images while the selective screen enables light from different portions of the multiple images to reach the camera. The resulting luminance of the recorded image is lower than a combination of luminance values of the multiple images. The processor obtains a criterion indicating a property of the input image where image detail is unnecessary. The processor detects a region of the input image satisfying the criterion, and determines a region of the selective screen corresponding to the region of the input image. The processor increases the luminance of the display by disabling the region of the selective screen corresponding to the region of the input image.

Classes IPC  ?

  • G09G 3/20 - Dispositions ou circuits de commande présentant un intérêt uniquement pour l'affichage utilisant des moyens de visualisation autres que les tubes à rayons cathodiques pour la présentation d'un ensemble de plusieurs caractères, p. ex. d'une page, en composant l'ensemble par combinaison d'éléments individuels disposés en matrice
  • G06V 10/22 - Prétraitement de l’image par la sélection d’une région spécifique contenant ou référençant une formeLocalisation ou traitement de régions spécifiques visant à guider la détection ou la reconnaissance
  • G06T 7/10 - DécoupageDétection de bords

59.

SYNCSKETCH

      
Numéro d'application 220814000
Statut En instance
Date de dépôt 2022-09-07
Propriétaire Unity Technologies SF (USA)
Classes de Nice  ? 42 - Services scientifiques, technologiques et industriels, recherche et conception

Produits et services

(1) Software as a service (SaaS) services featuring software that gives multiple users the ability to import, review and make annotations on images, three-dimensional models, files, audio, video and audio-visual content; Providing temporary use of non-downloadable cloud-based software that allows synchronized real-time reviews of digital media content; Providing temporary use of non-downloadable cloud-based software that gives users the ability to provide input, edits, modifications, suggestions and comments on digital media content; Providing temporary use of non-downloadable cloud-based software for sharing, reviewing, manipulating and collaborating simultaneously on digital media content; Providing temporary use of non-downloadable cloud-based software for synchronous presentation; Providing temporary use of non-downloadable cloud-based software for editing digital media content, sharing and downloading feedback notes.

60.

Removing moving objects from a video scene captured by a moving camera

      
Numéro d'application 17409616
Numéro de brevet 11436708
Statut Délivré - en vigueur
Date de dépôt 2021-08-23
Date de la première publication 2022-09-06
Date d'octroi 2022-09-06
Propriétaire UNITY TECHNOLOGIES SF (USA)
Inventeur(s)
  • Glassner, Andrew
  • Hillman, Peter

Abrégé

Methods, an apparatus, and software media are provided for removing unwanted information such as moving or temporary foreground objects from a video sequence. The method performs, for each pixel, a statistical analysis to create a background data model whose color values can be used to detect and remove the unwanted information. The method assumes that for each pixel the background is present in a majority of the frames. The camera that records the video sequence may move relative to the geometry of the video scene. A pixel in a first frame is matched to a location in the geometry. The method determines color values of pixels, matched to the location in the geometry, in successive frames and clusters color values to determine a background color value range. It may use quadratic or better interpolation and extrapolation to determine background color values for unavailable frames.
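
Assuming the frames have already been aligned to compensate for the camera motion, the per-pixel statistical model could be sketched as a temporal median with an outlier mask, as below; the median choice and threshold are assumptions standing in for the analysis described in the abstract.

    import numpy as np

    def background_and_mask(aligned_frames, threshold=0.1):
        """aligned_frames: (F, H, W, 3) float array; returns (background, moving mask)."""
        frames = np.asarray(aligned_frames, dtype=np.float64)
        background = np.median(frames, axis=0)                   # (H, W, 3) model
        distance = np.linalg.norm(frames - background, axis=-1)  # (F, H, W) deviation
        moving = distance > threshold                            # True where foreground
        return background, moving

    rng = np.random.default_rng(3)
    clip = rng.uniform(0.4, 0.6, (9, 8, 8, 3))
    clip[4, 2:5, 2:5] = 1.0                                      # a transient object
    bg, masks = background_and_mask(clip)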

Classes IPC  ?

  • G06T 5/00 - Amélioration ou restauration d'image
  • G06T 5/50 - Amélioration ou restauration d'image utilisant plusieurs images, p. ex. moyenne ou soustraction
  • G06T 7/55 - Récupération de la profondeur ou de la forme à partir de plusieurs images
  • G06T 7/70 - Détermination de la position ou de l'orientation des objets ou des caméras
  • G06T 7/90 - Détermination de caractéristiques de couleur
  • H04N 5/91 - Traitement du signal de télévision pour l'enregistrement
  • H04N 9/67 - Circuits pour le traitement de signaux de couleur pour le matriçage

61.

Increasing dynamic range of a virtual production display

      
Numéro d'application 17538981
Numéro de brevet 11438520
Statut Délivré - en vigueur
Date de dépôt 2021-11-30
Date de la première publication 2022-09-06
Date d'octroi 2022-09-06
Propriétaire Unity Technologies SF (USA)
Inventeur(s)
  • Marks, Joseph W.
  • Fascione, Luca
  • Thurston, Iii, Kimball D.
  • Maier, Millie
  • Gimpelson, Kenneth
  • Momcilovic, Dejan
  • Miller, Keith F.
  • Hillman, Peter M.
  • Swartz, Jonathan S.
  • Petrovic, Lena

Abrégé

A processor obtains an input image containing both bright and dark regions. The processor obtains a threshold between a first pixel value and a second pixel value of the display. Upon detecting a region of the input image having an original pixel value above the threshold, the processor can create a data structure including a location of the region in the input image and an original pixel value of the region. The data structure occupies less memory than the input image. The display presents the input image including the region of the image having the original pixel value above the threshold. The processor sends the data structure to a camera, which records the presented image. The processor performing postprocessing obtains the data structure and the recorded image and increases dynamic range of the recorded image by modifying the recorded image based on the data structure.
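
As a hypothetical sketch of the compact data structure described above, the following Python records the locations and original values of above-threshold pixels and writes them back during postprocessing; the names and exact layout of the structure are assumptions, not the patented design.

    import numpy as np

    def build_highlight_structure(input_image, threshold):
        # input_image: float numpy array of shape (height, width, 3).
        # Keep only the coordinates and original values of bright pixels,
        # which is typically far smaller than the full image.
        mask = input_image.max(axis=-1) > threshold
        ys, xs = np.nonzero(mask)
        return {"coords": np.stack([ys, xs], axis=1),
                "values": input_image[ys, xs].copy()}

    def restore_dynamic_range(recorded_image, structure):
        # Postprocessing step: write the original bright values back into
        # the recorded image at the stored locations.
        out = recorded_image.astype(np.float64).copy()
        ys, xs = structure["coords"].T
        out[ys, xs] = structure["values"]
        return out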

Classes IPC  ?

  • H04N 5/20 - Circuits pour la commande de la courbe de réponse en amplitude
  • G06T 7/80 - Analyse des images capturées pour déterminer les paramètres de caméra intrinsèques ou extrinsèques, c.-à-d. étalonnage de caméra
  • G06V 10/22 - Prétraitement de l’image par la sélection d’une région spécifique contenant ou référençant une formeLocalisation ou traitement de régions spécifiques visant à guider la détection ou la reconnaissance
  • G06T 5/00 - Amélioration ou restauration d'image
  • H04N 5/91 - Traitement du signal de télévision pour l'enregistrement
  • H04N 5/235 - Circuits pour la compensation des variations de la luminance de l'objet
  • G06T 5/50 - Amélioration ou restauration d'image utilisant plusieurs images, p. ex. moyenne ou soustraction
  • G06T 7/70 - Détermination de la position ou de l'orientation des objets ou des caméras
  • H04N 5/222 - Circuits de studioDispositifs de studioÉquipements de studio
  • G06T 1/60 - Gestion de mémoire

62.

Replacing moving objects with background information in a video scene

      
Numéro d'application 17409619
Numéro de brevet 11430132
Statut Délivré - en vigueur
Date de dépôt 2021-08-23
Date de la première publication 2022-08-30
Date d'octroi 2022-08-30
Propriétaire UNITY TECHNOLOGIES SF (USA)
Inventeur(s)
  • Glassner, Andrew
  • Hillman, Peter

Abrégé

Methods, an apparatus, and software media are provided for removing unwanted information such as moving or temporary foreground objects from a video sequence. The method performs, for each pixel, a statistical analysis to create a background data model whose color values can be used to detect and remove the unwanted information. This includes determining a prevalent color cluster from among k clusters of color values for the pixel in successive frames. The method uses k-means clustering. To replace the unwanted information, the method iterates frames to find frames in which a pixel's color value is not included in the prevalent color cluster. In those frames, it replaces the pixel's color value with a value from the prevalent color cluster.
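
The prevalent-cluster replacement can be sketched, for a single pixel across the frames of the sequence, roughly as follows; this assumes scikit-learn's KMeans is available and is only an illustration of the idea, not the patented implementation.

    import numpy as np
    from sklearn.cluster import KMeans

    def replace_with_prevalent_color(pixel_colors, k=3):
        # pixel_colors: array of shape (num_frames, 3), the color of one
        # pixel across the frames of the sequence.
        colors = np.asarray(pixel_colors, dtype=np.float64)
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(colors)
        prevalent = np.bincount(km.labels_).argmax()          # largest cluster
        background_color = km.cluster_centers_[prevalent]
        replaced = colors.copy()
        # In frames whose color falls outside the prevalent cluster, replace
        # the value with a color drawn from the prevalent (background) cluster.
        replaced[km.labels_ != prevalent] = background_color
        return replaced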

Classes IPC  ?

  • G06T 7/194 - DécoupageDétection de bords impliquant une segmentation premier plan-arrière-plan
  • H04N 5/265 - Mélange
  • G06K 9/62 - Méthodes ou dispositions pour la reconnaissance utilisant des moyens électroniques
  • G06T 11/00 - Génération d'images bidimensionnelles [2D]
  • H04N 13/15 - Aspects des signaux d’images en rapport avec la couleur
  • H04N 13/211 - Générateurs de signaux d’images utilisant des caméras à images stéréoscopiques utilisant un seul capteur d’images 2D utilisant le multiplexage temporel
  • G06T 7/11 - Découpage basé sur les zones
  • H04N 13/00 - Systèmes vidéo stéréoscopiquesSystèmes vidéo multi-vuesLeurs détails

63.

Increasing resolution and luminance of a display

      
Numéro d'application 17537853
Numéro de brevet 11423853
Statut Délivré - en vigueur
Date de dépôt 2021-11-30
Date de la première publication 2022-08-23
Date d'octroi 2022-08-23
Propriétaire Unity Technologies SF (USA)
Inventeur(s)
  • Marks, Joseph W.
  • Fascione, Luca
  • Thurston, Iii, Kimball D.
  • Maier, Millie
  • Gimpelson, Kenneth
  • Momcilovic, Dejan
  • Miller, Keith F.
  • Hillman, Peter M.
  • Swartz, Jonathan S.
  • Sullivan, Carter Bart

Abrégé

The disclosed system increases resolution of a display. The display operates at a predetermined frequency by displaying a first image at a first time and a second image at a second time. A selective screen disposed between the display and the camera includes multiple light transmitting elements. A light transmitting element A redirects a first portion of light transmitted by the display. A light transmitting element B allows a second portion of light transmitted by the display to reach the camera. The selective screen increases the resolution of the display by operating at the predetermined frequency and causing a first portion of the first image to be shown at the first time, and a second portion of the second image to be shown at the second time. The camera forms an image from the first portion of the first image, and the second portion of the second image.

Classes IPC  ?

  • G09G 3/34 - Dispositions ou circuits de commande présentant un intérêt uniquement pour l'affichage utilisant des moyens de visualisation autres que les tubes à rayons cathodiques pour la présentation d'un ensemble de plusieurs caractères, p. ex. d'une page, en composant l'ensemble par combinaison d'éléments individuels disposés en matrice en commandant la lumière provenant d'une source indépendante

64.

Blending real and virtual focus in a virtual display environment

      
Numéro d'application 17547177
Numéro de brevet 11425283
Statut Délivré - en vigueur
Date de dépôt 2021-12-09
Date de la première publication 2022-08-23
Date d'octroi 2022-08-23
Propriétaire Unity Technologies SF (USA)
Inventeur(s)
  • Thurston, Iii, Kimball D.
  • Marks, Joseph W.
  • Fascione, Luca
  • Maier, Millicent
  • Gimpelson, Kenneth
  • Momcilovic, Dejan
  • Miller, Keith F.
  • Hillman, Peter M.

Abrégé

Methods and systems are presented for generating a virtual scene rendering of a captured scene based on a relative position of a camera and a virtual scene display in a stage environment, along with real and virtual lens effects. The details might include determining the camera position and virtual display position in the stage environment, and determining a depth value of a virtual scene element displayed on the virtual scene display. A desired focus model can then be determined from focus parameters of the camera, the depth value, and a desired lens effect, and an adjusted focus for the virtual scene element can be determined from the desired focus model. The adjusted focus can then be applied to the camera, the image of the virtual scene element on the virtual scene display, or pixels representing the virtual scene element in a composite image captured by the camera.
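
One common way to turn a depth value and camera focus parameters into a defocus amount is the thin-lens circle-of-confusion formula sketched below; the desired focus model in this abstract may differ, and the function name and units are assumptions.

    def circle_of_confusion(depth_m, focus_m, focal_len_mm, f_number):
        # Thin-lens circle-of-confusion diameter (in mm) for a scene point at
        # depth_m metres when the camera is focused at focus_m metres.
        # Assumes focus_m is larger than the focal length.
        f = focal_len_mm / 1000.0                 # focal length in metres
        aperture = f / f_number                   # aperture diameter in metres
        coc = aperture * (f / (focus_m - f)) * abs(depth_m - focus_m) / depth_m
        return coc * 1000.0                       # back to millimetres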

Classes IPC  ?

  • H04N 5/222 - Circuits de studioDispositifs de studioÉquipements de studio
  • G06T 7/50 - Récupération de la profondeur ou de la forme
  • G06T 3/40 - Changement d'échelle d’images complètes ou de parties d’image, p. ex. agrandissement ou rétrécissement
  • H04N 5/232 - Dispositifs pour la commande des caméras de télévision, p.ex. commande à distance
  • G06T 15/10 - Effets géométriques
  • G06T 5/00 - Amélioration ou restauration d'image

65.

Increasing dynamic range of a virtual production display

      
Numéro d'application 17538129
Numéro de brevet 11425313
Statut Délivré - en vigueur
Date de dépôt 2021-11-30
Date de la première publication 2022-08-23
Date d'octroi 2022-08-23
Propriétaire Unity Technologies SF (USA)
Inventeur(s)
  • Marks, Joseph W.
  • Fascione, Luca
  • Thurston, Iii, Kimball D.
  • Maier, Millie
  • Gimpelson, Kenneth
  • Momcilovic, Dejan
  • Miller, Keith F.
  • Hillman, Peter M.
  • Swartz, Jonathan S.

Abrégé

Disclosed here are various techniques to increase the dynamic range of an image recorded from a display. A processor performing preprocessing splits an input image containing both bright and dark regions into two images: image A containing the bright regions, and image B containing the dark regions. The display presents image A and image B in alternating fashion. The camera is synchronized with the display to record image A and image B independently. In postprocessing, a processor obtains the recorded images A and B and increases the pixel value of recorded image A. Finally, the processor increases the pixel value of the image recorded from the display by combining the brightened recording of image A with the recording of image B.
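
A minimal sketch of the split-and-recombine idea, assuming a simple per-channel threshold split and a known gain applied to the bright recording; all names are hypothetical and this is not the patented implementation.

    import numpy as np

    def split_for_display(input_image, threshold):
        # input_image: float numpy array (height, width, 3).
        # Image A keeps the bright regions, image B keeps the dark regions.
        bright_mask = input_image.max(axis=-1) > threshold
        image_a = np.where(bright_mask[..., None], input_image, 0.0)
        image_b = np.where(bright_mask[..., None], 0.0, input_image)
        return image_a, image_b

    def combine_recordings(recorded_a, recorded_b, gain_a):
        # Postprocessing: scale the bright recording back up and merge it
        # with the dark recording to extend the dynamic range.
        return recorded_a * gain_a + recorded_b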

Classes IPC  ?

  • H04N 5/228 - Caméras de télévision - Détails de circuits pour tubes analyseurs
  • H04N 5/235 - Circuits pour la compensation des variations de la luminance de l'objet
  • G06T 7/10 - DécoupageDétection de bords
  • G06T 5/00 - Amélioration ou restauration d'image
  • G06T 5/50 - Amélioration ou restauration d'image utilisant plusieurs images, p. ex. moyenne ou soustraction
  • G06V 10/60 - Extraction de caractéristiques d’images ou de vidéos relative aux propriétés luminescentes, p. ex. utilisant un modèle de réflectance ou d’éclairage
  • H04N 5/04 - Synchronisation
  • H04N 5/91 - Traitement du signal de télévision pour l'enregistrement
  • H04N 5/222 - Circuits de studioDispositifs de studioÉquipements de studio

66.

Increasing dynamic range of a virtual production display

      
Numéro d'application 17538234
Numéro de brevet 11418723
Statut Délivré - en vigueur
Date de dépôt 2021-11-30
Date de la première publication 2022-08-16
Date d'octroi 2022-08-16
Propriétaire Unity Technologies SF (USA)
Inventeur(s)
  • Marks, Joseph W.
  • Fascione, Luca
  • Thurston, Iii, Kimball D.
  • Maier, Millie
  • Gimpelson, Kenneth
  • Momcilovic, Dejan
  • Miller, Keith F.
  • Hillman, Peter M.
  • Swartz, Jonathan S.

Abrégé

Disclosed here are various techniques to increase the dynamic range of an image recorded from a display. A processor performing preprocessing splits an input image containing both bright and dark regions into two images: image A containing the bright regions, and image B containing the dark regions. The display presents image A and image B in alternating fashion. The camera is synchronized with the display to record image A and image B independently. In postprocessing, a processor obtains the recorded images A and B and increases the pixel value of recorded image A. Finally, the processor increases the pixel value of the image recorded from the display by combining the brightened recording of image A with the recording of image B.

Classes IPC  ?

  • G06T 5/00 - Amélioration ou restauration d'image
  • G06T 7/174 - DécoupageDétection de bords impliquant l'utilisation de plusieurs images
  • H04N 5/235 - Circuits pour la compensation des variations de la luminance de l'objet
  • G09G 5/10 - Circuits d'intensité
  • H04N 5/232 - Dispositifs pour la commande des caméras de télévision, p.ex. commande à distance

67.

Increasing dynamic range of a virtual production display

      
Numéro d'application 17538709
Numéro de brevet 11418724
Statut Délivré - en vigueur
Date de dépôt 2021-11-30
Date de la première publication 2022-08-16
Date d'octroi 2022-08-16
Propriétaire Unity Technologies SF (USA)
Inventeur(s)
  • Marks, Joseph W.
  • Fascione, Luca
  • Thurston, Iii, Kimball D.
  • Maier, Millie
  • Gimpelson, Kenneth
  • Momcilovic, Dejan
  • Miller, Keith F.
  • Hillman, Peter M.
  • Swartz, Jonathan S.

Abrégé

The processor obtains a first pixel value and a second pixel value of the display. The processor determines a desired pixel value range that exceeds the second pixel value of the display. The processor obtains a threshold between the first pixel value of the display and the second pixel value of the display. The processor obtains a function mapping the desired pixel value range to a range between the threshold and the second pixel value. The processor applies the function to an input image prior to displaying the input image on the display. The display presents the image. Upon recording the presented image, the processor determines a region within the recorded image having a pixel value between the threshold and the second pixel value. The processor increases the dynamic range of the recorded image by applying an inverse of the function to the pixel value of the region.
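
As an illustration only, a linear choice for the mapping function and its inverse could look like the following Python; the actual function is unspecified in the abstract, and all names are assumptions.

    import numpy as np

    def forward_map(image, threshold, display_max, desired_max):
        # Compress values in [threshold, desired_max] into [threshold, display_max]
        # before the image is shown on the display.
        out = image.astype(np.float64).copy()
        over = out > threshold
        scale = (display_max - threshold) / (desired_max - threshold)
        out[over] = threshold + (out[over] - threshold) * scale
        return out

    def inverse_map(recorded, threshold, display_max, desired_max):
        # Postprocessing inverse, applied to recorded pixels that fall between
        # the threshold and the display's maximum pixel value.
        out = recorded.astype(np.float64).copy()
        over = (out > threshold) & (out <= display_max)
        scale = (desired_max - threshold) / (display_max - threshold)
        out[over] = threshold + (out[over] - threshold) * scale
        return out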

Classes IPC  ?

  • H04N 5/235 - Circuits pour la compensation des variations de la luminance de l'objet
  • G09G 5/10 - Circuits d'intensité
  • H04N 5/232 - Dispositifs pour la commande des caméras de télévision, p.ex. commande à distance

68.

Increasing dynamic range of a virtual production display

      
Numéro d'application 17538738
Numéro de brevet 11418725
Statut Délivré - en vigueur
Date de dépôt 2021-11-30
Date de la première publication 2022-08-16
Date d'octroi 2022-08-16
Propriétaire Unity Technologies SF (USA)
Inventeur(s)
  • Marks, Joseph W.
  • Fascione, Luca
  • Thurston, Iii, Kimball D.
  • Maier, Millie
  • Gimpelson, Kenneth
  • Momcilovic, Dejan
  • Miller, Keith F.
  • Hillman, Peter M.
  • Swartz, Jonathan S.

Abrégé

The processor obtains a first pixel value and a second pixel value of the display. The processor determines a desired pixel value range that exceeds the second pixel value of the display. The processor obtains a threshold between the first pixel value of the display and the second pixel value of the display. The processor obtains a function mapping the desired pixel value range to a range between the threshold and the second pixel value. The processor applies the function to an input image prior to displaying the input image on the display. The display presents the image. Upon recording the presented image, the processor determines a region within the recorded image having a pixel value between the threshold and the second pixel value. The processor increases the dynamic range of the recorded image by applying an inverse of the function to the pixel value of the region.

Classes IPC  ?

  • H04N 5/235 - Circuits pour la compensation des variations de la luminance de l'objet
  • G06V 10/22 - Prétraitement de l’image par la sélection d’une région spécifique contenant ou référençant une formeLocalisation ou traitement de régions spécifiques visant à guider la détection ou la reconnaissance
  • G06T 1/60 - Gestion de mémoire
  • H04N 5/222 - Circuits de studioDispositifs de studioÉquipements de studio
  • H04N 5/92 - Transformation du signal de télévision pour l'enregistrement, p. ex. modulation, changement de fréquenceTransformation inverse pour le surjeu
  • G06T 5/00 - Amélioration ou restauration d'image

69.

Dynamic range of a virtual production display

      
Numéro d'application 17539007
Numéro de brevet 11412155
Statut Délivré - en vigueur
Date de dépôt 2021-11-30
Date de la première publication 2022-08-09
Date d'octroi 2022-08-09
Propriétaire Unity Technologies SF (USA)
Inventeur(s)
  • Marks, Joseph W.
  • Fascione, Luca
  • Thurston, Iii, Kimball D.
  • Maier, Millie
  • Gimpelson, Kenneth
  • Momcilovic, Dejan
  • Miller, Keith F.
  • Hillman, Peter M.
  • Swartz, Jonathan S.
  • Petrovic, Lena

Abrégé

A processor obtains an input image containing both bright and dark regions. The processor obtains a threshold between a first pixel value and a second pixel value of the display. Upon detecting a region of the input image having an original pixel value above the threshold, the processor can create a data structure including a location of the region in the input image and an original pixel value of the region. The data structure occupies less memory than the input image. The display presents the input image including the region of the image having the original pixel value above the threshold. The processor sends the data structure to a camera, which records the presented image. The processor performing postprocessing obtains the data structure and the recorded image and increases dynamic range of the recorded image by modifying the recorded image based on the data structure.

Classes IPC  ?

  • G06T 5/00 - Amélioration ou restauration d'image
  • H04N 5/235 - Circuits pour la compensation des variations de la luminance de l'objet
  • G06V 10/22 - Prétraitement de l’image par la sélection d’une région spécifique contenant ou référençant une formeLocalisation ou traitement de régions spécifiques visant à guider la détection ou la reconnaissance
  • H04N 5/222 - Circuits de studioDispositifs de studioÉquipements de studio
  • H04N 7/18 - Systèmes de télévision en circuit fermé [CCTV], c.-à-d. systèmes dans lesquels le signal vidéo n'est pas diffusé
  • H04N 5/04 - Synchronisation
  • G06T 1/60 - Gestion de mémoire

70.

Increasing dynamic range of a virtual production display

      
Numéro d'application 17538832
Numéro de brevet 11410281
Statut Délivré - en vigueur
Date de dépôt 2021-11-30
Date de la première publication 2022-08-09
Date d'octroi 2022-08-09
Propriétaire Unity Technologies SF (USA)
Inventeur(s)
  • Marks, Joseph W.
  • Fascione, Luca
  • Thurston, Iii, Kimball D.
  • Maier, Millie
  • Gimpelson, Kenneth
  • Momcilovic, Dejan
  • Miller, Keith F.
  • Hillman, Peter M.
  • Swartz, Jonathan S.

Abrégé

A processor performing preprocessing obtains an input image containing both bright and dark regions. The processor obtains a threshold between a first pixel value of the virtual production display and a second pixel value of the virtual production display. Upon detecting a region of the input image having an original pixel value above the threshold, the processor modifies the region according to predetermined steps producing a pattern unlikely to occur within the input image, where the pattern corresponds to a difference between the original pixel value and the threshold. The processor can replace the region of the input image with the pattern to obtain a modified image. The virtual production display can present the modified image. A processor performing postprocessing detects the pattern within the modified image displayed on the virtual production display. The processor calculates the original pixel value of the region by reversing the predetermined steps. The processor replaces the pattern in the modified image with the original pixel value.

Classes IPC  ?

  • G06T 5/00 - Amélioration ou restauration d'image
  • G06V 10/22 - Prétraitement de l’image par la sélection d’une région spécifique contenant ou référençant une formeLocalisation ou traitement de régions spécifiques visant à guider la détection ou la reconnaissance
  • G06T 1/60 - Gestion de mémoire
  • G06V 10/70 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique

71.

Systems and methods for computer animation of an artificial character using facial poses from a live actor

      
Numéro d'application 17403727
Numéro de brevet 11410370
Statut Délivré - en vigueur
Date de dépôt 2021-08-16
Date de la première publication 2022-08-09
Date d'octroi 2022-08-09
Propriétaire UNITY TECHNOLOGIES SF (USA)
Inventeur(s)
  • Ma, Wan-Duo Kurt
  • Ghifary, Muhammad
  • Lewis, John P.

Abrégé

Embodiments described herein provide an approach to animating the character face of an artificial character based on facial poses performed by a live actor. Geometric characteristics of the facial surface corresponding to each facial pose performed by the live actor may be learned by a machine learning system, which in turn builds a mesh of a facial rig with an array of controllable elements applicable to the character face of the artificial character.

Classes IPC  ?

  • G06T 13/40 - Animation tridimensionnelle [3D] de personnages, p. ex. d’êtres humains, d’animaux ou d’êtres virtuels
  • G06V 40/16 - Visages humains, p. ex. parties du visage, croquis ou expressions

72.

Increasing dynamic range of a virtual production display

      
Numéro d'application 17539035
Numéro de brevet 11412156
Statut Délivré - en vigueur
Date de dépôt 2021-11-30
Date de la première publication 2022-08-09
Date d'octroi 2022-08-09
Propriétaire Unity Technologies SF (USA)
Inventeur(s)
  • Marks, Joseph W.
  • Fascione, Luca
  • Thurston, Iii, Kimball D.
  • Maier, Millie
  • Gimpelson, Kenneth
  • Momcilovic, Dejan
  • Miller, Keith F.
  • Hillman, Peter M.
  • Swartz, Jonathan S.
  • Petrovic, Lena

Abrégé

A processor obtains an input image containing both bright and dark regions. The processor obtains a threshold between a first pixel value and a second pixel value of the display. Upon detecting a region of the input image having an original pixel value above the threshold, the processor can create a data structure including a location of the region in the input image and an original pixel value of the region. The data structure occupies less memory than the input image. The display presents the input image including the region of the image having the original pixel value above the threshold. The processor sends the data structure to a camera, which records the presented image. The processor performing postprocessing obtains the data structure and the recorded image and increases dynamic range of the recorded image by modifying the recorded image based on the data structure.

Classes IPC  ?

  • H04N 5/235 - Circuits pour la compensation des variations de la luminance de l'objet
  • G09G 5/10 - Circuits d'intensité
  • H04N 5/232 - Dispositifs pour la commande des caméras de télévision, p.ex. commande à distance

73.

Method for computation of local densities for virtual fibers

      
Numéro d'application 17716610
Numéro de brevet 11803998
Statut Délivré - en vigueur
Date de dépôt 2022-04-08
Date de la première publication 2022-07-21
Date d'octroi 2023-10-31
Propriétaire Unity Technologies SF (USA)
Inventeur(s) Gourmel, Olivier

Abrégé

An image generator generates images of a set of virtual fibers and effects thereon by processing representations of the set of fibers and computing a representation of a virtual surface for a fiber clump in the set of virtual fibers from an artist parameter representing a desired artist effect, computing correlations of the vertices from a set of vertices based on associations of the vertices corresponding to the artist parameter, computing a set of relevant vertices using the correlations of the vertices, computing orthogonal gradients to produce a plurality of gradients using a selected vertex and the set of relevant vertices for the fiber clump, and computing the virtual surface of the fiber clump from the plurality of gradients.

Classes IPC  ?

  • G06T 15/06 - Lancer de rayon
  • G06T 13/40 - Animation tridimensionnelle [3D] de personnages, p. ex. d’êtres humains, d’animaux ou d’êtres virtuels
  • G06T 17/20 - Description filaire, p. ex. polygonalisation ou tessellation
  • G06T 17/00 - Modélisation tridimensionnelle [3D] pour infographie
  • G06T 15/08 - Rendu de volume
  • G06T 15/04 - Mappage de texture

74.

Smoothly changing a focus of a camera between multiple target objects

      
Numéro d'application 17525766
Numéro de brevet 11367210
Statut Délivré - en vigueur
Date de dépôt 2021-11-12
Date de la première publication 2022-06-21
Date d'octroi 2022-06-21
Propriétaire Unity Technologies SF (USA)
Inventeur(s)
  • Hayes, Eric S.
  • Hurlburt, Robert S.
  • Frew, Corey B.

Abrégé

Disclosed herein is a system to smoothly change the focus of a camera between multiple targets. The system can obtain an indication of a target, an indication of a manner of focus transition between a first target and a second target, and camera settings. The system can determine a point associated with the second target, where the point has a property that focusing the camera on the point places the second target in focus, and the point is closer to the current focus point of the camera than a substantial portion of other points having the property. The system can obtain a nonlinear function indicating a second manner of focus transition between the first target and the second target. The system can change the focus of the camera between the first target and the second target by changing the focus of the camera from the current focus point to the determined point based on the nonlinear function.
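
A minimal sketch of the two pieces described above, assuming the second target's acceptable focus range is known and using smoothstep as one possible nonlinear manner of transition; the names are hypothetical and this is not the patented implementation.

    def nearest_in_focus_distance(current_focus, target_near, target_far):
        # Pick the focus distance that keeps the second target in focus and
        # is closest to the camera's current focus point (clamp into range).
        return min(max(current_focus, target_near), target_far)

    def focus_transition(start_dist, end_dist, t):
        # Ease the focus distance from start_dist to end_dist; t runs 0..1
        # over the transition, and smoothstep is one possible nonlinear curve.
        s = max(0.0, min(1.0, t))
        ease = s * s * (3.0 - 2.0 * s)
        return start_dist + (end_dist - start_dist) * ease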

Classes IPC  ?

  • G06T 7/571 - Récupération de la profondeur ou de la forme à partir de plusieurs images à partir de mises au point
  • H04N 5/232 - Dispositifs pour la commande des caméras de télévision, p.ex. commande à distance
  • G06T 15/20 - Calcul de perspectives
  • H04N 5/222 - Circuits de studioDispositifs de studioÉquipements de studio

75.

SYNCSKETCH

      
Numéro de série 97451663
Statut Enregistrée
Date de dépôt 2022-06-09
Date d'enregistrement 2023-03-21
Propriétaire Unity Technologies SF
Classes de Nice  ? 42 - Services scientifiques, technologiques et industriels, recherche et conception

Produits et services

Software as a service (SaaS) services featuring software that gives multiple users the ability to import, review and make annotations on images, three-dimensional models, files, audio, video and audio-visual content; Providing temporary use of non-downloadable cloud-based software that allows synchronized real-time reviews of digital media content; Providing temporary use of non-downloadable cloud-based software that gives users the ability to provide input, edits, modifications, suggestions and comments on digital media content; Providing temporary use of non-downloadable cloud-based software for sharing, reviewing, manipulating and collaborating simultaneously on digital media content; Providing temporary use of non-downloadable cloud-based software for synchronous presentation; Providing temporary use of non-downloadable cloud-based software for editing digital media content, sharing and downloading feedback notes

76.

System and method for progressive enhancement of in-app augmented reality advertising

      
Numéro d'application 16904532
Numéro de brevet 11344812
Statut Délivré - en vigueur
Date de dépôt 2020-06-17
Date de la première publication 2022-05-31
Date d'octroi 2022-05-31
Propriétaire Unity Technologies SF (USA)
Inventeur(s)
  • Parisi, Anthony Salvatore
  • Stagg, Katrina Rose

Abrégé

A method of performing operations for executing a progressive interactive augmented reality (AR) advertisement display during execution of a running application is disclosed. Based on receiving a first trigger event from the running application, a first aspect of the progressive AR advertisement is executed within a display window while the application is executing. The first aspect includes an interactive advertisement. Based on receiving a second trigger event from within the first aspect of the progressive AR advertisement, a second aspect of the progressive AR advertisement is executed in the display window while the application is executing. The second aspect includes an interactive AR advertisement.

Classes IPC  ?

  • A63F 13/00 - Jeux vidéo, c.-à-d. jeux utilisant un affichage à plusieurs dimensions généré électroniquement
  • A63F 13/61 - Création ou modification du contenu du jeu avant ou pendant l’exécution du programme de jeu, p. ex. au moyen d’outils spécialement adaptés au développement du jeu ou d’un éditeur de niveau intégré au jeu utilisant des informations publicitaires
  • A63F 13/533 - Commande des signaux de sortie en fonction de la progression du jeu incluant des informations visuelles supplémentaires fournies à la scène de jeu, p. ex. en surimpression pour simuler un affichage tête haute [HUD] ou pour afficher une visée laser dans un jeu de tir pour inciter une interaction avec le joueur, p. ex. en affichant le menu d’un jeu
  • A63F 13/213 - Dispositions d'entrée pour les dispositifs de jeu vidéo caractérisées par leurs capteurs, leurs finalités ou leurs types comprenant des moyens de photo-détection, p. ex. des caméras, des photodiodes ou des cellules infrarouges
  • G06Q 30/02 - MarketingEstimation ou détermination des prixCollecte de fonds

77.

Method for controlling digital feather generations through a user interface in a computer modeling system

      
Numéro d'application 17459658
Numéro de brevet 11348286
Statut Délivré - en vigueur
Date de dépôt 2021-08-27
Date de la première publication 2022-05-31
Date d'octroi 2022-05-31
Propriétaire UNITY TECHNOLOGIES SF (USA)
Inventeur(s)
  • Sprenger, Christoph
  • Baas, Matthias

Abrégé

A realistic feather generation may be represented via a user interface in a modeling system. To perform the feather generation, a curve defining a spine of a feather is received from input to the user interface of the modeling system. The feather is created in the user interface based at least on the curve, where the feather includes barbs extended away from the spine of the feather in one or more directions. The feather is deformed based on one or more deformation parameters. A feather contour of the barbs along at least one of the one or more directions is generated based on a contour parameter defining the feather contour. A barb density for a number of the barbs along the spine of the feather is established based on a barb density parameter. Thereafter, a data structure representing the feather is generated.

Classes IPC  ?

  • G06T 11/00 - Génération d'images bidimensionnelles [2D]

78.

Method for scattering points in a uniform arbitrary distribution across a target mesh for a computer animated creature

      
Numéro d'application 17127105
Numéro de brevet 11361494
Statut Délivré - en vigueur
Date de dépôt 2020-12-18
Date de la première publication 2022-04-28
Date d'octroi 2022-06-14
Propriétaire UNITY TECHNOLOGIES SF (USA)
Inventeur(s) Hunter, Philip

Abrégé

A programmatic arbitrary distribution of items in a modeling system may be provided. To perform the distribution, a surface may be received, and a point count of application points associated with locations on the surface may be determined. A density map may be applied over the surface to assign a density to portions of the surface for the point count. Application points are then assigned to locations on the surface according to the density map and a scattering function of the point count, where the scattering function is based on one or more repulsion forces between neighboring points. The one or more repulsion forces are treated as pushing each of the neighboring points apart. Thereafter, the surface may be provided having the application points scattered across the surface based on the one or more repulsion forces.
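
As a rough illustration of the repulsion-based scattering (not the patented algorithm), the following Python relaxes points on a parameterised patch by pushing near neighbours apart; honoring the density map and re-projecting points back onto the mesh are omitted, and all names are assumptions.

    import numpy as np

    def relax_points(points, radius=0.1, step=0.05, iterations=50):
        # points: (n, 2) array of locations on a parameterised surface patch.
        pts = np.asarray(points, dtype=np.float64).copy()
        for _ in range(iterations):
            delta = pts[:, None, :] - pts[None, :, :]          # pairwise offsets
            dist = np.linalg.norm(delta, axis=-1) + 1e-9
            near = (dist < radius) & (dist > 0)
            # Repulsion falls off linearly to zero at `radius`.
            force = np.where(near[..., None],
                             delta / dist[..., None] * (radius - dist)[..., None],
                             0.0)
            pts += step * force.sum(axis=1)
        return pts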

Classes IPC  ?

  • G06T 15/04 - Mappage de texture
  • G06T 17/20 - Description filaire, p. ex. polygonalisation ou tessellation
  • G06T 11/00 - Génération d'images bidimensionnelles [2D]
  • G06T 15/00 - Rendu d'images tridimensionnelles [3D]

79.

Method for computation of local densities for virtual fibers

      
Numéro d'application 17098221
Numéro de brevet 11315299
Statut Délivré - en vigueur
Date de dépôt 2020-11-13
Date de la première publication 2022-04-26
Date d'octroi 2022-04-26
Propriétaire UNITY TECHNOLOGIES SF (USA)
Inventeur(s) Gourmel, Olivier

Abrégé

An image generator generates images of a set of virtual fibers and effects thereon by processing representations of the set of fibers and computing a representation of a virtual surface for a fiber clump in the set of virtual fibers from an artist parameter representing a desired artist effect, computing correlations of the vertices from a set of vertices based on associations of the vertices corresponding to the artist parameter, computing a set of relevant vertices using the correlations of the vertices, computing orthogonal gradients to produce a plurality of gradients using a selected vertex and the set of relevant vertices for the fiber clump, and computing the virtual surface of the fiber clump from the plurality of gradients.

Classes IPC  ?

  • G06T 15/06 - Lancer de rayon
  • G06T 13/40 - Animation tridimensionnelle [3D] de personnages, p. ex. d’êtres humains, d’animaux ou d’êtres virtuels
  • G06T 17/20 - Description filaire, p. ex. polygonalisation ou tessellation
  • G06T 17/00 - Modélisation tridimensionnelle [3D] pour infographie
  • G06T 15/08 - Rendu de volume
  • G06T 15/04 - Mappage de texture

80.

Increasing the speed of computation of a volumetric scattering render technique

      
Numéro d'application 17482310
Numéro de brevet 11315305
Statut Délivré - en vigueur
Date de dépôt 2021-09-22
Date de la première publication 2022-04-26
Date d'octroi 2022-04-26
Propriétaire Unity Technologies SF (USA)
Inventeur(s)
  • Weidlich, Andrea
  • Droske, Marc

Abrégé

Presented here is a system and method to increase the speed of computation of a volumetric scattering render technique. The volumetric scattering can include path tracing which simulates interactions between a virtual ray of light and a volume. The interaction can include reflection of the virtual ray of light off a particle within the volume. The system can obtain a threshold number of interactions between a virtual ray of light and a three-dimensional object through which the virtual ray of light is traveling. As the system performs the simulation, the system can compare a number of the interactions to the threshold number. Upon determining that the number of interactions equals or exceeds the threshold number, the system can terminate the simulation and approximate interactions between the virtual ray of light and the volume using a second rendering technique that is computationally less expensive than simulating the interactions.
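
The control flow of terminating the exact simulation at a threshold interaction count and falling back to a cheaper approximation can be sketched as the toy single-path loop below; the constants and the fallback term are placeholders, not the patented second rendering technique.

    import random

    def scatter_radiance(max_interactions, extinction=0.5, albedo=0.8, ambient=0.05):
        # Toy single-path estimate through a homogeneous volume: count the
        # scattering interactions and, once max_interactions is reached, stop
        # the exact simulation and add a cheap constant-ambient approximation
        # for the remaining bounces.
        throughput = 1.0
        radiance = 0.0
        interactions = 0
        while True:
            if random.random() > extinction:          # ray exits the volume
                radiance += throughput * 1.0           # picks up background light
                return radiance
            interactions += 1
            if interactions >= max_interactions:
                radiance += throughput * ambient       # cheap approximation
                return radiance
            throughput *= albedo                       # energy lost per bounce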

Classes IPC  ?

81.

Method for operating a character rig in an image-generation system using constraints on reference nodes

      
Numéro d'application 17239381
Numéro de brevet 11450054
Statut Délivré - en vigueur
Date de dépôt 2021-04-23
Date de la première publication 2022-04-07
Date d'octroi 2022-09-20
Propriétaire UNITY TECHNOLOGIES SF (USA)
Inventeur(s)
  • Choi, Byung Kuk
  • Steinmetz, Albrecht

Abrégé

A character rig may be representable as a data structure specifying a plurality of articulated character parts, an element tree specifying relations between character parts, and a set of constraints on the character parts. After receiving rotoscoping movement input data corresponding to attempted alignments of movements of at least some of the character parts with elements moving in a captured live action scene, a rotoscoping constraint may be received. The rotoscoping constraint may include at least a first constraint on the character rig other than a second constraint specified by the data structure of the character rig. Thereafter, rig movement inputs for a second set of character parts distinct from the first set of character parts may be accepted, and the character rig may be moved according to the rig movement inputs while constrained by the rotoscoping constraint.

Classes IPC  ?

  • G06T 13/40 - Animation tridimensionnelle [3D] de personnages, p. ex. d’êtres humains, d’animaux ou d’êtres virtuels

82.

Method and user interface for generating tangent vector fields usable for generating computer generated imagery

      
Numéro d'application 17105390
Numéro de brevet 11328466
Statut Délivré - en vigueur
Date de dépôt 2020-11-25
Date de la première publication 2022-04-07
Date d'octroi 2022-05-10
Propriétaire UNITY TECHNOLOGIES SF (USA)
Inventeur(s) Atkinson, Kevin

Abrégé

A representation of a surface of one or more objects positioned in a virtual space is obtained in a computer animation system. Thereafter, a guide curve specification of a guide curve in the virtual space relative to the surface is received. Thereafter, the computer animation system computes a first set of tangent vector values for differentiable locations along the guide curve and computes a second set of tangent vector values for nondifferentiable locations along the guide curve. Using the first set and second set, the computer animation system computes a third set of tangent vector values for locations on the surface other than locations along the guide curve and computes a tangent vector field over the surface from at least the first set of tangent vector values, the second set of tangent vector values, and the third set of tangent vector values.

Classes IPC  ?

  • G06T 13/40 - Animation tridimensionnelle [3D] de personnages, p. ex. d’êtres humains, d’animaux ou d’êtres virtuels
  • G06T 17/20 - Description filaire, p. ex. polygonalisation ou tessellation

83.

Facial model mapping with a neural network trained on varying levels of detail of facial scans

      
Numéro d'application 17197208
Numéro de brevet 11354774
Statut Délivré - en vigueur
Date de dépôt 2021-03-10
Date de la première publication 2022-04-07
Date d'octroi 2022-06-07
Propriétaire Unity Technologies SF (USA)
Inventeur(s) Choi, Byung Kuk

Abrégé

In an image processing system, a scan of an actor is converted to a high-resolution two-dimensional map, which is converted to a low-resolution map and to a facial rig model. Manipulations of the facial rig create a modified facial rig. A new low-resolution two-dimensional map can be obtained from the modified facial rig, and a neural network can be used to generate a new high-resolution two-dimensional map that can be used to generate a mesh of the scan as modified by the manipulations of the facial rig.

Classes IPC  ?

  • G06T 3/40 - Changement d'échelle d’images complètes ou de parties d’image, p. ex. agrandissement ou rétrécissement
  • G06T 17/20 - Description filaire, p. ex. polygonalisation ou tessellation
  • G06T 13/40 - Animation tridimensionnelle [3D] de personnages, p. ex. d’êtres humains, d’animaux ou d’êtres virtuels
  • G06T 7/73 - Détermination de la position ou de l'orientation des objets ou des caméras utilisant des procédés basés sur les caractéristiques

84.

Method for editing computer-generated images to maintain alignment between objects specified in frame space and objects specified in scene space

      
Numéro d'application 17498668
Numéro de brevet 11393155
Statut Délivré - en vigueur
Date de dépôt 2021-10-11
Date de la première publication 2022-04-07
Date d'octroi 2022-07-19
Propriétaire Unity Technologies SF (USA)
Inventeur(s) Hillman, Peter M.

Abrégé

In an image processing system, an image insertion is to be included onto, or relative to, a first and second frame, each depicting images of a set of objects of a geometric model. A point association is determined for a depicted object that is depicted in both the first frame and the second frame, representing reference coordinates in a virtual scene space of a first location on the depicted object independent of at least one position change and a mapping of a first image location in the first image to where the first location appears in the first image. A corresponding location in the second image is determined based on where the first location on the depicted object appears according to the reference coordinate in the virtual scene space and a second image location on the second image where the first location appears in the second image.

Classes IPC  ?

  • G06T 15/04 - Mappage de texture
  • G06T 7/73 - Détermination de la position ou de l'orientation des objets ou des caméras utilisant des procédés basés sur les caractéristiques
  • G06T 7/246 - Analyse du mouvement utilisant des procédés basés sur les caractéristiques, p. ex. le suivi des coins ou des segments
  • G06T 15/20 - Calcul de perspectives

85.

Computer graphics system user interface for obtaining artist inputs for objects specified in frame space and objects specified in scene space

      
Numéro d'application 17394020
Numéro de brevet 11417048
Statut Délivré - en vigueur
Date de dépôt 2021-08-04
Date de la première publication 2022-04-07
Date d'octroi 2022-08-16
Propriétaire Unity Technologies SF (USA)
Inventeur(s) Hillman, Peter M.

Abrégé

In an image processing system, an artist user interface provides for user input of specifications for an inserted object, specified in frame space. The inserted objects can be specified in frame space but can be aligned with object points in a virtual scene space. For other frames, where the object points move in the frame space, the object movements are applied to the inserted object in the frame space. The alignment can be set manually by the user or determined programmatically.

Classes IPC  ?

  • G06T 15/04 - Mappage de texture
  • G06T 7/73 - Détermination de la position ou de l'orientation des objets ou des caméras utilisant des procédés basés sur les caractéristiques
  • G06T 15/20 - Calcul de perspectives
  • G06T 7/246 - Analyse du mouvement utilisant des procédés basés sur les caractéristiques, p. ex. le suivi des coins ou des segments

86.

Image modification of motion captured scene for reconstruction of obscured views using uncoordinated cameras

      
Numéro d'application 17546908
Numéro de brevet 11689815
Statut Délivré - en vigueur
Date de dépôt 2021-12-09
Date de la première publication 2022-03-31
Date d'octroi 2023-06-27
Propriétaire UNITY TECHNOLOGIES SF (USA)
Inventeur(s)
  • Thurston, Iii, Kimball D.
  • Hillman, Peter M.

Abrégé

An imagery processing system determines alternative pixel color values for pixels of captured imagery where the alternative pixel color values are obtained from alternative sources. A main imagery capture device, such as a camera, captures main imagery such as still images and/or video sequences, of a live action scene. Alternative devices capture imagery of the live action scene, in some spectra and form, and that alternative imagery is processed to provide user-selectable alternatives for pixel ranges from the main imagery.

Classes IPC  ?

  • H04N 23/80 - Chaînes de traitement de la caméraLeurs composants
  • G06T 7/11 - Découpage basé sur les zones
  • H04N 9/64 - Circuits pour le traitement de signaux de couleur

87.

Systems and methods for generating a skull surface for computer animation

      
Numéro d'application 17347437
Numéro de brevet 11443473
Statut Délivré - en vigueur
Date de dépôt 2021-06-14
Date de la première publication 2022-03-24
Date d'octroi 2022-09-13
Propriétaire Unity Technologies SF (USA)
Inventeur(s) Choi, Byung Kuk

Abrégé

An animation system wherein a machine learning model is adopted to generate animated facial actions based on parameters obtained from a live actor. Specifically, the anatomical structure such as a facial muscle topology and a skull surface that are specific to the live actor may be used. A skull surface that is specific to a live actor may be generated based on facial scans of the live actor and generic tissue depth data. For example, the facial scans of the live actor may provide a skin surface topology of the live actor, based on which the skull surface underneath the skin surface can be derived by “offsetting” the skin surface with corresponding soft tissue depth at different sampled points on the skin surface.
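
A minimal sketch of the "offsetting" step, assuming per-vertex outward unit normals and per-vertex soft-tissue depths are already available; the names are hypothetical and this is not the patented pipeline.

    import numpy as np

    def skull_from_skin(skin_vertices, skin_normals, tissue_depth):
        # skin_vertices, skin_normals: (n, 3) arrays from the facial scan,
        # with outward-pointing unit normals. tissue_depth: per-vertex
        # soft-tissue depth sampled from generic tissue depth data.
        verts = np.asarray(skin_vertices, dtype=np.float64)
        normals = np.asarray(skin_normals, dtype=np.float64)
        depth = np.asarray(tissue_depth, dtype=np.float64)[:, None]
        # Move each skin point inward by its soft-tissue depth to approximate
        # the underlying skull surface.
        return verts - normals * depth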

Classes IPC  ?

  • G06T 13/40 - Animation tridimensionnelle [3D] de personnages, p. ex. d’êtres humains, d’animaux ou d’êtres virtuels
  • G06T 17/00 - Modélisation tridimensionnelle [3D] pour infographie
  • G06F 16/53 - Requêtes
  • G06T 7/50 - Récupération de la profondeur ou de la forme
  • G06N 3/08 - Méthodes d'apprentissage
  • G06N 20/00 - Apprentissage automatique
  • G06T 15/20 - Calcul de perspectives
  • G06T 17/20 - Description filaire, p. ex. polygonalisation ou tessellation
  • G06V 40/16 - Visages humains, p. ex. parties du visage, croquis ou expressions

88.

Systems and methods for building a pseudo-muscle topology of a live actor in computer animation

      
Numéro d'application 17079058
Numéro de brevet 11403801
Statut Délivré - en vigueur
Date de dépôt 2020-10-23
Date de la première publication 2022-03-24
Date d'octroi 2022-08-02
Propriétaire UNITY TECHNOLOGIES SF (USA)
Inventeur(s) Choi, Byung Kuk

Abrégé

An animation system wherein a machine learning model is adopted to generate animated facial actions based on parameters obtained from a live actor. Specifically, the anatomical structure such as a facial muscle topology and a skull surface that are specific to the live actor may be used. A muscle structure of simplified “pseudo” muscles that approximate the actual muscle topology but with reduced degrees of freedom is determined to improve computational efficiency.

Classes IPC  ?

  • G06T 15/20 - Calcul de perspectives
  • G06T 13/40 - Animation tridimensionnelle [3D] de personnages, p. ex. d’êtres humains, d’animaux ou d’êtres virtuels
  • G06V 40/16 - Visages humains, p. ex. parties du visage, croquis ou expressions
  • G06N 20/00 - Apprentissage automatique
  • G06T 17/20 - Description filaire, p. ex. polygonalisation ou tessellation

89.

Systems and methods for generating a skull surface for computer animation

      
Numéro d'application 17079078
Numéro de brevet 11410366
Statut Délivré - en vigueur
Date de dépôt 2020-10-23
Date de la première publication 2022-03-24
Date d'octroi 2022-08-09
Propriétaire Unity Technologies SF (USA)
Inventeur(s) Choi, Byung Kuk

Abrégé

An animation system wherein a machine learning model is adopted to generate animated facial actions based on parameters obtained from a live actor. Specifically, the anatomical structure such as a facial muscle topology and a skull surface topology that are specific to the live actor may be used. A skull surface that is specific to a live actor may be generated based on facial scans of the live actor and generic tissue depth data. For example, the facial scans of the live actor may provide a skin surface topology of the live actor, based on which the skull surface underneath the skin surface can be derived by “offsetting” the skin surface with corresponding soft tissue depth at different sampled points on the skin surface.

Classes IPC  ?

  • G06T 13/40 - Animation tridimensionnelle [3D] de personnages, p. ex. d’êtres humains, d’animaux ou d’êtres virtuels
  • G06T 17/00 - Modélisation tridimensionnelle [3D] pour infographie
  • G06F 16/53 - Requêtes
  • G06T 7/50 - Récupération de la profondeur ou de la forme
  • G06N 3/08 - Méthodes d'apprentissage
  • G06N 20/00 - Apprentissage automatique
  • G06T 15/20 - Calcul de perspectives
  • G06T 17/20 - Description filaire, p. ex. polygonalisation ou tessellation
  • G06V 40/16 - Visages humains, p. ex. parties du visage, croquis ou expressions

90.

Active marker strobing and synchronization for performance capture communication

      
Numéro d'application 17541082
Numéro de brevet 11380136
Statut Délivré - en vigueur
Date de dépôt 2021-12-02
Date de la première publication 2022-03-24
Date d'octroi 2022-07-05
Propriétaire UNITY TECHNOLOGIES SF (USA)
Inventeur(s)
  • Momcilovic, Dejan
  • Botting, Jake

Abrégé

The present description relates to light patterns used in a live action scene of a visual production to encode information associated with objects in the scene, such as movement and position of the objects. A data capture system includes active markers that emit light of a particular wavelength in predefined strobing patterns. In some implementations, the active markers are instructed to emit an assigned signature pattern of light through a signal controller sending signals to a control unit. Various components are synchronized such that pulsing of light corresponds to time slices and particular frames captured by the performance capture system. The data representing the pattern is embedded in illuminated and blank frames. Frames showing the light pattern are analyzed to extract information about the active markers, such as identification of the active markers and objects to which they are attached.
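
As an illustration of reading a strobed signature pattern back out of captured frames (not the patented protocol), the following Python treats illuminated frames as 1 bits and blank frames as 0 bits within one marker's time-slice window; the threshold and encoding are assumptions.

    def decode_marker_id(frame_brightness, threshold=0.5):
        # frame_brightness: per-frame brightness samples of one marker across
        # the time slices of its strobing window; illuminated frames encode
        # 1 bits and blank frames encode 0 bits.
        bits = [1 if b > threshold else 0 for b in frame_brightness]
        marker_id = 0
        for bit in bits:
            marker_id = (marker_id << 1) | bit
        return marker_id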

Classes IPC  ?

  • G06K 9/00 - Méthodes ou dispositions pour la lecture ou la reconnaissance de caractères imprimés ou écrits ou pour la reconnaissance de formes, p.ex. d'empreintes digitales
  • G06V 40/20 - Mouvements ou comportement, p. ex. reconnaissance des gestes
  • G06T 7/593 - Récupération de la profondeur ou de la forme à partir de plusieurs images à partir d’images stéréo
  • G06T 7/20 - Analyse du mouvement
  • G06T 7/70 - Détermination de la position ou de l'orientation des objets ou des caméras
  • H04N 5/232 - Dispositifs pour la commande des caméras de télévision, p.ex. commande à distance
  • H04N 5/235 - Circuits pour la compensation des variations de la luminance de l'objet
  • A61B 90/00 - Instruments, outillage ou accessoires spécialement adaptés à la chirurgie ou au diagnostic non couverts par l'un des groupes , p. ex. pour le traitement de la luxation ou pour la protection de bords de blessures
  • A61B 5/11 - Mesure du mouvement du corps entier ou de parties de celui-ci, p. ex. tremblement de la tête ou des mains ou mobilité d'un membre
  • A61B 5/00 - Mesure servant à établir un diagnostic Identification des individus
  • G06V 10/60 - Extraction de caractéristiques d’images ou de vidéos relative aux propriétés luminescentes, p. ex. utilisant un modèle de réflectance ou d’éclairage

91.

SYSTEMS AND METHODS FOR BUILDING A MUSCLE-TO-SKIN TRANSFORMATION IN COMPUTER ANIMATION

      
Numéro de document 03192467
Statut En instance
Date de dépôt 2020-11-11
Date de disponibilité au public 2022-03-17
Propriétaire UNITY TECHNOLOGIES SF (USA)
Inventeur(s) Choi, Byung Kuk

Abrégé

An animation system wherein a machine learning model is adopted to learn a transformation relationship between facial muscle movements and skin surface movements. For example, for the skin surface representing "smile," the transformation model derives movement vectors relating to what facial muscles are activated, what are the muscle strains, what is the joint movement, and/or the like. Such derived movement vectors may be used to simulate the skin surface "smile."

Classes IPC  ?

  • G06T 13/40 - Animation tridimensionnelle [3D] de personnages, p. ex. d’êtres humains, d’animaux ou d’êtres virtuels
  • G06T 17/00 - Modélisation tridimensionnelle [3D] pour infographie

92.

Wearable article supporting performance capture equipment

      
Numéro d'application 17535196
Numéro de brevet 11457127
Statut Délivré - en vigueur
Date de dépôt 2021-11-24
Date de la première publication 2022-03-17
Date d'octroi 2022-09-27
Propriétaire UNITY TECHNOLOGIES SF (USA)
Inventeur(s)
  • Momcilovic, Dejan
  • Botting, Jake

Abrégé

Implementations provide a wearable article for a performance capture system. In some implementations, a wearable article includes one or more regions, where the one or more regions are configured to be worn on at least a portion of a body of a user, and where at least one of the one or more regions are configured to hold performance capture equipment in predetermined positions. In some implementations, the wearable article also includes a plurality of mounting mechanisms coupled to the one or more regions for mounting reference markers to be used for position determination. In some implementations, the wearable article also includes a plurality of fastening mechanisms coupled to the one or more regions for fastening devices and accessories for controlling the reference markers.

Classes IPC  ?

  • H04N 5/222 - Circuits de studioDispositifs de studioÉquipements de studio
  • G08C 17/02 - Dispositions pour transmettre des signaux caractérisées par l'utilisation d'une voie électrique sans fil utilisant une voie radio

93.

Method for generating simulations of thin film interfaces for improved animation

      
Numéro d'application 17530293
Numéro de brevet 11417043
Statut Délivré - en vigueur
Date de dépôt 2021-11-18
Date de la première publication 2022-03-10
Date d'octroi 2022-08-16
Propriétaire Unity Technologies SF (USA)
Inventeur(s) Stomakhin, Alexey

Abrégé

A method for generating one or more visual representations of an object colliding with an interface between a simulated fluid and a material. The method includes obtaining shape and movement data of a bulk fluid and an object, identifying an interface where the bulk fluid covers a portion of the object, generating an emitted fluid at the interface, generating shape and movement data of the emitted fluid interacting with the object.

Classes IPC  ?

  • G06T 13/40 - Animation tridimensionnelle [3D] de personnages, p. ex. d’êtres humains, d’animaux ou d’êtres virtuels
  • G06T 13/60 - Animation tridimensionnelle [3D] de phénomènes naturels, p. ex. la pluie, la neige, l'eau ou les plantes

94.

Method for applying a vignette effect to rendered images

      
Numéro d'application 17086046
Numéro de brevet 11308586
Statut Délivré - en vigueur
Date de dépôt 2020-10-30
Date de la première publication 2022-03-10
Date d'octroi 2022-04-19
Propriétaire Unity Technologies SF (USA)
Inventeur(s) Hillman, Peter

Abrégé

Methods and systems for applying a vignette effect to a rendered computer-generated image are presented. A kernel image including a kernel shape is overlaid on a control image including a vignette shape, creating a region where the kernel shape and control image overlap. A region of the rendered image corresponding to the overlapping region is blurred based on the shape of the overlapping region. A vignetting factor indicating a loss in intensity as a result of the blurring is computed based on the area of the overlapping region and the area of the kernel shape. The intensity of the blurred region is scaled upward based on the vignetting factor. The position of the kernel image on the control shape is shifted, resulting in a new overlapping region. Blurring and intensity compensation are performed for the region of the rendered image corresponding to the new overlapping region.
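
The vignetting-factor compensation can be sketched as the overlap-area ratio below, with the blurred region scaled back up by the lost fraction; mask construction and the blur itself are omitted, and the names are hypothetical.

    import numpy as np

    def vignetting_factor(kernel_mask, control_mask):
        # kernel_mask, control_mask: boolean arrays of the same shape; the
        # kernel mask is the blur kernel support shifted to the current
        # position, the control mask is the vignette shape in the control image.
        overlap_area = np.logical_and(kernel_mask, control_mask).sum()
        kernel_area = kernel_mask.sum()
        # Fraction of the kernel that still contributes light after vignetting.
        return overlap_area / max(kernel_area, 1)

    def compensate(blurred_region, factor):
        # Scale the blurred region back up to offset the intensity lost
        # to the vignette overlap.
        return blurred_region / max(factor, 1e-6)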

Classes IPC  ?

  • G06T 5/00 - Amélioration ou restauration d'image
  • G06T 5/20 - Amélioration ou restauration d'image utilisant des opérateurs locaux
  • G06T 11/00 - Génération d'images bidimensionnelles [2D]

95.

Method for emulating defocus of sharp rendered images

      
Numéro d'application 17086032
Numéro de brevet 11328437
Statut Délivré - en vigueur
Date de dépôt 2020-10-30
Date de la première publication 2022-03-10
Date d'octroi 2022-05-10
Propriétaire
  • UNITY SOFTWARE INC. (USA)
  • UNITY TECHNOLOGIES SF (USA)
Inventeur(s) Hillman, Peter

Abrégé

Methods and systems for defocusing a rendered computer-generated image are presented. Pixel values for a pixel array are determined from a scene description. A blur amount for each pixel is determined based on a lens function representing a lens shape and/or effect. A blur amount and blur transparency value are determined for the pixel based on the lens function and pixel depth. A convolution range comprising pixels adjacent to the pixel is determined based on the blur amount. A blend color value is determined for the pixel based on the color value of the pixel, color values of pixels in the convolution range, and the blur transparency value. The blend color value is scaled based on the blend color value and a modified pixel color value is determined from scaled blend color values.
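
A per-pixel sketch of deriving a blur amount and blur transparency from depth and blending with the convolution range; the linear lens function here is a placeholder for the lens shape and effect function described above, and all names are assumptions.

    import numpy as np

    def defocus_pixel(color, depth, focus_depth, neighbour_colors, lens_scale=2.0):
        # neighbour_colors: (k, 3) colors of the pixels inside the convolution
        # range around this pixel. The blur amount is a simple linear lens
        # function of the distance from the focus depth.
        blur_amount = lens_scale * abs(depth - focus_depth)
        transparency = min(1.0, blur_amount)       # 0 = sharp, 1 = fully blurred
        neighbourhood = np.asarray(neighbour_colors, dtype=np.float64).mean(axis=0)
        # Blend the pixel's own color with its neighbourhood by the transparency.
        return (1.0 - transparency) * np.asarray(color, dtype=np.float64) \
               + transparency * neighbourhood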

Classes IPC  ?

  • G06T 7/529 - Récupération de la profondeur ou de la forme à partir de la texture
  • G06T 5/00 - Amélioration ou restauration d'image
  • G06T 11/00 - Génération d'images bidimensionnelles [2D]
  • G06T 7/13 - Détection de bords
  • G06T 3/40 - Changement d'échelle d’images complètes ou de parties d’image, p. ex. agrandissement ou rétrécissement

96.

Systems and methods for data bundles in computer animation

      
Numéro d'application 17082859
Numéro de brevet 11341702
Statut Délivré - en vigueur
Date de dépôt 2020-10-28
Date de la première publication 2022-03-10
Date d'octroi 2022-05-24
Propriétaire UNITY TECHNOLOGIES SF (USA)
Inventeur(s) Choi, Byung Kuk

Abrégé

An animation system wherein a machine learning model is adopted to learn a transformation relationship between facial muscle movements and skin surface movements. For example, for the skin surface representing “smile,” the transformation model derives movement vectors relating to what facial muscles are activated, what are the muscle strains, what is the joint movement, and/or the like. Such derived movement vectors may be used to simulate the skin surface “smile.”

IPC Classes

  • G06T 13/00 - Animation
  • G06T 13/40 - 3D [three-dimensional] animation of characters, e.g. humans, animals or virtual beings
  • G06T 15/00 - 3D [three-dimensional] image rendering
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
  • G06N 3/08 - Learning methods
  • G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation

97.

Systems and methods for building a muscle-to-skin transformation in computer animation

      
Application number 17082890
Patent number 11875504
Status Granted - in force
Filing date 2020-10-28
Date of first publication 2022-03-10
Date of grant 2024-01-16
Owner Unity Technologies SF (USA)
Inventor(s) Choi, Byung Kuk

Abstract

An animation system wherein a machine learning model is adopted to learn a transformation relationship between facial muscle movements and skin surface movements. For example, for the skin surface representing “smile,” the transformation model derives movement vectors relating to what facial muscles are activated, what are the muscle strains, what is the joint movement, and/or the like. Such derived movement vectors may be used to simulate the skin surface “smile.”

IPC Classes
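
As a simplified illustration of the kind of learned transformation this abstract describes, the sketch below fits a linear least-squares map from muscle-movement vectors (activations, strains, joint movement) to skin-surface vertex offsets, then predicts the skin displacement for a "smile" input. The data shapes, random placeholder data, and the linear model are assumptions; the patent does not specify a particular model.

```python
# Illustrative sketch only: a linear least-squares fit stands in for the learned
# muscle-to-skin transformation. Shapes, data, and the linear model are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_muscle_params, n_vertices = 200, 40, 500

# Training pairs: muscle-movement vectors (activations, strains, joint movement)
# and the corresponding captured skin-surface vertex offsets (x, y, z per vertex).
muscle_vectors = rng.normal(size=(n_samples, n_muscle_params))
skin_offsets = rng.normal(size=(n_samples, 3 * n_vertices))

# "Learn" the transformation: find W such that muscle_vectors @ W ≈ skin_offsets.
W, *_ = np.linalg.lstsq(muscle_vectors, skin_offsets, rcond=None)

# Inference: given the muscle-movement vector derived for "smile",
# predict the skin-surface displacement field that simulates the expression.
smile_muscles = rng.normal(size=(1, n_muscle_params))
predicted_skin = (smile_muscles @ W).reshape(n_vertices, 3)
print(predicted_skin.shape)   # (500, 3) per-vertex offsets
```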

98.

Motion capture calibration

      
Application number 17185933
Patent number 11282233
Status Granted - in force
Filing date 2021-02-25
Date of first publication 2022-03-10
Date of grant 2022-03-22
Owner
  • UNITY SOFTWARE INC. (USA)
  • UNITY TECHNOLOGIES SF (USA)
Inventor(s)
  • Momcilovic, Dejan
  • Botting, Jake

Abstract

Embodiments facilitate the calibration of cameras in a live action scene. In some embodiments, a system receives images of the live action scene from a plurality of cameras. The system further receives reference point data generated from a performance capture system, where the reference point data is based on at least three reference points, where the at least three reference points are positioned within the live action scene, and where distances between the at least three reference points are predetermined. The system further determines a location and orientation of each camera based on the reference point data.

IPC Classes

  • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
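
For illustration, the sketch below recovers one camera's location and orientation from image observations of known reference points using OpenCV's solvePnP. The marker layout, intrinsics, and ground-truth pose are assumptions used only to make the example self-checking; this is not the patented procedure.

```python
# Illustrative sketch assuming OpenCV and pinhole intrinsics: recover a camera's
# location and orientation from 2D observations of reference points whose 3D
# positions (and hence mutual distances) are known.
import numpy as np
import cv2

# Assumed layout of coplanar reference markers in the live-action scene (metres).
reference_points_3d = np.array([[0.0, 0.0, 0.0],
                                [1.0, 0.0, 0.0],
                                [1.0, 1.0, 0.0],
                                [0.0, 1.0, 0.0]])

# Assumed intrinsics of the camera being calibrated.
camera_matrix = np.array([[1400.0, 0.0, 960.0],
                          [0.0, 1400.0, 540.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

# Synthetic "observations": project the markers with a ground-truth pose so the
# recovery below can be checked; in practice these come from the camera images.
true_rvec = np.array([[0.10], [0.20], [0.05]])
true_tvec = np.array([[0.10], [-0.20], [5.00]])
observed_2d, _ = cv2.projectPoints(reference_points_3d, true_rvec, true_tvec,
                                   camera_matrix, dist_coeffs)

# Pose recovery from the reference-point data.
ok, rvec, tvec = cv2.solvePnP(reference_points_3d, observed_2d, camera_matrix, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)
camera_position = (-R.T @ tvec).ravel()   # camera location in scene coordinates
camera_orientation = R.T                  # camera-to-scene rotation
print(np.round(camera_position, 3))
```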

99.

Systems and Methods for Building a Skin-to-Muscle Transformation in Computer Animation

      
Application number 17082895
Status Pending
Filing date 2020-10-28
Date of first publication 2022-03-10
Owner
  • UNITY SOFTWARE INC. (USA)
  • UNITY TECHNOLOGIES SF (USA)
Inventor(s) Choi, Byung Kuk

Abstract

An animation system wherein a machine learning model is adopted to learn a transformation relationship between facial muscle movements and skin surface movements. For example, for the skin surface representing “smile,” the transformation model derives movement vectors relating to what facial muscles are activated, what are the muscle strains, what is the joint movement, and/or the like. Such derived movement vectors may be used to simulate the skin surface “smile.”

IPC Classes
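
One plausible reading of a skin-to-muscle transformation, sketched below under strong assumptions, is to invert a previously learned linear muscle-to-skin map by least squares, estimating the muscle-movement vector that best explains an observed skin displacement. The shapes, random placeholder data, and the linear model are illustrative only; the abstract itself describes the muscle-to-skin direction.

```python
# Illustrative sketch under strong assumptions: given a learned linear
# muscle-to-skin map W, estimate the muscle-movement vector that best explains
# an observed skin displacement (a least-squares "skin-to-muscle" inverse).
import numpy as np

rng = np.random.default_rng(1)
n_muscle_params, n_skin_dims = 40, 1500

W = rng.normal(size=(n_muscle_params, n_skin_dims))   # stand-in for the learned forward map
observed_skin = rng.normal(size=(n_skin_dims,))       # captured skin movement, e.g. a smile

# Solve min_m || m @ W - observed_skin || for the muscle-movement vector m.
muscle_estimate, *_ = np.linalg.lstsq(W.T, observed_skin, rcond=None)
print(muscle_estimate.shape)   # (40,) estimated activations / strains / joint movement
```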

100.

MOTION CAPTURE CALIBRATION USING DRONES

      
Document number 03191221
Status Pending
Filing date 2021-03-17
Date of availability to the public 2022-03-03
Owner UNITY TECHNOLOGIES SF (USA)
Inventor(s)
  • Momcilovic, Dejan
  • Botting, Jake

Abstract

Embodiments facilitate the calibration of cameras in a live action scene using drones. In some embodiments, a method configures a plurality of reference cameras to observe at least one portion of the live action scene. The method further configures one or more moving cameras having unconstrained motion to observe one or more moving objects in the live action scene and to observe at least three known reference points associated with the plurality of reference cameras. The method further receives reference point data in association with the one or more moving cameras, where the reference point data is based on the at least three known reference points. The method further computes a location and an orientation of each moving camera of the one or more moving cameras based on one or more of the reference point data and one or more locations of one or more reference cameras of the plurality of reference cameras.

IPC Classes

  • G01B 21/04 - Measuring arrangements or details thereof, in which the measuring technique is not covered by the other groups of this subclass, is unspecified or is non-significant, for measuring length, width or thickness by measuring coordinates of points
  • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
  • H04N 5/222 - Studio circuitry; studio devices; studio equipment
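
For illustration, the sketch below re-estimates a moving camera's pose every frame from its view of fixed reference points with known 3D positions, which is the gist of calibrating unconstrained cameras against fixed markers. The synthetic drone trajectory, marker layout, and intrinsics are assumptions, and OpenCV's solvePnP stands in for whatever solver the described method uses.

```python
# Illustrative sketch (OpenCV, assumed intrinsics and trajectory): a moving or
# drone-mounted camera's pose is re-estimated each frame from its observations
# of fixed reference points whose 3D positions are known.
import numpy as np
import cv2

reference_points_3d = np.array([[0.0, 0.0, 0.0],
                                [2.0, 0.0, 0.0],
                                [2.0, 2.0, 0.0],
                                [0.0, 2.0, 0.0]])
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

trajectory = []
for frame in range(5):
    # Synthetic ground truth used only to generate per-frame observations:
    # the drone drifts sideways while yawing slightly.
    true_rvec = np.array([[0.0], [0.05 * frame], [0.0]])
    true_tvec = np.array([[0.2 * frame], [0.0], [6.0]])
    pts_2d, _ = cv2.projectPoints(reference_points_3d, true_rvec, true_tvec, K, dist)

    # Per-frame pose recovery of the moving camera from the reference points.
    ok, rvec, tvec = cv2.solvePnP(reference_points_3d, pts_2d, K, dist)
    R, _ = cv2.Rodrigues(rvec)
    trajectory.append((-R.T @ tvec).ravel())   # moving-camera position in scene coordinates

print(np.round(trajectory, 3))
```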