Disclosed is a method to deform points in space using a curve. A processor obtains the curve and a point on the curve. The processor computes a tangent to the curve at the point and determines a first vector different from the tangent. The processor constructs a plane at the point on the curve based on the tangent and the first vector. The processor defines first and second sides of the curve, where the first and second sides include first and second pluralities of points on opposite sides of the plane, respectively. The processor defines a deformation associated with the curve and influencing the points in space, where a magnitude of the deformation affecting a second point among the points in space depends on whether the second point is on the first side or the second side. The processor applies the deformation to the second point.
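As a concrete illustration of the side-dependent deformation, here is a minimal sketch in Python/NumPy; the polyline curve representation, the world-up choice of first vector, and the per-side gains are illustrative assumptions, not the claimed method.

```python
import numpy as np

def deform_points(curve_pts, i, points, offset=0.1, side_gain=(1.0, 0.3)):
    """Displace `points` along the plane normal, scaled by which side of the
    tangent plane at curve_pts[i] each point falls on (illustrative sketch)."""
    p = curve_pts[i]
    tangent = curve_pts[min(i + 1, len(curve_pts) - 1)] - curve_pts[max(i - 1, 0)]
    tangent = tangent / np.linalg.norm(tangent)
    first = np.array([0.0, 1.0, 0.0])            # a vector different from the tangent
    if abs(first @ tangent) > 0.99:              # avoid a near-parallel choice
        first = np.array([1.0, 0.0, 0.0])
    normal = np.cross(tangent, first)            # normal of the plane they span
    normal = normal / np.linalg.norm(normal)
    side = np.sign((points - p) @ normal)        # +1: first side, -1: second side
    gain = np.where(side >= 0, side_gain[0], side_gain[1])
    return points + (gain * offset)[:, None] * normal
```

For example, with a straight polyline along x, points above and below the tangent plane receive the two different gains.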
A system and method for application interaction and/or communication, the system maintaining a polyspatial input/output (I/O) graph specifying how applications can interact within a unified logical space. In some embodiments, the polyspatial graph specifies an application hierarchy comprising at least a host application and one or more hosted applications to be executed within the host application, one of the hosted applications corresponding to an intermediate host application for an additional application. The host application and the one or more hosted applications are executed, the executing comprising: receiving, at the host application, input to be transmitted to the one or more hosted applications; coordinating, by the host application, interactions among the one or more hosted applications; generating, by the host application, an aggregated output based on outputs of the hosted applications and comprising a scene graph; and displaying, by the host application, a display based on the generated aggregated output.
G06F 3/04815 - Interaction with a metaphor-based environment or object displayed as three-dimensional, e.g. changing the user's viewpoint with respect to the environment or object
A universal server and host system and method, the system configuring a universal server to transmit, to one or more universal hosts, asset and scene information to generate one or more local scene graphs, each local scene graph replicating a scene graph associated with a simulation running at the universal server and being associated with a local simulation running at a universal host. Upon receiving input from the one or more universal hosts, the universal server updates an internal state based on the received input, generates commands encoding changes to an output state, and transmits the commands to the one or more universal hosts for updating each local scene graph at its respective universal host, at least one local scene graph to be rendered at a local device associated with its respective universal host. The universal server and the one or more universal hosts are applications.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transforming a changing game scene into an encoded video stream to be transmitted to a mobile phone or a thin client
A63F 13/358 - Adapting the game course according to the network or server load, e.g. for reducing latency due to different connection speeds between clients
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
A method of rendering 3D content in a mixed reality environment is disclosed. A volume camera is defined within a first virtual environment. The volume camera is configured to capture a 3D volume of space. The defining includes setting parameters for a position or an orientation to be captured by the volume camera. A volume view transform is applied to the volume camera to establish a perspective of the volume camera within the 3D volume of space. The volume view transform is configured to define the position or the orientation relative to the 3D content in the 3D volume of space. Selected 3D content is processed within the 3D volume of space based on a configuration of the volume camera. The selected 3D content is rendered into a second virtual environment based on the configuration. The rendering includes adapting to a perspective of a user within the second virtual environment.
An AI-assisted texture generation system and method that generates a texture for an object such as a 3D mesh. At a given iteration, the system renders, from a viewpoint, one or more of an object depth and a cumulative texture associated with a cumulative pixel quality map; determines an estimated pixel quality map associated with the viewpoint; generates, at the viewpoint, a new texture associated with the object based on one or more of the object depth, the cumulative texture, and a refine mask; updates the cumulative texture based on the new texture; and updates the cumulative pixel quality map based on the estimated pixel quality map. In some examples, the refine mask is computed based on the cumulative pixel quality map and an estimated pixel quality map. In some examples, the new texture is generated using an augmented inpainting diffusion model for image generation.
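The per-iteration update can be pictured with a short NumPy sketch; the refine-mask rule (refine a texel wherever the current viewpoint sees it with higher estimated quality) is an assumption consistent with the abstract rather than the claimed algorithm.

```python
import numpy as np

def update_step(cum_texture, cum_quality, new_texture, est_quality):
    """One texture-generation iteration: blend in the newly generated texture
    where this viewpoint improves quality, then keep the best quality seen."""
    refine_mask = est_quality > cum_quality                    # (H, W) bool
    cum_texture = np.where(refine_mask[..., None], new_texture, cum_texture)
    cum_quality = np.maximum(cum_quality, est_quality)         # best-so-far map
    return cum_texture, cum_quality, refine_mask
```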
A method of generating an image is disclosed. A mask and descriptive text associated with a subject are received. The descriptive text comprises a text prompt. The mask is resized to fit within a predefined bounding box and the resized mask is centered on a background image. The centered mask is filled with noise. Output of an image of the subject on a solid background is received from a generative AI model in response to a passing of a request to the generative AI model. The request includes the noise-filled mask and the descriptive text.
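A hedged sketch of the mask preparation steps (resize to a bounding box, centre on a background, fill with noise), using Pillow and NumPy; the 512-pixel canvas, 256-pixel box, and binarisation threshold are illustrative assumptions.

```python
import numpy as np
from PIL import Image

def prepare_noise_mask(mask_array, canvas_hw=(512, 512), box=256, seed=0):
    """Resize the subject mask to fit a box, centre it, and fill it with noise."""
    m = Image.fromarray(mask_array).convert("L")
    m.thumbnail((box, box))                        # fits inside box, keeps aspect
    w, h = m.size
    canvas = np.zeros(canvas_hw, dtype=np.uint8)
    y0, x0 = (canvas_hw[0] - h) // 2, (canvas_hw[1] - w) // 2
    canvas[y0:y0 + h, x0:x0 + w] = np.array(m)     # centred mask on background
    rng = np.random.default_rng(seed)
    noise = rng.integers(0, 256, canvas_hw, dtype=np.uint8)
    return np.where(canvas > 127, noise, 0)        # noise inside the mask region
```

The returned array, together with the text prompt, would form the request passed to the generative model.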
Systems and methods for importing, converting, optimizing, and/or executing a computational graph or AST at an endpoint target. The system includes accessing an input computational graph corresponding to a trained machine-learning (ML) model; converting the input computational graph into an internal computational graph; based on determined characteristics of the internal computational graph, optimizing the internal computational graph to generate an optimized computational graph by applying one or more of at least a graph element reordering operation, a graph element fusing operation, or a graph element creation operation; converting the optimized computational graph to executable instructions enabled to be executed on an endpoint associated with a backend and a platform; generating associated scheduling instructions; and executing the executable instructions on the endpoint based on the scheduling instructions. The executable instructions can forgo references to the input, internal, or optimized computational graphs, and/or be reused by other systems, engines, or applications.
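To make the graph-element fusing operation concrete, a toy pass is sketched below; the node layout and op names are assumptions for illustration, not the internal graph format.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    op: str
    inputs: list = field(default_factory=list)   # Node instances or constants

def fuse_mul_add(nodes):
    """Fold each 'add' whose first operand is a 'mul' into one 'muladd' node."""
    fused = []
    for n in nodes:
        first = n.inputs[0] if n.inputs else None
        if n.op == "add" and isinstance(first, Node) and first.op == "mul":
            fused.append(Node("muladd", first.inputs + n.inputs[1:]))
        else:
            fused.append(n)
    return fused  # a real pass would also prune the now-dead 'mul' nodes
```

For example, `fuse_mul_add([m := Node("mul", [2, 3]), Node("add", [m, 1])])` produces a `muladd(2, 3, 1)` node alongside the original `mul`.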
The system efficiently distributes data in a cloud pipeline to geographically distributed users. To enable efficient distribution of resources to multiple users, the production pipeline can be represented as two flow graphs: a flow graph of tasks as well as a flow graph of data. The flow graph of tasks can indicate a sequence of tasks, departments, and users in a production pipeline. The flow graph of data can indicate location of data needed for each task in the flow graph of tasks, and location of the users assigned to the particular task. Based on the next task that needs to be executed in the flow graph of tasks, the system can proactively gather the data from the various locations and deliver the data to the users assigned to executing the next task.
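The two-graph lookup can be pictured with a small sketch; the dictionary shapes ("data" locations and "users" per task) are hypothetical stand-ins for the flow graphs.

```python
def prefetch_plan(task_graph, data_graph, done):
    """Find the next runnable task and plan data transfers toward its users.

    task_graph: {task: [prerequisite tasks]}
    data_graph: {task: {"data": [source sites], "users": [user sites]}}
    """
    for task, deps in task_graph.items():
        if task not in done and all(d in done for d in deps):
            sites = data_graph[task]
            return task, [(src, dst) for src in sites["data"]
                          for dst in sites["users"]]
    return None, []   # nothing runnable yet
```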
A method of performing operations for executing a progressive interactive augmented reality (AR) advertisement display during execution of a running application is disclosed. Based on a receiving of a first trigger event from the running application, a first aspect of the progressive AR advertisement is executed within a display window while the application is executing. The first aspect includes an interactive advertisement. Based on a receiving of a second trigger event from within the first aspect of the progressive AR advertisement, a second aspect of the progressive AR advertisement is executed in the display window while the application is executing. The second aspect includes an interactive AR advertisement.
A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or to display a laser sight in a shooting game, for prompting interaction with the player, e.g. by displaying a game menu
A63F 13/61 - Generating or modifying game content before or while executing the game program, e.g. using tools specially adapted for game development or a game-integrated level editor, using advertising information
Embodiments described herein provide a three-dimensional (3D) facial processing system that can be used for animator-centric and anatomically-driven 3D facial modeling, animation, and transfer. Specifically, a collection of muscle fiber curves may be considered as an anatomic basis, whose contraction and relaxation are defined as a fine-grained parameterization of human facial expression. An end-to-end modular deformation architecture may then be built using this representation to implement automatic optimization of the parameters of a specific face from high-quality dynamic facial scans; face animation driven by performance capture, keyframes, or dynamic simulation; interactive and direct manipulation of facial expression; and animation transfer from an actor to a character.
G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
G06T 11/60 - Editing figures and text; Combining figures or text
G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
G06V 40/18 - Eye characteristics, e.g. of the iris
11.
Speaker separation based on real-time latent speaker state characterization
Systems, methods, and non-transitory computer-readable media can obtain a stream of audio waveform data that represents speech involving a plurality of speakers. As the stream of audio waveform data is obtained, a plurality of audio chunks can be determined. An audio chunk can be associated with one or more identity embeddings. The stream of audio waveform data can be segmented into a plurality of segments based on the plurality of audio chunks and respective identity embeddings associated with the plurality of audio chunks. A segment can be associated with a speaker included in the plurality of speakers. Information describing the plurality of segments associated with the stream of audio waveform data can be provided.
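One simple way to realise the chunk-to-segment step is a running-mean change-point rule over identity embeddings, sketched below; the cosine threshold and the rule itself are illustrative assumptions, not the claimed segmentation.

```python
import numpy as np

def segment_chunks(embeddings, threshold=0.6):
    """Group consecutive audio chunks whose identity embeddings stay close."""
    segments, current = [], [0]
    mean = np.asarray(embeddings[0], dtype=float)
    for i in range(1, len(embeddings)):
        e = np.asarray(embeddings[i], dtype=float)
        cos = e @ mean / (np.linalg.norm(e) * np.linalg.norm(mean) + 1e-9)
        if cos < threshold:              # identity drifted: new speaker turn
            segments.append(current)
            current, mean = [i], e
        else:
            current.append(i)
            mean += (e - mean) / len(current)   # incremental segment mean
    segments.append(current)
    return segments                      # each segment maps to one speaker
```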
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
G10L 17/04 - Training, enrolment or model building
G10L 17/06 - Decision making techniques; Pattern matching strategies
G10L 17/08 - Use of a distortion measure or a particular distance between an analysis pattern and reference templates
Peer-to-peer video streaming software that takes video from a host's Graphics Processing Unit (GPU), moves it to the encoder, manages the color conversions, moves the video into the networking software, and in turn manages the video on a client. This software also handles the inputs from the client and tells the host how to react to the client's inputs. In executing the video streaming, the host broadcasts and renders the video that is or will be streamed. It encodes the video frame with a hardware encoder that is part of the GPU to eliminate latency associated with transferring frames to the central processing unit (CPU). The host transmits compressed video and audio data to the client, and the client uses its hardware decoder and the video streaming software to render the video. The client sends mouse, keyboard, joystick, and controller signals to the host, which varies the bit rate based on the network connection.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for the simultaneous processing of several programs
H04L 65/60 - Network streaming of media packets
H04L 65/612 - Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio, for unicast
A method of populating a digital environment with an anonymous humanoid digital character is disclosed. One or more values are randomly determined for one or more target human properties. An asset pool is searched for assets that are compatible with the one or more values. The assets include a human body mesh. A blend of height and weight vertex animation textures (VATs) is applied to the human body mesh. The blended human body mesh is placed into the digital environment.
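The VAT blend amounts to a weighted sum of per-vertex offsets; a minimal sketch follows, assuming each VAT row stores an XYZ offset per vertex (an illustrative layout).

```python
import numpy as np

def blend_vats(body_mesh, height_vat, weight_vat, h, w):
    """Displace a (V, 3) human body mesh by blended height/weight VAT offsets;
    h and w in [0, 1] are the randomly determined target properties."""
    return body_mesh + h * height_vat + w * weight_vat
```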
09 - Scientific and electrical apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods and services
Downloadable computer software for enabling collaborative gaming, game software development, and project management. Providing a website featuring technology that enables collaborative gaming, game software development, and project management
15.
System and method for AI-assisted character pose authoring
A method of optimizing a pose of a character is disclosed. An input is received. The input defines one or more effectors. A pose is generated for the character using a learned inverse kinematics (LIK) machine-learning (ML) component. The LIK ML component is trained using a motion dataset. The generating of the pose is based on one or more criteria. The one or more criteria include explicit intent expressed as the one or more effectors. The generated pose is adjusted using an ordinary inverse kinematics (OIK) component. The OIK component solves an output from the LIK ML component to increase an accuracy at which the explicit intent is reached. A final pose is generated from the adjusted pose. The generating of the final pose includes applying a physics engine (PE) to an output from the OIK component to increase a physics accuracy of the pose.
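The three-stage flow reads naturally as a pipeline; the sketch below uses hypothetical callables for the LIK model, OIK solver, and physics engine, since the abstract does not specify their interfaces.

```python
def author_pose(effectors, lik_model, oik_solver, physics_engine):
    """LIK proposes, OIK refines toward the explicit intent, physics corrects."""
    pose = lik_model(effectors)            # data-driven pose from motion prior
    pose = oik_solver(pose, effectors)     # tighten effector constraints
    return physics_engine(pose)            # enforce physical plausibility
```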
Systems, methods, and non-transitory computer-readable media can provide audio waveform data that corresponds to a voice sample to a temporal convolutional network for evaluation. The temporal convolutional network can pre-process the audio waveform data and can output an identity embedding associated with the audio waveform data. The identity embedding associated with the voice sample can be obtained from the temporal convolutional network. Information describing a speaker associated with the voice sample can be determined based at least in part on the identity embedding.
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
G06N 3/049 - Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
An animation system in which scanned facial expressions are processed to form muscle models based on live actors, and in which muscle models are combined over a plurality of live actors to form a facial rig usable for generating expressions, based on specification of a strain vector and a control vector of a muscle model, for varying characters corresponding to live actors.
An active marker relay system is provided to operate responsive active markers coupled to an object in a live action scene for performance capture, via a trigger unit that relays energy pulse information to responsive active markers. Using simple sensors, the responsive active markers sense control energy pulses projected from the trigger unit. In return, the responsive active markers produce energy pulses that emulate at least one characteristic of the control energy pulses, such as a particular pulse rate or wavelength of energy. The reactivity of the responsive active markers to control energy pulses enables simple control of the responsive active markers through the trigger unit.
09 - Scientific and electrical apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods and services
Downloadable computer software using artificial intelligence for creating, enhancing, enabling, operating, and displaying computer applications, computer games, online games, online computer applications, multi-player computer games, interactive 3D game and non-game worlds, mixed 3D and 2D content, films, television programs, videos, and digital media; Downloadable computer software using artificial intelligence for creating, enhancing, enabling, operating, and displaying virtual, augmented, mixed reality, and extended reality experiences; Downloadable computer software using artificial intelligence for input, output, and display of live-linked interactive 3D renderings, virtual reality, augmented reality, and mixed reality for collaborative design; Downloadable software development kits (SDK); Downloadable computer software for integrating and embedding digital virtual agents, machine learning, deep neural networks, intelligence agents, predictive systems, cognitive process automation, visual computing applications, knowledge virtualization, robotics, natural language processing, genetic and deep learning algorithms, reinforcement learning, semantic ontologies, pattern recognition, knowledge modeling technologies, cognitive enhancement in respect of experience and productivity, accelerating processes, automation, and autonomy in the field of artificial intelligence. Providing temporary use of on-line non-downloadable software and applications using artificial intelligence for creating, enhancing, enabling, operating, and displaying computer applications, computer games, online games, online computer applications, multi-player computer games, interactive 3D game and non-game worlds, mixed 3D and 2D content, films, television programs, videos, and digital media; Providing temporary use of on-line non-downloadable software and applications using artificial intelligence for creating, enhancing, enabling, operating, and displaying virtual, augmented, mixed reality, and extended reality experiences; Providing temporary use of on-line non-downloadable software and applications using artificial intelligence for input, output, and display of live-linked interactive 3D renderings, virtual reality, augmented reality, and mixed reality for collaborative design; Providing temporary use of on-line non-downloadable software for integrating and embedding digital virtual agents, machine learning, deep neural networks, intelligence agents, predictive systems, cognitive process automation, visual computing applications, knowledge virtualization, robotics, natural language processing, genetic and deep learning algorithms, reinforcement learning, semantic ontologies, pattern recognition, knowledge modeling technologies, cognitive enhancement in respect of experience and productivity, accelerating processes, automation, and autonomy in the field of artificial intelligence
20.
Computer graphics interface using visual indicator representing object global volume and/or global volume changes and method therefor
A computer-implemented method and interface provides visualization for a volume and/or a change in a volume of a virtual object, the representation usable in a user interface, comprising a first representation of the virtual object represented by a surface, comprising a set of polygonal faces, positioned in a three-dimensional (3D) virtual space; a bounding object input that defines a bounding object relative to the 3D virtual space; and a plurality of local volumes of the virtual object, wherein a local volume of the plurality of local volumes is computed for geometry of a corresponding face based on a bounding object feature associated with the bounding object and based on at least some vertices of the corresponding face, wherein at least some of the plurality of local volumes are aggregated to determine a global volume for the set of polygonal faces, and providing for display a first visual indicator representing a value of the global volume.
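For a closed triangulated surface, aggregating per-face local volumes into a global volume can be done with signed tetrahedra against a reference point; using the origin as that reference is an illustrative stand-in for the bounding-object feature.

```python
import numpy as np

def global_volume(vertices, faces, ref=np.zeros(3)):
    """Sum signed tetrahedron volumes formed by each triangular face and `ref`."""
    total = 0.0
    for a, b, c in faces:                 # vertex indices of one face
        va, vb, vc = vertices[a] - ref, vertices[b] - ref, vertices[c] - ref
        total += np.dot(va, np.cross(vb, vc)) / 6.0   # signed local volume
    return abs(total)                     # global volume of the closed surface
```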
A compositor generates a representation of a gradient image corresponding to a replaceable background object in a scene. The representation can be generated from a user-specified set of reference points in the scene.
G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
G06T 3/40 - Scaling of whole images or parts of images, e.g. expanding or contracting
G06T 7/90 - Determination of colour characteristics
G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
Disclosed is a method to determine influence of a deformation of a curve on a surface. A processor obtains deformations of the curve and desired deformations of the surface. The processor obtains a first point on the surface and determines the influence of the deformation of the curve on the surface by determining a second point on the curve and iterating over a desired deformation of the surface and a corresponding curve. To iterate, the processor determines a desired position of the first point of the desired deformation, determines a position of the first point based on a position of the second point on the corresponding curve and a parameter indicating the influence, and calculates a difference between the desired position of the first point and the determined position of the first point. Based on the difference, the processor adjusts the parameter, and determines the influence based on the parameter.
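The adjust-and-recompute loop resembles a one-parameter least-squares fit; a sketch follows, where `curve_pos_fn(w)` (the position of the first point implied by influence `w`) is a hypothetical interface and inputs are NumPy vectors.

```python
def fit_influence(desired_pos, curve_pos_fn, w=0.5, lr=0.5, iters=50, eps=1e-4):
    """Iteratively adjust the influence parameter to shrink the position gap."""
    for _ in range(iters):
        pos = curve_pos_fn(w)
        diff = desired_pos - pos                      # the calculated difference
        grad = (curve_pos_fn(w + eps) - pos) / eps    # finite-difference d(pos)/dw
        w += lr * float((diff * grad).sum()) / (float((grad * grad).sum()) + 1e-9)
    return w
```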
An image generation system defines a set of deformation handles having an associated set of one or more control parameters, obtains a set of object points representative of a virtual object, maps the set of object points to a plurality of key pose states, wherein each key pose state is represented by a key pose state data structure corresponding to a key pose that specifies control parameters to place the set of deformation handles to coincide with the set of object points, determines corresponding key pose state data structures, receives a set of deformations to be applied to the set of deformation handles, interpolates poses among an interpolated key pose state set to form a current pose state based on the set of deformations, and adjusts the virtual object based on the interpolated key pose state set.
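Interpolating among key pose states can be as simple as a normalised weighted blend of their control parameters; the linear blend below is an illustrative choice, not the claimed interpolation.

```python
import numpy as np

def interpolate_pose(key_states, weights):
    """Blend (K, P) key-pose control parameters with K weights into one pose."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                        # normalise so the blend is affine
    return w @ np.asarray(key_states, dtype=float)
```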
A scene captured of a live action scene, while a display wall is positioned to be part of the live action scene, may be processed. To perform the processing, stereoscopic image data of the live action scene is received, and display wall metadata for a precursor image displayed on the display wall is determined. Further, a first portion of the stereoscopic image data comprising a stage element in the live action scene is determined based on the stereoscopic image data and the display wall metadata. A second portion of the stereoscopic image data comprising the display wall in the live action scene, with the display wall displaying the precursor image, is also determined. Thereafter, an image matte for the stereoscopic image data is generated based on the first portion and the second portion.
A virtual scene can be generated for use in a captured scene with focus settings that take into account camera position. Virtual images displayed in a virtual scene that is presented on a display wall and captured in a scene can be presented in the virtual scene with a focus or defocus that is dependent on a virtual object position in the virtual scene and a position of a camera relative to the display wall. Defocusing of virtual objects can be such that an eventual defocus when captured by the camera corresponds to what would be a defocus of an object distant from the camera by the sum of a first distance from the camera to the display wall and a second distance, the second distance being a virtual distance in the virtual scene from the virtual object to a virtual camera plane of the virtual scene.
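The distance-summing idea pairs naturally with the thin-lens circle-of-confusion formula; this sketch treats the virtual object as if it sat at the camera-to-wall distance plus its virtual depth behind the wall. The lens parameters are illustrative.

```python
def circle_of_confusion(cam_to_wall, virtual_depth,
                        focal=0.05, aperture=0.02, focus_dist=3.0):
    """Blur diameter (metres, thin-lens model) for a wall-displayed object
    acting as if at distance cam_to_wall + virtual_depth; assumes
    focus_dist > focal and a positive effective distance."""
    d = cam_to_wall + virtual_depth        # effective object distance
    return aperture * focal * abs(d - focus_dist) / (d * (focus_dist - focal))
```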
A scene captured of a live action scene, while a display wall is positioned to be part of the live action scene, may be processed. To perform the processing, image data of the live action scene having a live actor and the display wall displaying a first rendering of a precursor image is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall are determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall. Image quality levels for display wall portions of the display wall in the image data are determined, and pixels associated with the display wall in the image data are adjusted to the image quality levels.
H04N 13/133 - Equalising the characteristics of different image components, e.g. their average brightness or colour balance
H04N 13/15 - Colour aspects of image signals
H04N 13/293 - Generating mixed stereoscopic images; Generating mixed stereoscopic and monoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background
H04N 13/178 - Metadata, e.g. disparity information
H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors whose relative position is equal to or corresponds to the interocular distance
H04N 13/363 - Image reproducers using image projection screens
H04N 13/271 - Image signal generators wherein the generated image signals comprise depth maps or disparity maps
A scene captured of a live action scene, while a display wall is positioned to be part of the live action scene, may be processed. To perform the processing, image data of the live action scene having a live actor and the display wall displaying a first rendering of a precursor image is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall are determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall in the live action scene. Pixel display values for a replacement wall image of higher resolution than the precursor image are determined, and the image data of the captured scene is adjusted using the pixel display values and the image matte.
A processor performing postprocessing obtains an input image containing both bright and dark regions. The processor obtains a threshold between a first pixel value of the virtual production display and a second pixel value of the virtual production display. The processor modifies a region of the input image according to predetermined steps, producing a pattern unlikely to occur within the input image, where the pattern corresponds to a difference between the original pixel value and the threshold. The processor can replace the region of the input image with the pattern to obtain a modified image. The virtual production display can present the modified image. A processor performing postprocessing detects the pattern within the modified image displayed on the virtual production display. The processor calculates the original pixel value of the region by reversing the predetermined steps. The processor replaces the pattern in the modified image with the original pixel value.
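One reversible realisation of the "predetermined steps" is a checkerboard deviation around the threshold whose amplitude stores the excess; this specific pattern is an assumption, since the abstract only requires a pattern unlikely to occur naturally.

```python
import numpy as np

def encode_region(values, threshold):
    """Replace over-threshold pixels with threshold +/- excess (checkerboard)."""
    excess = np.clip(values - threshold, 0.0, None)
    checker = np.indices(values.shape).sum(axis=0) % 2    # alternating 0/1 grid
    signs = np.where(checker == 0, 1.0, -1.0)
    return np.where(excess > 0, threshold + signs * excess, values)

def decode_region(encoded, threshold, pattern_mask):
    """Invert the steps where the detector flagged the pattern: the original
    value is threshold + |deviation|."""
    return np.where(pattern_mask, threshold + np.abs(encoded - threshold), encoded)
```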
G06V 10/60 - Extraction of image or video features relating to luminescence properties, e.g. using a reflectance or lighting model
G06T 7/174 - Segmentation; Edge detection involving the use of two or more images
G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
G06K 9/62 - Methods or arrangements for recognition using electronic means
G06T 7/70 - Determining position or orientation of objects or cameras
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
H04N 5/92 - Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide detection or recognition
Disclosed here is a system and method to increase resolution of a display, such as an LED wall. Some examples describe increasing resolution during postprocessing; others describe methods and systems that can modify the luminance of a display. In further examples, the display may operate at a predetermined frequency by displaying a first image at a first time and a second image at a second time. A selective screen disposed between the display and the light receiver can include multiple light-transmitting elements, such as pixel masks. A light-transmitting element A can redirect a first portion of light transmitted by the display, and a light-transmitting element B can allow a second portion of light transmitted by the display to reach the light receiver. The selective screen can increase the resolution of the display by operating at the predetermined frequency and causing a first portion of the first image to be shown at the first time and a second portion of the second image to be shown at the second time, where the first portion of the first image and the second portion of the second image are different. The predetermined frequency enables the light receiver to form an image based on the first portion of the first image and the second portion of the second image.
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
G06F 3/14 - Digital output to display device
G09G 3/20 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes, for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
09 - Scientific and electrical apparatus and instruments
35 - Advertising; Business affairs
42 - Scientific, technological and industrial services, research and design
Goods and services
(1) Downloadable and recorded computer software for enabling service providers to optimize user monetization, marketing, and customer satisfaction; Downloadable and recorded computer software for creating, developing, managing, monitoring, improving, monetizing, marketing, advertising, and selling digital games; Downloadable and recorded computer software for developing, launching, displaying, editing, managing, measuring, tracking and analyzing advertisements and sales and advertising campaigns; Downloadable and recorded computer software for enabling software developers to provide advertising content; Downloadable and recorded computer software for use in engaging in online advertising exchange, direct deals and cross promotion activities with other publishers and providers of online mobile applications; Downloadable and recorded computer software for tracking, analyzing and managing advertising, sales and ecommerce activities within mobile computer games and applications; Downloadable software development kits (SDK). (1) Business monitoring and consulting services, namely, analyzing digital games to provide strategy and insight regarding digital game creation, design, development, management, monitoring, improvement, monetization, marketing, advertising, and sales for business purposes; Advertising, marketing and promotion services for others provided via mobile and online communications networks; Market research services; Marketing analytics services, namely, providing Internet traffic metrics, statistics, user activity data, audience measurement and marketing and advertising reports; Advertising, marketing and business services, namely, assisting others in developing, launching, displaying, editing, managing, measuring, tracking and analyzing advertisements and sales and advertising campaigns; Providing an online network and online marketplace for others to engage in online advertising exchange, direct deals and cross-promotion with other publishers and providers of online mobile applications; Promoting the goods and services of others, namely, connecting software developers and advertisers in order to distribute advertising content to end users; Providing information through electronic networks about products and services for advertising and sales purposes; Business management services, namely, tracking, analyzing and managing advertising, sales and ecommerce activities within mobile computer games and applications; Business data analysis services, namely, providing predictive data analysis for developers and publishers; Database management in the field of advertisers, advertising content, computer games, software developers, and software applications; Advertisement via mobile phone networks; Digital advertising services; Dissemination of advertising for others via public and private wireless networks for display on mobile devices; Mobile advertising services for others in the field of ad monetization; Online advertising and marketing services in the field of ad monetization
(2) Providing temporary use of non-downloadable software for enabling service providers to optimize user monetization, marketing, and customer satisfaction; Software as a service (SAAS) services featuring software for creating, developing, managing, monitoring, improving, monetizing, marketing, advertising, and selling digital games; Platform as a service (PAAS) featuring computer software platforms for creating, developing, managing, monitoring, improving, monetizing, marketing, advertising, and selling digital games; Providing temporary use of non-downloadable computer software for developing, launching, displaying, editing, managing, measuring, tracking and analyzing advertisements and sales and advertising campaigns; Providing temporary use of on-line non-downloadable software and applications for enabling software developers to provide advertising content; Providing temporary use of non-downloadable computer software for use in engaging in online advertising exchange, direct deals and cross promotion activities with other publishers and providers of online mobile applications; Providing temporary use of non-downloadable computer software for tracking, analyzing and managing advertising, sales and ecommerce activities within mobile computer games and applications; Application service provider, namely, hosting, developing, managing, analyzing, tracking and maintaining applications, software and online networks of others in the fields of advertising and marketing
09 - Scientific and electrical apparatus and instruments
35 - Advertising; Business affairs
42 - Scientific, technological and industrial services, research and design
Goods and services
Downloadable and recorded computer software for enabling service providers to optimize user monetization, marketing, and customer satisfaction; Downloadable and recorded computer software for creating, developing, managing, monitoring, improving, monetizing, marketing, advertising, and selling digital games; Downloadable and recorded computer software for developing, launching, displaying, editing, managing, measuring, tracking and analyzing advertisements and sales and advertising campaigns; Downloadable and recorded computer software for enabling software developers to provide advertising content; Downloadable and recorded computer software for use in engaging in online advertising exchange, direct deals and cross promotion activities with other publishers and providers of online mobile applications; Downloadable and recorded computer software for tracking, analyzing and managing advertising, sales and ecommerce activities within mobile computer games and applications; Downloadable software development kits (SDK). Business monitoring and consulting services, namely, analyzing digital games to provide strategy and insight regarding digital game creation, design, development, management, monitoring, improvement, monetization, marketing, advertising, and sales for business purposes; Advertising, marketing and promotion services for others provided via mobile and online communications networks; Market research services; Marketing analytics services, namely, providing Internet traffic metrics, statistics, user activity data, audience measurement and marketing and advertising reports; Advertising, marketing and business services, namely, assisting others in developing, launching, displaying, editing, managing, measuring, tracking and analyzing advertisements and sales and advertising campaigns; Providing an online network and online marketplace for others to engage in online advertising exchange, direct deals and cross-promotion with other publishers and providers of online mobile applications; Promoting the goods and services of others, namely, connecting software developers and advertisers in order to distribute advertising content to end users; Providing information through electronic networks about products and services for advertising and sales purposes; Business management services, namely, tracking, analyzing and managing advertising, sales and ecommerce activities within mobile computer games and applications; Business data analysis services, namely, providing predictive data analysis for developers and publishers; Database management in the field of advertisers, advertising content, computer games, software developers, and software applications; Advertisement via mobile phone networks; Digital advertising services; Dissemination of advertising for others via public and private wireless networks for display on mobile devices; Mobile advertising services for others in the field of ad monetization; Online advertising and marketing services in the field of ad monetization. 
Providing temporary use of non-downloadable software for enabling service providers to optimize user monetization, marketing, and customer satisfaction; Software as a service (SAAS) services featuring software for creating, developing, managing, monitoring, improving, monetizing, marketing, advertising, and selling digital games; Platform as a service (PAAS) featuring computer software platforms for creating, developing, managing, monitoring, improving, monetizing, marketing, advertising, and selling digital games; Providing temporary use of non-downloadable computer software for developing, launching, displaying, editing, managing, measuring, tracking and analyzing advertisements and sales and advertising campaigns; Providing temporary use of on-line non-downloadable software and applications for enabling software developers to provide advertising content; Providing temporary use of non-downloadable computer software for use in engaging in online advertising exchange, direct deals and cross promotion activities with other publishers and providers of online mobile applications; Providing temporary use of non-downloadable computer software for tracking, analyzing and managing advertising, sales and ecommerce activities within mobile computer games and applications; Application service provider, namely, hosting, developing, managing, analyzing, tracking and maintaining applications, software and online networks of others in the fields of advertising and marketing.
09 - Scientific and electrical apparatus and instruments
35 - Advertising; Business affairs
42 - Scientific, technological and industrial services, research and design
Goods and services
Downloadable and recorded computer software for enabling service providers to optimize user monetization, marketing, and customer satisfaction; Downloadable and recorded computer software for creating, developing, managing, monitoring, improving, monetizing, marketing, advertising, and selling digital games; Downloadable and recorded computer software for developing, launching, displaying, editing, managing, measuring, tracking and analyzing advertisements and sales and advertising campaigns; Downloadable and recorded computer software for enabling software developers to provide advertising content; Downloadable and recorded computer software for use in engaging in online advertising exchange, direct deals and cross promotion activities with other publishers and providers of online mobile applications; Downloadable and recorded computer software for tracking, analyzing and managing advertising, sales and ecommerce activities within mobile computer games and applications; Downloadable software development kits (SDK). Business monitoring and consulting services, namely, analyzing digital games to provide strategy and insight regarding digital game creation, design, development, management, monitoring, improvement, monetization, marketing, advertising, and sales for business purposes; Advertising, marketing and promotion services for others provided via mobile and online communications networks; Market research services; Marketing analytics services, namely, providing Internet traffic metrics, statistics, user activity data, audience measurement and marketing and advertising reports; Advertising, marketing and business services, namely, assisting others in developing, launching, displaying, editing, managing, measuring, tracking and analyzing advertisements and sales and advertising campaigns; Providing an online network and online marketplace for others to engage in online advertising exchange, direct deals and cross-promotion with other publishers and providers of online mobile applications; Promoting the goods and services of others, namely, connecting software developers and advertisers in order to distribute advertising content to end users; Providing information through electronic networks about products and services for advertising and sales purposes; Business management services, namely, tracking, analyzing and managing advertising, sales and ecommerce activities within mobile computer games and applications; Business data analysis services, namely, providing predictive data analysis for developers and publishers; Database management in the field of advertisers, advertising content, computer games, software developers, and software applications; Advertisement via mobile phone networks; Digital advertising services; Dissemination of advertising for others via public and private wireless networks for display on mobile devices; Mobile advertising services for others in the field of ad monetization; Online advertising and marketing services in the field of ad monetization. Providing temporary use of non-downloadable software for enabling service providers to optimize user monetization, marketing, and customer satisfaction; Software as a service (SAAS) services featuring software for creating, developing, managing, monitoring, improving, monetizing, marketing, advertising, and selling digital games; Platform as a service (PAAS) featuring computer software platforms for creating, developing, managing, monitoring, improving, monetizing, marketing, advertising, and selling digital games;
Providing temporary use of non-downloadable computer software for developing, launching, displaying, editing, managing, measuring, tracking and analyzing advertisements and sales and advertising campaigns; Providing temporary use of on-line non-downloadable software and applications for enabling software developers to provide advertising content; Providing temporary use of non-downloadable computer software for use in engaging in online advertising exchange, direct deals and cross promotion activities with other publishers and providers of online mobile applications; Providing temporary use of non-downloadable computer software for tracking, analyzing and managing advertising, sales and ecommerce activities within mobile computer games and applications; Application service provider, namely, hosting, developing, managing, analyzing, tracking and maintaining applications, software and online networks of others in the fields of advertising and marketing
34.
Method for image processing of image data for a two-dimensional display wall with three-dimensional objects
A scene captured of a live action scene, while a display wall is positioned to be part of the live action scene, may be processed. To perform the processing, stereoscopic image data of the live action scene is received, and display wall metadata for a precursor image displayed on the display wall is determined. Further, a first portion of the stereoscopic image data comprising a stage element in the live action scene is determined based on the stereoscopic image data and the display wall metadata. A second portion of the stereoscopic image data comprising the display wall in the live action scene, with the display wall displaying the precursor image, is also determined. Thereafter, an image matte for the stereoscopic image data is generated based on the first portion and the second portion.
H04N 13/133 - Equalising the characteristics of different image components, e.g. their average brightness or colour balance
H04N 13/15 - Colour aspects of image signals
H04N 13/178 - Metadata, e.g. disparity information
H04N 5/272 - Means for inserting a foreground image in a background image, i.e. inlay, outlay
H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors whose relative position is equal to or corresponds to the interocular distance
H04N 13/111 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
The disclosed system modifies luminance of a display associated with a selective screen. The display provides a camera with an image having resolution higher than the resolution of the display by presenting multiple images while the selective screen enables light from different portions of the multiple images to reach the camera. The resulting luminance of the recorded image is lower than a combination of luminance values of the multiple images. The processor obtains a criterion indicating a property of the input image where image detail is unnecessary. The processor detects a region of the input image satisfying the criterion, and determines a region of the selective screen corresponding to the region of the input image. The processor increases the luminance of the display by disabling the region of the selective screen corresponding to the region of the input image.
G09G 3/34 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes, for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, by controlling light from an independent source
G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide detection or recognition
G09G 3/20 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes, for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
G06T 7/70 - Determining position or orientation of objects or cameras
G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
A compositor generates a representation of a gradient image corresponding to a replaceable background object in a scene. The representation can be generated from a user-specified set of reference points in the scene.
G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
G06T 3/40 - Scaling of whole images or parts of images, e.g. expanding or contracting
G06T 7/90 - Determination of colour characteristics
G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
37.
Method for associating production elements with a production approach
A computer-implemented method for managing associations between production elements and production approaches includes displaying, within a breakdown panel, a representation of at least one scene obtained from a script, the representation including a plurality of production elements; displaying, within an approaches panel, a representation of at least some of the plurality of production elements displayed within the breakdown panel, and associated respective user selection indicators; and displaying, within the approaches panel, a representation of at least one production approach. The method further includes receiving a user selection of at least one production element displayed within the approaches panel; receiving a user selection of the at least one production approach. The selected at least one production element is associated with the selected at least one production approach.
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
38.
Method for image processing of image data for image and visual effects on a two-dimensional display wall
A scene captured of a live action scene, while a display wall is positioned to be part of the live action scene, may be processed. To perform the processing, image data of the live action scene having a live actor and the display wall displaying a first rendering of a precursor image is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall are determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall in the live action scene. Pixel display values to add or modify an image effect or a visual effect are determined, and the image data is adjusted using the pixel display values and the image matte.
A realistic feather generation may be represented via a user interface in a modeling system. To perform the feather generation, a curve defining a spine of a feather is received from input to the user interface of the modeling system. The feather is created in the user interface based at least on the curve, where the feather includes barbs extended away from the spine of the feather in one or more directions. The feather is deformed based on one or more deformation parameters. A feather contour of the barbs along at least one of the one or more directions is generated based on a contour parameter defining the feather contour. A barb density for a number of the barbs along the spine of the feather is established based on a barb density parameter. Thereafter, a data structure representing the feather is generated.
A realistic object generation, such as surface object generation (e.g., feather generation), may be represented via a user interface in a modeling system. To perform the object generation, a curve defining a line of an object (e.g., a spine of a feather) is received from input to the user interface of the modeling system. The object is created in the user interface based at least on the curve, where the object includes structures (e.g., barbs of a feather) extended away from the line of the object in one or more directions. The object is deformed based on one or more deformation parameters. An object contour of the structures along at least one of the one or more directions is generated based on a contour parameter defining the object contour. A structure density for a number of the structures along the line of the object is established based on a structure density parameter. Thereafter, a data structure representing the object (e.g., a feather) is generated.
Embodiments described herein provide an approach for animating a character face of an artificial character based on facial poses performed by a live actor. Geometric characteristics of the facial surface corresponding to each facial pose performed by the live actor may be learnt by a machine learning system, which in turn builds a mesh of a facial rig with an array of controllable elements applicable to a character face of an artificial character.
The system obtains an indication of a shape of a cross-section of an elongated shape, and an orientation of the shape. Based on the shape of the cross-section of the elongated shape and the orientation of the shape, the system creates a nonuniform distribution of random numbers mapping uniformly distributed input values to multiple sample points on the surface of the elongated shape. The system provides an input value randomly selected from a uniform distribution of random numbers to the nonuniform distribution of random numbers to obtain a sample point among the multiple sample points on the surface of the elongated shape. The system applies a function to the input value to obtain an indication of a normal associated with the sample point among the multiple sample points. Finally, the system computes an illumination of the elongated shape using the normal.
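The nonuniform mapping described here is essentially inverse-CDF sampling; a sketch follows, where the density over surface parameter positions stands in for whatever the cross-section shape and orientation imply.

```python
import numpy as np

def make_surface_sampler(positions, density):
    """Map uniform u in [0, 1) to surface sample positions distributed
    per `density` (assumed positive) via an inverse-CDF lookup."""
    cdf = np.cumsum(np.asarray(density, dtype=float))
    cdf /= cdf[-1]                            # normalise to a proper CDF
    def sample(u):
        return np.interp(u, cdf, positions)   # inverse CDF by interpolation
    return sample
```

A companion function of the same input value would return the associated normal, as the abstract describes.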
A combustion simulation system is provided. The combustion simulation can be performed using a computing device operated by a computer user or artist. A computer-implemented method of generating one or more visual representations of a combustion event is provided. The method includes simulating the combustion event, which transforms combustion reactants into combustion products, the combustion event occurring at a reference pressure; automatically determining values of combustion properties, the values of the combustion properties being calculated as a function of a nonzero pressure field; and generating the one or more visual representations of the combustion event based on the values of the combustion properties.
G06T 13/60 - 3D [Three Dimensional] animation of natural phenomena, e.g. rain, snow, water or plants
G06F 30/28 - Design optimisation, verification or simulation of the designed object using fluid dynamics, e.g. Navier-Stokes equations or computational fluid dynamics [CFD]
G06F 17/11 - Complex mathematical operations for solving equations
G05B 19/406 - Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of program data in numerical form, characterised by monitoring or safety
Methods and systems are presented for determining a virtual focus model for a camera apparatus, the camera apparatus comprising one or more image capture elements and one or more optics devices through which light in an optical path passes from a stage environment to at least one of the one or more image capture elements, the stage environment including a virtual scene display for displaying a virtual scene.
A realistic feather growth may be represented between two surface manifolds in a modeling system. To perform the feather growth, a feather groom for a plurality of feathers between an inner shell of a creature and an outer shell of the creature is received. An inner manifold for the inner shell and an outer manifold for the outer shell are determined with a plurality of follicle points and a plurality of tip points. A first surface contour definition for the inner manifold and a second surface contour definition for the outer manifold are determined and used to determine a volumetric vector field between the inner manifold and the outer manifold. Thereafter, the plurality of feathers is generated between the inner manifold and the outer manifold using the follicle points, the tip points, and the volumetric vector field.
The processor obtains a third pixel value and a second pixel value of the display. The processor determines a desired pixel value range that exceeds the second pixel value of the display. The processor obtains a threshold between the third pixel value of the display and the second pixel value of the display. The processor obtains a function mapping the desired pixel value range to a range between the threshold and the second pixel value. The processor applies the function to an input image prior to displaying the input image on the display. The display presents the image. Upon recording the presented image, the processor determines a region within the recorded image having a pixel value between the threshold and the second pixel value. The processor increases dynamic range of the recorded image by applying an inverse of the function to the pixel value of the region.
H04N 5/235 - Circuitry for compensating for variation in the brightness of the object
G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
H04N 5/222 - Studio circuitry; Studio devices; Studio equipment
H04N 5/92 - Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
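As an aside, the mapping-and-inverse scheme in the entry above can be pictured with a short sketch. A linear compression of the over-threshold range and the specific values for the threshold and display peak are assumptions; the abstract does not commit to any particular function.

```python
# Minimal sketch: values above the threshold are compressed into the band
# [threshold, display_max] for display, then expanded back in postprocessing.
import numpy as np

display_max = 1.0        # brightest value the display can emit
threshold = 0.8          # start of the reserved band (assumed)
desired_max = 4.0        # brightest value the content actually contains (assumed)

def forward(v):
    """Compress [threshold, desired_max] into [threshold, display_max]."""
    scale = (display_max - threshold) / (desired_max - threshold)
    return np.where(v > threshold, threshold + (v - threshold) * scale, v)

def inverse(v):
    """Expand the reserved band back to the original range."""
    scale = (desired_max - threshold) / (display_max - threshold)
    return np.where(v > threshold, threshold + (v - threshold) * scale, v)

image = np.array([0.2, 0.5, 0.9, 3.5])   # input image with bright regions
recorded = forward(image)                # what the camera records off the display
restored = inverse(recorded)             # postprocessing recovers the HDR values
assert np.allclose(restored, image)
```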
A computer-implemented method for managing associations between production elements and production approaches includes displaying, within a breakdown panel, a representation of at least one scene obtained from a script, the representation including a plurality of production elements; displaying, within an approaches panel, a representation of at least some of the plurality of production elements displayed within the breakdown panel, and associated respective user selection indicators; and displaying, within the approaches panel, a representation of at least one production approach. The method further includes receiving a user selection of at least one production element displayed within the approaches panel and receiving a user selection of the at least one production approach. The selected at least one production element is associated with the selected at least one production approach.
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 16/26 - Visual data mining; Browsing structured data
The disclosed system and method can increase resolution of a display in postprocessing. The processor can obtain multiple images presented on a display, where the display is configured to present the multiple images at a first frame rate higher than a frame rate needed to form a perception of motion. The processor can obtain a mask corresponding to one or more images among the multiple images, where the mask indicates a portion of the one or more images among the multiple images to include in an output image. The processor can increase resolution of the display in proportion to the number of images presented on the display by combining, based on the mask, the one or more images among the multiple images to obtain the output image.
G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
H04N 9/64 - Circuits for processing colour signals
G06T 7/70 - Determining position or orientation of objects or cameras
H04N 5/76 - Television signal recording
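As an aside, the mask-based combination in the entry above can be sketched directly. The 2x column interleave and the simple masked sum are assumptions chosen for illustration; the abstract leaves the mask layout open.

```python
# Minimal sketch: sub-images shown within one perceptual frame are merged,
# with each mask selecting which pixels of each sub-image survive.
import numpy as np

def combine(images, masks):
    """Sum masked sub-images into one output image."""
    out = np.zeros_like(images[0])
    for img, mask in zip(images, masks):
        out += img * mask
    return out

h, w = 4, 4
images = [np.random.rand(h, w) for _ in range(2)]
masks = [np.zeros((h, w)), np.zeros((h, w))]
masks[0][:, ::2] = 1.0   # first sub-image supplies the even columns
masks[1][:, 1::2] = 1.0  # second sub-image supplies the odd columns
output = combine(images, masks)
```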
Methods and systems are presented for generating a virtual scene rendering usable in a captured scene based on a camera position of a camera in a stage environment, a mapping of a plurality of subregions of a virtual scene display in the stage environment to corresponding positions in the stage environment, and details of a virtual scene element. The details might include the subregion of the plurality of subregions in which the given virtual scene element would, at least in part, appear on the virtual scene display, as well as stage subregion depth values. A blur factor for a corresponding subregion might be determined based at least in part on the stage subregion depth value and the virtual subregion depth value. Rendering the virtual scene might take into account the blur factor for the given virtual scene element.
Disclosed is a method to derive the absorption coefficient, transparency, and/or the scattering coefficient from user-specified parameters including roughness, phase function, index of refraction (IOR), and color by performing the simulation once and storing the results of the simulation in an easy-to-retrieve representation, such as a lookup table or an analytic function. To create the analytic function, one or more analytic functions can be fitted to the results of the simulation for the multiple parameters including roughness, phase function, IOR, and color. The lookup table can be combined with the analytic representation. For example, the lookup table can be used to represent the color, roughness, and phase function, while the IOR can be represented by an analytic function. For example, when the IOR is above 2, the lookup table becomes three-dimensional and the IOR is calculated using the analytic function.
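To make the hybrid representation concrete, here is a minimal sketch assuming SciPy's RegularGridInterpolator for the tabulated part and a quadratic polynomial for the analytic IOR term. The random table, fit data, and polynomial degree are stand-ins; real values would come from the one-time simulation.

```python
# Sketch: lookup table over (roughness, phase, color) combined with an
# analytic function of IOR. All numeric values are placeholders.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

roughness = np.linspace(0.0, 1.0, 8)
phase = np.linspace(-0.9, 0.9, 8)
color = np.linspace(0.0, 1.0, 8)
table = np.random.rand(8, 8, 8)            # stand-in for simulated results
lut = RegularGridInterpolator((roughness, phase, color), table)

ior_samples = np.linspace(1.0, 2.5, 16)
sim_values = 0.1 + 0.4 * (ior_samples - 1.0) ** 2   # stand-in simulation output
ior_fit = np.polynomial.Polynomial.fit(ior_samples, sim_values, deg=2)

def scattering_coefficient(r, g, c, ior):
    """Combine the tabulated part with the analytic IOR factor."""
    return float(lut([[r, g, c]])[0]) * float(ior_fit(ior))

print(scattering_coefficient(0.5, 0.1, 0.7, 1.8))
```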
An animation system is provided for generating an animation control rig for character development, configured to manipulate a skeleton of an animated character. A hierarchical representation of puppets includes groups of functions, related in a hierarchy according to character specialization, for creating the animation rig; these functions are derived using base functions of a core component node. The hierarchical nodes may include an archetype node, at least one appendage node, and at least one feature node. In some implementations, portions of a hierarchical node, including the functions from the core component node, may be shared to generate different animation rigs for a variety of characters. In some implementations, portions of a hierarchical node, including the component node functions, may be reused to build similar appendages of a same animation rig.
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods and services
Downloadable computer software for use in creating digital animation, computer generated effects, computer generated scenes, special effects, computer-generated imagery, visual effects, moving images, motion capture, computer-generated graphics, interactive computer graphics, computer modeled humans and animals avatars; downloadable computer software for creating, scanning, displaying, analyzing, editing, processing, recording, transmitting, and printing digital imagery, 3D models, graphics, text, audio and computer-generated effects; downloadable computer software for the integration of text, audio, graphics, and still images; downloadable computer software for the integration of digital imagery and 3D models into an interactive delivery for multimedia applications; downloadable computer software for use in manipulating graphic images and photographic images; downloadable computer software for use in creating interactive 3D content; downloadable computer software that enables the input, output, and display of interactive 3D renderings; none of the foregoing for physical effects production, including prosthetics and makeup services, costumes, models, miniatures, weaponry armor, and set and prop construction; none of the foregoing for professional services consisting of visual effects artist services in connection with audio-visual productions for movie, television, and other production studios Industrial analysis and industrial research services in the field of computer graphics creation, graphics processing, 3D modeling, 3D rendering and virtual computer simulations; computer graphics services; Computer graphics design services, namely, creating 3D models and renderings, animations, virtual characters and avatars; providing temporary use of online, non-downloadable software for 3D-visualization, 3D modeling and 3D rendering; providing temporary use of online, non-downloadable software for image editing and the simulation, visualization and rendering of digital objects and images; providing temporary use of online, non-downloadable computer software for the integration of digital imagery and 3D models into an interactive delivery for multimedia applications; providing temporary use of online, non-downloadable software for graphics processing; providing temporary use of online, non-downloadable cloud-based software for use in creating digital animation, computer-generated effects, computer generated scenes, special effects, computer-generated imagery, visual effects, moving images, motion capture, film making, computer-generated graphics, interactive computer graphics, computer modeled humans and animals avatars; providing temporary use of online, non-downloadable cloud-based computer software for creating, scanning, displaying, analyzing, editing, processing, recording, transmitting, and printing digital imagery, 3D models, graphics, text, audio and computer-generated effects; providing temporary use of online, non-downloadable cloud-based software for the integration of text, audio, graphics, and still images; providing temporary use of online, non-downloadable computer software for use in manipulating graphic images and photographic images; providing temporary use of online, non-downloadable software that enables the input, output, and display of interactive 3D renderings; design, development and maintenance of software, software applications and software application programming interfaces, including in the fields of animation, computer generated effects, computer generated scenes, 
special effects, computer-generated imagery, computer-generated graphics and interactive computer graphics; providing temporary use of online, non-downloadable software and web-based software applications for use in creating digital animation, computer generated effects, computer generated scenes, special effects, computer-generated imagery, computer-generated graphics and interactive computer graphics; providing temporary use of online, non-downloadable software and web-based software applications for use in creating digital animation, computer generated effects, computer generated scenes, special effects, computer-generated imagery, visual effects, moving images, motion capture, computer-generated graphics and interactive computer graphics; software-as-a service (SAAS) services and platform-as-a-service (PAAS) featuring computer software platforms, both featuring software for use in creating animation, computer generated effects, computer generated scenes, special effects, computer-generated imagery, visual effects, moving images, motion capture, computer-generated graphics and interactive computer graphics; video and computer game software design and development services; providing temporary use of on-line non-downloadable software development tools for the design and development of electronic games; providing temporary use of on-line non-downloadable software and web-based applications for accessing streaming audio and video files, games, social networks, text files and multimedia files; application service provider services, namely, hosting, managing, developing, and maintaining applications, software, and web sites of others in the fields of animation, computer generated effects, computer generated scenes, special effects, computer-generated imagery, visual effects, moving image production, motion capture, computer-generated graphics and interactive computer graphics; cloud computing featuring software for use in creating animation, computer generated effects, computer generated scenes, special effects, computer-generated imagery, visual effects, moving images, motion capture, computer-generated graphics and interactive computer graphics; technical support services, namely, troubleshooting of computer software problems; cloud computing featuring software for use in computer graphics creation, 3D modeling, 3D rendering and virtual simulations; platform as a service (PAAS) featuring computer software platforms for editing images, sound, video, and digital media; Platform as a service (PAAS) featuring computer software platforms for computer graphics creation, 3D modeling, 3D rendering, virtual simulations, animation, computer generated effects, computer generated scenes, special effects, computer-generated imagery, visual effects, moving-image production, motion capture, computer-generated graphics and interactive computer graphics; information, advisory and consultancy services relating to the design of animation, computer generated effects, computer generated scenes, special effects, computer-generated imagery, computer-generated graphics and interactive computer graphics; none of the foregoing for physical effects production, including prosthetics and makeup services, costumes, models, miniatures, weaponry armor, and set and prop construction; none of the foregoing for professional services consisting of visual effects artist services in connection with audio-visual productions for movie, television, and other production studios
53.
Increasing dynamic range of a virtual production display
A processor performing preprocessing obtains an input image containing both bright and dark regions. The processor obtains a threshold between a first pixel value of the virtual production display and a second pixel value of the virtual production display. Upon detecting a region of the input image whose original pixel value exceeds the threshold, the processor modifies the region according to predetermined steps producing a pattern unlikely to occur within the input image, where the pattern corresponds to a difference between the original pixel value and the threshold. The processor can replace the region of the input image with the pattern to obtain a modified image. The virtual production display can present the modified image. A processor performing postprocessing detects the pattern within the modified image displayed on the virtual production display. The processor calculates the original pixel value of the region by reversing the predetermined steps. The processor replaces the pattern in the modified image with the original pixel value.
G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
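As an aside, the encode-and-reverse idea in the entry above can be sketched with an assumed marker: a checkerboard whose amplitude carries the over-threshold difference. The marker shape and 8-bit quantization are assumptions, picked only because such patterns rarely occur in natural images.

```python
# Minimal sketch: replace an out-of-range region with a detectable pattern
# encoding (original value - threshold), then reverse it in postprocessing.
import numpy as np

THRESHOLD = 0.8

def encode_region(delta, size=8):
    """Checkerboard whose amplitude quantizes the over-threshold delta."""
    amp = np.round(delta * 255) / 255            # 8-bit quantized difference
    tile = np.indices((size, size)).sum(axis=0) % 2
    return THRESHOLD * tile + amp * (1 - tile)   # alternate threshold / code cells

def decode_region(patch):
    """Recover the original value by reversing the encoding steps."""
    tile = np.indices(patch.shape).sum(axis=0) % 2
    amp = patch[tile == 0].mean()                # cells holding the code
    return THRESHOLD + amp

patch = encode_region(delta=0.15)                # region originally at 0.95
assert abs(decode_region(patch) - 0.95) < 1e-2
```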
A programmatic arbitrary distribution of items in a modeling system may be provided. To perform the distribution, a surface may be received, and a point count of application points associated with locations on the surface may be determined. A density map may be applied over the surface to assign a density to portions of the surface for the point count. Application points are then assigned to locations on the surface according to the density map and a scattering function of the point count, where the scattering function is based on one or more repulsion forces between neighboring points. The one or more repulsion forces are treated as pushing each of the neighboring points apart. Thereafter, the surface may be provided having the application points scattered across the surface based on the one or more repulsion forces.
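A minimal sketch of the repulsion-based scattering follows, assuming a flat unit-square surface, a dense pairwise force computation, and a fixed cutoff radius; the abstract leaves all of these open.

```python
# Sketch: seed points from a density map, then relax them with short-range
# pairwise repulsion so neighboring points push each other apart.
import numpy as np

def scatter(density_map, count, iters=50, step=0.002, radius=0.08):
    h, w = density_map.shape
    flat = density_map.ravel() / density_map.sum()
    idx = np.random.choice(flat.size, size=count, p=flat)   # density-weighted seeds
    pts = np.column_stack((idx % w, idx // w)).astype(float)
    pts += np.random.rand(count, 2)              # jitter within cells
    pts /= [w, h]                                # normalize to the unit square
    for _ in range(iters):
        diff = pts[:, None, :] - pts[None, :, :]     # pairwise offsets
        dist = np.linalg.norm(diff, axis=-1) + 1e-9
        force = np.where(dist < radius, 1.0 / dist**2, 0.0)
        np.fill_diagonal(force, 0.0)                 # no self-repulsion
        pts += step * (diff * force[..., None]).sum(axis=1) / count
        pts = np.clip(pts, 0.0, 1.0)
    return pts

density = np.ones((64, 64)); density[:, 32:] = 4.0   # denser right half
points = scatter(density, count=200)
```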
A processor calibrates the camera by presenting an input image on the display to obtain a presented image. The camera, arbitrarily positioned relative to the display, records the presented image. The processor obtains the input image via a channel different from the display. The processor obtains an indication of a display region associated with the display. The processor determines an input image region corresponding to the display region, and a recorded image region corresponding to the display region. The processor obtains a first pixel value associated with the input image region and a second pixel value associated with the recorded image region. The processor determines a mapping between the first pixel value and the second pixel value, where applying the mapping to the second pixel value substantially produces the first pixel value. The processor stores an identifier associated with the recorded image region and the mapping.
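As an aside, the per-region mapping in the calibration entry above could be as simple as a per-region gain/offset fit, which is the assumption in the sketch below; the actual mapping could be any invertible curve, and all names here are hypothetical.

```python
# Sketch: fit a linear map from recorded pixel values back to input values
# for one display region, and store it under the region's identifier.
import numpy as np

calibration = {}   # region identifier -> (gain, offset)

def calibrate_region(region_id, input_vals, recorded_vals):
    """Least-squares fit: input ~ gain * recorded + offset."""
    A = np.column_stack((recorded_vals, np.ones_like(recorded_vals)))
    (gain, offset), *_ = np.linalg.lstsq(A, input_vals, rcond=None)
    calibration[region_id] = (gain, offset)

def correct(region_id, recorded_vals):
    gain, offset = calibration[region_id]
    return gain * recorded_vals + offset

inp = np.array([0.1, 0.4, 0.7, 0.9])
rec = 0.8 * inp + 0.05                     # what the camera actually saw
calibrate_region("panel_3", inp, rec)
print(correct("panel_3", rec))             # approximately the original inputs
```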
A processor performing postprocessing obtains an input image containing both bright and dark regions. The processor obtains a threshold between a first pixel value of the virtual production display and a second pixel value of the virtual production display. The processor modifies the region according to predetermined steps producing a pattern unlikely to occur within the input image, where the pattern corresponds to a difference between the original pixel value and the threshold. The processor can replace the region of the input image with the pattern to obtain a modified image. The virtual production display can present the modified image. A processor performing postprocessing detects the pattern within the modified image displayed on the virtual production display. The processor calculates the original pixel value of the region by reversing the predetermined steps. The processor replaces the pattern in the modified image with the original pixel value.
An image dataset is processed with a shadow map generated from objects in a virtual scene that can cast shadows, and the scene is rendered independently of the shadows. The shadow map might be edited separately and then applied to a post-render image of the scene to form a shadowed image. Light factor values for pixels of the shadow map might be stored as summed-area table values.
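The summed-area table mentioned above is a standard structure; the sketch below shows the usual construction and the four-lookup box sum that makes rectangular averages of shadow light factors O(1). The data here is random stand-in values.

```python
# Standard summed-area table over per-pixel shadow light factors.
import numpy as np

def summed_area_table(values):
    return values.cumsum(axis=0).cumsum(axis=1)

def box_sum(sat, r0, c0, r1, c1):
    """Sum of values[r0:r1+1, c0:c1+1] using at most four table lookups."""
    total = sat[r1, c1]
    if r0 > 0: total -= sat[r0 - 1, c1]
    if c0 > 0: total -= sat[r1, c0 - 1]
    if r0 > 0 and c0 > 0: total += sat[r0 - 1, c0 - 1]
    return total

light_factors = np.random.rand(512, 512)         # stand-in shadow map
sat = summed_area_table(light_factors)
mean = box_sum(sat, 100, 100, 115, 115) / (16 * 16)   # average over a 16x16 box
```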
The disclosed system modifies luminance of a display associated with a selective screen. The display provides a camera with an image having resolution higher than the resolution of the display by presenting multiple images while the selective screen enables light from different portions of the multiple images to reach the camera. The resulting luminance of the recorded image is lower than a combination of luminance values of the multiple images. The processor obtains a criterion indicating a property of the input image where image detail is unnecessary. The processor detects a region of the input image satisfying the criterion, and determines a region of the selective screen corresponding to the region of the input image. The processor increases the luminance of the display by disabling the region of the selective screen corresponding to the region of the input image.
G09G 3/20 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes, for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
42 - Scientific, technological and industrial services, research and design
Goods and services
(1) Software as a service (SaaS) services featuring software that gives multiple users the ability to import, review and make annotations on images, three-dimensional models, files, audio, video and audio-visual content; Providing temporary use of non-downloadable cloud-based software that allows synchronized real-time reviews of digital media content; Providing temporary use of non-downloadable cloud-based software that gives users the ability to provide input, edits, modifications, suggestions and comments on digital media content; Providing temporary use of non-downloadable cloud-based software for sharing, reviewing, manipulating and collaborating simultaneously on digital media content; Providing temporary use of non-downloadable cloud-based software for synchronous presentation; Providing temporary use of non-downloadable cloud-based software for editing digital media content, sharing and downloading feedback notes.
60.
Removing moving objects from a video scene captured by a moving camera
Methods, an apparatus, and software media are provided for removing unwanted information such as moving or temporary foreground objects from a video sequence. The method performs, for each pixel, a statistical analysis to create a background data model whose color values can be used to detect and remove the unwanted information. The method assumes that for each pixel the background is present in a majority of the frames. The camera that records the video sequence may move relative to the geometry of the video scene. A pixel in a first frame is matched to a location in the geometry. The method determines color values of pixels, matched to the location in the geometry, in successive frames and clusters color values to determine a background color value range. It may use quadratic or better interpolation and extrapolation to determine background color values for unavailable frames.
A processor obtains an input image containing both bright and dark regions. The processor obtains a threshold between a first pixel value and a second pixel value of the display. Upon detecting a region of the input image having an original pixel value above the threshold, the processor can create a data structure including a location of the region in the input image and an original pixel value of the region. The data structure occupies less memory than the input image. The display presents the input image including the region of the image having the original pixel value above the threshold. The processor sends the data structure to a camera, which records the presented image. The processor performing postprocessing obtains the data structure and the recorded image and increases dynamic range of the recorded image by modifying the recorded image based on the data structure.
H04N 5/20 - Circuitry for controlling amplitude response
G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
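As an aside, the data structure in the entry above only needs region locations and original values, which is why it stays far smaller than the image itself. The field names in this sketch are assumptions.

```python
# Sketch: side-channel record of clipped regions, applied in postprocessing
# to restore the original (above-threshold) values.
from dataclasses import dataclass
import numpy as np

@dataclass
class BrightRegion:
    row: int
    col: int
    height: int
    width: int
    original_value: float   # pixel value before display clipping

def restore(recorded, regions):
    """Write the original values back over the clipped regions."""
    out = recorded.copy()
    for r in regions:
        out[r.row:r.row + r.height, r.col:r.col + r.width] = r.original_value
    return out

recorded = np.full((8, 8), 0.5)
recorded[2:4, 2:4] = 1.0                      # highlight clipped by the display
regions = [BrightRegion(2, 2, 2, 2, original_value=3.2)]
hdr = restore(recorded, regions)              # dynamic range increased
```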
Methods, an apparatus, and software media are provided for removing unwanted information such as moving or temporary foreground objects from a video sequence. The method performs, for each pixel, a statistical analysis to create a background data model whose color values can be used to detect and remove the unwanted information. This includes determining a prevalent color cluster from among k clusters of color values for the pixel in successive frames. The method uses k-means clustering. To replace the unwanted information, the method iterates frames to find frames in which a pixel's color value is not included in the prevalent color cluster. In those frames, it replaces the pixel's color value with a value from the prevalent color cluster.
H04N 13/15 - Processing image signals for colour aspects of image signals
H04N 13/211 - Image signal generators using stereoscopic image cameras using a single 2D image sensor using temporal multiplexing
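A minimal sketch of the prevalent-cluster idea, assuming SciPy's kmeans2 with k=3 over one pixel's colors across frames; the abstract fixes k-means but not k or the color space, so both are illustrative choices.

```python
# Sketch: cluster one pixel's colors over time, take the most populated
# cluster as background, and replace outlier frames with its mean color.
import numpy as np
from scipy.cluster.vq import kmeans2

def background_color(samples, k=3):
    """samples: (n_frames, 3) colors of one pixel over time."""
    centroids, labels = kmeans2(samples.astype(float), k, minit="++")
    prevalent = np.bincount(labels, minlength=k).argmax()
    return centroids[prevalent], labels == prevalent

frames = np.vstack([np.tile([0.2, 0.5, 0.3], (40, 1)),     # background
                    np.tile([0.9, 0.1, 0.1], (8, 1))])     # passing object
frames += np.random.normal(0, 0.01, frames.shape)
bg, is_bg = background_color(frames)
cleaned = np.where(is_bg[:, None], frames, bg)   # replace foreground samples
```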
The disclosed system increases resolution of a display. The display operates at a predetermined frequency by displaying a first image at a first time and a second image at a second time. A selective screen disposed between the display and the camera includes multiple light transmitting elements. Light transmitting element A redirects a first portion of light transmitted by the display. Light transmitting element B allows a second portion of light transmitted by the display to reach the camera. The selective screen increases the resolution of the display by operating at the predetermined frequency and causing a first portion of the first image to be shown at the first time, and a second portion of the second image to be shown at the second time. The camera forms an image from the first portion of the first image, and the second portion of the second image.
G09G 3/34 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes, for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, by control of light from an independent source
64.
Blending real and virtual focus in a virtual display environment
Methods and systems are presented for generating a virtual scene rendering of a captured scene based on a relative position of a camera and a virtual scene display in a stage environment, along with real and virtual lens effects. The details might include determining the camera position and virtual display position in the stage environment, and determining a depth value of a virtual scene element displayed on the virtual scene display. A desired focus model can then be determined from focus parameters of the camera, the depth value, and a desired lens effect, and an adjusted focus for the virtual scene element can be determined from the desired focus model. The adjusted focus can then be applied to the camera, the image of the virtual scene element on the virtual scene display, or pixels representing the virtual scene element in a composite image captured by the camera.
Disclosed here are various techniques to increase dynamic range of an image recorded from a display. A processor performing preprocessing splits an input image containing both bright and dark regions into two images: image A containing the bright regions and image B containing the dark regions. The display presents image A and image B in alternating fashion. The camera is synchronized with the display to record image A and image B independently. In postprocessing, a processor obtains the recorded images A and B. The processor increases the pixel value of recorded image A to obtain image A with increased pixel values. Finally, the processor increases the pixel value of the image recorded from the display by combining recorded image A, with its increased pixel values, and recorded image B.
G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
G06V 10/60 - Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
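As an aside, the split/recombine flow in the two entries above can be sketched as follows, assuming a simple threshold split and a known attenuation factor for the bright pass; both are illustrative choices rather than the patented steps.

```python
# Sketch: image A keeps the bright regions (shown attenuated), image B keeps
# the dark ones; postprocessing undoes the attenuation before merging.
import numpy as np

threshold = 1.0          # display peak; values above need the bright pass
attenuation = 0.25       # bright image is displayed attenuated to fit

def split(image):
    bright = np.where(image > threshold, image * attenuation, 0.0)
    dark = np.where(image <= threshold, image, 0.0)
    return bright, dark    # presented on alternating frames

def merge(recorded_a, recorded_b):
    restored_a = recorded_a / attenuation   # undo the attenuation
    return np.where(recorded_a > 0, restored_a, recorded_b)

image = np.array([0.2, 0.7, 2.4, 3.6])
a, b = split(image)
print(merge(a, b))    # approximately [0.2, 0.7, 2.4, 3.6]
```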
Disclosed here are various techniques to increase dynamic range of an image recorded from a display. A processor performing preprocessing splits an input image containing both bright and dark regions into two images: image A containing the bright regions and image B containing the dark regions. The display presents image A and image B in alternating fashion. The camera is synchronized with the display to record image A and image B independently. In postprocessing, a processor obtains the recorded images A and B. The processor increases the pixel value of recorded image A to obtain image A with increased pixel values. Finally, the processor increases the pixel value of the image recorded from the display by combining recorded image A, with its increased pixel values, and recorded image B.
The processor obtains a first pixel value and a second pixel value of the display. The processor determines a desired pixel value range that exceeds the second pixel value of the display. The processor obtains a threshold between the first pixel value of the display and the second pixel value of the display. The processor obtains a function mapping the desired pixel value range to a range between the threshold and the second pixel value. The processor applies the function to an input image prior to displaying the input image on the display. The display presents the image. Upon recording the presented image, the processor determines a region within the recorded image having a pixel value between the threshold and the second pixel value. The processor increases dynamic range of the recorded image by applying an inverse of the function to the pixel value of the region.
H04N 5/235 - Circuitry for compensating for variation in the brightness of the object
G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
H04N 5/222 - Studio circuitry; Studio devices; Studio equipment
H04N 5/92 - Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
A processor obtains an input image containing both bright and dark regions. The processor obtains a threshold between a first pixel value and a second pixel value of the display. Upon detecting a region of the input image having an original pixel value above the threshold, the processor can create a data structure including a location of the region in the input image and an original pixel value of the region. The data structure occupies less memory than the input image. The display presents the input image including the region of the image having the original pixel value above the threshold. The processor sends the data structure to a camera, which records the presented image. The processor performing postprocessing obtains the data structure and the recorded image and increases dynamic range of the recorded image by modifying the recorded image based on the data structure.
H04N 5/235 - Circuitry for compensating for variation in the brightness of the object
G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
H04N 5/222 - Studio circuitry; Studio devices; Studio equipment
H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
A processor performing preprocessing obtains an input image containing both bright and dark regions. The processor obtains a threshold between a first pixel value of the virtual production display and a second pixel value of the virtual production display. Upon detecting a region of the input image whose original pixel value exceeds the threshold, the processor modifies the region according to predetermined steps producing a pattern unlikely to occur within the input image, where the pattern corresponds to a difference between the original pixel value and the threshold. The processor can replace the region of the input image with the pattern to obtain a modified image. The virtual production display can present the modified image. A processor performing postprocessing detects the pattern within the modified image displayed on the virtual production display. The processor calculates the original pixel value of the region by reversing the predetermined steps. The processor replaces the pattern in the modified image with the original pixel value.
G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
71.
Systems and methods for computer animation of an artificial character using facial poses from a live actor
Embodiments described herein provide an approach for animating a character face of an artificial character based on facial poses performed by a live actor. Geometric characteristics of the facial surface corresponding to each facial pose performed by the live actor may be learned by a machine learning system, which in turn builds a mesh of a facial rig of an array of controllable elements applicable to a character face of an artificial character.
A processor obtains an input image containing both bright and dark regions. The processor obtains a threshold between a first pixel value and a second pixel value of the display. Upon detecting a region of the input image having an original pixel value above the threshold, the processor can create a data structure including a location of the region in the input image and an original pixel value of the region. The data structure occupies less memory than the input image. The display presents the input image including the region of the image having the original pixel value above the threshold. The processor sends the data structure to a camera, which records the presented image. The processor performing postprocessing obtains the data structure and the recorded image and increases dynamic range of the recorded image by modifying the recorded image based on the data structure.
An image generator generates images of a set of virtual fibers and effects thereon by processing representations of the set of fibers and computing a representation of a virtual surface for a fiber clump in the set of virtual fibers from an artist parameter representing a desired artist effect, computing correlations of the vertices from a set of vertices based on associations of the vertices corresponding to the artist parameter, computing a set of relevant vertices using the correlations of the vertices, computing orthogonal gradients to produce a plurality of gradients using a selected vertex and the set of relevant vertices for the fiber clump, and computing the virtual surface of the fiber clump from the plurality of gradients.
Disclosed herein is a system to smoothly change the focus of a camera between multiple targets. The system can obtain an indication of a target, an indication of a manner of focus transition between a first target and a second target, and camera settings. The system can determine a point associated with the second target, where the point has a property that focusing the camera on the point places the second target in focus, and the point is closer to the current focus point of the camera than a substantial portion of other points having the property. The system can obtain a nonlinear function indicating a second manner of focus transition between the first target and the second target. The system can change the focus of the camera between the first target and the second target by changing the focus of the camera from the current focus point to the determined point based on the nonlinear function.
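As an aside, the nonlinear transition in the entry above could be any monotone easing curve; the sketch below assumes smoothstep purely for illustration, ramping the focus distance from the current focus point to the determined point.

```python
# Sketch: ease-in/ease-out focus ramp between two focus distances.
def smoothstep(t):
    return t * t * (3.0 - 2.0 * t)

def focus_ramp(start_dist, end_dist, frames):
    """Per-frame focus distances between the two targets."""
    return [start_dist + (end_dist - start_dist) * smoothstep(i / (frames - 1))
            for i in range(frames)]

for d in focus_ramp(start_dist=2.0, end_dist=5.5, frames=8):
    print(f"set focus to {d:.3f} m")   # fed to the camera's focus control
```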
42 - Scientific, technological and industrial services, research and design
Goods and services
Software as a service (SaaS) services featuring software that gives multiple users the ability to import, review and make annotations on images, three-dimensional models, files, audio, video and audio-visual content; Providing temporary use of non-downloadable cloud-based software that allows synchronized real-time reviews of digital media content; Providing temporary use of non-downloadable cloud-based software that gives users the ability to provide input, edits, modifications, suggestions and comments on digital media content; Providing temporary use of non-downloadable cloud-based software for sharing, reviewing, manipulating and collaborating simultaneously on digital media content; Providing temporary use of non-downloadable cloud-based software for synchronous presentation; Providing temporary use of non-downloadable cloud-based software for editing digital media content, sharing and downloading feedback notes
76.
System and method for progressive enhancement of in-app augmented reality advertising
A method of performing operations for executing a progressive interactive augmented reality (AR) advertisement display during execution of a running application is disclosed. Based on receiving a first trigger event from the running application, a first aspect of the progressive AR advertisement is executed within a display window while the application is executing. The first aspect includes an interactive advertisement. Based on receiving a second trigger event from within the first aspect of the progressive AR advertisement, a second aspect of the progressive AR advertisement is executed in the display window while the application is executing. The second aspect includes an interactive AR advertisement.
A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
A63F 13/61 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor, using advertising information
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game, for prompting the player, e.g. by displaying a game menu
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
77.
Method for controlling digital feather generations through a user interface in a computer modeling system
A realistic feather generation may be represented via a user interface in a modeling system. To perform the feather generation, a curve defining a spine of a feather is received from input to the user interface of the modeling system. The feather is created in the user interface based at least on the curve, where the feather includes barbs extending away from the spine of the feather in one or more directions. The feather is deformed based on one or more deformation parameters. A feather contour of the barbs along at least one of the one or more directions is generated based on a contour parameter defining the feather contour. A barb density for a number of the barbs along the spine of the feather is established based on a barb density parameter. Thereafter, a data structure representing the feather is generated.
A programmatic arbitrary distribution of items in a modeling system may be provided. To perform the distribution, a surface may be received, and a point count of application points associated with locations on the surface may be determined. A density map may be applied over the surface to assign a density to portions of the surface for the point count. Application points are then assigned to locations on the surface according to the density map and a scattering function of the point count, where the scattering function is based on one or more repulsion forces between neighboring points. The one or more repulsion forces are treated as pushing each of the neighboring points apart. Thereafter, the surface may be provided having the application points scattered across the surface based on the one or more repulsion forces.
An image generator generates images of a set of virtual fibers and effects thereon by processing representations of the set of fibers and computing a representation of a virtual surface for a fiber clump in the set of virtual fibers from an artist parameter representing a desired artist effect, computing correlations of the vertices from a set of vertices based on associations of the vertices corresponding to the artist parameter, computing a set of relevant vertices using the correlations of the vertices, computing orthogonal gradients to produce a plurality of gradients using a selected vertex and the set of relevant vertices for the fiber clump, and computing the virtual surface of the fiber clump from the plurality of gradients.
Presented here is a system and method to increase the speed of computation of a volumetric scattering render technique. The volumetric scattering can include path tracing, which simulates interactions between a virtual ray of light and a volume. The interaction can include reflection of the virtual ray of light off a particle within the volume. The system can obtain a threshold number of interactions between a virtual ray of light and a three-dimensional object through which the virtual ray of light is traveling. As the system performs the simulation, the system can compare a number of the interactions to the threshold number. Upon determining that the number of interactions is equal to or exceeds the threshold number, the system can terminate the simulation and approximate interactions between the virtual ray of light and the volume using a second rendering technique that is computationally less expensive than simulating the interactions.
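A minimal sketch of the early-termination idea follows, assuming a toy random-walk tracer and a flat placeholder for the cheaper second technique; the real approximation would be whatever less expensive renderer the system selects.

```python
# Sketch: trace until the interaction count hits a threshold, then hand the
# remaining energy to a cheaper multiple-scatter approximation.
import random

MAX_INTERACTIONS = 8

def cheap_multiple_scatter(energy):
    return 0.5 * energy    # placeholder for the less expensive technique

def trace(ray_energy, scatter_prob=0.6, albedo=0.8):
    interactions = 0
    while interactions < MAX_INTERACTIONS:
        if random.random() > scatter_prob:     # ray exits the volume
            return ray_energy
        ray_energy *= albedo                   # attenuation per interaction
        interactions += 1
    # Threshold reached: terminate the simulation and approximate the rest.
    return cheap_multiple_scatter(ray_energy)

print(trace(ray_energy=1.0))
```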
A character rig may be representable as a data structure specifying a plurality of articulated character parts, an element tree specifying relations between character parts, and a set of constraints on the character parts. After receiving rotoscoping movement input data corresponding to attempted alignments of movements of at least some of the character parts with elements moving in a captured live action scene, a rotoscoping constraint may be received. The rotoscoping constraint may include at least a first constraint on the character rig other than a second constraint specified by the data structure of the character rig. Thereafter, rig movement inputs for a second set of character parts distinct from the first set of character parts may be accepted, and the character rig may be moved according to the rig movement inputs while constrained by the rotoscoping constraint.
A representation of a surface of one or more objects positioned in a virtual space is obtained in a computer animation system. Thereafter, a guide curve specification of a guide curve in the virtual space relative to the surface is received. Thereafter, the computer animation system computes a first set of tangent vector values for differentiable locations along the guide curve and computes a second set of tangent vector values for nondifferentiable locations along the guide curve. Using the first set and second set, the computer animation system computes a third set of tangent vector values for locations on the surface other than locations along the guide curve and computes a tangent vector field over the surface from at least the first set of tangent vector values, the second set of tangent vector values, and the third set of tangent vector values.
In an image processing system, a scan of an actor is converted to a high-resolution two-dimensional map, which is converted to a low-resolution map and to a facial rig model. Manipulations of the facial rig create a modified facial rig. A new low-resolution two-dimensional map can be obtained from the modified facial rig, and a neural network can be used to generate a new high-resolution two-dimensional map that can be used to generate a mesh that is the mesh of the scan, modified by the manipulations of the facial rig.
In an image processing system, an image insertion is to be included onto, or relative to, a first and second frame, each depicting images of a set of objects of a geometric model. A point association is determined for a depicted object that is depicted in both the first frame and the second frame, representing reference coordinates in a virtual scene space of a first location on the depicted object independent of at least one position change and a mapping of a first image location in the first image to where the first location appears in the first image. A corresponding location in the second image is determined based on where the first location on the depicted object appears according to the reference coordinates in the virtual scene space and a second image location on the second image where the first location appears in the second image.
In an image processing system, an artist user interface provides for user input of specifications for an inserted object, specified in frame space. The inserted objects can be specified in frame space but can be aligned with object points in a virtual scene space. For other frames, where the object points move in the frame space, the object movements are applied to the inserted object in the frame space. The alignment can be performed manually by the user or determined programmatically.
An imagery processing system determines alternative pixel color values for pixels of captured imagery where the alternative pixel color values are obtained from alternative sources. A main imagery capture device, such as a camera, captures main imagery such as still images and/or video sequences, of a live action scene. Alternative devices capture imagery of the live action scene, in some spectra and form, and that alternative imagery is processed to provide user-selectable alternatives for pixel ranges from the main imagery.
An animation system wherein a machine learning model is adopted to generate animated facial actions based on parameters obtained from a live actor. Specifically, anatomical structures such as a facial muscle topology and a skull surface that are specific to the live actor may be used. A skull surface that is specific to the live actor may be determined based on facial scans of the live actor and generic tissue depth data. For example, the facial scans of the live actor may provide a skin surface topology of the live actor, based on which the skull surface underneath the skin surface can be derived by “offsetting” the skin surface with the corresponding soft tissue depth at different sampled points on the skin surface.
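As an aside, the “offsetting” step above amounts to moving each skin vertex inward along its normal by the local tissue depth. The sketch below uses stand-in arrays; real inputs would come from the facial scans and generic tissue-depth tables.

```python
# Sketch: derive skull points by offsetting skin points against their normals.
import numpy as np

skin_vertices = np.random.rand(100, 3)            # stand-in scanned skin points
normals = np.tile([0.0, 0.0, 1.0], (100, 1))      # stand-in outward unit normals
tissue_depth = np.full(100, 0.012)                # stand-in soft-tissue depths (m)

# Skull point = skin point moved inward (against the normal) by the depth.
skull_vertices = skin_vertices - normals * tissue_depth[:, None]
```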
An animation system wherein a machine learning model is adopted to generate animated facial actions based on parameters obtained from a live actor. Specifically, anatomical structures such as a facial muscle topology and a skull surface that are specific to the live actor may be used. A muscle structure of simplified “pseudo” muscles, which approximate the actual muscle topology but with a reduced degree of freedom, is determined to improve computational efficiency.
An animation system wherein a machine learning model is adopted to generate animated facial actions based on parameters obtained from a live actor. Specifically, anatomical structures such as a facial muscle topology and a skull surface topology that are specific to the live actor may be used. A skull surface that is specific to the live actor may be determined based on facial scans of the live actor and generic tissue depth data. For example, the facial scans of the live actor may provide a skin surface topology of the live actor, based on which the skull surface underneath the skin surface can be derived by “offsetting” the skin surface with the corresponding soft tissue depth at different sampled points on the skin surface.
The present description relates to light patterns used in a live action scene of a visual production to encode information associated with objects in the scene, such as movement and position of the objects. A data capture system includes active markers that emit light of a particular wavelength in predefined strobing patterns. In some implementations, the active markers are instructed to emit an assigned signature pattern of light through a signal controller sending signals to a control unit. Various components are synchronized such that pulsing of light corresponds to time slices and particular frames captured by the performance capture system. The data representing the pattern is embedded in illuminated and blank frames. Frames showing the light pattern are analyzed to extract information about the active markers, such as identification of the active markers and objects to which they are attached.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
G06T 7/593 - Depth or shape recovery from multiple images from stereo images
G06T 7/70 - Determining position or orientation of objects or cameras
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
H04N 5/235 - Circuitry for compensating for variation in the brightness of the object
A61B 90/00 - Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups, e.g. for luxation treatment or for protecting wound edges
A61B 5/11 - Measuring movement of the whole body or parts thereof, e.g. head or hand tremor or mobility of a limb
A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
G06V 10/60 - Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
91.
SYSTEMS AND METHODS FOR BUILDING A MUSCLE-TO-SKIN TRANSFORMATION IN COMPUTER ANIMATION
An animation system wherein a machine learning model is adopted to learn a transformation relationship between facial muscle movements and skin surface movements. For example, for the skin surface representing "smile," the transformation model derives movement vectors relating to what facial muscles are activated, what are the muscle strains, what is the joint movement, and/or the like. Such derived movement vectors may be used to simulate the skin surface "smile."
Implementations provide a wearable article for a performance capture system. In some implementations, a wearable article includes one or more regions, where the one or more regions are configured to be worn on at least a portion of a body of a user, and where at least one of the one or more regions are configured to hold performance capture equipment in predetermined positions. In some implementations, the wearable article also includes a plurality of mounting mechanisms coupled to the one or more regions for mounting reference markers to be used for position determination. In some implementations, the wearable article also includes a plurality of fastening mechanisms coupled to the one or more regions for fastening devices and accessories for controlling the reference markers.
A method for generating one or more visual representations of an object colliding with an interface between a simulated fluid and a material. The method includes obtaining shape and movement data of a bulk fluid and an object, identifying an interface where the bulk fluid covers a portion of the object, generating an emitted fluid at the interface, and generating shape and movement data of the emitted fluid interacting with the object.
Methods and systems for applying a vignette effect to a rendered computer-generated image are presented. A kernel image including a kernel shape is overlaid on a control image including a vignette shape, creating a region where the kernel shape and control image overlap. A region of the rendered image corresponding to the overlapping region is blurred based on the shape of the overlapping region. A vignetting factor indicating a loss in intensity as a result of the blurring is computed based on the area of the overlapping region and the area of the kernel shape. The intensity of the blurred region is scaled upward based on the vignetting factor. The position of the kernel image on the control image is shifted, resulting in a new overlapping region. Blurring and intensity compensation are performed for the region of the rendered image corresponding to the new overlapping region.
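As an aside, the vignetting factor above is the overlap area divided by the kernel area; the sketch below computes it for binary masks, which is an assumption — real kernels and control images would typically be grayscale.

```python
# Sketch: vignetting factor as overlap area over kernel area, and the
# compensating scale applied to the blurred region's intensity.
import numpy as np

kernel = np.ones((31, 31))                 # kernel shape as a binary mask
control = np.zeros((101, 101))
control[20:80, 20:80] = 1.0                # vignette shape on the control image

def vignetting_factor(control_img, kernel_mask, top, left):
    h, w = kernel_mask.shape
    window = control_img[top:top + h, left:left + w]
    overlap = (window * kernel_mask).sum()
    return overlap / kernel_mask.sum()     # 1.0 means no intensity loss

factor = vignetting_factor(control, kernel, top=10, left=10)
compensation = 1.0 / max(factor, 1e-6)     # scale blurred pixels upward
```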
Methods and systems for defocusing a rendered computer-generated image are presented. Pixel values for a pixel array are determined from a scene description. A blur amount for each pixel is determined based on a lens function representing a lens shape and/or effect. A blur amount and blur transparency value are determined for the pixel based on the lens function and pixel depth. A convolution range comprising pixels adjacent to the pixel is determined based on the blur amount. A blend color value is determined for the pixel based on the color value of the pixel, color values of pixels in the convolution range, and the blur transparency value. The blend color value is scaled based on the blend color value and a modified pixel color value is determined from scaled blend color values.
An animation system wherein a machine learning model is adopted to learn a transformation relationship between facial muscle movements and skin surface movements. For example, for the skin surface representing “smile,” the transformation model derives movement vectors relating to what facial muscles are activated, what are the muscle strains, what is the joint movement, and/or the like. Such derived movement vectors may be used to simulate the skin surface “smile.”
An animation system wherein a machine learning model is adopted to learn a transformation relationship between facial muscle movements and skin surface movements. For example, for the skin surface representing “smile,” the transformation model derives movement vectors relating to what facial muscles are activated, what are the muscle strains, what is the joint movement, and/or the like. Such derived movement vectors may be used to simulate the skin surface “smile.”
Embodiments facilitate the calibration of cameras in a live action scene. In some embodiments, a system receives images of the live action scene from a plurality of cameras. The system further receives reference point data generated from a performance capture system, where the reference point data is based on at least three reference points, where the at least three reference points are positioned within the live action scene, and where distances between the at least three reference points are predetermined. The system further determines a location and orientation of each camera based on the reference point data.
An animation system wherein a machine learning model is adopted to learn a transformation relationship between facial muscle movements and skin surface movements. For example, for the skin surface representing “smile,” the transformation model derives movement vectors relating to what facial muscles are activated, what are the muscle strains, what is the joint movement, and/or the like. Such derived movement vectors may be used to simulate the skin surface “smile.”
Embodiments facilitate the calibration of cameras in a live action scene using drones. In some embodiments, a method configures a plurality of reference cameras to observe at least one portion of the live action scene. The method further configures one or more moving cameras having unconstrained motion to observe one or more moving objects in the live action scene and to observe at least three known reference points associated with the plurality of reference cameras. The method further receives reference point data in association with the one or more moving cameras, where the reference point data is based on the at least three known reference points. The method further computes a location and an orientation of each moving camera of the one or more moving cameras based on one or more of the reference point data and one or more locations of one or more reference cameras of the plurality of reference cameras.
G01B 21/04 - Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant, for measuring length, width, or thickness by measuring coordinates of points
G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
H04N 5/222 - Studio circuitry; Studio devices; Studio equipment