Embodiments of the invention provide efficient and intuitive techniques for creating digital characters. One embodiment of the invention provides a method of customizing a digital avatar. A digital avatar may be displayed on a display of an electronic device. An audio-visual user interface may be provided for customizing the digital avatar based on a spoken conversation between a user and the digital avatar.
Described is a method for controlling motion of a digital character, comprising: receiving one or more behaviour commands; translating the one or more behaviour commands into a time-sequence of channel parameters for one or more animation channels; receiving one or more motion sources; determining one or more motion parameters by applying the channel parameters to corresponding blend nodes in a blend tree based on the motion sources; and controlling motion of the digital character based on the one or more motion parameters. Also described is a system implementing the method.
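The staged pipeline in this abstract (behaviour commands to channel parameters to blend nodes to motion parameters) can be illustrated with a minimal Python sketch; the command names, the single "arm_swing" channel, and the linear blend node are illustrative assumptions, not details from the patent.

```python
# Sketch: behaviour commands become time-sequenced channel parameters that
# weight blend nodes over motion sources. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class MotionSource:
    name: str

    def sample(self, t: float) -> dict:
        # A real source would sample a clip or a procedural generator;
        # here each source exposes a single joint-angle channel.
        return {"arm_swing": 1.0 if self.name == "walk" else 0.2}

def translate_command(command: str, steps: int) -> list:
    """Translate a behaviour command into a time-sequence of channel
    parameters: here, a blend weight that ramps in over the sequence."""
    rate = 2.0 if command == "hurry" else 1.0
    return [{"walk_weight": min(1.0, rate * i / steps)} for i in range(steps + 1)]

def blend_node(params: dict, walk: MotionSource, idle: MotionSource, t: float) -> dict:
    """One blend node of the tree: linear interpolation between two motion
    sources, driven by a channel parameter."""
    w = params["walk_weight"]
    a, b = walk.sample(t), idle.sample(t)
    return {k: w * a[k] + (1.0 - w) * b[k] for k in a}

walk, idle = MotionSource("walk"), MotionSource("idle")
for step, params in enumerate(translate_command("hurry", steps=4)):
    motion = blend_node(params, walk, idle, t=step * 0.25)
    print(step, motion)   # motion parameters that drive the character
```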
Interaction with a Computer is provided via an autonomous virtual embodied Agent. The Computer outputs Digital Content, which includes any content that exists in the form of digital data and is representable to a User. A subset of, or all, Digital Content is configured as Shared Digital Content which is representable to both the User and to the Agent.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
Visual Prompt Tuning provides fine-tuning for transformer-based vision models. Prompt Vectors are added as additional inputs to Vision Transformer models, alongside image patches which have been linearly projected and combined with a positional embedding. The transformer architecture allows the prompts to be optimized using gradient descent without modifying or removing any of the Vision Transformer parameters. An Image Recognition System with Visual Prompt Tuning improves a pre-trained vision model by adapting it to downstream tasks, tuning the pre-trained model using a visual prompt (a minimal sketch follows the classification entries below).
G06V 10/774 - Generation of training pattern sets; Processing of image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation; Bootstrap methods, e.g. bagging or boosting
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
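The Visual Prompt Tuning recipe above lends itself to a short sketch: freeze a (toy) Vision Transformer, prepend learnable prompt tokens to the projected patches, and optimize only the prompts by gradient descent. The tiny encoder below stands in for a real pre-trained ViT; its sizes and the dummy data are assumptions.

```python
# Hedged sketch of Visual Prompt Tuning: learnable prompt vectors are
# concatenated with frozen patch embeddings, and only the prompts get gradients.
import torch
import torch.nn as nn

class ToyViTWithPrompts(nn.Module):
    def __init__(self, num_patches=16, dim=32, num_prompts=4, num_classes=10):
        super().__init__()
        self.patch_proj = nn.Linear(48, dim)          # linear projection of patches
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)
        # Freeze everything belonging to the "pre-trained" model ...
        for p in self.parameters():
            p.requires_grad = False
        # ... then add trainable prompt vectors as extra input tokens.
        self.prompts = nn.Parameter(torch.randn(1, num_prompts, dim) * 0.02)

    def forward(self, patches):                        # patches: (B, 16, 48)
        x = self.patch_proj(patches) + self.pos_embed  # frozen projection + pos. emb.
        x = torch.cat([self.prompts.expand(x.size(0), -1, -1), x], dim=1)
        return self.head(self.encoder(x).mean(dim=1))

model = ToyViTWithPrompts()
opt = torch.optim.Adam([model.prompts], lr=1e-3)       # only the prompts are tuned
logits = model(torch.randn(2, 16, 48))
loss = nn.functional.cross_entropy(logits, torch.tensor([1, 3]))
loss.backward()
opt.step()
```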
A computer graphics animation system is provided to help prevent the generation of undesirable shapes by providing realistic examples of a subject, which are incorporated into an interpolation function that can be used to animate a new shape deformation of the subject.
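One way to realize an interpolation function built from realistic examples is radial-basis interpolation over example shapes, as in the hedged sketch below; the Gaussian kernel, the one-dimensional pose parameter, and the toy offsets are illustrative choices, not the patented method.

```python
# Sketch of example-based shape interpolation: realistic example shapes are
# baked into a radial-basis interpolant, so new deformations stay inside the
# space spanned by plausible examples.
import numpy as np

examples = {0.0: np.array([0.0, 0.0]),   # pose parameter -> example vertex offsets
            0.5: np.array([1.0, 0.3]),
            1.0: np.array([1.2, 1.0])}

params = np.array(list(examples.keys()))
shapes = np.stack(list(examples.values()))

def rbf(r, eps=2.0):
    return np.exp(-(eps * r) ** 2)        # Gaussian kernel

# Solve for weights so the interpolant reproduces every example exactly.
A = rbf(np.abs(params[:, None] - params[None, :]))
weights = np.linalg.solve(A, shapes)

def deform(p: float) -> np.ndarray:
    """Interpolated shape deformation for a new pose parameter p."""
    return rbf(np.abs(p - params)) @ weights

print(deform(0.25))   # a new deformation blended from the realistic examples
```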
Embodiments described herein relate to methods and systems for animating (bringing to life) an Agent, which may be a virtual object, digital entity, and/or robot. A Router enables seamless interactions between a user and the Agent via multiple Skill Modules. Skill Modules may include conversation corpora and/or other applications. Embodiments described herein may improve Conversation Orchestration in Interactive Agents in the context of multi-modal human-computer interactions.
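A minimal sketch of the Router idea: dispatch each user turn to the best-matching Skill Module. The keyword-overlap scoring below is an assumed stand-in for whatever orchestration the embodiments actually use.

```python
# Illustrative Router over Skill Modules; skills and keywords are invented.
from typing import Callable

skills: dict[str, Callable[[str], str]] = {
    "smalltalk": lambda text: "Nice to meet you!",
    "weather":   lambda text: "It looks sunny today.",
}

keywords = {"smalltalk": {"hello", "hi", "name"},
            "weather":   {"weather", "rain", "sunny"}}

def route(user_text: str) -> str:
    """Pick the Skill Module whose keywords best match the utterance."""
    words = set(user_text.lower().split())
    best = max(skills, key=lambda s: len(words & keywords[s]))
    return skills[best](user_text)

print(route("hello there"))          # -> smalltalk skill
print(route("will it rain today"))   # -> weather skill
```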
Skeletal Animation is improved using an Actuation System for animating a Virtual Character or Digital Entity, including a plurality of Joints associated with a Skeleton of the Virtual Character or Digital Entity and at least one Actuation Unit Descriptor defining a Skeletal Pose with respect to a first Skeletal Pose. The Actuation Unit Descriptors are represented using Rotation Parameters, and one or more of the Joints of the Skeleton are driven using corresponding Actuation Unit Descriptors.
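A rough sketch of driving joints from weighted Actuation Unit Descriptors, with rotations simplified to per-axis Euler offsets from a reference pose; all unit and joint names are illustrative assumptions.

```python
# Actuation Unit Descriptors store per-joint rotation offsets relative to a
# reference (first) Skeletal Pose; weighted units drive the joints.
import numpy as np

reference_pose = {"jaw": np.zeros(3), "brow_l": np.zeros(3)}   # Euler angles (rad)

actuation_units = {   # descriptor -> rotation offsets from the reference pose
    "jaw_open":   {"jaw": np.array([0.4, 0.0, 0.0])},
    "brow_raise": {"brow_l": np.array([0.0, 0.0, 0.2])},
}

def pose(unit_weights: dict) -> dict:
    """Drive joints by accumulating weighted Actuation Unit rotations."""
    out = {joint: rot.copy() for joint, rot in reference_pose.items()}
    for unit, w in unit_weights.items():
        for joint, rot in actuation_units[unit].items():
            out[joint] += w * rot
    return out

print(pose({"jaw_open": 0.5, "brow_raise": 1.0}))
```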
Embodiments described herein relate to the autonomous animation of Gestures by the automatic application of animations to Input Text, or the automatic application of animation Mark-up, wherein the Mark-up triggers nonverbal communication expressions or Gestures. In order for an Embodied Agent's movements to come across as natural and human-like as possible, a Text-To-Gesture Algorithm (TTG Algorithm) analyses the Input Text of a Communicative Utterance before it is uttered by an Embodied Agent and marks it up with appropriate and meaningful Gestures, given the meaning, context, and emotional content of the Input Text and the gesturing style or personality of the Embodied Agent.
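A toy sketch of a text-to-gesture pass: scan the utterance for lexical cues and insert gesture mark-up before the triggering words. The lexicon and tag format are assumptions, not the patented TTG Algorithm.

```python
# Mark up input text with gesture tags ahead of the words that trigger them.
gesture_lexicon = {
    "big": "<gesture type='beat' shape='wide'/>",
    "you": "<gesture type='deictic' target='user'/>",
    "no":  "<gesture type='headshake'/>",
}

def mark_up(utterance: str) -> str:
    out = []
    for word in utterance.split():
        cue = word.lower().strip(".,!?")
        if cue in gesture_lexicon:
            out.append(gesture_lexicon[cue])   # gesture fires with this word
        out.append(word)
    return " ".join(out)

print(mark_up("No, you get a big surprise!"))
```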
A computer implemented method for parsing a sensorimotor Event experienced by an Embodied Agent into symbolic fields of a WM event representation mapping to a sentence defining the Event is described, the method including the steps of: attending to a participant object; classifying the participant object; and making a series of cascading determinations about the Event, wherein some determinations are conditional on the results of previous determinations, and wherein each determination sets a field in the WM event representation.
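The cascading-determination structure can be sketched directly: each step fills one field of a working-memory event frame, and later determinations branch on earlier results. Field names and the toy classifiers below are illustrative assumptions.

```python
# Each determination sets a field in the WM event representation; the verb
# determination is conditional on the earlier category determination.
def parse_event(percept: dict) -> dict:
    event = {}
    event["participant"] = percept["attended_object"]                     # attend
    event["category"] = "agent" if percept["moves_itself"] else "object"  # classify
    if event["category"] == "agent":                  # cascade branches here
        event["verb"] = "chase" if percept["approaching"] else "wander"
        event["patient"] = percept.get("target")      # only agents act on patients
    else:
        event["verb"] = "fall" if percept["moving_down"] else "rest"
    return event

print(parse_event({"attended_object": "dog", "moves_itself": True,
                   "approaching": True, "target": "ball"}))
# -> maps to a sentence like "the dog chases the ball"
```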
The present invention relates to a computer implemented system for animating a virtual object or digital entity. It has particular relevance to animation using biologically based models or behavioural models, particularly neurobehavioural models. There is provided a plurality of modules having a computational element and a graphical element. The modules are arranged in a required structure, have at least one variable, and are associated with at least one connector. The connectors link variables between modules across the structure, and the modules together provide a neurobehavioural model. There is also provided a method of controlling a digital entity in response to an external stimulus.
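A minimal sketch of the module-and-connector pattern, assuming modules hold named variables plus a computational step and connectors copy scaled variables between modules on each tick; the Eye/Arousal modules are invented for illustration.

```python
# Modules update their own variables; connectors link variables across modules.
class Module:
    def __init__(self, name, **variables):
        self.name, self.vars = name, dict(variables)

    def step(self):
        pass  # subclasses update self.vars here

class Connector:
    def __init__(self, src, src_var, dst, dst_var, weight=1.0):
        self.src, self.src_var = src, src_var
        self.dst, self.dst_var, self.weight = dst, dst_var, weight

    def propagate(self):
        self.dst.vars[self.dst_var] = self.weight * self.src.vars[self.src_var]

class Eye(Module):
    def step(self):
        self.vars["brightness"] = 0.9          # pretend external stimulus

class Arousal(Module):
    def step(self):
        self.vars["level"] = 0.5 + 0.5 * self.vars.get("input", 0.0)

eye, arousal = Eye("eye", brightness=0.0), Arousal("arousal", level=0.0)
connectors = [Connector(eye, "brightness", arousal, "input", weight=0.8)]

for _ in range(2):                              # two network ticks
    for m in (eye, arousal):
        m.step()
    for c in connectors:
        c.propagate()
print(arousal.vars["level"])                    # responds to the stimulus
```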
A Markup System includes a Rule Processor and a set of Rules for applying Markup to augment the communication of a Communicative Intent by an Embodied Agent. Markup applied to a Communicative Utterance applies Behaviour Modifiers and/or Elegant Variations to the Communicative Utterance.
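A possible reading of the Rule Processor, sketched below: each Rule tests a Communicative Intent and, on a match, applies a Behaviour Modifier (mark-up) or an Elegant Variation. The rule contents and tag syntax are assumptions.

```python
# Rules pair a predicate over the intent with a transform of the utterance.
import random

rules = [
    (lambda intent: intent == "apologise",
     lambda text: f"<sad intensity='0.6'>{text}</sad>"),            # behaviour modifier
    (lambda intent: intent == "greet",
     lambda text: random.choice(["Hello!", "Hi there!", text])),    # elegant variation
]

def apply_markup(intent: str, utterance: str) -> str:
    for matches, transform in rules:
        if matches(intent):
            return transform(utterance)
    return utterance

print(apply_markup("apologise", "I'm sorry about that."))
```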
Embodiments of architecture, systems, and methods for modeling dynamics between behavior and emotional states in an artificial nervous system are described herein. A computer implemented emotion system of an artificial nervous system for animating a virtual object, digital entity, or robot, is provided, comprising: a plurality of states, each state of the plurality of states representing an emotional state (ES) of the artificial nervous system; a module for processing a plurality of inputs, the processed plurality of inputs applied to the plurality of states. Other embodiments may be described and claimed.
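One way to picture such an emotion system is a set of named states that decay toward a baseline while processed inputs push them up or down, as in this sketch; the decay rates and input-to-state mapping are invented for illustration.

```python
# Emotional states with passive decay plus a module applying processed inputs.
states = {"joy": 0.2, "fear": 0.1}
decay = {"joy": 0.9, "fear": 0.8}              # per-tick decay toward 0

input_effects = {                               # processed input -> state deltas
    "user_smiles": {"joy": +0.3, "fear": -0.1},
    "loud_noise":  {"fear": +0.4},
}

def tick(inputs):
    for s in states:                            # passive dynamics
        states[s] *= decay[s]
    for event in inputs:                        # apply processed inputs to states
        for s, delta in input_effects.get(event, {}).items():
            states[s] = min(1.0, max(0.0, states[s] + delta))

tick(["user_smiles"])
print(states)   # joy rises, fear decays; this would drive the animation
```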
Embodiments described herein relate to a method of changing the connectivity of a Cognitive Architecture for animating an Embodied Agent, which may be a virtual object, digital entity, and/or robot, by applying Mask Variables to Connectors linking computational Modules. Mask Variables may turn Connectors on or off; more flexibly, they may modulate the strength of Connectors. Operations which apply several Mask Variables at once put the Cognitive Architecture into different Cognitive Modes of behaviour.
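A small sketch of Mask Variables, assuming each connector weight is multiplied by a mask in [0, 1] and a Cognitive Mode sets many masks at once; the mode and connector names are illustrative.

```python
# A Cognitive Mode is a bundle of Mask Variables applied to connector weights.
connector_weights = {"vision->speech": 1.0, "memory->speech": 1.0,
                     "vision->gaze": 1.0}

cognitive_modes = {
    "listening":    {"vision->speech": 0.0, "memory->speech": 0.2, "vision->gaze": 1.0},
    "storytelling": {"vision->speech": 0.3, "memory->speech": 1.0, "vision->gaze": 0.5},
}

def effective_weights(mode: str) -> dict:
    masks = cognitive_modes[mode]
    # A mask of 0/1 switches the connector off/on; fractional values modulate it.
    return {c: w * masks.get(c, 1.0) for c, w in connector_weights.items()}

print(effective_weights("listening"))
```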
Computational structures provide Embodied Agents with memory which can be populated in real time from Experience and/or authored. Embodied Agents (which may be virtual objects, digital entities or robots) are provided with one or more Experience Memory Stores which influence or direct the behaviour of the Embodied Agents. An Experience Memory Store may include a Convergence Divergence Zone (CDZ), which simulates the ability of human memory to represent external reality in the form of mental imagery or simulation that can be re-experienced during recall. A Memory Database may be generated in a simple, authorable way, enabling Experiences to be learned during live operation of the Embodied Agents or authored. Eligibility-Based Learning determines which aspects of streams of multimodal information are stored in the Experience Memory Store.
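Eligibility-Based Learning might be sketched as a threshold on a salience score computed per multimodal frame, with recall searching the stored experiences; the scoring rule below is an assumption, not the CDZ mechanism itself.

```python
# Only frames whose eligibility clears a threshold enter the memory store.
memory_store = []

def eligibility(frame: dict) -> float:
    return frame["novelty"] * frame["salience"]

def observe(frame: dict, threshold: float = 0.25):
    if eligibility(frame) >= threshold:
        memory_store.append(frame)              # experience learned live

def recall(cue: str):
    """Re-experience: return stored frames where any modality mentions the cue."""
    return [f for f in memory_store if cue in f["visual"] or cue in f["audio"]]

observe({"visual": "red ball", "audio": "laughter", "novelty": 0.9, "salience": 0.6})
observe({"visual": "wall", "audio": "silence", "novelty": 0.1, "salience": 0.2})
print(recall("ball"))    # only the salient experience was stored
```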
To realistically animate a String (such as a sentence), a hierarchical search algorithm is provided to search for stored examples (Animation Snippets) of sub-strings of the String, in decreasing order of sub-string length, and concatenate retrieved sub-strings to complete the String of speech animation. In one embodiment, real-time generation of speech animation uses model visemes to predict the animation sequences at onsets of visemes and a look-up table based (data-driven) algorithm to predict the dynamics at transitions of visemes. Specifically posed Model Visemes may be blended with speech animation generated using another method at corresponding time points in the animation when the visemes are to be expressed. An Output Weighting Function is used to map Speech input and Expression input into Muscle-Based Descriptor weightings.
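The hierarchical search is the most concrete part of this abstract and can be sketched directly: cover the string with snippets of the longest matching sub-strings first, recursing on the remainder. The snippet table is an illustrative stand-in.

```python
# Greedy longest-sub-string-first cover of a string with Animation Snippets.
snippets = {"hello world": "anim_A", "hello": "anim_B",
            "world": "anim_C", "again": "anim_D"}

def cover(words: list[str]) -> list[str]:
    if not words:
        return []
    # Try sub-strings in decreasing order of length, anchored at the front.
    for n in range(len(words), 0, -1):
        key = " ".join(words[:n])
        if key in snippets:
            return [snippets[key]] + cover(words[n:])
    return ["fallback"] + cover(words[1:])      # no snippet covers this word

print(cover("hello world again".split()))       # -> ['anim_A', 'anim_D']
```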
Systems and methods for image retargeting are provided. Image data may be acquired that includes motion capture data indicative of motion of a plurality of markers disposed on a surface of a first subject. Each of the markers may be associated with a respective location on the first subject. A plurality of blendshapes may be calculated for the motion capture data based on a configuration of the markers. An error function may be identified for the plurality of blendshapes, and it may be determined that the plurality of blendshapes can be used to retarget a second subject based on the error function. The plurality of blendshapes may then be applied to a second subject to generate a new animation.
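A hedged sketch of the retargeting loop: fit blendshape weights to the captured marker configuration, use the residual as the error function, and, if the fit is acceptable, apply the same weights to a second subject's blendshapes. The dimensions and least-squares solver are illustrative choices.

```python
# Fit blendshape weights to markers, check the error, retarget to a new subject.
import numpy as np

rng = np.random.default_rng(0)
B_src = rng.normal(size=(12, 4))                 # source: 12 marker coords x 4 shapes
frame = B_src @ np.array([0.7, 0.1, 0.0, 0.3])   # captured marker configuration

w = np.linalg.lstsq(B_src, frame, rcond=None)[0]
error = np.linalg.norm(B_src @ w - frame)        # error function for the fit

if error < 1e-6:                                 # weights judged usable for retargeting
    B_dst = rng.normal(size=(30, 4))             # second subject's blendshape basis
    new_animation_frame = B_dst @ w              # apply weights to the new subject
    print(np.round(w, 3), error)
```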
A method for creating a model of a virtual object or digital entity is described, the method comprising receiving a plurality of basic shapes for a plurality of models; receiving a plurality of specified modification variables specifying a modification to be made to the basic shapes; and applying the specified modification(s) to the plurality of basic shapes to generate a plurality of modified basic shapes for at least one model.
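A minimal sketch of the modification step, assuming the specified modification variables are a uniform scale and a vertical offset applied across every model's basic shapes; the shapes and values are invented for illustration.

```python
# Apply the same specified modifications to every model's basic shapes.
import numpy as np

basic_shapes = {                      # model -> list of basic shapes (vertex arrays)
    "model_a": [np.array([[0., 0.], [1., 0.]]), np.array([[0., 1.], [1., 1.]])],
    "model_b": [np.array([[0., 0.], [2., 0.]])],
}

modifications = {"scale": 1.5, "lift": 0.2}      # specified modification variables

def modify(shape: np.ndarray) -> np.ndarray:
    return shape * modifications["scale"] + np.array([0.0, modifications["lift"]])

modified = {m: [modify(s) for s in shapes] for m, shapes in basic_shapes.items()}
print(modified["model_b"])            # modified basic shapes for one model
```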