A gaming system may allow for a user to capture and/or edit simulation state data of gameplay in a video game such that a replay of the gameplay may be rendered and/or shared. The gaming system may receive simulation state data and a request. The simulation state data may include simulation state(s) which include a model and pose state of an avatar corresponding to a player in a game simulation of a video game previously rendered as rendered view(s). The request may request a replay of the simulation state data with modification(s). The gaming system may modify the simulation state data to generate modified simulation state data and render, based on the modified simulation state data, replay view(s) that differ from the previously rendered view(s). The gaming system may then output the replay view(s) to a display of a computing device.
A63F 13/497 - Partially or entirely replaying previous game actions
A63F 13/525 - Changing parameters of virtual cameras
A63F 13/577 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
2.
SYSTEMS AND METHODS FOR HANDLING BEVELS IN MESH SIMPLIFICATION
A method, device, and computer-readable storage medium for simplifying a mesh including bevels are disclosed. The method includes: receiving a polygonal mesh representing a three-dimensional (3D) object; identifying a set of edges in the polygonal mesh as bevel edges; performing a mesh simplification operation on the polygonal mesh to generate a simplified mesh, wherein the mesh simplification operation removes at least one edge that includes a vertex of a bevel edge, and wherein two vertices in the polygonal mesh are collapsed to a collapse vertex in the simplified mesh; and updating stored normals of the collapse vertex by copying stored normals of the two vertices removed from the polygonal mesh to the collapse vertex.
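The collapse-and-copy step this abstract describes can be sketched as follows. This is a hypothetical data layout, not from the patent: positions are 3-tuples and each vertex stores a list of normals, so that the collapse vertex can carry every normal of both removed vertices rather than recomputing a single averaged normal.

```python
# Minimal sketch (hypothetical data layout) of the normal-update step:
# when two vertices are collapsed into one, the stored normals of both
# removed vertices are copied onto the collapse vertex rather than
# recomputed, preserving the hard shading of a bevel.

def collapse_edge(positions, normals, u, v):
    """Collapse vertices u and v into a new vertex at their midpoint,
    carrying both vertices' stored normals over to the collapse vertex."""
    mid = tuple((a + b) / 2 for a, b in zip(positions[u], positions[v]))
    collapse_index = len(positions)
    positions.append(mid)
    # Copy, not recompute: the collapse vertex keeps every normal that
    # either source vertex stored, so bevel shading survives the collapse.
    normals.append(list(normals[u]) + list(normals[v]))
    return collapse_index

positions = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
normals = [[(0, 0, 1)], [(0, 1, 0)]]
collapse_index = collapse_edge(positions, normals, 0, 1)
```

A production simplifier would also rewrite face indices to reference the collapse vertex; this sketch shows only the normal-copying behavior the abstract highlights.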
A method, device, and computer-readable storage medium for simplifying a convex hull are disclosed. A first queue of candidate vertices of a convex hull for vertex removal is generated, wherein the candidate vertices are sorted in the first queue by ascending values of a first cost metric associated with removal of the candidate vertex. A second queue of candidate faces of the convex hull for face removal is generated, wherein the candidate faces are sorted in the second queue by ascending values of a second cost metric associated with removal of the candidate face. A simplification operation is performed on the convex hull to generate a simplified version of the convex hull by performing a vertex removal operation on the candidate vertex in the first queue with lowest first cost metric or performing a face removal operation on the candidate face in the second queue with lowest second cost metric.
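The two-queue scheme in this abstract can be sketched with two min-heaps: candidate vertices and candidate faces are each sorted by their removal cost, and every simplification step applies whichever operation is currently cheapest. The cost values and labels below are illustrative, not from the patent.

```python
import heapq

def simplify_step(vertex_queue, face_queue):
    """Pop and return the cheapest pending operation, or None if both
    queues are empty. Each queue entry is a (cost, label) tuple."""
    best_vertex = vertex_queue[0] if vertex_queue else None
    best_face = face_queue[0] if face_queue else None
    if best_vertex is None and best_face is None:
        return None
    if best_face is None or (best_vertex is not None
                             and best_vertex[0] <= best_face[0]):
        return ("remove_vertex", heapq.heappop(vertex_queue)[1])
    return ("remove_face", heapq.heappop(face_queue)[1])

# Example: three candidate vertices and two candidate faces, keyed by
# invented removal costs.
vertices = [(0.5, "v2"), (0.1, "v7"), (0.9, "v4")]
faces = [(0.3, "f1"), (0.2, "f3")]
heapq.heapify(vertices)
heapq.heapify(faces)

ops = []
while (op := simplify_step(vertices, faces)) is not None:
    ops.append(op)
```

In a real simplifier each removal would also re-cost neighboring candidates and re-queue them; the sketch only shows the dual-queue ordering.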
41 - Education, entertainment, sporting and cultural services
Goods & Services
Entertainment services, namely, providing an online computer game; provision of information relating to electronic computer games provided via the Internet
09 - Scientific and electric apparatus and instruments
Goods & Services
Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software
41 - Education, entertainment, sporting and cultural services
09 - Scientific and electric apparatus and instruments
Goods & Services
Entertainment services, namely, providing an online computer game; provision of information relating to electronic computer games provided via the Internet. Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software
8.
SYSTEM FOR GENERATING ANIMATION WITHIN A VIRTUAL ENVIRONMENT
The present disclosure discloses the use of machine learning to address the process of motion synthesis and generation of intermediate poses for virtual entities. A transformer-based model can be used to generate intermediate poses for an animation based on a set of key frames.
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
9.
SYSTEM FOR GENERATING ANIMATION WITHIN A VIRTUAL ENVIRONMENT
The present disclosure discloses the use of machine learning to address the process of motion synthesis and generation of intermediate poses for virtual entities. A transformer-based model can be used to generate intermediate poses for an animation based on a set of key frames.
Embodiments of systems and methods for enabling access to an online game, modifying user progress within the online game, monitoring user interactions with the online game, or adjusting user gameplay with the online game, via multiple platforms. The multiple platforms may include virtual reality platforms and non-virtual reality platforms.
A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
A63F 13/285 - Generating tactile feedback signals via the game input device, e.g. force feedback
A63F 13/73 - Authorising game programs or game devices, e.g. checking authenticity
A63F 13/77 - Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
A63F 13/211 - Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
This specification describes a method for testing a user interface of a video game, the method implemented by one or more processors, the method comprising: obtaining, by one or more of the processors, a screenshot of the video game; processing, by one or more of the processors, the screenshot of the video game to detect one or more user interface elements; and performing, by one or more of the processors, one or more actions in the video game based upon the detected one or more user interface elements for testing the user interface of the video game.
G06F 11/36 - Prevention of errors by analysis, debugging or testing of software
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
G06T 7/70 - Determining position or orientation of objects or cameras
G06V 20/62 - Text, e.g. of license plates, overlay texts or captions on TV images
12.
Expressive speech audio generation for video games
This specification describes a computer-implemented method of training a machine-learned speech audio generation system to generate predicted acoustic features for generated speech audio for use in a video game. The training comprises receiving one or more training examples. Each training example comprises: (i) ground-truth acoustic features for speech audio, (ii) speech content data representing speech content of the speech audio, and (iii) speech expression data representing speech expression of the speech audio. Parameters of the machine-learned speech audio generation system are updated by: (i) minimizing a measure of difference between the predicted acoustic features for a training example and the corresponding ground-truth acoustic features of the training example, and (ii) minimizing a measure of difference between the predicted prosodic features for the training example and the corresponding ground-truth prosodic features for the training example.
G10L 13/00 - Speech synthesis; Text to speech systems
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
The systems and methods described herein provide for an automated photosensitivity detection system (PDS) configured to automatically execute processes for flash detection and pattern detection of a video. PDS outputs an analysis result for each type of pattern detection analysis for the video. The PDS can execute each type of pattern detection analysis independently of the other pattern detection processes. Each pattern detection process is a distinct process that can be calculated without reference to the other processes. The final analysis result can aggregate the results of each detection process executed by the PDS.
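The aggregation structure in this abstract, where each detection process runs independently over the same video and the final result combines the per-process verdicts, can be sketched as below. The frames are a toy per-frame luminance series and the detector thresholds are invented.

```python
# Sketch of independent detection processes plus aggregation. Each
# detector sees the same frames but shares no state with the others.

def flash_detection(frames, threshold=0.5):
    """Flag the video if any frame-to-frame luminance change exceeds
    the threshold."""
    jumps = [abs(b - a) for a, b in zip(frames, frames[1:])]
    return any(j > threshold for j in jumps)

def pattern_detection(frames, period=2):
    """Flag the video if luminance repeats with the given period, a
    crude stand-in for regular-pattern analysis."""
    if len(frames) < 2 * period:
        return False
    return all(frames[i] == frames[i % period] for i in range(len(frames)))

def analyze(frames):
    """Run each detector independently, then aggregate the results."""
    results = {
        "flash": flash_detection(frames),
        "pattern": pattern_detection(frames),
    }
    results["flagged"] = any(results.values())
    return results

# A rapidly alternating luminance series trips both detectors.
report = analyze([0.0, 1.0, 0.0, 1.0])
```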
A gaming system may provide for interactable environment geometry (IEG) detection. The gaming system may determine a sensory perspective of a sense of a non-player character (NPC) in a virtual environment of a game simulation, generate perception data of the NPC from the sensory perspective of the sense, input the perception data of the NPC into a detection model associated with the sense and receive, from the detection model, detection data for a detected item. The gaming system may then generate an awareness-based character decision for the NPC based on the detection data of the detected item.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
This specification describes a computer-implemented method of generating context-dependent speech audio in a video game. The method comprises obtaining contextual information relating to a state of the video game. The contextual information is inputted into a prosody prediction module. The prosody prediction module comprises a trained machine learning model which is configured to generate predicted prosodic features based on the contextual information. Input data comprising the predicted prosodic features and speech content data associated with the state of the video game is inputted into a speech audio generation module. An encoded representation of the speech content data dependent on the predicted prosodic features is generated using one or more encoders of the speech audio generation module. Context-dependent speech audio is generated, based on the encoded representation, using a decoder of the speech audio generation module.
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
G10L 13/02 - Methods for producing synthetic speech; Speech synthesisers
G10L 19/04 - Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
G10L 25/30 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the analysis technique using neural networks
G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination
17.
SYSTEM FOR RENDERING SKIN TONE WITHIN A GAME APPLICATION ENVIRONMENT
The present disclosure provides a system for rendering skin tones of virtual entities using dynamic lighting systems within the virtual environment. The dynamic lighting system can be used to modify parameters of light sources within a game environment to increase the range of renderable skin tones of a virtual entity.
A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
Downloadable computer software for identifying, monitoring, and reporting cheating in video games; downloadable computer software for ensuring compliance and integrity in video games; downloadable computer software for monitoring and analyzing video game play; downloadable computer software for monitoring and analyzing computer systems; downloadable computer software for monitoring and analyzing video game systems; downloadable computer software for monitoring and managing a gaming community to prevent cheating; downloadable computer software for preventing cheating in video games; downloadable computer software for preventing video game players from utilizing cheat codes; downloadable computer software for preventing video game players from obtaining an unfair advantage by using third-party tools; downloadable computer software for preventing video game players from using unauthorized third-party tools; downloadable anti-cheat game software; downloadable computer software for detecting, eradicating and preventing computer viruses; downloadable computer software for ensuring the security of software applications, games, and video and audio files; downloadable computer software packages for ensuring the security of software applications, games, and video and music files; downloadable computer software for game security and to prevent hacking; downloadable computer software for protecting video and computer games from security breaches. Providing non-downloadable computer software for identifying, monitoring, and reporting cheating in video games; providing non-downloadable computer software for ensuring compliance and integrity in video games; providing non-downloadable computer software for monitoring and analyzing video game play; providing non-downloadable computer software for monitoring and analyzing computer systems; providing non-downloadable computer software for monitoring and analyzing video game systems; providing non-downloadable computer software for monitoring and managing a gaming community to prevent cheating; providing non-downloadable computer software for preventing cheating in video games; providing non-downloadable computer software for preventing video game players from utilizing cheat codes; providing non-downloadable computer software for preventing video game players from obtaining an unfair advantage by using third-party tools; providing non-downloadable computer software for preventing video game players from using unauthorized third-party tools; providing non-downloadable anti-cheat game software; providing non-downloadable computer software for detecting, eradicating and preventing computer viruses; providing non-downloadable computer software for ensuring the security of software applications, games, and video and audio files; providing non-downloadable computer software packages for ensuring the security of software applications, games, and video and music files; providing non-downloadable computer software for game security and to prevent hacking; providing non-downloadable computer software for protecting video and computer games from security breaches
41 - Education, entertainment, sporting and cultural services
09 - Scientific and electric apparatus and instruments
Goods & Services
Entertainment services, namely, providing an online computer game; provision of information relating to electronic computer games provided via the Internet. Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software
20.
SKIN MICROSTRUCTURE TEXTURE FILTERING FOR RUNTIME FACIAL ANIMATIONS
A method of skin microstructure texture filtering for facial animation includes obtaining a plurality of one-dimensional (1D) filtered tiles corresponding to a plurality of filter axis angles and a plurality of filter parameters applied to a neutral tile, and at runtime, for each pixel representing a region of human skin, determining a principal direction of deformation, a principal filter parameter corresponding to the principal direction of deformation, and a secondary filter parameter corresponding to a secondary direction of deformation orthogonal to the principal direction of deformation, and selecting a first 1D filtered tile among the plurality of 1D filtered tiles, the first 1D filter tile corresponding to the secondary direction of deformation and the secondary filter parameter, and generating a respective two-dimensional (2D) filtered tile by convolving the first 1D filtered tile with a second 1D filter kernel corresponding to the principal direction of deformation and the principal filter parameter.
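The core idea of composing a precomputed 1D-filtered tile with a second 1D kernel to obtain a 2D-filtered result relies on separable filtering. The sketch below illustrates this with a simple axis-aligned box blur; the tile contents and kernel are made up, and the angled filter axes and deformation-driven parameter selection of the abstract are omitted.

```python
# Illustrative sketch of separable filtering: a 2D blur is built by
# convolving an already 1D-filtered tile with a second 1D kernel along
# the orthogonal axis.

def convolve_rows(tile, kernel):
    """1D convolution of each row with `kernel` (zero-padded edges)."""
    r = len(kernel) // 2
    out = []
    for row in tile:
        out.append([
            sum(kernel[k] * row[j + k - r]
                for k in range(len(kernel))
                if 0 <= j + k - r < len(row))
            for j in range(len(row))
        ])
    return out

def convolve_cols(tile, kernel):
    """1D convolution of each column, via transpose + row convolution."""
    transposed = [list(col) for col in zip(*tile)]
    return [list(col) for col in zip(*convolve_rows(transposed, kernel))]

tile = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
box = [1 / 3] * 3
# First 1D pass (precomputed offline in the scheme above), second 1D
# pass at runtime: together they yield the full 2D box blur.
blurred = convolve_cols(convolve_rows(tile, box), box)
```

In the patented scheme the first pass is baked into a bank of tiles over many filter axis angles and parameters, so only the second 1D convolution remains to be done per pixel at runtime.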
Systems and methods are provided for enhanced animation generation based on generative modeling. An example method includes training models based on faces and information associated with persons. The modeling system is trained to reconstruct expressions, textures, and models of persons.
In response to receiving user input command for sending a contextually aware communication, a computer system is configured to use game state data to determine a target location that a player is focusing on in a virtual environment in a video game, identify a unit that the player likely wants to communicate about based on at least priorities of unit types and proximities of units to the target location, and select a communication action for performance. Different communication actions can be performed in response to the same user input command when the game state data indicates different game states.
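The unit-selection step this abstract describes, ranking nearby units by type priority and proximity to the target location, can be sketched with a simple scoring function. The priority table, range limit, and unit data below are illustrative assumptions, not from the source.

```python
# Hypothetical sketch of contextually aware unit selection: highest type
# priority wins, and distance to the player's target location breaks ties.
import math

TYPE_PRIORITY = {"enemy": 3, "objective": 2, "item": 1}

def pick_unit(units, target, max_range=10.0):
    """Return the unit the player most likely means, or None if no unit
    is within range of the target location."""
    def score(unit):
        dist = math.dist(unit["pos"], target)
        if dist > max_range:
            return None
        # Tuple compare: priority first, then closer units (smaller
        # distance, so negated) rank higher.
        return (TYPE_PRIORITY.get(unit["type"], 0), -dist)
    scored = [(score(u), u) for u in units]
    scored = [(s, u) for s, u in scored if s is not None]
    return max(scored, key=lambda p: p[0])[1] if scored else None

units = [
    {"name": "health pack", "type": "item", "pos": (1.0, 1.0)},
    {"name": "sniper", "type": "enemy", "pos": (4.0, 3.0)},
    {"name": "flag", "type": "objective", "pos": (0.5, 0.5)},
]
chosen = pick_unit(units, target=(1.0, 1.0))
```

Here the enemy is chosen over the nearer item and objective because its unit type outranks theirs, matching the abstract's point that the same input command yields different communications under different game states.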
A63F 13/5372 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
A63F 13/23 - Input arrangements for video game devices for interfacing with the game device, e.g. specific interfaces between game controller and console
A63F 13/5378 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for displaying an additional top view, e.g. radar screens or maps
A system is disclosed that is able to combine motion capture data with volumetric capture data to capture player style information for a player. This player style information or player style data may be used to modify animation models used by a video game to create a more realistic look and feel for a player being emulated by the video game. This more realistic look and feel can enable the game to replicate the play style of a player. For example, one soccer player may run with his elbows closer to his body and his forearm may swing across his torso, while another soccer player who is perhaps more muscular may run with his elbows and arms further from his body and his forearms may not cross in front of his torso when running.
A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
The disclosure provides a video playback system for use within a game application and/or other interactive computing environments. The video playback system can be used to capture gameplay during execution of a game application. The captured gameplay video can be processed and stored within the game application or in a network accessible location.
A63F 13/86 - Watching games played by other players
A63F 13/20 - Input arrangements for video game devices
A63F 13/25 - Output arrangements for video game devices
A63F 13/30 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
A63F 13/32 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using local area network [LAN] connections
A63F 13/33 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections
A63F 13/332 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using wireless networks, e.g. cellular phone networks
A63F 13/335 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using Internet
This specification describes a computer-implemented method of training a machine-learned speech audio generation system for use in video games. The training comprises: receiving one or more training examples. Each training example comprises: (i) ground-truth acoustic features for speech audio, (ii) speech content data representing speech content of the speech audio, and (iii) a ground-truth speaker identifier for a speaker of the speech audio. Parameters of the machine-learned speech audio generation system are updated to: (i) minimize a measure of difference between the predicted acoustic features of a training example and the corresponding ground-truth acoustic features of the training example, (ii) maximize a measure of difference between the first speaker classification for the training example and the corresponding ground-truth speaker identifier of the training example, and (iii) minimize a measure of difference between the second speaker classification for the training example and the corresponding ground-truth speaker identifier of the training example.
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
G10L 13/02 - Methods for producing synthetic speech; Speech synthesisers
G10L 17/04 - Training, enrolment or model building
This specification describes systems, methods, and apparatus for policy models for selecting an action in a game environment based on persona data, as well as the use of said models. According to one aspect of this specification, there is described a computer implemented method of controlling an agent in an environment, the method comprising: for a plurality of timesteps in a sequence of timesteps: inputting, into a machine-learned policy model, input data comprising a current state of the environment and an auxiliary input, the auxiliary input indicating a target action style for the agent; processing, by the machine-learned policy model, the input data to select an action for a current timestep; performing, by the agent in the environment, the selected action; and determining, subsequent to the selected action being performed, an update to the current state of the environment.
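The per-timestep control loop this abstract recites can be sketched with a trivial stand-in for the machine-learned policy: the policy selects an action from the current state plus an auxiliary target-style input, the agent performs it, and the state is updated. All names and the toy environment below are illustrative.

```python
# Sketch of the style-conditioned control loop. The "policy" here is a
# hand-written stand-in for the machine-learned policy model.

def policy(state, style):
    """Toy policy on a 1D track of length 10: an aggressive style
    advances two cells per step, any other style advances one."""
    step = 2 if style == "aggressive" else 1
    return min(step, 10 - state)          # don't overshoot the goal

def run_episode(style, steps=6):
    state = 0
    trajectory = [state]
    for _ in range(steps):
        action = policy(state, style)     # select action for this timestep
        state += action                   # agent performs the action
        trajectory.append(state)          # observe the updated state
    return trajectory

aggressive = run_episode("aggressive")
cautious = run_episode("cautious")
```

The two trajectories differ only because of the auxiliary style input, which is the point of conditioning the policy on persona data.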
Various aspects of the subject technology relate to systems, methods, and machine-readable media for preventing rendering of a character in a video game. The method includes receiving an action regarding a first character rendered in a first-person point of view (POV), wherein the POV of the first character is changed from the first-person POV to a third-person POV. The method includes detecting a change in the POV of the first character. The method includes determining which characters are outside the first character's field of view (FOV) in the first-person POV but would be within the FOV of the first character in the third-person POV. The method includes changing the POV of the first character from the first-person POV to a third-person POV. The method includes causing rendering of the video game in a third-person POV of the first character, the rendering preventing rendering of those other characters.
An imitation learning system may learn how to play a video game based on user interactions by a tester or other user of the video game. The imitation learning system may develop an imitation learning model based, at least in part, on the tester's interaction with the video game and the corresponding state of the video game to determine or predict actions that may be performed when interacting with the video game. The imitation learning system may use the imitation learning model to control automated agents that can play additional instances of the video game. Further, as the user continues to interact with the video game during testing, the imitation learning model may continue to be updated. Thus, the interactions by the automated agents with the video game may, over time, closely mimic the interaction by the user, enabling multiple tests of the video game to be performed simultaneously.
A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
G06F 11/36 - Prevention of errors by analysis, debugging or testing of software
G06N 3/088 - Non-supervised learning, e.g. competitive learning
Various aspects of the subject technology relate to systems, methods, and machine-readable media for generating insights for video games. The method includes gathering information regarding a player for a plurality of video games, the information comprising at least one of in-world state data, player action data, player progression data, and/or real-world events relevant to each video game. The method also includes tracking events in at least one video game of the plurality of video games, the events comprising an action event or a standby event. The method also includes determining that an event of the tracked events is an action event. The method also includes generating insights regarding the action event based on the information gathered regarding the player, the insights for improving the player's performance in the video game. The method also includes relaying the insights to the player to improve the player's performance in the video game.
A63F 13/58 - Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
A63F 13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
A63F 13/424 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving acoustic input signals, e.g. by using the results of pitch or rhythm extraction or voice recognition
A63F 13/795 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories, for finding other players; for building a team; for providing a buddy list
A63F 13/798 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for assessing skills or for ranking players, e.g. for generating a hall of fame
A63F 13/85 - Providing additional services to players
This specification describes a computing system for generating visual assets for video games. The computing system comprises an image segmentation model, a first 3D generation model, and a second 3D generation model. At least one of the first 3D generation model and the second 3D generation model comprises a machine-learning model. The system is configured to obtain: (i) a plurality of images corresponding to the visual asset, each image showing a different view of an object to be generated in the visual asset, and (ii) orientation data for each image that specifies an orientation of the object in the image. A segmented image is generated for each image. This comprises processing the image using the image segmentation model to segment distinct portions of the image into one or more classes of a predefined set of classes. For each image, 3D shape data is generated for a portion of the object displayed in the image. This comprises processing the segmented image of the image, the orientation data of the image, and style data for the visual asset using the first 3D generation model. 3D shape data is generated for the visual asset. This comprises processing the generated 3D shape data of each image using the second 3D generation model.
Various aspects of the subject technology relate to systems, methods, and machine-readable media for rendering audio via a game engine for a game. Various aspects may include determining sound source reverb metrics and listener reverb metrics. Aspects may include determining reverbs within a reverb possibility space for all rooms or spaces of the game rendered by the game engine. Aspects may also include determining sound tuning parameters describing reverb attenuation over distance. Aspects may include calculating acoustic parameters based on the reverb metrics, relative positions, and sound tuning parameters. Aspects may include rendering audio according to a fit of determined reverbs to the acoustic parameters.
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
H04S 7/00 - Indicating arrangements; Control arrangements, e.g. balance control
09 - Scientific and electric apparatus and instruments
41 - Education, entertainment, sporting and cultural services
Goods & Services
Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software. Entertainment services, namely, providing an online computer game; provision of information relating to electronic computer games provided via the Internet.
A player profile management system collects player data from various systems and generates and manages player profiles. A snapshot pipeline of the player profile management system generates a snapshot player profile associated with a player. The player profile management system receives, after generating the snapshot player profile associated with the player, player data associated with the player. An update pipeline of the player profile management system generates, based on the snapshot player profile and the player data associated with the player, an update player profile associated with the player.
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an encoded video stream for transmitting to a mobile phone or a thin client
41 - Education, entertainment, sporting and cultural services
Goods & Services
Entertainment services, namely, providing an online computer game; provision of information relating to electronic computer games provided via the Internet
09 - Scientific and electric apparatus and instruments
Goods & Services
Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software
The specification relates to the generation of in-game animation data and the evaluation of in-game animations. According to a first aspect of the present disclosure, there is described a computer implemented method comprising: inputting, into one or more neural network models, input data comprising one or more current pose markers indicative of a current pose of an in-game object, one or more target markers indicative of a target pose of an in-game object and an object trajectory of the in-game object; processing, using the one or more neural networks, the input data to generate one or more intermediate pose markers indicative of an intermediate pose of the in-game object positioned between the current pose and the target pose; outputting, from the one or more neural networks, the one or more intermediate pose markers; and generating, using the one or more intermediate pose markers, an intermediate pose of the in-game object, wherein the intermediate pose of the in-game object corresponds to a pose of the in-game object at an intermediate frame of in-game animation between a current frame of in-game animation in which the in-game object is in the current pose and a target frame of in-game animation in which the in-game object is in the target pose.
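Absent the trained neural network, the in-betweening step can be illustrated with a stand-in that linearly interpolates pose markers between the current and target frames; the marker names are hypothetical, and the real models learn far richer, non-linear transitions:

```python
def in_between_pose(current, target, t):
    """Stand-in for the neural in-betweening model: linearly interpolate
    each pose marker between the current and target poses.

    current/target: dicts of marker name -> (x, y, z); t in [0, 1] is the
    normalized position of the intermediate frame."""
    return {
        name: tuple(c + t * (g - c) for c, g in zip(current[name], target[name]))
        for name in current
    }

current = {"hand": (0.0, 1.0, 0.0), "foot": (0.0, 0.0, 0.0)}
target = {"hand": (2.0, 1.0, 0.0), "foot": (1.0, 0.0, 0.0)}
mid_pose = in_between_pose(current, target, 0.5)
# mid_pose["hand"] == (1.0, 1.0, 0.0)
```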
The present disclosure provides a system for generating and rendering virtual objects, such as mesh particles, using dynamic color blending within the virtual environment. Mesh particles may be divided up into portions. For example, the portions of the mesh particle may be a single pixel or a group of pixels. The color of the mesh particles can be dynamically determined for the portions of a mesh particle.
Systems and methods are provided for enhanced animation generation based on generative control models. An example method includes accessing an autoencoder trained based on character control information generated using motion capture data, the character control information indicating, at least, trajectory information associated with the motion capture data, and the autoencoder being trained to reconstruct, via a latent feature space, the character control information. First character control information associated with a trajectory of an in-game character of an electronic game is obtained. A latent feature representation is generated and the latent feature representation is modified. A control signal is output to a motion prediction network for use in updating a character pose of the in-game character.
A collusion detection system may detect collusion between entities participating in online gaming. The collusion detection system may identify a plurality of entities associated with, and opponents within, an instance of an online game, determine social data associated with the plurality of entities, determine in-game behavior data associated with the plurality of entities, and determine, for one or more pairings of the plurality of entities, respective pairwise feature sets based at least in part on the social data and the in-game behavior data. The collusion detection system may then perform anomaly detection on the respective pairwise feature sets and, in response to the anomaly detection detecting one or more anomalous pairwise feature sets, output one or more suspect pairings of the plurality of entities corresponding to the one or more anomalous pairwise feature sets as suspected colluding pairings.
A persona system determines a player persona for a player of a gaming system based on gameplay information for the user and, for example, performs dynamic content generation or additional product recommendations based on the player persona. The persona system may receive a request for content based on a persona of a player and receive gameplay data associated with gameplay of the player in a plurality of games. The persona system may then generate a player persona of the player based on the gameplay data associated with the gameplay of the player in the plurality of games, determine persona based content based at least in part on a portion of the player persona, and output the persona based content in response to the request.
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A63F 13/69 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by enabling or updating specific game elements, e.g. unlocking hidden features, items, levels or versions
42.
CHARACTER CONTROLLERS USING MOTION VARIATIONAL AUTOENCODERS (MVAES)
Some embodiments herein can include methods and systems for predicting next poses of a character within a virtual gaming environment. The pose prediction system can identify a current pose of a character, generate a Gaussian distribution representing a sample of likely poses based on the current pose, and apply the Gaussian distribution to the decoder. The decoder can be trained to generate a predicted pose based on a Gaussian distribution of likely poses. The system can then render the predicted next pose of the character within the three-dimensional virtual gaming environment. Advantageously, the pose prediction system can apply a decoder that does not include or use the input motion capture data that was used to train the decoder.
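The sample-then-decode loop can be sketched as follows. The linear decoder here is a toy stand-in (a trained MVAE decoder is a neural network), and the 0.1 scale factor is an assumption for illustration:

```python
import random

def toy_decoder(current_pose, z):
    """Stand-in decoder: nudges the current pose by a latent-driven delta.
    A trained MVAE decoder would be a neural network, not this linear map."""
    return [p + 0.1 * zi for p, zi in zip(current_pose, z)]

def predict_next_pose(current_pose, seed=0):
    rng = random.Random(seed)
    # Sample a latent vector from a standard Gaussian; at runtime the
    # decoder only needs this sample and the current pose, not the
    # motion-capture data it was trained on.
    z = [rng.gauss(0.0, 1.0) for _ in range(len(current_pose))]
    return toy_decoder(current_pose, z)

next_pose = predict_next_pose([0.0, 0.0, 0.0, 0.0])
```

Seeding the generator makes the prediction reproducible here; a game would sample freshly each frame to get pose variation.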
In a video game, a player's character can start in a normal state, receive first damage, and change to an incapacitated state. The player's character can be revived from the incapacitated state back to the normal state. The player's character can be changed from the incapacitated state to a preliminarily defeated state, and in response, a player respawn activation item can be generated. The player respawn activation item can be used by the player's teammates to respawn the player's character at one or more respawn locations.
A63F 13/58 - Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
A63F 13/5378 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for displaying an additional top view, e.g. radar screens or maps
09 - Scientific and electric apparatus and instruments
35 - Advertising and business services
41 - Education, entertainment, sporting and cultural services
Goods & Services
Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software. Administration of loyalty programs. Entertainment services, namely, providing an online computer game; provision of information relating to electronic computer games provided via the Internet.
Methods, apparatus and systems are provided for generating an interactive non-player character (NPC) scene for a computer game environment of a video game. Changes are detected in relation to a script associated with the interactive NPC scene. For each NPC, a set of NPC data associated with the interactions said each NPC has within the script is generated corresponding to the changes. The generated set of NPC data is processed with an NPC rig associated with said each NPC to generate an NPC asset. A camera solver is applied to a region of the computer game environment associated with the script for determining locations of NPC assets and one or more cameras within said region in relation to said interactive NPC scene. Data representative of said each NPC asset and said determined NPC asset and camera locations is output for use by a game development engine for generating said interactive NPC scene.
A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
A63F 13/5258 - Changing parameters of virtual cameras by dynamically adapting the position of the virtual camera to keep a game object or game character in its viewing frustum, e.g. for tracking a character or a ball
A63F 13/57 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
G10L 13/027 - Concept to speech synthesisers; Generation of natural phrases from machine-based concepts
46.
VERSION AGNOSTIC CENTRALIZED STATE MANAGEMENT IN VIDEO GAMES
A state system providing version agnostic centralized state management can use node graphs corresponding to virtual entities to maintain a world state across any version of a video game. As the states of virtual entities change, corresponding nodes of the node graph are updated in response to account for and store each state change. As a data structure referencing, associating with, and/or corresponding to the virtual entities themselves, the node graph can facilitate centralized state management for a video game in a version agnostic manner. Additionally, the state system is also configured to validate node dependencies of a node graph when a corresponding change in state of a corresponding virtual entity occurs during gameplay, to avoid and/or prevent game state errors.
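The node-graph bookkeeping can be sketched with a minimal in-memory graph. The class names and the "every dependency must already hold a state" validation rule are assumptions of this sketch, not the patented design:

```python
class StateNode:
    def __init__(self, name, deps=()):
        self.name, self.deps, self.state = name, tuple(deps), None

class WorldState:
    """Toy centralized state graph: one node per virtual entity, with
    node dependencies validated on every state change."""
    def __init__(self):
        self.nodes = {}

    def add(self, name, deps=()):
        for d in deps:
            if d not in self.nodes:
                raise ValueError(f"unknown dependency: {d}")
        self.nodes[name] = StateNode(name, deps)

    def set_state(self, name, state):
        node = self.nodes[name]
        # Validate that every dependency already holds a state before
        # accepting the change, to avoid inconsistent world state.
        missing = [d for d in node.deps if self.nodes[d].state is None]
        if missing:
            raise ValueError(f"unresolved dependencies: {missing}")
        node.state = state

world = WorldState()
world.add("lever")
world.add("door", deps=("lever",))
world.set_state("lever", "pulled")
world.set_state("door", "open")
```

Setting the door's state before the lever has any state would raise, which is the kind of game state error the validation step guards against.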
G06F 8/71 - Version control; Configuration management
A63F 13/77 - Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
A party system of a gaming environment provides to users of the gaming environment parties of sizes that extend beyond the party size limitations of a video game. An extendable party can dynamically create subset parties of users for gameplay as needed, without requiring the party to be disbanded. Therefore, an extendable party allows a subset of users, or multiples thereof, to enter into gameplay while maintaining the party as a whole. A party system, video game, or gaming environment can be configured to apply one or more rules or policies to a party that limit or alter a feature or function of an extended party, such as to prevent players among extended parties from providing or receiving competitive advantages.
A63F 13/795 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories, for finding other players; for building a team; for providing a buddy list
A63F 13/80 - Special adaptations for executing a specific game genre or game mode
48.
SYSTEMS AND METHODS FOR SMOOTHING AND SIMPLIFYING PROXY MESHES
A method, device, and computer-readable storage medium for generating a proxy mesh are disclosed. The method includes: receiving a reference mesh, wherein the reference mesh comprises a polygonal mesh that is a computer representation of a three-dimensional (3D) object; computing quadrics corresponding to the reference mesh; receiving a second polygonal mesh, wherein the second polygonal mesh comprises a polygonal mesh generated based on the reference mesh; transferring the quadrics corresponding to the reference mesh to the second polygonal mesh; and generating a proxy mesh based on the quadrics corresponding to the reference mesh transferred to the second polygonal mesh.
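In standard quadric-based simplification, the quadrics being computed and transferred are sums of per-plane error quadrics. A minimal sketch of one plane quadric and its error evaluation (a background illustration, not the patented transfer step):

```python
def plane_quadric(a, b, c, d):
    """Fundamental error quadric Q = p * p^T for the plane
    a*x + b*y + c*z + d = 0 with unit normal (a, b, c)."""
    p = (a, b, c, d)
    return [[p[i] * p[j] for j in range(4)] for i in range(4)]

def quadric_error(Q, v):
    """Squared distance from vertex v = (x, y, z) to the plane encoded by Q,
    computed as v_h^T Q v_h with homogeneous v_h = (x, y, z, 1)."""
    h = (v[0], v[1], v[2], 1.0)
    return sum(h[i] * Q[i][j] * h[j] for i in range(4) for j in range(4))

Q = plane_quadric(0.0, 0.0, 1.0, 0.0)    # the plane z = 0
err = quadric_error(Q, (5.0, 3.0, 2.0))  # vertex 2 units above the plane
# err == 4.0 (squared distance)
```

Because quadrics are additive, transferring them between meshes amounts to carrying these 4x4 matrices over to the corresponding vertices and summing.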
A gaming system may provide for interactable environment geometry (IEG) detection. The gaming system may detect one or more IEG features in an area of a virtual environment of a game including an avatar of a player; determine, for an IEG feature of the one or more IEG features, one or more unprocessed potential interactions that are valid for the IEG feature, the determination that an individual unprocessed potential interaction of the one or more unprocessed potential interactions is a valid interaction for the IEG feature being based on corresponding criteria of the individual unprocessed potential interaction; and determine, based at least in part on a position of the avatar in the virtual environment, whether the valid interaction for the IEG feature is available for the avatar.
A63F 13/63 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by the player, e.g. authoring using a level editor
A63F 13/55 - Controlling game characters or game objects based on the game progress
50.
SYSTEM FOR MOTION RETARGETING WITH OBJECT INTERACTION
A system may perform animation retargeting that may allow an existing animation to be repurposed for a different skeleton and/or a different environment geometry from that associated with the existing animation. The system may input, to a machine learning (ML) retargeting model, an input animation, a target skeleton and environment geometry data of an environment for a predicted animation, wherein the ML retargeting model is configured to generate the predicted animation based on the input animation, the target skeleton and the environment geometry data of the environment for the predicted animation and receive, from the ML retargeting model, the predicted animation based on the input animation, the target skeleton and the environment geometry data of the environment for the predicted animation.
Embodiments of the present application provide systems and methods for world prediction within a game application environment. The systems and methods can include a world prediction module for predicting collisions between virtual objects in the game application environment. The world prediction module can use game state data to simulate the virtual objects into future instances. The world prediction module can parse the future instances to find collisions in a farfield representation of the virtual objects and collisions in a nearfield representation of the virtual objects. The world prediction module can use collision information to update a game engine of the game application.
A63F 13/577 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
A63F 13/57 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
52.
SYSTEM FOR AUTOMATED GENERATION OF FACIAL SHAPES FOR VIRTUAL CHARACTER MODELS
Systems and methods are provided for enhanced face shape generation for virtual entities based on generative modeling techniques. An example method includes training models based on synthetically generated faces and information associated with an authoring system. The modeling system is trained to reconstruct face shapes for virtual entities based on a latent space embedding of a face identity.
A quality analysis tool for visual-programming scripting languages uses machine learning to process changes from visual-programming environments. The quality analysis tool can receive data associated with a code submission via a visual-programming scripting language, process the data to identify features in the data that correspond to previously identified defects, apply a pattern matching algorithm to the identified features, determine a risk prediction based on a learned pattern recognition model associated with a pattern in the features, and transmit a notice of predicted risk. The quality analysis tool can train models for use with visual-programming scripting languages and visual-programming environments.
Systems and methods are presented herein for providing an assist indication in an interactive virtual environment. Game data of a game session of a virtual interactive environment is received. Based in part on the game data, a navigation assist used in the game session is identified. An assist indication to render is determined based on the game data and the navigation assist. The assist indication is configured for rendering during runtime. The assist indication is rendered in the virtual interactive environment of the game session.
A63F 13/5375 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for graphically or textually suggesting an action, e.g. by displaying an arrow indicating a turn in a driving game
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A method comprises determining positions of render strands based on a simulation model of simulation strands. Each simulation strand corresponds to a render strand. For a first range of values of a metric up to a threshold value, the simulation model is determined in a first simulation level using a first set of simulation strands. For a second range of values of the metric from a second threshold value, the simulation model is determined in a second simulation level using a subset of the first set of simulation strands. For metric values between the first and second threshold values, a transition between first and second simulation levels comprises computing the simulation model in the first level of detail. Positions of the render strands during the transition are derived from the first set of simulation strands having a first weight, and the second set of simulation strands having a second weight.
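The weighted transition between simulation levels can be sketched as a per-strand blend. The dict-based layout for the reduced strand set is an assumption of this sketch:

```python
def blend_strand_positions(full_positions, reduced_positions, w):
    """During a LOD transition, blend each render strand's position between
    the full simulation set (weight 1 - w) and the reduced subset (weight w).
    reduced_positions maps strand index -> position; strands absent from the
    subset simply follow the full simulation."""
    blended = []
    for i, full in enumerate(full_positions):
        reduced = reduced_positions.get(i, full)
        blended.append(tuple((1.0 - w) * f + w * r
                             for f, r in zip(full, reduced)))
    return blended

full = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
reduced = {0: (0.0, 2.0, 0.0)}  # only strand 0 exists in the coarse level
halfway = blend_strand_positions(full, reduced, 0.5)
# strand 0 blends to (0.0, 1.0, 0.0); strand 1 is unchanged
```

Sweeping w from 0 to 1 across the transition range of the metric fades the render strands from the fine simulation level to the coarse one without a visible pop.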
A body mesh to be collided with a cloth mesh is received, together with collider objects (that correspond to or approximate the body mesh) divided into cells. Polygons of the body mesh are projected onto the surface of the collider objects from a location within the collider object to identify cells of the collider object that overlap the projection of the polygons. A set of cloth features that collide with the collider object are projected onto the surface of the collider object to identify cells onto which the cloth features are projected. For each cell that includes a projection of a cloth feature, collision tests are performed between the cloth feature and the polygons whose projections also overlap the same cell. Using the collider object as an acceleration structure allows for cloth simulation to be performed while reducing collision tests for each cloth feature to a limited number of polygons.
An example method of simulating dribbling ball behavior in interactive videogames includes: determining, a current spatial position of a simulated ball rolling on a surface of a simulated terrain; determining, based on a slope of the surface of the simulated terrain, a likelihood of ball dribbling; identifying a segment of a path of the simulated ball over the surface from the current spatial position of the simulated ball, such that a dribbling criterion based on the likelihood of ball dribbling is satisfied on the segment of the path; determining, based on a speed of the simulated ball, a dribble-simulating surface angle adjustment range; choosing a dribble-simulating surface angle adjustment value from the dribble-simulating surface angle adjustment range; adjusting, based on the dribble-simulating surface angle adjustment value, a surface normal of a segment of the surface on the path; and determining, based on the adjusted surface normal, a next spatial position of the simulated ball.
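Two of the steps above, widening the adjustment range with ball speed and tilting the surface normal, can be sketched in 2D. The linear-with-clamp range shape and all constants are assumptions of this sketch, not the patented tuning:

```python
import math

def dribble_angle_range(speed, base_deg=2.0, gain=0.5, max_deg=12.0):
    """Width of the surface-angle adjustment range as a function of ball
    speed: faster balls get a wider range, clamped at max_deg."""
    return min(max_deg, base_deg + gain * speed)

def adjust_normal_2d(normal, angle_deg):
    """Rotate a 2D surface normal (nx, ny) by angle_deg, simulating a
    small bump in the terrain that deflects the rolling ball."""
    a = math.radians(angle_deg)
    nx, ny = normal
    return (nx * math.cos(a) - ny * math.sin(a),
            nx * math.sin(a) + ny * math.cos(a))

available_range = dribble_angle_range(speed=10.0)  # → 7.0 degrees
tilted = adjust_normal_2d((0.0, 1.0), 7.0)
```

A full implementation would pick the adjustment value from within the range (e.g. randomly, subject to the dribbling criterion) and integrate the ball against the tilted normal to obtain the next position.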
A63F 13/573 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using trajectories of game objects, e.g. of a golf ball according to the point of impact
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
A63F 13/57 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
A63F 13/44 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment involving timing of operations, e.g. performing an action within a time slot
A video game system and method analyze virtual contact between an avatar and a virtual object within a video game. The point of contact of the virtual contact on the virtual object and/or the intensity of contact of the virtual contact may then be used to determine a subsequent virtual action to be performed within the video game. The virtual action, with any virtual movement thereof, may be carried out in a realistic manner within the video game by determining a virtual trajectory of the motion. The virtual trajectory may be determined using a motion model. The motion model may provide the virtual trajectory of the virtual object based at least in part on one or more parameters of the virtual object, such as a weight parameter. The motion model may be trained using training video clips with realistic motion of virtual objects.
A63F 13/573 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using trajectories of game objects, e.g. of a golf ball according to the point of impact
A device may access a feature vector generated based on interactions by a user with a video game. The device may access a cluster map comprising a mapping of user clusters, wherein each location within the cluster map is associated with a set of users whose feature vectors are within a threshold degree of similarity of each other. The cluster map may be generated using a plurality of extracted feature vectors obtained from interaction information. A device may determine a map location within the cluster map associated with the user based at least in part on the feature vector. A device may determine a target map location within the cluster map. A device may determine a guidance action based at least in part on the target map location and the map location associated with the user. A device may execute the guidance action.
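Determining the user's map location can be sketched as a nearest-centroid lookup; using raw distance to per-location centroids is a simplification of the "threshold degree of similarity" grouping described above, and the cluster names are hypothetical:

```python
def nearest_map_location(feature_vector, cluster_centroids):
    """Assign a player's feature vector to the closest cluster-map
    location by squared Euclidean distance to each location's centroid."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(cluster_centroids,
               key=lambda loc: sqdist(feature_vector, cluster_centroids[loc]))

centroids = {
    "explorers": (0.9, 0.1),
    "competitors": (0.1, 0.9),
}
location = nearest_map_location((0.8, 0.2), centroids)
# → "explorers"
```

A guidance action could then be chosen from the path between this location and the target map location.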
A63F 13/5375 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for graphically or textually suggesting an action, e.g. by displaying an arrow indicating a turn in a driving game
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
Embodiments of the present application provide an interactive computing system with a game development server that can host multiple game editing sessions and allow multiple game developer systems to work on the same game assets at the same time. The game development server can manage some or all change requests from game developers to make changes to the game data.
A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
G06F 16/176 - Support for shared access to files; File sharing support
The techniques described herein include using a system for enabling assisted gameplay in a computer game using real-time detection of predefined scene features and mapping of the detected features to recommended actions. For example, the system may generate a scanning query (e.g., a segment cast) toward a target area within a virtual scene, determine a geometric feature based on the scanning query, determine a scene feature based on the geometric feature, determine an action associated with the scene feature, and control an avatar based on the action. Examples of scene features that may have mappings to recommended actions include obstacles within a predicted trajectory of the avatar and transitions in the ground level of the virtual scene.
A63F 13/577 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
A63F 13/573 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using trajectories of game objects, e.g. of a golf ball according to the point of impact
A personalization system determines a playstyle associated with a player of a gaming system based on gameplay information for the player and, for example, generates personalized animation for the player based on the player's playstyle. The personalization system can receive gameplay data associated with a playstyle of a player in one or more games and receive persona data associated with the player and the gameplay. The personalization system can generate an animation for the player based on the gameplay data associated with the playstyle of the player in the one or more games, dynamically generate, based at least in part on a portion of the playstyle of the player, content including personalized animation, wherein the content including personalized animation is dynamically generated personalized content associated with the player, and transmit the content including personalized animation for presentation in a game associated with the player.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
An automated light rigging system (ALRS) adjusts, or creates, light rigs so that lighting from light rigs and light objects conforms to, or more closely conforms with, target lux values of a target lux map for one or more regions of a virtual interactive environment. The ALRS can receive light rig data and sample lux values of the virtual interactive environment to determine where a loss or discrepancy of luminance occurs within the virtual interactive environment, based at least in part on a target lux map.
Systems and methods for user interface navigation may include a computing system which causes display of a user interface including a set of first user interface elements. The computing system receives a request to expand a switcher menu including a set of second user interface elements including at least one of the first user interface elements. Each second user interface element, when selected within the switcher menu, is displayed with one or more respective third user interface elements. The computing system causes display of a set of third user interface elements corresponding to one of the second user interface elements, and causes, responsive to a second request to navigate from the switcher menu to one of the set of third user interface elements, the switcher menu to collapse and display the second user interface element with at least some of the third user interface elements.
The systems and methods described herein provide a bakeless keyframe animation solver that enables creation of keyframe poses by manipulation of a skeleton and authors animations through interpolation. A manipulation module enables the manipulation of joints of a skeleton to produce keyframes without the need to bake animations that are driven by a rig. An interpolation module uses the manipulation module to change the kinematic properties (e.g., FK and IK) of one or more joints when interpolating between one or more keyframes to create an animation.
Gameplay API (G-API) calls are embedded by an anomaly system to detect anomalous gameplay within a video game. Anomalous gameplay is detected by identifying anomalous sequences of G-API calls made during gameplay. Anomalous gameplay can correspond to issues that disrupt and/or degrade the user experience of a video game, such as the existence of a bug or exploit, or the use of cheats and/or bots by users of a video game. A machine learning embedding model within the anomaly system is trained to embed G-API calls corresponding to a video game. Once trained, distance analysis and distribution analysis are performed by the anomaly system on the embedded G-API calls to detect anomalies among the G-API calls made by a video game. Data corresponding to the detected anomalies can be included in a generated anomaly detection report for further analysis, such as by video game developers.
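The distance-analysis step can be sketched as follows. The centroid-plus-threshold rule here is a stand-in for the abstract's distance and distribution analyses, and the embeddings are toy 2D vectors rather than learned ones:

```python
def mean_distance(call_embeddings, centroid):
    """Average Euclidean distance of a session's G-API call embeddings
    from a 'normal gameplay' centroid."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return sum(dist(e, centroid) for e in call_embeddings) / len(call_embeddings)

def is_anomalous(call_embeddings, centroid, threshold):
    """Flag a call sequence whose embeddings sit too far, on average,
    from the learned picture of normal gameplay."""
    return mean_distance(call_embeddings, centroid) > threshold

normal_centroid = (0.0, 0.0)
session = [(0.1, 0.0), (0.0, 0.1)]        # embeddings close to normal
cheat_session = [(5.0, 5.0), (6.0, 4.0)]  # embeddings far from normal
```

Flagged sessions would then be collected into the anomaly detection report for developer review.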
An animation system is configured to accessibly curate selectable animations and/or stylized animations based in part on vocal audio data provided by a user during gameplay of a video game application. The vocal audio data is encoded by way of a machine learning model to produce and/or extract feature embeddings corresponding to the utterances among the vocal audio data. The feature embeddings are used in part to create a list of selectable animations and to create stylized animations that can be displayed to the user. In turn, the animation system enables users to use their voice to personalize their gameplay experience.
09 - Scientific and electric apparatus and instruments
41 - Education, entertainment, sporting and cultural services
Goods & Services
Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software. Entertainment services, namely, providing an online computer game; provision of information relating to electronic computer games provided via the Internet.
A method, device, and computer-readable storage medium for generating a shadow mesh. The method includes: receiving a graphics mesh; computing a set of LOD versions for each component of the graphics mesh, where each successive LOD version in the set of LOD versions includes fewer polygons than the preceding LOD version; computing a set of shadow versions for each component of the graphics mesh, where each successive shadow version in the set of shadow versions includes fewer polygons than the preceding shadow version, and each successive shadow version includes vertices that lie within a mesh defined by the preceding shadow version; generating N LOD meshes for the graphics mesh by selecting, for each LOD, an LOD version of each component to include in the LOD mesh; and generating a shadow mesh by selecting a shadow version of each component to include in the shadow mesh.
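The LOD-selection step lends itself to a short illustration. The sketch below (hypothetical names; mesh versions are stood in for by strings) assembles one mesh per LOD by picking each component's version at that LOD index, clamping when a component has fewer versions than there are LODs:

```python
def build_lod_meshes(component_versions, num_lods):
    """Assemble one mesh per LOD level.

    component_versions: list of per-component version lists, ordered
    from most to least detailed. For each LOD index, pick that index's
    version of every component, clamped to the component's last
    (coarsest) version when it has fewer versions than there are LODs.
    """
    meshes = []
    for lod in range(num_lods):
        meshes.append([versions[min(lod, len(versions) - 1)]
                       for versions in component_versions])
    return meshes
```

The same selection pattern would apply to the shadow versions, with the added nesting constraint the abstract describes.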
A method, computer-readable storage medium, and device for generating a master representation of input models. The method comprises: receiving a first base mesh and a second base mesh, wherein the first base mesh has a first topology and is associated with a first set of blendshapes to deform the first base mesh, the second base mesh has a second topology and is associated with a second set of blendshapes to deform the second base mesh, and the second topology is different from the first topology; combining the first topology and the second topology into a combined mesh topology representation; combining the first set of blendshapes and the second set of blendshapes into a combined blendshape representation; and outputting the combined mesh topology representation and the combined blendshape representation as a master representation, wherein the master representation can be queried with a target topology and blendshape.
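A minimal sketch of the combine-and-query flow described above, under the simplifying assumption that a topology is modeled as an ordered list of vertex names and a blendshape as a per-vertex offset map (all names hypothetical):

```python
def build_master_representation(mesh_a, shapes_a, mesh_b, shapes_b):
    """Merge two base-mesh topologies and their blendshape sets into
    one queryable master record (order-preserving vertex union)."""
    combined_topology = list(mesh_a)
    combined_topology += [v for v in mesh_b if v not in mesh_a]
    combined_shapes = {**shapes_a, **shapes_b}
    return {"topology": combined_topology, "blendshapes": combined_shapes}

def query_master(master, target_vertices, shape_name):
    """Return the requested blendshape restricted to a target topology."""
    shape = master["blendshapes"][shape_name]
    return {v: shape[v] for v in target_vertices if v in shape}
```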
A high-skilled-low-level detection system may detect high-skilled entities in low-level matches of an online game. The system may identify a plurality of entities that are within a first category of entities eligible to be matched by a matchmaking algorithm. The system may then determine respective feature sets based at least in part on gameplay data associated with the plurality of entities and perform anomaly detection on the respective feature sets. The system may then determine, based on the anomaly detection, an anomalous entity of the plurality of entities and cause the matchmaking algorithm to match the anomalous entity with other entities that are in a second category of entities.
A63F 13/798 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for assessing skills or for ranking players, e.g. for generating a hall of fame
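The anomaly-detection step on per-entity feature sets might look like the following toy per-feature z-score test. This is a stand-in for whatever detector the system actually uses, and every name is hypothetical:

```python
from statistics import mean, pstdev

def find_anomalous_entities(feature_sets, z_threshold=2.0):
    """Flag entities whose value in any feature dimension deviates from
    the population mean by more than z_threshold standard deviations.

    feature_sets: mapping of entity id -> tuple of numeric features
    (e.g. accuracy, win rate) derived from gameplay data.
    """
    ids = list(feature_sets)
    dims = len(next(iter(feature_sets.values())))
    anomalous = set()
    for d in range(dims):
        column = [feature_sets[i][d] for i in ids]
        mu, sigma = mean(column), pstdev(column)
        if sigma == 0:
            continue  # no variance in this feature: nothing to flag
        for i in ids:
            if abs(feature_sets[i][d] - mu) / sigma > z_threshold:
                anomalous.add(i)
    return anomalous
```

A flagged entity would then be routed by the matchmaker toward the second (higher-skill) category.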
A spectator system may provide for spectating in online gaming. The spectator system may receive, at a spectator server, game state data from a game simulation server hosting an online game for one or more players, generate one or more spectator game state data corresponding to one or more spectator devices and output the one or more spectator game state data to the spectator devices. The spectator server may further output the game state data to another spectator server.
A computing system may provide functionality for controlling an animated model to perform actions and to perform transitions therebetween. The system may determine, from among a plurality of edges from a first node of a control graph to respective other nodes of the control graph, a selected edge from the first node to a selected node. The system may then determine controls for an animated model in a simulation based at least in part on the selected edge, control data associated with the selected node, a current simulation state of the simulation, and a machine learned algorithm, determine an updated simulation state of the simulation based at least in part on the controls for the animated model, and adapt one or more parameters of the machine learned algorithm based at least in part on the updated simulation state and a desired simulation state.
A video game includes a single player mode where completion of storyline objectives advances the single player storyline. The video game also includes a multiplayer mode where a plurality of players can play on an instance of a multiplayer map. Storyline objectives from the single player mode are selected and made available for completion to players in the multiplayer mode, and the single player storylines can be advanced by players completing respective storyline objectives while playing in the multiplayer mode. Combinations of storyline objectives are selected from pending storyline objectives, for players connecting to a multiplayer game, based on compatibility with multiplayer maps. Constraints can be used to determine compatibility.
A63F 13/795 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for finding other players; Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for building a team; Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for providing a buddy list
A63F 13/335 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using Internet
A63F 13/47 - Controlling the progress of the video game involving branching, e.g. choosing one of several possible scenarios at a given point in time
A63F 13/48 - Starting a game, e.g. activating a game device or waiting for other players to join a multiplayer session
A63F 13/493 - Resuming a game, e.g. after pausing, malfunction or power failure
41 - Education, entertainment, sporting and cultural services
Goods & Services
Providing information on-line relating to computer games; providing a website featuring information regarding automobile and motor sports culture, competitions, and current events featuring automobiles and motor sports; Entertainment services, namely, providing information and news online relating to automobiles and motor sports
A system for use in video game development to generate expressive speech audio comprises a user interface configured to receive user-input text data and a user selection of a speech style. The system includes a machine-learned synthesizer comprising a text encoder, a speech style encoder and a decoder. The machine-learned synthesizer is configured to generate one or more text encodings derived from the user-input text data, using the text encoder of the machine-learned synthesizer; generate a speech style encoding by processing a set of speech style features associated with the selected speech style using the speech style encoder of the machine-learned synthesizer; combine the one or more text encodings and the speech style encoding to generate one or more combined encodings; and decode the one or more combined encodings with the decoder of the machine-learned synthesizer to generate predicted acoustic features. The system includes one or more modules configured to process the predicted acoustic features, the one or more modules comprising a machine-learned vocoder configured to generate a waveform of the expressive speech audio.
G10L 13/00 - Speech synthesis; Text to speech systems
A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
G06N 3/044 - Recurrent networks, e.g. Hopfield networks
A63F 13/63 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by the player, e.g. authoring using a level editor
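The combine step in the synthesizer above can be illustrated minimally. Concatenation is one common conditioning scheme, though the abstract does not specify the exact combination operator; the function name and shapes here are purely illustrative:

```python
def combine_encodings(text_encodings, style_encoding):
    """Combine per-token text encodings with a single speech-style
    encoding by concatenating the style vector onto each token vector,
    producing the combined encodings handed to the decoder."""
    return [list(t) + list(style_encoding) for t in text_encodings]
```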
77.
PLAYTESTING COVERAGE WITH CURIOSITY DRIVEN REINFORCEMENT LEARNING AGENTS
Systems and methods use curiosity-driven reinforcement learning agents to promote novel exploration of a virtual interactive environment. The data from the exploration can be stored in a buffer to determine, generate, and display visualizations in the virtual interactive environment. The visualizations can help identify issues in the virtual interactive environment and/or relationships between regions of the virtual interactive environment.
A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
The present disclosure provides a state stream game engine for a video game application. The state stream game engine can decouple the simulation of a video game application from its rendering. The simulation of the video game is handled by a simulation engine. The rendering of the video game is handled by a presentation engine. The data generated by the simulation engine can be communicated to the presentation engine using a state stream.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an encoded video stream for transmitting to a mobile phone or a thin client
A63F 13/352 - Details of game servers involving special game server arrangements, e.g. regional servers connected to a national server or a plurality of servers managing partitions of the game world
79.
Intelligent system for automated allocation of computing resources
Some embodiments herein disclose intelligent priority evaluators configured to perform a method that prioritizes tasks submitted by various users, even if the tasks are similarly classified. The scheduling system can collect, calculate, and use various criteria to determine a reward score in order to prioritize one task over another, such as for dynamic scheduling purposes. This can be performed in addition to or as a replacement for receiving user designations of priority.
In some embodiments, the dynamic animation generation system can provide a deep learning framework to produce a large variety of martial arts movements in a controllable manner from unstructured motion capture data. The system can imitate animation layering using neural networks, with the aim of overcoming challenges when mixing, blending, and editing movements from unaligned motion sources. The system can synthesize movements from given reference motions and simple user controls, generate unseen sequences of locomotion, and reconstruct signature motions of different fighters. To achieve this, the dynamic animation generation system can adopt a modular framework composed of a motion generator, which maps the trajectories of a number of key joints and the root trajectory to the full-body motion, and a set of control modules that map user inputs to such trajectories.
G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
A63F 13/573 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using trajectories of game objects, e.g. of a golf ball according to the point of impact
Various aspects of the subject technology relate to systems, methods, and machine-readable media for authenticating user accounts. The method includes authenticating at least one user account of a plurality of user accounts for lobbying into a session. The method includes generating at least one authenticated socket in a server for each authenticated user account. The method includes adding the at least one user account to the at least one authenticated socket. The method includes adding non-authenticated user accounts to non-authenticated sockets of a plurality of non-authenticated sockets. The method includes detecting network traffic flowing into the plurality of authenticated sockets and the plurality of non-authenticated sockets. The method includes disconnecting non-authenticated user accounts from the server when a threshold amount of suspicious and/or malicious network traffic is detected in a non-authenticated socket.
Embodiments of the systems and methods described herein provide a terrain generation and population system that can determine terrain population rules for terrain population objects and features when placing objects and features in a three dimensional virtual space. As such, the terrain generation and population system can generate realistic terrain for use in a game. The terrain generation and population system can receive an image, such as a satellite image, and utilize artificial intelligence to perform image segmentation at the pixel level to segment features and/or objects in the image. The game terrain system can automatically detect and apply feature and object masks based on the identified features and/or objects from the image segmentation. The game terrain system can place the features and/or objects in corresponding masks in the three dimensional space according to the application of terrain population rules.
Systems and methods are provided for technical improvements with respect to generation of facial expressions, facial riggings, and models for characters configured for use in electronic games. The systems and methods may generate a facial animation rig that can be used to generate realistic facial expressions based on analyzing data captured from real-world persons. Advantageously, the system may allow for substantially automated facial rig generation.
09 - Scientific and electric apparatus and instruments
28 - Games; toys; sports equipment
41 - Education, entertainment, sporting and cultural services
Goods & Services
Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software. Skateboards; skateboard wheels; skateboard trucks; skateboard decks. Entertainment services, namely, providing an online computer game; provision of information relating to electronic computer games provided via the Internet.
09 - Scientific and electric apparatus and instruments
Goods & Services
Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software.
41 - Education, entertainment, sporting and cultural services
Goods & Services
Entertainment services, namely, providing an online computer game; provision of information relating to electronic computer games provided via the Internet
A gaming system may allow for a user to capture and/or edit simulation state data of gameplay in a video game such that a replay of the gameplay may be rendered and/or shared. The gaming system may receive simulation state data and a request. The simulation state data may include simulation state(s) which include a model and pose state of an avatar corresponding to a player in a game simulation of a video game previously rendered as rendered view(s). The request may request a replay of the simulation state data with modification(s). The gaming system may modify the simulation state data to generate modified simulation state data and render, based on the modified simulation state data, replay view(s) that differ from the previously rendered view(s). The gaming system may then output the replay view(s) to a display of a computing device.
A63F 13/49 - Saving the game status; Pausing or ending the game
A63F 13/497 - Partially or entirely replaying previous game actions
A63F 13/525 - Changing parameters of virtual cameras
A63F 13/577 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
89.
Videogame telemetry data and game asset tracker for session recordings
Disclosed are issue tracking systems, troubleshooting techniques, and user interfaces for troubleshooting, which are associated with a user of a software application triggering the recording of a session. During the session, the user may narrate suggestions or problems for the application while they interact with the application in real-time, and a recording engine of the application may record various types of session data, such as the user's interaction, narration, telemetry data, call stack data, and so forth. The session data is automatically submitted to an issue tracking system to process a support ticket. The issue tracking system may provide a user interface enabling a developer to review a support ticket and any associated session data to quickly determine the relevant portion of the application data (e.g., underlying program code) that needs to be modified. The issue tracking system may also process session data to identify related tickets or recurring issues.
A63F 13/86 - Watching games played by other players
A63F 13/53 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
90.
Interactive videogame verification using cryptographically protected transaction records
An example method of performing interactive videogame verification using cryptographically protected transaction records includes: receiving, by a videogame server, from a first videogame client device, a first transaction record reflecting a first set of events associated with an interactive videogame session, wherein the first transaction record is cryptographically signed by a first private cryptographic key associated with the first videogame client device; receiving, from a second videogame client device, a second transaction record reflecting a second set of events associated with the interactive videogame session, wherein the second transaction record is cryptographically signed by a second private cryptographic key associated with the second videogame client device; and validating the first transaction record based on the second transaction record.
A63F 13/71 - Game security or game management aspects using secure communication between game devices and game servers, e.g. by encrypting game data or authenticating players
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
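A simplified sketch of the sign-and-cross-validate flow from the abstract above. Note the abstract describes per-client private (asymmetric) key signatures; purely for brevity this sketch substitutes HMAC with per-client secret keys, and all names are hypothetical:

```python
import hashlib
import hmac
import json

def sign_record(events, key):
    """Produce a signature over a client's transaction record (an HMAC
    stand-in for the private-key signature described in the abstract)."""
    payload = json.dumps(events, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def validate_records(record_a, sig_a, key_a, record_b, sig_b, key_b):
    """Server-side validation: both signatures must verify, and the two
    clients must report a consistent set of shared session events."""
    ok_a = hmac.compare_digest(sig_a, sign_record(record_a, key_a))
    ok_b = hmac.compare_digest(sig_b, sign_record(record_b, key_b))
    return ok_a and ok_b and record_a == record_b
```

Cross-checking one client's record against the other's is what lets the server detect a tampered or fabricated record even when its signature verifies.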
91.
Predicting the Appearance of Deformable Objects in Video Games
This specification describes a computer-implemented method of predicting the appearance of a deformable object in a video game. The method comprises determining a configuration of a moveable object underlying the deformable object. Input data is inputted into a machine-learning model. The input data comprises a representation of the configuration of the moveable object. A model output is generated as output of the machine-learning model for predicting the appearance of the deformable object. Mesh and texture data for the deformable object is determined from the model output. The deformable object is rendered using the generated mesh and texture data.
Systems and methods are disclosed for calculating dynamic ambient occlusion (AO) values for character models to yield high-quality approximations of global illumination effects. The approach utilizes a dual component machine-learning model that factorizes dynamic AO computation into a non-linear component, in which visibility is determined by approximating spheres and their casted shadows, and a linear component that leverages a skinning-like algorithm for efficiency. The parameters of both components are trained in a regression against ground-truth AO values. The resulting model accommodates lighting interactions with external objects and can be generalized without requiring carefully constructed training data.
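The dual-component structure can be sketched roughly as follows. The non-linear component approximates a sphere's occlusion by its subtended solid-angle ratio weighted by its alignment with the surface normal; the linear component blends per-sphere terms with learned weights. This is an illustrative approximation with hypothetical names, not the trained model from the disclosure:

```python
from math import sqrt

def sphere_occlusion(point, normal, center, radius):
    """Non-linear component: approximate how much a sphere occludes the
    hemisphere above `point`, via its solid-angle ratio (r^2/d^2) scaled
    by how directly it sits above the surface normal."""
    dx = [c - p for c, p in zip(center, point)]
    d = sqrt(sum(v * v for v in dx))
    if d <= radius:
        return 1.0  # shading point inside the sphere: fully occluded
    cos_term = max(0.0, sum(n * v for n, v in zip(normal, dx)) / d)
    return (radius ** 2 / d ** 2) * cos_term

def ambient_occlusion(point, normal, spheres, weights, bias=0.0):
    """Linear component: blend per-sphere occlusions with learned
    skinning-like weights (which a regression against ground-truth AO
    would supply), then return the clamped visibility value."""
    occ = bias + sum(w * sphere_occlusion(point, normal, c, r)
                     for w, (c, r) in zip(weights, spheres))
    return max(0.0, min(1.0, 1.0 - occ))
```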
Various aspects of the subject technology relate to systems, methods, and machine-readable media for contextual matchmaking. The method includes receiving player information for a plurality of players, the player information for each player comprising at least one of player statistics, player tendencies, and player preferences. The method also includes generating player archetypes for each of the plurality of players based on the player information, each of the player archetypes comprising at least one premier scenario. The method also includes matching the players based on the player archetypes. The method also includes generating in-game objectives based on the player archetypes.
A63F 13/73 - Authorising game programs or game devices, e.g. checking authenticity
A63F 13/795 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for finding other players; Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for building a team; Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for providing a buddy list
A63F 13/798 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for assessing skills or for ranking players, e.g. for generating a hall of fame
System and methods for using a deep learning framework to customize animation of an in-game character of a video game. The system can be preconfigured with animation rule sets corresponding to various animations. Each animation can be comprised of a series of distinct poses that collectively form the particular animation. The system can provide an animation-editing interface that enables a user of the video game to make modifications to at least one pose or frame of the animation. The system can realistically extrapolate these modifications across some or all portions of the animation. In addition or alternatively, the system can realistically extrapolate the modifications across other types of animations.
A63F 13/57 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
A63F 13/63 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by the player, e.g. authoring using a level editor
Systems and methods for performing a processing operation for a tiled image region are disclosed. The tiled image region may include a plurality of tiles or images. Further, the tiled image region may correspond to a plurality of image resolutions. A system may execute a game development application to perform the processing operation for the tiled image region. The system may identify the tiled image region corresponding to the processing operation. The system can utilize a texture array, a lookup texture, and a scaling factor to determine position data for the tiled image region. The system can then render a continuous image region that represents the tiled image region. The system can seamlessly process the continuous image region according to the processing operation and use the continuous image region to update the tiled image region.
A method, device, and computer-readable storage medium for retrieving a stored triangulation of a polygonal mesh. The method includes: receiving an input polygonal mesh that is a computer representation of a three-dimensional (3D) object; identifying one or more faces of the input polygonal mesh that have more than three edges; retrieving for each face of the one or more faces, integer counts of a number of triangles that are incident to each vertex of the face stored in face-vertex properties of the face, wherein a specific triangulation of the face is represented by the number of triangles that are incident to each vertex of the face; and generating a triangulated polygonal mesh corresponding to the input polygonal mesh by triangulating, for each face of the one or more faces, the face based on the integer counts of the number of triangles that are incident to each vertex of the face.
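The per-face encoding above is easy to illustrate in the storage direction: for a quad, the two possible diagonals yield distinct incident-triangle count patterns, so the stored counts pin down which triangulation was used. A minimal sketch (hypothetical names; vertices are indices into the face):

```python
def incident_triangle_counts(face, triangulation):
    """Encode a face's triangulation as, for each face vertex, the
    number of triangles incident to that vertex -- the compact
    face-vertex representation the abstract describes storing."""
    counts = {v: 0 for v in face}
    for tri in triangulation:
        for v in tri:
            counts[v] += 1
    return [counts[v] for v in face]
```

Splitting quad (0, 1, 2, 3) along diagonal 0-2 versus diagonal 1-3 produces different count lists, which is what makes the stored integers sufficient to recover the specific triangulation later.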
09 - Scientific and electric apparatus and instruments
41 - Education, entertainment, sporting and cultural services
Goods & Services
Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software. Online video gaming services; Provision of information relating to electronic computer games provided via the Internet.
Embodiments of an automated fraud detection system are disclosed that can detect user accounts that are engaging in unauthorized activities within a game application. The fraud detection system can provide an automated system that identifies parasitic accounts. The fraud detection system may identify patterns using machine learning based on characteristics, such as gameplay and transaction characteristics, associated with the parasitic user accounts. The fraud detection system may generate a model that can be applied to existing accounts within the game in order to automatically identify users that are engaging in unauthorized activities. The fraud detection system may automatically identify these parasitic accounts and implement appropriate actions to prevent the accounts from impacting legitimate users within the game application.
Embodiments of the present application provide a phased streaming system and process using a dynamic video game client. The dynamic video game client can utilize a state stream game engine in combination with a game application streaming service to provide users with the ability to begin playing games quickly on a wide range of devices.
A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
A63F 13/352 - Details of game servers involving special game server arrangements, e.g. regional servers connected to a national server or a plurality of servers managing partitions of the game world
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an encoded video stream for transmitting to a mobile phone or a thin client
The present disclosure provides a video game based seasonal reward distribution system. The seasonal reward system can provide users with a non-linear map that allows the users to choose how to progress through the reward map when advancing or leveling up a virtual character or user account within the video game. The virtual map can provide a visual representation of non-linear pathways or tracks that a user can follow based on how the user would like to proceed and what types of rewards the user prefers to unlock. The reward map provides a series of reward nodes connected by links, resulting in a plurality of pathways or tracks that a user can select during advancement within the video game. The user can select individual reward nodes when the virtual character levels up and progress along a pathway on the virtual map.
A63F 13/69 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by enabling or updating specific game elements, e.g. unlocking hidden features, items, levels or versions
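The node-and-link reward map above suggests a simple frontier computation: which nodes a player may select next are the neighbors of already-unlocked nodes. A minimal sketch under that assumption (all names hypothetical):

```python
def selectable_nodes(links, unlocked, start):
    """Return reward nodes the player may select next: neighbors of any
    already-unlocked node (or of the start node) that are not yet
    unlocked themselves.

    links: iterable of (node_a, node_b) pairs forming the reward map.
    """
    reached = set(unlocked) | {start}
    frontier = set()
    for a, b in links:
        if a in reached and b not in reached:
            frontier.add(b)
        if b in reached and a not in reached:
            frontier.add(a)
    return frontier
```

Each level-up, the player picks one node from this frontier, which grows the unlocked set and extends the chosen pathway through the map.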