Systems and methods are provided for implementing an improved mapping process to help identify disparate information associated with a software application in separately stored files. In this way, the information may remain separate and distinct, oftentimes assigned to different teams, devices, and locations, and still be used to create a software application from the disparate information. For example, the system can generate a graph that comprises nodes identifying various information/functions from disparate data sources and edges identifying relationships between that information. Using the graph, the system may receive a query from a user device and generate a response to the query, where the graph can help narrow the search space in determining the response.
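As a rough illustration of the graph-backed query narrowing the abstract describes (the patent does not specify an implementation, and every name below is hypothetical), a minimal Python sketch:

```python
from collections import defaultdict

# Hypothetical sketch: nodes stand for information/functions drawn from
# disparate sources; edges record relationships between them.
class InfoGraph:
    def __init__(self):
        self.edges = defaultdict(set)  # node -> related nodes
        self.payload = {}              # node -> source metadata

    def add(self, node, metadata):
        self.payload[node] = metadata

    def relate(self, a, b):
        self.edges[a].add(b)
        self.edges[b].add(a)

    def narrow(self, seed_nodes, hops=1):
        """Restrict the search space to nodes within `hops` of the seeds."""
        frontier, seen = set(seed_nodes), set(seed_nodes)
        for _ in range(hops):
            frontier = {n for f in frontier for n in self.edges[f]} - seen
            seen |= frontier
        return {n: self.payload[n] for n in seen}

g = InfoGraph()
g.add("render.cfg", {"team": "graphics", "file": "configs/render.cfg"})
g.add("render_init", {"team": "engine", "file": "src/render.c"})
g.relate("render.cfg", "render_init")
print(g.narrow({"render.cfg"}))  # candidate nodes for a render-related query
```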
09 - Scientific and electric apparatus and instruments
41 - Education, entertainment, sporting and cultural services
Goods & Services
Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software. Entertainment services, namely, providing on-line computer games; entertainment services, namely, providing computer games accessed and played via mobile and cellular phones and other wireless devices; provision of information relating to electronic computer games provided via the Internet; organizing, conducting and operating video game competitions and tournaments; entertainment services in the nature of arranging of electronic sports and video game contests, games, tournaments and competition; entertainment services, namely, providing a website featuring non-downloadable videos featuring live video game tournaments played by video game players; Sporting and cultural activities.
3.
SYSTEM FOR IDENTIFYING VISUAL ANOMALIES AND CODING ERRORS WITHIN A VIDEO GAME
A visual anomaly detection system can test a video game under test. The testing can involve applying captured video frames to a large language model using dynamically generated prompts. The captured video frames can be obtained directly from an output port of a user computing system, enabling the video frames to be applied to the machine learning model without modification and with minimal to no user intervention. Additionally, the systems disclosed herein can control the user computing system hosting the video game under test, enabling the test system to react to test results in real-time or near real-time (e.g., within milliseconds, while the video game is executing, before a next action is performed with respect to the video game, and the like) and to modify the testing process as tests are being performed.
A method of generating a three-dimensional (3D) model includes obtaining a set of two-dimensional (2D) images of a scene acquired by one or more cameras from a plurality of camera angles at a plurality of camera positions. Each 2D image corresponds to a respective camera angle and a respective camera position. The method further includes obtaining the respective camera angle and the respective camera position for each 2D image, and generating one or more semantic masks from the set of 2D images. Each semantic mask corresponds to a class of one or more objects in the scene. The method further includes training a neural radiance field (NeRF) model, using the set of 2D images and the one or more semantic masks as a training dataset, to obtain a trained NeRF model. The trained NeRF model is an implicit 3D model of the one or more objects in the scene.
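A full NeRF training loop is beyond an abstract-length sketch, but the training dataset the method describes (2D images, per-image camera angle and position, and per-class semantic masks) might be organized as below. The field names and the `mask_fn` hook are assumptions, not part of the patent:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class NerfTrainingExample:
    image: np.ndarray            # (H, W, 3) RGB view of the scene
    camera_angle: np.ndarray     # (3, 3) rotation matrix for this view
    camera_position: np.ndarray  # (3,) camera position for this view
    semantic_masks: dict         # class name -> (H, W) boolean mask

def build_dataset(images, rotations, positions, mask_fn, classes):
    """Pair each 2D image with its pose and one semantic mask per class."""
    return [
        NerfTrainingExample(
            image=img,
            camera_angle=rot,
            camera_position=pos,
            semantic_masks={c: mask_fn(img, c) for c in classes},
        )
        for img, rot, pos in zip(images, rotations, positions)
    ]
```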
A system may perform motion capture using motion capture targets with concave reflector structures. For example, the motion capture target may include a target body and a plurality of tracking markers located on respective portions of the surface of the target body. At least one tracking marker of the plurality of tracking markers may be a concave reflector structure including a tapered hole in the surface of the target body, and at least a portion of a surface of the tapered hole may be reflective.
A system may provide gameplay complexity assistance in gaming. The system may operate, during gameplay of a game including a set of controls for a player of the game, a simulated player model to provide gameplay complexity assistance for the player, including inputting a game state of the game to the simulated player model to cause the simulated player model to generate at least one simulated control corresponding to at least one control of the set of controls for the player, and receiving the at least one simulated control input from the simulated player model. The system may then utilize, in the gameplay of the game, the at least one simulated control input from the simulated player model as a player input of the corresponding one of the set of controls of the player.
A63F 13/422 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle automatically for the purpose of assisting the player, e.g. automatic braking in a driving game
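One reading of the claim is that the simulated player model emits a control signal that can stand in for, or be blended with, the player's own input. A minimal sketch under that assumption, with the model and game-state interfaces left abstract:

```python
def assisted_control(game_state, player_input, simulated_player_model,
                     assist_weight=0.5):
    """Blend simulated controls with the player's input.

    `simulated_player_model` is assumed to map a game state to a dict of
    control values in [-1, 1]; this interface is hypothetical.
    """
    simulated = simulated_player_model(game_state)
    blended = {}
    for control, player_value in player_input.items():
        sim_value = simulated.get(control, player_value)
        blended[control] = ((1 - assist_weight) * player_value
                            + assist_weight * sim_value)
    return blended

# Example: braking assistance in a driving game (cf. A63F 13/422).
model = lambda state: {"brake": 1.0 if state["corner_ahead"] else 0.0}
print(assisted_control({"corner_ahead": True},
                       {"brake": 0.2, "steer": 0.1}, model))
```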
7.
SYSTEMS AND METHODS FOR EYE MODELING AND IRIS TEXTURING
A method for generating a three-dimensional (3D) model of a head is disclosed. One or more images of the head are obtained and the head includes eyes. A parametric model for the eyes that includes a set of parameters is retrieved. Values are assigned for each parameter in the set of parameters of the parametric model for the eyes based on the one or more images. Eye patch areas of areas surrounding the eyes are generated based on the values of the parameters in the set of parameters of the parametric model for the eyes. The 3D model of the head that includes the eyes and the eye patch areas is generated. The eyes are normalized to be spaced a fixed distance apart from one another in the 3D model, and a size of the head in the 3D model is scaled based on the fixed distance between the eyes.
This specification describes a method for generating background audio in a video game. The method is implemented by one or more processors and the method comprises: obtaining, by one or more of the processors, text data comprising text for speech audio that is to be present in the background audio; obtaining, by one or more of the processors, contextual data comprising data descriptive of an environment in the video game; and generating, by one or more of the processors, the background audio based upon processing the text data and the contextual data using one or more machine learning models.
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
G10L 15/02 - Feature extraction for speech recognition; Selection of recognition unit
G10L 15/18 - Speech classification or search using natural language modelling
G10L 15/183 - Speech classification or search using natural language modelling using context dependencies, e.g. language models
This specification describes systems, methods and apparatus for generating computer game levels using machine learning. According to a first aspect of this specification, there is described a computer implemented method comprising: extracting, from a known computer game level in a training dataset of known computer game levels for a computer game, a set of level features; processing, using an encoder neural network model, the known computer game level to generate an embedding of the known computer game level; processing, using a decoder neural network model, the embedding of the known game level and the set of level features to generate data indicative of a candidate computer game level for the computer game; determining a value of an objective function based on the data indicative of the candidate computer game level; and updating parameters of the encoder model and/or decoder model based at least in part on the value of the objective function.
A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
This specification describes a method for generating audio for a video game. The method is implemented by one or more processors. The method comprises: obtaining, by one or more of the processors, acoustic feature data comprising a value for one or more audio characteristics; selecting, by one or more of the processors, a first latent embedding from a codebook of latent embeddings based upon processing the acoustic feature data using an acoustic machine learning model; and generating, by one or more of the processors, an output audio sample based upon the selected first latent embedding.
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
G10L 19/00 - Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
G10L 19/032 - Quantisation or dequantisation of spectral components
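Selecting "a first latent embedding from a codebook of latent embeddings" reads like a vector-quantization lookup: the acoustic model maps the feature values to a query vector and the nearest codebook entry is chosen. A numpy sketch, assuming Euclidean distance and a stand-in model:

```python
import numpy as np

def select_codebook_embedding(acoustic_features, acoustic_model, codebook):
    """Pick the codebook latent nearest the acoustic model's output.

    acoustic_model: assumed to map feature values to a (D,) query vector.
    codebook: (K, D) array holding K latent embeddings.
    """
    query = acoustic_model(acoustic_features)             # (D,)
    distances = np.linalg.norm(codebook - query, axis=1)  # (K,)
    return codebook[np.argmin(distances)]

rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 64))                 # K=512 latents, D=64
model = lambda feats: np.asarray(feats, dtype=float)  # stand-in model
latent = select_codebook_embedding(rng.normal(size=64), model, codebook)
```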
11.
VIDEO GAME TESTING AND GAMEPLAY FEEDBACK USING EYE TRACKING
A device, as implemented by an interactive computing system configured with specific computer-executable instructions, may capture one or more image frames of a video game; receive, from one or more sensors, eye tracking information associated with a user playing the video game; associate the eye tracking information with the one or more image frames; identify at least a first frame based at least in part on the eye tracking information; identify at least one feature of interest within the first frame based on the eye tracking information; and output an indication associated with the at least one feature of interest.
G06F 11/36 - Prevention of errors by analysis, debugging or testing of software
A63F 13/77 - Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
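The central association step, matching timestamped gaze samples to captured frames, could look like the following sketch; timestamp-based matching and the fixation threshold are assumptions rather than claimed details:

```python
import bisect

def associate_gaze_with_frames(frame_times, gaze_samples):
    """Attach each gaze sample to the most recent captured frame.

    frame_times: sorted list of frame capture timestamps (seconds).
    gaze_samples: list of (timestamp, x, y) tuples from the eye tracker.
    """
    per_frame = {i: [] for i in range(len(frame_times))}
    for t, x, y in gaze_samples:
        i = bisect.bisect_right(frame_times, t) - 1
        if i >= 0:
            per_frame[i].append((x, y))
    return per_frame

def frames_of_interest(per_frame, min_fixations=3):
    """Flag frames that attracted at least `min_fixations` gaze samples."""
    return [i for i, pts in per_frame.items() if len(pts) >= min_fixations]
```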
A gaming system may allow a user to capture simulation state data of gameplay in a video game such that, upon occurrence of a cinematic rendering event, cinematic rendered views of the gameplay may be rendered. Specifically, the gaming system may receive simulation state data and determine, based thereon, that a cinematic rendering event occurred. The gaming system may then receive previously stored simulation state data and render and output a plurality of cinematic rendered views based at least in part on a cinematic rendering timeline, the one or more simulation states of the simulation state data, and the one or more prior simulation states of the previously stored simulation state data. The cinematic rendering timeline may include a first shot and a second shot that include different configurations for rendering corresponding portions of the plurality of cinematic rendered views.
A video game animation method comprises generating, for each of one or more entities to be animated, a position sequence for use in animating the movement of the entity along one or more paths in a virtual environment. The position sequence defines a position in the virtual environment at each of a plurality of time steps. Generating the position sequence comprises accessing one or more regions of position data, each region of position data comprising position data items for successive positions along a respective one of the one or more paths.
A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
Systems and methods are provided for performing patch calculation and block scanning on both a current/existing build and a target build at a server or other location that is remote from a client computing device, where patch calculation/generation is conventionally performed. Patch calculation may be performed using scan block sizes that are smaller than what is conventionally used, and is not limited to files (source/target) having the same name. Additionally, adjacent blocks of data representative of binary resources may be concatenated, and block edges can be scanned to determine whether still other data/resources could be used for the target build.
A63F 13/77 - Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
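A stripped-down version of the block-scanning idea (hash fixed-size blocks of the current build so that matching blocks in the target build can be copied locally instead of downloaded) might look like this; the block size and hash choice are illustrative only:

```python
import hashlib

def block_hashes(data: bytes, block_size: int = 1024):
    """Map content hash -> offset for every fixed-size block in a build."""
    return {
        hashlib.sha256(data[i:i + block_size]).digest(): i
        for i in range(0, len(data), block_size)
    }

def patch_ops(current: bytes, target: bytes, block_size: int = 1024):
    """Emit copy ops for blocks already in `current`, raw data otherwise.

    Matching is by content hash, so blocks can be reused regardless of
    which file they came from, echoing the abstract's point that the
    process is not limited to files sharing a name.
    """
    known = block_hashes(current, block_size)
    ops = []
    for i in range(0, len(target), block_size):
        block = target[i:i + block_size]
        digest = hashlib.sha256(block).digest()
        if digest in known:
            ops.append(("copy", known[digest], len(block)))
        else:
            ops.append(("data", block))
    return ops
```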
A method for matchmaking of game players includes receiving player data associated with a user account, receiving match state data associated with a match of the game, and based on the player data and the match state data, extracting engagement prediction features. The method further includes providing, as an input to an engagement prediction model, the engagement prediction features, and receiving, as an output from the engagement prediction model, a predicted engagement metric, the predicted engagement metric being based on the engagement prediction features. The method further includes providing the predicted engagement metric as an input to a matchmaking system and receiving from the matchmaking system a decision whether to match the user account to the match of the game for gameplay.
A63F 13/795 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for finding other players; Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for building a team; Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for providing a buddy list
A63F 13/798 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for assessing skills or for ranking players, e.g. for generating a hall of fame
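The decision flow described (features in, predicted engagement out, matchmaking decision downstream) might be wired together as follows; the feature names, model interface, and threshold are all invented for illustration:

```python
def extract_engagement_features(player_data, match_state):
    """Hypothetical features drawn from player data and match state data."""
    return [
        player_data["recent_sessions"],
        player_data["skill_rating"] - match_state["avg_skill"],
        match_state["open_slots"],
    ]

def matchmaking_decision(player_data, match_state, engagement_model,
                         threshold=0.5):
    """Match the account only if predicted engagement clears a threshold."""
    features = extract_engagement_features(player_data, match_state)
    predicted_engagement = engagement_model(features)  # assumed in [0, 1]
    return predicted_engagement >= threshold

model = lambda f: 0.8 if abs(f[1]) < 100 else 0.2  # stand-in model
print(matchmaking_decision({"recent_sessions": 5, "skill_rating": 1500},
                           {"avg_skill": 1480, "open_slots": 2}, model))
```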
16.
GENERATIVE MODEL FOR CANONICAL AND LOCALIZED GAME CONTENT
The present disclosure provides a system for generating gameplay content using a generative modeling system. The system can generate gameplay content via one or more machine-learning models trained using game and player data. The system can add content generated by the one or more machine-learning models to the game and player data and retrain the models using the generated content. The system can also localize generated content based on player locations and language preferences.
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
The present disclosure provides a system for customizing virtual entities via a multi-track system. The system can generate a user interface that uses multiple tracks to manage and display virtual entities and virtual objects. The virtual entity can be in-line with a track, and virtual display objects can move along tracks that intersect with the virtual entity track. When a particular virtual display object intersects with the virtual entity track, a three-dimensional representation of an item associated with the particular virtual display object can be rendered with the virtual entity.
A63F 13/63 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by the player, e.g. authoring using a level editor
18.
FACILITATING COMMUNICATION BETWEEN COMPUTING PLATFORMS
Facilitating communication between computing platforms, including establishing, via a game agnostic communication service (GACS), communication with a first and a second computing platform; receiving i) first game state data associated with a first video game application executing at the first computing platform and ii) second game state data associated with a second video game application executing at the second computing platform; receiving a request to display, at the first computing platform, a first communication interface of the GACS; generating content for the first communication interface based at least in part on the first or the second game state data; causing display of the generated content via the first communication interface; receiving a selection of one of the communication messages to transmit to the second computing platform; and transmitting the selected communication message to the second computing platform for display via a second communication interface of the GACS at the second computing platform.
This specification provides a system comprising: one or more computing devices; and one or more storage devices communicatively coupled to the one or more computing devices. The one or more storage devices store instructions that, when executed by the one or more computing devices, cause the one or more computing devices to perform operations comprising: receiving input data derived from speech audio; generating facial animation data, comprising processing the input data and a conditioning input using a machine-learned generative model; generating further animation data, comprising processing the input data using a further machine-learned generative model; and generating animation data for at least a face in a video game using the facial animation data and the further animation data, wherein the animation data animates at least the face in the video game in accordance with speech sounds of the speech audio.
G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
G10L 25/30 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the analysis technique using neural networks
A system may provide for converting pull-based program code into push-based program code. The system may receive source code comprising pull-based programming language instructions, wherein the pull-based programming language instructions comprise a plurality of nodes connected by a plurality of edges to form a directed graph, and convert the source code into push-based programming language instructions at least in part by traversing the pull-based programming language instructions to determine one or more scopes of the directed graph including respective groups of one or more nodes of the plurality of nodes, the determining of the one or more scopes associated with one or more conditional nodes of the plurality of nodes, and generating the push-based programming language instructions for the source code based on the one or more scopes of the directed graph.
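At its core, turning pull-based (demand-driven) dataflow into push-based code means walking the directed graph and emitting updates in dependency order so that each node pushes values to its consumers. A toy sketch over an adjacency-list graph; the scope analysis around conditional nodes that the abstract emphasizes is omitted here:

```python
from collections import defaultdict, deque

def to_push_order(nodes, edges):
    """Topologically order a dataflow graph (edges run producer -> consumer)
    so values can be pushed from producers to consumers."""
    indegree = {n: 0 for n in nodes}
    consumers = defaultdict(list)
    for src, dst in edges:
        consumers[src].append(dst)
        indegree[dst] += 1
    ready = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for m in consumers[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                ready.append(m)
    return order  # emit push statements in this order

# a -> c, b -> c, c -> d: push a and b first, then c, then d.
print(to_push_order(["a", "b", "c", "d"],
                    [("a", "c"), ("b", "c"), ("c", "d")]))
```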
A gaming device may provide for machine learning (ML)-based detection of events in video game execution. The gaming device may receive video game execution data of a video game, including data generated by instrumented code of the video game during execution of the video game, a rendered output of the video game during the execution of the video game, and telemetry data of the video game generated during the execution of the video game. The gaming device may then configure a ML model to at least one of detect or predict a type of event in the execution of the video game using at least the data generated by the instrumented code, the rendered output of the video game, and the telemetry data as training data for the ML model.
The systems and processes described herein can provide dynamic and realistic route generation based on actual route data within the game environment. The system provides for generating a route database for use with a sports simulation game application. The present disclosure also provides for generation of routes during runtime of the game application. The route generation system can help address the problem of generating realistic and lifelike routes based on real life movements of athletes.
A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
A63F 13/57 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
Various aspects of the subject technology relate to systems, methods, and machine-readable media for determining player skill for video games. The method includes aggregating a plurality of player statistics for match outcomes from a plurality of video games. The method also includes calculating, for each player in a pool of players, a matchmaking rating for each player based on the plurality of player statistics, the matchmaking rating for each player comprising a predicted number of points each player will contribute to a match. The method also includes selecting, based on the matchmaking rating for each player, players from the pool of players. The method also includes matching the players based on the matchmaking rating for each player, a sum of the matchmaking ratings comprising a total predicted team score for the match.
A63F 13/798 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for assessing skills or for ranking players, e.g. for generating a hall of fame
A63F 13/795 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for finding other players; Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for building a team; Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for providing a buddy list
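Because the rating is a predicted point contribution, a team's predicted score is just the sum over its members. The sketch below pairs that idea with a naive greedy balancer, which stands in for whatever selection procedure the method actually uses:

```python
def balance_teams(ratings, team_size):
    """Split players into two teams with similar predicted total points.

    ratings: player -> predicted points contributed (the matchmaking
    rating from the abstract). The greedy strategy is illustrative only.
    """
    ordered = sorted(ratings, key=ratings.get, reverse=True)
    team_a, team_b = [], []
    for p in ordered[:2 * team_size]:
        a_score = sum(ratings[q] for q in team_a)
        b_score = sum(ratings[q] for q in team_b)
        if (a_score <= b_score and len(team_a) < team_size) \
                or len(team_b) >= team_size:
            team_a.append(p)
        else:
            team_b.append(p)
    return team_a, team_b

ratings = {"p1": 12.0, "p2": 9.5, "p3": 8.0, "p4": 11.0, "p5": 5.0, "p6": 7.5}
print(balance_teams(ratings, team_size=3))  # totals 27.5 vs 25.5
```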
A gaming system may allow a user to capture and/or edit simulation state data of gameplay in a video game such that a replay of the gameplay may be rendered and/or shared. The gaming system may receive simulation state data and a request. The simulation state data may include simulation state(s) which include a model and pose state of an avatar corresponding to a player in a game simulation of a video game previously rendered as rendered view(s). The request may request a replay of the simulation state data with modification(s). The gaming system may modify the simulation state data to generate modified simulation state data and render, based on the modified simulation state data, replay view(s) that differ from the previously rendered view(s). The gaming system may then output the replay view(s) to a display of a computing device.
A63F 13/497 - Partially or entirely replaying previous game actions
A63F 13/525 - Changing parameters of virtual cameras
A63F 13/577 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
25.
SYSTEMS AND METHODS FOR HANDLING BEVELS IN MESH SIMPLIFICATION
A method, device, and computer-readable storage medium for simplifying a mesh including bevels. The method includes: receiving a polygonal mesh representing a three-dimensional (3D) object; identifying a set of edges in the polygonal mesh as bevel edges; performing a mesh simplification operation on the polygonal mesh to generate a simplified mesh, wherein the mesh simplification operation removes at least one edge that includes a vertex of a bevel edge, and wherein two vertices in the polygonal mesh are collapsed to a collapse vertex in the simplified mesh; and updating stored normals of the collapse vertex based on copying stored normals of the two vertices removed from the polygonal mesh to the collapse vertex.
A method, device, and computer-readable storage medium for simplifying a convex hull are disclosed. A first queue of candidate vertices of a convex hull for vertex removal is generated, wherein the candidate vertices are sorted in the first queue by ascending values of a first cost metric associated with removal of the candidate vertex. A second queue of candidate faces of the convex hull for face removal is generated, wherein the candidate faces are sorted in the second queue by ascending values of a second cost metric associated with removal of the candidate face. A simplification operation is performed on the convex hull to generate a simplified version of the convex hull by performing a vertex removal operation on the candidate vertex in the first queue with lowest first cost metric or performing a face removal operation on the candidate face in the second queue with lowest second cost metric.
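The two sorted queues can be modeled with heaps keyed by cost, with each simplification step popping whichever queue offers the cheaper operation. A structural sketch; the cost metrics and the actual removal operations are placeholders:

```python
import heapq

def simplify_hull(vertices, faces, vertex_cost, face_cost, steps):
    """Alternate cheapest vertex removal and cheapest face removal.

    vertex_cost / face_cost: callables standing in for the first and
    second cost metrics named in the abstract.
    """
    # Index tiebreakers keep heap entries comparable when costs tie.
    vq = [(vertex_cost(v), i, v) for i, v in enumerate(vertices)]
    fq = [(face_cost(f), i, f) for i, f in enumerate(faces)]
    heapq.heapify(vq)
    heapq.heapify(fq)
    removals = []
    for _ in range(steps):
        if vq and (not fq or vq[0][0] <= fq[0][0]):
            cost, _, v = heapq.heappop(vq)
            removals.append(("vertex", v, cost))  # vertex removal op here
        elif fq:
            cost, _, f = heapq.heappop(fq)
            removals.append(("face", f, cost))    # face removal op here
        else:
            break
    return removals
```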
41 - Education, entertainment, sporting and cultural services
Goods & Services
Entertainment services, namely, providing an online computer game; provision of information relating to electronic computer games provided via the Internet
09 - Scientific and electric apparatus and instruments
Goods & Services
Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software
09 - Scientific and electric apparatus and instruments
41 - Education, entertainment, sporting and cultural services
Goods & Services
Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software. Entertainment services, namely, providing an online computer game; provision of information relating to electronic computer games provided via the Internet
31.
SYSTEM FOR GENERATING ANIMATION WITHIN A VIRTUAL ENVIRONMENT
The present disclosure describes the use of machine learning to address the process of motion synthesis and the generation of intermediate poses for virtual entities. A transformer-based model can be used to generate intermediate poses for an animation based on a set of key frames.
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
32.
SYSTEM FOR GENERATING ANIMATION WITHIN A VIRTUAL ENVIRONMENT
The present disclosure describes the use of machine learning to address the process of motion synthesis and the generation of intermediate poses for virtual entities. A transformer-based model can be used to generate intermediate poses for an animation based on a set of key frames.
Embodiments of systems and methods are disclosed for enabling access to an online game, modifying user progress within the online game, monitoring user interactions with the online game, or adjusting user gameplay with the online game, via multiple platforms. The multiple platforms may include virtual reality platforms and non-virtual reality platforms.
A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
A63F 13/285 - Generating tactile feedback signals via the game input device, e.g. force feedback
A63F 13/73 - Authorising game programs or game devices, e.g. checking authenticity
A63F 13/77 - Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
A63F 13/211 - Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
This specification describes a method for testing a user interface of a video game, the method implemented by one or more processors, the method comprising: obtaining, by one or more of the processors, a screenshot of the video game; processing, by one or more of the processors, the screenshot of the video game to detect one or more user interface elements; and performing, by one or more of the processors, one or more actions in the video game based upon the detected one or more user interface elements for testing the user interface of the video game.
G06F 11/36 - Prevention of errors by analysis, debugging or testing of software
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
G06T 7/70 - Determining position or orientation of objects or cameras
G06V 20/62 - Text, e.g. of license plates, overlay texts or captions on TV images
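The loop the method describes (screenshot, detect UI elements, act) can be sketched as below; `capture_screenshot`, `detect_ui_elements`, and `send_click` are hypothetical hooks into the game under test, and the element schema is invented:

```python
import time

def ui_test_loop(capture_screenshot, detect_ui_elements, send_click,
                 max_steps=100, poll_interval=0.5):
    """Drive a UI test: screenshot -> detect elements -> perform an action.

    detect_ui_elements is assumed to return dicts such as
    {"label": "Play", "x": 640, "y": 360}.
    """
    visited = set()
    for _ in range(max_steps):
        frame = capture_screenshot()
        untried = [e for e in detect_ui_elements(frame)
                   if e["label"] not in visited]
        if not untried:
            break  # no unexercised elements remain on screen
        target = untried[0]
        send_click(target["x"], target["y"])
        visited.add(target["label"])
        time.sleep(poll_interval)  # let the game react before re-capturing
    return visited
```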
35.
Expressive speech audio generation for video games
This specification describes a computer-implemented method of training a machine-learned speech audio generation system to generate predicted acoustic features for generated speech audio for use in a video game. The training comprises receiving one or more training examples. Each training example comprises: (i) ground-truth acoustic features for speech audio, (ii) speech content data representing speech content of the speech audio, and (iii) speech expression data representing speech expression of the speech audio. Parameters of the machine-learned speech audio generation system are updated by: (i) minimizing a measure of difference between the predicted acoustic features for a training example and the corresponding ground-truth acoustic features of the training example, and (ii) minimizing a measure of difference between the predicted prosodic features for the training example and the corresponding ground-truth prosodic features for the training example.
G10L 13/00 - Speech synthesis; Text to speech systems
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
The systems and methods described herein provide for an automated photosensitivity detection system (PDS) configured to automatically execute processes for flash detection and pattern detection of a video. PDS outputs an analysis result for each type of pattern detection analysis for the video. The PDS can execute each type of pattern detection analysis independently of the other pattern detection processes. Each pattern detection process is a distinct process that can be calculated without reference to the other processes. The final analysis result can aggregate the results of each detection process executed by the PDS.
A gaming system may provide for interactable environment geometry (IEG) detection. The gaming system may determine a sensory perspective of a sense of a non-player character (NPC) in a virtual environment of a game simulation, generate perception data of the NPC from the sensory perspective of the sense, input the perception data of the NPC into a detection model associated with the sense and receive, from the detection model, detection data for a detected item. The gaming system may then generate an awareness-based character decision for the NPC based on the detection data of the detected item.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
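A possible shape for the per-sense perception pass: obtain perception data from the NPC's sensory perspective, run the sense-specific detection model, and fold the detections into a decision. Every interface below is an assumption:

```python
def npc_perception_step(npc, world, render_from, detection_models, decide):
    """One awareness update for an NPC across all of its senses.

    render_from(npc, world, sense): perception data from the NPC's
        sensory perspective (e.g., a viewport image for sight).
    detection_models: sense name -> model returning detection data.
    decide: maps the NPC and its detections to an awareness-based
        character decision.
    """
    detections = []
    for sense, model in detection_models.items():
        perception_data = render_from(npc, world, sense)
        detections.extend(model(perception_data))
    return decide(npc, detections)
```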
This specification describes a computer-implemented method of generating context-dependent speech audio in a video game. The method comprises obtaining contextual information relating to a state of the video game. The contextual information is inputted into a prosody prediction module. The prosody prediction module comprises a trained machine learning model which is configured to generate predicted prosodic features based on the contextual information. Input data comprising the predicted prosodic features and speech content data associated with the state of the video game is inputted into a speech audio generation module. An encoded representation of the speech content data dependent on the predicted prosodic features is generated using one or more encoders of the speech audio generation module. Context-dependent speech audio is generated, based on the encoded representation, using a decoder of the speech audio generation module.
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
G10L 13/02 - Methods for producing synthetic speech; Speech synthesisers
G10L 19/04 - Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
G10L 25/30 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the analysis technique using neural networks
G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination
40.
SYSTEM FOR RENDERING SKIN TONE WITHIN A GAME APPLICATION ENVIRONMENT
The present disclosure provides a system for rendering skin tones of virtual entities using dynamic lighting systems within the virtual environment. The dynamic lighting system can be used to modify parameters of light sources within a game environment to increase the range of renderable skin tones of a virtual entity.
A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
Downloadable computer software for identifying, monitoring, and reporting cheating in video games; downloadable computer software for ensuring compliance and integrity in video games; downloadable computer software for monitoring and analyzing video game play; downloadable computer software for monitoring and analyzing computer systems; downloadable computer software for monitoring and analyzing video game systems; downloadable computer software for monitoring and managing a gaming community to prevent cheating; downloadable computer software for preventing cheating in video games; downloadable computer software for preventing video game players from utilizing cheat codes; downloadable computer software for preventing video game players from obtaining an unfair advantage by using third-party tools; downloadable computer software for preventing video game players from using unauthorized third-party tools; downloadable anti-cheat game software; downloadable computer software for detecting, eradicating and preventing computer viruses; downloadable computer software for ensuring the security of software applications, games, and video and audio files; downloadable computer software packages for ensuring the security of software applications, games, and video and music files; downloadable computer software for game security and to prevent hacking; downloadable computer software for protecting video and computer games from security breaches. Providing non-downloadable computer software for identifying, monitoring, and reporting cheating in video games; providing non-downloadable computer software for ensuring compliance and integrity in video games; providing non-downloadable computer software for monitoring and analyzing video game play; providing non-downloadable computer software for monitoring and analyzing computer systems; providing non-downloadable computer software for monitoring and analyzing video game systems; providing non-downloadable computer software for monitoring and managing a gaming community to prevent cheating; providing non-downloadable computer software for preventing cheating in video games; providing non-downloadable computer software for preventing video game players from utilizing cheat codes; providing non-downloadable computer software for preventing video game players from obtaining an unfair advantage by using third-party tools; providing non-downloadable computer software for preventing video game players from using unauthorized third-party tools; providing non-downloadable anti-cheat game software; providing non-downloadable computer software for detecting, eradicating and preventing computer viruses; providing non-downloadable computer software for ensuring the security of software applications, games, and video and audio files; providing non-downloadable computer software packages for ensuring the security of software applications, games, and video and music files; providing non-downloadable computer software for game security and to prevent hacking; providing non-downloadable computer software for protecting video and computer games from security breaches
09 - Scientific and electric apparatus and instruments
41 - Education, entertainment, sporting and cultural services
Goods & Services
Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software. Entertainment services, namely, providing an online computer game; provision of information relating to electronic computer games provided via the Internet
43.
ENHANCED SYSTEM FOR GENERATION AND OPTIMIZATION OF FACIAL MODELS AND ANIMATION
Systems and methods are provided for enhanced animation generation based on generative modeling. An example method includes training models based on faces and information associated with persons, with the modeling system being trained to reconstruct expressions, textures, and models of persons.
In response to receiving a user input command for sending a contextually aware communication, a computer system is configured to use game state data to determine a target location that a player is focusing on in a virtual environment in a video game, identify a unit that the player likely wants to communicate about based on at least priorities of unit types and proximities of units to the target location, and select a communication action for performance. Different communication actions can be performed in response to the same user input command when the game state data indicates different game states.
A63F 13/5372 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
A63F 13/23 - Input arrangements for video game devices for interfacing with the game device, e.g. specific interfaces between game controller and console
A63F 13/5378 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for displaying an additional top view, e.g. radar screens or maps
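The unit-selection rule (prefer higher-priority unit types, break ties by proximity to the player's focus point) is concrete enough to sketch directly. The priority table and radius below are illustrative:

```python
import math

# Hypothetical priorities: lower number = communicated about first.
UNIT_PRIORITY = {"enemy_hero": 0, "objective": 1, "enemy_minion": 2,
                 "ally": 3}

def pick_unit_to_communicate(units, target_location, max_radius=15.0):
    """Choose the unit the player most likely means near their focus point.

    units: list of (unit_type, (x, y)) tuples from game state data.
    """
    def dist(pos):
        return math.dist(pos, target_location)

    nearby = [(t, p) for t, p in units if dist(p) <= max_radius]
    if not nearby:
        return None
    return min(nearby, key=lambda u: (UNIT_PRIORITY.get(u[0], 99),
                                      dist(u[1])))

units = [("enemy_minion", (2.0, 1.0)), ("enemy_hero", (5.0, 4.0))]
print(pick_unit_to_communicate(units, target_location=(3.0, 2.0)))
# -> the enemy hero wins on priority despite the minion being closer
```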
45.
Skin microstructure texture filtering for runtime facial animations
A method of skin microstructure texture filtering for facial animation includes obtaining a plurality of one-dimensional (1D) filtered tiles corresponding to a plurality of filter axis angles and a plurality of filter parameters applied to a neutral tile. At runtime, for each pixel representing a region of human skin, the method includes determining a principal direction of deformation, a principal filter parameter corresponding to the principal direction of deformation, and a secondary filter parameter corresponding to a secondary direction of deformation orthogonal to the principal direction of deformation; selecting a first 1D filtered tile among the plurality of 1D filtered tiles, the first 1D filtered tile corresponding to the secondary direction of deformation and the secondary filter parameter; and generating a respective two-dimensional (2D) filtered tile by convolving the first 1D filtered tile with a second 1D filter kernel corresponding to the principal direction of deformation and the principal filter parameter.
A system is disclosed that is able to combine motion capture data with volumetric capture data to capture player style information for a player. This player style information or player style data may be used to modify animation models used by a video game to create a more realistic look and feel for a player being emulated by the video game. This more realistic look and feel can enable the game to replicate the play style of a player. For example, one soccer player may run with his elbows closer to his body and his forearms swinging across his torso, while another soccer player, who is perhaps more muscular, may run with his elbows and arms farther from his body, and his forearms may not cross in front of his torso when running.
A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
The disclosure provides a video playback system for use within a game application and/or other interactive computing environments. The video playback system can be used to capture gameplay during execution of a game application. The captured gameplay video can be processed and stored within the game application or in a network accessible location.
A63F 13/86 - Watching games played by other players
A63F 13/20 - Input arrangements for video game devices
A63F 13/25 - Output arrangements for video game devices
A63F 13/30 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
A63F 13/32 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using local area network [LAN] connections
A63F 13/33 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections
A63F 13/332 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using wireless networks, e.g. cellular phone networks
A63F 13/335 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using Internet
This specification describes a computer-implemented method of training a machine-learned speech audio generation system for use in video games. The training comprises: receiving one or more training examples. Each training example comprises: (i) ground-truth acoustic features for speech audio, (ii) speech content data representing speech content of the speech audio, and (iii) a ground-truth speaker identifier for a speaker of the speech audio. Parameters of the machine-learned speech audio generation system are updated to: (i) minimize a measure of difference between the predicted acoustic features of a training example and the corresponding ground-truth acoustic features of the training example, (ii) maximize a measure of difference between the first speaker classification for the training example and the corresponding ground-truth speaker identifier of the training example, and (iii) minimize a measure of difference between the second speaker classification for the training example and the corresponding ground-truth speaker identifier of the training example.
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
G10L 13/02 - Methods for producing synthetic speechSpeech synthesisers
G10L 17/04 - Training, enrolment or model building
This specification describes systems, methods, and apparatus for policy models for selecting an action in a game environment based on persona data, as well as the use of said models. According to one aspect of this specification, there is described a computer-implemented method of controlling an agent in an environment, the method comprising, for a plurality of timesteps in a sequence of timesteps: inputting, into a machine-learned policy model, input data comprising a current state of the environment and an auxiliary input, the auxiliary input indicating a target action style for the agent; processing, by the machine-learned policy model, the input data to select an action for a current timestep; performing, by the agent in the environment, the selected action; and determining, subsequent to the selected action being performed, an update to the current state of the environment.
Various aspects of the subject technology relate to systems, methods, and machine-readable media for preventing rendering of a character in a video game. The method includes receiving an action regarding a first character rendered in a first-person point of view (POV), wherein the POV of the first character is changed from the first-person POV to a third-person POV. The method includes detecting a change in the POV of the first character. The method includes determining that characters are outside the first character's field of view (FOV) in the first-person POV and would be within the FOV of the first character in the third-person POV. The method includes changing the POV of the first character from the first-person POV to the third-person POV. The method includes causing rendering of the video game in the third-person POV of the first character, the rendering preventing rendering of the other characters.
An imitation learning system may learn how to play a video game based on user interactions by a tester or other user of the video game. The imitation learning system may develop an imitation learning model based, at least in part, on the tester's interaction with the video game and the corresponding state of the video game to determine or predict actions that may be performed when interacting with the video game. The imitation learning system may use the imitation learning model to control automated agents that can play additional instances of the video game. Further, as the user continues to interact with the video game during testing, the imitation learning model may continue to be updated. Thus, over time, the interactions by the automated agents with the video game may closely mimic the interactions by the user, enabling multiple tests of the video game to be performed simultaneously.
A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
G06F 11/36 - Prevention of errors by analysis, debugging or testing of software
G06N 3/088 - Non-supervised learning, e.g. competitive learning
Various aspects of the subject technology relate to systems, methods, and machine-readable media for generating insights for video games. The method includes gathering information regarding a player for a plurality of video games, the information comprising at least one of in-world state data, player action data, player progression data, and/or real-world events relevant to each video game. The method also includes tracking events in at least one video game of the plurality of video games, the events comprising an action event or a standby event. The method also includes determining that an event of the tracked events is an action event. The method also includes generating insights regarding the action event based on the information gathered regarding the player, the insights being directed to improving the player's performance in the video game. The method also includes relaying the insights to the player to improve the player's performance in the video game.
A63F 13/58 - Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
A63F 13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
A63F 13/424 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving acoustic input signals, e.g. by using the results of pitch or rhythm extraction or voice recognition
A63F 13/795 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for finding other players; Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for building a team; Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for providing a buddy list
A63F 13/798 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for assessing skills or for ranking players, e.g. for generating a hall of fame
A63F 13/85 - Providing additional services to players
This specification describes a computing system for generating visual assets for video games. The computing system comprises an image segmentation model, a first 3D generation model, and a second 3D generation model. At least one of the first 3D generation model and the second 3D generation model comprises a machine-learning model. The system is configured to obtain: (i) a plurality of images corresponding to the visual asset, each image showing a different view of an object to be generated in the visual asset, and (ii) orientation data for each image that specifies an orientation of the object in the image. A segmented image is generated for each image. This comprises processing the image using the image segmentation model to segment distinct portions of the image into one or more classes of a predefined set of classes. For each image, 3D shape data is generated for a portion of the object displayed in the image. This comprises processing the segmented image of the image, the orientation data of the image, and style data for the visual asset using the first 3D generation model. 3D shape data is generated for the visual asset. This comprises processing the generated 3D shape data of each image using the second 3D generation model.
Various aspects of the subject technology relate to systems, methods, and machine-readable media for rendering audio via a game engine for a game. Various aspects may include determining sound source reverb metrics and listener reverb metrics. Aspects may include determining reverbs within a reverb possibility space for all rooms or spaces of the game rendered by the game engine. Aspects may also include determining sound tuning parameters describing reverb attenuation over distance. Aspects may include calculating acoustic parameters based on the reverb metrics, relative positions, and sound tuning parameters. Aspects may include rendering audio according to a fit of determined reverbs to the acoustic parameters.
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
H04S 7/00 - Indicating arrangements; Control arrangements, e.g. balance control
09 - Scientific and electric apparatus and instruments
41 - Education, entertainment, sporting and cultural services
Goods & Services
Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software. Entertainment services, namely, providing an online computer game; provision of information relating to electronic computer games provided via the Internet.
A player profile management system collects player data from various systems and generates and manages player profiles. A snapshot pipeline of the player profile management system generates a snapshot player profile associated with a player. The player profile management system receives, after generating the snapshot player profile associated with the player, player data associated with the player. An update pipeline of the player profile management system generates, based on the snapshot player profile and the player data associated with the player, an update player profile associated with the player.
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an encoded video stream for transmitting to a mobile phone or a thin client
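The snapshot/update split suggests an expensive full rebuild paired with a cheap incremental merge. A sketch of how the two pipelines might relate; the profile fields are invented:

```python
def snapshot_pipeline(all_player_events):
    """Full rebuild: fold every stored event into a snapshot profile."""
    profile = {"matches": 0, "wins": 0}
    for event in all_player_events:
        profile["matches"] += 1
        profile["wins"] += 1 if event.get("won") else 0
    return profile

def update_pipeline(snapshot_profile, new_player_data):
    """Incremental update: apply only data received after the snapshot."""
    updated = dict(snapshot_profile)
    for event in new_player_data:
        updated["matches"] += 1
        updated["wins"] += 1 if event.get("won") else 0
    return updated

snap = snapshot_pipeline([{"won": True}, {"won": False}])
print(update_pipeline(snap, [{"won": True}]))  # {'matches': 3, 'wins': 2}
```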
41 - Education, entertainment, sporting and cultural services
Goods & Services
Entertainment services, namely, providing an online computer game; provision of information relating to electronic computer games provided via the Internet
09 - Scientific and electric apparatus and instruments
Goods & Services
Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software
The specification relates to the generation of in-game animation data and the evaluation of in-game animations. According to a first aspect of the present disclosure, there is described a computer implemented method comprising: inputting, into one or more neural network models, input data comprising one or more current pose markers indicative of a current pose of an in-game object, one or more target markers indicative of a target pose of an in-game object and an object trajectory of the in-game object; processing, using the one or more neural networks, the input data to generate one or more intermediate pose markers indicative of an intermediate pose of the in-game object positioned between the current pose and the target pose; outputting, from the one or more neural networks, the one or more intermediate pose markers; and generating, using the one or more intermediate pose markers, an intermediate pose of the in-game object, wherein the intermediate pose of the in-game object corresponds to a pose of the in-game object at an intermediate frame of in-game animation between a current frame of in-game animation in which the in-game object is in the current pose and a target frame of in-game animation in which the in-game object is in the target pose.
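A minimal sketch of one in-betweening inference step, assuming the trained network is an opaque callable over a concatenated input vector (the concrete encoding is not specified above):

```python
import numpy as np

def generate_inbetween(model, current_markers, target_markers, trajectory):
    """Predict intermediate pose markers for one in-between frame.

    `model` is the trained network, treated as a callable from the
    concatenated input vector to a flat vector of marker positions.
    """
    x = np.concatenate([np.ravel(current_markers),
                        np.ravel(target_markers),
                        np.ravel(trajectory)])
    # The network outputs markers for the intermediate pose, which a
    # downstream step turns into a full pose for the intermediate frame.
    return np.reshape(model(x), np.shape(current_markers))
```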
The present disclosure provides a system for generating and rendering virtual objects, such as mesh particles, using dynamic color blending within the virtual environment. Mesh particles may be divided into portions; for example, a portion of a mesh particle may be a single pixel or a group of pixels. The color of each portion of a mesh particle can be determined dynamically.
Systems and methods are provided for enhanced animation generation based on generative control models. An example method includes accessing an autoencoder trained based on character control information generated using motion capture data, the character control information indicating, at least, trajectory information associated with the motion capture data, and the autoencoder being trained to reconstruct, via a latent feature space, the character control information. First character control information associated with a trajectory of an in-game character of an electronic game is obtained. A latent feature representation is generated and the latent feature representation is modified. A control signal is output to a motion prediction network for use in updating a character pose of the in-game character.
A collusion detection system may detect collusion between entities participating in online gaming. The collusion detection system may identify a plurality of entities associated with and opponents within an instance of an online game, determine social data associated with the plurality of entities, determine in-game behavior data associated with the plurality of entities, and determine, for one or more pairings of the plurality of entities, respective pairwise feature sets based at least in part on the social data and the in-game behavior data. The collusion detection system may then perform anomaly detection on the respective pairwise feature sets and, in response to the anomaly detection detecting one or more anomalous pairwise feature sets, output one or more suspect pairings of the plurality of entities corresponding to the one or more anomalous pairwise feature sets as suspected colluding pairings.
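A compact sketch of the pairwise approach, using scikit-learn's IsolationForest as a stand-in anomaly detector; the feature construction (simple concatenation) and per-entity feature dictionaries are assumptions.

```python
from itertools import combinations
import numpy as np
from sklearn.ensemble import IsolationForest

def suspect_pairings(entities, social, behavior, contamination=0.05):
    """Return suspected colluding pairs among opponents.

    `social[e]` and `behavior[e]` are per-entity feature vectors; the
    pairwise feature set here is their simple concatenation.
    """
    pairs = list(combinations(entities, 2))
    X = np.array([
        np.concatenate([social[a], behavior[a], social[b], behavior[b]])
        for a, b in pairs
    ])
    # IsolationForest labels anomalous rows -1; those pairs are output
    # as suspected colluding pairings.
    labels = IsolationForest(contamination=contamination, random_state=0).fit_predict(X)
    return [pair for pair, label in zip(pairs, labels) if label == -1]
```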
A persona system determines a player persona for a player of a gaming system based on gameplay information for the player and, for example, performs dynamic content generation or additional product recommendations based on the player persona. The persona system may receive a request for content based on a persona of a player and receive gameplay data associated with gameplay of the player in a plurality of games. The persona system may then generate a player persona of the player based on the gameplay data associated with the gameplay of the player in the plurality of games, determine persona-based content based at least in part on a portion of the player persona, and output the persona-based content in response to the request.
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A63F 13/69 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by enabling or updating specific game elements, e.g. unlocking hidden features, items, levels or versions
65.
CHARACTER CONTROLLERS USING MOTION VARIATIONAL AUTOENCODERS (MVAES)
Some embodiments herein can include methods and systems for predicting next poses of a character within a virtual gaming environment. The pose prediction system can identify a current pose of a character, generate a Gaussian distribution representing a sample of likely poses based on the current pose, and apply the Gaussian distribution to the decoder. The decoder can be trained to generate a predicted pose based on a Gaussian distribution of likely poses. The system can then render the predicted next pose of the character within the three-dimensional virtual gaming environment. Advantageously, the pose prediction system can apply a decoder that does not include or use the input motion capture data that was used to train the decoder.
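At inference time the loop described above might reduce to something like the following, with the decoder treated as an opaque callable; the conditioning interface is an assumption.

```python
import numpy as np

def predict_next_pose(decoder, current_pose, latent_dim, rng=None):
    """Sample the latent space and decode the next pose.

    Only the trained decoder is needed at inference time; the motion
    capture data used for training plays no part here.
    """
    rng = rng or np.random.default_rng()
    # Sample from a standard Gaussian over the space of likely poses.
    z = rng.standard_normal(latent_dim)
    # The decoder is conditioned on the current pose and latent sample.
    return decoder(current_pose, z)
```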
In a video game, a player's character can start in a normal state, receive first damage, and change to an incapacitated state. The player's character can be revived from the incapacitated state back to the normal state. The player's character can be changed from the incapacitated state to a preliminarily defeated state, and in response, a player respawn activation item can be generated. The player respawn activation item can be used by the player's teammates to respawn the player's character at one or more respawn locations.
A63F 13/58 - Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
A63F 13/5378 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for displaying an additional top view, e.g. radar screens or maps
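A toy state machine for the normal / incapacitated / preliminarily-defeated flow; the state names and single-item respawn token are illustrative, not the claimed design.

```python
from enum import Enum, auto

class CharState(Enum):
    NORMAL = auto()
    INCAPACITATED = auto()
    PRELIM_DEFEATED = auto()

class PlayerCharacter:
    def __init__(self):
        self.state = CharState.NORMAL
        self.respawn_item = None

    def take_damage(self):
        if self.state is CharState.NORMAL:
            self.state = CharState.INCAPACITATED
        elif self.state is CharState.INCAPACITATED:
            # Preliminary defeat generates the respawn activation item
            # that teammates can use at a respawn location.
            self.state = CharState.PRELIM_DEFEATED
            self.respawn_item = "respawn_activation_item"

    def revive(self):
        # Teammates can revive an incapacitated character directly.
        if self.state is CharState.INCAPACITATED:
            self.state = CharState.NORMAL

    def teammate_respawn(self, location):
        # Using the item respawns the character at the chosen location.
        if self.state is CharState.PRELIM_DEFEATED and self.respawn_item:
            self.respawn_item = None
            self.state = CharState.NORMAL
            return location
```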
09 - Scientific and electric apparatus and instruments
35 - Advertising and business services
41 - Education, entertainment, sporting and cultural services
Goods & Services
Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software. Administration of loyalty programs. Entertainment services, namely, providing an online computer game; provision of information relating to electronic computer games provided via the Internet.
Methods, apparatus and systems are provided for generating an interactive non-player character (NPC) scene for a computer game environment of a video game. Changes are detected in relation to a script associated with the interactive NPC scene. For each NPC, a set of NPC data associated with the interactions said each NPC has within the script is generated corresponding to the changes. The generated set of NPC data is processed with an NPC rig associated with said each NPC to generate an NPC asset. A camera solver is applied to a region of the computer game environment associated with the script for determining locations of NPC assets and one or more cameras within said region in relation to said interactive NPC scene. Data representative of said each NPC asset and said determined NPC asset and camera locations is then output for use by a game development engine in generating said interactive NPC scene.
A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
A63F 13/5258 - Changing parameters of virtual cameras by dynamically adapting the position of the virtual camera to keep a game object or game character in its viewing frustum, e.g. for tracking a character or a ball
A63F 13/57 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
G10L 13/027 - Concept to speech synthesisers; Generation of natural phrases from machine-based concepts
69.
VERSION AGNOSTIC CENTRALIZED STATE MANAGEMENT IN VIDEO GAMES
A state system providing version agnostic centralized state management can use node graphs corresponding to virtual entities to maintain a world state across any version of a video game. As the states of virtual entities change, corresponding nodes of the node graph are updated in response to account for and store each state change. As a data structure referencing, associating, and/or corresponding to the virtual entities themselves, the node graph can facilitate centralized state management for a video game in a version agnostic manner. Additionally, the state system is configured to validate node dependencies of a node graph when a corresponding change in state of a corresponding virtual entity occurs during gameplay, to avoid and/or prevent game state errors.
G06F 8/71 - Version control; Configuration management
A63F 13/77 - Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
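In miniature, the node-graph bookkeeping with dependency validation could look like this; the node structure and validation rule are assumptions for illustration.

```python
class StateNode:
    def __init__(self, entity_id, state=None, depends_on=()):
        self.entity_id = entity_id
        self.state = state or {}
        self.depends_on = list(depends_on)  # entity_ids this node relies on

class WorldStateGraph:
    def __init__(self):
        self.nodes = {}  # entity_id -> StateNode

    def add_entity(self, node):
        self.nodes[node.entity_id] = node

    def update_state(self, entity_id, change):
        # Store the state change on the node for the virtual entity...
        node = self.nodes[entity_id]
        node.state.update(change)
        # ...then validate node dependencies to prevent game state errors.
        missing = [d for d in node.depends_on if d not in self.nodes]
        if missing:
            raise ValueError(f"{entity_id} depends on unknown nodes: {missing}")
```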
A party system of a gaming environment provides to users of the gaming environment parties of sizes that extend beyond the party size limitations of a video game. An extendable party can dynamically create subset parties of users for gameplay as needed, without requiring the party to be disbanded. Therefore, an extendable party allows a subset of users, or multiples thereof, to enter into gameplay while maintaining the party as a whole. A party system, video game, or gaming environment can be configured to apply one or more rules or policies to a party that limit or alter a feature or function of an extended party, such as to prevent players among extended parties from providing or receiving competitive advantages.
A63F 13/795 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for finding other players; Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for building a team; Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for providing a buddy list
A63F 13/80 - Special adaptations for executing a specific game genre or game mode
71.
DYNAMIC INDICATIONS OF NAVIGATION ASSIST IN VIRTUAL INTERACTIVE ENVIRONMENTS
Systems and methods are presented herein for providing assist indications in an interactive virtual environment. Game data of a game session of a virtual interactive environment is received. Based in part on the game data, a navigation assist used in the game session is identified. An assist indication to render is determined based on the game data and the navigation assist. The assist indication is configured for rendering during runtime. The assist indication is rendered in the virtual interactive environment of the game session.
A63F 13/5375 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for graphically or textually suggesting an action, e.g. by displaying an arrow indicating a turn in a driving game
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A method comprises determining positions of render strands based on a simulation model of simulation strands. Each simulation strand corresponds to a render strand. For a first range of values of a metric up to a first threshold value, the simulation model is determined at a first simulation level using a first set of simulation strands. For a second range of values of the metric from a second threshold value, the simulation model is determined at a second simulation level using a subset of the first set of simulation strands. For metric values between the first and second threshold values, a transition between the first and second simulation levels comprises computing the simulation model at the first simulation level. Positions of the render strands during the transition are derived from the first set of simulation strands having a first weight, and the second set of simulation strands having a second weight.
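During the transition, render-strand positions could be blended with metric-dependent weights, for example with a linear ramp between the two thresholds (the actual weighting scheme is not specified):

```python
import numpy as np

def blend_strand_positions(pos_level1, pos_level2, metric, t1, t2):
    """Blend render-strand positions between the two simulation levels.

    pos_level1: positions driven by the full set of simulation strands
    pos_level2: positions driven by the reduced subset, mapped onto the
        same render strands
    t1, t2: the first and second threshold values of the metric
    """
    # Second-level weight ramps from 0 at t1 to 1 at t2.
    w2 = np.clip((metric - t1) / (t2 - t1), 0.0, 1.0)
    w1 = 1.0 - w2
    return w1 * np.asarray(pos_level1) + w2 * np.asarray(pos_level2)
```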
A system may perform animation retargeting that may allow an existing animation to be repurposed for a different skeleton and/or a different environment geometry from that associated with the existing animation. The system may input, to a machine learning (ML) retargeting model, an input animation, a target skeleton, and environment geometry data of an environment for a predicted animation, wherein the ML retargeting model is configured to generate the predicted animation based on those inputs, and may receive, from the ML retargeting model, the predicted animation.
Embodiments of the present application provide systems and methods for world prediction within a game application environment. The systems and methods can include a world prediction module for predicting collisions between virtual objects in the game application environment. The world prediction module can use game state data to simulate the virtual objects into future instances. The world prediction module can parse the future instances to find collisions in a farfield representation of the virtual objects and collisions in a nearfield representation of the virtual objects. The world prediction module can use collision information to update a game engine of the game application.
A63F 13/577 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
A63F 13/57 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
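The two-phase look-ahead could be sketched as below, with all four callables standing in for engine internals:

```python
def predict_collisions(state, simulate, farfield_pairs, nearfield_hit, steps):
    """Look ahead `steps` ticks and collect predicted collisions.

    `simulate` advances the game state one tick; `farfield_pairs` yields
    candidate object pairs from the coarse representation; and
    `nearfield_hit` confirms contact on the detailed representation.
    """
    collisions = []
    for step in range(steps):
        state = simulate(state)              # future instance of the world
        for a, b in farfield_pairs(state):   # cheap coarse pass prunes pairs
            if nearfield_hit(a, b):          # detailed pass confirms contact
                collisions.append((step, a, b))
    return collisions  # collision info fed back to the game engine
```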
75.
SYSTEM FOR AUTOMATED GENERATION OF FACIAL SHAPES FOR VIRTUAL CHARACTER MODELS
Systems and methods are provided for enhanced face shape generation for virtual entities based on generative modeling techniques. An example method includes training models based on synthetically generated faces and information associated with an authoring system, the modeling system being trained to reconstruct face shapes for virtual entities based on a latent space embedding of a face identity.
A quality analysis tool for visual-programming scripting languages uses machine learning to process changes from visual-programming environments. The quality analysis tool can receive data associated with a code submission via a visual-programming scripting language, process the data to identify features in the data that correspond to previously identified defects, apply a pattern matching algorithm to the identified features, determine a risk prediction based on a learned pattern recognition model associated with a pattern in the features, and transmit a notice of predicted risk. The quality analysis tool can train models for use with visual-programming scripting languages and visual-programming environments.
A method, device, and computer-readable storage medium for generating a proxy mesh are disclosed. The method includes: receiving a reference mesh, wherein the reference mesh comprises a polygonal mesh that is a computer representation of a three-dimensional (3D) object; computing quadrics corresponding to the reference mesh; receiving a second polygonal mesh, wherein the second polygonal mesh comprises a polygonal mesh generated based on the reference mesh; transferring the quadrics corresponding to the reference mesh to the second polygonal mesh; and generating a proxy mesh based on the quadrics corresponding to the reference mesh transferred to the second polygonal mesh.
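For concreteness, the quadric of a triangle's supporting plane follows the standard Garland-Heckbert construction, and a nearest-vertex rule is one simple (assumed) way to transfer quadrics to the second mesh:

```python
import numpy as np

def face_quadric(v0, v1, v2):
    """4x4 quadric of a triangle's supporting plane (Garland-Heckbert)."""
    n = np.cross(v1 - v0, v2 - v0)
    n = n / np.linalg.norm(n)
    p = np.append(n, -np.dot(n, v0))  # plane coefficients [a, b, c, d]
    return np.outer(p, p)

def transfer_quadrics(ref_vertices, ref_quadrics, second_vertices):
    """Assign each second-mesh vertex the quadric of its nearest
    reference vertex."""
    ref_vertices = np.asarray(ref_vertices)
    return [ref_quadrics[int(np.argmin(np.linalg.norm(ref_vertices - v, axis=1)))]
            for v in np.asarray(second_vertices)]
```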
A gaming system may provide for interactable environment geometry (IEG) detection. The gaming system may detect one or more IEG features in an area of a virtual environment of a game that includes an avatar of a player; determine, for an IEG feature of the one or more IEG features, one or more unprocessed potential interactions that are valid for the IEG feature, where determining that an individual unprocessed potential interaction is a valid interaction for the IEG feature is based on corresponding criteria of the individual unprocessed potential interaction; and determine, based at least in part on a position of the avatar in the virtual environment, whether the valid interaction for the IEG feature is available to the avatar.
A63F 13/63 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by the player, e.g. authoring using a level editor
A63F 13/55 - Controlling game characters or game objects based on the game progress
A63F 13/573 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using trajectories of game objects, e.g. of a golf ball according to the point of impact
A63F 13/577 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
79.
SYSTEM AND METHODS FOR AUTOMATED LIGHT RIGGING IN VIRTUAL INTERACTIVE ENVIRONMENTS
An automated light rigging system (ALRS) adjusts, or creates, light rigs so that lighting from light rigs and light objects conforms, or more closely conforms, to target lux values of a target lux map for one or more regions of a virtual interactive environment. The ALRS can receive light rig data and sample lux values of the virtual interactive environment to determine where a loss or discrepancy of luminance occurs within the virtual interactive environment, based at least in part on the target lux map.
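A minimal sketch of the sampling step: comparing sampled lux values against the target lux map to locate regions where luminance falls short. The grid representation is an assumption.

```python
import numpy as np

def lux_discrepancy(sampled_lux, target_lux_map):
    """Locate regions where sampled luminance falls short of the target.

    Both inputs are arrays over the same grid of regions; positive
    entries mark where the rig needs adjustment (or a light added).
    """
    loss = np.asarray(target_lux_map) - np.asarray(sampled_lux)
    return np.clip(loss, 0.0, None)
```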
Systems and methods for user interface navigation may include a computing system which causes display of a user interface including a set of first user interface elements. The computing system receives a request to expand a switcher menu including a set of second user interface elements including at least one of the first user interface elements. Each second user interface element, when selected within the switcher menu, is displayed with one or more respective third user interface elements. The computing system causes display of a set of third user interface elements corresponding to one of the second user interface elements, and causes, responsive to a second request to navigate from the switcher menu to one of the set of third user interface elements, the switcher menu to collapse and display the second user interface element with at least some of the third user interface elements.
The systems and methods described herein provide a bakeless keyframe animation solver that enables creation of keyframe poses by manipulation of a skeleton and authors animations through interpolation. A manipulation module enables the manipulation of joints of a skeleton to produce keyframes without the need to bake animations that are driven by a rig. An interpolation module uses the manipulation module to change the kinematic properties (e.g., FK and IK) of one or more joints when interpolating between one or more keyframes to create an animation.
Gameplay API (G-API) calls are embedded by an anomaly system to detect anomalous gameplay in a video game. Anomalous gameplay is detected by identifying anomalous sequences of G-API calls made during gameplay. Anomalous gameplay can correspond to issues that disrupt and/or degrade the user experience of a video game, such as the existence of a bug or exploit, or the use of cheats and/or bots by users of a video game. A machine learning embedding model in the anomaly system is trained to embed G-API calls corresponding to a video game. Once trained, the anomaly system performs a distance analysis and a distribution analysis on embedded G-API calls to detect anomalies in the G-API calls made by a video game. Data corresponding to the detected anomalies can be included in a generated anomaly detection report for further analysis, such as by video game developers.
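One simple realization of the distance and distribution analyses is a k-nearest-neighbor distance with a z-score cutoff over the embedded calls; the concrete statistics used by the anomaly system are not disclosed.

```python
import numpy as np

def anomalous_calls(embeddings, k=5, threshold=3.0):
    """Flag embedded G-API call sequences that sit far from the rest.

    Distance analysis: mean distance to the k nearest neighbors.
    Distribution analysis: z-score of that distance over all sequences.
    """
    X = np.asarray(embeddings)                       # shape (n, d)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                      # ignore self-distance
    knn = np.sort(d, axis=1)[:, :k].mean(axis=1)     # distance analysis
    z = (knn - knn.mean()) / knn.std()               # distribution analysis
    return np.flatnonzero(z > threshold)             # indices for the report
```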
An animation system is configured to accessibly curate selectable animations and/or stylized animations based in part on vocal audio data provided by a user during gameplay of a video game application. The vocal audio data is encoded by way of a machine learning model to produce and/or extract feature embeddings corresponding to the utterances in the vocal audio data. The feature embeddings are used in part to create a list of selectable animations and to create stylized animations that can be displayed to the user. In turn, the animation system enables users to use their voice to personalize their gameplay experience.
A video game system and method analyze virtual contact between an avatar and a virtual object within a video game. The point of contact of the virtual contact on the virtual object and/or the intensity of contact of the virtual contact may then be used to determine a subsequent virtual action to be performed within the video game. The virtual action, with any virtual movement thereof, may be carried out in a realistic manner within the video game by determining a virtual trajectory of the motion. The virtual trajectory may be determined using a motion model. The motion model may provide the virtual trajectory of the virtual object based at least in part on one or more parameters of the virtual object, such as a weight parameter. The motion model may be trained using training video clips with realistic motion of virtual objects.
A63F 13/573 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using trajectories of game objects, e.g. of a golf ball according to the point of impact
A device may access a feature vector generated based on interactions by a user with a video game. The device may access a cluster map comprising a mapping of user clusters, wherein each location within the cluster map is associated with a set of users whose feature vectors are within a threshold degree of similarity of each other. The cluster map may be generated using a plurality of extracted feature vectors obtained from interaction information. The device may determine a map location within the cluster map associated with the user based at least in part on the feature vector, determine a target map location within the cluster map, determine a guidance action based at least in part on the target map location and the map location associated with the user, and execute the guidance action.
A63F 13/5375 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for graphically or textually suggesting an action, e.g. by displaying an arrow indicating a turn in a driving game
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
Embodiments of the present application provide an interactive computing system with a game development server that can host multiple game editing sessions and allow multiple game developer systems to work on the same game assets at the same time. The game development server can manage some or all change requests from game developers to make changes to the game data.
A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
G06F 16/176 - Support for shared access to files; File sharing support
The techniques described herein include using a system for enabling assisted gameplay in a computer game using real-time detection of predefined scene features and mapping of the detected features to recommended actions. For example, the system may generate a scanning query (e.g., a segment cast) toward a target area within a virtual scene, determine a geometric feature based on the scanning query, determine a scene feature based on the geometric feature, determine an action associated with the scene feature, and control an avatar based on the action. Examples of scene features that may have mappings to recommended actions include obstacles within a predicted trajectory of the avatar and transitions in the ground level of the virtual scene.
A63F 13/577 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
A63F 13/573 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using trajectories of game objects, e.g. of a golf ball according to the point of impact
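One tick of the scan-classify-act loop might look as follows; the three callables and the avatar interface are hypothetical stand-ins for the engine's segment cast, feature classifier, and feature-to-action mapping.

```python
def assist_step(avatar, scene, segment_cast, classify_feature, action_map):
    """One assisted-gameplay tick: scan, classify, recommend, act."""
    # Scanning query toward the target area in front of the avatar.
    geometric_feature = segment_cast(scene, avatar.position, avatar.facing)
    # Geometric feature -> scene feature (e.g. obstacle, ground transition).
    scene_feature = classify_feature(geometric_feature)
    # Map the scene feature to its recommended action and control the avatar.
    action = action_map.get(scene_feature)
    if action is not None:
        avatar.perform(action)
    return action
```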
A personalization system determines a playstyle associated with a player of a gaming system based on gameplay information for the player and, for example, generates personalized animation for the player based on the player's playstyle. The personalization system can receive gameplay data associated with a playstyle of a player in one or more games and receive persona data associated with the player and the gameplay. The system can then generate an animation for the player based on the gameplay data, dynamically generate, based at least in part on a portion of the player's playstyle, personalized content including that animation, and transmit the content for presentation in a game associated with the player.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A body mesh to be collided with a cloth mesh is received, together with collider objects (that correspond to or approximate the body mesh) divided into cells. Polygons of the body mesh are projected onto the surface of the collider objects from a location within the collider object to identify cells of the collider object that overlap the projection of the polygons. A set of cloth features that collide with the collider object are projected onto the surface of the collider object to identify cells onto which the cloth features are projected. For each cell that includes a projection of a cloth feature, collision tests are performed between the cloth feature and the polygons whose projections also overlap the same cell. Using the collider object as an acceleration structure allows cloth simulation to be performed while reducing collision tests for each cloth feature to a limited number of polygons.
An example method of simulating dribbling ball behavior in interactive videogames includes: determining a current spatial position of a simulated ball rolling on a surface of a simulated terrain; determining, based on a slope of the surface of the simulated terrain, a likelihood of ball dribbling; identifying a segment of a path of the simulated ball over the surface from the current spatial position of the simulated ball, such that a dribbling criterion based on the likelihood of ball dribbling is satisfied on the segment of the path; determining, based on a speed of the simulated ball, a dribble-simulating surface angle adjustment range; choosing a dribble-simulating surface angle adjustment value from the dribble-simulating surface angle adjustment range; adjusting, based on the dribble-simulating surface angle adjustment value, a surface normal of a segment of the surface on the path; and determining, based on the adjusted surface normal, a next spatial position of the simulated ball.
A63F 13/573 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using trajectories of game objects, e.g. of a golf ball according to the point of impact
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
A63F 13/57 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
A63F 13/44 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment involving timing of operations, e.g. performing an action within a time slot
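A sketch of the per-segment adjustment in two dimensions; the likelihood shaping and angle ranges below are invented for illustration, since the abstract only states that they derive from slope and ball speed.

```python
import numpy as np

def dribble_adjust(surface_normal, speed, slope, rng=None):
    """Perturb a 2D surface normal on one path segment to simulate a dribble."""
    rng = rng or np.random.default_rng()
    likelihood = min(1.0, abs(slope))      # steeper slope -> more dribbling
    if rng.random() > likelihood:
        return surface_normal              # dribble criterion not met
    # Faster balls get a wider angle-adjustment range.
    max_angle = np.radians(2.0 + 8.0 * min(speed / 20.0, 1.0))
    angle = rng.uniform(-max_angle, max_angle)
    c, s = np.cos(angle), np.sin(angle)
    # Rotate the normal by the chosen adjustment value; the next ball
    # position is then computed against this adjusted normal.
    return np.array([[c, -s], [s, c]]) @ np.asarray(surface_normal)
```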
09 - Scientific and electric apparatus and instruments
41 - Education, entertainment, sporting and cultural services
Goods & Services
Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software. Entertainment services, namely, providing an online computer game; provision of information relating to electronic computer games provided via the Internet.
92.
DETECTING HIGH-SKILLED ENTITIES IN LOW-LEVEL MATCHES IN ONLINE GAMES
A high-skilled-low-level detection system may detect high-skilled entities in low-level matches of an online game. The system may identify a plurality of entities that are within a first category of entities eligible to be matched by a matchmaking algorithm. The system may then determine respective feature sets based at least in part on gameplay data associated with the plurality of entities and perform anomaly detection on the respective feature sets. The system may then determine, based on the anomaly detection, an anomalous entity of the plurality of entities and cause the matchmaking algorithm to match the anomalous entity with other entities that are in a second category of entities.
A63F 13/798 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for assessing skills or for ranking players, e.g. for generating a hall of fame
A method, device, and computer-readable storage medium for generating a shadow mesh are disclosed. The method includes: receiving a graphics mesh; computing a set of LOD versions for each component of the graphics mesh, where each successive LOD version in the set of LOD versions includes fewer polygons than the preceding LOD version; computing a set of shadow versions for each component of the graphics mesh, where each successive shadow version in the set of shadow versions includes fewer polygons than the preceding shadow version, and each successive shadow version includes vertices that lie within a mesh defined by the preceding shadow version; generating N LOD meshes for the graphics mesh by selecting, for each LOD, a LOD version of each component to include in the LOD mesh; and generating a shadow mesh by selecting a shadow version of each component to include in the shadow mesh.
A method, computer-readable storage medium, and device for generating a master representation of input models. The method comprises: receiving a first base mesh and a second base mesh, wherein the first base mesh has a first topology and is associated with a first set of blendshapes to deform the first base mesh, the second base mesh has a second topology and is associated with a second set of blendshapes to deform the second base mesh, and the second topology is different from the first topology; combining the first topology and the second topology into a combined mesh topology representation; combining the first set of blendshapes and the second set of blendshapes into a combined blendshape representation; and outputting the combined mesh topology representation and the combined blendshape representation as a master representation, wherein the master representation can be queried with a target topology and blendshape.
A spectator system may provide for spectating in online gaming. The spectator system may receive, at a spectator server, game state data from a game simulation server hosting an online game for one or more players, generate one or more spectator game state data corresponding to one or more spectator devices and output the one or more spectator game state data to the spectator devices. The spectator server may further output the game state data to another spectator server.
A computing system may provide functionality for controlling an animated model to perform actions and to perform transitions therebetween. The system may determine, from among a plurality of edges from a first node of a control graph to respective other nodes of the control graph, a selected edge from the first control node to a selected node. The system may then determine controls for an animated model in a simulation based at least in part on the selected edge, control data associated with the selected node, a current simulation state of the simulation, and a machine learned algorithm, determine an updated simulation state of the simulation based at least in part on the controls for the animated model, and adapt one or more parameters of the machine learned algorithm based at least in part on the updated simulation state and a desired simulation state.
A video game includes a single player mode where completion of storyline objectives advances the single player storyline. The video game also includes a multiplayer mode where a plurality of players can play on an instance of a multiplayer map. Storyline objectives from the single player mode are selected and made available for completion to players in the multiplayer mode, and the single player storylines can be advanced by players completing respective storyline objectives while playing in the multiplayer mode. Combinations of storyline objectives are selected from pending storyline objectives for players connecting to a multiplayer game for compatibility with multiplayer maps. Constraints can be used to determine compatibility.
A63F 13/795 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for finding other players; Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for building a team; Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for providing a buddy list
A63F 13/335 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using Internet
A63F 13/47 - Controlling the progress of the video game involving branching, e.g. choosing one of several possible scenarios at a given point in time
A63F 13/48 - Starting a game, e.g. activating a game device or waiting for other players to join a multiplayer session
A63F 13/493 - Resuming a game, e.g. after pausing, malfunction or power failure
41 - Education, entertainment, sporting and cultural services
Goods & Services
Providing information on-line relating to computer games; providing a website featuring information regarding automobile and motor sports culture, competitions, and current events featuring automobiles and motor sports; Entertainment services, namely, providing information and news online relating to automobiles and motor sports
A system for use in video game development to generate expressive speech audio comprises a user interface configured to receive user-input text data and a user selection of a speech style. The system includes a machine-learned synthesizer comprising a text encoder, a speech style encoder and a decoder. The machine-learned synthesizer is configured to generate one or more text encodings derived from the user-input text data, using the text encoder of the machine-learned synthesizer; generate a speech style encoding by processing a set of speech style features associated with the selected speech style using the speech style encoder of the machine-learned synthesizer; combine the one or more text encodings and the speech style encoding to generate one or more combined encodings; and decode the one or more combined encodings with the decoder of the machine-learned synthesizer to generate predicted acoustic features. The system includes one or more modules configured to process the predicted acoustic features, the one or more modules comprising a machine-learned vocoder configured to generate a waveform of the expressive speech audio.
G10L 13/00 - Speech synthesis; Text to speech systems
A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
G06N 3/044 - Recurrent networks, e.g. Hopfield networks
A63F 13/63 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by the player, e.g. authoring using a level editor
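The synthesizer's data flow reduces to a few composed callables; element-wise addition is one plausible way to combine the text and style encodings, which the abstract leaves unspecified.

```python
import numpy as np

def synthesize(text, style_features, text_encoder, style_encoder, decoder, vocoder):
    """Expressive speech pipeline: encode, combine, decode, vocode."""
    text_encodings = text_encoder(text)          # one encoding per text unit
    style_encoding = np.asarray(style_encoder(style_features))
    # Combine each text encoding with the single style encoding.
    combined = [np.asarray(t) + style_encoding for t in text_encodings]
    acoustic_features = decoder(combined)        # predicted acoustic features
    return vocoder(acoustic_features)            # waveform of expressive speech
```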
100.
PLAYTESTING COVERAGE WITH CURIOSITY DRIVEN REINFORCEMENT LEARNING AGENTS
Systems and methods of curiosity driven reinforcement learning agents promote novel exploration of a virtual interactive environment. The data of the exploration can be stored in a buffer and used to determine, generate, and display visualizations in the virtual interactive environment. The visualizations can help identify issues in the virtual interactive environment and/or relationships between regions of the virtual interactive environment.
A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
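The abstract does not pin down the curiosity signal; a count-based novelty bonus over discretized positions is a common stand-in, with the visit buffer doubling as the data source for coverage visualizations.

```python
import numpy as np

class CuriosityAgent:
    """Count-based novelty bonus over discretized positions."""

    def __init__(self, cell_size=1.0):
        self.cell_size = cell_size
        self.visits = {}  # cell -> visit count; also the exploration buffer

    def intrinsic_reward(self, position):
        cell = tuple(np.floor(np.asarray(position) / self.cell_size).astype(int))
        self.visits[cell] = self.visits.get(cell, 0) + 1
        # Rarely visited cells yield a larger exploration bonus.
        return 1.0 / np.sqrt(self.visits[cell])

    def coverage_heatmap(self):
        # Visits per cell, ready to drive in-environment visualizations
        # (e.g. highlighting unreached or hard-to-reach regions).
        return dict(self.visits)
```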