ETH Zürich (Eidgenössische Technische Hochschule Zürich) (Switzerland)
Inventor
Otto, Christopher Andreas
Bradley, Derek Edward
Weiss, Sebastian Klaus
Zoss, Gaspard
Chandran, Prashanth
Abstract
One embodiment of the present invention sets forth a technique for generating a geometry for a shape. The technique includes inputting, into a machine learning model, (i) a noise sample and (ii) one or more conditioning inputs. The technique also includes generating, via execution of the machine learning model based on the noise sample and the one or more conditioning inputs, a two-dimensional (2D) position map associated with the shape. The technique further includes generating a three-dimensional (3D) geometry for the shape based on the 2D position map.
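A minimal sketch of the claimed pipeline under stated assumptions: a noise sample is iteratively denoised by a conditioned model into a 2D position map whose pixels are 3D vertex coordinates, which is then flattened into a geometry. The `denoise_fn` stub and the schedule are hypothetical stand-ins, not the patent's actual model.

```python
import numpy as np

def sample_position_map(denoise_fn, cond, shape=(64, 64, 3), steps=50, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)          # (i) the noise sample
    for t in reversed(range(steps)):        # simple iterative denoising loop
        x = denoise_fn(x, t, cond)          # (ii) conditioning inputs applied here
    return x                                # 2D map: pixel (u, v) -> (x, y, z)

def position_map_to_geometry(pos_map):
    """Flatten the H x W x 3 position map into a 3D vertex array."""
    h, w, _ = pos_map.shape
    return pos_map.reshape(h * w, 3)        # one 3D point per map texel

# Toy denoiser that shrinks toward the conditioning value, for illustration only.
vertices = position_map_to_geometry(
    sample_position_map(lambda x, t, c: 0.9 * x + 0.1 * c, cond=0.0))
```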
A light device, a lighting apparatus, and a system including a controller and multiple light devices are described. An example light device includes a printed circuit board (PCB), an input connector disposed at a first side of the PCB, an output connector disposed at a second side of the PCB, one or more light engines disposed on the PCB between the input connector and the output connector, and one or more drivers disposed on the PCB. Each light engine of the one or more light engines includes multiple light emitting diodes (LEDs). Each driver of the one or more drivers is configured to control illumination emitted from a respective light engine of the one or more light engines.
F21V 23/00 - Arrangement of electric circuit elements in or on lighting devices
F21V 23/04 - Arrangement of electric circuit elements in or on lighting devices the elements being switches
F21V 23/06 - Arrangement of electric circuit elements in or on lighting devices the elements being coupling devices
F21Y 105/16 - Planar light sources comprising a two-dimensional array of point-like light-generating elements characterised by the overall shape of the two-dimensional array square or rectangular, e.g. for light panels
ETH Zürich (Eidgenössische Technische Hochschule Zürich) (Switzerland)
Inventor
Zhang, Yang
Aydin, Tunc Ozan
Song, Mingyang
Tang, Siyu
Abstract
One embodiment of the present invention sets forth a technique for determining a time-varying deformation associated with a scene. The technique includes matching a query time to a time interval associated with the scene and generating, via execution of a machine learning model, (i) a first set of attributes associated with a set of canonical coordinates in the scene at a starting time of the time interval and (ii) a second set of attributes associated with the set of canonical coordinates at an ending time of the time interval. The technique also includes computing a third set of attributes associated with the set of canonical coordinates at the query time based on a spline interpolation associated with the first and second sets of attributes. The technique further includes generating a representation of the scene at the query time based on the third set of attributes.
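A minimal sketch of the interpolation step, assuming a cubic Hermite blend with zero end tangents; the abstract says only "spline interpolation", so that choice is illustrative. The query time is normalized into the matched interval, and the two endpoint attribute sets are blended into the third set.

```python
import numpy as np

def interpolate_attributes(attrs_start, attrs_end, t0, t1, t_query):
    # Normalize the query time into the matched interval [t0, t1].
    s = (t_query - t0) / (t1 - t0)
    # Cubic Hermite blend with zero end tangents ("smoothstep" spline).
    w = 3 * s**2 - 2 * s**3
    return (1 - w) * np.asarray(attrs_start) + w * np.asarray(attrs_end)

# Attributes at the interval endpoints -> attributes at the query time.
third_set = interpolate_attributes([0.0, 1.0], [1.0, 3.0], t0=0.0, t1=2.0, t_query=0.5)
```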
ETH Zürich (Eidgenössische Technische Hochschule Zürich) (Switzerland)
Inventor
Otto, Christopher Andreas
Bradley, Derek Edward
Weiss, Sebastian Klaus
Zoss, Gaspard
Chandran, Prashanth
Abstract
One embodiment of the present invention sets forth a technique for training a machine learning model on a geometry generation task. The technique includes generating, via execution of a diffusion model, a first set of training output corresponding to a first set of three-dimensional (3D) geometries based on a first set of conditioning inputs associated with a first conditioning mode, and training the diffusion model based on a first set of loss values associated with the first set of training output. The technique further includes generating, via execution of the diffusion model and a first adapter model, a second set of training output corresponding to a second set of 3D geometries based on a second set of conditioning inputs associated with a second conditioning mode, and training the first adapter model based on a second set of loss values associated with the second set of training output.
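A minimal sketch of the two-stage recipe: the diffusion model is first trained on conditioning mode A, then frozen while only a small adapter that injects conditioning mode B is optimized. The module shapes and the additive injection point are illustrative assumptions, not the patent's architecture.

```python
import torch
import torch.nn as nn

diffusion = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 16))
adapter = nn.Linear(8, 16)                    # maps mode-B conditioning into the model

for p in diffusion.parameters():              # stage two: backbone stays fixed
    p.requires_grad_(False)

opt = torch.optim.Adam(adapter.parameters(), lr=1e-4)
noisy, cond_b, target = torch.randn(4, 16), torch.randn(4, 8), torch.randn(4, 16)

pred = diffusion(noisy + adapter(cond_b))     # adapter output conditions the denoiser
loss = nn.functional.mse_loss(pred, target)   # second set of loss values
loss.backward()
opt.step()                                    # only adapter parameters move
```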
A motion generation system includes a tracking model, executed by a processor, configured to track at least one kinematic reference motion of a robotic device; a reward surrogate model, executed by the processor, that evaluates a performance of the tracking model with respect to the at least one kinematic reference motion and estimates at least one reward for the tracking model based on the performance; and a generative model, executed by the processor, configured to generate a motion for the robotic device based on a contextual input and the estimated at least one reward, wherein the generative model is trained with a pre-training operation and a refinement operation separate from the pre-training operation.
A system includes a hardware processor, and a memory storing a software code and at least one machine learning (ML) model trained to distinguish between a plurality of content types. The hardware processor executes the software code to receive a content file including data identifying a dataset contained by the content file as being a first content type of the plurality of content types; predict, using the at least one ML model and the dataset, based on at least one image parameter, a first probability that a content type of the dataset matches the first content type identified by the data; and determine, based on the first probability, that the content type of the dataset (i) is the first content type identified by the data, (ii) is not the first content type identified by the data, or (iii) is of an indeterminate content type.
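A minimal sketch of the three-way determination, assuming two probability thresholds; the patent does not publish its actual cutoffs.

```python
def classify_content(prob_match, hi=0.9, lo=0.1):
    """Map the ML model's match probability to one of the three outcomes."""
    if prob_match >= hi:
        return "is the identified content type"
    if prob_match <= lo:
        return "is not the identified content type"
    return "indeterminate content type"

print(classify_content(0.95))   # -> "is the identified content type"
```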
ETH Zürich (Eidgenössische Technische Hochschule Zürich) (Switzerland)
Inventor
Zhang, Yang
Aydin, Tunc Ozan
Song, Mingyang
Tang, Siyu
Abstract
One embodiment of the present invention sets forth a technique for generating a neural deformation model. The technique includes inputting, into a machine learning model, (i) a set of canonical coordinates in a scene and (ii) one or more times included in a temporal trajectory of the scene. The technique also includes generating, via execution of the machine learning model, one or more sets of attributes associated with the set of canonical coordinates and the one or more times. The technique further includes computing one or more losses based on (i) a velocity included in the one or more sets of attributes and (ii) one or more representations of the scene at the one or more times, and training the machine learning model based on the one or more losses.
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; Image or video pattern matching; Proximity measures in feature spaces; using context analysis; Selection of dictionaries
STYLE TRANSFER USING GENERATIVE DIFFUSION FEATURES
The present invention sets forth techniques for performing style transfer from multiple supplied style images to a supplied content image to generate novel images that include style elements from the multiple supplied style images and content elements from the supplied content image. The techniques include guiding one or more self-attention and cross-attention layers included in a machine learning model based on the multiple supplied style images, such that content elements and style elements included in the style images are not entangled when generating the novel images. The techniques also distill a small subset of representative attention map values from multiple style images, improving performance while reducing computational costs compared to processing all attention map values from the multiple style images.
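A minimal sketch of the distillation idea: rather than retaining every attention entry from every style image, keep only the top-k keys per style image as ranked by mean attention mass. Tensor shapes and the value of k are assumptions for illustration.

```python
import torch

def distill_style_attention(attn_maps, k=64):
    """attn_maps: (num_styles, num_queries, num_keys) attention weights."""
    mass = attn_maps.mean(dim=1)            # average attention per key token
    return mass.topk(k, dim=-1).indices     # representative keys per style image

# Keep 64 of 1024 key tokens for each of three style images.
keep = distill_style_attention(torch.rand(3, 256, 1024), k=64)
```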
G06V 10/40 - Extraction of image or video features
G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
G06V 10/77 - Processing image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
A system may include an image plane, a scrim layer, and a lens stack positioned between the image plane and the scrim layer. The lens stack may be positioned at a first distance from the image plane to create a perceived image at a second distance away from the lens stack. The first and second distances may be equal. The image source may be spaced away from the lens stack and on a first side of the lens stack. The scrim layer may be positioned on a second side of the lens stack. The lens stack may include a pair of lenses (e.g., identical Fresnel lenses) stacked facing each other. A bounce mirror positioned between the image plane and the lens stack may fold an optical path from the image plane to the lens stack. Additional systems and associated methods are also disclosed.
G02B 7/02 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses
G03H 1/02 - Holographic processes or apparatus using light, infrared, or ultraviolet waves for obtaining holograms or for obtaining an image from them; Details peculiar thereto
Artificial Intelligence Based Character-Specific Speech Generation
A system includes a hardware processor and a memory storing software code, a character database, a language model and an artificial intelligence (AI) model trained to emulate speech by a character. The software code is executed to receive interaction data including a description of speech by a human to a performer impersonating the character and a description of a facial expression by the performer in response, obtain, from the character database, one or more communication trait(s) of the character, and generate, by the language model using the description of the speech and the communication trait(s) as inputs, a character-specific response to the speech. The software code is further executed to synthesize, by the AI model using the character-specific response and the description of the facial expression as inputs, audio data of the character-specific response in a voice of the character, and output the audio data for use by the performer.
Embodiments of the present disclosure provide techniques for restoring video content. An example method generally includes receiving a set of input video frames that include artifacts, generating one or more conditioning features based on the set of input video frames, wherein the conditioning features represent content information included in the set of input video frames while reducing representation of the artifacts, denoising, using a latent diffusion model and based on the conditioning features, a representation of the set of input video frames that includes noise, and generating a set of output video frames based on the denoised representation, wherein the set of output video frames includes fewer artifacts relative to the set of input video frames.
ETH Zürich (Eidgenössische Technische Hochschule Zürich) (Switzerland)
Inventor
Kansy, Manuel Jakob
Naruniec, Jacek Krzysztof
Schroers, Christopher Richard
Weber, Romann Matthew
Abstract
One embodiment of the present invention sets forth a technique for performing motion transfer. The technique includes determining an embedding corresponding to a motion depicted in a first video. The technique also includes generating, via execution of a machine learning model based on the embedding and an appearance image, an output video that includes the motion depicted in the first video and an appearance depicted in the appearance image.
ETH Zürich (Eidgenössische Technische Hochschule Zürich) (Switzerland)
Inventor
Djelouah, Abdelaziz
Briedis, Karlis Martins
Schroers, Christopher Richard
Ortiz, Raphael Francois
Abstract
A system includes a processor and a memory storing software code including a video frame interpolation machine-learning (ML) model. The processor executes the software code to receive an input video sequence including a first video frame and a second video frame, obtain point tracks between the first video frame and the second video frame, identify a target position for an interpolated video frame and determine, using the point tracks, a first optical flow between the target position and the first video frame, and a second optical flow between the target position and the second video frame. The processor further executes the software code to warp the first video frame and the second video frame using the first optical flow and the second optical flow, respectively, and to predict, using the video frame interpolation ML model and the warped first and second video frames, the interpolated video frame.
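A minimal sketch of the warping step under stated assumptions: an optical flow field (in pixels) is turned into a normalized sampling grid, and pixels are pulled from the source frame with `torch.nn.functional.grid_sample`. Shapes and the (dx, dy) channel order are illustrative.

```python
import torch
import torch.nn.functional as F

def warp(frame, flow):
    """frame: (N, C, H, W); flow: (N, 2, H, W) in pixels, channels (dx, dy)."""
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().unsqueeze(0)   # (1, 2, H, W)
    coords = base + flow                                       # where to sample from
    # Normalize to [-1, 1] as grid_sample expects, reorder to (N, H, W, 2).
    coords_x = 2 * coords[:, 0] / (w - 1) - 1
    coords_y = 2 * coords[:, 1] / (h - 1) - 1
    grid = torch.stack((coords_x, coords_y), dim=-1)
    return F.grid_sample(frame, grid, align_corners=True)

# Zero flow reproduces the input frame, a quick sanity check.
warped = warp(torch.rand(1, 3, 64, 64), torch.zeros(1, 2, 64, 64))
```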
Beijing YoJaJa Software Technology Development Co., Ltd. (China)
Inventor
Huangfu, Xuchang
Xue, Yuanyi
Zhang, Wenhao
Zhang, Yang
Liu, Chen
Meng, Xuewei
Abstract
In some embodiments, a method receives an image to analyze for artifacts. Texture information that characterizes texture in the image is determined. The method merges the texture information with the image. The texture information is used to focus an analysis of artifacts in regions of the image. The method outputs a score, based on this analysis, that assesses the artifacts in the image.
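A minimal sketch of the merge step, assuming local variance as the texture measure: the texture map is stacked with the image as an extra channel so a downstream scorer can weight textured regions. The window size and the scorer itself are placeholders, not the patent's method.

```python
import numpy as np

def local_variance(gray, win=7):
    """Per-pixel variance over a win x win neighborhood (texture map)."""
    pad = win // 2
    padded = np.pad(gray, pad, mode="reflect")
    out = np.empty_like(gray, dtype=float)
    for i in range(gray.shape[0]):
        for j in range(gray.shape[1]):
            out[i, j] = padded[i:i + win, j:j + win].var()
    return out

img = np.random.rand(32, 32)                   # stand-in grayscale image
merged = np.stack([img, local_variance(img)])  # (2, H, W): image + texture channel
```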
G06V 10/98 - Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
G06V 10/36 - Applying a local operator, i.e. means to operate on image points situated in the vicinity of a given point; Non-linear local filtering operations, e.g. median filtering
G06V 10/54 - Extraction of image or video features relating to texture
G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
One embodiment of the present invention sets forth a technique for performing style transfer. The technique includes converting, via a trained variational autoencoder, a first set of features associated with a content sample into a second set of features from a feature space associated with one or more style samples. The technique also includes computing one or more losses based on the first set of features and the second set of features. The technique further includes generating a style transfer result based on the content sample and the one or more losses, where the style transfer result includes one or more content-based attributes of the content sample and one or more style-based attributes of the style sample.
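A minimal sketch of the loss construction under stated assumptions: a stand-in "VAE" maps content features into the style feature space, and losses between the first and second feature sets drive an optimized result. All modules, sizes, and the optimization target are illustrative.

```python
import torch
import torch.nn as nn

vae = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 32))  # stand-in VAE
content_feats = torch.randn(1, 32)                 # first set of features
result = torch.randn(1, 32, requires_grad=True)    # style transfer result, optimized

opt = torch.optim.Adam([result], lr=1e-2)
for _ in range(100):
    style_feats = vae(content_feats).detach()      # second set of features
    loss = nn.functional.mse_loss(vae(result), style_feats) \
         + nn.functional.mse_loss(result, content_feats)   # style + content losses
    opt.zero_grad()
    loss.backward()
    opt.step()
```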
One embodiment of the present invention sets forth a technique for performing style transfer. The technique includes determining a first set of attribute values for a plurality of attributes associated with a content sample. The technique also includes computing one or more losses based on the content sample and one or more style samples and converting, based on the one or more losses, the first set of attribute values into a second set of attribute values for the plurality of attributes. The technique further includes generating a style transfer result based on a composite of the second set of attribute values.
One embodiment of the present invention sets forth a technique for performing style transfer. The technique includes training a neural network based on (i) one or more supervised losses computed between a first set of training output produced by the neural network from a first set of training content samples and a set of stylized samples corresponding to the first set of training content samples, and (ii) one or more unsupervised losses computed using a second set of training output produced by the neural network from a second set of training content samples to generate a trained neural network. The technique also includes inputting a content sample into the trained neural network, and generating, via execution of the trained neural network, a style transfer result that comprises one or more content-based attributes of the content sample and one or more style-based attributes of the set of stylized samples.
A system may include a first modular tile floor configured to receive a first input from a first user associated with the first modular tile floor, a second modular tile floor separate from the first modular tile floor and configured to receive a second input from a second user associated with the second modular tile floor, and a display system configured to generate a visual content based on the first input, the second input, or a combination thereof. The first modular tile floor may induce or respond to a desired motion associated with the first user. The second modular tile floor may induce or respond to a desired motion associated with the second user. The display system may generate a virtual or augmented reality environment for viewing by the first user and the second user. The first and second inputs may adjust the virtual or augmented reality environment.
16 - Paper, cardboard and goods made from these materials
Goods & Services
Art prints; Photograph albums; Stamp albums; Arts and craft paint kits; Chalk; Crayons; Drawing rulers; Dry erase writing boards and writing surfaces; Erasers; Globes; Modeling clay; Protractors for use as drawing instruments; Printed publications, namely, books, magazines, newsletters, hand-outs, and workbooks in the field of entertainment, history, stories, artwork, and activities for children; Memo pads; Pen or pencil holders; Pencil sharpeners; Pen and pencil cases and boxes; Rubber stamps; Writing instruments; Pens; Pencils; Mechanical pencils; Markers; Highlighter pens; Folders being stationery; Paper clips; Writing grips; Glue for stationery use; Office staplers; Marking stamps; Stamp pads; Bookends; Office binders; Paper; Printed appointment books; Printed children's activity books; Series of printed children's books; Printed children's interactive educational books; Printed autograph books; Printed baseball cards; Bookmarks; Series of printed fiction books; Bumper stickers; Printed calendars; Printed coloring books; Printed comic books; Printed comic strips; Decals; Printed diaries; Printed flash cards; Printed greeting cards; Printed guest books; Printed maps; Printed notebooks; Printed photographs; Printed picture books; Printed postcards; Printed posters; Printed awards; Printed certificates; Printed invitations; Printed menus; Printed recipe books; Stickers; Paintings; Coasters made of paper; Decorative paper centerpieces; Gift wrapping paper; Paper cake decorations; Paper gift cards; Paper party decorations; Paper napkins; Paper gift wrap bows; Paper place mats; Party goodie bags of paper or plastic; Plastic shopping bags; Name badges of paper, metal, or plastic; Portraits in the nature of drawings; Stationery; Envelopes; Temporary tattoo transfers; Decorative glitter for stationery purposes
Children's multiple activity toys; Collectable toy figures; Disc toss toys; Electric action toys; Infant toys; Inflatable toys; Musical toys; Plush toys; Squeeze toys; Talking toys; Toy action figures and accessories therefor; Toy boats; Sand toys; Toy building blocks; Toy mobiles; Toy vehicles; Toy scooters; Toy figures; Toy imitation cosmetics; Toy banks; Toy watches; Toy weapons; Toy building structures and toy vehicle tracks; Mechanical toys; Toy trains and parts and accessories therefor; Toy aircraft; Role playing toys in the nature of play sets for children to imitate real life occupations; Basketballs; Baseball bats; Baseballs; Beach balls; Footballs; Golf balls; Golf gloves; Golf ball markers; Jump ropes; Roller skates; Skateboards; Soccer balls; Table tennis balls; Table tennis paddles; Table tennis rackets; Tennis balls; Tennis rackets; Volley balls; Action skill games; Jigsaw puzzles; Manipulative games; Parlor games; Party games; Target games; Action figures; Bean bags; Dolls; Doll clothing; Doll accessories; Playsets for dolls; Bobblehead dolls; Bubble making wand and solution sets; Puppets; Kites; Magic tricks; Marbles; Rubber balls; Play balls; Yo-yos; Teddy bears; Balloons; Christmas stockings; Christmas tree ornaments and decorations; Crib mobiles; Hand-held units for playing electronic games for use with external display screen or monitor; Hand held units for playing electronic games other than those adapted for use with an external display screen or monitor; Party favors in the nature of small toys; Paper party favors; Paper party hats; Snow globes; Fitted plastic films known as skins for covering and protecting electronic game playing apparatus, namely, video game consoles and hand-held video game units; Video game machines for use with televisions; Toy for pets
09 - Scientific and electric apparatus and instruments
Goods & Services
Apparatus for recording, transmission, processing and reproduction of sound, images or data; Digital media, namely, pre-recorded video cassettes, digital video discs, digital versatile discs, downloadable audio and video recordings, DVDs, and high definition digital discs featuring live-action entertainment, animated entertainment, music, stories, children’s programming, dramatic performances, non-dramatic performances, learning activities for children, and games; Digital media, namely, pre-recorded DVDs, downloadable audio and video recordings, and CDs featuring and promoting live-action entertainment, animated entertainment, music, stories, children’s programming, dramatic performances, non-dramatic performances, learning activities for children, and games; Downloadable fiction books on a variety of topics; Downloadable non-fiction books on a variety of topics; Non-fiction audio books on a variety of topics; Visual and audio recordings featuring live-action entertainment, animated entertainment, music, stories, children’s programming, dramatic performances, non-dramatic performances, learning activities for children, and games; Downloadable podcasts in the field of comedy, culture and arts, current events, education, entertainment, history, interviews, music, news, and stories; Musical recordings; Downloadable electronic publications in the nature of books, comics, and illustrations in the field of fiction and children's stories; Downloadable series of children's books; Downloadable e-books in the field of fiction and children's stories; Downloadable computer game software; Downloadable video game software; Downloadable computer software for the administration of learning activities for children; Computer hardware and computer peripheral devices; Mouse pads; Cameras; Digital cameras; Blank recordable optical disc; DVD recording apparatus; Compact disc recorders; Audio speakers; Digital photo frames; Headphones; Earphones; Ear buds; Walkie-talkies; Telephones; Headsets for cellular or mobile phones; Cell phone cases; Screen protectors for cellular phones; Protective covers for cell phones; Protective covers and cases for cell phones; Eyeglasses; Sunglasses; Cases for eyeglasses and sunglasses; Binoculars; Decorative magnets; Microphones; Protective covers and cases for tablet computers; Radio frequency identification tag readers; Radio-frequency identification (RFID) tags; Radio-frequency transmitters; Smart watches; Fitted plastic films known as skins for covering and protecting electronic apparatus, namely, mobile phones, portable music players, mobile computers, and tablet computers; Karaoke machines; Bicycle helmets; Flotation vests; Protective helmets for sports; Snorkels; Swimming goggles; Swim masks; Downloadable game software
SYSTEM AND METHOD FOR DETECTING AND PREVENTING ANOMALOUS BEHAVIOR OF NON-INTERACTIVE MACHINE TO MACHINE DATABASE ACCOUNTS
A responding computing device may receive, from a querying computing device, a database query intended for the responding computing device. The querying computing device may generate and submit database queries autonomously as part of scheduled tasks. The responding computing device may determine one or more templates of database queries associated with the querying computing device. The responding computing device may perform a validation of the database query to determine whether the querying computing device is authorized to submit the database query to the responding computing device. The validation may be performed based on the one or more templates of database queries. The responding computing device may selectively enable the database query to be executed on the responding computing device or prevent the database query from being executed on the responding computing device.
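A minimal sketch of template-based validation: a query is reduced to its shape by masking literals, and the shape is checked against templates associated with the querying account. The normalization rules and allowlist here are illustrative assumptions.

```python
import re

def to_template(query):
    """Mask literals so structurally identical queries share one template."""
    q = re.sub(r"'[^']*'", "?", query)        # mask string literals
    q = re.sub(r"\b\d+\b", "?", q)            # mask numeric literals
    return re.sub(r"\s+", " ", q).strip().lower()

ALLOWED = {to_template("SELECT * FROM jobs WHERE run_id = 42")}

def validate(query):
    return to_template(query) in ALLOWED      # enable execution only on a match

assert validate("SELECT * FROM jobs WHERE run_id = 99")   # same template: allowed
assert not validate("DROP TABLE jobs")                    # unknown template: blocked
```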
An interaction system may receive trigger information associated with a facilitator device of a facilitator at a venue. The trigger information may be transmitted based on the facilitator operating the facilitator device during an interaction between the facilitator and one or more participants. The interaction system may determine, based on receiving the trigger information, that interaction information has been provided to the one or more participants. The interaction system may determine, in response to the trigger information, that the facilitator was located at a particular sensing location. The interaction system may obtain, using sensing location information identifying the particular sensing location, participant device information regarding a participant device located at the particular sensing location during the interaction. The interaction system may modify a record associated with the participant device to reflect that the interaction information was provided to a participant associated with the participant device during the interaction.
A device for use by a user includes a power source, a compartment enclosing a prize, a transceiver powered by the power source, the transceiver configured to receive data identifying a progress by the user toward a completion of an activity, and a progress indicator powered by the power source, the progress indicator configured to display the progress by the user toward the completion of the activity, in accordance with the received data. The transceiver is further configured to receive additional data indicating the completion of the activity by the user. The device is configured to open the compartment enclosing the prize, in response to the transceiver receiving the additional data indicating the completion of the activity by the user.
A system includes a hardware processor configured to execute a machine learning (ML) model training pipeline to train an ML model using data relevant to a world of a digital persona to provide a dialogue model, generate, using the dialogue model, first conversational outputs, train the dialogue model, based on the first conversational outputs, to avoid hallucinations and/or undesirable expressions, to provide a guardrailed dialogue model, generate, using the guardrailed dialogue model, second conversational outputs, train the guardrailed dialogue model, based on the second conversational outputs and persona data identifying interaction characteristics of the digital persona, to provide a persona-specific model, generate, using the persona-specific model, a response to a scripted question, determine a quality score for the response, and further train the persona-specific model or validate the persona-specific model for human interaction, depending upon whether the quality score fails to satisfy or satisfies a quality criterion.
A system includes a computing platform having processing hardware, and a memory storing software code. The software code is executed to receive digital content indexed to a timeline, receive insertion data identifying a timecode of the timeline, and encode the digital content using the insertion data to provide segmented content having a segment boundary at the timecode, and first and second segments adjoining the segment boundary, wherein the first segment precedes, and the second segment succeeds, the segment boundary. The software code also re-processes the first and second segments to apply a fade-out within or to the first segment and a fade-in within or to the second segment, wherein re-processing the first and second segments provides encoded segments having the segment boundary configured as an insertion point for supplemental content.
ETH Zürich (Eidgenössische Technische Hochschule Zürich) (Switzerland)
Inventor
Naruniec, Jacek Krzysztof
Bionda, Andrea
Kansy, Manuel Jakob
Schroers, Christopher Richard
Weber, Romann Matthew
Abstract
The present invention sets forth a technique for performing visual dubbing on an audiovisual sequence. The technique includes identifying, based on an actor frame included in the audiovisual sequence, one or more regions of an actor's face included in the actor frame, identifying, based on a dubber frame included in a visual recording of a dubber's performance, one or more regions of a dubber's face included in the dubber frame, generating a plurality of latent vectors based on at least one identified region of the actor's face and at least one identified region of the dubber's face, and generating, via a machine learning model, an output image based on the plurality of latent vectors.
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
A system for distributing sign language enhanced content includes a computing platform having processing hardware and a system memory storing a software code. The processing hardware is configured to execute the software code to receive content including at least one of a sequence of audio frames or a sequence of video frames, perform an analysis of the content, and identify, based on the analysis, a message conveyed by the content. The processing hardware is further configured to execute the software code to generate a sign language translation of the content, the sign language translation including one or more of a gesture, body language, or a facial expression communicating the message conveyed by the content.
G06V 20/40 - Scenes; Scene-specific elements in video content
G09B 21/00 - Teaching, or communicating with, the blind, deaf or mute
G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialog
G10L 21/055 - Time compression or expansion for synchronising with other signals, e.g. video signals
G10L 25/57 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination for processing of video signals
G10L 25/63 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination for estimating an emotional state
H04N 21/242 - Synchronization processes, e.g. processing of PCR [Program Clock References]
A system includes a hardware processor and a system memory storing software code. The hardware processor is configured to execute the software code to receive a plurality of video sequences each generated by a respective one of a plurality of cameras, each of the video sequences depicting a respective one of a plurality of three-dimensional (3D) movements of a subject from a different perspective, extract, from each of the video sequences, a respective set of two-dimensional (2D) motion features, to provide a plurality of 2D motion feature sets, and synchronize, using the 2D motion feature sets, the plurality of video sequences.
H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware
G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
Signal sequence-based time alignment of media content
A system includes a computing platform having a hardware processor, a memory storing software code, and one or more signal emission device(s) controlled by the computing platform. The software code is executed to determine, using at least one predetermined signal frequency, a calibration signal sequence including an alignment initiation signal and a unique sequence of synchronization signals identified with a unique time interval, emit, during the unique time interval, the alignment initiation signal, and emit, during the unique time interval after emitting the alignment initiation signal, the unique sequence of synchronization signals. The software code is further executed to receive first and second media content produced by first and second recording devices, the first and second media content produced while the first and second recording devices are situated so as to detect the calibration signal sequence, and time align, using the calibration signal sequence, the first and second media content.
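A minimal sketch of the alignment step, assuming a correlation-based detector: each recording is searched for the emitted calibration sequence, and the two media tracks are shifted by the difference of the detected positions. The patent requires only that both devices detect the sequence; the detector below is illustrative.

```python
import numpy as np

def locate(signal, sequence):
    """Return the sample index where the calibration sequence begins."""
    corr = np.correlate(signal, sequence, mode="valid")
    return int(corr.argmax())

seq = np.array([1.0, -1.0, 1.0, 1.0, -1.0])    # stand-in synchronization sequence
a = np.concatenate([np.zeros(30), seq, np.zeros(50)])   # recording device 1
b = np.concatenate([np.zeros(70), seq, np.zeros(10)])   # recording device 2

offset = locate(b, seq) - locate(a, seq)       # shift (in samples) to time align
```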
41 - Education, entertainment, sporting and cultural services
Goods & Services
Distribution of motion picture films; Entertainment services in the nature of production of motion pictures, television shows, and multimedia entertainment content; Providing online non-downloadable visual and audio recordings featuring action, adventure, animated content, comedy, drama, music, live-action, and news; Providing online computer games; Providing online non-downloadable e-books in the field of fiction and children’s stories; Providing entertainment information via a website; Providing entertainment information in the fields of motion picture films, television programs, musical videos, film clips, history, technology, artwork, and photographs via a website; Amusement park and theme park services; Entertainment in the nature of theater productions; Entertainment services in the nature of an amusement park show; Entertainment services, namely, live appearances by a celebrity, musician, athlete, actor, or costumed character; Theatrical shows provided at performance venues; Presentation of live show performances; Entertainment, namely, production of plays, musicals, and theater productions; Theatre productions
SYSTEM AND METHOD FOR PRE-IDENTIFYING DELAY FACTORS TO MITIGATE PROCESS DELAYS
A process improvement platform may receive an indication of a delay in a process and information identifying one or more candidate delay factors associated with the delay; and analyze current video data of the process using a machine learning model trained to identify different candidate delay factors that cause different delays in different processes. The platform may detect, based on analyzing the current video data, the delay and a candidate delay factor of the one or more candidate delay factors and analyze historical video data of the process to determine whether a correlation exists between the candidate delay factor and the delay. The platform may predict, based on analyzing the historical video data, that the candidate delay factor causes delays in the process; and determine, based on predicting that the candidate delay factor causes the delays, an action to be performed to mitigate subsequent delays.
H04N 17/00 - Diagnosis, testing or measuring for television systems or their details
G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; Image or video pattern matching; Proximity measures in feature spaces; using context analysis; Selection of dictionaries
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
Secure Digital Registration of Contracts Governing Media Content Use
A system includes a memory storing software code, and a hardware processor executing the software code to receive media content including a media content ID, identify, using the media content ID, one or more rule(s) governing use of the media content, the rule(s) including at least one of: (a) one or more first rules restricting use of the media content, or (b) one or more second rules permitting use of the media content, and generate, based on the rule(s), a digital contract specifying at least one of: (i) one or more constraints on use of the media content, or (ii) one or more enablements on use of the media content. The software code also registers the digital contract with the media content ID in a secure database, generates, using data identifying the secure database, a first watermark, and embeds the first watermark identifying the secure database into the media content.
In one implementation, a content distribution network includes a first plurality of communication paths for Internet Protocol (IP) signals, a router providing a second plurality of communication paths for non-IP signals, and a plurality of signal sources and a plurality of signal destinations. Each signal source is linked to a respective one signal destination by a respective first communication path of the first plurality of communication paths. Each of one or more of the signal sources designated as a protected source is further linked to the respective one signal destination by a second communication path of the second plurality of communication paths. The content distribution network distributes content in one of a first mode in which the first communication path and the second communication path carry the same content contemporaneously, or a second mode in which only the second communication path carries that content.
The present disclosure describes a method for providing an interactive experience. The method includes illuminating, by an enhancer, an area with light having a wavelength invisible to humans. A plurality of image detectors capture at least two images of a person in the area including a portion of the light reflected from the person. A processing element determines a first skeletal feature of the person based on the at least two images. The processing element determines a position characteristic of the first skeletal feature; constructs, from the first skeletal feature, a vector in three dimensions corresponding to the position characteristic; and outputs an interactive effect based on the position characteristic.
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
G06V 10/143 - Sensing or illuminating at different wavelengths
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
SYSTEM FOR GENERATING COLLIMATED NON-COHERENT BRIGHT LIGHT USING A PLURALITY OF OPTICAL ELEMENTS
A system, comprising: a first array of lasers; a first linear polarizer; a second array of lasers; a second linear polarizer provided at an angle with respect to the first linear polarizer; a first beamsplitter coupled to the first linear polarizer; a phosphor assembly; a collimator lens; a third linear polarizer; a second beamsplitter coupled to the third linear polarizer; and a telephoto lens.
ETH Zürich (Eidgenössische Technische Hochschule Zürich) (Switzerland)
Inventor
Kansy, Manuel Jakob
Neuner-Jehle, Joel
Naruniec, Jacek Krzysztof
Weber, Romann Matthew
Abstract
In some embodiments, a method generates a first representation of a first image including a first facial identity and generates an identity representation from a second image that describes a second facial identity of the second image. The identity representation is mapped to a set of low-rank adaptation weights. The method adapts the first representation to an adapted first representation using the set of low-rank adaptation weights that are applied to a layer in a model. Decoder input values are generated based on the adapted first representation. The method performs decoding using the decoder input values to generate an output image. The output image swaps the first facial identity of the first image with the second facial identity of the second image.
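A minimal sketch of the mapping step under stated assumptions: an identity embedding is projected to low-rank factors A and B, and the product B(Ax) perturbs a frozen layer, in the usual low-rank adaptation (LoRA) form. The dimensions and the hypernetwork are illustrative, not the patent's architecture.

```python
import torch
import torch.nn as nn

d, rank, id_dim = 64, 4, 128
base = nn.Linear(d, d)                         # frozen layer in the model
hyper = nn.Linear(id_dim, 2 * rank * d)        # identity -> LoRA factor weights

identity_emb = torch.randn(1, id_dim)          # describes the second facial identity
factors = hyper(identity_emb).view(2, rank, d)
A, B = factors[0], factors[1].T                # A: (rank, d), B: (d, rank)

def adapted(x):
    return base(x) + x @ A.T @ B.T             # LoRA: W x + B (A x)

out = adapted(torch.randn(1, d))               # adapted first representation
```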
A simulation platform may execute a computer model to cause a set of devices to perform different sets of tasks during different steps, of a simulation, to simulate operations of the set of devices. The set of devices may include one or more physical devices and one or more simulated devices. The simulation platform may determine, during a step of the simulation, that the set of devices have not completed a set of tasks that are to be completed. The simulation platform may cause a simulated controller to suspend time elapsed during the simulation based on determining that the set of devices have not performed the set of tasks. The simulation platform may determine, during the step of the simulation, that the set of devices have completed the set of tasks and cause the simulated controller to resume time.
ETH Zürich (Eidgenössische Technische Hochschule Zürich) (Switzerland)
Inventor
Chandran, Prashanth
Bradley, Derek Edward
Urnau Gotardo, Paulo Fabiano
Zoss, Gaspard
Abstract
A technique for performing style transfer between a content sample and a style sample is disclosed. The technique includes applying one or more neural network layers to a first latent representation of the style sample to generate one or more convolutional kernels. The technique also includes generating convolutional output by convolving a second latent representation of the content sample with the one or more convolutional kernels. The technique further includes applying one or more decoder layers to the convolutional output to produce a style transfer result that comprises one or more content-based attributes of the content sample and one or more style-based attributes of the style sample.
In some implementations, a ride control simulation system may receive a request to simulate a ride control system controlling a movement of a passenger vehicle on a ride system, wherein the request includes vehicle information regarding the passenger vehicle and ride system information regarding the ride system. The ride control simulation system may execute, based on the request, a computer model to simulate the ride control system controlling the movement of the passenger vehicle on the ride system. Executing the computer model comprises randomly selecting a first value, a second value, and a third value from one or more data distributions. The computer model is executed using the first value, the second value, and the third value as inputs. The ride control simulation system may cause an adjustment to an operation of the ride system based on a result of executing the computer model.
A system includes a hardware processor and a memory storing software code. The hardware processor is configured to execute the software code to receive operational data including multiple timestamps each corresponding to an operating state of an apparatus, receive, from a system user, a query identifying at least one of an apparatus event or one of the timestamps, and obtain, based on the operational data and in response to the query, a software design model of the apparatus. The hardware processor is further configured to execute the software code to identify, using the operational data and the software design model, an operational status of the apparatus at the identified at least one of the apparatus event or timestamp, generate, based on the software design model, a visual representation of that operational status, and provide an output data including the visual representation to the system user.
Systems and methods to generate narratives based on a comprehensive set of experiences of users are disclosed. Exemplary implementations may: receive user authorization to obtain, from information sources, pieces of information that define transportation and/or occurrences of a user that comprise an adventure of the user; determine, based on the pieces of information, aspect values that define the aspects of the transportation and the aspects of the occurrences; input the aspect values that define the aspects of the transportation and the aspects of the occurrences to the content generation model so that the content generation model determines the adventure of the user and generates, based on the aspect values, a narrative that features the aspects of the occurrences and one or more aspects of the transportation to represent the adventure of the user; obtain, from the content generation model, the narrative; and effectuate presentation of the narrative to the user.
G06F 16/783 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
G06F 16/787 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
G06V 20/40 - Scenes; Scene-specific elements in video content
SYSTEM AND METHOD FOR DETERMINING NEEDS OF GUESTS IN ADVANCE OF AN EMERGENT EVENT
An assistance platform may analyze video data of an environment. Based on analyzing the video data, the assistance platform may recognize an individual out of a plurality of objects in the environment and may recognize a plurality of characteristics exhibited by the individual. The recognized characteristics may affect the ability of the individual to respond to an event. The assistance platform may store assistance information identifying the plurality of characteristics; detect that an event has occurred in the environment; and retrieve the assistance information in response to detecting that the event has occurred. The assistance platform may determine, based on the assistance information, an action to be performed to assist the individual. The assistance platform may cause the action to be performed to assist the individual; and delete the assistance information after causing the action to be performed.
16 - Paper, cardboard and goods made from these materials
18 - Leather and imitations of leather
25 - Clothing; footwear; headgear
28 - Games; toys; sports equipment
Goods & Services
(1) Paper; cardboard; address books; almanacs; appointment books; art prints; arts and craft paint kits; autograph books; baby books; baseball cards; binders; bookends; bookmarks; a series of fiction books; books; bumper stickers; calendars; Christmas cards; chalk; children's activity books; coasters made of paper; coin albums; coloring books; printed children's coloring pages; comic books; comic strips; coupon books; crayons; decals; decorative paper centerpieces; diaries; drawing rulers; dry erase writing boards and writing surfaces; envelopes; erasers; foam stamps; flash cards; gift wrapping paper; globes; greeting cards; guest books; magazines; maps; memo pads; modeling clay; newsletters; newspapers; notebooks; paintings; paper flags; paper cake decorations; paper gift cards; paper party decorations; paper napkins; paper party bags; paperweights; paper gift wrap bows; paper pennants; paper place mats; pen or pencil holders; pencil sharpeners; pen and pencil cases and boxes; photograph albums; photographs; photo-engravings; pictorial prints; picture books; plastic party goodie bags; plastic name badges; plastic shopping bags; plastic sandwich bags; portraits; postcards; posters; printed awards; printed certificates; printed invitations; printed menus; recipe books; rubber stamps; school supply kits containing combinations of school supplies, namely, writing instruments, pens, pencils, mechanical pencils, erasers, markers, crayons, highlighter pens, folders, notebooks, paper, protractors, paper clips, pencil sharpeners, writing grips, glue for stationery purposes and book marks; score cards; stamp albums; staplers; stationery; stickers; trading cards; temporary tattoos; writing implements; children's interactive educational books
(2) All-purpose carrying bags; athletic bags; baby backpacks; backpacks; beach bags; book bags; calling card cases; change purses; coin purses; diaper bags; duffel bags; fanny packs; gym bags; handbags; knapsacks; key cases; luggage; luggage tags; overnight bags; purses; satchels; shopping bags; tote bags; umbrellas; waist packs; wallets; collars for pets
(3) Clothing, namely, aprons, beachwear, belts, bottoms, cloth bibs, coats, costumes for use in role-playing games, beach cover-ups, dresses, ear muffs, gowns, gloves, Halloween costumes, hosiery, infantwear, jackets, leotards, lingerie, loungewear, mittens, overalls, pants, ponchos, rainwear, scarves, shirts, shorts, skirts, sleepwear, socks, suits, sweaters, sweatshirts, swimwear, ties, tops, underwear, wrist bands; footwear; headwear
(4) Action skill games; action figures; board games; card games; children's multiple activity toys; badminton sets; balloons; basketballs; bath toys; baseball bats; baseballs; beach balls; bean bag toys; bean bag dolls; bobblehead dolls; bowling balls; bubble making wand and solution sets; chess sets; toy imitation cosmetics; Christmas stockings; Christmas tree ornaments and decorations; collectable toy figures; crib mobiles; crib toys; disc toss toys; dolls; doll clothing; doll accessories; doll playsets; electric action toys; equipment sold as a unit for playing card games; fishing tackle; fishing rods; footballs; golf balls; golf gloves; golf ball markers; hand-held units for playing electronic games for use with or without an external display screen or monitor; hockey pucks; hockey sticks; infant toys; jigsaw puzzles; jump ropes; kites; magic tricks; marbles; manipulative games; mechanical toys; music box toys; musical toys; parlor games; party favors in the nature of small toys; paper party favors; paper party hats; party games; playing cards; plush toys; puppets; roller skates; role playing toys in the nature of play sets for children to imitate fantasy characters; rubber balls; skateboards; snow boards; snow globes; soccer balls; spinning tops; squeeze toys; stuffed toys; table tennis balls; table tennis paddles and rackets; table tennis tables; talking toys; target games; teddy bears; tennis balls; tennis rackets; toy action figures and accessories therefor; toy boats; toy bucket and shovel sets in the nature of sand toys; toy building blocks; toy mobiles; toy vehicles; toy scooters; toy cars; toy figures; toy banks; toy vehicles in the nature of trucks; toy watches; toy weapons; toy building structures and toy vehicle tracks; video game machines for use with televisions; volley balls; wind-up toys; yo-yos; toy trains and parts and accessories therefor; toy aircraft; fitted plastic films known as skins for covering and protecting electronic game playing apparatus, namely, video game consoles, and hand-held video game units; toys for pets
Children's multiple activity toys; Disc toss toys; Electric action toys; Infant toys; Musical toys; Plush toys; Squeeze toys; Talking toys; Toy action figures and accessories therefor; Toy boats; Sand toys; Toy building blocks; Toy mobiles; Toy vehicles; Toy scooters; Toy figures; Toy imitation cosmetics; Toy banks; Toy watches; Toy weapons; Toy building structures and toy vehicle tracks; Mechanical toys; Toy trains and parts and accessories therefor; Toy aircraft; Role playing toys in the nature of play sets for children to imitate real life occupations; Fantasy character toys; Basketballs; Baseball bats; Baseballs; Beach balls; Footballs; Golf balls; Golf gloves; Golf ball markers; Jump ropes; Roller skates; Skateboards; Soccer balls; Table tennis balls; Table tennis paddles; Table tennis rackets; Tennis balls; Tennis rackets; Volley balls; Jigsaw puzzles; Playing cards; Action figures; Bean bags; Dolls; Doll clothing; Doll accessories; Playsets for dolls; Bobblehead dolls; Bubble making wand and solution sets; Puppets; Kites; Magic tricks; Marbles; Rubber balls; Play balls; Yo-yos; Teddy bears; Balloons; Christmas stockings; Christmas tree ornaments and decorations; Crib mobiles; Party favors in the nature of small toys; Paper party favors; Paper party hats; Snow globes; Toy for pets
VIDEO QUALITY METRIC FOR FRAME INTERPOLATED CONTENT
ETH Zürich (Eidgenössische Technische Hochschule Zürich) (Switzerland)
Inventor
Zhang, Yang
Çökmez, Göksel Mert
Aydin, Tunc Ozan
Schroers, Christopher Richard
Abstract
In some embodiments, a method receives a first video. The first video includes frames that were generated using frame interpolation. A feature extractor extracts first features from frames of the first video. The first features are extracted from a plurality of levels of a network of the feature extractor. A spatio-temporal processing system analyzes the first features spatially and temporally to determine spatial and temporal features for the plurality of levels. The method combines the spatial and temporal features from the plurality of levels to determine a score that measures a quality of the first video.
ETH Zürich (Eidgenössische Technische Hochschule Zürich) (Switzerland)
Inventor
Aydin, Tunc Ozan
Salehi, Farnood
Mohammadi, Arad
Abstract
The present invention sets forth techniques for generating attention values via a modulated softmax attention mechanism. The techniques include calculating key, query, and value matrices associated with an input matrix including one or more input tokens, and calculating a first vector including one or more per-token scaling values and a second vector including one or more per-token bias values. The techniques also include generating an attention prior matrix based at least on the first and second vectors, and calculating, for each of the one or more input tokens, a modulated attention score associated with the input token. The techniques further include calculating a matrix including one or more modulated attention values associated with the one or more input tokens, and transmitting the one or more modulated attention values to at least one stage included in a transformer network.
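A minimal sketch of one modulated attention step: the per-token scaling and bias vectors modulate the attention logits before the softmax. The exact way the patent combines the two vectors into the prior matrix is not given here, so the scale-then-add form below is an assumption.

```python
import torch

def modulated_attention(q, k, v, scale_vec, bias_vec):
    """q, k, v: (T, d); scale_vec, bias_vec: (T,) per-token modulation."""
    logits = q @ k.T / q.shape[-1] ** 0.5
    prior = scale_vec[:, None] * logits + bias_vec[None, :]   # attention prior
    attn = torch.softmax(prior, dim=-1)                       # modulated scores
    return attn @ v                                           # modulated values

t, d = 8, 16
out = modulated_attention(torch.randn(t, d), torch.randn(t, d), torch.randn(t, d),
                          torch.ones(t), torch.zeros(t))      # reduces to plain attention
```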
A social agent system includes a computing platform having processing hardware and a system memory storing a social agent software code. The processing hardware is configured to execute the social agent software code to receive, from a client system, input data provided by a user of the client system when the user is interacting with the social agent system using the client system, and to select, using the input data, a dialogue template for responding to the user. The dialogue template includes one or more placeholder fields to be filled by the client system to create a personalized dialogue for responding to the user. The processing hardware is further configured to execute the social agent software code to deliver, to the client system, the dialogue template including the one or more placeholder fields to be filled by the client system to create the personalized dialogue for responding to the user.
ETH Zürich (Eidgenössische Technische Hochschule Zürich) (Switzerland)
Inventor
Bächer, Moritz Niklaus
Chandran, Prashanth
Serifi, Agon
Abstract
In one embodiment, a method for generating an output sequence of data utilizing a spline-based transformer is disclosed. The method may include encoding, via a processing element, an input sequence of data using an artificial neural network encoder to generate a plurality of input tokens; processing, via the processing element, the plurality of input tokens and a plurality of control tokens with a transformer encoder into a latent space to generate a plurality of control points; defining, via the processing element, a spline based on the plurality of control points; sampling, via the processing element, a plurality of interpolated control points based on the spline; and decoding, via the processing element, the interpolated control points with an artificial neural network decoder to generate the output sequence of data.
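A minimal sketch of the decode path under stated assumptions: a transformer (stubbed out here) emits a few control points, a spline is defined through them, dense interpolated points are sampled from the spline, and a decoder maps them to the output sequence. The piecewise-linear spline and all module sizes are illustrative.

```python
import torch
import torch.nn as nn

def sample_spline(control_points, n):
    """Piecewise-linear spline through (P, d) control points, sampled n times."""
    t = torch.linspace(0, control_points.shape[0] - 1, n)
    i0 = t.floor().long().clamp(max=control_points.shape[0] - 2)
    w = (t - i0.float()).unsqueeze(-1)
    return (1 - w) * control_points[i0] + w * control_points[i0 + 1]

control = torch.randn(4, 8)              # control points from the transformer encoder
interp = sample_spline(control, n=32)    # interpolated control points on the spline
decoder = nn.Linear(8, 8)                # stand-in neural network decoder
output_sequence = decoder(interp)        # (32, 8) decoded output sequence
```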
41 - Education, entertainment, sporting and cultural services
Goods & Services
(1) Entertainment services in the nature of a program series featuring live action, comedy and drama provided through cable television, broadcast television, internet, video-on-demand, and through other forms of transmission media; providing online information in the field of entertainment concerning television programs; Entertainment services, namely, the development, creation, production, and distribution of digital multimedia and audio and visual content, namely, motion picture films, television programs, and multimedia entertainment and educational content; development, creation, production, distribution, and rental of audio and visual recordings; production of entertainment shows and interactive programs for distribution via audio and visual media, audio and video streaming, and electronic means; production and provision of motion pictures, entertainment news and entertainment information via electronic communication networks and audio and video streaming; providing online computer games; providing websites featuring a wide variety of general interest entertainment information relating to motion picture films, television programs, musical videos, film clips, photographs, and other multimedia materials; amusement park and theme park services; educational and entertainment services rendered in or relating to theme parks, namely, live stage shows, live amusement park shows, live performances by costumed characters, and production and presentation of live theatrical performances; production and presentation of live shows; theater productions; entertainer services, namely, live appearances by a professional entertainer
41 - Education, entertainment, sporting and cultural services
Goods & Services
(1) Development, creation, production, and distribution of digital multimedia and audio and visual content, namely, motion picture films, television programs, and multimedia entertainment and educational content; development, creation, production, distribution, and rental of audio and visual recordings; production of entertainment shows and interactive programs for distribution via audio and visual media, and electronic means; production and provision of entertainment news and entertainment information via electronic communication networks; providing online computer games, websites and applications featuring a wide variety of general interest entertainment information relating to motion picture films, television programs, musical videos, film clips, photographs, and other multimedia materials; entertainment services, namely, amusement park and theme park services; educational and entertainment services rendered in or relating to theme parks, namely, live stage shows, live amusement park shows, live performances by costumed characters, and production and presentation of live theatrical performances; production and presentation of live shows; theater productions; entertainer services, namely, live appearances by a professional entertainer.
09 - Scientific and electric apparatus and instruments
41 - Education, entertainment, sporting and cultural services
Goods & Services
Scientific, research, navigation, surveying, photographic, cinematographic, audiovisual, optical, weighing, measuring, signalling, detecting, testing, inspecting, life-saving and teaching apparatus and instruments; Apparatus and instruments for conducting, switching, transforming, accumulating, regulating or controlling the distribution or use of electricity; Apparatus and instruments for recording, transmitting, reproducing or processing sound, images or data; Recorded and downloadable media, computer software, blank digital or analogue recording and storage media; Mechanisms for coin-operated apparatus; Cash registers, calculating devices; Computers and computer peripheral devices; Diving suits, divers' masks, ear plugs for divers, nose clips for divers and swimmers, gloves for divers, breathing apparatus for underwater swimming; Fire-extinguishing apparatus; Audio discs; audio recordings; audio and video recordings; audio speakers; binoculars; calculators; camcorders; cameras; CD-ROMs; CD-ROM drives (as part of the computer); CD-ROM writers (as part of the computer); cellular telephones; cellular telephone accessories; cellular telephone cases; cellular telephone covers; chips containing musical recordings; face plates for cellular telephones; compact disc players; compact disc recorders; compact discs; computer game programs; computer game cartridges and discs; computers; computer hardware; computer keyboards; computer monitors; computer mouse; computer disc drives; computer software; cordless telephones; decorative magnets; digital cameras; digital video and audio players; DVDs; DVD players; DVD recorders; digital versatile discs; digital video discs; electrical and optical cables; pre-recorded optical and magneto-optical discs; optical and magneto-optical disc players and recorders for audio, video and computer data; downloadable electronic publications; downloadable software applications for mobile devices; electronic personal organizers; eyeglass cases; eyeglasses; flotation vests; graduated rulers for office and stationery; headphones; karaoke machines; microphones; MP3 players; modems (as part of a computer); mouse pads; motion picture films; musical recordings; pagers; personal stereos; personal digital assistants; printers; protective helmets for sports; radios; snorkels; swimming goggles; swim masks; sunglasses; telephones; television sets; video cameras; video game cartridges; video game discs; videophones; video recordings; walkie-talkies; wrist and arm rests for use with computers.
Education; Providing of training; Entertainment; Sporting and cultural activities; Entertainment services in the nature of a program series provided through cable television, broadcast television, internet, video-on-demand, and through other forms of transmission media; providing online information in the field of entertainment concerning television programs.
41 - Education, entertainment, sporting and cultural services
Goods & Services
Entertainment services in the nature of a television series in the field of live action, drama, and comedy; Entertainment services, namely, an ongoing multimedia program featuring live action distributed via various platforms across multiple forms of transmission media; Providing television programs, not downloadable, via video-on-demand transmission services; Providing entertainment information via a website
09 - Scientific and electric apparatus and instruments
Goods & Services
Apparatus for recording, transmission, processing and reproduction of sound, images or data; Digital media, namely, pre-recorded video cassettes, digital video discs, digital versatile discs, downloadable audio and video recordings, DVDs, and high definition digital discs featuring live-action entertainment, music, stories, children’s programming, dramatic performances, non-dramatic performances, learning activities for children, and games; Digital media, namely, pre-recorded DVDs, downloadable audio and video recordings, and CDs featuring and promoting live-action entertainment, music, stories, children's programming, dramatic performances, non-dramatic performances, learning activities for children, and games; Downloadable fiction books on a variety of topics; Downloadable non-fiction books on a variety of topics; Non-fiction audio books on a variety of topics; Visual and audio recordings featuring live-action entertainment, music, stories, children’s programming, dramatic performances, non-dramatic performances, learning activities for children, and games; Downloadable podcasts in the field of comedy, culture and arts, current events, education, entertainment, history, interviews, music, news, and stories; Musical recordings; Downloadable electronic publications in the nature of books, comics, and illustrations in the field of fiction and stories; Downloadable series of children's books; Downloadable e-books in the field of fiction and stories; Downloadable computer game software; Downloadable mobile applications for viewing, playing, and purchasing animated entertainment and electronic games; Downloadable video game software; Downloadable computer software for the administration of learning activities for children; Computer hardware and computer peripheral devices; Mouse pads; Cameras; Digital cameras; Blank recordable optical disc; DVD recording apparatus; Compact disc recorders; Audio speakers; Digital photo frames; Headphones; Earphones; Ear buds; Walkie-talkies; Telephones; Headsets for cellular or mobile phones; Cell phone cases; Screen protectors for cellular phones; Protective covers for cell phones; Protective covers and cases for cell phones; Eyeglasses; Sunglasses; Cases for eyeglasses and sunglasses; Binoculars; Decorative magnets; Microphones; Protective covers and cases for tablet computers; Radio frequency identification tag readers; Radio-frequency identification (RFID) tags; Radio-frequency transmitters; Smart watches; Fitted plastic films known as skins for covering and protecting electronic apparatus, namely, mobile phones, portable music players, mobile computers, and tablet computers; Karaoke machines; Bicycle helmets; Flotation vests; Protective helmets for sports; Snorkels; Swimming goggles; Swim masks
A method for generating an animation for an animatronic based on one or more creative requirements, where the generated animation may imitate one or more input motions and is limited by the one or more creative requirements. The generated animation may then be retargeted onto the animatronic based on mechanical constraints, such that the animation is physically feasible on the animatronic. In some instances, the animation may be generated based on received animation source content performed by an actor that defines one or more input motions. The animation is generated by, for example, a generative AI model or by imitation learning. Additionally, the method may be performed in real-time. Associated systems and methods are also disclosed.
ETH Zürich (Eidgenössische Technische Hochschule Zürich) (Switzerland)
Inventor
Abecassis, Eitan
Meyer, David
Labrador, Clara Fernandez
Schroers, Christopher Richard
Labrozzi, Scott
Abstract
In some embodiments, a method analyzes a first sample of a first audio signal to determine a first representation in a space. A plurality of second samples for a second audio signal is analyzed to determine a plurality of second representations in the space. The method compares the first representation and the plurality of second representations in the space to select a second representation. An offset is determined between the first sample and a second sample that is associated with the second representation. The offset is output.
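The matching step could look like the following sketch, where embed() is a placeholder for whatever representation the method computes; the closest second representation determines the offset.

    import numpy as np

    def find_offset(sample_a, windows_b, start_offsets, embed):
        """Return the offset of the window of signal B closest to sample A."""
        rep_a = embed(sample_a)                             # first representation
        reps_b = np.stack([embed(w) for w in windows_b])    # second representations
        distances = np.linalg.norm(reps_b - rep_a, axis=1)  # compare in the space
        best = int(np.argmin(distances))                    # select a representation
        return start_offsets[best]                          # offset to output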
A method includes determining an orientation of a mechanically animated surface relative to one or more projectors configured to project content onto the mechanically animated surface, dynamically rendering the content based on one or more parameters derived from the orientation of the mechanically animated surface relative to the one or more projectors, and presenting the content on the mechanically animated surface using the one or more projectors.
Children's multiple activity toys; Disc toss toys; Electric action toys; Infant toys; Inflatable toys; Musical toys; Plush toys; Squeeze toys; Talking toys; Toy action figures and accessories therefor; Toy boats; Sand toys; Toy building blocks; Toy mobiles; Toy vehicles; Toy scooters; Toy figures; Toy imitation cosmetics; Toy banks; Toy watches; Toy building structures and toy vehicle tracks; Mechanical toys; Toy trains and parts and accessories therefor; Toy aircraft; Basketballs; Baseball bats; Baseballs; Beach balls; Footballs; Golf balls; Golf gloves; Golf ball markers; Jump ropes; Roller skates; Skateboards; Soccer balls; Tennis balls; Tennis rackets; Volley balls; Jigsaw puzzles; Action figures; Bean bags; Dolls; Doll clothing; Doll accessories; Playsets for dolls; Bobblehead dolls; Bubble making wand and solution sets; Puppets; Kites; Magic tricks; Rubber balls; Play balls; Yo-yos; Teddy bears; Balloons; Christmas stockings; Christmas tree ornaments and decorations; Crib mobiles; Party favors in the nature of small toys; Paper party favors; Paper party hats; Snow globes; Toys for pets
09 - Scientific and electric apparatus and instruments
Goods & Services
Apparatus for recording, transmission, processing and reproduction of sound, images or data; Digital media, namely, pre-recorded video cassettes, digital video discs, digital versatile discs, downloadable audio and video recordings, DVDs, and high definition digital discs featuring animated entertainment, music, stories, children's programming, dramatic performances, non-dramatic performances, learning activities for children, and games; Digital media, namely, pre-recorded DVDs, downloadable audio and video recordings, and CDs featuring and promoting animated entertainment, music, stories, children's programming, dramatic performances, non-dramatic performances, learning activities for children, and games; Downloadable fiction books on a variety of topics; Downloadable non-fiction books on a variety of topics; Non-fiction audio books on a variety of topics; Visual and audio recordings featuring animated entertainment, music, stories, and games for children; Downloadable podcasts in the field of comedy, culture and arts, current events, education, entertainment, history, interviews, music, news, and stories; Musical recordings; Downloadable series of children's books; Downloadable e-books in the field of fiction and stories; Downloadable computer game software; Downloadable mobile applications for viewing, playing, and purchasing animated entertainment and electronic games; Downloadable video game software; Downloadable computer software for the administration of learning activities for children; Computer hardware and computer peripheral devices; Mouse pads; Cameras; Digital cameras; Blank recordable optical disc; DVD recording apparatus; Compact disc recorders; Audio speakers; Digital photo frames; Headphones; Earphones; Ear buds; Walkie-talkies; Telephones; Headsets for cellular or mobile phones; Cell phone cases; Screen protectors for cellular phones; Protective covers for cell phones; Protective covers and cases for cell phones; Eyeglasses; Sunglasses; Cases for eyeglasses and sunglasses; Binoculars; Decorative magnets; Microphones; Protective covers and cases for tablet computers; Radio frequency identification tag readers; Radio-frequency identification (RFID) tags; Radio-frequency transmitters; Smart watches; Fitted plastic films known as skins for covering and protecting electronic apparatus, namely, mobile phones, portable music players, mobile computers, and tablet computers; Karaoke machines; Bicycle helmets; Flotation vests; Protective helmets for sports; Snorkels; Swimming goggles; Swim masks; Downloadable electronic publications in the nature of books, comics, and illustrations in the field of fiction and stories
09 - Scientific and electric apparatus and instruments
16 - Paper, cardboard and goods made from these materials
25 - Clothing; footwear; headgear
28 - Games; toys; sports equipment
41 - Education, entertainment, sporting and cultural services
Goods & Services
Recorded data; Information technology, audiovisual, multimedia, and photographic equipment; Magnets, magnetization, and demagnetization devices; Scientific equipment and laboratory equipment for physical reactions using electricity; Electrical apparatus, instruments, and cables; Optical devices and equipment, amplifiers, and correctors; Safety, security, protection, and signaling devices and equipment; Diving equipment; Navigation, orientation, location tracking, target tracking, and mapping devices; Measuring, detection, and monitoring instruments, devices, and controllers; Apparatus for scientific research and laboratories, teaching apparatus, and simulators; Parts and accessories for all the foregoing goods, insofar as they are included in this class; Apparatus for recording sound; apparatus for recording images; apparatus for recording data; apparatus for transmission of sound; apparatus for transmission of images; apparatus for the transmission of data; apparatus for reproduction of sound; apparatus for the reproduction of images; Apparatus for the reproduction of data; digital recording media; CDs; DVDs; downloadable video recordings; audio discs; audio books; audio recordings; video recordings; audio speakers; binoculars; calculators; camcorders; cameras; chargers for mobile phones; batteries for mobile phones; mobile phone cases; keyboards for mobile phones; straps for mobile phones; carriers adapted for mobile phones; wireless headsets for mobile phones; mobile phone docking stations; dashboard mounts for mobile phones; protective cases for mobile phones; carrying cases for mobile telephones; auxiliary speakers for mobile phones; auxiliary batteries for mobile phones; flip covers for mobile phones; stands adapted for mobile phones; Battery chargers for mobile phones; hands free devices for mobile phones; display screen protectors in the nature of films for mobile phones; face plates for cellular telephones; chips containing musical recordings; compact disc players; compact disc recorders; compact discs; computer game programs; computers; computer hardware; computer keyboards; computer monitors; computer mouse; computer disc drives; computer software; decorative magnets; digital cameras; digital video players; audio players; DVD players; DVD recorders; digital versatile discs; digital video discs; downloadable software applications for mobile phones; downloadable computer software applications; downloadable electronic publications; electronic personal organizers; eyeglass cases; eyeglasses; life jackets; graduated rulers; headphones; karaoke machines; microphones; MP3 players; mouse pads; motion picture films; musical recordings; personal digital assistants; printers; protective helmets for sports; bicycle helmets; radios; snorkels; swimming goggles; swim masks; sunglasses; telephones; television sets; video cameras; downloadable video game software; video game software; walkie-talkies; wrist rests for use with computers; arm rests for use with computers; protective covers for tablet computers; cases for tablet computers; downloadable ringtones for mobile phones; downloadable emoticons for mobile phones; downloadable graphics for mobile phones; podcasts. 
Paper and cardboard; Works of art and figures made of paper or cardboard, as well as architectural models; Decorative and artistic supplies and materials; Paper filter material; Bags, pouches, and goods for packaging, wrapping, and storage purposes made of paper, cardboard, or plastic; Paper and stationery, as well as teaching and educational materials; Adhesives for paper and stationery or for household purposes; Printed matter; Parts and accessories for all the aforementioned goods, insofar as they are included in this class; Address books; almanacs; appointment books; art prints; arts and craft paint kits; autograph books; baby books; ball point pens; baseball cards; binders; bookends; bookmarks; books; bumper stickers; calendars; cartoon strips; Christmas cards; chalk; chalk boards; children’s activity books; coasters made of paper; coin albums; coloring books; coloring pages; color pencils; comic books; comic strips; coupon books; crayons; decals; decorative paper centerpieces; diaries; drawing rulers; dry erase writing boards; envelopes; erasers; magazines; maps; markers; memo pads; modeling clay; newsletters; newspapers; note paper; notebooks; notebook paper; paintings; paper flags; paper cake decorations; paper party decorations; paper napkins; paper party bags; paperweights; paper gift wrap bows; gift wrapping paper; paper pennants; paper place mats; paper table cloths; plastic party bags; pen or pencil holders; pencils; pencil sharpeners; pen and pencil cases and boxes; pens; periodicals; photograph albums; photographs; pictorial prints; picture books; plastic materials for packaging (not included in other classes); portraits; postcards; posters; printed awards; printed certificates; printed invitations; printed menus; publications; recipe books; rubber stamps; sandwich bags; score cards; stamp albums; stationery; staplers; stickers; trading cards; ungraduated rulers; writing paper; writing implements; letter openers of precious metal; transfers [decalcomanias].
Headgear; Articles of apparel; Parts and accessories for all the foregoing goods, included in this class; Aprons; athletic shoes; bandanas; baseball caps; beach cover-ups; beachwear; belts; bibs; bikinis; blazers; boots; bow ties; bras; caps; chaps; cloth bibs; coats; dresses; ear muffs; footwear; gloves; golf shirts; Halloween costumes; hats; head bands; headwear; hosiery; infantwear; jackets; jeans; jerseys; kerchiefs; leotards; leg warmers; mittens; neckties; night shirts; night gowns; overalls; pajamas; pants; panty hose; polo shirts; ponchos; rainwear; robes; sandals; scarves; shirts; shoes; skirts; shorts; slacks; slippers; sleepwear; socks; stockings; sweaters; sweat pants; sweat shirts; swimsuits; tank tops; tights; T-shirts; underwear; vests; wrist bands.
Sports goods and equipment; Festive decorations and artificial Christmas trees; Equipment for fairs and playgrounds; Toys, games, playthings, and curiosities; Parts and accessories for all the aforementioned goods, insofar as they are included in this class; Action skill games; action figures; board games; children's multiple activity toys; badminton sets; balloons; basketballs; bath toys; baseball bats; baseballs; beach balls; bean bag dolls; bobblehead dolls; bowling balls; bubble making wand and solution sets; card games; chess sets; toy imitation cosmetics; Christmas stockings; Christmas tree ornaments and decorations; collectable toy figures; crib mobiles; crib toys; disc toss toys; dolls; doll clothing; doll accessories; doll playsets; electric action toys; fishing tackle; fishing rods; footballs; golf balls; golf gloves; golf ball markers; hand-held units for playing electronic games for use with or without an external display screen or monitor; hockey pucks; hockey sticks; infant toys; inflatable toys; inflatable pool toys; jigsaw puzzles; jump ropes; kites; magic tricks; marbles; manipulative games; mechanical toys; music box toys; musical toys; parlor games; party favors in the nature of small toys; paper party favors; paper party hats; party games; plush toys; puppets; roller skates; role playing toys in the nature of play sets for children to imitate real life occupations; rubber balls; skateboards; snow boards; snow globes; soccer balls; spinning tops; squeeze toys; stuffed toys; table tennis balls; table tennis paddles and rackets; table tennis tables; talking toys; target games; teddy bears; tennis balls; tennis rackets; toy action figures and accessories therefor; toy boats; toy bucket and shovel sets in the nature of sand toys; toy building blocks; toy mobiles; toy vehicles; toy scooters; toy cars; toy figures; toy banks; toy vehicles in the nature of trucks; toy watches; toy weapons; toy building structures and toy vehicle tracks; video game machines for use with televisions; volley balls; wind-up toys; yo-yos; toy trains and parts and accessories therefor; toy aircraft; fitted plastic films known as skins for covering and protecting electronic game playing apparatus, namely, video game consoles, and hand-held video game units. 
Publishing and reporting; Education, training, entertainment, and sports; Translation and interpreting; Rental and leasing of items related to the provision of the aforementioned services, insofar as included in this class; Consulting and information related to the aforementioned services, insofar as included in this class; Entertainment services; Development, creation, production, and distribution of digital multimedia and audio and visual content, namely, motion picture films, television programs, and multimedia entertainment and educational content; development, creation, production, distribution, and rental of audio and visual recordings; production of entertainment shows and interactive programs for distribution via audio and visual media, and electronic means; production and provision of entertainment news and entertainment information via electronic communication networks; providing online computer games, websites and applications featuring a wide variety of general interest entertainment information relating to motion picture films, television programs, musical videos, film clips, photographs, and other multimedia materials; entertainment services, namely, amusement park and theme park services; educational and entertainment services rendered in or relating to theme parks, namely, live stage shows, live amusement park shows, live performances by costumed characters, and production and presentation of live theatrical performances; production and presentation of live shows; theater productions; entertainer services, namely, live appearances by a professional entertainer.
41 - Education, entertainment, sporting and cultural services
Goods & Services
Distribution of motion picture films; Distribution of television programs for others; Production and distribution of television shows and movies; Entertainment services in the nature of production of motion pictures, television shows, and multimedia entertainment content; Entertainment services, namely, an ongoing multimedia program featuring action, adventure, animated content, comedy, drama, music, and news distributed via various platforms across multiple forms of transmission media; Rental of motion picture films; Provision of non-downloadable films and television programmes via a video-on-demand service; Provision of non-downloadable films and movies via a video-on-demand service; Providing films, not downloadable, via video-on-demand transmission services; Providing television programs, not downloadable, via video-on-demand transmission services; Providing online non-downloadable visual and audio recordings featuring action, adventure, animated content, comedy, drama, music, and news; Providing online computer games; Providing online non-downloadable e-books in the field of fiction and stories; Providing entertainment news and information specifically in the field of motion picture films, musical videos, film clips, and photographs via a website; Providing a website featuring entertainment information in the fields of motion picture films, musical videos, film clips, and photographs; Providing entertainment information via a website; Providing a website featuring entertainment information; Providing entertainment information in the fields of motion picture films, musical videos, film clips, history, technology, artwork, and photographs via a website; Providing online non-downloadable videos in the field of action, adventure, animated content, comedy, drama, music, and news; Providing online non-downloadable images of graphics, photographs, artwork and text; Providing online non-downloadable audio recordings in the field of action, adventure, animated content, comedy, drama, music, and news; Providing online non-downloadable comic books and graphic novels; Entertainment services, namely, providing podcasts in the field of comedy, culture and arts, current events, education, entertainment, history, interviews, music, news, and stories; Amusement park and theme park services; Entertainment in the nature of theater productions; Entertainment services in the nature of an amusement park show; Entertainment services, namely, live appearances by a celebrity, musician, athlete, actor, or costumed character; Theatrical shows provided at performance venues; Providing education in the field of entertainment and music rendered through live theater performances; Presentation of live show performances; Entertainment, namely, production of plays, musicals, and theater productions; Theatre productions
69.
INDUSTRIAL CONTROL SYSTEM DATA TAP AND MODEL FOR MANAGING INDUSTRIAL CONTROL SYSTEM
Some implementations of the disclosure are directed to a method including: receiving tapped data that was tapped from a controller of an industrial control system (ICS) while the controller controlled one or more ICS devices of the ICS, the tapped data comprising input data obtained from one or more input components of the controller communicatively coupled to the one or more ICS devices, or output data obtained from one or more output components of the controller communicatively coupled to the one or more ICS devices; and generating, using at least a trained model and the tapped data, a prediction output by the trained model that indicates a future failure of the ICS, a future maintenance event of the ICS, or a prescribed maintenance of the ICS.
G05B 19/418 - Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
Administration of consumer loyalty programs; providing incentive reward programs to enable participants to obtain discounts for general consumer merchandise and entertainment services; arranging and conducting incentive reward programs to promote the sale of general consumer merchandise and entertainment services; Advertising, marketing and sales promotion; Services relating to customer loyalty, incentive and bonus programs; Organization of customer loyalty programs for commercial, promotional or advertising purposes.
71.
METHOD OF PREDICTING MICROCLIMATE CONDITIONS BASED ON GLOBAL WEATHER
A method of training a microclimate machine learning model includes: receiving regional weather data for a microclimate environment; detecting one or more microclimate conditions via one or more sensors positioned within the microclimate environment; determining one or more microclimate area characteristics of the microclimate environment at the time of the detected one or more microclimate conditions; and generating the microclimate machine learning model based on the regional weather data, the one or more microclimate conditions, and the one or more microclimate area characteristics.
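A stand-in for the training flow, using synthetic data and a random-forest regressor purely for illustration (the claim does not fix a model family):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    regional_weather = rng.normal(size=(500, 4))      # e.g. temperature, wind, humidity, pressure
    area_characteristics = rng.normal(size=(500, 2))  # e.g. shade fraction, elevation
    sensor_readings = rng.normal(size=(500,))         # microclimate conditions from in-situ sensors

    features = np.hstack([regional_weather, area_characteristics])
    model = RandomForestRegressor(n_estimators=200).fit(features, sensor_readings)
    print(model.predict(features[:1]))                # local conditions from regional data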
A system may include a first portion providing a first scene (e.g., a day scene) of an image, a second portion providing a second scene (e.g., a night scene) of the image, and means (e.g., a backlight) for transitioning the system from the first scene to the second scene. The first portion may be a first mode stack, and the second portion may be a second mode stack. The second mode stack may include a greater number of layers compared to the first mode stack. A filter may be positioned between the stacks. The first mode may be visible based on light directed onto the first mode stack. Selective activation of the backlight may transition the system from the first mode to the second mode, such as shifting or augmenting the image from the first mode to the second mode. Additional systems and associated methods are also disclosed.
G09G 3/34 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix by control of light from an independent source
41 - Education, entertainment, sporting and cultural services
Goods & Services
Entertainment; amusement park and theme park services; entertainment services, namely, providing podcasts in the field of comedy, culture and arts, current events, education, entertainment, history, interviews, music, news, and stories; entertainer services, namely, live appearances by a professional entertainer; production of entertainment shows and interactive programs for distribution via audio and visual media, and electronic means; production and presentation of live shows; production, and distribution of digital multimedia and audio and visual content, namely, motion picture films, television programs, and multimedia entertainment content; production, and distribution, of audio and visual recordings; providing films and television programmes, non-downloadable, via video-on-demand services; providing online non-downloadable visual and audio recordings featuring comedy, adventure, drama, and animated content; providing online computer games, websites and applications featuring a wide variety of general interest entertainment information relating to motion picture films, television programs, musical videos, film clips, photographs, and other multimedia materials; providing online interactive games accessible via mobile phones and wireless devices; Providing non-downloadable electronic publications in the nature of books, magazines, journals and newspapers in the field of novels, comics and cartoons.
41 - Education, entertainment, sporting and cultural services
Goods & Services
Production and distribution of television shows and movies; Entertainment services, namely, an ongoing multimedia program featuring comedy, adventure, drama, and animated content distributed via various platforms across multiple forms of transmission media; Provision of non-downloadable films and television programmes via a video-on-demand service; Providing television programs, not downloadable, via video-on-demand transmission services; Providing online non-downloadable visual and audio recordings featuring comedy, adventure, drama, and animated content; Providing online computer games; Providing online non-downloadable e-books in the field of fiction; Providing a website featuring entertainment information in the fields of mobile games and television programs; Providing entertainment information in the fields of mobile games and television programs via a website; Providing online non-downloadable videos in the field of comedy, adventure, drama, and animated content; Providing online non-downloadable images of graphics, photographs, artwork, and text; Providing online non-downloadable audio recordings in the field of comedy, adventure, drama, and animated content; Entertainment services, namely, providing podcasts in the field of comedy, culture and arts, current events, education, entertainment, history, interviews, music, news, and stories; Amusement park and theme park services; Presentation of live show performances
A computer implemented method includes receiving, by a processor, an image including one or more subjects and one or more obstructions. The method further includes partitioning, by the processor, the image into a plurality of image segments, where the one or more subjects and one or more obstructions are represented as separate image segments of the plurality of image segments. The method further includes obtaining, by the processor, depth information for the plurality of image segments. The method further includes identifying one or more focal image segments of the plurality of image segments based on the depth information of the plurality of image segments and modifying the image based on the one or more focal image segments to generate a modified image.
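One way to realize the final modification step, assuming segmentation masks and per-segment depths are already available from off-the-shelf models, is to keep focal segments sharp and defocus the rest:

    import numpy as np
    import cv2

    def defocus_non_focal(image, masks, depths, focal_depth, tol=0.5):
        """masks: list of HxW boolean arrays; depths: per-segment mean depth."""
        blurred = cv2.GaussianBlur(image, (31, 31), 0)   # defocused base image
        out = blurred.copy()
        for mask, depth in zip(masks, depths):
            if abs(depth - focal_depth) <= tol:          # focal image segment
                out[mask] = image[mask]                  # keep the subject sharp
        return out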
A system includes an operating apparatus, a control system processor configured to control the operating apparatus by dynamically providing control data to the operating apparatus, the control system processor being further configured to dynamically receive status data from the operating apparatus, wherein an operational state of the control system processor is defined in accordance with the status data received from the operating apparatus. The system also includes a data acquisition module configured to also dynamically receive the status data from the operating apparatus, as well as a proxy processor communicatively coupled to the data acquisition module. The proxy processor is configured to obtain the status data from the data acquisition module, and mimic, using the status data obtained from the data acquisition module, the operational state of the control system processor.
A system includes a computing platform having a hardware processor and a memory storing software code and a content delivery application deployable to multiple personal communication devices, as well as multiple directional antennas communicatively coupled to the computing platform. The hardware processor is configured to execute the software code to transmit, using a first directional antenna of the multiple directional antennas, a content activation signal to a predetermined area within a venue, transmission of the content activation signal being limited to the predetermined area. The content activation signal causes content relevant to an event at the predetermined area to be played out by the content delivery application deployed to one of the multiple personal communication devices to a user of that personal communication device, only when that personal communication device is present in the predetermined area.
H04N 21/233 - Processing of audio elementary streams
G06F 16/487 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
H04N 21/439 - Processing of audio elementary streams
H04W 4/021 - Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
A system includes a computing platform having a hardware processor and a memory storing software code and a content delivery application deployable to multiple personal communication devices, as well as multiple directional antennas communicatively coupled to the computing platform. The hardware processor is configured to execute the software code to transmit, using a first directional antenna of the multiple directional antennas, a content activation signal to a predetermined area within a venue, transmission of the content activation signal being limited to the predetermined area. The content activation signal causes content relevant to an event at the predetermined area to be played out by the content delivery application deployed to one of the multiple personal communication devices to a user of that personal communication device, only when that personal communication device is present in the predetermined area.
H04N 21/214 - Specialised server platform, e.g. server located in an airplane, hotel or hospital
H04N 21/239 - Interfacing the upstream path of the transmission network, e.g. prioritizing client requests
H04N 21/258 - Client or end-user data management, e.g. managing client capabilities, user preferences or demographics or processing of multiple end-users preferences to derive collaborative data
H04N 21/414 - Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
H04N 21/6334 - Control signals issued by server directed to the network components or client directed to client for authorisation, e.g. by transmitting a key
79.
WAVELET-BASED AUTOENCODERS FOR LATENT DIFFUSION MODELS
ETH Zürich (Eidgenössische Technische Hochschule Zürich) (Switzerland)
Inventor
Sadat, Seyedmorteza
Buhmann, Jakob Joachim
Weber, Romann Matthew
Bradley, Derek Edward
Abstract
The computational requirements of an encoder of an autoencoder can be reduced by pre-processing the images using a discrete wavelet transform (DWT). In one embodiment, the encoder uses a multi-level DWT to extract multiscale information from the input images. If using a learned encoder, performing the multi-level DWT enables the encoder to have less complex feature extraction and aggregation networks (e.g., convolutional neural networks (CNNs)) than a standard encoder for an autoencoder. This means the variational autoencoder (VAE) can execute faster, use less computational resources (such as GPU memory), and use less power than traditional VAEs. If using a non-learned encoder, the result of the multi-level DWT can be used as the latent code without using feature extraction and aggregation networks.
G06T 5/10 - Image enhancement or restoration using non-spatial domain filtering
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/52 - Scale-space analysis, e.g. wavelet analysis
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
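Using PyWavelets, the multi-level DWT pre-processing described in the abstract above can be sketched as follows; here the flattened subbands serve directly as the latent code, as in the non-learned-encoder variant.

    import numpy as np
    import pywt

    image = np.random.rand(256, 256).astype(np.float32)
    coeffs = pywt.wavedec2(image, wavelet='haar', level=2)   # multi-level DWT
    # coeffs = [approximation, (LH2, HL2, HH2), (LH1, HL1, HH1)]
    latent, slices = pywt.coeffs_to_array(coeffs)            # multiscale latent code
    # The inverse transform recovers the image on the decoder side.
    restored = pywt.waverec2(
        pywt.array_to_coeffs(latent, slices, output_format='wavedec2'), 'haar')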
80.
TECHNIQUES FOR RE-AGING FACES IN IMAGES AND VIDEO FRAMES
ETH Zürich (Eidgenössische Technische Hochschule Zürich) (Switzerland)
Inventor
Zoss, Gaspard
Bradley, Derek Edward
Chandran, Prashanth
Urnau Gotardo, Paulo Fabiano
Sifakis, Eftychios Dimitrios
Abstract
Techniques are disclosed for re-aging images of faces and three-dimensional (3D) geometry representing faces. In some embodiments, an image of a face, an input age, and a target age are input into a re-aging model, which outputs a re-aging delta image that can be combined with the input image to generate a re-aged image of the face. In some embodiments, 3D geometry representing a face is re-aged using local 3D re-aging models that each include a blendshape model for finding a linear combination of sample patches from geometries of different facial identities and generating a new shape for the patch at a target age based on the linear combination. In some embodiments, 3D geometry representing a face is re-aged by performing a shape-from-shading technique using re-aged images of the face captured from different viewpoints, which can optionally be constrained to linear combinations of sample patches from local blendshape models.
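The delta-image formulation in the first embodiment reduces to a simple combination step; in this sketch, reaging_model is a placeholder conforming to the interface the abstract describes.

    import numpy as np

    def reage(image, input_age, target_age, reaging_model):
        delta = reaging_model(image, input_age, target_age)  # re-aging delta image
        return np.clip(image + delta, 0.0, 1.0)              # re-aged image of the face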
Systems and methods for color space mapping are disclosed. First image data is received comprising first color information in a first color space, which can be a pixel-based color space. Second image data is generated based on the first image data, comprising converted second color information in a second color space based on the first color information. The second color space can be a pigment-based color space. The second image data is modified in the second color space using at least the converted second color information. Supplemental color data is generated based on the modified second image data. A modified image is generated in the first color space using the modified second image data and the generated supplemental color data, the modified image comprising modified color information in the first color space based at least in part on the supplemental color data. The second image data and the supplemental color data are generated using machine learning.
Beijing YoJaJa Software Technology Development Co., Ltd. (China)
Inventor
Liu, Chen
Zhang, Wenhao
Xue, Yuanyi
Huangfu, Xuchang
Abstract
In some embodiments, a method receives a plurality of representations of a relationship between bitrate and quality for a first portion of content. Representations in the plurality of representations are based on respective second portions of the content that are included in the first portion of the content. Clusters of the plurality of representations are generated and the clusters are analyzed to determine a first list of encoding operational points for respective clusters. The method analyzes the first list of encoding operational points for respective clusters to determine a second list of encoding operational points. The second list of encoding operational points is output for use in encoding the first portion of content.
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
H04N 19/119 - Adaptive subdivision aspects e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
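A rough sketch of the clustering stage described above, assuming each second portion (e.g., a shot) is summarized by quality values at fixed bitrates; per-cluster means stand in for the first list of encoding operational points.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    rq_curves = rng.random((40, 6))   # 40 shots x quality sampled at 6 bitrates

    kmeans = KMeans(n_clusters=4, n_init=10).fit(rq_curves)
    first_lists = [rq_curves[kmeans.labels_ == c].mean(axis=0)
                   for c in range(4)]                # operational points per cluster
    second_list = np.mean(first_lists, axis=0)       # consolidated operational points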
83.
Multi-Sourced Machine Learning Model-Based Artificial Intelligence Character Training and Development
A system includes a hardware processor configured to execute software code to receive interaction data identifying an action and personality profiles corresponding respectively to multiple participant cohorts in the action, generate, using the interaction data, an interaction graph of behaviors of the participant cohorts in the action, simulate, using a behavior model, participation of each of the participant cohorts in the action to provide a predicted interaction graph, and compare the predicted and generated interaction graphs to identify a similarity score for the predicted interaction graph relative to the generated interaction graph. When the similarity score satisfies a similarity criterion, the software code is executed to train, using the behavior model, an artificial intelligence character for interactions. When the similarity score fails to satisfy the similarity criterion, the software code is executed to modify the behavior model based on one or more differences between the predicted and generated interaction graphs.
Techniques to manage media flows for broadcast with public cloud networks. A request for a media flow is received. A location of the media flow is identified based on a registration database configured to maintain locations of media flows across a media production facility and a public cloud network facility. The media flow is accessed, and at least one of the receiving entity or the requesting entity is configured to generate media based on the media flow.
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
H04L 12/18 - Arrangements for providing special services to substations for broadcast or conference
H04L 41/04 - Network management architectures or arrangements
H04L 41/069 - Management of faults, events, alarms or notifications using logs of notifications; Post-processing of notifications
H04L 41/342 - Signalling channels for network management communication between virtual entities, e.g. orchestrators, SDN or NFV entities
H04L 41/40 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
H04L 43/04 - Processing captured monitoring data, e.g. for logfile generation
H04L 43/062 - Generation of reports related to network traffic
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
H04L 69/04 - Protocols for data compression, e.g. ROHC
H04N 21/266 - Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system or merging a VOD unicast channel into a multicast channel
The present invention sets forth a technique for automatically managing access control authorization in a distributed computing system. This technique includes receiving an access request from a requesting entity and recording the access request in an audit log. The technique also includes retrieving access control policies associated with the access request and retrieving attribute data values from an entity data store. The technique further includes generating an access request evaluation based on the access request, the access control policies, and the attribute data values. The technique further includes transmitting the access request evaluation to the requesting entity.
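A toy attribute-based evaluation loop illustrating the flow (the policy schema here is invented for the example):

    audit_log = []

    def evaluate(request, policies, attributes):
        audit_log.append(request)                        # record the access request
        for policy in policies:
            if policy["resource"] == request["resource"]:
                entity_attrs = attributes.get(request["entity"], {})
                allowed = entity_attrs.get("role") in policy["allowed_roles"]
                return {"request": request, "allowed": allowed}
        return {"request": request, "allowed": False}    # default deny

    decision = evaluate(
        {"entity": "svc-render", "resource": "asset-db"},
        [{"resource": "asset-db", "allowed_roles": {"renderer"}}],
        {"svc-render": {"role": "renderer"}})            # evaluation to transmit back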
Techniques for generating modified images using content information and style information are disclosed. First image data comprising image content information is received, and a content encoder generates a first embedding by extracting the image content information from the first image data. A second embedding generated by a style encoder is received, the second embedding comprising style information of second image data. The style information comprises color information and texture information. A decoder generates a modified image using the first embedding and the second embedding, the modified image comprising the image content information of the first image data and the style information of the second image data.
Techniques for generating modified images using facial content information are disclosed. First image data comprising first facial content information is received, and a facial content encoder generates a first embedding by extracting the first facial content information from the first image data. Second image data comprising second facial content information and non-facial content information (e.g., style information, pose, facial expression) is received, and a non-facial content encoder generates a second embedding comprising the non-facial content information. A decoder generates a modified image using the first embedding and the second embedding, the modified image comprising the first facial content information of the first image data and the non-facial content information of the second image data.
In some embodiments, a method determines a disparity value from a plurality of disparity values in a current frame of a stereoscopic video. The disparity value is based on a difference of a value for a pixel between a first video and a second video of the stereoscopic video. A location is determined in the current frame that includes the disparity value. The method analyzes first frames prior to the current frame to adjust disparity values in the first frames to generate one or more adjusted first disparity values. Also, the method analyzes second frames after the current frame to adjust disparity values in the second frames to generate one or more adjusted second disparity values. The one or more adjusted first disparity values and the one or more adjusted second disparity values are output for use in displaying captions in the first video or the second video.
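A simple way to picture the temporal adjustment is a ramp that eases neighboring frames toward the caption's disparity at the current frame; the ramp length below is an assumption.

    def smooth_disparities(disparities, current, ramp=10):
        """Ease per-frame disparity values toward the current frame's value."""
        target = disparities[current]
        for i in range(max(0, current - ramp), current):        # first frames, before
            w = (i - (current - ramp)) / ramp
            disparities[i] = (1 - w) * disparities[i] + w * target
        for i in range(current + 1, min(len(disparities), current + ramp + 1)):  # second frames, after
            w = (i - current) / ramp
            disparities[i] = w * disparities[i] + (1 - w) * target
        return disparities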
A system includes a hardware processor and a memory storing software code and a style transfer machine learning (ML) model. The hardware processor is configured to execute the software code to receive an image and a style sample of a selected stylization for an original surface mesh depicted by the image, perform a view-independent reparametrization of the original surface mesh to provide a reparametrized surface mesh, render a three-dimensional (3-D) representation of the reparametrized surface mesh, and generate, using a plurality of virtual cameras, a plurality of perspective images of the 3-D representation. The hardware processor is further configured to execute the software code to stylize the original surface mesh, using the style transfer ML model, the style sample, and the plurality of perspective images of the 3-D representation, to provide a stylized version of the original surface mesh having the selected stylization.
BEIJING YOJAJA SOFTWARE TECHNOLOGY DEVELOPMENT CO. LTD. (China)
Inventor
Xue, Yuanyi
Bracco, Michael John
Labrozzi, Scott Christopher
Schroers, Christopher Richard
Zhang, Wenhao
Abstract
One embodiment of the present invention sets forth a technique for inserting content into a media program. The technique includes determining a plurality of markers corresponding to a plurality of locations within a media program. The technique also includes, for each marker included in the plurality of markers, automatically analyzing a first set of intervals within the media program that lead up to the marker and a second set of intervals within the media program that immediately follow the marker, and determining a set of audio characteristics associated with the first set of intervals and the second set of intervals. The technique further includes generating a plurality of scores for the plurality of markers based on the set of audio characteristics for each marker and inserting additional content at one or more markers included in the plurality of markers based on the plurality of scores.
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
H04N 21/439 - Processing of audio elementary streams
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
H04N 21/466 - Learning process for intelligent management, e.g. learning user preferences for recommending movies
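For intuition, a marker scorer for the technique above might favor quiet audio on both sides of a candidate location, as in this sketch (the one-second window and RMS loudness are assumptions):

    import numpy as np

    def score_markers(audio, sample_rate, marker_times, window_s=1.0):
        """Higher scores for markers bounded by quiet intervals."""
        n = int(sample_rate * window_s)

        def rms(x):
            return float(np.sqrt(np.mean(x ** 2))) if x.size else 0.0

        scores = []
        for t in marker_times:
            i = int(t * sample_rate)
            before = audio[max(0, i - n):i]   # intervals leading up to the marker
            after = audio[i:i + n]            # intervals immediately following it
            scores.append(-(rms(before) + rms(after)))
        return scores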
A system for providing a three-dimensional effect includes a display that generates emitted light having a first characteristic; a selective light filter that passes or reflects the emitted light based on the first characteristic; and a physical prop at least partially positioned between the selective light filter and a viewing location, wherein incident light from the viewing location is reflected by the physical prop and the selective light filter based on a second characteristic of the incident light.
G02B 30/52 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images the image being built up from image elements distributed over a 3D volume, e.g. voxels the 3D volume being constructed from a stack or sequence of 2D planes, e.g. depth sampling systems
G02B 27/14 - Beam splitting or combining systems operating by reflection only
G02B 30/56 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images the image being built up from image elements distributed over a 3D volume, e.g. voxels by projecting aerial or floating images
H04N 13/337 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using polarisation multiplexing
Techniques for identifying a plurality of scenes relating to a first content item and selecting, using a computing system, two or more scenes, of the plurality of scenes, for a customized content summary for a user are disclosed. This includes identifying a point in time when the user stopped consuming the first content item and selecting the two or more scenes based on the identified point in time. The selected two or more scenes were previously presented to the user, prior to the point in time when the user stopped consuming the first content item. The techniques further include generating the customized content summary, using the computing system, by combining the selected two or more scenes.
A method may include receiving a 3D point cloud of a space, identifying points of the 3D point cloud at selective locations of the 3D point cloud, and comparing the points to a map of the space to localize an AGV within the space. A method may include receiving a 3D point cloud of a space, identifying multiple points of the 3D point cloud at respective beam angles from a sensor, and comparing the multiple points to a map of the space to localize an AGV within the space. A method may include receiving a 3D point cloud of a space, identifying first and second sets of points at respective first and second 2D planes, and comparing the sets of points to a map of the space to localize an AGV within the space. Additional methods and associated systems are also disclosed.
A method for content modification executed by a computer including a processor and a memory includes receiving an initial image that includes light emitted from a representation of a light source. The method further includes analyzing the initial image, with a delighting model, to predict a second image comprising negative light of the initial image. The method further includes subtracting pixel values of the predicted second image comprising negative light of the initial image from pixel values of the initial image. The method further includes outputting a third image comprising the initial image without the emitted light from the representation of the light source. A second method includes training a delighting model with the initial image with an added predetermined light source and the predetermined light source, to predict an image comprising negative light of the initial image.
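The subtraction step reduces to a couple of lines; delighting_model below is a placeholder for the trained predictor of the abstract.

    import numpy as np

    def remove_emitted_light(initial_image, delighting_model):
        negative = delighting_model(initial_image)              # predicted negative light
        return np.clip(initial_image - negative, 0.0, 1.0)      # third image, light removed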
ETH Zürich (Eidgenössische Technische Hochschule Zürich) (Switzerland)
Inventor
Bradley, Derek Edward
Yang, Lingchen
Zoss, Gaspard
Chandran, Prashanth
Solenthaler, Barbara
Sifakis, Eftychios Dimitrios
Abstract
The present invention sets forth techniques for generating a facial animation. The techniques include receiving a latent identity code including a first set of features describing a neutral facial depiction associated with an identity and receiving a latent expression code including a second set of features describing a facial expression associated with the identity. The techniques also include generating, via a first machine learning model, an identity-specific facial representation based on a canonical facial representation and the latent identity code and generating, via a second machine learning model and based on the latent identity code, the latent expression code, and the identity-specific facial representation, a muscle actuation field tensor and one or more bone transformations associated with the deformed canonical facial representation. The techniques further include generating, via a physics-based simulator, a facial animation based on at least the muscle actuation field tensor and the one or more bone transformations.
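A dependency-light sketch of the data flow in the abstract above: two learned mappings followed by a physics step. The dimensions, the linear stand-ins for the two neural networks, and the toy "simulator" are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_VERTS, D_ID, D_EXPR = 128, 16, 8

canonical = rng.normal(size=(N_VERTS, 3))                  # canonical facial representation
W_id = rng.normal(size=(D_ID, 3)) * 0.01                   # stand-in for model 1
W_act = rng.normal(size=(D_ID + D_EXPR, N_VERTS)) * 0.01   # stand-in for model 2

def model1(canonical, z_id):
    # Identity-specific representation = canonical + identity-driven offset.
    return canonical + z_id @ W_id

def model2(z_id, z_expr, face_id):
    # Muscle actuation field (one scalar per vertex here) + a bone transform.
    act = np.concatenate([z_id, z_expr]) @ W_act
    bone = np.eye(4)                                        # identity jaw transform
    return act, [bone]

def simulate(face_id, act, bones):
    # Toy "physics": displace each vertex proportionally to its actuation.
    return face_id + act[:, None] * 0.05

z_id, z_expr = rng.normal(size=D_ID), rng.normal(size=D_EXPR)
face_id = model1(canonical, z_id)          # latent identity code -> identity mesh
act, bones = model2(z_id, z_expr, face_id) # codes -> actuation field + bones
frame = simulate(face_id, act, bones)      # physics-based simulation step
print(frame.shape)                         # (128, 3)
```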
ETH Zürich (Eidgenössische Technische Hochschule Zürich) (Switzerland)
Inventor
Djelouah, Abdelaziz
Bozic, Vukasin
Schroers, Christopher Richard
Timofte, Radu
Zhang, Yang
Gross, Markus Hans
Abstract
Embodiments of the present disclosure provide techniques for colorizing visual content using artificial intelligence models. An example method generally includes receiving an image and an input prompt specifying a colorization to apply to the image. Based on an encoded version of the image and a textual description of the image input into a machine learning model, one or more color maps associated with the specified colorization to apply to the image are generated. A colorized version of the image is generated by a generative artificial intelligence model based on combining a grayscale version of the image and the one or more color maps, and the colorized version of the image is output.
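A minimal sketch of the final combination step above: merging a grayscale image (luma) with predicted color maps (chroma) into a colorized result. The YCbCr-style recombination and the stub "model" are assumptions; the disclosure uses learned models for both stages.

```python
import numpy as np

def predict_color_maps(image_encoding, caption):
    # Stub for the ML model mapping (encoded image, text prompt) -> chroma maps.
    h, w = image_encoding.shape[:2]
    cb = np.full((h, w), 0.10)   # assumed uniform blue-difference map
    cr = np.full((h, w), -0.05)  # assumed uniform red-difference map
    return cb, cr

def colorize(gray, cb, cr):
    # ITU-R BT.601-style reconstruction from luma + chroma.
    r = gray + 1.402 * cr
    g = gray - 0.344136 * cb - 0.714136 * cr
    b = gray + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

gray = np.linspace(0, 1, 16 * 16).reshape(16, 16)
cb, cr = predict_color_maps(gray, "a warm sunset over water")
rgb = colorize(gray, cb, cr)
print(rgb.shape, rgb[8, 8].round(3))  # (16, 16, 3) plus one colorized pixel
```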
A system for providing a motion for a robotic device includes a memory storing a plurality of animation styles and a sample motion for the robotic device. The sample motion complies with kinematic and dynamic constraints associated with the robotic device. The system also includes a processing element in communication with the memory and an animation engine executed by the processing element. The animation engine receives a target kinematic state of at least a portion of the robotic device, a selection of at least one of the plurality of animation styles, and a real or simulated state of the robotic device. The animation engine generates at least one actuator command configured to modify the sample motion based on the target kinematic state, the selected at least one of the plurality of animation styles, and the real or simulated state of the robotic device; a minimal sketch of this blending step follows the classification codes below.
G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
B25J 5/00 - Manipulators mounted on wheels or on carriages
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
G06F 30/20 - Design optimisation, verification or simulation
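A minimal sketch of the animation-engine step from the entry above: blend a constraint-compliant sample motion toward a target kinematic state using per-style parameters, emitting one actuator command per joint. The style parameters, joint names, and blend rule are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Style:
    blend: float     # how strongly to pull toward the target (0..1)
    max_step: float  # rad per tick, a stand-in dynamic constraint

STYLES = {"snappy": Style(blend=0.8, max_step=0.20),
          "sleepy": Style(blend=0.2, max_step=0.05)}

def actuator_commands(sample_pose, target_pose, current_pose, style_name):
    style = STYLES[style_name]
    cmds = {}
    for joint, sample_q in sample_pose.items():
        # Blend the sample motion toward the target kinematic state...
        desired = (1 - style.blend) * sample_q + style.blend * target_pose[joint]
        # ...then rate-limit relative to the real or simulated state.
        step = desired - current_pose[joint]
        step = max(-style.max_step, min(style.max_step, step))
        cmds[joint] = current_pose[joint] + step
    return cmds

sample = {"shoulder": 0.0, "elbow": 0.5}  # one frame of the sample motion
target = {"shoulder": 1.0, "elbow": 0.0}  # target kinematic state
state = {"shoulder": 0.1, "elbow": 0.4}   # real or simulated robot state
print(actuator_commands(sample, target, state, "snappy"))
```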
41 - Education, entertainment, sporting and cultural services
Goods & Services
Entertainment services in the nature of a program series provided through cable television, broadcast television, internet, video-on-demand, and through other forms of transmission media; providing online information in the field of entertainment concerning television programs.
SYSTEMS AND METHODS TO DEFINE A CREATIVE ASSET FOR PRESENTATION WITH STREAMING VIDEO CONTENT THROUGH DIFFERENT VERSIONS OF A STREAMING VIDEO PLAYER APPLICATION
Systems and methods to generate creative asset files for rendering creative assets with streaming video content through different versions of a streaming video player application. Exemplary implementations may: receive an arrangement of items of content for a creative asset, determine a creative asset specification that conveys the arrangement of the items of content, analyze the creative asset specification to verify that the creative asset specification is compliant with a player-compliant schema for encoding media, generate a creative asset file that facilitates rendering of the creative asset by the different versions of the streaming video player application for presentation with streaming video, generate multiple renderings of the creative asset from the creative asset file that reflect expected renderings to be generated by the different versions of the streaming video player application, effectuate presentation of the multiple renderings through a design user interface, or other exemplary implementations.
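A minimal sketch of the specification-validation step above, using a hand-rolled schema check rather than the disclosure's player-compliant schema, whose fields are not public. All field names here are assumptions.

```python
import json

PLAYER_SCHEMA = {       # assumed minimal "player-compliant" schema
    "version": str,
    "items": list,      # each item: {"type": str, "x": int, "y": int}
}

def validate_spec(spec):
    for field, ftype in PLAYER_SCHEMA.items():
        if not isinstance(spec.get(field), ftype):
            raise ValueError(f"spec field {field!r} must be {ftype.__name__}")
    for item in spec["items"]:
        if not {"type", "x", "y"} <= item.keys():
            raise ValueError(f"incomplete item: {item}")
    return True

def write_asset_file(spec, path):
    validate_spec(spec)  # only compliant specs become asset files
    with open(path, "w") as f:
        json.dump(spec, f, indent=2)

spec = {"version": "1.0",
        "items": [{"type": "image", "x": 0, "y": 0},
                  {"type": "text", "x": 10, "y": 20}]}
write_asset_file(spec, "creative_asset.json")
print("wrote creative_asset.json")
```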
Techniques relating to streaming video are disclosed. These techniques include identifying one or more streaming video sessions for one or more users based on a plurality of events relating to streaming video for the one or more users. The techniques further include storing data for the one or more streaming video sessions in an electronic database, based on the plurality of events, identifying a plurality of metadata relating to the events, and determining, based on a threshold value, a time to store at least a portion of the plurality of metadata in the electronic database, the time occurring after the storing of the data for the one or more streaming video sessions. The techniques further include responding to a query for metrics relating to the one or more streaming video sessions by aggregating at least a portion of the stored data.
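A minimal sessionization sketch of the first step above: group a user's playback events into sessions whenever the gap between consecutive events exceeds a threshold. The 30-minute gap and the event shape are assumptions; the disclosure's threshold also drives when metadata is persisted, which this sketch only hints at via metadata_due_at.

```python
SESSION_GAP = 30 * 60  # seconds of inactivity that closes a session (assumed)

def sessionize(events):
    """events: list of (timestamp, payload) sorted by timestamp."""
    sessions, current = [], []
    for ts, payload in events:
        if current and ts - current[-1][0] > SESSION_GAP:
            sessions.append(current)  # gap exceeded: close the session
            current = []
        current.append((ts, payload))
    if current:
        sessions.append(current)
    return sessions

def metadata_due_at(session, delay=SESSION_GAP):
    # Store session metadata only after the session has surely ended.
    return session[-1][0] + delay

events = [(0, "play"), (600, "pause"), (5000, "play"), (5100, "stop")]
sessions = sessionize(events)
print(len(sessions), [metadata_due_at(s) for s in sessions])  # 2 [2400, 6900]
```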