A data encoding method is provided, the method comprising the steps of: acquiring a block of image data to be encoded; selectively encoding a flag when an encoded data stream corresponding to the block of image data should be prepared for machine processing; converting the block of image data to a 4:4:4 format on the basis of the flag; and encoding the converted block of image data in order to generate the encoded data stream. A data encoding apparatus is also provided. Furthermore, a corresponding data decoding method and data decoding apparatus are also provided.
H04N 19/12 - Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
H04N 19/157 - Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
H04N 19/17 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
H04N 19/46 - Embedding additional information in the video signal during the compression process
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
H04N 19/85 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
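The flag-gated 4:4:4 conversion in the abstract above can be illustrated with a minimal sketch; the nearest-neighbour chroma upsampling, the dictionary block layout and all function names below are illustrative assumptions, not the patented encoder:

```python
def upsample_chroma(chroma, width, height):
    """Nearest-neighbour upsampling of a quarter-resolution (4:2:0)
    chroma plane to the full luma resolution (4:4:4)."""
    return [[chroma[y // 2][x // 2] for x in range(width)]
            for y in range(height)]

def encode_block(block, machine_processing):
    """Encode a flag and, when it is set, convert the block to a 4:4:4
    format by upsampling both chroma planes to the luma resolution."""
    flag = 1 if machine_processing else 0
    if flag:
        height, width = len(block["Y"]), len(block["Y"][0])
        block = {
            "Y": block["Y"],
            "Cb": upsample_chroma(block["Cb"], width, height),
            "Cr": upsample_chroma(block["Cr"], width, height),
        }
    return flag, block
```

A decoder would read the flag first and only then decide whether the chroma planes carry full or quarter resolution.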
2.
METHOD FOR DATA SEGMENTATION OF A DATA TRANSMISSION AND RELATED DEVICES
A method for data segmentation of a data transmission is disclosed. The method is performed by a radio network node. The method comprises obtaining a device configuration comprising one or more device parameters of a wireless device including a first device parameter indicative of a data segmentation capability of the wireless device. The method comprises transmitting, to the wireless device, a data transmission configuration comprising one or more data transmission parameters indicative of data segmentation for the wireless device. The data transmission configuration is based on the first device parameter.
A filter assembly that includes a flange assembly and a neck assembly. The flange assembly has a first flange component and a second flange component, wherein at least a portion of the first flange component and the second flange component are configured to seat within a feed hole of a filter plate. The neck assembly has a first neck component configured to be coupled to a first filter cloth and mountable on a first side of the filter plate and a second neck component configured to be coupled to a second filter cloth and mountable on a second, opposed side of the filter plate. Further, at least a portion of the first neck component is configured to be coupled to the first flange component in the feed hole and at least a portion of the second neck component is configured to be coupled to the second flange component in the feed hole.
B01D 25/164 - Chamber-plate presses, i.e. the sides of the filtering elements being clamped between two successive filtering plates
B01D 25/176 - Filter presses, i.e. of the plate or plate and frame type attaching the filter element to the filter press plates, e.g. around the central feed hole in the plates
A machine learning (ML) model is used to automatically generate haptics signals to actuate a haptics generator in a computer game controller. The haptics signal is generated based on audio from the game that is input to the ML model. Current controller operation and other parameters may also be input to the ML model to modify the haptics signal. Category importance and frequency may be applied to the loss function of the ML model to further refine haptics generation. Post-filtering may be used to reduce false positives. Game genre may be used to reduce the number of candidate haptics signals for generation.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
A63F 13/285 - Generating tactile feedback signals via the game input device, e.g. force feedback
A63F 13/42 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
A device comprising: an image capturing unit configured to capture an image; and circuitry configured to: analyse the captured image to identify the presence of a human appendage in the captured image; authenticate the identified appendage; and in the event of a positive authentication, output a control signal based upon the pose of the authenticated appendage.
A data processing apparatus comprising circuitry configured to: receive input video and audio data; modify a region of a video frame of the input video data to generate a modified video frame, the region being modified based on an audio sample of the input audio data temporally associated with the video frame; and output modified video data representing the modified video frame.
G11B 27/031 - Electronic editing of digitised analogue information signals, e.g. audio or video signals
G11B 27/28 - Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
7.
SENSOR DEVICE AND METHOD FOR OPERATING A SENSOR DEVICE
A sensor device (100) comprises an imaging unit (110) that is configured to capture a plurality of raw images of a scene with different sensitivities for brightness and an image processing unit (120) that is configured to generate high dynamic range, HDR, images from the raw images. The imaging unit (110) is configured to capture a first set (112) of raw images of the scene with different sensitivities for brightness using a first exposure period (t1) and to capture a second set of raw images (114) of the scene with different sensitivities for brightness using a second exposure period (t2). The image processing unit (120) is configured to generate a first HDR image (122) from the first set (112) of raw images and a second HDR image (124) from the second set (114) of raw images. The first exposure period (t1) is longer than the second exposure period (t2). The highest sensitivity for brightness of the raw images of the first set is higher than the highest sensitivity for brightness of the raw images of the second set.
H04N 25/589 - Control of the dynamic range involving two or more exposures acquired sequentially, e.g. using the combination of odd and even image fields with different integration times, e.g. short and long exposures
H04N 25/59 - Control of the dynamic range by controlling the amount of charge storable in the pixel, e.g. modification of the charge conversion ratio of the floating node capacitance
H04N 23/73 - Circuitry for compensating brightness variation in the scene by influencing the exposure time
H04N 23/741 - Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
H04N 23/80 - Camera processing pipelines; Components thereof
H04N 23/951 - Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
8.
ADJUSTING COMMUNICATIONS INCLUDING MESSAGE TIME SHIFTING AND SUMMARIZATION FOR OPTIMUM PRESENTATION TO PLAYER
A method for managing incoming communications for a player of a video game is provided, including: during gameplay of the video game by the player, receiving a plurality of communications for the player; responsive to the communications being received at substantially similar times, analyzing the communications to determine their content; and, based on the content of the communications, rendering a first one of the communications to the player in substantially real time, and delaying rendering of a second one of the communications to the player.
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
A63F 13/87 - Communicating with other players during game play, e.g. by e-mail or chat
There is provided a method of controlling displayed content based on user input. The method comprises receiving one or more user inputs from a peripheral device operated by a user, the user inputs comprising one or more involuntary user inputs; modifying the user inputs to reduce a contribution of the involuntary user inputs; and adapting the displayed content in dependence on the modified user inputs.
G06F 3/038 - Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
A63F 13/22 - Setup operations, e.g. calibration, key configuration or button assignment
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/02 - Input arrangements using manually operated switches, e.g. using keyboards or dials
G06F 3/0338 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of limited linear or angular displacement of an operating part of the device from a neutral position, e.g. isotonic or isometric joysticks
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
A data processing apparatus comprises allocation circuitry to allocate a graphical feature for a virtual environment to a respective layer from a plurality of respective layers in dependence upon one or more properties for the graphical feature, wherein the plurality of respective layers comprise a mobile layer and at least one of a mobile re-projection layer and a static re-projection layer, rendering circuitry to render graphical features allocated to respective different layers of the plurality of respective layers with a different rendering quality, and an output image generator to generate an image for display according to a viewpoint of a virtual camera, wherein the image comprises graphical features allocated to respective different layers of the plurality of layers.
An attribute inter-prediction framework using a motion search criterion, a proper reference, attribute projection and an in-loop filter is described herein. An alternative motion estimation strategy incorporates color distance into the motion search criterion, where geometry and attribute motion are jointly searched, leading to a single motion vector associated with each predicted block. The composed distance is the weighted sum of the color and geometry distances. With the composed distance, two new types of search criteria are used to estimate the motion vectors: using the nearest-neighbor composed distance, or using the minimum composed distance based on a search of the composed distances associated with each reference point within the search window. A dedicated attribute coding scheme uses a two-pass motion search, where the first pass is based on geometry only and the second pass targets eliminating temporal attribute redundancy. An in-loop filter can be applied to the projected attributes.
H04N 19/139 - Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
H04N 19/82 - Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
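The composed-distance motion search described in the abstract above can be sketched as follows; the equal color/geometry weighting, the flat point layout (x, y, z, r, g, b) and the function names are assumptions for illustration, not the framework's actual definitions:

```python
def composed_distance(p, q, w_color=0.5):
    """Weighted sum of geometry (Euclidean) and color distance between
    two points, each given as a (x, y, z, r, g, b) tuple."""
    geo = sum((a - b) ** 2 for a, b in zip(p[:3], q[:3])) ** 0.5
    col = sum((a - b) ** 2 for a, b in zip(p[3:], q[3:])) ** 0.5
    return w_color * col + (1.0 - w_color) * geo

def best_motion_vector(block_point, reference_points, candidates):
    """Pick the candidate motion vector whose displaced position has the
    minimum composed distance to its nearest reference point."""
    best_mv, best_d = None, float("inf")
    for mv in candidates:
        moved = tuple(c + d for c, d in zip(block_point[:3], mv)) + block_point[3:]
        d = min(composed_distance(moved, r) for r in reference_points)
        if d < best_d:
            best_mv, best_d = mv, d
    return best_mv, best_d
```

Because color enters the metric, a geometrically close but differently colored reference point no longer wins the search, which is the point of the joint criterion.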
Sony Interactive Entertainment Europe Limited (United Kingdom)
Inventor
Khan, Muhammad Umar Karim
Chadha, Aaron
Anam, Mohammad Ashraful
Andreopoulos, Ioannis
Abstract
A method of processing image data, comprising receiving, at a pre-processing artificial neural network, ANN, image data of one or more images, pre-processing the received image data at the pre-processing ANN to generate pre-processed image data of the one or more images, encoding and decoding, in accordance with an image or video codec, the pre-processed image data to generate decoded image data of the one or more images, and post-processing the decoded image data at a post-processing ANN to generate post-processed image data of the one or more images. The pre-processing ANN and the post-processing ANN are jointly trained in an end-to-end manner using a neural codec model arranged between the pre-processing ANN and the post-processing ANN, the neural codec model acting as a proxy for the image or video codec and comprising an ANN configured to emulate rate and distortion characteristics of the image or video codec.
H04N 19/85 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
H04N 19/147 - Data rate or code amount at the encoder output according to rate distortion criteria
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
H04N 19/192 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding the adaptation method, adaptation tool or adaptation type being iterative or recursive
H04N 19/436 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
13.
INTERRUPT NOTIFICATION PROVIDED TO COMMUNICATOR INDICATING PLAYER RECEPTIVENESS TO COMMUNICATION
A method for indicating receptiveness of a player of a video game to incoming communication is provided, including: monitoring gameplay activity occurring in the video game; monitoring activity of the player during the video game; using the monitored gameplay activity and the monitored activity of the player to determine a level of receptiveness of the player to incoming communication; rendering a visual indicator that is responsive to the level of receptiveness of the player.
A63F 13/537 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
This operation device includes a plurality of mutually separated and independent modules, each of which is connected to an information processing device. At least some of the plurality of modules include an operation member for accepting operation input from a user. Each of the plurality of modules includes at least one flat linking section provided with a linking mechanism for linking to other modules. A plurality of light-emitting units are arranged along the outer periphery of the flat linking sections. When any of the plurality of modules is to be linked, in response to an instruction from the information processing device, each of the two modules to be linked lights up at least some of the plurality of light-emitting units arranged on the flat linking sections to be linked.
A controller for an avalanche photodiode sensor, including circuitry configured to: control a latch circuit to store one or more least significant bits of a counter value of a counter circuit at the end of a first frame, wherein the counter circuit is configured to count exposure time intervals of a frame in which an avalanche photodiode pixel has generated a light detection event; enable readout of the one or more least significant bits from the latch circuit in a second frame following the first frame; and enable readout of one or more most significant bits of the counter value from the counter circuit in the second frame.
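The two-frame readout in the abstract above reduces to splitting the counter value into latched LSBs and counter-resident MSBs, which is plain bit masking and shifting; a minimal sketch with assumed names:

```python
def split_counter(counter_value, lsb_bits):
    """Split a counter value into the least significant bits that the
    latch circuit stores at the end of frame N and the most significant
    bits read out from the counter circuit in frame N+1."""
    lsb = counter_value & ((1 << lsb_bits) - 1)
    msb = counter_value >> lsb_bits
    return msb, lsb

def reassemble(msb, lsb, lsb_bits):
    """Recombine the two partial readouts into the original value."""
    return (msb << lsb_bits) | lsb
```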
This operation device includes a plurality of modules which are separated from and independent of each other. At least some of the plurality of modules include an operation member for receiving an operation input from a user. Each of the plurality of modules includes at least one connection plane part comprising a connection mechanism for connection with other modules.
A first communication device that is configured to communicate with a second communication device comprises circuitry configured to generate data units for transmission to the second communication device, a data unit comprising a preamble and a data portion, the preamble including one or more channel estimation sequences; map the generated data units onto a plurality of spatial streams; and transmit the generated data units. Generating the data units and transmitting the generated data units includes generating a first data unit including in its preamble a plurality of channel estimation sequences transmitted on the plurality of spatial streams, wherein the plurality of channel estimation sequences includes at least one channel estimation sequence per spatial stream; and generating one or more subsequent data units transmitted after the first data unit, wherein each of the subsequent data units includes in its preamble channel estimation sequences transmitted on a subset of the plurality of spatial streams, wherein the number of channel estimation sequences transmitted on the subset of the plurality of spatial streams is zero or more but less than the total number of spatial streams.
A technique to determine an optimum number of reserved bits for use in video encoding. The bits per coding unit are calculated as a function of bitrate, frame rate and frame size. Then, for each coding unit, N reserved-bits candidates are selected. For each test video, the number of packets per frame is identified and the average number of packets per frame is calculated. The best reserved-bits candidate out of the N candidates is identified, and then a best-fit linear model between the best reserved-bits candidate and the bits per coding unit is determined.
H04N 19/149 - Data rate or code amount at the encoder output by estimating the code amount by means of a model, e.g. mathematical model or statistical model
H04N 19/169 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
H04N 19/184 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being bits, e.g. of the compressed video stream
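The two calculations named in the abstract above can be sketched as follows: the bit budget per coding unit, and an ordinary least-squares fit standing in for the abstract's "best fitted linear model" (all names and the OLS choice are assumptions):

```python
def bits_per_coding_unit(bitrate_bps, frame_rate, coding_units_per_frame):
    """Average bit budget per coding unit from bitrate, frame rate and
    frame size (expressed here as coding units per frame)."""
    return bitrate_bps / (frame_rate * coding_units_per_frame)

def fit_linear(xs, ys):
    """Ordinary least-squares fit y = a*x + b, e.g. between bits per
    coding unit (x) and the best reserved-bits candidate (y)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b
```

Once fitted, the model lets the encoder predict a reserved-bits value directly from the bit budget without re-running the candidate search.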
22.
INFORMATION PROCESSING DEVICE, SYSTEM, INFORMATION PROCESSING METHOD, INFORMATION PROCESSING PROGRAM, AND COMPUTER SYSTEM
Provided is an information processing device including a detection section, a setting section, a counting section, an image generation section, and a calculation section. The detection section detects an object according to a first image obtained using a frame-based vision sensor. The setting section sets, in the first image, at least one region of interest including at least a portion of the object. The counting section counts the event volume, in a region of attention corresponding to the region of interest, of an event signal generated by an event-based sensor. The image generation section builds a second image according to the event signal in a case where a predetermined condition is satisfied by the event volume counted by the counting section. The calculation section calculates a motion vector of the region of attention in the second image.
A medical light source device includes a visible light source that emits normal light in a visible wavelength band, and a plurality of excitation light sources that emit a plurality of kinds of excitation light corresponding to a plurality of kinds of drugs, each of which emits fluorescence upon irradiation with the excitation light.
A61B 5/1459 - Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value using optical sensors, e.g. spectral photometrical oximeters invasive, e.g. introduced into the body by a catheter
A61B 5/145 - Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value
Authenticating a user account to access a remote service from a user device includes: capturing a machine-readable image code displayed on the user device using a mobile device, wherein the image code is provided by the remote service; extracting and calling a corresponding session code embedded in the image code; determining whether a phone token is in storage; one of: (a) sending the phone token to the remote service, if the phone token is in the storage; or (b) prompting a user to enter credentials including username and password and sending the entered credentials to the remote service, if the phone token is not in the storage; and receiving access to the remote service for the user device, when the session code is valid and either the credentials are valid or the phone token is confirmed.
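The token-or-credentials branch of the flow above can be sketched as below; the callable interfaces and parameter names are assumptions for illustration, not the actual remote-service API:

```python
def authenticate(session_code, storage, remote_service, prompt_credentials):
    """If a phone token is in storage, send it with the session code;
    otherwise prompt for username/password and send those instead."""
    token = storage.get("phone_token")
    if token is not None:
        return remote_service(session_code, token=token)
    creds = prompt_credentials()
    return remote_service(session_code, credentials=creds)
```

Access is then granted when the session code is valid and either the token is confirmed or the credentials check out, per the abstract.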
A filter assembly that includes a flange assembly and a neck assembly. The flange assembly has a first flange component and a second flange component, wherein at least a portion of the first flange component and the second flange component are configured to seat within a feed hole. The neck assembly has a first neck component configured to be coupled to a first filter cloth and mountable on a first side of the filter plate and a second neck component configured to be coupled to a second filter cloth and mountable on a second opposed side of the filter plate. Further, at least a portion of the first neck component is configured to be coupled to the first flange component in the feed hole and at least a portion of the second neck component is configured to be coupled to the second neck component in the feed hole.
A machine learning (ML) model is used to automatically generate haptics signals to actuate a haptics generator in a computer game controller. The haptics signal is generated based on audio from the game that is input to the ML model. Current controller operation and other parameters may also be input to the ML model to modify the haptics signal. Category importance and frequency may be applied to the loss function of the ML model to further refine haptics generation. Post-filtering may be used to reduce false positives. Game genre may be used to reduce the number of candidate haptics signals for generation.
A63F 13/285 - Generating tactile feedback signals via the game input device, e.g. force feedback
A63F 13/424 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving acoustic input signals, e.g. by using the results of pitch or rhythm extraction or voice recognition
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
27.
Display screen or portion thereof with transitional graphical user interface
An input system comprises an entertainment device comprising an input processor configured to receive inputs from a handheld controller; wherein the input processor is configured to map only a subset of possible phonemes to respective inputs received from the handheld controller, and wherein the subset of phonemes mapped to the respective inputs differs for different parts of a word or syllable.
A63F 13/424 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving acoustic input signals, e.g. by using the results of pitch or rhythm extraction or voice recognition
A63F 13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
G06F 3/023 - Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
An input system comprises a spoken input processor configured to receive spoken inputs from a microphone operably coupled to the input system; a physical input processor configured to receive physical inputs from a peripheral device operably coupled to the input system, the physical inputs having a timing relative to the spoken inputs; and a speech recognition processor configured to recognise speech from the spoken inputs; wherein the speech recognition processor uses at least some of the received physical inputs, and their timing relative to the spoken inputs, as part of the speech recognition process.
A system for processing an audio signal received from a transducer during a field recording session, the system comprising: receiving circuitry configured to receive, from a user, sound source data indicating a desired sound source to be recorded during the field recording session; audio input circuitry configured to receive an audio signal from the transducer; determining circuitry configured to determine whether the audio signal comprises a desired sound corresponding to the desired sound source indicated in the sound source data; and output circuitry configured to output an indication for indicating whether the audio signal comprises the desired sound.
A computer implemented method for rendering a scene on a video gaming system comprising a graphics processing unit, the method comprising: determining a required shader for rendering the scene; determining that the required shader is not present within a shader memory, usable by the graphics processing unit; selecting a replacement shader from a plurality of loaded shaders present in the shader memory based on a computed similarity metric, the similarity metric defining the similarity of a rendering output of a loaded shader to a rendering output of the required shader; rendering the scene using the replacement shader.
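The replacement-shader selection described above reduces to an argmax over a similarity metric across the loaded shaders; a minimal sketch in which the metric itself is an assumed placeholder (the patent does not specify how output similarity is computed):

```python
def select_replacement(required, loaded_shaders, similarity):
    """Choose, from the shaders already resident in shader memory, the
    one whose rendering output is most similar to the required shader's,
    as scored by the supplied similarity metric."""
    return max(loaded_shaders, key=lambda s: similarity(required, s))
```

Here the metric is a longest-common-prefix score over shader names purely for demonstration; a real system would compare rendered outputs.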
A method of rendering graphics for a video game, which comprises: assigning, to each of a plurality of objects in a frame to be rendered, an importance level of the object; determining, based on the importance levels, one or more rendering quality resource restrictions to be applied to at least some of the plurality of objects; and rendering the frame, wherein the plurality of objects are rendered in accordance with the rendering quality resource restrictions.
A computer implemented method for compressing a refractive index curve for use in spectral rendering, the method comprising: providing a refractive index curve for a material to be rendered within a virtual environment, the refractive index curve encoding the variation of the refractive index with frequency; applying a Fourier transform to the refractive index curve to obtain a time domain refractive index curve; applying a compression technique to the time domain curve to obtain a compressed time domain curve; and storing the compressed time domain curve for use in spectral rendering. Since most of the relevant variation in a time domain refractive index curve is confined to a small part of the curve, compression techniques may be readily applied to reduce the amount of data that needs to be stored, freeing up memory for other memory-intensive aspects of the graphics rendering pipeline.
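The transform-then-truncate idea can be illustrated with a naive discrete Fourier transform; keeping only the largest-magnitude coefficients is an assumed stand-in for the abstract's unspecified compression technique, and a smooth sampled curve stands in for a real refractive index curve:

```python
import cmath

def dft(samples):
    """Naive discrete Fourier transform of a sampled curve."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(coeffs):
    """Inverse transform back to the sampled curve."""
    n = len(coeffs)
    return [sum(coeffs[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def compress(samples, keep):
    """Zero all but the `keep` largest-magnitude coefficients: the
    energy of a smooth curve concentrates in few terms, so little
    information is lost while most storage can be dropped."""
    coeffs = dft(samples)
    order = sorted(range(len(coeffs)), key=lambda k: -abs(coeffs[k]))
    kept = set(order[:keep])
    return [c if k in kept else 0j for k, c in enumerate(coeffs)]
```

Only the kept (index, coefficient) pairs would need to be stored, which is where the memory saving comes from.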
Provided is an image display system (S) including a display control means (201) for displaying an image on a display panel of a head-mounted display (1) mounted on the head of a user (U). The head-mounted display (1) includes: a display panel; and a lens unit including a lens part for guiding image light emitted from a display area of the display panel to the pupil of the user and an extension lens part which is provided partially around the lens part and which extends the lens surface on which the image light is incident. The display control means (201) displays on the display panel a main image (A) to be displayed in an area that includes an effective area (A1) that corresponds to the lens part and is visible when the line of sight of the user (U) is directed straight ahead, and an extension image (C) to be displayed in an area that includes at least a portion of an extension effective area (C12) that corresponds to the extension lens part.
A multiple link device (MLD) for performing a wireless communications protocol to provide dynamic power saving (PS) on a wireless local area network (WLAN). The active (non-dozing) state comprises both a full-capability higher-powered state and a lower-powered state in which the device has less capability. Stations of an MLD can be directed to listen using the lower-power active mode, yet a transition to the higher-power active mode can be made when the need arises to transmit frames with another MLD.
A data processing apparatus comprising circuitry configured to: obtain data representing a boundary representing a virtual object in a virtual space; determine a signed distance field of the boundary in the virtual space; determine, by comparing a value of the signed distance field at each of a plurality of points of the virtual space with an output of one or more candidate signed distance functions at each of the plurality of points of the virtual space, a representative signed distance function representing the signed distance field; and output data representing the representative signed distance function.
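The candidate-comparison step in the abstract above can be sketched as a least-squares selection over sample points; sphere SDFs are used purely as example candidates, and all names are assumptions:

```python
def sphere_sdf(radius):
    """Signed distance function of an origin-centred sphere: negative
    inside, zero on the boundary, positive outside."""
    return lambda p: (p[0] ** 2 + p[1] ** 2 + p[2] ** 2) ** 0.5 - radius

def best_candidate(field_samples, candidates):
    """Pick the candidate SDF whose values best match the sampled signed
    distance field (sum of squared errors over the sample points)."""
    def error(f):
        return sum((f(p) - d) ** 2 for p, d in field_samples)
    return min(candidates, key=error)
```

Outputting the winning function instead of the full field is what makes the representation compact.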
An input system comprises an entertainment device comprising an input processor configured to receive inputs from a handheld controller; wherein the input processor is configured to map only a subset of the letters of the alphabet to respective inputs received from the handheld controller; and wherein the subset of letters is selected based on its ability to discriminate between words in a corpus.
A63F 13/42 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
G06F 3/023 - Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
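The entry above maps only a subset of the alphabet to controller inputs, with the subset chosen for its ability to discriminate between words in a corpus. One way to sketch such a selection in Python; the wildcard-signature model and the exhaustive search over subsets are illustrative assumptions, not the patented method:

```python
from itertools import combinations

def signature(word, letters):
    """Project a word onto the chosen letter subset; letters outside the
    subset all collapse to the same wildcard symbol."""
    return tuple(ch if ch in letters else "*" for ch in word)

def discrimination(corpus, letters):
    """Number of distinct signatures the subset yields over the corpus;
    higher means fewer words are confused with one another."""
    return len({signature(w, letters) for w in corpus})

def best_subset(corpus, alphabet, k):
    """Exhaustively pick the k-letter subset that discriminates best
    (fine for small alphabets; a greedy search would scale better)."""
    return max(combinations(sorted(alphabet), k),
               key=lambda s: discrimination(corpus, set(s)))

corpus = ["run", "ran", "rim", "rat", "rot"]
subset = best_subset(corpus, set("".join(corpus)), 2)
```

Each selected letter would then be bound to one controller input, leaving the remaining inputs free.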
A multiple link device (MLD) for performing a wireless communications protocol to provide dynamic power saving (PS) on a wireless local area network (WLAN). The Active (non-dozing) state comprises both a full-capability, higher-powered state and a lower-powered state in which the device has less capability. Stations of an MLD can be directed to listen using the lower-power active mode, yet a transition to the higher-power active mode can be made when the need arises to transmit frames with another MLD.
An audio reproduction device (100) according to the present disclosure includes a reception unit (132) that receives, from a user, a request for reproducing a second audio signal that is an audio signal different from a first audio signal that is an original audio signal of content, and a reproduction unit (133) that localizes the second audio signal at an arbitrary position in an acoustic space including an azimuth direction and a height direction and outputs the first audio signal and the second audio signal in parallel when the reception unit receives the request.
An image sensor assembly includes a plurality of pixel circuits, wherein each pixel circuit is configured to convert incident radiation into a photocurrent and integrate the photocurrent. Each pixel circuit includes a pixel mode selector and an amplifier transistor. The pixel mode selector switches between a first state and a second state. The amplifier transistor outputs a local pixel supply voltage to a local pixel supply line when the pixel mode selector is in the first state in a regulator period. The amplifier transistor is connected between the local pixel supply line and a pixel signal line to output a pixel output voltage depending on the photocurrent on the pixel signal line when the pixel mode selector is in the second state in a selection period.
H04N 25/709 - Circuitry for control of the power supply
H04N 25/677 - Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response for non-uniformity detection or correction for reducing the column or line fixed pattern noise
41.
INFORMATION PROCESSING DEVICE, GAME VIDEO DISPLAY METHOD, AND PROGRAM
A video display unit 132 displays a game video in a first display region. A state value display unit 136 displays, in a second display region different from the first display region, information indicating the transition of a state value in the game. A seek bar display unit 134 displays a seek bar that represents the total time length of the game video by the length of the bar and that represents the playback position of the game video by the slider position on the bar. The state value display unit 136 also displays, in association with the bar, the information indicating the transition of the state value.
A method of interactive control comprises the steps of establishing connection with two or more controllers, comprising at least a first controller and a second controller, receiving inputs from the first controller to interact with a single-user application, initializing a control swap scheme, the scheme comprising selecting inputs from the first controller to provide to the single-user application as inputs from the first controller, and in response to a predetermined criterion, selecting inputs from the second controller to provide to the single-user application as inputs from the first controller.
A63F 13/22 - Setup operations, e.g. calibration, key configuration or button assignment
A63F 13/327 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers; using local area network [LAN] connections using wireless networks, e.g. Wi-Fi® or piconet
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an encoded video stream for transmitting to a mobile phone or a thin client
A63F 13/44 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment involving timing of operations, e.g. performing an action within a time slot
A63F 13/843 - Special adaptations for executing a specific game genre or game mode involving concurrently two or more players on the same game device, e.g. requiring the use of a plurality of controllers or of a specific view of game data for each player
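The control-swap scheme in the entry above can be sketched as a small input router: inputs from whichever controller is active are relabelled as controller-1 inputs, and a predetermined criterion triggers the swap. This is a minimal illustration under assumed names, not the patented implementation:

```python
class ControlSwap:
    """Routes inputs from one of two controllers to a single-user
    application, presenting them all as coming from controller 1."""

    def __init__(self, swap_criterion):
        self.swap_criterion = swap_criterion  # callable: () -> bool
        self.active = 1

    def route(self, controller_id, event):
        # Swap the active controller once the criterion is met.
        if self.swap_criterion():
            self.active = 2
        if controller_id != self.active:
            return None  # inputs from the inactive controller are dropped
        return ("controller-1", event)  # always labelled as controller 1

# With a criterion that is never met, controller 1 stays active.
router = ControlSwap(lambda: False)
```

The criterion could be a timer, a button combination, or an in-game event; the application never needs to know a second controller exists.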
A computer implemented method for rendering graphics data of a 3D scene on a video gaming system comprising a GPU, the method comprising: obtaining lightmap data of the 3D scene; identifying a plurality of scene wavelengths based on the lightmap data, wherein the plurality of scene wavelengths are wavelengths that meet a threshold contribution to the 3D scene; and rendering the scene based on the plurality of scene wavelengths.
An execution unit 112 executes an application on the basis of the operation of a user. An image generation unit generates a function selection image including an icon for realizing the function selected by the user, and a display processing unit 130 displays the function selection image. A procedure holding unit 116 holds a procedure for realizing the function associated with the icon. When a predetermined operation is performed on the icon in the displayed function selection image, a function realization unit 118 causes the execution unit 112 to execute the procedure held by the procedure holding unit 116 to realize the function associated with the icon.
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
A63F 13/87 - Communicating with other players during game play, e.g. by e-mail or chat
A63F 13/795 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for finding other players; Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for building a team; Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for providing a buddy list
48.
INTERACTIVE 3D CONTENT GENERATION AND SHARING ON VIDEO GAME MEDIA GALLERIES
Generating and sharing interactive three dimensional (3D) content/3D models captured from gaming sessions are described herein. The 3D content is able to be shared with mobile devices, televisions, gaming consoles, Virtual Reality (VR) devices or other devices. Since the content is rendered in 3D, a user is able to change the view direction, zoom in/out, and perform other functions. A framework enables video game media galleries to capture, view, edit, and share interactive, static or dynamic 3D media while keeping the structure of the original gaming assets inaccessible to the end-user.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an encoded video stream for transmitting to a mobile phone or a thin client
A63F 13/655 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition by importing photos, e.g. of the player
A63F 13/86 - Watching games played by other players
A method of operating a communications device configured to transmit signals to and/or to receive signals from an infrastructure equipment of a wireless communications network via a radio access interface between the communications device and the infrastructure equipment is provided. Here, the communications device is a sub-band full duplex, SBFD, capable communications device. The method comprises determining that the communications device is to perform a random access procedure with the infrastructure equipment, calculating an uplink transmission power at which to transmit a physical random access channel, PRACH, preamble in one or more PRACH occasions, ROs, to the infrastructure equipment as part of the random access procedure, wherein the communications device determines whether to apply an offset to the uplink transmission power dependent on whether at least one of the ROs is an SBFD RO which is contained within SBFD symbols of the radio access interface or a non-SBFD RO which is contained within non-SBFD uplink symbols of the radio access interface, and transmitting the PRACH preamble to the infrastructure equipment in the at least one RO using the calculated uplink transmission power.
H04W 52/36 - Transmission power control [TPC] using constraints in the total amount of available transmission power with a discrete range or set of values, e.g. step size, ramping or offsets
H04W 52/50 - TPC being performed in particular situations at the moment of starting communication in a multiple access environment
H04W 52/14 - Separate analysis of uplink or downlink
H04W 52/24 - TPC being performed according to specific parameters using SIR [Signal to Interference Ratio] or other wireless path parameters
H04W 52/48 - TPC being performed in particular situations during retransmission after error or non-acknowledgment
H04W 74/0833 - Random access procedures, e.g. with 4-step access
50.
CLOUD-BASED PLATFORM FOR REAL-WORLD EXPERIMENTATION DRIVEN GAME INCUBATION AT SCALE
A cloud-based platform significantly lowers the barrier to creating and testing gaming concepts. On the platform, game developers can publish and test a full game, or just a self-contained mission or activity, to allow maximum agility to test and get quick market feedback on a granular discrete set of gaming concepts.
A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
A63F 13/77 - Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
51.
DETECTING SUBTLE CONSUMER PREFERENCES WITH GRANULAR BROWSING BEHAVIORS ON CONSOLE/APP
Personalized experiences for a user are based on the input patterns of the user. Scrolling behavior on a touchscreen may be used to deliver personalized experiences. The point-by-point coarse scrolling data is aggregated and condensed on the user device being scrolled, and the condensed data is sent to a server for analysis to save bandwidth.
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor; with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
Provided is an apparatus comprising circuitry configured to identify a person in an input image. Further, the circuitry is configured to generate an output image based on the input image by an artificial neural network, ANN, the output image comprising a person based on the identified person of the input image. At least a first value of an attribute of the person from the input image is replaced by a second value of the attribute in the person in the output image.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
09 - Scientific and electric apparatus and instruments
41 - Education, entertainment, sporting and cultural services
Goods & Services
Downloadable video game software; downloadable computer game software; downloadable virtual reality game software; downloadable game software all relating to and in the field of video games and computer games Entertainment services, namely, providing temporary use of non-downloadable video games and computer games; entertainment services, namely, providing online video games and computer games; entertainment services, namely, providing information relating to computer games and video games via computer networks and global communication networks; virtual reality game services provided on-line from a computer network; entertainment services, namely, providing virtual environments in which users can interact for recreational, leisure, or entertainment purposes
09 - Scientific and electric apparatus and instruments
Goods & Services
Audio speakers; wireless speakers; loudspeakers; speakers for playing video games; speakers for use with computers; speakers for smartphones; downloadable game software; recorded game software; downloadable computer game software; recorded computer game software; downloadable computer application software for mobile phones and handheld computers, namely, software for communicating with other users, viewing content in the field of video game, and managing and controlling video game consoles; computer software; computer hardware; computer peripheral apparatus; chargers for electric batteries; wireless chargers; conductors, electric; computers and computer peripheral devices; power adapters; battery chargers for speakers for playing video games; battery chargers for speakers for use with computers; battery chargers for speakers for smartphones; wireless adapters for speakers; wireless adapters for speakers for playing video games; wireless adapters for speakers for use with computers; wireless adapters for speakers for smartphones; stands adapted for speakers; stands for speakers for playing video games; stands for speakers for use with computers; stands for speakers for smartphones; audio cables; audio adapter cables
57.
OVERLAPPING RENDERING, STREAMOUT, AND DISPLAY AT A CLIENT OF RENDERED SLICES OF A VIDEO FRAME
A method of cloud gaming is disclosed. The method including receiving an encoded video frame at a client, wherein a server executes an application to generate a rendered video frame which is then encoded at an encoder at the server as the encoded video frame, wherein the encoded video frame includes one or more encoded slices that are compressed. The method including decoding the one or more encoded slices at a decoder of the client to generate one or more decoded slices. The method including rendering the one or more decoded slices for display at the client. The method including beginning to display the one or more decoded slices that are rendered before the one or more encoded slices are fully received at the client.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an encoded video stream for transmitting to a mobile phone or a thin client
A63F 13/335 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers; using wide area network [WAN] connections using Internet
A63F 13/358 - Adapting the game course according to the network or server load, e.g. for reducing latency due to different connection speeds between clients
A63F 13/44 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment involving timing of operations, e.g. performing an action within a time slot
G07F 17/32 - Coin-freed apparatus for hiring articles; Coin-freed facilities or services for games, toys, sports, or amusements
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
H04L 67/1095 - Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
H04L 67/131 - Protocols for games, networked simulations or virtual reality
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/242 - Synchronization processes, e.g. processing of PCR [Program Clock References]
H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware
H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
H04N 21/478 - Supplemental services, e.g. displaying phone caller identification or shopping application
H04N 21/8547 - Content authoring involving timestamps for synchronizing content
This electronic device comprises: a first antenna 20 and a second antenna 30 that each perform wireless communications; and a conductive member 40 that is electrically connected to the ground of a circuit built into the electronic device and has a rod-shaped portion of which the tip is an open end that is not electrically connected to another conductive member, wherein the conductive member 40 having the rod-shaped portion is disposed at an intermediate position between the first antenna 20 and the second antenna 30 such that the rod-shaped portion extends in a direction intersecting a direction in which the first antenna 20 and the second antenna 30 are connected.
Artificial intelligence (AI) models are disclosed to customize audio video (AV) content, such as video game content, based on user-provided content filtering parameters. Accordingly, in one aspect an apparatus may include at least one processor system programmed with instructions to access data related to first AV content. The at least one processor system may also be programmed with instructions to execute a model to identify, from the data, one or more aspects of the first AV content that have been indicated for obfuscation via end-user input. The at least one processor system may also be programmed with instructions to, responsive to the identification, take one or more actions to obfuscate the one or more aspects of the first AV content. Thus, the first AV content may be edited and tailored to particular users in light of their personal content sensitivities.
H04N 21/454 - Content filtering, e.g. blocking advertisements
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
G06V 20/40 - Scenes; Scene-specific elements in video content
Artificial intelligence (AI) models are disclosed to customize audio video (AV) content, such as video game content, based on user-provided content filtering parameters. Accordingly, in one aspect an apparatus may include at least one processor system programmed with instructions to access data related to first AV content. The at least one processor system may also be programmed with instructions to execute a model to identify, from the data, one or more aspects of the first AV content that have been indicated for obfuscation via end-user input. The at least one processor system may also be programmed with instructions to, responsive to the identification, take one or more actions to obfuscate the one or more aspects of the first AV content. Thus, the first AV content may be edited and tailored to particular users in light of their personal content sensitivities.
A computer simulation controller changes the texture of its surface dynamically over time. Textures may include bumpy, rough, spiky, smooth, rippled, etc., according to received data that not only conveys the textures themselves but also the location of the textures on the surface of the controller. Similarly, different areas of the controller can change to different temperatures according to received data that not only indicates the temperatures but also their spatial dependency on the controller. Information regarding these two haptic effects may be combined in a single data structure, with texture and temperature both expressed at the same time, together with their changes over time.
Techniques for managing snapshot storage on computer game servers use a hybrid filesystem which combines the best parts of raw disk access and a filesystem. The format is defined so that storage is not limited to the restricted feature set of legacy but commonly supported filesystems. Storage “blocks” are allocated in an optimal manner. Because the hybrid filesystem has access to low-level storage information, certain currently impossible features such as storage redundancy or data striping across disks can be implemented. Flexible metadata can be stored that can be safely used by multiple systems simultaneously, or used to make live updates to content, without having to shut down one or more systems.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an encoded video stream for transmitting to a mobile phone or a thin client
Disclosed herein is a computer-implemented method for generating a set of graphics shaders for rendering a required set of graphical components, comprising: providing a first set of graphics shaders, wherein each graphics shader in the first set is configured to render a different graphical component; and combining a plurality of the graphics shaders in the first set to produce a second set of graphics shaders, such that the second set comprises fewer graphics shaders than the first set, wherein the plurality of graphics shaders are combined into the second set such that a measured property of the second set meets a predetermined criterion.
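The shader-combination step above, merging shaders from the first set until a measured property of the resulting set meets a predetermined criterion, can be sketched as greedy pairwise merging. Here each shader is modelled as a set of the graphical components it renders, and `cost`/`budget` stand in for the measured property and criterion; all names are illustrative assumptions, not the patented algorithm:

```python
def combine_shaders(shaders, cost, budget):
    """Greedily merge shader pairs while the measured cost of the
    resulting set stays within budget. Each 'shader' is modelled as a
    frozenset of the graphical components it renders; merging is set
    union, and `cost` measures a candidate set of shaders."""
    current = [frozenset([s]) for s in shaders]
    merged = True
    while merged and len(current) > 1:
        merged = False
        for i in range(len(current)):
            for j in range(i + 1, len(current)):
                candidate = ([current[k] for k in range(len(current))
                              if k not in (i, j)]
                             + [current[i] | current[j]])
                if cost(candidate) <= budget:
                    current = candidate
                    merged = True
                    break
            if merged:
                break
    return current

# Example criterion: no combined shader may render more than 2 components.
merged_set = combine_shaders(["a", "b", "c", "d"],
                             cost=lambda s: max(len(g) for g in s),
                             budget=2)
```

In practice the cost function might measure compiled shader size or register pressure rather than component count, but the merge loop is the same shape.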
Generating and sharing interactive three dimensional (3D) content/3D models captured from gaming sessions are described herein. The 3D content is able to be shared with mobile devices, televisions, gaming consoles, Virtual Reality (VR) devices or other devices. Since the content is rendered in 3D, a user is able to change the view direction, zoom in/out, and perform other functions. A framework enables video game media galleries to capture, view, edit, and share interactive, static or dynamic 3D media while keeping the structure of the original gaming assets inaccessible to the end-user.
09 - Scientific and electric apparatus and instruments
Goods & Services
Audio speakers; wireless speakers; loudspeakers; speakers for playing video games; speakers for use with computers; speakers for smartphones; downloadable game software; recorded game software; downloadable computer game software; recorded computer game software; downloadable computer application software for mobile phones and handheld computers, namely, software for communicating with other users, viewing content in the field of video game, and managing and controlling video game consoles; computer software; computer hardware; computer peripheral apparatus; chargers for electric batteries; wireless chargers; conductors, electric; computers and computer peripheral devices; power adapters; battery chargers for speakers for playing video games; battery chargers for speakers for use with computers; battery chargers for speakers for smartphones; wireless adapters for speakers; wireless adapters for speakers for playing video games; wireless adapters for speakers for use with computers; wireless adapters for speakers for smartphones; stands adapted for speakers; stands for speakers for playing video games; stands for speakers for use with computers; stands for speakers for smartphones; audio cables; audio adapter cables.
Tracking and displaying personalized views of participants including: a plurality of pairs of active shutter glasses to be worn by the participants and synchronized to an LED screen, wherein a shutter for each eye of each pair of the active shutter glasses is configured to allow light to transmit to each eye only during a correct sub-frame and block the light during all other sub-frames; a motion capture system to detect positions of the participants by tracking the plurality of pairs of active shutter glasses to generate tracking data; a plurality of render nodes coupled to the LED screen; and a central controller to receive the tracking data to trigger discrete render processes of the plurality of render nodes for the unique perspective of each participant, wherein the triggered discrete render processes render and transmit video in parallel to display video streams as sub-frames on the LED screen.
H04N 13/368 - Image reproducers using viewer tracking for two or more viewers
G09G 3/20 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
G09G 3/32 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix using controlled light sources using electroluminescent panels semiconductive, e.g. using light-emitting diodes [LED]
H04N 13/341 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using temporal multiplexing
The present disclosure relates to an image decoding device, and a corresponding method, capable of more accurately recognizing the performance necessary for decoding. Coded data of image data and decoding load definition information defining the magnitude of the load of a decoding process for a partial region of an image of the image data are acquired; decoding of the acquired coded data is controlled based on the acquired decoding load definition information; and the acquired coded data is decoded according to that control. The present disclosure can be applied to an information processing device such as an image coding device that scalably codes image data or an image decoding device that decodes encoded data obtained by scalably coding image data.
H04N 19/33 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the spatial domain
H04N 19/43 - Hardware specially adapted for motion estimation or compensation
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
H04N 21/462 - Content or additional data management e.g. creating a master electronic program guide from data received from the Internet and a Head-end or controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
A fluorescence intensity calculating apparatus includes a measuring section and a calculating section. The measuring section is configured to receive fluorescences generated from plural fluorescent dyes, having fluorescence wavelength bands overlapping one another, that are excited by radiating light onto a microparticle multiply-labeled with the plural fluorescent dyes, using photodetectors which correspond to different received-light wavelength bands, respectively, and whose number is larger than the number of fluorescent dyes, and to obtain measured spectra by collecting detected values from the photodetectors. The calculating section is configured to approximate the measured spectra by a linear sum of single-dyeing spectra obtained from the microparticle individually labeled with each of the fluorescent dyes, thereby calculating the intensities of the fluorescences generated from the respective fluorescent dyes.
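The calculating section above performs what is commonly called spectral unmixing: approximating a measured spectrum by a linear sum of single-dye reference spectra, with the coefficients giving the per-dye intensities. A minimal least-squares sketch with made-up spectra (the numbers are illustrative, not from the patent):

```python
import numpy as np

# Hypothetical single-dye reference spectra (detector channels x dyes);
# each column is the spectrum of one dye measured alone.
single_dye = np.array([[10.0, 1.0],
                       [ 6.0, 4.0],
                       [ 2.0, 9.0],
                       [ 1.0, 5.0]])

# Hypothetical measured spectrum of a doubly-labelled particle,
# constructed here as 2x dye 1 plus 3x dye 2.
true_intensity = np.array([2.0, 3.0])
measured = single_dye @ true_intensity

# Least-squares unmixing: find the intensities whose linear combination
# of single-dye spectra best approximates the measured spectrum.
intensity, *_ = np.linalg.lstsq(single_dye, measured, rcond=None)
```

Having more detector channels than dyes, as the abstract requires, makes the system overdetermined, so the least-squares fit is well posed even with noisy measurements.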
A block set includes a smallest size block that has a cubic shape, with one side exceeding 2.5 cm, and a plurality of blocks that have a larger volume than the smallest size block, are circumscribed by a rectangle with a minimum side exceeding 2.5 cm, and have the same height from a bottom surface to a top surface. The smallest size block and the blocks each have a plurality of magnets that are disposed on lateral surfaces with different poles facing outward.
A63H 33/04 - Building blocks, strips or similar building parts
G09B 1/38 - Manually- or mechanically-operated educational appliances using elements forming or bearing symbols, signs, pictures, or the like which are arranged or adapted to be arranged in one or more particular ways comprising elements to be used without a special support the elements being connectible magnetically
There is provided a game set including a plurality of pieces of p types, with at least n pieces of each type, and a board. On a surface of the board, guide images are formed so as to be arranged on lattice points of r rows and c columns and give guidance on placement locations of the pieces. The number p of types is greater than or equal to r or c, whichever is greater. The number n of pieces of each type is greater than or equal to r or c, whichever is smaller. The at least p×n pieces have a plurality of faces that are distinguishable from one another.
A computer-implemented method for animating a model in a video game, the method comprising: obtaining an activity status of the model; obtaining location information of the model, the location information comprising an in-game distance between the model and a viewpoint of the user; selecting, from a database of animations and based on the activity status and the location information, a first animation corresponding to the activity status of the model, the database of animations comprising a plurality of animations corresponding to the activity status of the model, wherein the plurality of animations are animations of different complexity; and determining a pose of the model in a scene using the selected first animation. The method prevents unnecessarily complex animations from being used and so increases the processing and resource efficiency of animating a model in a video game.
G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
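The selection step in the entry above, choosing an animation of appropriate complexity from the activity status and viewing distance, can be sketched as a lookup over distance tiers. The tier thresholds and animation names below are illustrative assumptions, not taken from the patent:

```python
def select_animation(animations, activity, distance):
    """Pick the most complex animation for the model's activity whose
    tier is allowed at this viewing distance. `animations` maps
    activity -> list of (max_distance, animation_name), ordered from
    most to least complex."""
    for max_distance, name in animations[activity]:
        if distance <= max_distance:
            return name
    # Beyond every tier's range: fall back to the simplest animation.
    return animations[activity][-1][1]

# Hypothetical tiers: full-skeleton animation up close, cheaper ones beyond.
animations = {"walk": [(10.0, "walk_full"),
                       (50.0, "walk_medium"),
                       (float("inf"), "walk_billboard")]}
```

Distant models thus never pay for high-complexity animation, which is the efficiency gain the abstract claims.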
75.
AUTOMATIC CINEMATIC MODE FALLBACK FOR HIGH LATENCY CONNECTIONS DURING XR STREAMING
Techniques for streaming (and non-streaming) XR applications that, when high latency is detected, automatically and naturally falls back on a cinematic experience in VR. For example, the game may be shown on a movie theater screen in a locally rendered environment where the game is played.
A63F 13/358 - Adapting the game course according to the network or server load, e.g. for reducing latency due to different connection speeds between clients
G06T 19/00 - Manipulating 3D models or images for computer graphics
76.
SYSTEM AND METHOD FOR DELIVERING PERSONALIZED READING EXPERIENCE
To ensure that individuals gain as much as they can from a piece of text, metrics such as the position of the reader's gaze, pupil dilation, squinting behavior, and fixation durations are used to adapt the text to the reader's specific reading needs or habits and improve the reading experience. Machine learning may be used for this.
Personalized experiences for respective users sharing the same computer game system account are provided based on their interactions with games by analyzing deep signals such as browsing history, active times, playing history, and playing style during active sessions. To verify the detection, each user can be automatically prompted to confirm his or her identity, such as through notifications or on-screen prompts. Once verified, the experience can be tailored to each user, including content and artistic styles, to maximize the user's enjoyment and satisfaction.
A head-mounted display (1) mounted on the head of a user (U) comprises: a first display panel (10); and a lens unit (30) including a first lens part (31) that guides, to a pupil of the user (U), first image light (L1) emitted from a display region of the first display panel (10), and a first enlargement lens part (E11) that is provided partially around the first lens part (31) and that enlarges the lens surface through which the first image light (L1) enters.
This invention improves a cushioning function of a pulp mold cushioning material. A pulp mold cushioning material (10) may have: a storage recess (11) into which at least a part of a stored object (90) is fitted; a support wall (13) that constitutes a part of the storage recess (11) and is in contact with the stored object (90); and a first groove part (21) and a second groove part (22) that are recessed from the support wall (13). The first groove part (21) may have a first side wall (21b) extending from a first edge (13a) of the support wall (13) to a bottom part (21a) of the first groove part (21). The second groove part (22) may have a second side wall (22b) extending from a second edge (13b) of the support wall (13) to a bottom part (22a) of the second groove part (22). The second side wall (22b) may be longer than the first side wall (21b).
B65D 81/05 - Containers, packaging elements, or packages, for contents presenting particular transport or storage problems, or adapted to be used for non-packaging purposes after removal of contents specially adapted to protect contents from mechanical damage maintaining contents at spaced relation from package walls, or from other contents
The present technique relates to an image processing device and method which can suppress an increase in an operation time. The image processing device has: an encoding control unit which, upon encoding independently performed per slice for dividing a picture into a plurality of slices, controls whether or not to adopt for motion information a merge mode of merging a relevant region of an operation target with a surrounding region positioned around the relevant region, based on information of surrounding regions which belong to the relevant slice to which the relevant region belongs; and an encoding unit which encodes the relevant region in the merge mode or a mode other than the merge mode under control of the encoding control unit. The present disclosure is applicable to the image processing device.
H04N 19/52 - Processing of motion vectors by encoding by predictive encoding
H04N 19/119 - Adaptive subdivision aspects e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
H04N 19/174 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a slice, e.g. a line of blocks or a group of blocks
H04N 19/436 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
H04N 19/61 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
There is provided a band formed of an elastic material, the band including a plug body and a plurality of receptacle bodies. The plug body and the receptacle bodies are coupled and aligned via a coupling body.
A system comprising a server configured to stream video content comprising a plurality of rendered image frames to a client device and a geometry processing unit that performs a geometry pass on a rendering scene to generate geometric information. The geometry processing unit outputs one or more motion vectors. A transmitting unit transmits the one or more motion vectors to the client device. A lighting processing unit performs a lighting pass on the scene being rendered in dependence upon the generated geometric information. A residual calculation unit generates residual information of a difference between an image frame rendered on the basis of the geometry and the lighting passes and a preceding rendered image frame in the video content after being motion compensated by applying the one or more motion vectors to the preceding rendered image frame. The transmitting unit is configured to transmit the residual information to the client device.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an encoded video stream for transmitting to a mobile phone or a thin client
A63F 13/358 - Adapting the game course according to the network or server load, e.g. for reducing latency due to different connection speeds between clients
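The residual step described above can be illustrated in one dimension. This is a minimal sketch under assumed simplifications (integer pixel values, a single per-frame motion vector, edge-repeat padding); function names are invented, not from the disclosure.

```python
# 1-D sketch of motion-compensated residual generation: the server shifts the
# preceding rendered frame by the motion vector and transmits only the
# difference against the newly rendered frame.

def motion_compensate(prev_frame, motion_vector):
    """Shift the previous frame by the motion vector (edge pixels repeat)."""
    n = len(prev_frame)
    return [prev_frame[min(max(i - motion_vector, 0), n - 1)]
            for i in range(n)]

def residual(current_frame, prev_frame, motion_vector):
    """Difference between the rendered frame and the motion-compensated
    previous frame; this residual is what the server transmits."""
    predicted = motion_compensate(prev_frame, motion_vector)
    return [c - p for c, p in zip(current_frame, predicted)]

prev = [10, 20, 30, 40]
curr = [10, 10, 20, 30]  # scene content shifted right by one pixel
print(residual(curr, prev, 1))  # -> [0, 0, 0, 0]
```

When the motion vectors predict the new frame well, the residual is near zero and compresses far better than the frame itself, which is the bandwidth advantage of sending vectors plus residuals.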
There is provided a computer-implemented method of rendering an image using a point cloud, the method comprising: receiving a point cloud comprising a plurality of points, each point comprising an extent defined by a three-dimensional extent function, centred on a centre point; for each of a plurality of points in the point cloud, determining a bounding box enclosing the point; performing ray tracing from a camera view from which the image is to be rendered; determining one or more contributing points, which contribute to the color of a pixel in the image, by determining an intersection of a ray with one or more bounding boxes enclosing the contributing points; and determining the color of the pixel based on the contributing points.
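The bounding-box intersection step above can be sketched with the standard slab method for axis-aligned boxes. This is an illustrative reading, not the disclosed renderer: the cubic 1-unit half-extent around each centre point and all names are assumptions.

```python
# Illustrative sketch of finding contributing points: each point's extent is
# enclosed in an axis-aligned bounding box, and a ray from the camera is
# tested against each box using the slab method.

def ray_aabb_hit(origin, direction, box_min, box_max, eps=1e-9):
    """Slab-method ray vs axis-aligned bounding box intersection test."""
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < eps:
            if o < lo or o > hi:
                return False  # ray parallel to this slab and outside it
        else:
            t0, t1 = (lo - o) / d, (hi - o) / d
            t_near = max(t_near, min(t0, t1))
            t_far = min(t_far, max(t0, t1))
    return t_near <= t_far and t_far >= 0

def contributing_points(origin, direction, centres, half_extent=1.0):
    """Points whose bounding boxes the ray intersects contribute to the pixel."""
    hits = []
    for c in centres:
        box_min = tuple(x - half_extent for x in c)
        box_max = tuple(x + half_extent for x in c)
        if ray_aabb_hit(origin, direction, box_min, box_max):
            hits.append(c)
    return hits

# Ray along +z hits the box around (0, 0, 5) but misses the one at (10, 0, 5).
print(contributing_points((0, 0, 0), (0, 0, 1), [(0, 0, 5), (10, 0, 5)]))
```

Testing cheap boxes first and evaluating the full three-dimensional extent function only for hits is a common design choice for point-cloud ray tracing, and matches the two-stage structure the abstract describes.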
A method of operating a communications device to assist at least a first set of a plurality of radio access nodes of a wireless communications network to calibrate coherent joint transmissions (CJT) is provided. The method comprises transmitting a calibration assistance report. The calibration assistance report comprises information for determining one or more confidence metrics associated with one or more of an estimate of a time difference parameter and an estimate of a phase difference parameter for each of at least a subset of the radio access nodes in the first set. The time difference parameter for at least each radio access node in the subset depends at least in part on a time difference between a transmission chain and a reception chain of the radio access node. The phase difference parameter for at least each radio access node in the subset depends at least in part on a phase difference between the transmission chain and the reception chain of the radio access node.
H04B 17/21 - Monitoring or testing of receivers for calibration; Monitoring or testing of receivers for correcting measurements
H04B 7/024 - Co-operative use of antennas at several sites, e.g. in co-ordinated multipoint or co-operative multiple-input multiple-output [MIMO] systems
H04L 5/00 - Arrangements affording multiple use of the transmission path
A method of generating a training set for a machine learning model to upscale volumetric effect froxel grids comprises, for a source of input data for the training set, generating a low-resolution froxel grid for respective ones of a plurality of frames in sequence, the generating comprising time-averaging values contributing to the froxel grid; and assigning for a given frame in the sequence the corresponding generated low-resolution froxel grid as a source of input data. For a source of target data for the training set, the method comprises, at the given frame in the sequence, freezing the state of a scene that is being rendered; generating a high-resolution froxel grid for repeated instances of the given frame and scene state; selecting a high-resolution froxel grid generated after a predetermined number of repeated instances; and assigning the selected generated high-resolution froxel grid for the given frame as a source of target data.
G06T 3/4076 - Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution using the original low-resolution images to iteratively correct the high-resolution images
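The training-pair construction above can be sketched as follows. This is an illustrative skeleton under stated assumptions: the exponential moving average, the render callables, and all parameter values are stand-ins for a real volumetric renderer, not taken from the disclosure.

```python
# Sketch of building one (input, target) training pair: the input is a
# time-averaged low-resolution froxel grid, and the target is a high-resolution
# grid taken after repeated renders of the frozen frame and scene state.

def time_average(grids, alpha=0.1):
    """Exponential moving average over a sequence of froxel grids
    (each grid represented here as a flat list of froxel values)."""
    avg = list(grids[0])
    for grid in grids[1:]:
        avg = [(1 - alpha) * a + alpha * g for a, g in zip(avg, grid)]
    return avg

def make_training_pair(low_res_frames, render_high_res, n_instances=8):
    """Input: time-averaged low-res grid over the frame sequence.
    Target: high-res grid selected after n_instances repeated renders
    of the same (frozen) frame and scene state."""
    low = time_average(low_res_frames)
    high = None
    for _ in range(n_instances):
        high = render_high_res()  # repeated instances of the frozen frame
    return low, high

pair = make_training_pair([[1.0, 2.0], [1.0, 2.0]],
                          lambda: [1.0, 2.0, 3.0, 4.0])
print(pair)
```

Freezing the scene state while repeating the high-resolution render lets any per-frame noise or temporal accumulation settle, so the selected grid is a converged target for the upscaler to learn against.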
A method for indicating real and virtual objects during display of a virtual environment on a virtual reality, VR, headset, the method comprising: obtaining real-time data derived from monitoring a real-world environment associated with the VR headset; obtaining virtual display data representing one or more virtual objects within the virtual environment; analysing at least one of the real-time data and virtual display data to identify at least one target object in the data; classifying the object type of the at least one target object, the object type being either a real object or a virtual object; providing to the user an indication of the object type of the at least one target object, during display of a virtual environment.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
A data processing apparatus comprising rendering circuitry to render content images for a virtual environment, and image processing circuitry to generate one or more output images in response to one or more of the content images, wherein the image processing circuitry is configured to input at least one 2D volumetric effect image and one or more of the content images to a neural style transfer "NST" model, the NST model being trained to generate one or more of the output images using the at least one 2D volumetric effect image as a style image.
A streaming system is disclosed, comprising: a server computer configured to send a stream of video data; a streaming device configured to receive the video data and provide the video data to a display for displaying the video data to a viewer; and a sensor in communication with the streaming device, configured to measure a quantity that enables a viewing distance (D) between the display and the viewer to be determined; wherein the streaming device is configured to use the quantity to determine the viewing distance and send the viewing distance to the server computer; wherein the server computer is configured to adjust one or more features of the video data based on the viewing distance.
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
A63F 13/22 - Setup operations, e.g. calibration, key configuration or button assignment
H04N 21/2662 - Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
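The server-side adjustment above can be sketched as a mapping from viewing distance to stream parameters. The distance bands and resolutions below are illustrative assumptions; the disclosure only requires that some feature of the video data be adjusted based on the determined distance.

```python
# Hypothetical sketch: the server lowers the streamed resolution as the
# viewer moves further from the display, since extra detail is not visible.

def choose_resolution(viewing_distance_m):
    """Map viewing distance (metres) to a streamed resolution (width, height)."""
    if viewing_distance_m < 1.5:
        return (3840, 2160)  # close viewer: fine detail is visible, stream 4K
    if viewing_distance_m < 3.0:
        return (1920, 1080)  # mid-range: 1080p is visually sufficient
    return (1280, 720)       # distant viewer: save bandwidth with 720p

print(choose_resolution(2.0))  # -> (1920, 1080)
```

The same pattern applies to other adjustable features the abstract allows for, such as bitrate or UI text size, keyed off the same sensor-derived distance.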
A computer-implemented method of rendering an image using a point cloud, the method comprising: receiving a plurality of points in the point cloud, each point comprising an extent defined by a three-dimensional extent function, centred on a centre point; determining a clipping surface for a point within the point cloud, wherein the clipping surface defines a boundary of the extent; and rendering the image by rendering a portion of the extent of each point within the boundary defined by its respective clipping surface.
A machine learning-based model is configured to make inferences about computer game actions to execute based on dynamic, varying player gestures and to translate those game actions into input sequence macros. In some instances, the button sequence mapping for the macros can even dynamically change based on game state so that different macros for the same computer game action might be inferred by the model depending on game state.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/40 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
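The state-dependent macro mapping described above can be sketched as a lookup keyed on both the inferred action and the current game state. All action, state, and button names here are invented for illustration; the ML inference step is out of scope of the sketch.

```python
# Hypothetical sketch: the same inferred game action expands to different
# button-sequence macros depending on the current game state, mirroring the
# dynamic mapping the model is said to support.

MACROS = {
    ("attack", "on_foot"): ["R2", "R2", "triangle"],
    ("attack", "mounted"): ["L1", "R2"],
    ("dodge", "on_foot"): ["circle"],
}

def expand_action(action, game_state):
    """Translate an inferred game action into a button-sequence macro,
    selected by the current game state (empty list if unmapped)."""
    return MACROS.get((action, game_state), [])

print(expand_action("attack", "on_foot"))  # -> ['R2', 'R2', 'triangle']
print(expand_action("attack", "mounted"))  # -> ['L1', 'R2']
```

Keying the table on game state is what lets one gesture-inferred action ("attack") produce different input sequences in different contexts, as the abstract describes.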
97.
AUTOMATED PROCEDURAL GENERATION OF 3D ASSETS THROUGH GEOMETRIC VARIATIONS USING SHAPE ANALYSIS AND SHAPE SYNTHESIS
A machine learning (ML) pipeline works solely on 3D assets without any metadata such as part labels, part structure, or part similarity metrics given explicitly alongside the 3D assets. The time required to generate assets is reduced by splitting the process into an online and an offline stage. An ML retrieval model based on text and/or image input selects a template asset and candidate parts for generation. A ranking metric is formulated after generating variations from a shape synthesis module, using part similarity metrics to rank generated assets. The ranking score closely matches human perception. The metric can also be used to weed out defective generated assets without human intervention.
Provided is an information processing device 10 that controls a controller 20 provided with a housing and a force sense presentation circuit that presents a force sense in at least a portion of the housing. The information processing device 10 comprises a processor, and the processor estimates at least one among a gripping state, an arrangement position, and an orientation of the housing of the controller 20, and controls the force sense presentation by means of the force sense presentation circuit on the basis of the result of the estimation.
An information processing device 10 controls a controller 20 including: a housing 21; and at least one operation element 22. The information processing device 10 comprises a processor. The processor acquires information indicating at least any of the following: the gripping state, placement position, and posture of the housing 21 of the controller 20; and a user of the controller 20. In a case where the acquired information satisfies a prescribed condition, the processor performs prescribed correction processing on the content of an operation input from at least a portion of the operation elements.
This information processing device: acquires, as a target image, an image that is rendered in order to be presented to a user by an application program; acquires translation data obtained by translating a character string in a prescribed first language included in the target image into a second language different from the first language; and displays on a screen the acquired translation data together with the target image.