A video decoder may be configured to generate a first ordering of reference pictures in a reference picture list for a first block of a slice, wherein generating the first ordering of the reference pictures for the first block comprises assigning indexes to the reference pictures; and generate a second ordering of the reference pictures in the reference picture list for a second block of the slice based on an adaptive reference picture reordering process, wherein the first ordering is different than the second ordering and generating the second ordering of reference pictures for the second block comprises assigning at least some of the indexes to different reference pictures than in the first ordering; decode the first block using the first ordering of the reference pictures; and decode the second block using the second ordering of the reference pictures.
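For illustration only, a minimal Python sketch of per-block adaptive reference picture reordering in the spirit of the abstract above: reference indexes are reassigned for each block by sorting the list with a per-block cost. The cost values, picture identifiers, and the use of a cost-based sort are assumptions made for this example, not the claimed derivation.

```python
# Hedged sketch: per-block reference picture reordering. The cost values and
# the cost-based sort are illustrative assumptions, not the claimed process.

def order_reference_list(ref_pics, block_cost):
    """ref_pics: list of picture ids; block_cost: dict pic_id -> cost for this block."""
    ordered = sorted(ref_pics, key=lambda pic: block_cost[pic])
    return {idx: pic for idx, pic in enumerate(ordered)}   # index -> reference picture


refs = ["P0", "P1", "P2"]
print(order_reference_list(refs, {"P0": 30, "P1": 10, "P2": 20}))  # first block
print(order_reference_list(refs, {"P0": 5, "P1": 40, "P2": 25}))   # second block: different ordering
```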
H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
H04N 19/136 - Incoming video signal characteristics or properties
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/51 - Motion estimation or motion compensation
2.
CONTEXT ADAPTIVE BINARY ARITHMETIC CODING (CABAC) PROBABILITY ESTIMATION FOR VIDEO CODING
A video decoder is configured to receive bin values according to a context model, wherein the context model includes one or more probability states and the bin values correspond to one or more syntax elements of the video data; determine a first shift value based on the received bin values; determine a second shift value based on the received bin values; update a first state variable based on the first shift value; update a second state variable based on the second shift value; determine, for a next bin to be decoded, a probability state for the context model based on the first state variable and the second state variable; context decode the next bin to be decoded based on the probability state; and determine values for the one or more syntax elements based on the received bin values and the context decoded next bin value.
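For illustration only, a minimal Python sketch of a two-state (dual-rate) probability estimator of the kind described above: two state variables are updated with different shift values and their average gives the probability state for the next bin. The 15-bit precision, shift values, and variable names are assumptions for this example, not values from any codec specification.

```python
PROB_BITS = 15   # probability stored as an integer in [0, 2**15); assumed precision


class DualRateContext:
    def __init__(self, shift_fast=4, shift_slow=7, init_prob=1 << (PROB_BITS - 1)):
        self.shift_fast = shift_fast   # first shift value (fast-adapting state)
        self.shift_slow = shift_slow   # second shift value (slow-adapting state)
        self.state_fast = init_prob    # first state variable
        self.state_slow = init_prob    # second state variable

    def probability(self):
        # Probability state for the next bin: average of the two state variables.
        return (self.state_fast + self.state_slow + 1) >> 1

    def update(self, bin_value):
        # Each state moves toward 0 or 2**PROB_BITS at its own rate.
        target = bin_value << PROB_BITS
        self.state_fast += (target - self.state_fast) >> self.shift_fast
        self.state_slow += (target - self.state_slow) >> self.shift_slow


ctx = DualRateContext()
for b in [1, 1, 0, 1, 1, 1]:
    ctx.update(b)
print(ctx.probability())   # estimate of P(bin == 1), scaled by 2**15
```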
H04N 19/13 - Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
H04N 19/14 - Coding unit complexity, e.g. amount of activity or edge presence estimation
H04N 19/184 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being bits, e.g. of the compressed video stream
H04N 19/42 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
3.
INTER PREDICTION CODING WITH RADIUS INTERPOLATION FOR PREDICTIVE GEOMETRY-BASED POINT CLOUD COMPRESSION
Example devices and techniques for coding point cloud data are described. An example device includes memory configured to store the point cloud data and one or more processors communicatively coupled to the memory. The one or more processors are configured to determine at least two reference points in a reference point cloud frame of the point cloud data. The one or more processors are configured to apply radius interpolation to the at least two reference points to obtain at least one radius inter predictor for at least one current point in a current point cloud frame of the point cloud data. The one or more processors are configured to code the current point cloud frame based on the at least one radius inter predictor for the at least one current point in the current point cloud frame.
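For illustration only, a hedged Python sketch of radius interpolation between two reference points: a radius inter predictor for the current point is obtained by linearly interpolating the reference radii, here weighted by azimuth. The use of azimuth as the interpolation variable and the sample values are assumptions for this example.

```python
def interpolate_radius(ref_a, ref_b, azimuth_cur):
    """ref_a / ref_b: (radius, azimuth) pairs from the reference point cloud frame."""
    r_a, phi_a = ref_a
    r_b, phi_b = ref_b
    if phi_b == phi_a:
        return (r_a + r_b) / 2.0
    t = (azimuth_cur - phi_a) / (phi_b - phi_a)
    return r_a + t * (r_b - r_a)   # radius inter predictor for the current point


print(interpolate_radius((100.0, 0.10), (110.0, 0.20), 0.15))  # -> 105.0
```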
H04N 19/503 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
4.
MOTION VECTOR DIFFERENCE SIGN PREDICTION FOR VIDEO CODING
A video decoder may be configured to construct motion vector candidates using possible sign values, respective magnitudes of motion vector difference components, and a motion vector predictor for a block of video data, wherein the possible sign values include a positive sign value and a negative sign value, sort the motion vector candidates based on a cost for each of the motion vector candidates to create a sorted list, determine a respective motion vector difference sign for each motion vector difference component based on a motion vector sign predictor index and the sorted list, and decode the block of video data using the respective magnitudes of the motion vector difference components and the respective motion vector difference sign for each motion vector difference component.
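For illustration only, a Python sketch of motion vector difference sign prediction: candidates are built for every sign combination, sorted by cost, and the signalled sign predictor index selects an entry from the sorted list. The cost function here is a toy stand-in (template matching or similar would be used in practice), and all names are assumptions for this example.

```python
from itertools import product

def derive_mvd_signs(mvd_abs, mvp, sign_pred_index, cost_fn):
    """mvd_abs: (|dx|, |dy|); mvp: motion vector predictor (px, py)."""
    candidates = []
    for sx, sy in product((+1, -1), repeat=2):          # all possible sign combinations
        mv = (mvp[0] + sx * mvd_abs[0], mvp[1] + sy * mvd_abs[1])
        candidates.append(((sx, sy), cost_fn(mv)))
    candidates.sort(key=lambda c: c[1])                  # sorted list by cost
    return candidates[sign_pred_index][0]                # signs selected by the index


# toy cost: distance to an assumed "true" motion vector (3, -2)
cost = lambda mv: abs(mv[0] - 3) + abs(mv[1] + 2)
print(derive_mvd_signs((2, 1), (1, -1), sign_pred_index=0, cost_fn=cost))  # (1, -1)
```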
H04N 19/139 - Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/54 - Motion estimation other than block-based using feature points or meshes
H04N 19/56 - Motion estimation with initialisation of the vector search, e.g. estimating a good candidate to initiate a search
H04N 19/567 - Motion estimation based on rate distortion criteria
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
5.
ADAPTIVE BILATERAL MATCHING FOR DECODER SIDE MOTION VECTOR REFINEMENT FOR VIDEO CODING
An example method of encoding or decoding video data includes determining a motion vector for a block of video data using a decoder side motion vector derivation process that includes performing an iterative search process, wherein performing the iterative search process includes: determining a minimum cost through a previous search iteration; updating an overall minimum cost through a current search iteration; and terminating the iterative search process early based on a comparison of the minimum cost through the previous search iteration and the overall minimum cost through the current search iteration; and encoding or decoding the block of video data using the motion vector.
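For illustration only, a minimal Python sketch of the early-termination check described above: the iterative search stops when the current iteration does not improve on the minimum cost reached through the previous iteration. The candidate generator and cost function are assumptions for this example.

```python
def iterative_search(candidates_per_iteration, cost_fn, max_iterations=8):
    best_mv, overall_min = None, float("inf")
    for it in range(max_iterations):
        prev_min = overall_min                      # minimum cost through the previous iteration
        for mv in candidates_per_iteration(it):
            c = cost_fn(mv)
            if c < overall_min:
                overall_min, best_mv = c, mv        # update the overall minimum cost
        if overall_min >= prev_min:                 # no improvement: terminate early
            break
    return best_mv


cands = lambda it: [(it, d) for d in range(-2, 3)]
print(iterative_search(cands, lambda mv: abs(mv[0] - 1) + abs(mv[1])))   # (1, 0)
```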
H04N 19/52 - Processing of motion vectors by encoding by predictive encoding
H04N 19/557 - Motion estimation characterised by stopping computation or iteration based on certain criteria, e.g. error magnitude being too large or early exit
H04N 19/56 - Motion estimation with initialisation of the vector search, e.g. estimating a good candidate to initiate a search
6.
ADAPTIVE BILATERAL MATCHING FOR DECODER SIDE MOTION VECTOR REFINEMENT
Systems and techniques are provided for processing video data. For example, the systems and techniques can include obtaining a current picture of video data and obtaining reference pictures for the current picture from the video data. A merge mode candidate can be determined for the current picture. First and second motion vectors can be identified for the merge mode candidate. A motion vector search strategy can be selected for the merge mode candidate from a plurality of motion vector search strategies. The selected motion vector search strategy can be associated with one or more constraints corresponding to at least one of the first motion vector or the second motion vector. The selected motion vector search strategy can be used to determine refined motion vectors based on the first motion vector, the second motion vector, and the reference pictures. The merge mode candidate can be processed using the refined motion vectors.
H04N 19/44 - Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
H04N 19/52 - Processing of motion vectors by encoding by predictive encoding
H04N 19/577 - Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
Aspects relate to component carriers that are associated with an uplink control channel group. In some examples, data may be communicated on one of the component carriers and feedback for the data may be communicated on another one of the component carriers.
An example device for decoding video data includes a memory configured to store video data; and one or more processors implemented in circuitry and configured to: determine a size of a current block of video data; determine an intra-prediction mode for the current block of video data; determine a mode group including the determined intra-prediction mode, the mode group being one of a plurality of mode groups, each including respective sets of intra-prediction modes; determine a set of available multiple transform selection (MTS) schemes for the current block according to the size and the intra-prediction mode for the current block; determine an MTS scheme from the set of available MTS schemes according to the determined mode group; apply transforms of the MTS scheme to a transform block of the current block to produce a residual block for the current block; and decode the current block using the residual block.
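For illustration only, a hedged Python sketch of selecting a multiple transform selection (MTS) scheme from the block size and an intra-prediction-mode group. The mode-group boundaries, the size restriction, and the candidate transform pairs are assumptions for this example, not the claimed tables.

```python
DCT2, DST7, DCT8 = "DCT2", "DST7", "DCT8"

def mode_group(intra_mode):
    if intra_mode in (0, 1):                  # planar / DC (assumed grouping)
        return 0
    return 1 if intra_mode < 34 else 2        # "mostly horizontal" vs "mostly vertical"

def available_mts_schemes(width, height):
    if max(width, height) > 32:               # assumed size restriction
        return [(DCT2, DCT2)]
    return [(DCT2, DCT2), (DST7, DST7), (DST7, DCT8), (DCT8, DST7)]

def select_mts(width, height, intra_mode, scheme_index):
    schemes = available_mts_schemes(width, height)
    if mode_group(intra_mode) == 0:           # example policy: fewer options for planar/DC
        schemes = schemes[:2]
    return schemes[scheme_index % len(schemes)]


print(select_mts(16, 16, intra_mode=50, scheme_index=2))   # ('DST7', 'DCT8')
```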
H04N 19/11 - Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
H04N 19/12 - Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/61 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
9.
ADAPTIVELY CODING MOTION INFORMATION FOR MULTIPLE HYPOTHESIS PREDICTION FOR VIDEO CODING
An example device for decoding video data includes one or more processors configured to: generate a first prediction block for a current block of video data using a base inter-prediction mode; code a merge mode syntax element for a second prediction block representing an additional prediction hypothesis, the merge mode syntax element indicating whether motion information for the second prediction block is coded using merge mode; code the motion information for the second prediction block according to the merge mode syntax element, wherein to code the motion information, the one or more processors are configured to form a merge candidate list including merge candidates representing respective sets of uni-prediction motion information; generate the second prediction block for the current block of video data using the motion information; form a multi-hypothesis prediction block from the first and second prediction blocks; and decode the current block using the multi-hypothesis prediction block.
H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
H04N 19/52 - Processing of motion vectors by encoding by predictive encoding
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
10.
CONTEXT MODELING FOR SIGN PREDICTION FOR VIDEO CODING
A video coder may code a sign prediction syntax element that indicates whether a sign prediction hypothesis is correct for a transform coefficient. The video coder may code the sign prediction syntax element using a context-based coding process. The video coder may determine a context for coding the sign prediction syntax element based on a position of the transform coefficient in a block of video data. The context may be further based on a coding mode used to code the block.
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/129 - Scanning of coding units, e.g. zig-zag scan of transform coefficients or flexible macroblock ordering [FMO]
H04N 19/13 - Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
H04N 19/154 - Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
H04N 19/18 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a set of transform coefficients
H04N 19/184 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being bits, e.g. of the compressed video stream
H04N 19/60 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
11.
DISCONNECTION ARC PREVENTION IN CABLE-SUPPLIED POWER CONNECTION
Power delivery may be controlled to help prevent arcing when a data cable supplying power from a power source device to a power sink device is disconnected. The presence of a user in proximity to a connection between a cable plug and a cable receptacle may be detected. The level of a power signal being conveyed from the power source to the power sink may be reduced in response to the detection.
An example device for coding point cloud data includes a memory configured to store data representing points of a point cloud, and one or more processors implemented in circuitry and configured to: determine height values of points in a point cloud; classify the points into a set of ground points or a set of object points according to the height values; and code the ground points and the object points according to the classifications. The one or more processors may determine top and bottom thresholds and classify the ground and object points according to the top and bottom thresholds. The one or more processors may further code a data structure, such as a geometry parameter set (GPS), including data representing the top and bottom thresholds.
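For illustration only, a minimal Python sketch of classifying points into ground and object sets with top and bottom height thresholds; the exact classification rule and threshold values are assumptions for this example.

```python
def classify_points(points, bottom_thr, top_thr):
    """points: iterable of (x, y, z); z is treated as the height value."""
    ground, objects = [], []
    for p in points:
        z = p[2]
        # Points whose height lies between the thresholds are treated as ground points.
        (ground if bottom_thr <= z <= top_thr else objects).append(p)
    return ground, objects


pts = [(0, 0, 0.1), (1, 2, 0.3), (0, 5, 2.4)]
ground, objects = classify_points(pts, bottom_thr=-0.5, top_thr=0.5)
print(len(ground), len(objects))   # 2 1
```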
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
A video decoder may be configured to determine a motion vector and a motion vector precision for a current block; identify a current block template within the current picture; and search within a search area for a final reference block template that corresponds to the current block template, wherein to search within the search area, the video decoder is further configured to: identify an initial reference block template based on the motion vector, search other reference block templates around the initial reference block template using a step size that is set to an initial step size, and iteratively reduce the step size from the initial step size until the step size is set to a final step size that equals the motion vector precision; and determine a prediction block for the current block based on the final reference block template.
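For illustration only, a hedged Python sketch of a template-matching refinement that halves the step size each iteration until the step size equals the motion vector precision. The 4-neighbour search pattern, the integer units, and the cost function are assumptions for this example.

```python
def refine_mv(initial_mv, mv_precision, initial_step, template_cost):
    best_mv = initial_mv
    best_cost = template_cost(best_mv)
    step = initial_step
    while True:
        improved = True
        while improved:                        # search around the current best position
            improved = False
            for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
                cand = (best_mv[0] + dx, best_mv[1] + dy)
                c = template_cost(cand)
                if c < best_cost:
                    best_mv, best_cost, improved = cand, c, True
        if step <= mv_precision:               # final step size equals the MV precision
            break
        step = max(step // 2, mv_precision)    # iteratively reduce the step size
    return best_mv


cost = lambda mv: abs(mv[0] - 5) + abs(mv[1] - 3)
print(refine_mv((0, 0), mv_precision=1, initial_step=4, template_cost=cost))   # (5, 3)
```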
H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/192 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding the adaptation method, adaptation tool or adaptation type being iterative or recursive
An example of processing a point cloud includes responsive to determining to predict a current point in the point cloud using predictive geometry coding, selecting, from a set of prediction modes, a prediction mode for the current point, wherein the set of prediction modes includes at least an intra prediction mode and an inter prediction mode; and responsive to selecting the inter prediction mode for the current point, predicting the current point of the point cloud using inter prediction.
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
Systems and techniques are provided for overlapped block motion compensation (OBMC). A method can include determining that an OBMC mode is enabled for a current subblock of video data; for a neighboring subblock adjacent to the current subblock, determining whether a first, a second, and a third condition are met, the first condition comprising that all reference picture lists for predicting the current subblock are used to predict the neighboring subblock, the second condition comprising that identical reference pictures are used to determine motion vectors associated with the current subblock and the neighboring subblock, and the third condition comprising that a difference between motion vectors of the current subblock and the neighboring subblock does not exceed a threshold; and based on determining that the OBMC mode is enabled and the first, second, and third conditions are met, determining not to use motion information of the neighboring subblock for motion compensation of the current subblock.
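For illustration only, a Python sketch of the three-condition test for skipping a neighbouring subblock's motion information in OBMC blending. The data layout and the motion vector threshold are assumptions for this example.

```python
def skip_obmc_for_neighbor(cur, nbr, mv_threshold=1):
    """cur / nbr: dicts with 'ref_lists' (set), 'ref_pics' (dict), 'mvs' (dict)."""
    # Condition 1: every reference picture list used to predict the current
    # subblock is also used to predict the neighbouring subblock.
    cond1 = cur["ref_lists"] <= nbr["ref_lists"]
    # Condition 2: identical reference pictures per list.
    cond2 = all(cur["ref_pics"][l] == nbr["ref_pics"].get(l) for l in cur["ref_lists"])
    # Condition 3: motion vector differences do not exceed the threshold.
    cond3 = all(
        abs(cur["mvs"][l][0] - nbr["mvs"][l][0]) <= mv_threshold
        and abs(cur["mvs"][l][1] - nbr["mvs"][l][1]) <= mv_threshold
        for l in cur["ref_lists"] if l in nbr["mvs"]
    )
    return cond1 and cond2 and cond3


cur = {"ref_lists": {0}, "ref_pics": {0: 7}, "mvs": {0: (4, -2)}}
nbr = {"ref_lists": {0, 1}, "ref_pics": {0: 7, 1: 9}, "mvs": {0: (4, -1), 1: (0, 0)}}
print(skip_obmc_for_neighbor(cur, nbr))   # True: neighbour's motion is not used
```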
H04N 19/583 - Motion compensation with overlapping blocks
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
An example device for decoding video data includes one or more processors configured to: determine that a first weight and a second weight are specified for a bi-prediction mode predicted current block of video data; determine whether the current block is to be predicted using multi-hypothesis prediction (MHP) mode with the bi-prediction mode as a base mode; in response to determining that the current block is to be predicted using the MHP mode with the bi-prediction mode as the base mode, determine an additional inter-prediction mode of the MHP mode; generate a first prediction block according to the bi-prediction mode; generate a second prediction block according to the additional inter-prediction mode; generate a final prediction block for the current block according to the MHP mode using the first prediction block and the second prediction block; and decode the current block using the final prediction block.
H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/34 - Scalability techniques involving progressive bit-plane based encoding of the enhancement layer, e.g. fine granular scalability [FGS]
H04N 19/52 - Processing of motion vectors by encoding by predictive encoding
H04N 19/523 - Motion estimation or motion compensation with sub-pixel accuracy
H04N 19/573 - Motion compensation with multiple frame prediction using two or more reference frames in a given prediction direction
H04N 19/577 - Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
Example devices and techniques for multi-pass decoder-side motion vector refinement (DMVR) are disclosed. An example device includes memory configured to store video data and one or more processors coupled to the memory. The one or more processors are configured to apply a multi-pass DMVR to a motion vector for a block of the video data to determine at least one refined motion vector and decode the block based on the at least one refined motion vector. The multi-pass DMVR includes a block-based first pass, a sub-block-based second pass, and a sub-block-based third pass.
Techniques are described herein for processing video data using enhanced interpolation filters for intra-prediction. For instance, a device can determine an intra-prediction mode for predicting a block of video data. The device can determine a type of smoothing filter to use for the block of video data, wherein the type of the smoothing filter is determined based at least in part on comparing at least one of a width of the block of video data and a height of the block of video data to a first threshold. The device can further perform intra-prediction for the block of video data using the determined type of smoothing filter and the intra-prediction mode.
H04N 19/117 - Filters, e.g. for pre-processing or post-processing
H04N 19/159 - Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/59 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
H04N 19/82 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
19.
DECODER SIDE INTRA MODE DERIVATION FOR MOST PROBABLE MODE LIST CONSTRUCTION IN VIDEO CODING
A method of decoding video data includes deriving, for a current block of video data and using decoder side intra mode derivation (DIMD), a list of intra modes using reconstructed samples of neighboring blocks; constructing, for the current block, a most probable mode (MPM) list, wherein constructing the MPM list comprises inserting, into the MPM list, at least one intra mode from the derived list of intra modes; and predicting, using a candidate selected from the constructed MPM list, the current block.
H04N 19/11 - Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
H04N 19/147 - Data rate or code amount at the encoder output according to rate distortion criteria
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
20.
LOW COMPLEXITY HISTORY USAGE FOR RICE PARAMETER DERIVATION FOR HIGH BIT-DEPTH VIDEO CODING
A method of decoding video data comprises updating a coefficient statistic value based on one or more transform coefficients of a transform block (TB), wherein updating the coefficient statistic value comprises, for each respective transform coefficient of the one or more transform coefficients of the TB: performing a derivation process to determine a temporary value, wherein the derivation process is determined based at least in part on which encoding procedure of a plurality of encoding procedures is used to encode the respective transform coefficient, the plurality of encoding procedures including a context-based procedure for encoding the respective transform coefficient and a procedure for encoding the respective transform coefficient as an absolute value; and setting the coefficient statistic value as an average of the coefficient statistic value and the temporary value; determining a history value based on the coefficient statistic value; and determining a Rice parameter for a specific transform coefficient of the TB.
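For illustration only, a hedged Python sketch of maintaining the coefficient statistic as a running average and deriving a Rice parameter from the resulting history value. The per-coefficient temporary-value derivation and the mapping from the history value to the Rice parameter are assumptions for this example.

```python
def update_statistic(stat, coeffs, context_coded_flags):
    for coeff, ctx_coded in zip(coeffs, context_coded_flags):
        # The temporary value depends on how the coefficient was encoded
        # (context-based procedure vs. coded as an absolute value); assumed rule.
        if ctx_coded:
            temp = max(abs(coeff) - 1, 0).bit_length()
        else:
            temp = abs(coeff).bit_length()
        stat = (stat + temp + 1) >> 1          # average of statistic and temporary value
    return stat

def rice_parameter_from_history(stat):
    history_value = stat                        # history value from the statistic (assumed)
    return min(max(history_value - 1, 0), 7)    # clipped Rice parameter (assumed range)


stat = update_statistic(stat=4, coeffs=[13, -6, 2], context_coded_flags=[False, True, True])
print(stat, rice_parameter_from_history(stat))   # 3 2
```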
H04N 19/18 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a set of transform coefficients
H04N 19/13 - Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
H04N 19/134 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
H03M 7/40 - Conversion to or from variable length codes, e.g. Shannon-Fano code, Huffman code, Morse code
21.
LOW COMPLEXITY HISTORY USAGE FOR RICE PARAMETER DERIVATION FOR HIGH BIT-DEPTH VIDEO CODING
Techniques are described herein for processing video data using a history-based rice parameter derivation. For instance, a process can include obtaining a transform block including a plurality of samples. One or more parameters (e.g., rice parameters) can be determined for the plurality of samples by analyzing a local neighborhood of a current sample of the plurality of samples and determining that a number of neighboring transform coefficients of the current sample is less than a threshold amount. A historic parameter value (e.g., a historic rice parameter value) determined from one or more previously decoded transform blocks can be obtained and, based at least in part on the historic parameter value, a parameter (e.g., a rice parameter) can be determined for the current sample. The current sample can be decoded based on the determined parameter for the current sample.
H04N 19/13 - Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
H04N 19/169 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/61 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
H04N 19/91 - Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
A method of encoding point cloud data comprises storing, in a buffer, a maximum coordinate of a pair of coordinates of an applicable node, wherein the applicable node is a most-recently encoded node with a same position as a current node along an applicable axis and the pair of coordinates are for axes different from the applicable axis; determining a context for a planar mode plane position of the current node, wherein determining the context comprises determining, based on the maximum coordinate of the pair of coordinates of the applicable node, a distance value representing a distance between the current node and the applicable node, determining an increment value that indicates whether the distance value is greater than a threshold, and determining a context index based on the increment value; and entropy encoding the planar mode plane position using the determined context.
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
H04N 19/91 - Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
23.
CODING OF COMPONENT OF COLOR ATTRIBUTES IN GEOMETRY-BASED POINT CLOUD COMPRESSION (G-PCC)
A device for decoding encoded point cloud data can be configured to: for a point of a point cloud, determine a first attribute value for a first color component based on a first predicted value and a first residual value; apply a scaling factor to the first residual value to determine a predicted second residual value, wherein the scaling factor has one or both of a non-integer value or an absolute value greater than one; for the point of the point cloud, receive a second residual value in the encoded point cloud data; determine a final second residual value based on the predicted second residual value and the received second residual value; and for the point of the point cloud, determine a second attribute value for a second color component based on a second predicted value and the final second residual value.
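For illustration only, a minimal Python worked example of cross-component residual prediction for colour attributes: the second component's residual is predicted by scaling the first component's residual, then corrected by the received residual. The sample values and the 0.5 scaling factor are assumptions for this example.

```python
def decode_second_component(pred1, res1, pred2, received_res2, scale=0.5):
    attr1 = pred1 + res1                          # first colour component
    predicted_res2 = round(scale * res1)          # scaled first residual (non-integer scale)
    final_res2 = predicted_res2 + received_res2   # add the received second residual
    attr2 = pred2 + final_res2                    # second colour component
    return attr1, attr2


print(decode_second_component(pred1=120, res1=8, pred2=64, received_res2=-1))
# -> (128, 67): predicted second residual is 4, final second residual is 3
```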
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
24.
METHOD AND APPARATUS FOR MEDIA APPLICATION FUNCTION EXPOSURE FUNCTIONALITY
Embodiments include methods and devices for media application function (AF) exposure functionality. Various aspects may include receiving an access configuration message for a service or application provider, the access configuration message including an indication of one or more event types and parameters, one or more exposure levels, and aggregation rules. Various embodiments may include an AF server subscribing other computing devices to event types and parameters for the service or application based on the exposure levels of the other computing devices.
An example method of encoding a point cloud includes determining one or more attribute parameters of an attribute of a point in the point cloud of a frame, wherein the one or more attribute parameters define how to determine or use a value of the attribute and are applicable to a plurality of points in the point cloud; and signaling, in a bitstream indicative of the point cloud, the one or more attribute parameters in a syntax structure that is specific to the frame.
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
Systems and techniques are described herein for processing video data. For example, a process can include obtaining a video bitstream, the video bitstream including adaptive loop filter (ALF) data. The process can further include determining a value of an ALF chroma filter signal flag from the ALF data, the value of the ALF chroma filter signal flag indicating whether chroma ALF filter data is signaled in the video bitstream. The process can further include processing at least a portion of a slice of video data based on the value of the ALF chroma filter signal flag.
H04N 19/117 - Filters, e.g. for pre-processing or post-processing
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
H04N 19/82 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
27.
CODING OF LASER ANGLES FOR ANGULAR AND AZIMUTHAL MODES IN GEOMETRY-BASED POINT CLOUD COMPRESSION
A method comprises obtaining a first laser angle; obtaining a second laser angle; obtaining a laser angle difference for a third laser angle; determining a predicted value based on the first laser angle and the second laser angle; and determining the third laser angle based on the predicted value and the laser angle difference for the third laser angle.
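For illustration only, a short worked example in Python of the laser-angle decoding step: the third angle is predicted from the first two angles (linear extrapolation is assumed here as the predictor) and corrected by the signalled difference. The numeric values are illustrative.

```python
def decode_third_laser_angle(angle1, angle2, angle_diff):
    predicted = angle2 + (angle2 - angle1)   # predicted value (assumed linear predictor)
    return predicted + angle_diff            # third laser angle


print(decode_third_laser_angle(angle1=100, angle2=140, angle_diff=-3))   # 177
```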
H04N 19/13 - Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
H04N 19/91 - Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
28.
STORAGE AND DELIVERY OF VIDEO DATA FOR VIDEO CODING
A device for processing video data can be configured to receive a configuration record for decoding a bitstream of the video data, wherein the configuration record for the bitstream includes a toolset indication syntax element that includes information identifying, from a set of video decoding tools, tools that are required to decode the bitstream associated with the configuration record; based on the toolset indication syntax element, determine whether to retrieve the bitstream associated with the configuration record; and based on the determination to retrieve the bitstream associated with the configuration record, retrieve the bitstream and output the bitstream to a video decoder for decoding.
H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
A video decoder for decoding multi-layer video data can be configured to maintain a decoded picture buffer (DPB) for storing reference pictures for a plurality of layers, wherein the plurality of layers comprise at least a first layer and a second layer; prior to decoding a current picture of an access unit of the first layer, perform a picture output and removal process on the DPB, wherein to perform the picture output and removal process on the DPB, the video decoder is further configured to remove from the DPB only decoded pictures that belong to the first layer; and after removing a last decoding unit of the current picture from a coded picture buffer (CPB), perform a picture bumping process across all layers of the DPB.
H04N 19/423 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements
H04N 19/30 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
H04W 4/18 - Information format or content conversion, e.g. adaptation by the network of the transmitted or received information for the purpose of wireless delivery to users or terminals
H04N 19/503 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
30.
COEFFICIENT GROUP BASED RESTRICTION ON MULTIPLE TRANSFORM SELECTION SIGNALING IN VIDEO CODING
A video coder may determine, for a transform block of video data, that at least one coefficient group, of the transform block, that comprises a non-zero transform coefficient is outside of a lowest frequency region of the transform block, wherein the at least one coefficient group is one of a plurality of coefficient groups that each comprise transform coefficients. The video coder may determine not to code a syntax element indicative of a multiple transform selection (MTS) for the transform block based at least in part on the determination that the at least one coefficient group is outside of the lowest frequency region of the transform block. The video coder may code the video data based at least in part on the determination not to code the syntax element indicative of the multiple transform selection for the transform block.
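For illustration only, a hedged Python sketch of the coefficient-group-based restriction: if any coefficient group containing a non-zero transform coefficient lies outside the lowest frequency region, the MTS syntax element is not coded. The region size, group size, and data layout are assumptions for this example.

```python
def mts_index_is_signalled(coeff_groups, low_freq_region=(16, 16), cg_size=4):
    """coeff_groups: dict mapping the (x, y) top-left position of each group to
    a list of its transform coefficients."""
    for (x, y), coeffs in coeff_groups.items():
        if any(c != 0 for c in coeffs):
            outside = (x + cg_size > low_freq_region[0]
                       or y + cg_size > low_freq_region[1])
            if outside:
                return False      # do not code the MTS syntax element
    return True


groups = {(0, 0): [5, 0, -2, 0], (20, 0): [0, 0, 1, 0]}
print(mts_index_is_signalled(groups))   # False: non-zero group outside the region
```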
H04N 19/12 - Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
H04N 19/18 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a set of transform coefficients
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
Various aspects of the present disclosure generally relate to wireless communication. In some aspects, a primary user equipment (UE) may transmit, on a sidelink, information indicating a set of sidelink random access configurations corresponding to a set of uplink random access configurations; receive, on the sidelink and from a secondary UE, a random access signal based at least in part on a selected sidelink random access configuration of the set of sidelink random access configurations, wherein the selected sidelink random access configuration corresponds to a selected uplink random access configuration of the set of uplink random access configurations; and selectively perform a random access procedure in accordance with the selected uplink random access configuration. Numerous other aspects are provided.
Methods, systems, and devices are provided for streaming service in a fifth generation (5G) system (5GS) network. Various embodiments may provide for selecting appropriate network slices for provisioning media content and streaming service over the network. Desired network slice features may be indicated that correspond to service information.
Various aspects of the present disclosure generally relate to wireless communication. In some aspects, a user equipment (UE) may receive, from a base station, a random access channel (RACH) occasion (RO) configuration that includes information related to ordered preamble resources. The UE may generate a preamble resource index according to various parameters selected based on the RO configuration information that relates to the ordered preamble resources. The UE may generate an uplink RACH message that includes a preamble based at least in part on the preamble resource index and a payload associated with a physical uplink shared channel (PUSCH) resource unit (PRU) mapped to the preamble resource index. After decoding the uplink RACH message, the base station may send a response message including scrambled downlink control information (e.g., based on a scrambling identifier or payload information associated with the uplink RACH message). Numerous other aspects are provided.
Methods, systems, and devices are provided for supporting user plane integrity protection (UP IP) for communications with a radio access network (RAN). Various embodiments may include indicating whether or not a wireless device supports UP IP over Evolved Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (eUTRA) by including UP IP support indications in user equipment (UE) security capability information elements (IEs).
A video coder is configured to use bi-directional optical flow (BDOF) to determine, based on a first reference picture and a second reference picture, a prediction block for a current block of a current picture of the video data. The first reference picture is a first picture order count (POC) distance from the current picture. The second reference picture is a second POC distance from the current picture. A constraint is imposed by a video coding standard, the constraint requiring the first POC distance to be equal to the second POC distance. The video coder codes, according to the video coding standard, the current block based on the prediction block for the current block.
H04N 19/577 - Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
H04N 19/557 - Motion estimation characterised by stopping computation or iteration based on certain criteria, e.g. error magnitude being too large or early exit
36.
REFERENCE PICTURE RESAMPLING WITH SWITCHABLE FILTERS
Techniques are described for video encoding and decoding using reference picture resampling with switchable filters. One example involves obtaining a current picture and a reference picture, identifying filter index information for a current block of the current picture, and determining that a first picture size value of the current picture and a second picture size value of the reference picture are different. Based on determining that the first picture size value of the current picture and the second picture size value of the reference picture are different, a resampling process is performed using a default filter index in place of a current filter index identified by the filter index information. Additional examples can use the current filter index identified by the filter index information in subsequent blocks. In various examples, the current filter index can be derived or signaled.
The disclosure describes examples for determining samples to use for DC intra mode prediction, such as where the samples are in a row or column that is not immediately above or immediately left of the current block. The samples may be aligned with the current block such that a last sample in the samples in a row above the current block is in the same column as the last column of the current block and such that a last sample in the samples in a column left of the current block is in the same row as the last row of the current block.
H04N 19/61 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
38.
CLIPPING INDICES CODING FOR ADAPTIVE LOOP FILTER IN VIDEO CODING
A video coder is configured to code an adaptive loop filter (ALF) clipping index as a fixed-length unsigned integer. The video coder may apply, based on the ALF clipping index, an ALF to a block of a picture of the video data.
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
H04N 19/82 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
H04N 19/91 - Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
39.
GRADIENT-BASED PREDICTION REFINEMENT FOR VIDEO CODING
This disclosure describes gradient-based prediction refinement. A video coder (e.g., video encoder or video decoder) determines one or more prediction blocks for inter-predicting a current block (e.g., based on one or more motion vectors for the current block). In gradient-based prediction refinement, the video coder modifies one or more samples of the prediction block based on various factors such as a displacement in the horizontal direction, a horizontal gradient, a displacement in the vertical direction, and a vertical gradient. This disclosure provides for gradient-based prediction refinement where a precision level of the displacement (e.g., at least one of the horizontal or vertical displacement) is unified (e.g., the same) for different prediction modes (e.g., including an affine mode and a bi-directional optical flow (BDOF) mode).
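For illustration only, a minimal Python sketch of gradient-based refinement of a single prediction sample: the refinement is the gradient times the displacement, with the displacement expressed at one unified fractional precision. The 1/32-sample precision and all values are assumptions for this example.

```python
DISP_PREC_BITS = 5   # unified displacement precision (1/32 sample, assumed)

def refine_sample(pred, grad_x, grad_y, dx, dy):
    """dx, dy: displacement in units of 2**-DISP_PREC_BITS samples."""
    correction = grad_x * dx + grad_y * dy
    # Rounded shift back from the displacement precision to sample precision.
    correction = (correction + (1 << (DISP_PREC_BITS - 1))) >> DISP_PREC_BITS
    return pred + correction


print(refine_sample(pred=100, grad_x=6, grad_y=-2, dx=8, dy=4))   # 101
```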
H04N 19/503 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
40.
AFFINE LINEAR WEIGHTED INTRA PREDICTION IN VIDEO CODING
A video coder performs a Most-Probable Mode (MPM) derivation process that derives one or more MPMs for a current block that is not coded using affine linear weighted intra prediction (ALWIP). As part of performing the MPM derivation process, the video coder determines whether a neighboring block of the current block is an ALWIP-coded neighboring block. Based on the neighboring block being an ALWIP-coded neighboring block, the video coder determines that a value of an intra prediction mode of the neighboring block is a value indicating a planar mode. The video coder codes the current block based on one of the MPMs for the current block.
H04N 19/11 - Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
H04N 19/157 - Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/463 - Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
H04N 19/593 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
H04N 19/91 - Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
41.
PREDICTION SIGNAL FILTERING IN AFFINE LINEAR WEIGHTED INTRA PREDICTION
A device for decoding video data determines that a current block of video data is encoded in an affine linear weighted intra prediction (ALWIP) mode; derives, based on a set of left edge neighboring samples of the current block and a set of top edge neighboring samples of the current block, a subset of left edge samples and a subset of top edge samples; applies an affine model to the subset of left edge samples and the subset of top edge samples to generate an intermediate block of intermediate samples; filters the intermediate samples to generate a final prediction block; and decodes the current block of video data based on the final prediction block.
H04N 19/593 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
H04N 19/85 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
42.
BLOCK-BASED QUANTIZED RESIDUAL DOMAIN PULSE CODE MODULATION ASSIGNMENT FOR INTRA PREDICTION MODE DERIVATION
Techniques are described for improving video coding. For example, a first block of a picture included in an encoded video bitstream can be obtained. A second block of the picture can be determined as being coded (e.g., encoded) using a type of block-based quantized residual domain pulse code modulation (BDPCM) mode, such as vertical BDPCM mode or horizontal BDPCM mode. In the event the second block is coded using the vertical BDPCM mode, a vertical intra-prediction mode can be determined for an intra-prediction mode list for the first block. The vertical intra-prediction mode can be added to the intra-prediction mode list for the first block. In the event the second block is coded using the horizontal BDPCM mode, a horizontal intra-prediction mode can be determined for the intra-prediction mode list for the first block and the horizontal intra-prediction mode can be added to the intra-prediction mode list.
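For illustration only, a Python sketch of mapping a neighbouring block's BDPCM direction to an intra-prediction mode when building the mode list for the first block. The mode numbers follow a common convention (18 for horizontal, 50 for vertical) but are assumptions for this example.

```python
HOR_IDX, VER_IDX = 18, 50   # assumed horizontal / vertical intra mode indices

def add_bdpcm_neighbor_mode(mode_list, neighbor_bdpcm_dir):
    """neighbor_bdpcm_dir: 'horizontal', 'vertical', or None."""
    if neighbor_bdpcm_dir == "vertical" and VER_IDX not in mode_list:
        mode_list.append(VER_IDX)   # vertical intra mode stands in for the BDPCM block
    elif neighbor_bdpcm_dir == "horizontal" and HOR_IDX not in mode_list:
        mode_list.append(HOR_IDX)
    return mode_list


print(add_bdpcm_neighbor_mode([0, 1], "vertical"))   # [0, 1, 50]
```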
H04N 19/11 - Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
H04N 19/157 - Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/593 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
A device for decoding video data determines, for a residual block of video data encoded using a transform skip mode, a value for a first neighboring coefficient of a coefficient currently being decoded; determines a value for a second neighboring coefficient of the coefficient currently being decoded; determines a context offset for the coefficient currently being decoded based on the value for the first neighboring coefficient and the value for the second neighboring coefficient; and decodes a value for the coefficient currently being decoded based on the determined context offset.
H04N 19/13 - Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
H04N 19/157 - Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
H04N 19/18 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a set of transform coefficients
H04N 19/182 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
H04N 19/91 - Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
A video encoder and/or video decoder are configured to apply an adaptive loop filter to a reconstructed block of video data. The video encoder and/or video decoder may be configured to determine a value of a gradient for a sample in the reconstructed block of video data, including performing a gradient calculation for the sample using an unavailable sample that is outside of one of: a picture, a slice, a tile, or a tile group that includes the reconstructed block of samples, determine an adaptive loop filter for the reconstructed block of video data based at least in part on the determined value of the gradient for the sample, and apply the determined adaptive loop filter to the reconstructed block of video data to generate a filtered block of video data.
H04N 19/82 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
H04N 19/117 - Filters, e.g. for pre-processing or post-processing
H04N 19/147 - Data rate or code amount at the encoder output according to rate distortion criteria
H04N 19/174 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a slice, e.g. a line of blocks or a group of blocks
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
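As a hedged illustration of the gradient calculation described above, the sketch below computes simple Laplacian-style gradients while replacing unavailable positions (outside the picture, slice, tile, or tile group) with the nearest available sample; the padding rule and names are assumptions for this sketch only.

# Hedged sketch: ALF classification gradients with unavailable samples padded
# by clipping coordinates back into the available region.
def alf_gradients(rec, x, y, x_min, x_max, y_min, y_max):
    def sample(px, py):
        # Unavailable positions are replaced by the nearest available sample.
        px = min(max(px, x_min), x_max)
        py = min(max(py, y_min), y_max)
        return rec[py][px]

    c = sample(x, y)
    grad_v = abs(2 * c - sample(x, y - 1) - sample(x, y + 1))
    grad_h = abs(2 * c - sample(x - 1, y) - sample(x + 1, y))
    return grad_v, grad_h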
An example method for entropy decoding of video data includes retrieving a pre-defined initialization value for a context of a plurality of contexts used in a context-adaptive entropy coding process to entropy code a value for a syntax element for an independently codable unit of video data; determining, based on the pre-defined initialization value and in a linear domain, an initial probability state of the context; and entropy decoding, from a bitstream and based on the initial probability state of the context, a bin of the value for the syntax element.
H04N 19/91 - Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
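The following Python sketch is one hedged reading of the linear-domain initialization described above: the pre-defined value is split into slope and offset parts and mapped linearly in QP to an initial probability state. The nibble split, the slope/offset mappings, and the 15-bit precision are illustrative assumptions.

# Hedged sketch: linear-domain context initialization from a pre-defined value.
def init_context_probability(init_value, qp, prob_bits=15):
    slope_idx = init_value >> 4           # upper nibble (assumption)
    offset_idx = init_value & 0x0F        # lower nibble (assumption)
    slope = slope_idx - 8                 # illustrative re-centred slope
    offset = offset_idx * 8 + 4           # illustrative offset
    # Probability varies linearly with QP, then is clipped and scaled to the
    # arithmetic coder's probability precision.
    p7 = max(1, min(127, slope * (qp - 32) // 16 + offset))
    return p7 << (prob_bits - 7)          # initial probability state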
46.
COEFFICIENT DOMAIN BLOCK DIFFERENTIAL PULSE-CODE MODULATION IN VIDEO CODING
A video decoder may determine, based on syntax elements in a bitstream that comprises an encoded representation of the video data, residual quantized samples of a block of the video data. Additionally, the video decoder may determine quantized residual values based on the residual quantized samples. After determining the quantized residual values, the video decoder may inverse quantize the quantized residual values. The video decoder may generate predicted values by performing intra prediction for the block using unfiltered samples from above or left block boundary samples. Furthermore, the video decoder may reconstruct original sample values of the block based on the inverse-quantized quantized residual values and the predicted values.
H04N 19/48 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using compressed domain processing techniques other than decoding, e.g. modification of transform coefficients, variable length coding [VLC] data or run-length data
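To make the residual reconstruction step above concrete, here is a minimal Python sketch that recovers quantized residual values from coefficient-domain BDPCM residual quantized samples by accumulating along the prediction direction; inverse quantization and intra prediction would follow. The vertical/horizontal flag and names are illustrative.

# Hedged sketch: quantized residuals recovered by accumulation (inverse BDPCM).
def bdpcm_inverse(residual_q_samples, vertical=True):
    h = len(residual_q_samples)
    w = len(residual_q_samples[0])
    q = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if vertical:
                q[i][j] = residual_q_samples[i][j] + (q[i - 1][j] if i > 0 else 0)
            else:
                q[i][j] = residual_q_samples[i][j] + (q[i][j - 1] if j > 0 else 0)
    return q  # inverse quantization and intra prediction are applied afterwards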
An example method includes inferring, for a current transform block of a current video block, a transform type from a plurality of transform types that includes one or more discrete cosine transforms (DCTs) and one or more discrete sine transforms (DSTs), wherein inferring the transform type comprises: determining a size of the current transform block; determining whether the current video block is partitioned using intra-subblock partitioning (ISP); and responsive to determining that the size of the current transform block is less than a threshold and that the current video block is partitioned using ISP, selecting a particular DST of the one or more DSTs as the selected transform type; transforming, using the selected transform type, the current transform block to obtain a block of reconstructed residual data for the video block; and reconstructing, based on the reconstructed residual data for the video block, the video block.
H04N 19/12 - Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
H04N 19/157 - Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/625 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using discrete cosine transform [DCT]
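As one possible reading of the implicit transform selection above, the sketch below picks a DST when the transform block is ISP-partitioned and smaller than a threshold, and a default DCT otherwise. The threshold value and the particular DST/DCT labels are assumptions for illustration.

# Hedged sketch: inferring the transform type for an ISP-partitioned block.
def infer_transform_type(tb_size, uses_isp, size_threshold=16):
    if uses_isp and tb_size < size_threshold:
        return "DST-7"   # a DST selected when the ISP/size condition is met
    return "DCT-2"       # fall back to a default DCT otherwise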
48.
DYNAMIC HYBRID AUTOMATIC REPEAT REQUEST (HARQ) CODEBOOK FOR MULTI-TRANSMIT RECEIVE POINT (TRP) COMMUNICATION
This disclosure provides systems, methods and apparatus for wireless communication. In one aspect, a multi-transmit-receive point (TRP) approach for hybrid automatic repeat request (HARQ) acknowledgment (ACK) feedback using counter downlink assignment indicators (DAIs) (cDAIs) and total DAIs (tDAIs) is provided. For example, some techniques and apparatuses described herein may provide a joint counting method in which cDAIs and tDAIs are implemented and tracked jointly between the TRPs of a multi-TRP group. This may be useful in the ideal backhaul scenario when the multi-TRP group is jointly scheduled, and may be more robust against errors than a separate counting method. Some techniques and apparatuses described herein may provide a separate counting method, in which cDAIs and tDAIs are implemented and tracked separately by the respective TRPs of a multi-TRP group.
Some techniques and apparatuses described herein provide for the determination of a gap for a scheduled communication involving multiple, different TBs. For example, some techniques and apparatuses described herein provide for the determination of a gap between two or more PDSCHs that are scheduled by a same DCI based at least in part on a UE capability, such as a processing time or a processing mode. Some techniques and apparatuses described herein provide for the determination of a gap between a last PDSCH and a next PDCCH based at least in part on the UE capability. The gap(s) may permit the UE to successfully receive and/or decode multiple PDSCHs and/or PDCCHs in view of the limitations imposed by the UE's processing time or processing mode. In this way, efficiency of PDSCH/PDCCH resource utilization may be improved, the likelihood of a failed PDCCH transmission may be reduced, and network efficiency may be improved.
Methods, systems, and devices for wireless communications are described. A user equipment (UE) may use easily detectable transmission parameters to infer difficult-to-detect parameters for communications with a base station. For example, the base station may provide the UE with multiple sets of transmission parameters, and the UE may choose from among these sets of transmission parameters for the communications. In some cases, the UE may be configured with a reference set of transmission parameters, receive one or more transmission parameters that are different relative to the reference set of transmission parameters, and then determine to use the different transmission parameters for the communications. Additionally or alternatively, the multiple sets of transmission parameters may be specified for a certain uplink message (e.g., random access channel messages), where the UE selects the transmission parameter set based on one or more characteristics of the uplink message.
An example device for coding (encoding or decoding) video data includes a memory for storing video data and one or more processors implemented in circuitry and configured to form an inter-prediction block for a current chrominance block of the video data; form an intra-prediction block for the current chrominance block of the video data; determine a number of neighboring blocks to a luminance block corresponding to the current chrominance block that are intra-prediction coded; determine a first weight and a second weight according to the number of neighboring blocks that are intra-prediction coded; apply the first weight to the inter-prediction block and the second weight to the intra-prediction block; combine the first weighted inter-prediction block and the second weighted intra-prediction block to form a prediction block for the current chrominance block; and code the current chrominance block using the prediction block.
H04N 19/107 - Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
H04N 19/159 - Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
52.
FEEDBACK TRANSMISSION USING MULTIPLE ACCESS SIGNATURES
Various aspects of the present disclosure generally relate to wireless communication. In some aspects, a user equipment (UE) may receive, from another UE, a particular transmission on a particular transmission resource. The UE may transmit, on a feedback resource selected based at least in part on the particular transmission resource, a feedback message using a multiple access signature determined based at least in part on a characteristic of the particular transmission. Numerous other aspects are provided.
A video decoder may receive, in a bitstream that comprises an encoded representation of video data, information indicating whether a residual block is partitioned and information indicating a partition tree type for the residual block based on the residual block being partitioned, wherein the residual block is indicative of a difference between a current block and a prediction block. The video decoder may determine, based on the received information that the residual block is partitioned and the partition tree type for the residual block, a plurality of residual sub-blocks into which the residual block is partitioned according to the partition tree type. The video decoder may produce the residual data for the current block based at least in part on the residual block being partitioned according to the partition tree type into the plurality of residual sub-blocks and may decode the current block using the residual data.
H04N 19/119 - Adaptive subdivision aspects e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
In order to overcome problems that a UE may face in determining CORESETs and beams for communication with the base station during a COT, a method, apparatus, and computer-readable medium are provided for a base station to indicate to the UE which CORESET(s), QCL relationships, UL resources, and/or spatial relationships are selected for a COT. A UE receives an indication of a set of CORESETs selected from a plurality of CORESETs or a set of QCL relationships selected from a plurality of QCL relationships. The UE determines the set of CORESETs from the plurality of CORESETs or the set of QCL relationships from the plurality of QCL relationships based on the indication received. A UE receives an indication of a set of UL resources or a set of spatial relations. The UE determines the set of UL resources or the set of spatial relations based on the indication received.
A video coder may be configured to determine a value for a zero parameter based on the Rice parameter, wherein the value for the zero parameter identifies a coded value that corresponds to a coefficient level of zero; receive a first coded value for a first coefficient of the second set of coefficients; and based on the value for the zero parameter and the first coded value for the first coefficient, determine a level for the first coefficient.
H04N 19/13 - Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
H04N 19/122 - Selection of transform size, e.g. 8x8 or 2x4x8 DCT; Selection of sub-band transforms of varying structure or type
H04N 19/18 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a set of transform coefficients
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
H04N 19/60 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
H04N 19/91 - Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
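As a hedged illustration of the zero-parameter mapping described above, the sketch below derives a "zero position" from the Rice parameter and maps a coded value back to a coefficient level; the specific derivation (a power of two) is an assumption for this sketch.

# Hedged sketch: level decoding using a zero parameter derived from the Rice
# parameter (the derivation shown is illustrative).
def decode_level(coded_value, rice_param):
    zero_pos = 1 << rice_param          # coded value that represents level 0
    if coded_value == zero_pos:
        return 0
    if coded_value < zero_pos:
        return coded_value + 1          # values below zero_pos are shifted up
    return coded_value                  # larger values map to themselves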
A user equipment (UE) may receive, from a base station, a synchronization signal block (SSB) of a set of quasi-collocated (QCL) SSBs, the SSB comprising an indication of a parameter indicating information associated with a plurality of downlink control channel locations corresponding to the set of QCL SSBs. The UE may determine, based at least in part on the parameter, the plurality of downlink control channel locations corresponding to the set of QCL SSBs. The UE may receive a downlink grant for system information based at least in part on monitoring one or more downlink control channel locations of the plurality of downlink control channel locations. The UE may receive the system information based at least in part on the downlink grant. The UE may establish a connection with the base station based at least in part on the SSB and the received system information.
Systems and techniques for processing video data include a history-based motion vector predictor (HMVP) table with two or more HMVP candidates, where a first entry of the HMVP table associated with a first index value includes a first HMVP candidate and a second entry of the HMVP table associated with a second index value includes a second HMVP candidate, the first index value being lower than the second index value. For an advanced motion vector prediction (AMVP) candidate list, one or more HMVP candidates from the HMVP table are selected in a reverse order, where the second HMVP candidate is selected before the first HMVP candidate according to the reverse order. The selected one or more HMVP candidates are added to the AMVP candidate list. The AMVP candidate list can be used to perform AMVP for the one or more blocks of video data.
H04N 19/132 - Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
H04N 19/157 - Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/423 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements
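To illustrate the reverse-order selection described above, here is a minimal Python sketch that appends HMVP candidates to an AMVP list starting from the highest table index (the most recently added entry); the list sizes and the pruning rule are assumptions.

# Hedged sketch: HMVP candidates added to an AMVP list in reverse table order.
def add_hmvp_to_amvp(amvp_list, hmvp_table, max_amvp_size=2, num_hmvp=4):
    for candidate in reversed(hmvp_table[-num_hmvp:]):   # highest index first
        if len(amvp_list) >= max_amvp_size:
            break
        if candidate not in amvp_list:                   # simple pruning, illustrative
            amvp_list.append(candidate)
    return amvp_list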
Methods, systems, and devices for using a common physical downlink control channel (PDCCH) to indicate transmission beams to be used by a base station during a transmission opportunity or a periodicity to monitor search spaces are described. A user equipment (UE) may monitor a search space (in some cases, using a periodicity) for the common PDCCH to determine if transmission beams associated with the UE are to be used during the transmission opportunity. If so, the UE may wake-up during at least a portion of the transmission opportunity to receive additional information. If the common PDCCH indicates that no transmission beams associated with the UE are to be used during the transmission opportunity, the UE may enter or re-enter the sleep state. The UE may switch to a new periodicity for monitoring the search space based on the information transmitted within the common PDCCH.
H04B 7/0408 - Diversity systems; Multi-antenna systems, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas using two or more beams, i.e. beam diversity
A device for decoding video data includes a memory configured to store video data and one or more processors implemented in circuitry and configured to store sets of adaptive loop filter (ALF) parameters in a one-dimensional array in the memory, the one-dimensional array having a predefined size of N memory elements, N being a positive integer value, wherein the one or more processors are configured to store, in one or more of the memory elements of the array, both corresponding ALF parameters and a temporal layer identifier (ID) value indicating a temporal layer from which the corresponding ALF parameters are to be estimated; decode one or more blocks of the video data; and filter the one or more blocks using the ALF parameters of the one-dimensional array. The device may further encode the one or more blocks prior to decoding the one or more blocks.
H04N 19/82 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
H04N 19/423 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements
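The sketch below is one hedged way to picture the one-dimensional ALF parameter store described above, with each element holding a parameter set together with a temporal layer ID; the FIFO replacement policy and the eligibility rule are assumptions for illustration.

# Hedged sketch: fixed-size 1-D store of ALF parameter sets plus temporal IDs.
class AlfParamStore:
    def __init__(self, n):
        # One-dimensional array with a predefined size of N memory elements.
        self.entries = [None] * n

    def add(self, alf_params, temporal_id):
        # Newest set goes to the front; the oldest falls off the end
        # (FIFO replacement is one possible policy, used here for illustration).
        self.entries = [(alf_params, temporal_id)] + self.entries[:-1]

    def usable_for(self, current_temporal_id):
        # Only sets stored with the same or a lower temporal layer ID are used.
        return [params for (params, tid) in (e for e in self.entries if e)
                if tid <= current_temporal_id]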
A video decoder configured to obtain a first reconstructed block of video data that includes a first sample with a first value; apply deblocking filtering to the first reconstructed block to create a deblocking filtered block; determine a first clipping value for the first sample based on a location of the first sample relative to a boundary of the reconstructed block and based on a size of the reconstructed block; compare the first clipping value to an amount of modification to the first sample caused by the deblocking filtering; in response to the amount of modification to the first sample caused by the deblocking filtering being greater than the first clipping value, modify the first value by the first clipping value to determine a first filtered value for the first sample; and output a deblocking filtered block of video data with the first sample having the first filtered value.
H04N 19/117 - Filters, e.g. for pre-processing or post-processing
H04N 19/157 - Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/182 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
H04N 19/187 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scalable video layer
H04N 19/80 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
H04N 19/86 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
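As a hedged sketch of the clipping step described above, the function below limits how much deblocking may change a boundary sample, with the clip value depending on the sample's distance from the boundary and the block size; the particular clip-value table is an assumption.

# Hedged sketch: position- and size-dependent clipping of the deblocking delta.
def clip_deblocked_sample(orig_value, filtered_value, dist_to_boundary, block_size):
    # Samples closer to the boundary, and larger blocks, tolerate larger changes.
    base_clip = 8 if block_size >= 32 else 4
    clip_value = max(1, base_clip >> dist_to_boundary)
    delta = filtered_value - orig_value
    if abs(delta) > clip_value:
        delta = clip_value if delta > 0 else -clip_value
    return orig_value + delta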
Techniques are described for history-based candidate list operations in video coding for determining motion information for a current block. In one example, a device for decoding video data includes a memory configured to store a history-based candidate list and a video decoder. The video decoder is configured to construct the history-based candidate list by storing, in the memory, motion information of reconstructed blocks into the history-based candidate list as candidates of the history-based candidate list, identify a subset of candidates of the history-based candidate list, generate a candidate list based on the identified subset of candidates of the history-based candidate list, and reconstruct a current block based on the generated candidate list.
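The Python sketch below is one hedged illustration of the history-based candidate list operation just described: motion information of reconstructed blocks is pushed into a history list, and a subset of that list (here, every second entry, newest first) is merged into the regular candidate list. The subset rule, list sizes, and pruning are assumptions.

# Hedged sketch: maintaining a history-based candidate list and using a subset
# of it to build the candidate list for a current block.
def update_history(history, motion_info, max_size=6):
    if motion_info in history:
        history.remove(motion_info)      # duplicate pruning, illustrative
    history.append(motion_info)
    if len(history) > max_size:
        history.pop(0)                   # drop the oldest entry

def build_candidate_list(spatial_temporal_cands, history, subset_step=2, max_cands=6):
    candidates = list(spatial_temporal_cands)
    # Identify a subset of the history list (every subset_step-th entry,
    # newest first) and append it to the candidate list.
    for cand in list(reversed(history))[::subset_step]:
        if len(candidates) >= max_cands:
            break
        if cand not in candidates:
            candidates.append(cand)
    return candidates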
Techniques are described using Position Dependent Intra Prediction Combination (PDPC) with wide angle intra prediction. For example, a size of the current block of video data can be determined. Based on the size, a wide angle intra-prediction mode can be determined for the current block. A prediction block for the current block can be determined using the wide angle intra-prediction mode. A prediction sample from the prediction block can be modified to generate a modified prediction sample using PDPC, which can include determining one or more reference samples that are external to the current block based on the wide angle intra-prediction mode, and modifying the prediction sample to generate the modified prediction sample based on the determined one or more reference samples.
H04N 19/11 - Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
H04N 19/119 - Adaptive subdivision aspects e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
H04N 19/167 - Position within a video image, e.g. region of interest [ROI]
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/593 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
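As a hedged illustration of the PDPC modification described above, the sketch below blends one prediction sample with reference samples assumed to have been fetched along the wide-angle direction, using position-dependent weights that decay away from the block edges; the weight formulas and the 8-bit clipping are assumptions.

# Hedged sketch: PDPC-style modification of a single prediction sample.
def pdpc_modify(pred, x, y, ref_left, ref_top, shift=6):
    # ref_left / ref_top: reference samples external to the block, selected
    # according to the wide-angle intra mode (assumed already determined).
    w_left = 32 >> min(x >> 1, 31)
    w_top = 32 >> min(y >> 1, 31)
    w_pred = 64 - w_left - w_top
    value = (w_left * ref_left + w_top * ref_top + w_pred * pred[y][x] + 32) >> shift
    return max(0, min(255, value))       # clip to 8-bit range (assumption)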
63.
MULTIPLE HISTORY BASED NON-ADJACENT MVPS FOR WAVEFRONT PROCESSING OF VIDEO CODING
An example device for coding video data includes a memory configured to store video data; and one or more processing units implemented in circuitry and configured to: store motion information for a first coding tree unit (CTU) line of a picture in a first history motion vector predictor (MVP) buffer of the memory; reset a second history MVP buffer of the memory; and after resetting the second history MVP buffer, store motion information for a second CTU line of the picture in the second history MVP buffer, the second CTU line being different than the first CTU line. Separate threads of a video coding process executed by the one or more processing units may process respective CTU lines, in some examples.
H04N 19/436 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
H04N 19/52 - Processing of motion vectors by encoding by predictive encoding
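The following Python sketch is one hedged picture of the per-CTU-line history buffers described above: each CTU line gets its own history MVP buffer, which is reset before the line starts so that wavefront threads can run independently. The buffer size, pruning, and the assumption that per-block motion information is provided as input are illustrative.

# Hedged sketch: one history MVP buffer per CTU line, reset before each line.
def process_ctu_lines(ctu_lines, max_hmvp=6):
    """ctu_lines: list of CTU lines, each a list of per-block motion-info items
    produced while coding that line (an input assumption for this sketch)."""
    per_line_buffers = []
    for line_motion in ctu_lines:          # each CTU line may run on its own thread
        hmvp = []                          # the buffer for this line is reset here
        for motion_info in line_motion:
            if motion_info in hmvp:
                hmvp.remove(motion_info)   # duplicate pruning, illustrative
            hmvp.append(motion_info)
            if len(hmvp) > max_hmvp:
                hmvp.pop(0)
        per_line_buffers.append(hmvp)
    return per_line_buffers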
64.
EFFICIENT WEIGHTED PROBABILITY ESTIMATION FOR BINARY ARITHMETIC CODING
An example coding device, such as a video coding (encoding or decoding) device, is configured to determine a plurality of estimator functions to be used to calculate a probability of a symbol having a particular value, wherein the symbol is to be binary arithmetic coded according to the probability, and wherein the estimator functions do not include division operations; determine a plurality of weights for the plurality of estimator functions such that when the respective weights are applied to the estimator functions, a resulting sum of outputs of the plurality of estimator functions yields the probability; calculate the probability of the symbol having the particular value, including: execute the estimator functions without using the division operations to determine the outputs; and calculate a sum of the outputs as the probability; and binary arithmetic code the symbol according to the probability of the symbol having the particular value.
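As a hedged illustration of division-free weighted probability estimation, the sketch below maintains two shift-based estimators with different adaptation rates and combines them with equal weights, so the weighted sum needs only an add and a shift; the precision, shift amounts, and equal weighting are assumptions.

# Hedged sketch: two division-free estimators combined by equal weights.
PROB_BITS = 15
ONE = 1 << PROB_BITS

def update_estimators(p_fast, p_slow, bin_val, shift_fast=4, shift_slow=7):
    target = ONE if bin_val else 0
    p_fast += (target - p_fast) >> shift_fast   # fast-adapting estimator
    p_slow += (target - p_slow) >> shift_slow   # slow-adapting estimator
    return p_fast, p_slow

def combined_probability(p_fast, p_slow):
    # Equal weights of 1/2 each: the weighted sum of the estimator outputs.
    return (p_fast + p_slow + 1) >> 1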
A video encoder and video decoder may determine a set of adaptive loop filters, from among a plurality of sets of adaptive loop filters, on a per-block basis. Each set of adaptive loop filters may include filters from a previous picture, filters signaled for the current picture, and/or pre-trained filters. By varying the set of adaptive loop filters on a per-block basis, the adaptive loop filters available for each block of video data may be more adapted to local statistics of the video data.
H04N 19/82 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
H04N 19/117 - Filters, e.g. for pre-processing or post-processing
H04N 19/154 - Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
H04N 19/174 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a slice, e.g. a line of blocks or a group of blocks
H04N 19/463 - Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
66.
POSITION DEPENDENT INTRA PREDICTION COMBINATION EXTENDED WITH ANGULAR MODES
Techniques are described of using Position Dependent Intra Prediction Combination (PDPC). A video coder such as a video encoder or a video decoder utilizes PDPC in cases where a current block is intra predicted using an angular intra prediction mode.
H04N 19/593 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
H04N 19/11 - Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
67.
SIGNALING MECHANISMS FOR EQUAL RANGES AND OTHER DRA PARAMETERS FOR VIDEO CODING
Dynamic Range Adjustment can be used to correct distortions that can occur when the dynamic range of the colors in video is transformed. In various examples, Dynamic Range Adjustment can be performed using a piecewise linear function that takes as input a range of color values. Parameters describing the piecewise linear function can be encoded into a bitstream, and the parameters can be used by a decoding process to reconstruct the piecewise linear function. To improve encoding efficiency, techniques can be applied by which redundant values in the parameters need not be encoded when the range of input values for the piecewise linear function can be divided into portions having equal lengths. The decoding process can derive the omitted values from values that are provided, and can apply the piecewise linear function to decoded video data to perform Dynamic Range Adjustment.
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
H04N 19/117 - Filters, e.g. for pre-processing or post-processing
H04N 19/154 - Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
H04N 19/36 - Scalability techniques involving formatting the layers as a function of picture distortion after decoding, e.g. signal-to-noise [SNR] scalability
H04N 19/80 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
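The sketch below is one hedged way to picture the equal-ranges case described above: the per-piece range length is derived from the overall range and the number of pieces rather than being signaled, and only per-piece scales are needed to rebuild the piecewise linear function as a lookup table. All names and the signaled-parameter set are assumptions.

# Hedged sketch: rebuilding a piecewise linear DRA function with equal ranges.
def build_dra_lut(range_min, range_max, num_pieces, scales, offset=0):
    length = (range_max - range_min) // num_pieces    # derived, not signalled
    lut = {}
    out_val = offset
    for piece in range(num_pieces):
        start = range_min + piece * length
        for x in range(start, start + length):
            lut[x] = out_val + scales[piece] * (x - start)
        out_val += scales[piece] * length              # keep the function continuous
    return lut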
68.
SYSTEM AND METHOD OF FLOATING POINT MULTIPLY OPERATION PROCESSING
A processor includes an integer multiplier configured to execute an integer multiply instruction to multiply significand bits of at least one floating point operand of a floating point multiply operation. The processor also includes a floating point multiplier configured to execute a special purpose floating point multiply accumulate instruction with respect to an intermediate result of the floating point multiply operation and the at least one floating point operand to generate a final floating point multiplication result.
Methods, systems, and devices for wireless communications are described that provide for signaling and switching of beam pair links (BPLs) for directional transmission beams between a base station and a user equipment (UE). A threshold value may be determined, which corresponds to an amount of time for a UE to receive and decode control information, and apply a different BPL than a current BPL that is in use. The UE may maintain a BPL for data, which is used during data transmission time intervals (TTIs) until an indication is received to change the BPL for data. The UE and the base station may determine to change between BPLs based at least in part on the threshold value and a scheduling offset between a control channel transmission that allocates resources for a data TTI and a start of the data TTI.
H04B 7/06 - Diversity systems; Multi-antenna systems, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station
H04B 7/08 - Diversity systems; Multi-antenna systems, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the receiving station
70.
MULTI-SLOT SCHEDULING WITH REPETITIVE TRANSMISSION OF A TRANSPORT BLOCK WITH DIFFERENT REDUNDANCY VERSIONS
Methods, systems, and devices for wireless communications are described. A wireless device may receive downlink control information corresponding to a transmission of a transport block over a plurality of transmission time intervals (TTIs). The wireless device may identify a redundancy version sequence for the transport block. The redundancy version sequence may be a pre-defined sequence or may be configured by higher layer signaling based on a sequence criterion. The wireless device may transmit or receive a plurality of redundancy versions of the transport block in the plurality of TTIs based at least in part on the redundancy version sequence. In some examples, the plurality of redundancy versions may be determined based at least in part on a starting redundancy version identified by another wireless device.
The various aspects include transmitting, by a transceiver of the wireless communications device, an indication of a primary service preference for a primary service and a secondary service preference for a secondary service, receiving, by the transceiver, a radio access technology support configuration from the network at least in part in response to the transmitted indication of the primary service preference and the secondary service preference, and supporting the primary service or the secondary service based at least in part on the channel layer configuration. In this manner, the wireless communications device provides the network with multiple ranked service preferences, so that if a network down selection is needed, the network may select a lower ranked service preference rather than generating a channel layer configuration without knowledge of which settings are most efficient for the wireless communications device.
Certain aspects of the present disclosure relate to communication systems, and more particularly, to determining a timing advance granularity for uplink communications associated with different numerologies.
Certain aspects of the present disclosure provide techniques and apparatus for remaining minimum system information (RMSI) physical downlink control channel (PDCCH) monitoring. A method for wireless communications by a user equipment (UE) is provided. During initial access of a cell, the UE determines a default first periodicity to monitor for at least a first PDCCH scheduling RMSI. After the initial access, the UE determines a second periodicity to monitor for at least a second PDCCH scheduling RMSI based on the default first periodicity or based on an indication received in a previous RMSI from the cell. The UE monitors for at least the first PDCCH during initial access at the default first periodicity and monitors for at least the second PDCCH at the second periodicity after the initial access.
Aspects of the present disclosure include methods, apparatuses, and computer readable media for inserting an offset between a channel resource element of a channel resource block and a synchronization resource element of a synchronization signal block, and transmitting a bandwidth value of the offset to a user equipment.
Various aspects of the present disclosure generally relate to wireless communication. In some aspects, a transmitter user equipment may determine a beam to be used to transmit a vehicle-to-everything (V2X) communication to a receiver UE; may determine one or more eligible resource blocks of the beam that are candidates for the V2X communication based at least in part on a schedule associated with the beam; may transmit, to the receiver UE, a proposed schedule that indicates the one or more eligible resource blocks of the beam that are candidates for the V2X communication; and may transmit the V2X communication to the receiver UE via the beam based at least in part on transmitting the proposed schedule. Numerous other aspects are provided.
Certain aspects of the present disclosure provide techniques for determining when to transmit and transmitting an aperiodic tracking reference signal (TRS). Certain aspects provide a method for wireless communication. The method generally includes determining a channel condition change of a downlink channel between a first wireless device and a second wireless device. The method further includes determining whether the channel condition change satisfies a trigger condition. The method further includes triggering transmission of an aperiodic tracking reference signal when the channel condition change satisfies the trigger condition.
H04L 5/00 - Arrangements affording multiple use of the transmission path
H04B 7/06 - Diversity systems; Multi-antenna systems, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station
77.
A MECHANISM TO ENABLE INTERWORKING BETWEEN NETWORK SLICING AND EVOLVED PACKET CORE CONNECTIVITY
Aspects of the present disclosure relate to a mechanism to enable interworking between fifth generation system (5GS) network slicing and evolved packet core (EPC) connectivity. In an example, techniques are provided for existing packet data unit (PDU) sessions that provide connectivity to a network slice from a set of network slices. Connectivity to the network slice is in response to a user equipment (UE), that uses network slices, moving between a 5G network and a 4G network. The existing PDU sessions are connected to a dedicated EPC core network that supports the same services provided by the network slice.
H04W 60/00 - Affiliation to network, e.g. registration; Terminating affiliation with the network, e.g. de-registration
H04W 8/02 - Processing of mobility data, e.g. registration information at HLR [Home Location Register] or VLR [Visitor Location Register]; Transfer of mobility data, e.g. between HLR, VLR or external networks
This disclosure provides systems, methods, and apparatus, including computer programs encoded on computer storage media, for multiplexing clients of different generations in trigger-based transmissions, including trigger-based transmissions in extremely-high throughput (EHT) Wi-Fi systems. An access point (AP) may generate a trigger frame compatible with two types of stations (STAs), such as EHT STAs and legacy (or high efficiency (HE)) STAs. The AP may transmit the trigger frame to a group of STAs, where legacy STAs may process the trigger frame as a legacy trigger frame. EHT STAs may process the trigger frame to determine resource unit (RU) allocations for uplink transmissions in a bandwidth greater than a legacy bandwidth. An EHT STA may determine the resources in the larger bandwidth based on an EHT RU allocation table, a legacy RU allocation table and an additional bit in the trigger frame, or an ordering of RU allocations in the trigger frame.
Wireless communication apparatus and methods related to wireless communication, for example, device-to-device feedback are described. In aspects, a method of wireless communication may include communicating a sidelink traffic communication using a sidelink communication structure, and communicating an allocation for sidelink feedback using at least one feedback symbol of the sidelink communication structure. In aspects, the method includes communicating, by a first wireless communication device, a sidelink feedback communication using at least one feedback symbol of the sidelink communication structure, wherein the sidelink feedback communication is associated with the sidelink traffic communication. Numerous other aspects are provided.
Techniques and systems are provided for processing image data using one or more neural networks. For example, a patch of raw image data can be obtained. The patch can include a subset of pixels of a frame of raw image data, and the frame can be captured using one or more image sensors. The patch of raw image data includes a single color component for each pixel of the subset of pixels. At least one neural network can be applied to the patch of raw image data to determine a plurality of color component values for one or more pixels of the subset of pixels. A patch of output image data can then be generated based on application of the at least one neural network to the patch of raw image data. The patch of output image data includes a subset of pixels of a frame of output image data, and also includes the plurality of color component values for one or more pixels of the subset of pixels of the frame of output image data. Application of the at least one neural network causes the patch of output image data to include fewer pixels than the patch of raw image data. Multiple patches from the frame can be processed by the at least one neural network in order to generate a final output image. In some cases, the patches from the frame can be overlapping so that the final output image contains a complete picture.
Aspects of the disclosure relate to a method of operating a user equipment for wireless communication with a network. In some aspects, the UE establishes a connection to a network and obtains a control plane message from the network. The control plane message may include one or more types of policy information if a size of the one or more types of policy information is less than or equal to a maximum payload size of the control plane message, or information indicating at least a network location from where the one or more types of policy information may be obtained by the UE over a user plane if the size of the one or more types of policy information is greater than the maximum payload size of the control plane message, or a combination thereof. Other aspects, embodiments, and features are also claimed and described.
Wireless communication techniques that include beam training performed to aid mobile devices in identifying beams for use in communications (e.g., uplink communications) are discussed. A mobile device may transmit to a base station an indication of a level of beamforming reciprocity of the mobile device. A base station/gNB may transmit an indication of one or more transmission beams to be used by the mobile device for uplink wireless communication based on the level of beamforming reciprocity. The one or more transmission beams to be used by the mobile device for uplink communication may include a reciprocal transmission beam. A mobile device not capable of full beamforming reciprocity may determine a transmission beam through uplink beam training. A mobile device may transmit to a base station information using at least one of the one or more transmission beams. Other aspects and features are also claimed and discussed.
H04B 7/06 - Diversity systems; Multi-antenna systems, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station
H04B 7/08 - Diversity systems; Multi-antenna systems, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the receiving station
H04L 5/00 - Arrangements affording multiple use of the transmission path
83.
CARRIER-DEPENDENT RANDOM ACCESS CHANNEL (RACH) RESPONSE SEARCH SPACE
Certain aspects of the present disclosure relate to communication systems, and more particularly, to random access channel (RACH) procedures in deployments where a RACH transmission may be sent on different UL carriers, which may include supplemental UL (SUL) carriers.
A device for video decoding can be configured to obtain, from a syntax structure in a bitstream comprising an encoded representation of the video data, a syntax element indicating whether 6-parameter affine prediction is enabled for blocks corresponding to the syntax structure, wherein the blocks corresponding to the syntax structure comprise a first block; based on the syntax element indicating that the 6-parameter affine prediction is enabled for the blocks corresponding to the syntax structure, use the 6-parameter affine prediction to generate a predictive block for the first block; and use the predictive block and residual data to reconstruct the first block.
H04N 19/54 - Motion estimation other than block-based using feature points or meshes
H04N 19/109 - Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
H04N 19/147 - Data rate or code amount at the encoder output according to rate distortion criteria
H04N 19/174 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a slice, e.g. a line of blocks or a group of blocks
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
85.
CONFIGURATION ASPECTS OF A TRACKING REFERENCE SIGNAL IN NEW RADIO
Methods, systems, and devices for wireless communications are described for configuration aspects of a tracking reference signal in New Radio. A base station may select a first burst duration and a second burst duration for a tracking reference signal (TRS) burst, the first burst duration being different from the second burst duration, and may transmit configuration information indicating the first burst duration and the second burst duration to a user equipment (UE). The base station may transmit a first TRS burst having the first burst duration and a second TRS burst having the second burst duration. The UE may detect the first TRS burst having the first burst duration and the second TRS burst having the second burst duration based at least in part on the configuration information, and perform resource tracking based at least in part on the detected first TRS burst and the second TRS burst.
At a receiver, errors may occur in estimating phase trajectory based on PT-RS due to a window effect. In order to address the problem of such errors, a transmitter determines at least one location for inserting PT-RS samples into a sequence of a plurality of samples, wherein a first set of the samples comprises a first number of samples at a beginning of the sequence and/or a second number of samples at an end of the sequence, and wherein the at least one location for the PT-RS samples is within a second set of the plurality of samples. The transmitter inserts the PT-RS samples into the sequence based on the determined at least one location and transmits a signal based on the inserted PT-RS samples. A receiver extracts the PT-RS samples and estimates phase errors for data samples in the received transmission based on the extracted PT-RS samples.
Certain aspects of the present disclosure generally relate to wireless communication. In some aspects, a user equipment (UE) may receive scheduling information for a transmission associated with a particular radio access technology (RAT) of a 4G RAT or a 5G RAT, wherein the scheduling information identifies a particular resource of one of a first set of resources for the 4G RAT or a second set of resources for the 5G RAT, wherein one or more resources of the first set of resources are guaranteed for the 4G RAT based at least in part on a reference 4G time division duplexing (TDD) configuration, and wherein the one or more resources of the first set of resources and the second set of resources do not overlap in a time domain; and transmit or receive the transmission using the particular resource. Numerous other aspects are provided.
88.
TRANSFER OF PROTECTED CONFIGURATION DATA FROM HOME MOBILE NETWORK
Protected configuration data may be sent to user equipment subscribed to a first wireless communication network by a second wireless communication network with which the user equipment is registered. The first wireless communication network may protect the configuration data based on at least one first key when sending the data to the second wireless communication network. The second wireless communication network may send the configuration data to user equipment in messages protected by at least one second key. User equipment receives the message from the second wireless communication network, extracts the configuration data using the at least one second key, determines whether the configuration data has been altered at least in part based on the at least one first key, and applies the configuration data if the configuration data has not been altered. The protected configuration data may be sent in an information element of a control plane message.
Aspects of the present disclosure provide techniques for handling power transitions in transmissions in new radio (NR) devices. An exemplary method includes changing from using a first transmit power during a first portion of a transmission to a second transmit power during a second portion of the transmission, and taking action to mitigate a potential phase coherence loss associated with the changing from the first transmit power to the second transmit power.
Methods, systems, and devices for wireless communications are described. A wireless device may identify a default monitoring periodicity for monitoring a control channel for a slot format indicator (SFI). For example, the default monitoring periodicity may be based on semi-static control signaling received from a base station or another suitable network entity. The wireless device may subsequently receive an indication of a dynamic monitoring periodicity for monitoring the control channel for the SFI. In some cases, the dynamic monitoring periodicity indication may be included in the SFI. The wireless device may then monitor the control channel for the SFI based at least in part on the dynamic monitoring periodicity.
The present disclosure provides techniques that may be applied, for example, for providing network policy information in a secure manner. In some cases, a UE may receive a first message for establishing a secure connection with a network, wherein the first message comprises network policy information, generate a first key based in part on the network policy information, and use the first key to verify the network policy information.
Methods and techniques are described for limiting a size of LTE Positioning Protocol (LPP) messages in a location session between a user equipment and location server. In one embodiment, a first device sends a first LPP message to a second device, indicating that the first device is capable of receiving segmented LPP messages. Subsequently, the first device receives a plurality of LPP message segments from the second device comprising one or more non-final LPP message segments and a final LPP message segment, where each LPP message segment includes a "non-final" or "final" indication. The first device stores the non-final LPP message segments and processes the LPP message segments after receiving the final LPP message segment. Prior to sending the first LPP message, the first device may receive an LPP message from the second device indicating the second device is capable of sending segmented LPP messages.
Certain aspects of the present disclosure generally relate to wireless communication. In some aspects, a user equipment (UE) may determine a limit on a number of resource blocks (RBs) permitted to be used for a vehicle-to-everything (V2X) transmission by the UE; may determine, based at least in part on the limit, one or more parameters for the V2X transmission, wherein the one or more parameters include at least one of a modulation and coding scheme (MCS) for the V2X transmission, a number of transport blocks (TBs) for the V2X transmission, a number of RBs per TB for the V2X transmission, or a retransmission configuration for the V2X transmission; and may transmit the V2X transmission based at least in part on the one or more parameters. Numerous other aspects are provided.
An example device for coding video data includes a memory configured to store video data, and one or more processors implemented in circuitry and configured to code a first motion vector difference (MVD) representing a difference between a first motion vector of a current block of video data predicted using affine prediction and a first motion vector predictor (MVP) for the first motion vector, predict a second MVD from the first MVD for a second motion vector of the current block, and code the current block using affine prediction according to the first motion vector and the second motion vector. Predicting the second MVD from the first MVD in this manner may reduce the bitrate of a bitstream including coded video data, as well as improve processing efficiency.
H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
H04N 19/137 - Motion inside a coding unit, e.g. average field, frame or block difference
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/52 - Processing of motion vectors by encoding by predictive encoding
H04N 19/54 - Motion estimation other than block-based using feature points or meshes
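As a hedged illustration of the MVD prediction described above, the sketch below reconstructs the second control-point MVD as the coded difference plus the first MVD, so only a typically small delta needs to be coded; the component layout and function names are assumptions.

# Hedged sketch: predicting the second affine MVD from the first MVD.
def decode_second_mvd(coded_delta, first_mvd):
    # second MVD = coded difference + first MVD, per component
    return (coded_delta[0] + first_mvd[0], coded_delta[1] + first_mvd[1])

def decode_affine_mvs(mvp1, mvd1, mvp2, coded_delta2):
    mv1 = (mvp1[0] + mvd1[0], mvp1[1] + mvd1[1])
    mvd2 = decode_second_mvd(coded_delta2, mvd1)
    mv2 = (mvp2[0] + mvd2[0], mvp2[1] + mvd2[1])
    return mv1, mv2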
A low noise amplifier (LNA) device includes a first transistor on a semiconductor on insulator (SOI) layer. The first transistor includes a source region, a drain region, and a gate. The LNA device also includes a first-side gate contact coupled to the gate. The LNA device further includes a second-side source contact coupled to the source region. The LNA device also includes a second-side drain contact coupled to the drain region.
H01L 27/12 - Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including integrated passive circuit elements with at least one potential-jump barrier or surface barrier the substrate being other than a semiconductor body, e.g. an insulating body
H01L 21/84 - Manufacture or treatment of devices consisting of a plurality of solid state components or integrated circuits formed in, or on, a common substrate with subsequent division of the substrate into plural individual devices to produce devices, e.g. integrated circuits, each consisting of a plurality of components the substrate being other than a semiconductor body, e.g. being an insulating body
96.
BULK LAYER TRANSFER PROCESSING WITH BACKSIDE SILICIDATION
A radio frequency integrated circuit (RFIC) includes a bulk semiconductor die. The RFIC also includes a first active/passive device on a first-side of the bulk semiconductor die, and a first deep trench isolation region extending from the first-side to a second-side opposite the first-side of the bulk semiconductor die. The RFIC also includes a contact layer on the second-side of the bulk semiconductor die. The RFIC further includes a second-side dielectric layer on the contact layer. The first deep trench isolation region may extend through the contact layer and into the second-side dielectric layer.
H01L 23/48 - Arrangements for conducting electric current to or from the solid state body in operation, e.g. leads or terminal arrangements
H01L 23/522 - Arrangements for conducting electric current within the device in operation from one component to another including external interconnections consisting of a multilayer structure of conductive and insulating layers inseparably formed on the semiconductor body
97.
TECHNIQUES FOR BEAM MANAGEMENT IN WIRELESS COMMUNICATIONS
Aspects of the present disclosure describe managing beams in wireless communications. A beam management event configuration indicating a type of at least one beam to measure in determining occurrence of a trigger condition for a beam management event can be received. A parameter of a signal received from one or more nodes can be measured, where the signal corresponds to the type of the at least one beam. The occurrence of the trigger condition for the beam management event can be determined based on the parameter of the signal. An indication of the occurrence of the trigger condition can be reported to the one or more nodes or a different node.
Certain aspects of the present disclosure relate to methods and apparatus relating to rate matching for new radio (NR) physical downlink shared channel (PDSCH) and physical uplink shared channel (PUSCH). In certain aspects, a method includes receiving a rate matching resource (RMR) configuration from a serving cell. The method also includes identifying one or more first resource elements (REs) to be rate matched around at least in part based on a transmission numerology associated with the RMR configuration, wherein the one or more first REs are used for reference signal (RS) transmission in the serving cell or a neighboring cell. The method further includes mapping a physical downlink shared channel (PDSCH) to one or more second REs not including the first REs.
A method, an apparatus, a base station, a user equipment, and a computer program product for wireless communication are provided. The base station may configure resource allocation for wakeup signaling based at least in part on one or more resource patterns corresponding to different user equipment groups and/or antenna port configurations. The user equipment may receive a wakeup signal based at least in part on the resource allocation, detect the wakeup signal based at least in part on a preamble of the wakeup signal, and receive a subsequent communication based at least in part on the wakeup signal. Numerous other aspects are provided.
Certain aspects of the present disclosure generally relate to wireless communication. In some aspects, a user equipment may determine a measurement regarding a plurality of bandwidth parts of a carrier of the user equipment based at least in part on a measurement object associated with the measurement; determine a cell quality regarding the plurality of bandwidth parts based at least in part on the measurement object; and/or transmit a measurement report identifying the measurement and/or the cell quality based at least in part on the measurement object. Numerous other aspects are provided.