The present disclosure relates to a livestreaming interaction method and apparatus, a readable medium, and an electronic device, for realizing reservation-based game interaction in a livestreaming room, spreading out audiences who would otherwise scramble for game slots, and relieving pressure on a game server. The method comprises: in response to a trigger of a user, acquiring livestreaming data of a target livestreaming room, the livestreaming data comprising game data of a target cloud game; determining a reservation state of the user, and determining, in the game data, data to be displayed on the basis of the reservation state; when the reservation state is "reserved", displaying first game data to be displayed; when the reservation state is "not reserved", displaying second game data to be displayed, the second game data comprising a first game livestreaming list of the target livestreaming room; and in response to a selection operation of the user for at least one livestreaming session in the first game livestreaming list, updating the reservation state of the user in the livestreaming session to "reserved".
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: deriving, for a conversion between a video unit of a video and a bitstream of the video, a first target coding mode at decoder side based on coding information associated with the video unit, wherein the first target coding mode is a variant of a matrix weighted intra prediction (MIP) mode; and performing the conversion based on the first target coding mode.
H04N 19/159 - Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
3.
IMAGE PROCESSING METHOD, APPARATUS, DEVICE AND STORAGE MEDIUM
Embodiments of the present disclosure provide an image processing method and apparatus, a device and a storage medium. The method comprises: determining, based on an original size and a processed target size corresponding to a first image to be processed, a target downsampling rate corresponding to the first image; determining a target downsampling network model corresponding to the first image based on the target downsampling rate, at least one pre-trained and obtained downsampling network model and a preset downsampling rate corresponding to the downsampling network model; determining a second image satisfying a preset downsampling condition based on a preset downsampling rate corresponding to the target downsampling network model, the target downsampling rate and the first image; downsampling the second image based on the target downsampling network model to obtain a target image having the target size.
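A minimal sketch of the model-selection step described above, assuming (hypothetically) a dictionary of pretrained downsampling networks keyed by their preset rates; the pre-resize that produces the "second image" is approximated here with plain bilinear interpolation, and all names are illustrative rather than part of the disclosed method.

    # Sketch only: pick the pretrained downsampler whose preset rate best matches the
    # target rate, then pre-resize so that applying it lands exactly on the target size.
    import numpy as np
    from PIL import Image

    def pick_downsampler(target_rate, models):
        """models: {preset_rate: callable(np.ndarray) -> np.ndarray} (hypothetical)."""
        candidates = [r for r in models if r <= target_rate]
        preset = max(candidates) if candidates else min(models)
        return preset, models[preset]

    def downsample(first_image: Image.Image, target_w: int, target_h: int, models):
        orig_w, orig_h = first_image.size
        target_rate = orig_w / target_w                      # assume uniform scaling
        preset_rate, net = pick_downsampler(target_rate, models)
        # "Second image": resized so the preset-rate network outputs the target size.
        second = first_image.resize((int(target_w * preset_rate), int(target_h * preset_rate)),
                                    Image.BILINEAR)
        return net(np.asarray(second))                       # target image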
The present disclosure relates to the technical field of communications, and relates to a data transmission method, device and system. The data transmission method includes: establishing a connection to a receiving end of data to be transmitted; and sending to the receiving end at least one of an expiration offset frame used for indicating data which is not retransmitted and is among the data to be transmitted, a boundary frame used for indicating the boundary position of a data block in a data stream of the data to be transmitted, and a related frame used for associating a plurality of data streams of the data to be transmitted which are created on the basis of the same request, such that the receiving end transmits, according to at least one of the expiration offset frame, the boundary frame and the related frame, the data to be transmitted.
A method of processing video data, comprising obtaining a non-adjacent reconstruction sample of a picture of a video, inputting the non-adjacent reconstruction sample of the picture as input for an adaptive loop filter (ALF), and performing a conversion between the video and a bitstream of the video based on the ALF.
H04N 19/117 - Filters, e.g. for pre-processing or post-processing
H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
H04N 19/132 - Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
H04N 19/82 - Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
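A toy illustration of feeding non-adjacent reconstruction samples into an ALF-style linear filter, as the abstract above describes; the tap offsets and weights below are made up for illustration and are not those of any codec specification.

    import numpy as np

    def alf_like_filter(recon: np.ndarray, y: int, x: int) -> float:
        """Filter one sample using adjacent and non-adjacent reconstruction taps."""
        taps = {                       # (dy, dx): weight, illustrative values only
            (0, 0): 0.60,
            (0, 1): 0.08, (0, -1): 0.08, (1, 0): 0.08, (-1, 0): 0.08,
            (0, 4): 0.04, (4, 0): 0.04,   # non-adjacent taps, several samples away
        }
        h, w = recon.shape
        out = 0.0
        for (dy, dx), wgt in taps.items():
            yy = min(max(y + dy, 0), h - 1)   # clamp at picture boundaries
            xx = min(max(x + dx, 0), w - 1)
            out += wgt * recon[yy, xx]
        return out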
7.
INTERACTION METHOD AND APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM
An interaction method and apparatus, an electronic device and a storage medium. The method comprises: displaying a live streaming page, the live streaming page comprising first-layer data and second-layer data; if the first-layer data comprises interactive video information, in response to a hiding instruction, hiding the second-layer data and displaying the first-layer data; and executing a response to an interaction operation related to the first-layer data. The technical solution provided in the present application can help users switch from interactions outside the game to interactions within the game. During the entire process of switching interaction modes, there is no need to switch clients, so the switching operation is simplified and the user experience is improved.
H04N 21/478 - Supplemental services, e.g. displaying phone caller identification or shopping application
H04N 21/4788 - Supplemental services, e.g. displaying phone caller identification or shopping application communicating with other users, e.g. chatting
H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
8.
METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: determining, for a conversion between a current video block of a video and a bitstream of the video, at least one context model associated with the current video block based on initiation information of the at least one context model, the initiation information of the at least one context model being included in the bitstream; and performing the conversion based on the at least one context model.
H04N 19/13 - Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context-adaptive binary arithmetic coding [CABAC]
H04N 19/159 - Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
9.
INTERACTION METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
The present disclosure relates to an interaction method and apparatus, an electronic device, and a storage medium. The method comprises: displaying a livestreaming page, the livestreaming page comprising game video information of a target game (S110); in response to a first trigger operation for the target game, determining a target audience client, wherein the target audience client and a host client are in a voice chat state (S120); sending a game invitation to the target audience client; and if the target audience client confirms acceptance of the game invitation, the host client and the target audience client both controlling an interaction object of the target game in response to a trigger operation for the game video information (S130). With the technical solution provided by the present application, a host and a target audience member can jointly act as game participants and control an interaction object of the game, so that the livestreaming experience of the host and the audience's experience of watching the livestreamed game are improved.
The embodiments of the present disclosure relate to a code editing method and apparatus, and a device and a storage medium. The method provided herein comprises: acquiring a first code snippet to be edited; on the basis of prompt information and the first code snippet, calling a trained machine learning model, so as to obtain a model output of the machine learning model, wherein the prompt information is used for instructing the machine learning model to determine an editing operation for the code snippet, and generate a code editing result on the basis of the determined editing operation; and on the basis of the model output obtained from the machine learning model, determining an edited second code snippet.
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: obtaining, for a conversion between a video unit of a video and a bitstream of the video, one or more spatial geometric partitioning mode (SGPM) candidates for the video unit based on at least one of: history information or neighboring information; obtaining a prediction or construction of the video unit based on the one or more SGPM candidates; and performing the conversion based on the prediction or construction of the video unit.
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
12.
VIDEO GENERATION METHOD AND APPARATUS, MEDIUM, AND ELECTRONIC DEVICE
The present disclosure relates to a video generation method and apparatus, a medium, and an electronic device. The method includes: splitting a target text to obtain a plurality of sub-texts corresponding to the target text; performing character feature extraction on the target text to obtain a character feature of each character in the target text; determining a plurality of text content features respectively corresponding to the plurality of sub-texts, where a text content feature corresponding to each sub-text is used to represent content described in the sub-text; respectively determining a target feature corresponding to each sub-text based on the character feature and a text content feature corresponding to each sub-text; generating a text image corresponding to each sub-text based on the target feature; and generating a target video corresponding to the target text based on the plurality of sub-texts and text images corresponding to the plurality of sub-texts.
The present disclosure provides a content generation method and apparatus, an electronic device, and a storage medium. The method includes: obtaining first text content and an image prompt, where the first text content is used to describe content information of an image to be generated, and the image prompt is used to describe a generation requirement of the image to be generated; performing semantic analysis on the first text content and the image prompt based on a generative model to obtain a description keyword of the image to be generated corresponding to the first text content; and generating a target image corresponding to the first text content based on the description keyword of the image to be generated.
Provided in the embodiments of the present disclosure are a method and apparatus for sample generation, and a device and a storage medium. The method comprises: determining at least one data sample, wherein the at least one data sample is classified into a first category; on the basis of feature information of the at least one data sample, generating a first assessment criterion for the first category; on the basis of the at least one data sample and the first assessment criterion, constructing a first prompt input, wherein the first prompt input is at least used for guiding a first machine learning model to generate a data sample which conforms to the first assessment criterion; and by means of providing the first prompt input to the first machine learning model, obtaining at least one further data sample which is output by the first machine learning model, wherein the at least one further data sample belongs to the first category.
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. In the method, for a conversion between a current video block of a video and a bitstream of the video, prediction samples of the current video block are determined; the prediction samples of the current video block are updated based on a filtering process; and the conversion is performed based on the updated prediction samples.
H04N 19/82 - Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
16.
METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: clipping, for a conversion between a current video block of a video and a bitstream of the video, a value of a first sample associated with the current video block by using a clipping operation, at least one parameter for the clipping operation being indicated in the bitstream and the at least one parameter comprising at least one of an upper limit value or a lower limit value; and performing the conversion based on the clipped value.
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
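The clipping step itself is elementary; the point of the abstract above is that the bounds travel in the bitstream. A minimal sketch, with a hypothetical parameter container standing in for parsed syntax:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ClipParams:               # hypothetical stand-in for parsed bitstream syntax
        lower: Optional[int] = None
        upper: Optional[int] = None

    def clip_sample(value: int, p: ClipParams) -> int:
        if p.lower is not None:
            value = max(value, p.lower)
        if p.upper is not None:
            value = min(value, p.upper)
        return value

    # e.g. 10-bit samples with only an upper limit signalled
    print(clip_sample(1100, ClipParams(upper=1023)))   # -> 1023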
17.
QUESTION-AND-ANSWER DATA GENERATION METHOD AND APPARATUS, AND COMPUTER DEVICE AND STORAGE MEDIUM
The present disclosure relates to the field of data processing, and in particular to a question-and-answer (QA) data generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring a plurality of segmented document blocks, wherein each segmented document block includes a first preset number of pieces of text data; on the basis of the text data in the segmented document blocks and a target questioning model, obtaining target question data corresponding to the segmented document blocks; acquiring full data, wherein the full data is content information which is associated with the target question data and is included in a complete document composed of the plurality of segmented document blocks; on the basis of the segmented document blocks, the target question data, the full data and a target answering model, obtaining target answer data corresponding to the target question data; and generating QA data on the basis of the target question data and the target answer data. The present disclosure solves the problems in the related art of low efficiency and poor data quality in QA data mining for a document.
G06F 16/335 - Filtering based on additional data, e.g. user or group profiles
G06F 16/383 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
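A hedged sketch of the QA-generation pipeline described above; question_model, answer_model and retrieve_related are hypothetical callables standing in for the target questioning model, the target answering model and the full-data lookup over the complete document.

    def generate_qa(blocks, question_model, answer_model, retrieve_related):
        """blocks: list of segmented document blocks (each a string of text pieces).
        The three callables are placeholders, not a real API."""
        qa_pairs = []
        for block in blocks:
            question = question_model(block)                   # target question data
            full_data = retrieve_related(question, blocks)     # related content from the whole document
            answer = answer_model(block=block, question=question, full_data=full_data)
            qa_pairs.append({"question": question, "answer": answer})
        return qa_pairs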
18.
LIVE STREAMING INTERACTION METHOD, LIVE STREAMING SYSTEM, READABLE MEDIUM AND ELECTRONIC DEVICE
The present disclosure relates to a live streaming interaction method, a live streaming system, a readable medium and an electronic device, used for increasing interaction among multiple live streams on the basis of a cloud game and improving the interestingness of the live streams. The method comprises: displaying a first live streaming interface of a target live stream, wherein the first live streaming interface displays a game battle picture of a target cloud game, and live streamer information of live streamers participating in the target live stream and in other live streams of the target cloud game; in response to an interaction operation for a target interaction live streamer, determining a game interaction instruction corresponding to the interaction operation, wherein the target interaction live streamer is one of the live streamers of the target live stream and the other live streams; receiving game video stream data for updating the target cloud game, wherein the game video stream data is generated at least on the basis of the game interaction instruction and game operation instructions of the live streamers; and on the basis of the game video stream data, updating the game battle picture displayed on the first live streaming interface.
Embodiments of the present disclosure provide a solution for visual data processing. A method for visual data processing is proposed. The method comprises: performing a conversion between visual data and a bitstream of the visual data with a neural network (NN)-based model, the bitstream comprising a first indication indicating a level to which the bitstream conforms.
H04N 19/60 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
20.
METHOD AND APPARATUS FOR GENERATING IMAGE, ELECTRONIC DEVICE, AND STORAGE MEDIUM
The present disclosure relates to a method and apparatus for generating an image, an electronic device, and a storage medium. The method includes: obtaining M original images of a target scene captured by M image acquisition devices, wherein each of the M image acquisition devices provides a different viewpoint of the target scene, and M is a positive integer greater than or equal to 8; performing, for each of the M original images, an image deformation process on the original image based on depth information of the original image and intrinsic and extrinsic parameters of the image acquisition device, to obtain a deformed image; and performing a stitching process on the M deformed images subjected to the image deformation process, to obtain an omni-directional stereo panoramic image.
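A rough sketch of the per-image deformation step described above, assuming pinhole intrinsics, camera-to-world extrinsics and a per-pixel depth map, and projecting into an equirectangular (omni-directional stereo style) panorama; blending/stitching of the M warped images is omitted, and the specifics are illustrative rather than the disclosed procedure.

    import numpy as np

    def warp_to_equirect(image, depth, K, R, t, pano_h=1024, pano_w=2048):
        """image: (H, W, 3), depth: (H, W), K: 3x3 intrinsics,
        R, t: camera-to-world rotation and translation. Returns one sparse panorama layer."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # 3 x N
        rays = np.linalg.inv(K) @ pix                                        # back-project pixels
        pts_cam = rays * depth.reshape(1, -1)                                # scale by depth
        pts_world = R @ pts_cam + t.reshape(3, 1)

        x, y, z = pts_world
        lon = np.arctan2(x, z)                                               # [-pi, pi]
        lat = np.arcsin(y / (np.linalg.norm(pts_world, axis=0) + 1e-8))      # [-pi/2, pi/2]
        pu = ((lon + np.pi) / (2 * np.pi) * (pano_w - 1)).astype(int)
        pv = ((lat + np.pi / 2) / np.pi * (pano_h - 1)).astype(int)

        pano = np.zeros((pano_h, pano_w, 3), dtype=image.dtype)
        pano[pv, pu] = image.reshape(-1, 3)                                  # forward splat, no blending
        return pano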
Embodiments of the present disclosure provide a solution for visual data processing. A method for visual data processing is proposed. The method comprises: partitioning, for a conversion between visual data and a bitstream of the visual data with a neural network (NN)-based model, a first tensor used in an adaptive filter of the NN-based model into a first set of tiles based on a first tile size, the first tile size being a multiple of a first predetermined value; and performing the conversion based on the first set of tiles.
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: refining, for a conversion between a video unit of a video and a bitstream of the video, a sample of the video unit by applying a local illumination compensation (LIC), wherein the video unit is coded with a geometric partitioning mode (GPM); and performing the conversion based on the refined sample.
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
H04N 19/109 - Selection of the coding mode or the prediction mode among a plurality of temporal predictive coding modes
H04N 19/157 - Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
H04N 19/119 - Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
23.
METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: applying, for a conversion between a video unit of a video and a bitstream of the video, a first target coding mode to the video unit, wherein the first target coding mode comprises an inter prediction and a coding tool that comprises at least one of: an intra block copy (IBC) or an intra template matching prediction (intraTMP), and the video unit comprises a plurality of sub-partitions or a plurality of sub-blocks; and performing the conversion based on the first target coding mode.
H04N 19/103 - Selection of coding mode or of prediction mode
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
24.
METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: determining, for a conversion between a video unit of a video and a bitstream of the video, a block vector (BV) list for the video unit that is coded with a first target coding mode, wherein the first target coding mode comprises an inter prediction and a coding tool that comprises at least one of: an intra block copy (IBC) or an intra template matching prediction (intraTMP), and the video unit comprises a plurality of sub-partitions or a plurality of sub-blocks; and performing the conversion based on the BV list.
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
25.
METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: obtaining, for a conversion between a video unit of a video and a bitstream of the video, a sample of the video unit by applying a first target coding mode, wherein the first target coding mode comprises an inter prediction and a coding tool, and the video unit comprises a plurality of sub-partitions or a plurality of sub-blocks; and performing the conversion based on the obtained sample.
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
26.
VIDEO ENCODING METHOD AND APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM
The embodiments of the disclosure provide a video encoding method and apparatus, an electronic device and a storage medium. The method includes: determining video segments included in a target video according to a duration corresponding to a first picture group parameter; determining, for a current video segment, a target feature corresponding to video encoding data of a previous video segment; inputting the target feature into a target decision model, and determining a picture group parameter corresponding to the current video segment by using the target decision model; and encoding the current video segment based on the picture group parameter corresponding to the current video segment.
H04N 19/136 - Incoming video signal characteristics or properties
H04N 19/177 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being a group of pictures [GOP]
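A hedged sketch of the segment-by-segment loop described in the abstract above; decision_model and encode_segment are hypothetical stand-ins for the target decision model and the underlying encoder, and the feature fields are illustrative.

    def encode_with_adaptive_gop(segments, decision_model, encode_segment, default_gop=32):
        """segments: raw video segments whose duration follows the first picture group parameter."""
        prev_stats = None
        outputs = []
        for seg in segments:
            if prev_stats is None:
                gop = default_gop                              # first segment: no encoding history yet
            else:
                # Target feature from the previous segment's encoding data (illustrative fields).
                feature = [prev_stats["bits"], prev_stats["avg_qp"], prev_stats["motion"]]
                gop = decision_model(feature)                  # picture group parameter for this segment
            encoded, prev_stats = encode_segment(seg, gop_size=gop)
            outputs.append(encoded)
        return outputs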
27.
METHOD FOR OBJECT RECOMMENDATION, AND DEVICE, STORAGE MEDIUM AND PROGRAM PRODUCT
According to the embodiments of the present disclosure, provided are a method for object recommendation, and a device, a storage medium and a program product. The method comprises: obtaining behavior information related to a behavior sequence which is executed by a target user on objects, and scenario information related to a target scenario used for recommending a target object; using each of a plurality of first feature generation layers, which respectively correspond to a plurality of behaviors, in a trained recommendation model to respectively determine, on the basis of the scenario information, processing parameter values specific to the behaviors, and on the basis of the processing parameter values specific to the behaviors and a shared processing parameter value among the plurality of behaviors, extracting behavior feature representations of the behaviors from the behavior information related to the behavior sequence; using a second feature generation layer in the recommendation model to extract an object feature representation from the scenario information and object information of the target object; and at least on the basis of the respective behavior feature representations of the plurality of behaviors and the object feature representation, determining a prediction result of the target user executing a specified behavior on the target object.
According to the embodiments of the present disclosure, provided are a workflow editing method and apparatus, and a device and a storage medium. The method comprises: presenting an editing interface for a workflow, wherein the editing interface is used for editing a node graph corresponding to the workflow; in the editing interface, presenting a first group of candidate ports associated with a first node in the node graph, wherein the first group of candidate ports correspond to different types; in response to a selection for a first port in the first group of candidate ports, providing a first group of candidate nodes corresponding to the first port; and in response to a selection for a second node in the first group of candidate nodes, associating the second node with the first node, such that the first node can invoke the second node during the operation of the workflow. The embodiments of the present disclosure can improve the efficiency of workflow editing, and improve the flexibility of workflow editing.
The embodiments of the present disclosure relate to an information search method and apparatus, and a device and a storage medium. The method provided herein comprises: acquiring input information expressed in a natural language; using a first model to process the input information, so as to generate intent description text concerning the input information; in response to the intent description text indicating a search intent of a target type, determining a set of search parameters on the basis of the intent description text; and on the basis of the set of search parameters, using a target search tool to acquire at least one matched search result, wherein the target search tool matches the search intent of the target type. In this way, the embodiments of the present disclosure can convert a natural language intent of a user into parameters that a search engine can understand, such that a search result better conforms to the real intent and demand of the user, thereby improving the search experience of the user.
The embodiments of the present disclosure relate to an interaction method and apparatus, and a device and a storage medium. The method provided herein comprises: receiving an input message in a dialogue window where a dialogue with a virtual object is conducted, wherein the virtual object is created on the basis of configuration information, and the configuration information comprises plug-in information associated with a navigation service; and in response to the input message indicating a navigation intent, executing, in the dialogue window, a target operation associated with a navigation card, wherein the target operation comprises: drawing a new navigation card in the dialogue window, so as to present navigation information generated on the basis of the input message; or updating a navigation card that has already been provided in the dialogue window, so as to generate a navigation instruction on the basis of the input message. In this way, the embodiments of the present disclosure can handle a navigation request of a user in a scenario where a dialogue with a virtual object is conducted.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
31.
PROMPT INFORMATION GENERATION METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM
The present disclosure provides a prompt information generation method and apparatus, a computer device, and a storage medium. The method comprises: acquiring initial prompt information; performing intent classification on the initial prompt information to obtain at least one intent category; on the basis of a prompt information generation strategy corresponding to each of the at least one intent category, generating a plurality of pieces of candidate prompt information under each intent category; and, for each intent category, selecting intermediate prompt information from among the candidate prompt information under the intent category, and returning the intermediate prompt information, as the initial prompt information, to the intent classification step, until target prompt information meeting a preset prompt information requirement is obtained.
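A hedged sketch of the iterative loop just described; classify_intents, generate_candidates and score_prompt are hypothetical callables for the intent classifier, the per-category generation strategies and the selection criterion, and the stopping rule is an assumption.

    def refine_prompt(initial_prompt, classify_intents, generate_candidates, score_prompt,
                      threshold=0.9, max_rounds=5):
        prompt = initial_prompt
        for _ in range(max_rounds):
            for intent in classify_intents(prompt):               # at least one intent category
                candidates = generate_candidates(prompt, intent)  # strategy for this category
                prompt = max(candidates, key=score_prompt)        # intermediate prompt information
            if score_prompt(prompt) >= threshold:                 # preset requirement met
                return prompt                                     # target prompt information
        return prompt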
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: applying, for a conversion between a video unit of a video and a bitstream of the video, a combined filtering approach to a sample associated with the video unit, wherein the combined filtering approach comprises: a filtering approach and a bilateral filtering (BF) approach for a reference sample; and performing the conversion based on the filtered sample.
H04N 19/82 - Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
H04N 19/117 - Filters, e.g. for pre-processing or post-processing
33.
METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: deriving, for a conversion between a video unit of a video and a bitstream of the video, a first set of local illumination compensation (LIC) parameters for the video unit based on spatial neighboring video units of the video unit or a template of the video unit; applying the first set of LIC parameters and a second set of LIC parameters during an illumination compensation associated with the video unit; and performing the conversion based on the illumination compensation.
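For context, local illumination compensation is commonly modelled as a linear mapping y = a*x + b whose parameters are fitted on template samples; a minimal least-squares sketch of deriving one such parameter set is shown below (the combination with a second LIC parameter set, as in the abstract above, is not shown).

    import numpy as np

    def derive_lic_params(ref_template: np.ndarray, cur_template: np.ndarray):
        """Fit y = a*x + b between reference-template and current-template samples."""
        x = ref_template.astype(np.float64).ravel()
        y = cur_template.astype(np.float64).ravel()
        var = np.var(x)
        a = np.cov(x, y, bias=True)[0, 1] / var if var > 0 else 1.0
        b = y.mean() - a * x.mean()
        return a, b

    def apply_lic(pred_block: np.ndarray, a: float, b: float) -> np.ndarray:
        return a * pred_block + b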
The embodiments of the present disclosure relate to an information processing method and apparatus, and a device and a storage medium. The method provided herein comprises: acquiring a training input concerning a strategy model; on the basis of a performance indicator of the strategy model during training, determining a maximum output length of the strategy model; acquiring a first result which is output by the strategy model on the basis of the maximum output length; and on the basis of a comparison between the first result and a second result, adjusting parameters of the strategy model, wherein the second result is generated by a first reference model on the basis of the training input. In this way, the embodiments of the present disclosure can use a continuous maximization strategy to dynamically adjust an output length, and generate more detailed and analytical content, thereby improving the performance of a model.
The embodiments of the present disclosure relate to a request processing method and apparatus, and a device and a storage medium. The method provided herein comprises: providing a target interaction component in a dialogue window where a dialogue with a virtual object is conducted; in response to the reception of an input message in the dialogue window, acquiring a control instruction which is generated by a processing entity associated with the virtual object, wherein the control instruction is generated on the basis of the input message and context information associated with the target interaction component; and on the basis of the acquired control instruction, executing a target operation associated with the target interaction component. In this way, the embodiments of the present disclosure can realize the processing of a user request in a scenario where a dialogue with a virtual object is conducted.
H04L 67/63 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources, by routing a service request depending on the request content or context
36.
METHOD, APPARATUS, AND MEDIUM FOR VISUAL DATA PROCESSING
Embodiments of the present disclosure provide a solution for visual data processing. A method for visual data processing is proposed. The method comprises: performing a conversion between visual data and a bitstream of the visual data with a neural network (NN)-based model, a pooling layer in the NN-based model being configured to be applied with a padding operation that is used for padding at least one sample for an input tensor of the pooling layer.
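A minimal sketch of such a padding rule, assuming the intent is to make the spatial dimensions of the pooling input a multiple of the pooling stride (the actual rule in the codec model may differ):

    import torch
    import torch.nn.functional as F

    def pad_for_pooling(x: torch.Tensor, stride: int = 2) -> torch.Tensor:
        """x: (N, C, H, W). Replicate-pad right/bottom so H and W divide evenly by the stride."""
        h, w = x.shape[-2:]
        pad_h = (stride - h % stride) % stride
        pad_w = (stride - w % stride) % stride
        return F.pad(x, (0, pad_w, 0, pad_h), mode="replicate")

    x = torch.randn(1, 3, 17, 30)
    print(pad_for_pooling(x).shape)        # torch.Size([1, 3, 18, 30])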
A method for visual data processing comprises: performing a conversion between visual data and a bitstream of the visual data with a neural network (NN)-based model, a synthesis transform in the NN-based model comprising at least one upscaling layer, and each of the at least one upscaling layer being used for increasing a data dimension of respective input of the at least one upscaling layer.
The embodiments of the invention provide a method, apparatus, device and storage medium for training and optimizing an analysis model. The method of optimizing the analysis model includes: fine-tuning an analysis model with a first set of values regarding a first property of a target material to determine a second set of values regarding a second property of the target material; determining an association between the first property and the second property of the target material based on the first set of values and the second set of values; determining, with the association, a target value of the target material regarding the first property based on a reference value of the target material regarding the second property, the reference value being determined based on an experiment on the target material; and optimizing the analysis model with the target value of the target material regarding the first property. In this way, embodiments of the present disclosure can utilize limited experimental data to optimize the analysis model.
G16C 20/70 - Machine learning, data mining or chemometrics
G16C 20/30 - Prediction of properties of chemical compounds, compositions or mixtures
G16C 60/00 - Computational materials science, i.e. ICT specially adapted for investigating the physical or chemical properties of materials or phenomena associated with their design, synthesis, processing, characterisation or utilisation
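A hedged sketch of the association-and-lookup idea from the abstract above: fit a simple relation between the first and second properties from the model's own outputs, then map an experimentally measured reference value of the second property back to a target value of the first property. The linear fit is an assumption made only for illustration.

    import numpy as np

    def target_value_from_experiment(first_values, second_values, reference_second):
        """first_values, second_values: arrays produced by the (fine-tuned) analysis model.
        reference_second: experimentally measured value of the second property."""
        # Assume a roughly linear association between the two properties.
        slope, intercept = np.polyfit(second_values, first_values, deg=1)
        return slope * reference_second + intercept

    first = np.array([1.0, 1.4, 1.9, 2.5])       # e.g. predicted values of property 1
    second = np.array([10.0, 12.0, 15.0, 19.0])  # corresponding values of property 2
    print(target_value_from_experiment(first, second, reference_second=14.0))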
39.
INFORMATION PROCESSING METHOD AND APPARATUS, AND DEVICE AND STORAGE MEDIUM
The embodiments of the present disclosure relate to an information processing method and apparatus, and a device and a storage medium. The method provided herein comprises: in response to an input message being received, determining a set of candidate results which match the input message; constructing first prompt information on the basis of the input message and the set of candidate results; providing the first prompt information to a target model, so as to determine at least one target result from the set of candidate results, wherein the target model is finely adjusted on the basis of a set of sample requests and annotation information, and the annotation information indicates a sorting result of a set of sample results which correspond to corresponding sample requests; and on the basis of the at least one target result, generating a response concerning the input message. In this way, the embodiments of the present disclosure can understand the intent of a user more accurately, such that the accuracy and efficiency of recommendation for a point of interest are improved, thereby improving the search experience of the user.
A method for point cloud coding, in the method, for a conversion between a current frame of a point cloud sequence and a bitstream of the point cloud sequence, a set of triangle meshes of the current frame are determined. A parameter for at least one projection plane for a triangle mesh of the set of triangle meshes is determined before voxelizing the triangle mesh. The conversion is performed based on the parameter.
Embodiments of the present disclosure provide a solution for visual data processing. A method for visual data processing is proposed. The method comprises: performing a conversion between visual data and a bitstream of the visual data with a neural network (NN)-based model, a color transformation between an internal color format and an output color format associated with the conversion being allowed to be disabled.
H04N 19/50 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
H04N 19/13 - Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context-adaptive binary arithmetic coding [CABAC]
Embodiments of the present disclosure provide a solution for visual data processing. A method for visual data processing is proposed. The method comprises: performing a conversion between visual data and a bitstream of the visual data with a neural network (NN)-based model, wherein a first format for coding the visual data and a second format of output visual data from the conversion are indicated with different indications, the first format indicates a first relationship between a size of a first component of the coded visual data and a size of a second component of the coded visual data, and the second format indicates a second relationship between a size of a first component of the output visual data and a size of a second component of the output visual data.
H04N 19/132 - Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
43.
METHOD, APPARATUS, AND MEDIUM FOR VISUAL DATA PROCESSING
Embodiments of the present disclosure provide a solution for visual data processing. A method for visual data processing is proposed. The method comprises: performing a first conversion between visual data and a bitstream of the visual data with a neural network (NN)-based model, the first conversion comprising at least one of the following: converting, based on a first predetermined order, a first tensor having more than one dimension to a one-dimensional first array, the first tensor and the first array storing first data associated with the visual data, and the first predetermined order indicating a correspondence between samples of the first tensor and samples of the first array, or converting, based on a second predetermined order, a one-dimensional second array to a second tensor having more than one dimension, the second tensor and the second array storing second data associated with the visual data.
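A minimal illustration of the order-dependent flattening and unflattening described above, using row-major ("C") order as the assumed predetermined order; the actual order used by the model may differ.

    import numpy as np

    def to_1d(tensor: np.ndarray) -> np.ndarray:
        # Predetermined order assumed here: row-major (last dimension varies fastest).
        return tensor.reshape(-1, order="C")

    def from_1d(array: np.ndarray, shape: tuple) -> np.ndarray:
        return array.reshape(shape, order="C")

    t = np.arange(24).reshape(2, 3, 4)
    flat = to_1d(t)
    assert np.array_equal(from_1d(flat, t.shape), t)   # round-trips under the same order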
A mechanism for processing video data is disclosed. The mechanism includes determining to apply an adaptive coefficient precision for an adaptive loop filter (ALF). A conversion can then be performed between a visual media data and a bitstream based on the adaptive coefficient precision applied.
H04N 19/82 - Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
45.
METHOD, APPARATUS, ELECTRONIC DEVICE, MEDIUM AND PRODUCT FOR VIDEO BITRATE ADJUSTMENT
Embodiments of the disclosure disclose a method, an apparatus, a device, a storage medium and a product for video bitrate adjustment. The method includes: analyzing a video frame to determine a current picture quality evaluation result of a target video stream in a current bitrate regulation period; and determining a target bitrate based on a predetermined target picture quality evaluation standard associated with a bitrate and/or a picture quality change trend of the current picture quality evaluation result relative to a historical picture quality evaluation result in a historical bitrate regulation period.
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
H04N 21/238 - Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
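A hedged sketch of a controller in the spirit of the abstract above: nudge the bitrate toward a target picture-quality standard and also react to the quality trend against the previous regulation period. The score scale, thresholds and step sizes are made up for illustration.

    def adjust_bitrate(current_kbps, score_now, score_prev, target_score=85.0, step=0.1):
        """score_*: frame-based picture quality evaluation results (assumed 0-100 scale)."""
        rate = current_kbps
        if score_now < target_score:          # below the quality standard tied to the bitrate
            rate *= 1.0 + step
        elif score_now > target_score + 5:
            rate *= 1.0 - step
        if score_now < score_prev - 2:        # downward quality trend vs. the historical period
            rate *= 1.0 + step / 2
        return int(rate)

    print(adjust_bitrate(2500, score_now=80.2, score_prev=84.0))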
46.
METHODS AND APPARATUS FOR CODE EDITING, AND DEVICE AND STORAGE MEDIUM
The embodiments of the present disclosure relate to methods and an apparatus for code editing, and a device and a storage medium. A method provided herein comprises: displaying a code editing interface, wherein the code editing interface comprises a code editing area used for editing a code line; on the basis of a received first preset operation, presenting an interaction component in the code editing area, wherein the interaction component comprises an input control; on the basis of a first input acquired via the input control, providing first response content for the first input while maintaining the display of the interaction component; and on the basis of a second input acquired via the input control of the interaction component, providing second response content for the second input, wherein the second response content is generated on the basis of context information associated with the first response content. In this way, the embodiments of the present disclosure can enable a user to perform multiple rounds of code editing operations in a code editing interface, thereby enhancing the efficiency of code editing.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
47.
METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: applying, for a conversion between a video unit of a video and a bitstream of the video, a filtering approach to a prediction sample associated with the video unit; and performing the conversion based on the filtered prediction sample.
H04N 19/82 - Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
H04N 19/117 - Filters, e.g. for pre-processing or post-processing
48.
METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: obtaining, for a conversion between a current block of a video and a bitstream of the video, a chroma fusion candidate for the current block; and performing the conversion based on the chroma fusion candidate.
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
49.
METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: deriving, for a conversion between a current block of a video and a bitstream of the video, at least one of: a filter or a model based on decoding information; applying the at least one of: the filter or the model to coding of at least one of: the current block or a future block; and performing the conversion based on the at least one of: the filter or the model.
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being a colour or a chrominance component
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
50.
METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. In the method, for a conversion between a current video block of a video and a bitstream of the video, affine information of the current video block is determined based on first affine information of a first block of the video. The first block is coded with a first coding tool different from an affine coding tool. A prediction of the current video block is determined based on the affine information. The conversion is performed based on the prediction.
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
51.
METHOD, APPARATUS, AND MEDIUM FOR POINT CLOUD CODING
Embodiments of the present disclosure provide a solution for point cloud coding. A method for point cloud coding is proposed. In the method, for a conversion between a current frame of a point cloud sequence and a bitstream of the point cloud sequence, a prediction mode for a region of the current frame is determined. The prediction mode at least comprises a first mode based on an intra prediction and an inter prediction. A prediction of the region is determined based on the prediction mode. The conversion is performed based on the prediction.
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. In the method, for a conversion between a current video block of a video and a bitstream of the video, a first prediction of the current video block is determined based on a first coding tool. The first coding tool comprises a motion vector difference (MVD) -related coding tool. A third prediction of the current video block is determined based on the first prediction and a second prediction of the current video block. The second prediction is determined based on a second coding tool different from the first coding tool. The conversion is performed based on the third prediction.
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
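Schemes of the kind described in the abstract above typically form the third prediction as a weighted blend of the two tool-specific predictions; a minimal sketch under that assumption (equal weights and 10-bit samples are illustrative choices, not from the disclosure):

    import numpy as np

    def combine_predictions(pred_mvd_tool: np.ndarray, pred_other: np.ndarray,
                            w1: float = 0.5, w2: float = 0.5) -> np.ndarray:
        """Third prediction as a weighted blend of the two tool-specific predictions."""
        blended = w1 * pred_mvd_tool.astype(np.float64) + w2 * pred_other.astype(np.float64)
        return np.clip(np.rint(blended), 0, 1023).astype(np.int32)   # assume 10-bit samples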
53.
METHOD, APPARATUS, AND MEDIUM FOR POINT CLOUD PROCESSING
Embodiments of the present disclosure provide a solution for point cloud processing. A method for point cloud processing is proposed. The method comprises: determining, for a conversion between a current point cloud (PC) sample of a point cloud sequence and a bitstream of the point cloud sequence, a prediction for a current feature of the current PC sample based on a current sparse PC sample of the current PC sample, a set of sparse PC samples and a set of features of at least one reference PC sample of the current PC sample; and performing the conversion based on the prediction for the current feature.
The embodiments of the invention provide a method, apparatus, device and storage medium for generating ligand molecules. The method comprises: determining a set of initial arms by decomposing a reference ligand molecule for a target biological target; generating a set of candidate ligand molecules based on the set of initial arms; determining a set of candidate arms corresponding to each of the set of initial arms by decomposing each of the set of candidate ligand molecules; determining a target candidate arm for each initial arm from the set of candidate arms to determine a set of target candidate arms; and generating a set of ligand molecules for the target biological target based on the set of target candidate arms. In this way, by selecting, as a condition, the key information (i.e., the arms) that better fits the drug design target during the drug design process, the embodiments of the present disclosure can increase the proportion of candidate drug molecules that meet the drug design target.
G16B 15/30 - Drug targeting using structural data; Docking or binding prediction
G06F 30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
G16B 15/20 - Protein or domain folding
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. In a method, for a conversion between a current video block of a video and a bitstream of the video, a first guiding vector associated with the current video block is determined. The first guiding vector comprises a first guiding block vector (BV) or a first guiding motion vector (MV). A first reference vector of a first reference block located based on the first guiding vector is determined. The first reference vector comprises a first reference block vector or a first reference motion vector. A vector prediction of the current video block is determined at least based on the first guiding vector and the first reference vector. The vector prediction comprises a block vector prediction or a motion vector prediction. The conversion is performed based on the vector prediction.
H04N 19/50 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
56.
METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: determining, for a conversion between a current block of a video and a bitstream of the video, that an input of a cross-component prediction (CCP) model comprises at least one of: a sample within a reference area, or a sample within the current block; and performing the conversion based on the CCP model.
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being a colour or a chrominance component
57.
METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: determining, for a conversion between a video unit of a video and a bitstream of the video, that a cross-component prediction (CCP) model candidate is inserted to a CCP candidate list; and performing the conversion based on the CCP model candidate.
A mechanism for processing video data is disclosed. The mechanism includes determining to apply slice quantization parameters (QPs) as one or more extra inputs to a neural network (NN)-based super resolution (SR) process. A conversion is performed between a visual media data and a bitstream based on the NN-based SR process.
A mechanism for processing video data is disclosed. The mechanism includes determining that an adaptive coefficient precision for an adaptive loop filter (ALF) is set to a number M, where the number M is different from a fixed number used at an encoder or a decoder for the adaptive coefficient precision for the ALF. A conversion can then be performed between a visual media data and a bitstream based on the adaptive coefficient precision for the ALF set to the number M.
H04N 19/82 - Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: obtaining, for a conversion between a video unit of a video and a bitstream of the video, a feature vector by applying a pre-analysis to a list of frames associated with the video unit; determining a prediction structure by feeding the feature vector to a model as an input; and performing the conversion based on the prediction structure.
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
61.
METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: determining, for a conversion between a current video block of a video and a bitstream of the video, prediction samples of the current video block based on an intra prediction mode and first information regarding whether to apply a bilateral filter to a set of reference samples of the current video block that is used for the intra prediction mode; and performing the conversion based on the prediction samples.
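To make the interaction concrete, the sketch below applies a small 1-D bilateral filter to the intra reference line only when the first information says so, and then forms a plain DC prediction; the filter strengths and the DC mode are illustrative assumptions, not the disclosed filtering.

    import math

    def bilateral_1d(refs, sigma_s=1.0, sigma_r=10.0):
        # 3-tap bilateral smoothing of the reference sample line.
        out = []
        for i, centre in enumerate(refs):
            num = den = 0.0
            for j in range(max(0, i - 1), min(len(refs), i + 2)):
                w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                             - ((centre - refs[j]) ** 2) / (2 * sigma_r ** 2))
                num += w * refs[j]
                den += w
            out.append(num / den)
        return out

    def dc_intra_prediction(refs, apply_bilateral):
        refs = bilateral_1d(refs) if apply_bilateral else refs
        return sum(refs) / len(refs)

    print(dc_intra_prediction([100, 130, 101, 99], apply_bilateral=True))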
H04N 19/593 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage prédictif mettant en œuvre des techniques de prédiction spatiale
H04N 19/117 - Filtres, p. ex. pour le pré-traitement ou le post-traitement
62.
METHOD, APPARATUS, AND MEDIUM FOR POINT CLOUD CODING
Embodiments of the present disclosure provide a method for point cloud coding. In the method, for a conversion between a current frame of a point cloud sequence and a bitstream of the point cloud sequence, a first trisoup node in a first slice of the current frame and a second trisoup node in a second slice of the current frame are determined. The first and second trisoup nodes are aligned based on a trisoup node size. The conversion is performed based on the aligned first and second trisoup nodes.
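Read literally, "aligned based on a trisoup node size" suggests snapping both nodes to a common grid whose pitch is that size; the sketch below shows that reading only, and the actual alignment rule in the disclosure may differ.

    def align_to_node_grid(origin, node_size):
        # Snap a node origin to the grid defined by the trisoup node size.
        return tuple((c // node_size) * node_size for c in origin)

    node_size = 8
    first_node = align_to_node_grid((13, 21, 6), node_size)    # node in the first slice
    second_node = align_to_node_grid((15, 17, 7), node_size)   # node in the second slice
    print(first_node, second_node)   # both snap to (8, 16, 0)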
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. In a method, for a conversion between a current video block of a video and a bitstream of the video, a guiding block vector (BV) is determined based on a first BV shift of the current video block. A subblock-based block vector guided block vector prediction (SbBVGBVP) of the current video block is determined based on the guiding BV. The conversion is performed based on the SbBVGBVP.
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: determining, for a conversion between a current video block of a video and a bitstream of the video, prediction samples of the current video block based on an intra prediction mode and first information regarding whether to apply a deblocking filter to a set of reference samples of the current video block that is used for the intra prediction mode; and performing the conversion based on the prediction samples.
The present invention provides a video generation method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring and displaying a target image, wherein the target image comprises at least one object; in response to a trigger operation for the target image, determining a trigger position corresponding to the trigger operation, and determining from the target image a target object corresponding to the trigger position; separating the target object from the target image to obtain a target object layer, and on the basis of the target object layer, generating an animation special effect associated with the target object; and inputting the target object layer into a preset model to obtain description information matched with the target object layer, and on the basis of the animation special effect associated with the target object and the description information, generating an encyclopedia video associated with the target object.
H04N 19/30 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant des techniques hiérarchiques, p. ex. l'échelonnage
66.
METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. In the method, for a conversion between a current video block of a video and a bitstream of the video, temporal layer information of a video region associated with the current video block is determined. The temporal layer information indicates at least one of: a temporal layer, or a temporal layer index (Tid). A usage of subblock-based temporal motion vector prediction (SbTMVP) is determined based on the temporal layer information. The conversion is performed based on the usage of SbTMVP.
Embodiments of the present disclosure provide a solution for video processing. In a method for video processing, for a conversion between a current video block of a video and a bitstream of the video, a value of a first variable of a merge candidate of the current video block is determined. Adjustment information of the first variable is determined based on at least one of: a template size, a sequence resolution, or a block size. The adjustment information comprises at least one of: whether to adjust the value of the first variable, or how to adjust the value of the first variable. The conversion is performed based on the adjustment information.
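As a non-authoritative illustration of "whether and how to adjust", the sketch below rescales a template-matching cost (standing in for the first variable) so that candidates measured with different template sizes, block sizes or sequence resolutions compare on a common footing; both normalisations are assumptions.

    def adjust_candidate_cost(cost, template_samples, block_samples, pic_samples):
        # Normalise by template size so larger templates do not inflate the cost.
        cost = cost / max(template_samples, 1)
        # Example resolution/block-size rule: penalise tiny blocks in high-resolution sequences.
        if block_samples * 4096 < pic_samples:
            cost *= 1.25
        return cost

    print(adjust_candidate_cost(cost=5120.0, template_samples=64,
                                block_samples=16 * 16, pic_samples=1920 * 1080))   # -> 100.0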
H04N 19/186 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c.-à-d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant une couleur ou une composante de chrominance
68.
METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: determining, for a conversion between a current block of a video and a bitstream of the video, that a component of the current block is coded with a coding mode and another component of the current block is coded with another coding mode; and performing the conversion based on the coding modes.
H04N 19/186 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c.-à-d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant une couleur ou une composante de chrominance
69.
METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. In the method, for a conversion between a current video block of a video and a bitstream of the video, at least one template segment of a current template of the current video block is determined. A reference template of the current video block is determined based on at least one motion information of the at least one template segment. The conversion is performed based on the reference template.
H04N 19/176 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c.-à-d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant une zone de l'image, p. ex. un objet la zone étant un bloc, p. ex. un macrobloc
H04N 19/577 - Compensation de mouvement avec interpolation de trame bidirectionnelle, p. ex. utilisation d’images B
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. In the method, for a conversion between a current video block of a video and a bitstream of the video, information regarding applying a subblock-boundary based deblocking or filtering process to the current video block is determined based on a slice type. The conversion is performed based on the information. The information indicates at least one of: whether a kind of subblock-boundary based deblocking or filtering process is applied, or how the kind of subblock-boundary based deblocking or filtering process is applied.
H04N 19/615 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant un codage par transformée combiné avec un codage prédictif utilisant le filtrage temporel à compensation de mouvement [MCTF]
H04N 19/50 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage prédictif
71.
METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING
Embodiments of the present disclosure provide a solution for video processing. In a method for video processing, for a conversion between a current video block of a video and a bitstream of the video, a value of a first variable of a merge candidate of the current video block is determined. Adjustment information of the first variable is determined based on at least one of: a template size, a block size or at least one threshold size. The adjustment information comprises at least one of: whether to adjust the value of the first variable, or how to adjust the value of the first variable. The conversion is performed based on the adjustment information.
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: determining, for a conversion between a current block of a video and a bitstream of the video, a cross-component prediction (CCP) model based on at least one of: a reference area related to the current block or a reference sample related to the current block; and performing the conversion based on the CCP model.
H04N 19/61 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant un codage par transformée combiné avec un codage prédictif
73.
METHOD, APPARATUS, AND MEDIUM FOR POINT CLOUD PROCESSING
Embodiments of the present disclosure provide a solution for point cloud processing. A method for point cloud processing is proposed. The method comprises: obtaining, for a conversion between a current point cloud (PC) sample of a point cloud sequence and a bitstream of the point cloud sequence, a first set of points in the current PC sample, the first set of points representing low frequency information of the current PC sample; and performing the conversion based on the first set of points.
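One simple stand-in for "a set of points representing low frequency information" is coarse voxel-grid averaging, sketched below; the voxel size and the averaging are assumptions, not the disclosed decomposition.

    from collections import defaultdict

    def low_frequency_points(points, voxel=16):
        # Group points into coarse voxels and keep one centroid per voxel.
        buckets = defaultdict(list)
        for p in points:
            buckets[tuple(c // voxel for c in p)].append(p)
        return [tuple(sum(axis) / len(pts) for axis in zip(*pts))
                for pts in buckets.values()]

    cloud = [(1, 2, 3), (5, 3, 2), (40, 41, 39), (44, 42, 38)]
    print(low_frequency_points(cloud))   # two centroids, one per occupied voxel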
H04N 19/132 - Échantillonnage, masquage ou troncature d’unités de codage, p. ex. ré-échantillonnage adaptatif, saut de trames, interpolation de trames ou masquage de coefficients haute fréquence de transformée
74.
METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: determining, for a conversion between a video unit of a video and a bitstream of the video, a parameter of a cross-component model for the video unit based on at least one of: a reference area related to the video unit or a previous cross-component model, wherein the cross-component model is a cross-component prediction (CCP) model or an inter convolutional cross-component model (CCCM); and performing the conversion based on the parameter of the cross-component model.
The embodiments of the present disclosure relate to a code editing method and apparatus, a device, and a storage medium. The method provided herein comprises: receiving, on the basis of communication between a service component and a plug-in component in a development application, a request message from the plug-in component, the request message being generated on the basis of a code editing request received in the development application; determining, from a group of base processing modules, at least one target processing module for processing the code editing request on the basis of the request type indicated in the code editing request; generating, by the at least one target processing module using a target model, a response message for the request message; and sending the response message to the plug-in component, so that the plug-in component can provide, on the basis of the response message, a response to the code editing request. In this way, the embodiments of the present disclosure can use a unified architecture to respond to code editing requests from different types of development applications.
Embodiments of the present disclosure provide a request processing method and apparatus, a device, and a storage medium. The method comprises: in response to a request by a first workload to call a second workload, receiving a first identity token associated with the first workload, wherein the first workload is associated with a first trust domain, the first identity token is generated on the basis of a first authentication protocol corresponding to the first trust domain, and the second workload is associated with a second trust domain; and, on the basis of identity declaration information indicated by the first identity token, generating a second identity token corresponding to a second authentication protocol, wherein the second authentication protocol corresponds to the second trust domain; and sending the second identity token to the first workload, thus allowing the first workload to call the second workload on the basis of the second identity token. In this way, embodiments of the present disclosure are able to realize cross-protocol identity authentication, thereby supporting cross-trust-domain request calling.
The present disclosure provides an image generation method and apparatus, a computer-readable medium, and an electronic device. The method includes: displaying an image editing interface, wherein a first merchandise image corresponding to a target merchandise is displayed in the image editing interface; determining a target scene to be added to the first merchandise image; and displaying a second merchandise image that is randomly generated on the basis of the first merchandise image and the target scene.
G06T 11/60 - Édition de figures et de texteCombinaison de figures ou de texte
G06F 3/0482 - Interaction avec des listes d’éléments sélectionnables, p. ex. des menus
G06K 7/14 - Méthodes ou dispositions pour la lecture de supports d'enregistrement par radiation électromagnétique, p. ex. lecture optiqueMéthodes ou dispositions pour la lecture de supports d'enregistrement par radiation corpusculaire utilisant la lumière sans sélection des longueurs d'onde, p. ex. lecture de la lumière blanche réfléchie
78.
METHOD, APPARATUS, AND MEDIUM FOR VISUAL DATA PROCESSING
Embodiments of the present disclosure provide a solution for visual data processing. A method for visual data processing is proposed. The method comprises: obtaining, for a conversion between visual data and a bitstream of the visual data with a neural network (NN)-based model, a format for coding the visual data, the format indicating a relationship between a size of a first component of the coded visual data and a size of a second component of the coded visual data, and a first synthesis transform in the NN-based model being used for the first component; determining, based on the format, a second synthesis transform in the NN-based model that is used for the second component; and performing the conversion based on the first synthesis transform and the second synthesis transform.
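A hedged sketch of the format-driven choice: for a 4:2:0 bitstream the second (chroma) component is half the size of the first in both dimensions, so its synthesis transform could use one fewer upsampling stage than the luma one; the stage counts and the function below are assumptions.

    def pick_second_synthesis_transform(chroma_format, luma_upsampling_stages):
        # The coding format fixes the size ratio between the two components.
        if chroma_format == "4:2:0":        # chroma is half resolution horizontally and vertically
            return max(luma_upsampling_stages - 1, 1)
        return luma_upsampling_stages       # e.g. 4:4:4 -> same depth as the first transform

    print(pick_second_synthesis_transform("4:2:0", luma_upsampling_stages=4))   # -> 3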
H04N 19/184 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c.-à-d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant des bits, p. ex. de flux vidéo compressé
79.
METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. In the method, for a conversion between a current video block of a video and a bitstream of the video, a first prediction of the current video block is determined based on a first coding tool. The first coding tool comprises a subblock-based coding tool. A third prediction of the current video block is determined based on the first prediction and a second prediction of the current video block, the second prediction being determined based on a second coding tool different from the first coding tool. The conversion is performed based on the third prediction.
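The combination step can be pictured as a weighted blend of the two predictions, as in the sketch below; the equal weights are an assumption and the disclosure may combine the predictions differently.

    import numpy as np

    def combine_predictions(pred_subblock, pred_other, w=0.5):
        # Third prediction = weighted blend of the subblock-based and the second prediction.
        blended = w * pred_subblock.astype(np.float32) + (1 - w) * pred_other.astype(np.float32)
        return np.round(blended).astype(pred_subblock.dtype)

    p1 = np.full((4, 4), 100, dtype=np.int32)   # first (subblock-based) prediction
    p2 = np.full((4, 4), 120, dtype=np.int32)   # second prediction
    print(combine_predictions(p1, p2)[0, 0])    # -> 110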
The embodiments of the disclosure provide a method, apparatus, device and storage medium for target segmentation. The method includes: determining location information of a current viewpoint when a target user watches a target object; performing target segmentation on the target object at the location of the current viewpoint based on the location information of the current viewpoint and a visual foundation model, to determine and present a current segmentation result; and in response to a segmentation end operation triggered by the target user for the current segmentation result, taking the current segmentation result as a target segmentation result corresponding to the target object. According to the technical solution of the embodiments of the disclosure, any target may be segmented in real time, meeting user segmentation requirements while ensuring the accuracy and efficiency of target segmentation.
G06V 10/94 - Architectures logicielles ou matérielles spécialement adaptées à la compréhension d’images ou de vidéos
G06F 3/01 - Dispositions d'entrée ou dispositions d'entrée et de sortie combinées pour l'interaction entre l'utilisateur et le calculateur
G06V 10/26 - Segmentation de formes dans le champ d’imageDécoupage ou fusion d’éléments d’image visant à établir la région de motif, p. ex. techniques de regroupementDétection d’occlusion
G06V 20/70 - Étiquetage du contenu de scène, p. ex. en tirant des représentations syntaxiques ou sémantiques
81.
METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: determining, for a conversion between a video unit of a video and a bitstream of the video, whether to use at least one of: a real-time calculated model or a derived model, based on a decoder derived approach; and performing the conversion based on the at least one of: the real-time calculated model or the derived model.
An image processing method, an electronic device and a storage medium are provided. The method includes acquiring a first image; performing down-sampling processing on the first image to obtain a first down-sampled image; performing average pooling processing on pixels in the first down-sampled image to obtain a first pooled image; determining a plurality of pixel merging weights corresponding to pixels in the first down-sampled image; and performing pixel merging processing on the first down-sampled image and the first pooled image based on the plurality of pixel merging weights to obtain a second image.
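A compact sketch of that pipeline under simplifying assumptions: 2x nearest-neighbour down-sampling, 3x3 average pooling with stride 1, and a pixel-wise weighted merge of the two results; the constant merge weight is illustrative, whereas the abstract derives per-pixel weights.

    import numpy as np

    def process(first_image, merge_weight=0.7):
        down = first_image[::2, ::2].astype(np.float32)                  # first down-sampled image
        padded = np.pad(down, 1, mode="edge")
        pooled = sum(padded[i:i + down.shape[0], j:j + down.shape[1]]
                     for i in range(3) for j in range(3)) / 9.0          # first pooled image
        return merge_weight * down + (1 - merge_weight) * pooled         # second image

    img = np.arange(64, dtype=np.uint8).reshape(8, 8)
    print(process(img).shape)   # (4, 4)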
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: applying, for a conversion between a video unit of a video and a bitstream of the video, a cross-component prediction (CCP) mode to the video unit, based on decoding information; and performing the conversion based on the CCP mode.
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: deriving, for a conversion between a video unit of a video and a bitstream of the video, a cross-component prediction (CCP) model of a current block of the video unit, based on a previous CCP model coded block; and performing the conversion based on the CCP model.
H04N 19/176 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c.-à-d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant une zone de l'image, p. ex. un objet la zone étant un bloc, p. ex. un macrobloc
H04N 19/186 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c.-à-d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant une couleur ou une composante de chrominance
85.
METHOD, APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM FOR SIGNALING NEGOTIATION OR DATA COMMUNICATION
The embodiments of the disclosure provide a method, apparatus, electronic device and storage medium for signaling negotiation or data communication. The signaling negotiation method includes: generating, in response to a signaling request sent by a client, a signaling context corresponding to the signaling request, and storing the signaling context; in response to receiving redirection address information of a redirection server corresponding to the signaling request, generating a target signaling response message based on the redirection address information and negotiation address information of the signaling negotiation server; and feeding back the target signaling response message to the client, to enable the client to detect connectivity based on the redirection address information and obtain target data corresponding to the signaling request based on a result of the detection. With this technical solution, various network protocols can be accommodated, and the signaling interaction time in redirection scenarios is reduced.
H04L 43/0811 - Surveillance ou test en fonction de métriques spécifiques, p. ex. la qualité du service [QoS], la consommation d’énergie ou les paramètres environnementaux en vérifiant la disponibilité en vérifiant la connectivité
86.
MULTIMEDIA CONTENT DISPLAY METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM
A multimedia content display method and apparatus, a computer device, and a storage medium. The multimedia content display method comprises: in response to a content acquisition condition, acquiring information stream data and a first display order of multimedia content in the information stream data, and displaying the information stream data according to the first display order; when the information stream data comprises at least one piece of second multimedia content, acquiring first associated data generated in the information stream data display process; when it is detected that the first associated data satisfies a first preset condition, acquiring second associated data and adjusting the first display order on the basis of the second associated data; and according to a second display order obtained after adjustment, displaying the undisplayed multimedia content in the information stream data. In this way, the display order of the multimedia content can be dynamically adjusted, and the user experience is ensured while the display effect of the second multimedia content is improved.
The embodiments provide a method, an apparatus, a device and a storage medium for transmitting audio. The method includes: determining a target number of retransmissions corresponding to current target audio to be transmitted, based on a current network packet loss rate and an audio packet anti-loss rate; determining a current allowed number of responses corresponding to a retransmission mode of a negative acknowledgement, based on a current network round-trip delay and a predetermined allowed delay; determining a target number of redundancies corresponding to a forward redundancy retransmission mode and a target retransmission delay corresponding to the retransmission mode of the negative acknowledgement, based on the target number of retransmissions, the predetermined allowed delay, the current allowed number of responses, and an allowed number of redundancies corresponding to the forward redundancy retransmission mode; and transmitting the target audio based on the target number of redundancies and the target retransmission delay.
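The quantities interact roughly as in the worked sketch below; every formula in it is an assumption made only to keep the arithmetic concrete, not the claimed computation.

    def plan_audio_retransmission(loss_rate, anti_loss_rate, rtt_ms, allowed_delay_ms,
                                  max_redundancy):
        # Retransmissions needed so the residual loss drops below 1 - anti_loss_rate.
        retransmissions, residual = 0, loss_rate
        while residual > 1 - anti_loss_rate:
            retransmissions, residual = retransmissions + 1, residual * loss_rate
        # NACK rounds that fit into the allowed delay at the current round-trip time.
        nack_rounds = max(int(allowed_delay_ms // rtt_ms), 0)
        fec_redundancy = min(max(retransmissions - nack_rounds, 0), max_redundancy)
        retransmit_delay_ms = min(rtt_ms, allowed_delay_ms)
        return retransmissions, nack_rounds, fec_redundancy, retransmit_delay_ms

    # 20 % loss, 99.9 % target delivery, 80 ms RTT, 200 ms delay budget, <= 3 FEC packets.
    print(plan_audio_retransmission(0.2, 0.999, 80, 200, 3))   # -> (4, 2, 2, 80)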
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. In the method, for a conversion between a current video block of a video and a bitstream of the video, an interweaved affine prediction and a bi-directional optical flow (BDOF) are applied for the current video block. The conversion is performed based on the applying.
H04N 19/176 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c.-à-d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant une zone de l'image, p. ex. un objet la zone étant un bloc, p. ex. un macrobloc
89.
ABNORMAL BEHAVIOR DETECTION METHOD, ELECTRONIC DEVICE AND NON-TRANSIENT COMPUTER-READABLE STORAGE MEDIUM
The present disclosure provides an abnormal behavior detection method, an electronic device and a non-transient computer-readable storage medium. The method includes: collecting a set of behaviors to be detected that belong to a same session within a preset time window; labeling each behavior in the set of behaviors to be detected, and acquiring label features of the set of behaviors to be detected according to labels of all behaviors in the set of behaviors to be detected; mapping, according to a preset embedded feature extracting model, the label features of the set of behaviors to be detected as embedded features; and determining, based on the embedded features of the set of behaviors to be detected and embedded features of a pre-acquired set of historical behaviors, whether there is abnormality in the set of behaviors to be detected.
ABNORMAL BEHAVIOR TEST METHOD, ABNORMAL BEHAVIOR TEST DEVICE, ELECTRONIC DEVICE, NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT
The present disclosure provides an abnormal behavior test method, an abnormal behavior test device, an electronic device, a non-transitory computer-readable storage medium, and a computer program product. The abnormal behavior test method comprises: acquiring a set of behaviors to be tested pertaining to a same session within a preset time window; adding a label to each behavior in the set of behaviors to be tested, and obtaining a label feature of the set of behaviors to be tested on the basis of the labels of all of the behaviors in the set of behaviors to be tested; on the basis of a preset embedded feature extraction model, mapping the label feature of the set of behaviors to be tested into an embedded feature; on the basis of the embedded feature of the set of behaviors to be tested and an embedded feature of a historical set of behaviors obtained in advance, determining whether the set of behaviors to be tested is abnormal.
The present disclosure relates to a method and an apparatus for controlling a code rate of live streaming, an electronic device and a storage medium. The method includes: obtaining code rate information of a live streaming stream push from an application layer of a terminal; obtaining a target bandwidth estimation value determined by a transport layer of the terminal; adjusting the target bandwidth estimation value based on the code rate information, so as to obtain an adjusted target bandwidth estimation value; and determining a target code rate of the terminal based on the adjusted target bandwidth estimation value.
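A minimal sketch of the adjustment, assuming the transport-layer bandwidth estimate is clamped to a band around the application-layer push bitrate and a safety margin is then applied; the clamp factors and margin are illustrative assumptions.

    def target_bitrate_kbps(app_push_kbps, bwe_kbps, low=0.8, high=1.5, margin=0.9):
        # Clamp the transport-layer estimate using the application-layer push bitrate.
        adjusted_bwe = min(max(bwe_kbps, low * app_push_kbps), high * app_push_kbps)
        # Leave headroom when picking the target code rate.
        return int(adjusted_bwe * margin)

    print(target_bitrate_kbps(app_push_kbps=3000, bwe_kbps=5200))   # -> 4050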
H04N 21/2662 - Contrôle de la complexité du flux vidéo, p. ex. en mettant à l'échelle la résolution ou le débit binaire du flux vidéo en fonction des capacités du client
H04N 21/2343 - Traitement de flux vidéo élémentaires, p. ex. raccordement de flux vidéo ou transformation de graphes de scènes du flux vidéo codé impliquant des opérations de reformatage de signaux vidéo pour la distribution ou la mise en conformité avec les requêtes des utilisateurs finaux ou les exigences des dispositifs des utilisateurs finaux
92.
METHOD AND APPARATUS FOR GENERATING DISPLAY IMAGE OF TARGET OBJECT, DEVICE, AND STORAGE MEDIUM
The present disclosure relates to the technical field of image processing, and provides a method and apparatus for generating a display image of a target object, a device, and a storage medium. The method comprises: obtaining an original image of a target object, and extracting a main body area image of the target object from the original image; obtaining description information of the target object, and on the basis of the description information, generating a candidate background image suitable for the target object; and fusing the main body area image and the candidate background image to generate a display image of the target object.
A plug-in configuration method and apparatus, and a device and a storage medium. The method comprises: presenting a management panel in a development interface of a plug-in, wherein the management panel is used for displaying or editing metadata of at least one function associated with the plug-in (210); in response to a request to deploy the plug-in, on the basis of the metadata, generating description information corresponding to a target application platform (220); and providing the description information to the target application platform, so as to set at least one configuration parameter of the plug-in in the target application platform (230). Therefore, the efficiency of plug-in configuration is improved.
The present disclosure relates to a data query method and apparatus, and a medium and an electronic device. The method comprises: in response to a structured query statement being received, determining, from among reference cardinality estimation models which are loaded by a database engine and respectively correspond to storage units, a target cardinality estimation model which corresponds to the structured query statement, wherein the reference cardinality estimation models are obtained by means of training outside the database engine; and estimating the cardinality of the structured query statement by using the target cardinality estimation model.
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: determining, for a conversion between a video unit of a video and a bitstream of the video, that one or more parameters of a cross-component residual model (CCRM) of the video unit are inherited from a previous filter-based coded block, wherein the CCRM is a filter model including a cross-component model or a same-component model; and performing the conversion based on the CCRM.
H04N 19/186 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c.-à-d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant une couleur ou une composante de chrominance
H04N 19/52 - Traitement de vecteurs de mouvement par encodage par encodage prédictif
96.
METHOD, DEVICE, AND STORAGE MEDIUM FOR SCHEDULING A DISTRIBUTED DATABASE
The embodiments provide a method, device, and storage medium for scheduling a distributed database. Load states of respective storage units in a first storage node in a distributed database are detected by any of a plurality of management devices of the distributed database, and a hotspot partition of the first storage node is determined based on the load states; a replica splitting task for a target replica stored in the hotspot partition is generated, the replica splitting task comprising the target number of subreplicas of the target replica after splitting and migration target addresses of the subreplicas; the replica splitting task is executed by invoking a thread in a thread pool, to split the target replica into the subreplicas based on the target number of the subreplicas and migrate at least one of the subreplicas based on the migration target addresses of the subreplicas.
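The scheduling step can be pictured as below: when a partition's load crosses a threshold, a split task records how many sub-replicas to create and where each one migrates. The threshold, the round-robin target choice and the field names are assumptions for illustration.

    def make_split_task(partition_id, load, candidate_nodes, threshold=0.8, subreplicas=2):
        if load < threshold:
            return None   # not a hotspot partition
        return {
            "partition": partition_id,
            "subreplica_count": subreplicas,
            "migration_targets": [candidate_nodes[i % len(candidate_nodes)]
                                  for i in range(subreplicas)],
        }

    print(make_split_task("p17", load=0.93, candidate_nodes=["node-b", "node-c"]))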
G06F 16/27 - Réplication, distribution ou synchronisation de données entre bases de données ou dans un système de bases de données distribuéesArchitectures de systèmes de bases de données distribuées à cet effet
G06F 16/21 - Conception, administration ou maintenance des bases de données
97.
METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING
Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: determining, for a conversion between a video unit of a video and a bitstream of the video, decoder-side intra mode derivation (DIMD) information of one or more neighboring video units of the video unit, wherein the one or more neighboring video units are coded with a coding mode which is different from a DIMD mode or a DIMD merge mode; and performing the conversion based on the DIMD information.
The present disclosure relates to the technical field of electronics, and relates to an information transfer method and apparatus, a medium, and an electronic device. The information transfer method comprises: importing a PRD into a PRD layer, importing IDL information into an IDL method layer and an IDL structure layer, and importing a database table structure into a data layer; using the name of a structure in the IDL structure layer and the name of a field in the structure as keywords for searching, and matching a data structure in the data layer with the structure in the IDL structure layer to obtain a first matching result; using the name of the matched structure in the first matching result and the name of the field in the structure as keywords for searching, and matching the matched structure in the IDL structure layer with an entity in the PRD layer to obtain a second matching result; and transmitting a primary key and unique key information in the data layer to a front-end code layer on the basis of the first and second matching results.
09 - Appareils et instruments scientifiques et électriques
41 - Éducation, divertissements, activités sportives et culturelles
Produits et services
Business management assistance; sales promotion for others; marketing; provision of an online marketplace for buyers and sellers of goods and services; personnel management consultancy; updating and maintenance of data in computer databases Computer software applications, downloadable; mobile phone software applications, downloadable; computer game software, downloadable; Downloadable smartphone game software; video game and computer game applications; virtual reality game software; computer software platforms, recorded or downloadable; downloadable application software for virtual environments; downloadable computer software in the areas of movies, TV shows, music, magazines, books, audiobooks, apps, games, cloud computing, and the Internet; downloadable software for streaming audiovisual and multimedia content via the internet and global communications networks; downloadable mobile applications for accessing and streaming audiovisual and multimedia content via the internet and global communications networks; Downloadable entertainment software for mobile devices via the Internet and wireless devices; downloadable computer software for the transmission and display of text, images and sound for e-readers; downloadable entertainment software for viewing and interacting with audiovisual and multimedia entertainment content related to television series and films; downloadable computer software for purchasing, accessing and viewing movies, television programs, videos and multimedia content; downloadable computer software enabling content, text, visual works, audio works, audiovisual works, literary works, data, files, documents, and electronic works to be downloaded to and accessed on a computer or other portable consumer electronic device; downloadable computer software for formatting and converting content, text, visual works, audio works, audiovisual works, literary works, data, files, documents, and electronic works into a format compatible with portable electronic devices and computers; Speech synthesizer with audio books; Earphone; headphones; dictating machines; covers for smartphones; apparatus for recording, transmission or reproduction of sound or images; television apparatus; virtual reality headsets; loudspeakers; video screens Entertainment services; Providing online non-downloadable comics and graphic novels; providing online electronic publications, not downloadable; entertainment services, namely, providing online music, not downloadable; entertainment services, namely, providing online videos, not downloadable; entertainment services, namely, providing online images, not downloadable; entertainment services, namely, providing films, not downloadable, via video-on-demand services; entertainment services, namely, rental of electronic book readers; Online library services, namely, electronic library services featuring newspapers, magazines, photographs and pictures through online computer networks; publication of texts, other than publicity texts; online publication of electronic books and journals; entertainment services, namely, providing multimedia entertainment content in the field of science fiction, comic books, film, television, television and film characters, music, and celebrities; entertainment services, namely, providing podcasts in the field of entertainment and entertainment information; entertainment services, namely, providing non-downloadable films and television shows via a video-on-demand transmission service; entertainment services, namely, providing on-line, 
non-downloadable video clips, photography, music, data, and visual effects; entertainment services in the nature of providing television series and series of motion pictures for distribution via the internet and video-on-demand; providing entertainment services via a global communication network in the nature of online games and websites featuring a wide variety of general interest entertainment information relating to video games, motion picture films, television show programs, musical videos, related film clips, photographs, and other multimedia materials; entertainment services, namely, provision of online non-downloadable digital collectibles in the nature of artwork, video clips, and image files relating to audiovisual entertainment; entertainment services in the nature of development, creation, production, distribution, and post-production of motion picture films, television shows, special events, and multimedia entertainment content; entertainment services, namely, providing virtual environments in which users can interact for recreational, leisure or entertainment purposes; entertainment services, namely, providing game services provided online from a computer network; entertainment services, namely, providing temporary use of non-downloadable interactive games; entertainment services, namely, providing interactive, multiplayer game services for games played over the internet; entertainment services, namely, providing amusement park services; entertainment services, namely, production of podcasts; entertainment services, namely, video production; entertainment services, namely, film production; production of radio and television programmes; entertainment services, namely, production and distribution of radio programs and sound recordings; instruction services; organization of competitions (education or entertainment); mobile multimedia library services; television entertainment; entertainment services, namely, production of music; toy rental; entertainment services, namely, rental of humanoid robots having communication and learning functions for entertaining people
100.
NEURAL-NETWORK POST-FILTER ON VALUE RANGES AND CODING METHODS OF SYNTAX ELEMENTS
A mechanism for processing video data is disclosed. The mechanism includes determining that a value of a neural-network post-filter characteristics (NNPFC) input format indicator (nnpfc_inp_format_idc) is in a range of 0 to N, inclusive, where N is a positive integer. A conversion is performed between a visual media data and a bitstream based on the NNPFC input format indicator.
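A bitstream-conformance check for such a constrained syntax element can be as small as the sketch below; the upper bound N = 1 is only a placeholder, since the abstract states merely that N is a positive integer.

    def check_nnpfc_inp_format_idc(value, n=1):
        # The indicator must lie in the inclusive range [0, N].
        if not 0 <= value <= n:
            raise ValueError(f"nnpfc_inp_format_idc = {value} is outside [0, {n}]")
        return value

    check_nnpfc_inp_format_idc(1)   # passes; 2 would raise for N = 1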
H04N 19/85 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le pré-traitement ou le post-traitement spécialement adaptés pour la compression vidéo