A method including: extracting a set of video features representing properties of a video segment; generating a set of bitrate-resolution pairs based on the set of video features, each bitrate-resolution pair in the set of bitrate-resolution pairs defining a bitrate and defining a resolution estimated to maximize a quality score characterizing the video segment encoded at the bitrate; accessing a distribution of audience bandwidths; selecting a top bitrate-resolution pair in the set of bitrate-resolution pairs; selecting a bottom bitrate-resolution pair in the set of bitrate-resolution pairs; selecting a subset of bitrate-resolution pairs in the set of bitrate-resolution pairs based on the distribution of audience bandwidths, the subset of bitrate-resolution pairs defining bitrates less than the top bitrate and greater than the bottom bitrate; and generating an encoding ladder for the video segment comprising the top bitrate-resolution pair, the bottom bitrate-resolution pair, and the subset of bitrate-resolution pairs.
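The following Python sketch illustrates the ladder-assembly step described in this abstract: the lowest and highest candidate bitrate-resolution pairs bound the ladder, and intermediate rungs are chosen near audience bandwidth percentiles. The helper names, rung count, and percentile choices are assumptions for illustration, not taken from the claim.

# Illustrative only: assembles an encoding ladder from candidate
# bitrate-resolution pairs and an audience bandwidth distribution.
# Pair generation from video features is assumed to happen upstream.
def build_ladder(pairs, audience_bandwidths_kbps, rungs=4):
    """pairs: iterable of (bitrate_kbps, resolution)."""
    pairs = sorted(pairs)                      # ascending by bitrate
    bottom, top = pairs[0], pairs[-1]
    bw = sorted(audience_bandwidths_kbps)
    middle = []
    for q in (0.25, 0.50, 0.75)[: max(rungs - 2, 0)]:
        target = bw[int(q * (len(bw) - 1))]    # bandwidth percentile to cover
        between = [p for p in pairs if bottom[0] < p[0] < top[0]]
        if between:
            middle.append(min(between, key=lambda p: abs(p[0] - target)))
    return [bottom] + sorted(set(middle)) + [top]

ladder = build_ladder(
    [(300, "416x234"), (800, "640x360"), (1800, "1280x720"), (4500, "1920x1080")],
    audience_bandwidths_kbps=[500, 900, 1200, 2500, 3000, 6000],
)
print(ladder)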
H04N 19/149 - Data rate or code amount at the encoder output by estimating the code amount by means of a model, e.g. mathematical model or statistical model
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/77 - Processing image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/40 - Scenes; Scene-specific elements in video content
H04N 19/115 - Selection of the code volume for a coding unit prior to coding
H04N 19/139 - Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
H04N 19/14 - Coding unit complexity, e.g. amount of activity or edge presence estimation
H04N 19/177 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a group of pictures [GOP]
2.
METHODS FOR GENERATING VIDEO- AND AUDIENCE-SPECIFIC ENCODING LADDERS WITH AUDIO AND VIDEO JUST-IN-TIME TRANSCODING
A method including: populating an encoding ladder with a subset of bitrate-resolution pairs, from a set of bitrate-resolution pairs, based on a distribution of audience bandwidths; receiving, from a first device, a first request for a first playback segment of a video at a first bitrate-resolution pair in the encoding ladder; in response to determining an absence of video segments, at the first bitrate-resolution pair and corresponding to the first playback segment, in a first rendition cache: identifying a first set of mezzanine segments, in the video, corresponding to the first playback segment; assigning the first set of mezzanine segments to a set of workers for transcoding into a first set of video segments according to the first bitrate-resolution pair; storing the first set of video segments in the first rendition cache; and, based on the first request, releasing the first set of video segments to the first device.
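A minimal Python sketch of the cache-miss path described above: on a miss, the corresponding mezzanine segments are transcoded into the requested bitrate-resolution pair, cached, and released. The transcode placeholder and in-memory cache are stand-ins, not the patented components.

# Illustrative only: just-in-time transcoding against a rendition cache.
rendition_cache = {}            # (segment_id, bitrate_kbps, resolution) -> bytes

def transcode_segment(mezzanine: bytes, bitrate_kbps: int, resolution: str) -> bytes:
    return mezzanine            # placeholder for a real encoder invocation

def serve_segment(segment_id, bitrate_kbps, resolution, mezzanine_store):
    key = (segment_id, bitrate_kbps, resolution)
    if key not in rendition_cache:                       # absence in the cache
        mezzanine_segments = mezzanine_store[segment_id]
        rendition_cache[key] = b"".join(
            transcode_segment(m, bitrate_kbps, resolution)
            for m in mezzanine_segments
        )
    return rendition_cache[key]                          # release to the device

store = {"seg-0001": [b"gop-a", b"gop-b"]}
print(serve_segment("seg-0001", 800, "640x360", store))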
H04N 21/2662 - Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
3.
METHODS FOR IDENTIFIER-BASED VIDEO STREAMING AND SESSIONIZATION
A method includes, during an initial time: receiving a manifest request, for a video, from a device associated with a first address; generating an identifier associated with the first address; generating a manifest defining a set of segments available for the video; and serving the manifest. The method further includes, during a first time: receiving a first request for a first segment, in the set of segments, the first request associated with the first address and the identifier; and based on association between the first address and the identifier, serving the first segment to the first address. The method also includes, during a second time: receiving a second request for a second segment in the set of segments, the second request associated with a second address and the identifier; and based on disassociation of the second address and the identifier, withholding delivery of the second segment to the second address.
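A small Python sketch of the identifier check described above: segments are served only while the identifier remains associated with the requesting address. The token format and in-memory session store are assumptions for illustration.

# Illustrative only: identifier-based sessionization gate.
import secrets

sessions = {}                                  # identifier -> address

def issue_manifest(address):
    identifier = secrets.token_hex(8)
    sessions[identifier] = address
    return identifier, ["seg-1.ts", "seg-2.ts"]  # manifest of available segments

def serve_segment(identifier, address, segment):
    if sessions.get(identifier) == address:
        return f"serving {segment} to {address}"
    return "withheld: identifier is not associated with this address"

ident, manifest = issue_manifest("203.0.113.7")
print(serve_segment(ident, "203.0.113.7", manifest[0]))   # served
print(serve_segment(ident, "198.51.100.9", manifest[1]))  # withheld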
A video monitoring system can include multiple collectors to receive video beacon data from multiple video monitoring interface modules. At least one beacon stream is connected to receive data from multiple collectors. A processing module receives the beacon stream and provides a real-time event stream used for real-time data analysis and a video view stream used for long-term data analysis.
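A reduced Python model of the pipeline described above: collectors feed a merged beacon stream, which a processing step fans out into a real-time event stream and a long-term video-view stream. Beacon fields and the roll-up rule are invented for this sketch.

# Illustrative only: collector -> beacon stream -> processing module.
def merge_collectors(*collectors):
    for collector in collectors:
        yield from collector                    # one combined beacon stream

def process(beacon_stream):
    realtime_events, video_views = [], []
    for beacon in beacon_stream:
        realtime_events.append({"type": beacon["event"], "ts": beacon["ts"]})
        if beacon["event"] == "view_end":
            video_views.append(beacon)          # kept for long-term analysis
    return realtime_events, video_views

collector_a = [{"event": "play", "ts": 1}, {"event": "view_end", "ts": 9}]
collector_b = [{"event": "rebuffer", "ts": 4}]
events, views = process(merge_collectors(collector_a, collector_b))
print(len(events), len(views))                  # 3 1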
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/4425 - Monitoring of client processing errors or hardware failure
H04N 21/647 - Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load or bridging between two different networks, e.g. between IP and wireless
5.
SYSTEM AND METHOD FOR REMOVING COPYRIGHTED MATERIAL FROM A STREAMING PLATFORM
A method including: monitoring a set of streaming metrics for a video stream during a set of time intervals of a first duration within a first time window; in response to a first streaming metric, executing an image classification model based on a set of image frames in the video stream to characterize the image frames according to a set of tags; retrieving a content manifest associated with a content type of the video stream, the content manifest defining a set of target concepts related to the content type; deriving a difference between the set of tags and the set of target concepts in the content manifest to compute a match score for the video stream; in response to the match score exceeding a threshold score, flagging the video stream for manual authentication; and, in response to receiving an abuse confirmation from an operator, removing the video stream from the streaming platform.
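A Python sketch of the tag-versus-manifest comparison described above; the classifier is omitted, and the scoring rule and threshold are assumptions, not the patented metric.

# Illustrative only: score mismatch between classifier tags and manifest concepts.
def match_score(frame_tags, target_concepts):
    tags, targets = set(frame_tags), set(target_concepts)
    if not tags:
        return 0.0
    return len(tags - targets) / len(tags)      # share of off-manifest tags

tags = ["soccer", "stadium", "scoreboard", "movie-credits"]
manifest_concepts = ["soccer", "stadium", "crowd", "scoreboard"]
score = match_score(tags, manifest_concepts)
if score > 0.2:
    print(f"flagged for manual authentication (score={score:.2f})")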
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
H04N 21/454 - Content filtering, e.g. blocking advertisements
6.
METHOD FOR CLIENT-SIDE, ON-EDGE JUST-IN-TIME TRANSCODING OF VIDEO CONTENT
A method includes: accessing a video in a passthrough rendition encoded according to a passthrough bitrate and a passthrough resolution; and segmenting the video. The method further includes transmitting a first passthrough segment to a first device in response to receiving a first request for a first playback segment of the video in the passthrough rendition from the first device, the first playback segment corresponding to the first passthrough segment. The method also includes, in response to receiving a second request for the first playback segment of the video in a first rendition from a second device, the first rendition defining a first bitrate below the passthrough bitrate and a first resolution below the passthrough resolution: transcoding the first passthrough segment into a first rendition segment in the first rendition according to the first bitrate and the first resolution; and transmitting the first rendition segment to the second device.
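A minimal Python dispatch for the behavior described above: the passthrough segment is served as-is when the passthrough rendition is requested and is down-converted just in time otherwise. The transcode call is a placeholder, not an actual encoder.

# Illustrative only: passthrough delivery versus on-edge down-conversion.
PASSTHROUGH = {"bitrate_kbps": 6000, "resolution": "1920x1080"}

def transcode(segment: bytes, rendition: dict) -> bytes:
    return segment[: len(segment) // 2]         # stand-in for re-encoding

def handle_request(segment: bytes, requested: dict) -> bytes:
    if requested == PASSTHROUGH:
        return segment                          # no transcode needed
    assert requested["bitrate_kbps"] <= PASSTHROUGH["bitrate_kbps"]
    return transcode(segment, requested)

print(len(handle_request(b"\x47" * 1000, {"bitrate_kbps": 1800, "resolution": "1280x720"})))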
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/231 - Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers or prioritizing data for deletion
H04N 21/239 - Interfacing the upstream path of the transmission network, e.g. prioritizing client requests
H04N 21/25 - Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication or learning user preferences for recommending movies
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
7.
METHOD FOR DYNAMIC SELECTION OF A CONTENT DELIVERY NETWORK
A method includes, at a first time: receiving a request for video content from a first user; generating a fingerprint for the first user; associating the first user with a first user population—assigned to a first CDN and receiving the video content from the first CDN during a first time period—based on the fingerprint; and accessing a first metric for distribution of video content from the first CDN to users of the first user population. The method also includes, at a second time: selecting a second user within the first user population; identifying a second CDN distinct from the first CDN; reassigning the second user to the second CDN; accessing a second metric for distribution of the video content from the second CDN to the second user; and, in response to the second metric exceeding the first metric, reassigning the first user to the second CDN.
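A Python sketch of the probe-and-reassign loop described above: a fingerprint hashes a user into a population, one probe user is moved to a candidate CDN, and the assignment follows only if the probe metric improves. CDN names and metric values are invented.

# Illustrative only: population assignment by fingerprint and CDN reassignment.
import hashlib

def population_for(user_id: str, n_populations: int = 10) -> int:
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % n_populations       # fingerprint -> population

population_cdn = {}                               # population -> assigned CDN
metrics = {"cdn-a": 0.82, "cdn-b": 0.91}          # e.g. rebuffer-free session ratio

pop = population_for("viewer-1234")
current = population_cdn.get(pop, "cdn-a")
probe = "cdn-b"                                    # second CDN for the probe user
if metrics[probe] > metrics[current]:
    population_cdn[pop] = probe                    # reassign the population
print(population_cdn)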
H04N 21/258 - Client or end-user data management, e.g. managing client capabilities, user preferences or demographics or processing of multiple end-users preferences to derive collaborative data
8.
METHOD FOR ON-DEMAND VIDEO EDITING AT TRANSCODE-TIME IN A VIDEO STREAMING SYSTEM
A method includes: receiving a script configured to modify an audio-video file; calculating a performance metric based on execution of the script on a set of test files; classifying the script as performant based on the performance metric; defining a metadata store associated with the script and the audio-video file; receiving a playback request specifying a rendition of the audio-video file from a computational device; in response to receiving the playback request: accessing a set of data inputs from the metadata store; executing the script on a frame of the audio-video file based on the set of data inputs to generate a modified frame of the audio-video file; transcoding the modified frame of the audio-video file into the rendition to generate an output frame of the audio-video file; and transmitting the output frame of the audio-video file to the computational device for playback at the computational device.
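A small Python sketch of the gating step described above: a user-supplied edit script is benchmarked on test frames and, if performant, applied per frame before transcoding. The time budget, script signature, and watermark example are assumptions.

# Illustrative only: benchmark an edit script before running it at transcode time.
import time

def is_performant(script, test_frames, budget_ms_per_frame=5.0):
    start = time.perf_counter()
    for frame in test_frames:
        script(frame, {})
    per_frame_ms = (time.perf_counter() - start) * 1000 / max(len(test_frames), 1)
    return per_frame_ms <= budget_ms_per_frame

def watermark(frame: bytes, metadata: dict) -> bytes:   # example edit script
    return frame + b"|wm"

if is_performant(watermark, [b"frame"] * 100):
    modified = watermark(b"frame-0001", {"title": "demo"})
    print(f"transcoding modified frame of {len(modified)} bytes")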
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
G11B 27/031 - Electronic editing of digitised analogue information signals, e.g. audio or video signals
H04N 21/231 - Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers or prioritizing data for deletion
H04N 21/235 - Processing of additional data, e.g. scrambling of additional data or processing content descriptors
H04N 21/239 - Interfacing the upstream path of the transmission network, e.g. prioritizing client requests
H04N 21/8358 - Generation of protective data, e.g. certificates involving watermark
H04N 21/858 - Linking data to content, e.g. by linking an URL to a video object or by creating a hotspot
9.
METHOD FOR JUST-IN-TIME TRANSCODING OF BYTERANGE-ADDRESSABLE PARTS
A method including: ingesting a video segment and a set of video features of the video segment; estimating a part size distribution for the video segment based on the set of video features and a first rendition of the video segment; calculating a maximum expected part size based on a threshold percentile in the part size distribution; at a first time, transmitting, to a video player, a manifest file indicating a set of byterange-addressable parts of the video segment in the first rendition, each byterange-addressable part characterized by the maximum expected part size; at a second time, receiving a playback request for a first byterange-addressable part; transcoding the first byterange-addressable part; in response to the maximum expected part size exceeding a size of the first byterange-addressable part in the first rendition, appending padding data to the first byterange-addressable part; and transmitting the first byterange-addressable part to the video player.
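A Python sketch of the sizing and padding steps described above: the advertised part size is taken from a percentile of the estimated part-size distribution, and a just-in-time transcoded part is padded up to that size. The percentile and pad byte are assumptions.

# Illustrative only: percentile-based part sizing and padding.
def advertised_part_size(part_sizes, percentile=0.99):
    ordered = sorted(part_sizes)
    return ordered[int(percentile * (len(ordered) - 1))]

def pad_part(part: bytes, max_size: int, pad: bytes = b"\x00") -> bytes:
    if len(part) < max_size:
        part = part + pad * (max_size - len(part))   # append padding data
    return part

max_size = advertised_part_size([18_000, 21_500, 24_000, 26_200, 30_100])
print(max_size, len(pad_part(b"\x47" * 22_000, max_size)))   # 26200 26200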
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
H04N 21/2387 - Stream processing in response to a playback request from an end-user, e.g. for trick-play
H04N 21/262 - Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission or generating play-lists
10.
SYSTEM AND METHOD FOR DETECTING AND REPORTING CONCURRENT VIEWERSHIP OF ONLINE AUDIO-VIDEO CONTENT
A method includes, at a first time: receiving, from a first viewer population, a first request for a first playback segment of a video; generating a first rendition segment corresponding to the first playback segment; transmitting the first rendition segment to the first viewer population; aggregating a first set of viewership data for the first playback segment; and generating a first viewership count based on the first set of viewership data, the first viewership count corresponding to a viewership data filter. In addition, the method includes, at a second time: receiving a second request, from a second viewer population, for a second playback segment of the video; generating a second rendition segment corresponding to the first playback segment; modifying frames of the second rendition segment to include the first viewership count; and transmitting the modified second rendition segment, including the first viewership count, to the second viewer population.
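A toy Python version of the aggregation and annotation described above: viewership counts are aggregated per segment and data filter, then written into outgoing frames. The string annotation is a stand-in for burning the count into video frames.

# Illustrative only: concurrent-viewership aggregation and frame annotation.
from collections import defaultdict

views = defaultdict(set)                     # (segment, data_filter) -> viewer ids

def record_view(segment, viewer_id, data_filter="country=US"):
    views[(segment, data_filter)].add(viewer_id)

def annotate_frames(frames, segment, data_filter="country=US"):
    count = len(views[(segment, data_filter)])
    return [f"{frame}|viewers={count}" for frame in frames]

for viewer in ("v1", "v2", "v3"):
    record_view("seg-7", viewer)
print(annotate_frames(["frame-a", "frame-b"], "seg-7"))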
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/239 - Interfacing the upstream path of the transmission network, e.g. prioritizing client requests
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
A video monitoring system can include multiple collectors to receive video beacon data from multiple video monitoring interface modules. At least one beacon stream is connected to receive data from multiple collectors. A processing module receives the beacon stream and provides a real-time event stream used for real-time data analysis and a video view stream used for long-term data analysis.
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/4425 - Monitoring of client processing errors or hardware failure
H04N 21/647 - Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load or bridging between two different networks, e.g. between IP and wireless
12.
Computer system and method for broadcasting audiovisual compositions via a video platform
A method including: accessing a first configuration; accessing, from a first online video platform, a primary video stream comprising a first set of video content; accessing a secondary video stream comprising a second set of video content; at an initial time, combining the primary video stream and the secondary video stream according to a default viewing arrangement; at a first time, detecting a first trigger event in the primary video stream; in response to detecting the first trigger event, combining the primary video stream and the secondary video stream according to a first target viewing arrangement, and publishing a first composite video to a second video platform; at a second time, detecting a second trigger event in the secondary video stream; and, in response to detecting the second trigger event, combining the primary video stream and the secondary video stream according to a second target viewing arrangement.
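A Python sketch of the trigger-driven switching described above: the composition changes viewing arrangement when a trigger event is detected in either stream. Arrangement names, triggers, and the frame representation are placeholders.

# Illustrative only: switch viewing arrangements on detected trigger events.
def compose(primary_frame, secondary_frame, arrangement):
    if arrangement == "side-by-side":
        return f"[{primary_frame}|{secondary_frame}]"
    if arrangement == "picture-in-picture":
        return f"[{primary_frame}({secondary_frame})]"
    return f"[{primary_frame}]"                          # default arrangement

timeline = [(None, "default"),
            ("primary_trigger", "side-by-side"),           # first trigger event
            ("secondary_trigger", "picture-in-picture")]   # second trigger event
for trigger, arrangement in timeline:
    print(trigger, "->", compose("P-frame", "S-frame", arrangement))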
A method includes: accessing a set of errors occurring during playback of segments of a video in a first rendition within a device population during a first time period and, during a subsequent time period, in response to receiving a first request for a first segment of the video in the first rendition: deriving, from the set of errors, a first error rate associated with the first segment in the first rendition; and, in response to the first error rate falling below a threshold, serving the first segment. The method also includes, in response to receiving a second request for a second segment of the video in the first rendition: deriving, from the set of errors, a second error rate associated with the second segment in the first rendition; and in response to the second error rate exceeding the threshold, serving the second segment in a second rendition.
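A minimal Python version of the decision rule described above: per-segment error rates in the requested rendition are derived from logged errors and playback counts, and a fallback rendition is served when the rate crosses a threshold. All numbers are invented.

# Illustrative only: error-rate-based rendition fallback per segment.
errors = {("seg-3", "1080p"): 42}            # (segment, rendition) -> error count
plays = {("seg-3", "1080p"): 400, ("seg-9", "1080p"): 500}

def choose_rendition(segment, requested, fallback, threshold=0.05):
    rate = errors.get((segment, requested), 0) / max(plays.get((segment, requested), 1), 1)
    return requested if rate < threshold else fallback

print(choose_rendition("seg-9", "1080p", "720p"))   # error rate 0.0   -> 1080p
print(choose_rendition("seg-3", "1080p", "720p"))   # error rate 0.105 -> 720p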
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/239 - Interfacing the upstream path of the transmission network, e.g. prioritizing client requests
H04N 21/258 - Client or end-user data management, e.g. managing client capabilities, user preferences or demographics or processing of multiple end-users preferences to derive collaborative data
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
14.
Methods for identifier-based video streaming and sessionization
A method includes, during an initial time: receiving a manifest request, for a video, from a device associated with a first address; generating an identifier associated with the first address; generating a manifest defining a set of segments available for the video; and serving the manifest. The method further includes, during a first time: receiving a first request for a first segment, in the set of segments, the first request associated with the first address and the identifier; and based on association between the first address and the identifier, serving the first segment to the first address. The method also includes, during a second time: receiving a second request for a second segment in the set of segments, the second request associated with a second address and the identifier; and based on disassociation of the second address and the identifier, withholding delivery of the second segment to the second address.
A method including: extracting a set of video features representing properties of a video segment; generating a set of bitrate-resolution pairs based on the set of video features, each bitrate-resolution pair in the set of bitrate-resolution pairs defining a bitrate and defining a resolution estimated to maximize a quality score characterizing the video segment encoded at the bitrate; accessing a distribution of audience bandwidths; selecting a top bitrate-resolution pair in the set of bitrate-resolution pairs; selecting a bottom bitrate-resolution pair in the set of bitrate-resolution pairs; selecting a subset of bitrate-resolution pairs in the set of bitrate-resolution pairs based on the distribution of audience bandwidths, the subset of bitrate-resolution pairs defining bitrates less than the top bitrate and greater than the bottom bitrate; and generating an encoding ladder for the video segment comprising the top bitrate-resolution pair, the bottom bitrate-resolution pair, and the subset of bitrate-resolution pairs.
H04N 11/02 - Colour television systems with bandwidth reduction
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/77 - Processing image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/40 - Scenes; Scene-specific elements in video content
H04N 19/115 - Selection of the code volume for a coding unit prior to coding
H04N 19/139 - Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
H04N 19/14 - Coding unit complexity, e.g. amount of activity or edge presence estimation
H04N 19/149 - Data rate or code amount at the encoder output by estimating the code amount by means of a model, e.g. mathematical model or statistical model
H04N 19/177 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a group of pictures [GOP]
16.
Methods for generating video- and audience-specific encoding ladders with audio and video just-in-time transcoding
A method including: populating an encoding ladder with a subset of bitrate-resolution pairs, from a set of bitrate-resolution pairs, based on a distribution of audience bandwidths; receiving, from a first device, a first request for a first playback segment of a video at a first bitrate-resolution pair in the encoding ladder; in response to determining an absence of video segments, at the first bitrate-resolution pair and corresponding to the first playback segment, in a first rendition cache: identifying a first set of mezzanine segments, in the video, corresponding to the first playback segment; assigning the first set of mezzanine segments to a set of workers for transcoding into a first set of video segments according to the first bitrate-resolution pair; storing the first set of video segments in the first rendition cache; and, based on the first request, releasing the first set of video segments to the first device.
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/2662 - Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
17.
Method for audio and video just-in-time transcoding with command frames
A method includes: ingesting a video; initializing a timed command stream synchronized to the video; emulating transcoding of the video to derive a sequence of video characteristics of the video; populating the timed command stream with the sequence of video characteristics; and segmenting the video into a series of mezzanine segments. The method further includes, for each mezzanine segment in the series of mezzanine segments: retrieving instream video characteristics, in the sequence of video characteristics, contained within a first segment of the timed command stream corresponding to the mezzanine segment; retrieving upstream video characteristics, in the sequence of video characteristics, preceding the first segment of the timed command stream and informing transcoding of the mezzanine segment; transforming the instream video characteristics and the upstream video characteristics into a set of transcode commands; storing the set of transcode commands in command frames; and inserting the command frames into the mezzanine segment.
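A Python sketch of the packing step described above: transcode commands derived from instream and upstream characteristics are packed into a command frame carried with the mezzanine segment. The command fields and framing are invented for illustration.

# Illustrative only: build a command frame and attach it to a mezzanine segment.
import json

def build_command_frame(instream: dict, upstream: dict) -> bytes:
    commands = {
        "keyframe_interval": instream.get("gop_size", 48),
        "reference_pts": upstream.get("last_keyframe_pts", 0),
        "target_quantizer": 28 if instream.get("high_motion") else 23,
    }
    return b"CMD" + json.dumps(commands, sort_keys=True).encode()

mezzanine_segment = b"<mezzanine bytes>"
command_frame = build_command_frame({"gop_size": 60, "high_motion": True},
                                    {"last_keyframe_pts": 90_000})
segment_with_commands = command_frame + mezzanine_segment   # insert command frame
print(segment_with_commands[:32])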
H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
H04N 19/40 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
H04N 21/433 - Content storage operation, e.g. storage operation in response to a pause request or caching operations
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
18.
System and method for detecting and reporting concurrent viewership of online audio-video content
A method includes, at a first time: receiving, from a first viewer population, a first request for a first playback segment of a video; generating a first rendition segment corresponding to the first playback segment; transmitting the first rendition segment to the first viewer population; aggregating a first set of viewership data for the first playback segment; and generating a first viewership count based on the first set of viewership data, the first viewership count corresponding to a viewership data filter. In addition, the method includes, at a second time: receiving a second request, from a second viewer population, for a second playback segment of the video; generating a second rendition segment corresponding to the first playback segment; modifying frames of the second rendition segment to include the first viewership count; and transmitting the modified second rendition segment, including the first viewership count, to the second viewer population.
H04N 7/16 - Analogue secrecy systems; Analogue subscription systems
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/239 - Interfacing the upstream path of the transmission network, e.g. prioritizing client requests
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
19.
Method for audio and video just-in-time transcoding
A method for streaming an audio-video file can include: receiving a request for a playback segment of the audio-video file in a rendition from a computational device; and, in response to identifying absence of the playback segment in the rendition from a rendition cache and identifying absence of an assignment to transcode the playback segment in the rendition, assigning a worker to transcode the playback segment in the rendition. The method can also include, at the worker: identifying a subset of mezzanine segments, in a set of mezzanine segments, coinciding with a playback interval in the audio-video file; and, for each mezzanine segment in the subset of mezzanine segments: concurrently transcoding the mezzanine segment into a rendition segment in the rendition and transmitting the rendition segment coinciding with the playback interval to the computational device via a peer-to-peer stream; and storing the rendition segment in the rendition cache.
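A reduced Python model of the worker loop described above: each mezzanine segment overlapping the requested playback interval is transcoded, streamed to the client, and stored in the rendition cache. Streaming is reduced to a callback; names are illustrative.

# Illustrative only: worker-side transcode, stream, and cache loop.
def worker(mezzanine_segments, playback_interval, rendition, cache, send):
    start, end = playback_interval
    for seg_start, seg_end, data in mezzanine_segments:
        if seg_end <= start or seg_start >= end:
            continue                                  # outside the interval
        rendition_segment = data.lower()              # stand-in for transcoding
        send(rendition_segment)                       # stream while transcoding
        cache[(seg_start, rendition)] = rendition_segment

cache = {}
worker([(0, 4, b"SEG-A"), (4, 8, b"SEG-B"), (8, 12, b"SEG-C")],
       playback_interval=(4, 10), rendition="720p",
       cache=cache, send=lambda s: print("sent", s))
print(sorted(cache))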
H04N 7/16 - Analogue secrecy systems; Analogue subscription systems
H04N 19/136 - Incoming video signal characteristics or properties
H04N 19/40 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
H04N 19/59 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
H04N 21/231 - Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers or prioritizing data for deletion
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/239 - Interfacing the upstream path of the transmission network, e.g. prioritizing client requests
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
20.
Method for dynamic selection of a content delivery network
A method includes, at a first time: receiving a request for video content from a first user; generating a fingerprint for the first user; associating the first user with a first user population—assigned to a first CDN and receiving the video content from the first CDN during a first time period—based on the fingerprint; and accessing a first metric for distribution of video content from the first CDN to users of the first user population. The method also includes, at a second time: selecting a second user within the first user population; identifying a second CDN distinct from the first CDN; reassigning the second user to the second CDN; accessing a second metric for distribution of the video content from the second CDN to the second user; and, in response to the second metric exceeding the first metric, reassigning the first user to the second CDN.
H04N 21/258 - Client or end-user data management, e.g. managing client capabilities, user preferences or demographics or processing of multiple end-users preferences to derive collaborative data
21.
Method for just-in-time transcoding of byterange-addressable parts
A method including: ingesting a video segment and a set of video features of the video segment; estimating a part size distribution for the video segment based on the set of video features and a first rendition of the video segment; calculating a maximum expected part size based on a threshold percentile in the part size distribution; at a first time, transmitting, to a video player, a manifest file indicating a set of byterange-addressable parts of the video segment in the first rendition, each byterange-addressable part characterized by the maximum expected part size; at a second time, receiving a playback request for a first byterange-addressable part; transcoding the first byterange-addressable part; in response to the maximum expected part size exceeding a size of the first byterange-addressable part in the first rendition, appending padding data to the first byterange-addressable part; and transmitting the first byterange-addressable part to the video player.
H04N 7/173 - Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/2387 - Stream processing in response to a playback request from an end-user, e.g. for trick-play
H04N 21/262 - Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission or generating play-lists
22.
Computer system and method for broadcasting audiovisual compositions via a video platform
A method including: accessing a first configuration; accessing, from a first online video platform, a primary video stream comprising a first set of video content; accessing a secondary video stream comprising a second set of video content; at an initial time, combining the primary video stream and the secondary video stream according to a default viewing arrangement; at a first time, detecting a first trigger event in the primary video stream; in response to detecting the first trigger event, combining the primary video stream and the secondary video stream according to a first target viewing arrangement, and publishing a first composite video to a second video platform; at a second time, detecting a second trigger event in the secondary video stream; and, in response to detecting the second trigger event, combining the primary video stream and the secondary video stream according to a second target viewing arrangement.
A method including: monitoring a set of streaming metrics for a video stream during a set of time intervals of a first duration within a first time window; in response to a first streaming metric, executing an image classification model based on a set of image frames in the video stream to characterize the image frames according to a set of tags; retrieving a content manifest associated with a content type of the video stream, the content manifest defining a set of target concepts related to the content type; deriving a difference between the set of tags and the set of target concepts in the content manifest to compute a match score for the video stream; in response to the match score exceeding a threshold score, flagging the video stream for manual authentication; and, in response to receiving an abuse confirmation from an operator, removing the video stream from the streaming platform.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
H04N 21/454 - Content filtering, e.g. blocking advertisements
24.
Method for on-demand video editing at transcode-time in a video streaming system
A method includes: receiving a script configured to modify an audio-video file; calculating a performance metric based on execution of the script on a set of test files; classifying the script as performant based on the performance metric; defining a metadata store associated with the script and the audio-video file; receiving a playback request specifying a rendition of the audio-video file from a computational device; in response to receiving the playback request: accessing a set of data inputs from the metadata store; executing the script on a frame of the audio-video file based on the set of data inputs to generate a modified frame of the audio-video file; transcoding the modified frame of the audio-video file into the rendition to generate an output frame of the audio-video file; and transmitting the output frame of the audio-video file to the computational device for playback at the computational device.
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
G11B 27/031 - Electronic editing of digitised analogue information signals, e.g. audio or video signals
H04N 21/231 - Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers or prioritizing data for deletion
H04N 21/235 - Processing of additional data, e.g. scrambling of additional data or processing content descriptors
H04N 21/239 - Interfacing the upstream path of the transmission network, e.g. prioritizing client requests
H04N 21/8358 - Generation of protective data, e.g. certificates involving watermark
H04N 21/858 - Linking data to content, e.g. by linking an URL to a video object or by creating a hotspot
A video monitoring system can include multiple collectors to receive video beacon data from multiple video monitoring interface modules. At least one beacon stream is connected to receive data from multiple collectors. A processing module receives the beacon stream and provides a real-time event stream used for real-time data analysis and a video view stream used for long-term data analysis.
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/4425 - Monitoring of client processing errors or hardware failure
H04N 21/647 - Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load or bridging between two different networks, e.g. between IP and wireless
26.
Method for just-in-time transcoding of byterange-addressable parts
A method including: ingesting a video segment and a set of video features of the video segment; estimating a part size distribution for the video segment based on the set of video features and a first rendition of the video segment; calculating a maximum expected part size based on a threshold percentile in the part size distribution; at a first time, transmitting, to a video player, a manifest file indicating a set of byterange-addressable parts of the video segment in the first rendition, each byterange-addressable part characterized by the maximum expected part size; at a second time, receiving a playback request for a first byterange-addressable part; transcoding the first byterange-addressable part; in response to the maximum expected part size exceeding a size of the first byterange-addressable part in the first rendition, appending padding data to the first byterange-addressable part; and transmitting the first byterange-addressable part to the video player.
H04N 7/173 - Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/2387 - Stream processing in response to a playback request from an end-user, e.g. for trick-play
H04N 21/262 - Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission or generating play-lists
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
A video monitoring system can include multiple collectors to receive video beacon data from multiple video monitoring interface modules. At least one beacon stream is connected to receive data from multiple collectors. A processing module receives the beacon stream and provides a real-time event stream used for real-time data analysis and a video view stream used for long-term data analysis.
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/647 - Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load or bridging between two different networks, e.g. between IP and wireless
H04N 21/4425 - Monitoring of client processing errors or hardware failure
28.
Methods for generating video- and audience-specific encoding ladders with audio and video just-in-time transcoding
A method including: populating an encoding ladder with a subset of bitrate-resolution pairs, from a set of bitrate-resolution pairs, based on a distribution of audience bandwidths; receiving, from a first device, a first request for a first playback segment of a video at a first bitrate-resolution pair in the encoding ladder; in response to determining an absence of video segments, at the first bitrate-resolution pair and corresponding to the first playback segment, in a first rendition cache: identifying a first set of mezzanine segments, in the video, corresponding to the first playback segment; assigning the first set of mezzanine segments to a set of workers for transcoding into a first set of video segments according to the first bitrate-resolution pair; storing the first set of video segments in the first rendition cache; and, based on the first request, releasing the first set of video segments to the first device.
H04N 7/173 - Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
H04N 21/2662 - Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
29.
Method for on-demand video editing at transcode-time in a video streaming system
A method includes: receiving a script configured to modify an audio-video file; calculating a performance metric based on execution of the script on a set of test files; classifying the script as performant based on the performance metric; defining a metadata store associated with the script and the audio-video file; receiving a playback request specifying a rendition of the audio-video file from a computational device; in response to receiving the playback request: accessing a set of data inputs from the metadata store; executing the script on a frame of the audio-video file based on the set of data inputs to generate a modified frame of the audio-video file; transcoding the modified frame of the audio-video file into the rendition to generate an output frame of the audio-video file; and transmitting the output frame of the audio-video file to the computational device for playback at the computational device.
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
G11B 27/031 - Electronic editing of digitised analogue information signals, e.g. audio or video signals
H04N 21/231 - Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers or prioritizing data for deletion
H04N 21/235 - Processing of additional data, e.g. scrambling of additional data or processing content descriptors
H04N 21/239 - Interfacing the upstream path of the transmission network, e.g. prioritizing client requests
H04N 21/8358 - Generation of protective data, e.g. certificates involving watermark
H04N 21/858 - Linking data to content, e.g. by linking an URL to a video object or by creating a hotspot
30.
Method for on-demand video editing at transcode-time in a video streaming system
A method includes: receiving a script configured to modify an audio-video file; calculating a performance metric based on execution of the script on a set of test files; classifying the script as performant based on the performance metric; defining a metadata store associated with the script and the audio-video file; receiving a playback request specifying a rendition of the audio-video file from a computational device; in response to receiving the playback request: accessing a set of data inputs from the metadata store; executing the script on a frame of the audio-video file based on the set of data inputs to generate a modified frame of the audio-video file; transcoding the modified frame of the audio-video file into the rendition to generate an output frame of the audio-video file; and transmitting the output frame of the audio-video file to the computational device for playback at the computational device.
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
G11B 27/031 - Electronic editing of digitised analogue information signals, e.g. audio or video signals
H04N 21/231 - Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers or prioritizing data for deletion
H04N 21/235 - Processing of additional data, e.g. scrambling of additional data or processing content descriptors
H04N 21/239 - Interfacing the upstream path of the transmission network, e.g. prioritizing client requests
H04N 21/8358 - Generation of protective data, e.g. certificates involving watermark
H04N 21/858 - Linking data to content, e.g. by linking an URL to a video object or by creating a hotspot
31.
Method for generating video- and audience-specific encoding ladders
A method including: extracting a set of video features representing properties of a video segment; generating a set of bitrate-resolution pairs based on the set of video features, each bitrate-resolution pair in the set of bitrate-resolution pairs defining a bitrate and defining a resolution estimated to maximize a quality score characterizing the video segment encoded at the bitrate; accessing a distribution of audience bandwidths; selecting a top bitrate-resolution pair in the set of bitrate-resolution pairs; selecting a bottom bitrate-resolution pair in the set of bitrate-resolution pairs; selecting a subset of bitrate-resolution pairs in the set of bitrate-resolution pairs based on the distribution of audience bandwidths, the subset of bitrate-resolution pairs defining bitrates less than the top bitrate and greater than the bottom bitrate; and generating an encoding ladder for the video segment comprising the top bitrate-resolution pair, the bottom bitrate-resolution pair, and the subset of bitrate-resolution pairs.
H04N 11/02 - Colour television systems with bandwidth reduction
H04N 19/149 - Data rate or code amount at the encoder output by estimating the code amount by means of a model, e.g. mathematical model or statistical model
H04N 19/115 - Selection of the code volume for a coding unit prior to coding
H04N 19/177 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a group of pictures [GOP]
H04N 19/139 - Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
H04N 19/14 - Coding unit complexity, e.g. amount of activity or edge presence estimation
G06V 20/40 - Scenes; Scene-specific elements in video content
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/77 - Processing image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
32.
Method for generating video- and audience-specific encoding ladders
A method including: extracting a set of video features representing properties of a video segment; generating a set of bitrate-resolution pairs based on the set of video features, each bitrate-resolution pair in the set of bitrate-resolution pairs defining a bitrate and defining a resolution estimated to maximize a quality score characterizing the video segment encoded at the bitrate; accessing a distribution of audience bandwidths; selecting a top bitrate-resolution pair in the set of bitrate-resolution pairs; selecting a bottom bitrate-resolution pair in the set of bitrate-resolution pairs; selecting a subset of bitrate-resolution pairs in the set of bitrate-resolution pairs based on the distribution of audience bandwidths, the subset of bitrate-resolution pairs defining bitrates less than the top bitrate and greater than the bottom bitrate; and generating an encoding ladder for the video segment comprising the top bitrate-resolution pair, the bottom bitrate-resolution pair, and the subset of bitrate-resolution pairs.
H04N 11/02 - Colour television systems with bandwidth reduction
H04N 19/149 - Data rate or code amount at the encoder output by estimating the code amount by means of a model, e.g. mathematical model or statistical model
H04N 19/115 - Selection of the code volume for a coding unit prior to coding
H04N 19/177 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a group of pictures [GOP]
H04N 19/139 - Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
H04N 19/14 - Coding unit complexity, e.g. amount of activity or edge presence estimation
G06V 20/40 - Scenes; Scene-specific elements in video content
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/77 - Processing image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
A video monitoring system can include multiple collectors to receive video beacon data from multiple video monitoring interface modules. At least one beacon stream is connected to receive data from multiple collectors. A processing module receives the beacon stream and provides a real-time event stream used for real-time data analysis and a video view stream used for long-term data analysis.
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/4425 - Monitoring of client processing errors or hardware failure
H04N 21/647 - Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load or bridging between two different networks, e.g. between IP and wireless
34.
Method for on-demand video editing at transcode-time in a video streaming system
A method includes: receiving a script configured to modify an audio-video file; calculating a performance metric based on execution of the script on a set of test files; classifying the script as performant based on the performance metric; defining a metadata store associated with the script and the audio-video file; receiving a playback request specifying a rendition of the audio-video file from a computational device; in response to receiving the playback request: accessing a set of data inputs from the metadata store; executing the script on a frame of the audio-video file based on the set of data inputs to generate a modified frame of the audio-video file; transcoding the modified frame of the audio-video file into the rendition to generate an output frame of the audio-video file; and transmitting the output frame of the audio-video file to the computational device for playback at the computational device.
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/239 - Interfacing the upstream path of the transmission network, e.g. prioritizing client requests
H04N 21/235 - Processing of additional data, e.g. scrambling of additional data or processing content descriptors
G11B 27/031 - Electronic editing of digitised analogue information signals, e.g. audio or video signals
H04N 21/8358 - Generation of protective data, e.g. certificates involving watermark
H04N 21/231 - Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers or prioritizing data for deletion
H04N 21/858 - Linking data to content, e.g. by linking an URL to a video object or by creating a hotspot
A video monitoring system can include multiple collectors to receive video beacon data from multiple video monitoring interface modules. At least one beacon stream is connected to receive data from multiple collectors. A processing module receives the beacon stream and provides a real-time event stream used for real-time data analysis and a video view stream used for long-term data analysis.
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/4425 - Monitoring of client processing errors or hardware failure
H04N 21/647 - Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load or bridging between two different networks, e.g. between IP and wireless
36.
Method for generating video- and audience-specific encoding ladders
A method including: extracting a set of video features representing properties of a video segment; generating a set of bitrate-resolution pairs based on the set of video features, each bitrate-resolution pair in the set of bitrate-resolution pairs defining a bitrate and defining a resolution estimated to maximize a quality score characterizing the video segment encoded at the bitrate; accessing a distribution of audience bandwidths; selecting a top bitrate-resolution pair in the set of bitrate-resolution pairs; selecting a bottom bitrate-resolution pair in the set of bitrate-resolution pairs; selecting a subset of bitrate-resolution pairs in the set of bitrate-resolution pairs based on the distribution of audience bandwidths, the subset of bitrate-resolution pairs defining bitrates less than the top bitrate and greater than the bottom bitrate; and generating an encoding ladder for the video segment comprising the top bitrate-resolution pair, the bottom bitrate-resolution pair, and the subset of bitrate-resolution pairs.
H04N 11/02 - Colour television systems with bandwidth reduction
H04N 19/115 - Selection of the code volume for a coding unit prior to coding
H04N 19/177 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a group of pictures [GOP]
H04N 19/139 - Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
H04N 19/14 - Coding unit complexity, e.g. amount of activity or edge presence estimation
G06K 9/62 - Methods or arrangements for recognition using electronic means
G06V 20/40 - Scenes; Scene-specific elements in video content
A video monitoring system can include multiple collectors to receive video beacon data from multiple video monitoring interface modules. At least one beacon stream is connected to receive data from multiple collectors. A processing module receives the beacon stream and provides a real-time event stream used for real-time data analysis and a video view stream used for long-term data analysis.
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/4425 - Monitoring of client processing errors or hardware failure
H04N 21/647 - Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load or bridging between two different networks, e.g. between IP and wireless
A video monitoring system can include multiple collectors to receive video beacon data from multiple video monitoring interface modules. At least one beacon stream is connected to receive data from multiple collectors. A processing module receives the beacon stream and provides a real-time event stream used for real-time data analysis and a video view stream used for long-term data analysis.
42 - Scientific, technological and industrial services, research and design
Goods & Services
Platform as a service (PAAS) featuring computer software platforms for hosting, encoding, and streaming of video or audio content; software as a service (SAAS) featuring software for hosting, encoding, and streaming of video or audio content.
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
Software for monitoring, analyzing, and enabling online video streaming; software for real-time monitoring, analyzing, and error-tracking of video playback quality and experience; software for creating online video viewers and players.
Software as a service (SAAS) services and platform as a service (PAAS) services featuring software for monitoring, analyzing, and enabling online video streaming; software as a service (SAAS) services and platform as a service (PAAS) services featuring software for real-time monitoring, analyzing, and error-tracking of video playback quality and experience; software as a service (SAAS) services and platform as a service (PAAS) services featuring software for creating online video viewers and players.