A data processing system is disclosed that comprises a processing unit (1), (2), (3), (10) and a communications bus (5) over which bus transactions to access memory (6) can be performed. The system includes a codec (20), and the processing unit (1), (2), (3), (10) can initiate, over the communications bus (5), bus transactions in which the codec (20) accesses the memory (6).
Briefly, example methods, apparatuses, and/or articles of manufacture are disclosed that may be implemented, in whole or in part, using one or more processing devices to facilitate and/or support one or more operations and/or techniques for authenticating an identity of a subject. In particular, some embodiments are directed to techniques for authentication of an identity of a subject as being an identity of a particular unique individual based, at least in part, on involuntary responses by the subject to sensory stimuli.
A control system for an image capture device. The control system includes an image sensor comprising sensor pixels, each including at least two sub-pixels. A first and a second plurality of the sensor sub-pixels capture first and second pixel data with a first and a second exposure, respectively. An image processor receives image data derived from a first set of the sensor sub-pixels, including sub-pixels of both the first and second pluralities. On the basis of the image data, the image processor generates output data representing at least part of an output image. A focus controller receives focus data derived from a second set of the sensor sub-pixels, including at least two of the first plurality of the sensor sub-pixels. On the basis of the focus data, the focus controller generates a control signal for adjusting a focus of the image capture device. Other examples relate to image sensors.
Certain examples described herein relate to a method for detecting a tilt angle between a camera coordinate system and a world coordinate system. In one such example, the method comprises receiving an image and detecting a plurality of lines in the image, wherein each detected line has an associated angle. The method then comprises, based on at least a first set of the lines, determining at least a first parameter indicating a first representative angle corresponding to the first set. Finally, the method comprises determining the tilt angle based on at least the first parameter.
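The tilt-detection steps above can be sketched in Python. This is a minimal, illustrative implementation, not the patented method: the grouping tolerance, the choice of the median as the representative angle, and the vertical reference of 90 degrees are all assumptions for the sketch.

```python
from statistics import median

def representative_angle(line_angles, tolerance=5.0):
    """Group detected line angles (degrees) into clusters of similar
    angles and return the median of the largest cluster as the
    representative angle of that set of lines."""
    groups = []
    for a in sorted(line_angles):
        for g in groups:
            if abs(a - g[-1]) <= tolerance:
                g.append(a)
                break
        else:
            groups.append([a])
    largest = max(groups, key=len)
    return median(largest)

def tilt_angle(line_angles, reference=90.0, tolerance=5.0):
    """Estimate camera tilt as the deviation of the dominant line
    direction from an assumed reference (vertical = 90 degrees)."""
    return representative_angle(line_angles, tolerance) - reference
```

For example, if most detected lines sit around 88-89 degrees with one outlier, the estimated tilt is about -1.5 degrees.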
Certain examples described herein relate to a method for detecting a line in an image. In one such example, the method comprises identifying a candidate line break region in the image. Identifying the candidate line break region comprises identifying a first pixel of the image and a second pixel of the image, between which the candidate line break region appears, a characteristic of the first pixel and the second pixel having a predetermined similarity relationship. The method then comprises using the identified candidate line break region to assist in detecting a line in the image.
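A hypothetical sketch of the candidate-line-break idea in one image row: two "line" pixels with similar intensities bounding a short gap mark a candidate break region. The intensity threshold, similarity bound, and maximum gap are invented parameters, not values from the source.

```python
def find_break_regions(row, line_threshold=128, similarity=10, max_gap=3):
    """Scan a 1-D row of pixel intensities for candidate line break
    regions: short gaps bounded by two bright 'line' pixels whose
    intensities satisfy a similarity relationship (absolute
    difference <= similarity)."""
    line_idx = [i for i, v in enumerate(row) if v >= line_threshold]
    regions = []
    for a, b in zip(line_idx, line_idx[1:]):
        gap = b - a - 1
        if 0 < gap <= max_gap and abs(row[a] - row[b]) <= similarity:
            regions.append((a, b))  # candidate break lies between a and b
    return regions
```

A line detector could then treat pixels inside each returned region as belonging to the line despite the gap.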
A hybrid machine learning system (100, 200, 300, 400, 500, 600, 700, 800) is for processing image data (110, 210, 310, 410, 510, 610, 710, 810) obtained from an image sensor (105, 205, 305, 405, 505, 605, 705, 805). The system comprises a front end (115, 215, 315, 415, 515, 615, 715, 815) comprising one or more hard-coded filters (120, 220, 320, 420, 520, 620, 720, 820). Each of the one or more hard-coded filters is arranged to perform a set task. The system also comprises a neural network (125, 225, 325, 425, 525, 625, 725, 825) arranged to receive and process output from the front end. The one or more hard-coded filters include one or more hard-coded noise compensation filters that are hard-coded to compensate for a noise profile of the image sensor from which the image data is obtained.
Methods, apparatus and computer programs are provided for storing a set of sensor output images for multi-exposure high dynamic range processing to produce a processed image, wherein the multi-exposure high dynamic range processing combines pixel values from one or more of the sensor output images to produce the processed image. In one example, a method comprises receiving from an image sensor a first sensor output image, wherein the first sensor output image has a first exposure. The method also comprises receiving from the image sensor a second sensor output image, wherein the second sensor output image has a second exposure different from the first exposure. First image pixels of the first sensor output image having pixel values which are unlikely to make a substantial contribution to the processed image are identified, and the amount of information stored in relation to said first image pixels is reduced on the basis of said identifying step.
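One plausible instance of "pixels unlikely to contribute": saturated pixels in the longer exposure, which an HDR merge would take from the shorter exposure anyway. The sketch below coarsely quantises those pixels to cut stored information; the saturation threshold and bit depth are assumptions for illustration.

```python
def reduce_storage(long_exposure, saturation=250, keep_bits=4):
    """For 8-bit pixels in the long-exposure image that are saturated
    (and so unlikely to contribute to the HDR merge), quantise to
    fewer bits; unsaturated pixels keep full precision."""
    step = 1 << (8 - keep_bits)
    out = []
    for v in long_exposure:
        if v >= saturation:
            out.append((v // step) * step)  # coarse quantisation
        else:
            out.append(v)                   # full precision
    return out
```

In a real pipeline the reduced-precision pixels would also be flagged so the merge stage knows to prefer the other exposure there.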
A method is provided for efficiently managing the data bandwidth and storage requirements of a video camera by using an object detection engine to process video captured by an image sensor prior to compression. The method manages the video transmission bandwidth by processing the captured video, based on information extracted by the object detection engine on a frame-by-frame basis, without loss of critical information. A related computer vision system is also provided.
H04N 19/503 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
H04N 5/14 - Picture signal circuitry for video frequency region
H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
H04N 19/115 - Selection of the code volume for a coding unit prior to coding
H04N 19/117 - Filters, e.g. for pre-processing or post-processing
H04N 19/142 - Detection of scene cut or scene change
H04N 19/167 - Position within a video image, e.g. region of interest [ROI]
H04N 19/137 - Motion inside a coding unit, e.g. average field, frame or block difference
H04N 19/87 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving scene cut or scene change detection in combination with video compression
10. METHOD, SYSTEM AND COMPUTER PROGRAM PRODUCT FOR AUTOMATICALLY ALTERING A VIDEO STREAM
There is provided a method for automatically altering a digital video stream including multiple video input frames, to automatically obtain output frames with a target composition, in which frame metadata relating to objects in one or more of the video input frames is analysed on a frame-by-frame basis and used by a processor to automatically alter one or more output frames to be more similar to, or to match, the target composition, wherein cropping is performed in 3D. A related system and a related computer program product are also provided.
The invention relates to a feature extraction technique based on edge extraction. It can be used in computer vision systems, including image, facial, and object recognition systems, and in scene interpretation, classification and captioning systems. A model or profile of the noise in the sensor is used to improve feature extraction or object detection on an image from the sensor.
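A minimal sketch of noise-aware edge extraction, under an assumed sensor noise profile (read noise plus intensity-dependent shot noise); the profile parameters and the factor k are illustrative, not from the source.

```python
def noise_sigma(intensity, read_noise=2.0, gain=0.1):
    """Assumed sensor noise profile: standard deviation from a fixed
    read-noise floor plus shot noise whose variance grows with
    intensity."""
    return (read_noise ** 2 + gain * intensity) ** 0.5

def detect_edges(row, k=3.0):
    """Mark an edge between neighbouring pixels only when their
    difference exceeds k times the local noise level, suppressing
    false edges caused by sensor noise."""
    edges = []
    for i in range(len(row) - 1):
        diff = abs(row[i + 1] - row[i])
        local = 0.5 * (row[i] + row[i + 1])
        if diff > k * noise_sigma(local):
            edges.append(i)
    return edges
```

The same thresholding idea extends to 2-D gradient magnitudes in a full edge detector.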
The field of the invention relates to methods of reducing digital video flicker, and to related systems, devices and computer program products. There is provided a method of reducing human-perceivable flicker in a digital video, in which video frames are recorded unevenly with respect to time and in synchronization with a lighting flicker frequency. There is also provided a system including a camera and a computer, configured to reduce human-perceivable flicker in a video assembled by the computer from video frames recorded in this way.
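One way to read "recorded unevenly and in synchronization with the flicker frequency" is to snap each nominal frame start time to the nearest multiple of the flicker period, so every frame samples the same flicker phase. This sketch is an assumption about the mechanism, not the claimed method.

```python
def snap_frame_times(nominal_times, flicker_hz=100.0):
    """Snap nominal frame start times (seconds) to the nearest multiple
    of the lighting flicker period. Frame intervals become slightly
    uneven, but each frame sees the same flicker phase."""
    period = 1.0 / flicker_hz
    return [round(t / period) * period for t in nominal_times]
```

At 30 fps under 100 Hz flicker (50 Hz mains), nominal times 0 s, 33.3 ms, 66.7 ms become 0 s, 30 ms, 70 ms.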
A method of forming a control parameter dependent on ambient light. The method comprises the steps of acquiring light values from an ambient light sensor and acquiring positional status values from a positional status sensor. The control parameter depends on the light values and is filtered in dependence on the positional status values.
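A hypothetical sketch of filtering a light-derived control parameter in dependence on positional status: while the device is reported as moving, a smaller smoothing factor damps transient light changes. The smoothing scheme and coefficients are assumptions for illustration.

```python
def filtered_control(light_values, motion_flags,
                     alpha_still=0.5, alpha_moving=0.1):
    """Exponentially smooth ambient-light readings into a control
    parameter. When the positional status sensor reports motion,
    use a smaller smoothing factor so transient readings (e.g. a
    hand passing over the sensor) barely move the parameter."""
    value = light_values[0]
    for light, moving in zip(light_values[1:], motion_flags[1:]):
        alpha = alpha_moving if moving else alpha_still
        value += alpha * (light - value)
    return value
```

The same reading sequence produces a very different parameter depending on the motion flags, which is the point of conditioning the filter on positional status.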
G09G 3/20 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
An image sensor provides a first image with a first exposure and a second image with a second exposure. An exposure controller sets the first exposure in dependence on pixel intensities of at least one of the images. It sets the second exposure in dependence on a factor and the first exposure. The factor is determined in dependence on pixel intensities of at least one of the images.
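The two-exposure control loop above can be sketched as follows. The target mean, saturation threshold, and the rule mapping saturation fraction to the exposure factor are all invented for the sketch.

```python
def set_exposures(pixels, first_exposure, target=128.0,
                  factor_min=2.0, factor_max=16.0):
    """Adjust the first exposure toward a target mean intensity, then
    derive the second (shorter) exposure from a factor chosen from
    the fraction of near-saturated pixels: more saturation means a
    larger exposure ratio to recover highlights."""
    mean = sum(pixels) / len(pixels)
    new_first = first_exposure * target / max(mean, 1e-6)
    saturated = sum(1 for v in pixels if v >= 250) / len(pixels)
    factor = factor_min + (factor_max - factor_min) * saturated
    return new_first, new_first / factor
```

Both outputs feed back into the next frame's capture, closing the exposure-control loop.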
A method of controlling, in a display device, screen brightness and the strength of dynamic range compression of display data in dependence on ambient light level. Ambient light level values are acquired from one or more ambient light sensors. The ambient light level values are averaged using a first method to obtain a first average value, and using a second, different method to obtain a second average value. The first average value is used for controlling the screen brightness and the second average value is used for controlling the strength of the dynamic range compression.
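A minimal sketch of maintaining two different averages of the same readings, assuming (this is not stated in the source) that brightness wants a slow average to avoid visible pumping while compression strength wants a fast one:

```python
from collections import deque

class AmbientAverages:
    """Two averages of the same ambient-light stream: a slow
    exponential average (assumed to drive screen brightness) and a
    short sliding-window mean (assumed to drive DRC strength)."""

    def __init__(self, alpha=0.1, window=4):
        self.alpha = alpha
        self.slow = None
        self.recent = deque(maxlen=window)

    def update(self, lux):
        self.slow = lux if self.slow is None else (
            self.slow + self.alpha * (lux - self.slow))
        self.recent.append(lux)
        return self.slow, sum(self.recent) / len(self.recent)
```

After a sudden jump in ambient light, the window mean reacts within a few samples while the exponential average lags, which is exactly the behavioural split between the two controls.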
A display controller controls the display luminance of a display device in dependence on the ambient light level. It also controls the amount of the dynamic range compression of the input display data, the amount of compression depending on the ambient light level. The dynamic range of the display data is compressed to match the dynamic range of the display device. The dynamic range of the display device depends on the ambient light level and the display luminance.
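The dependency chain above (ambient light raises the display's effective black level, shrinking its dynamic range, which sets the compression amount) can be sketched numerically. The reflectance model and units are assumptions for illustration, not the controller's actual formula.

```python
import math

def drc_strength(content_dr_db, luminance_nits, ambient_lux,
                 reflectance=0.05):
    """Estimate how much dynamic range compression (in dB) the input
    data needs: the display's usable range is its peak luminance over
    a black level lifted by reflected ambient light; compress the
    content range down to that."""
    # reflected ambient light lifts the effective black level (nits)
    black = max(reflectance * ambient_lux / math.pi, 0.01)
    display_dr_db = 20 * math.log10(luminance_nits / black)
    return max(content_dr_db - display_dr_db, 0.0)
```

In bright surroundings the usable display range collapses and the required compression grows; in darkness no compression is needed at all.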
G09G 3/34 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix by control of light from an independent source
G09G 3/20 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
17. IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM
An image processing apparatus that is capable of performing tone mapping appropriately and efficiently is provided. The image processing apparatus for tone mapping of an image includes a histogram calculating unit for calculating histogram information from the image; a feature extraction unit for extracting feature quantities in pixels of interest; a target histogram setting unit for setting a target histogram based on the feature quantities; a tone mapping information calculating unit for calculating tone mapping information by which the histogram information is approximated by the target histogram; and a tone mapping unit for executing tone mapping on the pixels of interest using the tone mapping information.
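The "approximate the image histogram by a target histogram" step is essentially histogram matching. A coarse-binned sketch (bin count and the nearest-CDF matching rule are implementation choices, not from the source):

```python
def histogram(pixels, bins=8, vmax=256):
    """Coarse intensity histogram of 8-bit pixel values."""
    h = [0] * bins
    for v in pixels:
        h[v * bins // vmax] += 1
    return h

def matching_lut(source_hist, target_hist, vmax=256):
    """Classic histogram matching: map each source bin to the target
    bin with the nearest cumulative frequency, yielding a lookup
    table that reshapes the source histogram toward the target."""
    def cdf(h):
        total, acc, out = sum(h), 0, []
        for c in h:
            acc += c
            out.append(acc / total)
        return out
    s_cdf, t_cdf = cdf(source_hist), cdf(target_hist)
    bins = len(source_hist)
    lut = []
    for s in s_cdf:
        j = min(range(bins), key=lambda k: abs(t_cdf[k] - s))
        lut.append((j * vmax + vmax // 2) // bins)  # bin-centre value
    return lut

def tone_map(pixels, lut, bins=8, vmax=256):
    """Apply the matching LUT to the pixels of interest."""
    return [lut[v * bins // vmax] for v in pixels]
```

In the described apparatus the target histogram would come from extracted feature quantities rather than being fixed in advance.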
A tone conversion unit (109), when performing tone conversion on an image signal, provides a more appropriate tone to a feature area, such as a face, extracted from the image signal. Specifically, a correction coefficient calculation unit (110) performs multi-resolution decomposition on a specific signal representing the extracted feature area, sets all pixels of the low-frequency component to 1, and sets the high-frequency components, which are considered to include noise, to 0, so as to calculate a correction coefficient. A correction coefficient processing unit (111) then multiplies the correction coefficient with the image signal, which has similarly undergone multi-resolution decomposition.
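Keeping the low-frequency band at coefficient 1 and zeroing the high-frequency band amounts to using only the smoothed feature-area signal as the correction coefficient. A crude two-band sketch (a box filter stands in for a real multi-resolution decomposition; the blend strength is invented):

```python
def low_pass(signal, radius=1):
    """Box filter standing in for the low-frequency band of a
    multi-resolution decomposition."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def correction_coefficient(feature_mask):
    """Low band kept at weight 1, high band zeroed: the reconstructed
    coefficient is simply the smoothed feature-area mask, with the
    noisy high-frequency detail discarded."""
    return low_pass(feature_mask)

def apply_correction(image_signal, feature_mask, strength=0.5):
    """Blend a tone adjustment into the image in proportion to the
    smoothed feature-area coefficient."""
    coeff = correction_coefficient(feature_mask)
    return [v * (1.0 + strength * c) for v, c in zip(image_signal, coeff)]
```

The smoothing makes the tone adjustment fade gradually at the feature-area boundary instead of switching abruptly.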
An image processing method comprises controlling the gain of pixels of an input image, captured at an ISO value A or equivalent, according to a higher applied ISO value B or equivalent. The gain of each pixel is determined by a spatially-varying image transform, which is restricted such that the maximum gain depends on a relationship between the applied ISO value B or equivalent and the ISO value A or equivalent at which the image was captured.
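A minimal sketch of the gain restriction, assuming (as one plausible reading) that the maximum per-pixel gain is the ratio B/A of applied to capture ISO:

```python
def apply_gain_map(pixels, gain_map, iso_captured=100, iso_applied=400):
    """Apply a spatially-varying gain map to an image captured at
    iso_captured, clamping each per-pixel gain so it never exceeds
    the ratio iso_applied / iso_captured (here 400/100 = 4x)."""
    max_gain = iso_applied / iso_captured
    return [min(g, max_gain) * v for g, v in zip(gain_map, pixels)]
```

Under this clamp, a requested 8x local gain on a 100-to-400 ISO mapping is limited to 4x, bounding noise amplification by the ISO relationship.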