A system for generating motion blur comprises: a frame camera, an event camera and an accumulator for accumulating event information from a plurality of events occurring within a window around the exposure time of an image frame in a plurality of event frames. A processor determines from the events in at least a first of the plurality of event frames, one or more areas of movement within the field of view of the event camera; determines from the events in at least a second of the plurality of event frames, a direction of movement for the one or more areas of movement; and applies blur in one or more areas of the image frame corresponding to the one or more determined areas of movement in accordance with at least the direction of movement for each of the one or more areas of movement to produce a blurred image.
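As an illustration of the final blur step described in this abstract, the sketch below applies a line-shaped kernel along a determined movement direction inside the detected areas of movement. It assumes a single-channel image and SciPy's convolve; the function and parameter names are illustrative, not taken from the patent.

```python
import numpy as np
from scipy.ndimage import convolve

def apply_directional_blur(image, motion_mask, direction, length=9):
    """Blur masked areas of a grayscale `image` along `direction` (unit vector)."""
    # Build a 1-D line kernel oriented along the motion direction.
    k = np.zeros((length, length), dtype=np.float32)
    c = length // 2
    for t in range(-c, c + 1):
        x = int(round(c + t * direction[0]))
        y = int(round(c + t * direction[1]))
        k[y, x] = 1.0
    k /= k.sum()

    # Convolve the whole frame, then keep the blur only inside the
    # detected areas of movement.
    blurred = convolve(image.astype(np.float32), k, mode="nearest")
    return np.where(motion_mask, blurred, image).astype(image.dtype)
```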
Techniques for serving a manifest file of an adaptive streaming video include receiving a request for the manifest file from a user device. The video is encoded at different reference bitrates and each encoded reference bitrate is divided into segments to generate video segment files. The manifest file includes an ordered list of universal resource locators (URLs) that reference a set of video segment files encoded at a particular reference bitrate. A source manifest file that indicates the set of video segment files is identified based on the request. An issued manifest file that includes a first URL and a second URL is generated based on the source manifest file. The first URL references a first domain and the second URL references a second domain that is different from the first domain. The issued manifest file is transmitted to the user device as a response to the request.
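The URL-splitting step lends itself to a short sketch. Below, an issued manifest alternates segment URLs between two domains; the domain names, segment naming, and plain-text manifest layout are assumptions for illustration.

```python
# Hypothetical CDN domains; the abstract only requires that at least
# two URLs in the issued manifest reference different domains.
DOMAINS = ["https://cdn-a.example.com", "https://cdn-b.example.com"]

def issue_manifest(source_segments, bitrate):
    """source_segments: ordered segment file names for one reference bitrate."""
    urls = []
    for i, name in enumerate(source_segments):
        domain = DOMAINS[i % len(DOMAINS)]   # first URL on one domain, second on another
        urls.append(f"{domain}/{bitrate}/{name}")
    return "\n".join(urls)

print(issue_manifest(["seg0.ts", "seg1.ts", "seg2.ts"], "1500k"))
```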
H04L 67/02 - Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
H04N 21/222 - Secondary servers, e.g. proxy server or cable television Head-end
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/2662 - Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
H04N 21/858 - Linking data to content, e.g. by linking an URL to a video object or by creating a hotspot
4.
Network monitoring to determine performance of infrastructure service providers
Techniques for serving a manifest file of an adaptive streaming video include receiving a request for the manifest file from a user device. The video is encoded at different reference bitrates and each encoded reference bitrate is divided into segments to generate video segment files. The manifest file includes an ordered list of universal resource locators (URLs) that reference a set of video segment files encoded at a particular reference bitrate. A source manifest file that indicates the set of video segment files is identified based on the request. An issued manifest file that includes a first URL and a second URL is generated based on the source manifest file. The first URL references a first domain and the second URL references a second domain that is different from the first domain. The issued manifest file is transmitted to the user device as a response to the request.
H04L 67/02 - Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
H04L 67/146 - Markers for unambiguous identification of a particular session, e.g. session cookie or URL-encoding
H04L 67/563 - Data redirection of data network streams
H04L 67/566 - Grouping or aggregating service requests, e.g. for unified processing
H04L 69/40 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
5.
Method and system for camera motion blur reduction
A method for reducing camera motion blur comprises, before acquiring an image frame for a video stream, a camera measurement unit measuring data related to a camera module motion during a time window; determining camera module motion based on the measured data and predicting a camera motion blur during acquisition of the image frame based at least on the determined camera module motion and the lens projection model; determining whether the predicted camera motion blur exceeds a threshold; in response to determining that the predicted camera motion blur exceeds the threshold, determining a reduction of the provisional exposure time determined to acquire the image frame so that the predicted camera motion blur reaches the threshold, determining whether a corresponding increase in the provisional gain determined to acquire the image frame is below a maximum gain value, adjusting the provisional exposure time and gain, and acquiring the image frame.
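The exposure/gain trade-off in this method can be sketched in a few lines. The proportionality of blur to exposure time and all names below are illustrative assumptions, not the patented procedure itself.

```python
def adjust_exposure_for_motion(pred_blur_px, blur_threshold_px,
                               exposure_ms, gain, max_gain):
    # Blur is assumed proportional to exposure time for a given camera
    # motion, so scaling exposure by threshold/prediction brings the
    # predicted blur down to the threshold.
    if pred_blur_px <= blur_threshold_px:
        return exposure_ms, gain               # predicted blur acceptable
    scale = blur_threshold_px / pred_blur_px
    new_exposure = exposure_ms * scale         # shorter exposure
    new_gain = min(gain / scale, max_gain)     # compensate brightness, capped
    return new_exposure, new_gain
```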
A decompression apparatus comprises a number of stages including: a first stage which always reads a binary symbol from a first stage indicator file for each symbol which is to be decoded; one or more mid stages which conditionally read a binary symbol from successive indicator files based on the value of the last symbol read from a previous indicator file; and a final stage which conditionally reads a symbol from a reduced file based on the value of the last symbol read from the last stage indicator file.
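A minimal sketch of the staged decode, assuming the indicator bit value 1 means "read the next stage" and that a 0 at any stage yields a default symbol; these semantics and all names are illustrative assumptions.

```python
def decompress(stage_indicators, reduced, default=0):
    """stage_indicators: iterators over binary indicator files, first
    stage first; reduced: iterator over the reduced file."""
    out = []
    try:
        while True:
            bit = next(stage_indicators[0])      # first stage: always read
            for mid in stage_indicators[1:]:
                if bit == 0:
                    break
                bit = next(mid)                  # mid stages: conditional read
            # Final stage: conditionally read from the reduced file.
            out.append(next(reduced) if bit else default)
    except StopIteration:
        return out
```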
A method for reducing camera motion blur comprises, before acquiring an image frame for a video stream, a camera measurement unit measuring data related to a camera module motion during a time window; determining camera module motion based on the measured data and predicting a camera motion blur during acquisition of the image frame based at least on the determined camera module motion and the lens projection model; determining whether the predicted camera motion blur exceeds a threshold; in response to determining that the predicted camera motion blur exceeds the threshold, determining a reduction of the provisional exposure time determined to acquire the image frame so that the predicted camera motion blur reaches the threshold, determining whether a corresponding increase in the provisional gain determined to acquire the image frame is below a maximum gain value, adjusting the provisional exposure time and gain, and acquiring the image frame.
A method at a first participant's client conferencing system in a videoconference comprises receiving, from a second client conferencing system, at least one first video frame of a first video signal including an image of a second participant looking at a third participant, and first metadata associated with the first video frame and including an identity of the third participant. The image of the second participant is modified in the first video frame so that the first video frame is displayed on a first area of a display of the client conferencing system with the second participant looking at a second area of the display configured for displaying a second video signal of the third participant identified by the first metadata.
A method for generating a composite image comprises: detecting a color temperature of a background image; acquiring from a camera, through an image signal processor (ISP) performing white balance correction of acquired image data, an image including a foreground region containing a face of a user; and detecting a color temperature of the foreground region. Responsive to the color temperature for the foreground region differing from that of the background image by more than a threshold amount, a color temperature for white balance correction of a subsequently acquired image is set which causes skin pixels within the foreground region of the subsequently acquired image to have a color temperature closer to that of the background image. Pixel values of the foreground region are combined with pixel values of the background image corresponding to a background region of the acquired image to provide the composite image.
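The decision logic can be sketched as follows, assuming colour temperatures in kelvin, a hypothetical 500 K threshold, and a partial pull toward the background temperature; none of these constants come from the abstract.

```python
import numpy as np

def choose_foreground_wb(fg_temp_k, bg_temp_k, threshold_k=500):
    # If the foreground colour temperature is within the threshold of the
    # background's, keep the current white balance; otherwise pull it
    # part of the way toward the background (fraction is illustrative).
    if abs(fg_temp_k - bg_temp_k) <= threshold_k:
        return fg_temp_k
    return fg_temp_k + 0.7 * (bg_temp_k - fg_temp_k)

def composite(fg, bg, fg_mask):
    """Combine foreground-region pixels with background-image pixels."""
    return np.where(fg_mask[..., None], fg, bg)
```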
H04N 23/88 - Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
A method of producing an image frame from event packets received from an event camera comprises: forming a tile buffer sized to accumulate event information for a subset of image tiles, the tile buffer having an associated tile table that determines a mapping between each tile of the image frame for which event information is accumulated in the tile buffer and the image frame. For each event packet: an image tile corresponding to the pixel location of the event packet is identified; responsive to the tile buffer storing information for one other event corresponding to the image tile, event information is added to the tile buffer; and responsive to the tile buffer not storing information for another event corresponding to the image tile and responsive to the tile buffer being capable of accumulating event information for at least one more tile, the image tile is added to the tile buffer.
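A minimal sketch of the tile buffer and its tile table, assuming fixed-size square tiles and a simple per-tile event list; the flush policy and all names are illustrative assumptions.

```python
class TileBuffer:
    """Accumulates per-tile events for at most `capacity` tiles; the tile
    table maps a tile's frame coordinates to its buffer slot."""
    def __init__(self, capacity, tile_size=32):
        self.capacity = capacity
        self.tile_size = tile_size
        self.tile_table = {}   # (tile_x, tile_y) -> slot index
        self.slots = []        # per-slot accumulated events

    def add_event(self, x, y, polarity):
        tile = (x // self.tile_size, y // self.tile_size)
        if tile in self.tile_table:                  # tile already tracked
            self.slots[self.tile_table[tile]].append((x, y, polarity))
            return True
        if len(self.tile_table) < self.capacity:     # room for one more tile
            self.tile_table[tile] = len(self.slots)
            self.slots.append([(x, y, polarity)])
            return True
        return False  # buffer full: caller must flush before accumulating more
```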
A method comprises displaying a first image acquired from a camera having an input camera projection model including a first focal length and an optical axis parameter value. A portion of the first image is selected as a second image associated with an output camera projection model in which the focal length and/or the optical axis parameter value differs from the corresponding parameter of the input camera projection model. The method involves iteratively: adjusting the focal length and/or the optical axis parameter value for the camera lens so that it approaches the corresponding value of the output camera projection model; acquiring a subsequent image using the adjusted focal length or optical axis parameter value; mapping pixel coordinates in the second image, through a normalized 3D coordinate system, to respective locations in the subsequent image to determine respective values for the pixel coordinates; and displaying the second image.
A method of tracking an object across a stream of images comprises determining a region of interest (ROI) bounding the object in an initial frame of an image stream. A HOG map is provided for the ROI by: dividing the ROI into an array of M×N cells, each cell comprising a plurality of image pixels; and determining a HOG for each of the cells. The HOG map is stored as indicative of the features of the object. Subsequent frames are acquired from the stream of images. The frames are scanned ROI by ROI to identify a candidate ROI having a HOG map best matching the stored HOG map features. If the match meets a threshold, the stored HOG map indicative of the features of the object is updated according to the HOG map for the best matching candidate ROI.
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06F 18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
G06N 3/04 - Architecture, e.g. interconnection topology
G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
G06T 7/269 - Analysis of motion using gradient-based methods
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
A neural network image processing apparatus arranged to acquire images from an image sensor and to: identify a ROI containing a face region in an image; determine a plurality of facial landmarks in the face region; use the facial landmarks to transform the face region within the ROI into a face region having a given pose; and use transformed landmarks within the transformed face region to identify a pair of eye regions within the transformed face region. Each identified eye region is fed to a respective first and second convolutional neural network, each network configured to produce a respective feature vector. Each feature vector is fed to a respective eyelid opening level neural network to obtain respective measures of eyelid opening for each eye region. The feature vectors are combined and fed to a gaze angle neural network to generate gaze yaw and pitch values substantially simultaneously with the eyelid opening values.
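A sketch of the described arrangement in PyTorch: two per-eye convolutional feature extractors, per-eye eyelid-opening heads, and a combined gaze head emitting yaw and pitch. Layer sizes and names are illustrative assumptions, not the patented network.

```python
import torch
import torch.nn as nn

class EyeNet(nn.Module):
    """Per-eye feature extractor; layer sizes are illustrative."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, x):
        return self.fc(self.conv(x))

class GazeEyelidHead(nn.Module):
    """Two eye feature vectors feed per-eye eyelid heads and, combined,
    a gaze head producing yaw and pitch."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.left, self.right = EyeNet(feat_dim), EyeNet(feat_dim)
        self.eyelid_l = nn.Linear(feat_dim, 1)
        self.eyelid_r = nn.Linear(feat_dim, 1)
        self.gaze = nn.Linear(2 * feat_dim, 2)   # yaw, pitch

    def forward(self, left_eye, right_eye):
        fl, fr = self.left(left_eye), self.right(right_eye)
        gaze = self.gaze(torch.cat([fl, fr], dim=1))
        return self.eyelid_l(fl), self.eyelid_r(fr), gaze
```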
Techniques for serving a manifest file of an adaptive streaming video include receiving a request for the manifest file from a user device. The video is encoded at different reference bitrates and each encoded reference bitrate is divided into segments to generate video segment files. The manifest file includes an ordered list of universal resource locators (URLs) that reference a set of video segment files encoded at a particular reference bitrate. A source manifest file that indicates the set of video segment files is identified based on the request. An issued manifest file that includes a first URL and a second URL is generated based on the source manifest file. The first URL references a first domain and the second URL references a second domain that is different from the first domain. The issued manifest file is transmitted to the user device as a response to the request.
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
H04L 67/02 - Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
H04L 67/146 - Markers for unambiguous identification of a particular session, e.g. session cookie or URL-encoding
H04L 69/40 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
H04L 67/563 - Data redirection of data network streams
H04L 67/566 - Grouping or aggregating service requests, e.g. for unified processing
A video super resolution method comprises successively executing instances of a first plurality of layers (SISR) of a neural network for generating a first image (St) at a higher resolution than an input image frame (Xt); successively executing a second plurality of layers (VSR) of the neural network for generating a second image (Vt) at the higher resolution, at least one of the second plurality of layers generating intermediate output information (Ht), the second plurality of layers taking into account an output image (Yt−1) at the higher resolution generated by a previous instance of the network from a previous input image frame (Xt−1) and intermediate output information (Ht−1) generated by the second plurality of layers of the previous instance, and executing a third plurality of layers for combining the first (St) and second (Vt) images to produce an output image (Yt) for the instance of the network.
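The two-branch recurrence can be sketched in PyTorch as below: a single-image (SISR) branch, a recurrent (VSR) branch fed with the previous output Yt−1 and hidden state Ht−1, and a fusion stage producing Yt. All layer choices are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RecurrentVSR(nn.Module):
    def __init__(self, scale=2, ch=16):
        super().__init__()
        self.scale = scale
        # SISR branch: upscales the current frame on its own.
        self.sisr = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3 * scale * scale, 3, padding=1), nn.PixelShuffle(scale))
        # VSR branch: sees x_t, the previous output (downscaled) and h_{t-1}.
        self.vsr_in = nn.Conv2d(3 + 3 + ch, ch, 3, padding=1)
        self.vsr_out = nn.Conv2d(ch, 3 * scale * scale, 3, padding=1)
        self.fuse = nn.Conv2d(6, 3, 3, padding=1)

    def forward(self, x_t, y_prev, h_prev):
        s_t = self.sisr(x_t)                                  # first image S_t
        y_prev_lr = nn.functional.interpolate(y_prev, size=x_t.shape[-2:])
        h_t = torch.relu(self.vsr_in(torch.cat([x_t, y_prev_lr, h_prev], 1)))
        v_t = nn.functional.pixel_shuffle(self.vsr_out(h_t), self.scale)  # V_t
        y_t = self.fuse(torch.cat([s_t, v_t], 1))             # output Y_t
        return y_t, h_t                                       # carry h_t forward
```

For the first frame, y_prev can be an upscaled copy of the frame and h_prev a zero tensor of shape (N, ch, H, W).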
G06T 5/10 - Image enhancement or restoration using non-spatial domain filtering
G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
H04N 23/951 - Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
A method of producing an image frame from event packets received from an event camera comprises: forming a tile buffer sized to accumulate event information for a subset of image tiles, the tile buffer having an associated tile table that determines a mapping between each tile of the image frame for which event information is accumulated in the tile buffer and the image frame. For each event packet: an image tile corresponding to the pixel location of the event packet is identified; responsive to the tile buffer storing information for one other event corresponding to the image tile, event information is added to the tile buffer; and responsive to the tile buffer not storing information for another event corresponding to the image tile and responsive to the tile buffer being capable of accumulating event information for at least one more tile, the image tile is added to the tile buffer.
Techniques and arrangements that utilize speckle imaging and autocorrelation to estimate the PSF of an image sensor for a digital imaging apparatus, e.g., a camera or a scanner. In particular, a system of components described herein is a simple arrangement that does not require a complex setup. Therefore, the system is portable and easy to set up. Additionally, by utilizing autocorrelation, the calculations of PSF using data obtained by the system are simplified.
An image processing system is configured to receive a first high resolution stream of images and a second lower resolution stream of images from image sources with substantially the same field of view. The system comprises a localizer component configured to provide a location for any object of interest independently of class within successive images of the second stream of images; a classifier configured to: receive one or more locations selectively provided by the localizer, identify a corresponding portion of an image acquired from the first stream at substantially the same time at which an image from the second stream in which an object of interest was identified and return a classification for the type of object within the identified portion of the image from the first stream; and a tracker configured to associate the classification with the location through acquisition of successive images in the second stream.
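A sketch of the localize-on-low-res, classify-on-high-res flow, assuming `localizer`, `classifier` and `tracks` are supplied callables/objects and that a single scale factor maps between the two streams; all names are illustrative.

```python
def process_pair(lowres_frame, highres_frame, localizer, classifier, tracks, scale):
    for (x, y, w, h) in localizer(lowres_frame):          # class-agnostic boxes
        # Cut the corresponding portion out of the high-res frame acquired
        # at substantially the same time.
        X, Y, W, H = int(x * scale), int(y * scale), int(w * scale), int(h * scale)
        patch = highres_frame[Y:Y + H, X:X + W]
        label = classifier(patch)                          # class from high-res detail
        tracks.associate((x, y, w, h), label)              # tracker keeps label with box
    return tracks
```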
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
19.
Systems and methods for detecting data insertions in biometric authentication systems utilizing a secret
Systems and methods of detecting an unauthorized data insertion into a stream of data segments extending between electronic modules or between electronic components within a module, wherein a Secret embedded into the data stream is compared to a Replica Secret upon receipt to confirm data transmission integrity.
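A minimal sketch of the integrity check, in which an HMAC tag over each segment stands in for the embedded Secret and its recomputation on receipt stands in for the Replica Secret comparison; the tagging scheme is an assumption, not the patented construction.

```python
import hmac, hashlib, os

SECRET = os.urandom(32)          # provisioned to both modules out of band

def send_segment(payload: bytes) -> bytes:
    # Embed a keyed tag derived from the shared secret into the stream.
    tag = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return payload + tag

def receive_segment(blob: bytes) -> bytes:
    payload, tag = blob[:-32], blob[-32:]
    # Recompute the "replica" tag and compare to confirm integrity.
    replica = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, replica):
        raise ValueError("data insertion detected: tag/replica mismatch")
    return payload
```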
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
A method operable by a computing device for configuring access for a limited user interface (UI) device to a network service via a local network access point is disclosed. The method comprises: obtaining from the limited-UI device a device identifier via a first out-of-band channel. The device identifier is provided to the network service via a secure network link. A zero knowledge proof (ZKP) challenge is received from the network service. Configuration information is provided to the limited-UI device via a second out-of-band channel, the configuration information including information sufficient to enable the limited-UI device to connect to the local network access point. The ZKP challenge is provided to the limited-UI device via the second out-of-band channel. A secure channel key is received from the network service indicating a successful response from the limited-UI device to the ZKP challenge; and provided to the limited-UI device, enabling the limited-UI device to access the network service.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04W 4/80 - Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
A method for stabilizing a video sequence comprises: obtaining an indication of camera movement from acquisition of a previous camera frame to acquisition of a current camera frame; determining an orientation for the camera at a time of acquiring the current camera frame; and determining a candidate orientation for a crop frame for the current camera frame by adjusting an orientation of a crop frame associated with the previous camera frame according to the determined orientation. A boundary of one of the camera frame or crop frame is traversed to determine if a specific point on the boundary of the crop frame exceeds a boundary of the camera frame. If so, a rotation of the specific point location which would bring the specific point location onto the boundary of the crop frame is determined and the candidate crop frame orientation updated accordingly before the crop frame is displayed.
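A simplified two-dimensional sketch of the boundary test: rotate the crop-frame corners by the candidate angle and, if any corner leaves the camera frame, back the rotation off until it fits. A coarse search stands in for the closed-form rotation the method computes; all names are illustrative.

```python
import numpy as np

def corrected_crop_angle(cam_w, cam_h, crop_w, crop_h, candidate_deg):
    """Assumes both frames share a centre and the unrotated crop fits
    inside the camera frame."""
    def inside(angle_deg):
        a = np.radians(angle_deg)
        R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
        corners = np.array([[dx * crop_w / 2, dy * crop_h / 2]
                            for dx in (-1, 1) for dy in (-1, 1)])
        pts = corners @ R.T
        return (np.abs(pts[:, 0]) <= cam_w / 2).all() and \
               (np.abs(pts[:, 1]) <= cam_h / 2).all()

    angle = candidate_deg
    while not inside(angle) and abs(angle) > 1e-3:
        angle *= 0.95            # back the rotation off toward zero
    return angle
```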
A video super resolution method comprises successively executing instances of a first plurality of layers (SISR) of a neural network for generating a first image (St) at a higher resolution than an input image frame (Xt); successively executing a second plurality of layers (VSR) of the neural network for generating a second image (Vt) at the higher resolution, at least one of the second plurality of layers generating intermediate output information (Ht), the second plurality of layers taking into account an output image (Yt−1) at the higher resolution generated by a previous instance of the network from a previous input image frame (Xt−1) and intermediate output information (Ht−1) generated by the second plurality of layers of the previous instance, and executing a third plurality of layers for combining the first (St) and second (Vt) images to produce an output image (Yt) for the instance of the network.
A device, such as a head-mounted device (HMD), may include a frame and a plurality of mirrors coupled to an interior portion of the frame. An imaging device may be coupled to the frame at a position to capture images of an eye of the wearer reflected from the mirrors. The HMD may also include a mirror angle adjustment device to adjust an angle of one or more of the mirrors relative to the imaging device so that the mirror(s) reflect the eye of the wearer to the imaging device.
A hardware acceleration module may generate a channel-wise argmax map using a predefined set of hardware-implemented operations. In some examples, a hardware acceleration module may receive a set of feature maps for different image channels. The hardware acceleration module may execute a sequence of hardware operations, including a portion(s) of hardware for executing a convolution, rectified linear unit (ReLU) activation, and/or layer concatenation, to determine a maximum channel feature value and/or argument maxima (argmax) value for a set of associated locations within the feature maps. An argmax map may be generated based at least in part on the argument maximum for a set of associated locations.
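One way such an argmax can be assembled from ReLU-style primitives is sketched below (the actual hardware operation sequence is not published in this abstract): max(a, b) = b + relu(a - b) builds the channel maximum, and a saturating indicator recovers the winning index.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def channelwise_argmax(feats, sharpness=1e6):
    """feats: (C, H, W) feature maps. Returns (max map, argmax map)."""
    C, H, W = feats.shape
    running = feats[0]
    for c in range(1, C):                       # channel-wise maximum
        running = running + relu(feats[c] - running)
    argmax = np.zeros((H, W))
    taken = np.zeros((H, W))                    # resolves ties to lowest index
    for c in range(C):
        # ~1 where feats[c] equals the maximum, ~0 elsewhere (large k).
        hit = relu(1.0 - sharpness * relu(running - feats[c]))
        fresh = relu(hit - taken)               # ignore channels after first hit
        argmax += c * fresh
        taken = taken + relu(hit - taken)
    return running, argmax.astype(int)
```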
G06F 30/331 - Design verification, e.g. functional simulation or model checking using simulation with hardware acceleration, e.g. by using field programmable gate array [FPGA] or emulation
G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
G06F 7/483 - Computations with numbers represented by a non-linear combination of denominational numbers, e.g. rational numbers, logarithmic number system or floating-point numbers
G06K 9/62 - Methods or arrangements for recognition using electronic means
A method includes determining, by a portable device, image capturing conditions based on an analysis of contents of a first digital image of a group of digital images captured by an image capturing device, and determining, by the portable device, whether the image capturing conditions determined for the first digital image indicate outdoor image capturing conditions. Based at least in part on a determination that the image capturing conditions determined for the first digital image indicate outdoor image capturing conditions, the method includes displaying a first indication that the first digital image must be captured in indoor image capturing conditions for an iris code enrollment process, and displaying a second indication of resumption of the iris code enrollment process when the image capturing conditions determined for the first digital image indicate the indoor image capturing conditions.
G06V 10/98 - Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
26.
Configuring manifest files including redirect uniform resource locators
Techniques for serving a manifest file of an adaptive streaming video include receiving a request for the manifest file from a user device. The video is encoded at different reference bitrates and each encoded reference bitrate is divided into segments to generate video segment files. The manifest file includes an ordered list of universal resource locators (URLs) that reference a set of video segment files encoded at a particular reference bitrate. A source manifest file that indicates the set of video segment files is identified based on the request. An issued manifest file that includes a first URL and a second URL is generated based on the source manifest file. The first URL references a first domain and the second URL references a second domain that is different from the first domain. The issued manifest file is transmitted to the user device as a response to the request.
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
An image acquisition system determines first and second sets of points defining an iris-pupil boundary and an iris-sclera boundary in an acquired image; determines respective ellipses fitting the first and second sets of points; determines a transformation to transform one of the ellipses into a circle on a corresponding plane; using the determined transformation, transforms the selected ellipse into a circle on the plane; using the determined transformation, transforms the other ellipse into a transformed ellipse on the plane; determines a plurality of ellipses on the plane for defining an iris grid, by interpolating a plurality of ellipses between the circle and the transformed ellipse; moves the determined grid ellipses onto the iris in the image using an inverse transformation of the determined transformation; and extracts an iris texture by unwrapping the iris and interpolating image pixel values at each grid point defined along each of the grid ellipses.
A handheld computing device comprises a display comprising an array of pixels illuminated by a plurality of visible light sources, and a plurality of infra-red light sources interleaved between the visible light sources, the IR light sources being actuable to emit diffuse IR light with a first intensity. A camera has an image sensor comprising an array of pixels responsive to infra-red light and a lens assembly with an optical axis extending from the image sensor through the surface of the display. A dedicated illumination source is located outside the display and is actuable to emit infra-red light with a second greater intensity. A processor is configured to switch between an iris region processing mode in which a subject is illuminated at least by the dedicated light source and a face region processing mode in which a subject is illuminated by the plurality of IR light sources.
A camera comprises a lens assembly coupled to an event-sensor, the lens assembly being configured to focus a light field onto a surface of the event-sensor, the event-sensor surface comprising a plurality of light-sensitive pixels, each of which causes an event to be generated when light incident on the pixel changes in intensity by more than a threshold amount. The camera further includes an actuator which can be triggered to cause a change in the light field incident on the surface of the event-sensor and to generate a set of events from a sub-set of pixels distributed across the surface of the event-sensor.
H04N 5/345 - Extracting pixel data from an image sensor by controlling scanning circuits, e.g. by modifying the number of pixels having been sampled or to be sampled by partially reading an SSIS array
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
G02B 27/64 - Imaging systems using optical elements for stabilisation of the lateral and angular position of the image
A method for stabilizing a video sequence comprises: obtaining an indication of camera movement from acquisition of a previous camera frame to acquisition of a current camera frame; determining an orientation for the camera at a time of acquiring the current camera frame; and determining a candidate orientation for a crop frame for the current camera frame by adjusting an orientation of a crop frame associated with the previous camera frame according to the determined orientation. A boundary of one of the camera frame or crop frame is traversed to determine if a specific point on the boundary of the crop frame exceeds a boundary of the camera frame. If so, a rotation of the specific point location which would bring the specific point location onto the boundary of the crop frame is determined and the candidate crop frame orientation updated accordingly before the crop frame is displayed.
A method of generating landmark locations for an image crop comprises: processing the crop through an encoder-decoder to provide a plurality of N output maps of comparable spatial resolution to the crop, each output map corresponding to a respective landmark of an object appearing in the image crop; and processing an output map from the encoder through a plurality of feed forward layers to provide a feature vector comprising N elements, each element including an (x,y) location for a respective landmark. Any landmark locations from the feature vector having an x or a y location outside a range for a respective row or column of the crop are selected for a final set of landmark locations, with remaining landmark locations tending to be selected from the N (x,y) landmark locations from the plurality of N output maps.
A method for automatically determining exposure settings for an image acquisition system comprises maintaining a plurality of look-up tables, each look-up table being associated with a corresponding light condition and storing image exposure settings associated with corresponding distance values between a subject and the image acquisition system. An image of a subject is acquired from a camera module; and a light condition occurring during the acquisition is determined based on the acquired image. A distance between the subject and the camera module during the acquisition is calculated. The method then determines whether a correction of the image exposure settings for the camera module is required based on the calculated distance and the determined light condition; and responsive to correction being required: selects image exposure settings corresponding to the calculated distance from the look-up table corresponding to the determined light condition; and acquires a new image using the selected image exposure settings.
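The per-light-condition lookup might be organised as below; the light-condition labels, distance breakpoints, and (exposure, gain) values are invented for illustration.

```python
# One look-up table per light condition, keyed by subject distance in
# metres; values are (exposure in ms, analog gain). All illustrative.
EXPOSURE_LUTS = {
    "bright": {0.3: (4, 1.0),  0.5: (6, 1.0),   1.0: (8, 1.5)},
    "indoor": {0.3: (8, 2.0),  0.5: (12, 2.5),  1.0: (16, 4.0)},
    "low":    {0.3: (16, 4.0), 0.5: (24, 6.0),  1.0: (33, 8.0)},
}

def select_exposure(light_condition, subject_distance_m):
    lut = EXPOSURE_LUTS[light_condition]
    # Pick the entry whose distance key is closest to the measured distance.
    key = min(lut, key=lambda d: abs(d - subject_distance_m))
    return lut[key]

print(select_exposure("indoor", 0.45))   # -> (12, 2.5)
```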
G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
H04N 5/235 - Circuitry for compensating for variation in the brightness of the object
G01S 3/00 - Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
H04N 5/243 - Circuitry for compensating for variation in the brightness of the object by influencing the picture signal
G08B 13/196 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
33.
Method for determining bias in an inertial measurement unit of an image acquisition device
An image processing system comprises a template matching engine (TME). The TME reads an image from the memory; and as each pixel of the image is being read, calculates a respective feature value of a plurality of feature maps as a function of the pixel value. A pre-filter is responsive to a current pixel location comprising a node within a limited detector cascade to be applied to a window within the image to: compare a feature value from a selected one of the plurality of feature maps corresponding to the pixel location to a threshold value; and responsive to pixels for all nodes within a limited detector cascade to be applied to the window having been read, determine a score for the window. A classifier, responsive to the pre-filter indicating that a score for a window is below a window threshold, does not apply a longer detector cascade to the window before indicating that the window does not comprise an object to be detected.
G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
G06K 9/62 - Methods or arrangements for recognition using electronic means
G06V 10/94 - Hardware or software architectures specially adapted for image or video understanding
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
An image acquisition system determines first and second sets of points defining an iris-pupil boundary and an iris-sclera boundary in an acquired image; determines respective ellipses fitting the first and second sets of points; determines a transformation to transform one of the ellipses into a circle on a corresponding plane; using the determined transformation, transforms the selected ellipse into a circle on the plane; using the determined transformation, transforms the other ellipse into a transformed ellipse on the plane; determines a plurality of ellipses on the plane for defining an iris grid, by interpolating a plurality of ellipses between the circle and the transformed ellipse; moves the determined grid ellipses onto the iris in the image using an inverse transformation of the determined transformation; and extracts an iris texture by unwrapping the iris and interpolating image pixel values at each grid point defined along each of the grid ellipses.
A method of providing a sharpness measure for an image comprises detecting an object region within an image; obtaining meta-data for the image; and scaling the detected object region to a fixed size. A gradient map is calculated for the scaled object region and compared against a threshold determined for the image to provide a filtered gradient map of values exceeding the threshold. The threshold for the image is a function of at least: a contrast level for the detected object region, a distance to the subject and an ISO/gain used for image acquisition. A sharpness measure for the object region is determined as a function of the filtered gradient map values, the sharpness measure being proportional to the filtered gradient map values.
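A sketch of the measure under an assumed linear threshold model; the coefficients mixing contrast, distance and ISO are illustrative, as the abstract only states that the threshold is a function of these quantities.

```python
import numpy as np

def sharpness_measure(region, contrast, distance_m, iso,
                      base=8.0, k_contrast=0.05, k_dist=2.0, k_iso=0.01):
    """region: scaled grayscale object region as a 2-D array."""
    region = region.astype(np.float32)
    gx = np.abs(np.diff(region, axis=1))        # horizontal gradients
    gy = np.abs(np.diff(region, axis=0))        # vertical gradients
    grad = gx[:-1, :] + gy[:, :-1]              # combined gradient map

    # Threshold as an assumed linear function of contrast, distance, ISO.
    threshold = base + k_contrast * contrast + k_dist * distance_m + k_iso * iso
    filtered = grad[grad > threshold]           # filtered gradient map
    return float(filtered.sum()) if filtered.size else 0.0
```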
A biometrics-enabled portable storage device may store and secure data via biometrics related to a user's iris. The biometrics-enabled portable storage device may include a camera that captures image data related to a user's iris and stores the image data to enroll the user for use of the biometrics-enabled portable storage device. To unlock the data, a user aligns the camera with their iris using a hot mirror and the camera captures iris data for comparison with the iris image data stored during enrollment. If the two sets of image data match, the biometrics-enabled portable storage device may be unlocked and the user may access data stored on the biometrics-enabled portable storage device. If the two sets of image data do not match, then the biometrics-enabled portable storage device remains locked.
A method of tracking an object across a stream of images comprises determining a region of interest (ROI) bounding the object in an initial frame of an image stream. A HOG map is provided for the ROI by: dividing the ROI into an array of M×N cells, each cell comprising a plurality of image pixels; and determining a HOG for each of the cells. The HOG map is stored as indicative of the features of the object. Subsequent frames are acquired from the stream of images. The frames are scanned ROI by ROI to identify a candidate ROI having a HOG map best matching the stored HOG map features. If the match meets a threshold, the stored HOG map indicative of the features of the object is updated according to the HOG map for the best matching candidate ROI.
G06N 3/04 - Architecture, e.g. interconnection topology
G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
G06T 7/269 - Analysis of motion using gradient-based methods
G06K 9/62 - Methods or arrangements for recognition using electronic means
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
A depth sensing camera system that comprises one or more fisheye lenses and infrared and/or near-infrared image sensors. In some examples, the image sensors may generate output signals based at least in part on receiving radiation via the fisheye lenses. A depth measurement may be calculated based at least in part on the output signals. For example, these output signals may be provided as input to a depth model, which may determine the depth measurement. In some examples, such a depth model may be integrated into an application-specific integrated circuit and/or may be operated by an application processor.
G02B 7/02 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses
G02B 9/64 - Optical objectives characterised both by the number of the components and their arrangements according to their sign, i.e. + or – having more than six components
A system includes an image sensor, an adjustable aperture, and a memory. The memory includes computer executable instructions that, when executed by a processor, cause the system to perform operations including obtaining a first image via the image sensor based at least in part on a first aperture stop of the adjustable aperture, identifying a first pixel of the first image, identifying a second pixel of the first image, determining a second aperture stop of the adjustable aperture based at least in part on the first pixel, determining a third aperture stop of the adjustable aperture based at least in part on the second pixel, obtaining a second image via the image sensor based at least in part on the second aperture stop, and obtaining a third image via the image sensor based at least in part on the third aperture stop.
An image processing method for iris recognition of a predetermined subject comprises acquiring, through an image sensor, a probe image illuminated by an infra-red (IR) illumination source, wherein the probe image comprises one or more eye regions and is overexposed until skin portions of the image are saturated. One or more iris regions are identified within the one or more eye regions of said probe image; and the identified iris regions are analysed to detect whether they belong to the predetermined subject.
An apparatus for processing a neural network comprises an image memory into which an input image is written tile-by-tile, each tile overlapping a previous tile to a limited extent; a weights memory for storing weight information for a plurality of convolutional layers of a neural network, including at least two pooling layers; and a layer processing engine configured to combine information from the image and weights memories to generate an output map and to write the output map to image memory. The apparatus is configured to store a limited number of values from adjacent a boundary of an output map for a given layer. The layer processing engine is configured to combine the output map values from a previously processed image tile with the information from the image memory and the weights when generating an output map for a layer of the neural network following the given layer.
G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
A dynamically reconfigurable heterogeneous systolic array is configured to process a first image frame, generate image processing primitives from the image frame, and store the primitives and the corresponding image frame in a memory store. A characteristic of the image frame is determined. Based on the characteristic, the array is reconfigured to process a following image frame.
G09G 5/393 - Arrangements for updating the contents of the bit-mapped memory
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
H04N 9/67 - Circuits for processing colour signals for matrixing
H04N 5/335 - Transforming light or analogous information into electric information using solid-state image sensors [SSIS]
G06F 15/80 - Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors
46.
Anonymizing facial expression data with a smart-cam
A method of responding to a criterion-based request for information collected from users meeting the criterion while complying with a user-requested privacy requirement. In one embodiment, a request is received for data comprising facial or audio expressions for users who meet the criterion. A program monitors activities indicative of user attention or user reaction based on face tracking, face detection, face feature detection, eye gaze determination, eye tracking, audio expression determination, or determination of an emotional state. When a user requests a high level of privacy, the timestream data collected for that user is aggregated with timestream data collected for other users into a statistical dataset, the timestreams being processed to ensure the high level of privacy in the statistical dataset, which is provided to a content provider without providing the data collected for the user who has requested the high level of privacy.
Systems and methods for training a conditional generator model are described. Methods receive a sample, and determine a discriminator loss for the received sample. The discriminator loss is based on an ability to determine whether the sample is generated by the conditional generator model or is a ground truth sample. The method determines a secondary loss for the generated sample and updates the conditional generator model based on an aggregate of the discriminator loss and the secondary loss.
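A single generator update with the aggregated loss can be sketched in PyTorch as below. An L1 reconstruction term stands in for the unspecified secondary loss; that choice, and all names, are assumptions.

```python
import torch
import torch.nn.functional as F

def generator_step(gen, disc, cond, target, opt, secondary_weight=10.0):
    fake = gen(cond)
    logits = disc(cond, fake)                     # discriminator's judgment
    # Adversarial term: generator wants its samples judged as ground truth.
    adv_loss = F.binary_cross_entropy_with_logits(
        logits, torch.ones_like(logits))
    secondary_loss = F.l1_loss(fake, target)      # assumed secondary loss
    loss = adv_loss + secondary_weight * secondary_loss   # aggregate
    opt.zero_grad()
    loss.backward()
    opt.step()
    return float(loss)
```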
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
A neural network image processing apparatus arranged to acquire images from an image sensor and to: identify a ROI containing a face region in an image; determine a plurality of facial landmarks in the face region; use the facial landmarks to transform the face region within the ROI into a face region having a given pose; and use transformed landmarks within the transformed face region to identify a pair of eye regions within the transformed face region. Each identified eye region is fed to a respective first and second convolutional neural network, each network configured to produce a respective feature vector. Each feature vector is fed to a respective eyelid opening level neural network to obtain respective measures of eyelid opening for each eye region. The feature vectors are combined and fed to a gaze angle neural network to generate gaze yaw and pitch values substantially simultaneously with the eyelid opening values.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06K 9/62 - Methods or arrangements for recognition using electronic means
G06K 9/66 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix references adjustable by an adaptive method, e.g. learning
A method of tracking an object across a stream of images comprises determining a region of interest (ROI) bounding the object in an initial frame of an image stream. A HOG map is provided for the ROI by: dividing the ROI into an array of M×N cells, each cell comprising a plurality of image pixels; and determining a HOG for each of the cells. The HOG map is stored as indicative of the features of the object. Subsequent frames are acquired from the stream of images. The frames are scanned ROI by ROI to identify a candidate ROI having a HOG map best matching the stored HOG map features. If the match meets a threshold, the stored HOG map indicative of the features of the object is updated according to the HOG map for the best matching candidate ROI.
A method for providing depth map information based on image data descriptive of a scene. In one embodiment, after generating an initial sequence of disparity map data, a smoothing operation or an interpolation is performed to remove artifacts introduced in the disparity map data as a result of segmenting the image data into superpixels.
A method for compensating for off-axis tilting of a lens relative to an image sensor in an image acquisition device comprises acquiring a set of calibrated parameters, where x′, y′ indicate a coordinate of a pixel in an acquired image. Image information is mapped from the acquired image to a lens tilt compensated image according to mapping formulae (not reproduced in this abstract) in which s comprises a scale factor and x, y indicate the location of a pixel in the lens tilt compensated image.
Tracking units for facial features with advanced training for natural rendering of human faces in real-time are provided. An example device receives a video stream, and upon detecting a visual face, selects a 3D model from a comprehensive set of head orientation classes. The device determines modifications to the selected 3D model to describe the face, then projects a 2D model of tracking points of facial features based on the 3D model, and controls, actuates, or animates hardware based on the facial features tracking points. The device can switch among an example comprehensive set of 35 different head orientation classes for each video frame, based on suggestions computed from a previous video frame or from yaw and pitch angles of the visual head orientation. Each class of the comprehensive set is trained separately based on a respective collection of automatically marked images for that head orientation class.
A method of providing a sharpness measure for an image comprises detecting an object region within an image; obtaining meta-data for the image; and scaling the detected object region to a fixed size. A gradient map is calculated for the scaled object region and compared against a threshold determined for the image to provide a filtered gradient map of values exceeding the threshold. The threshold for the image is a function of at least: a contrast level for the detected object region, a distance to the subject and an ISO/gain used for image acquisition. A sharpness measure for the object region is determined as a function of the filtered gradient map values, the sharpness measure being proportional to the filtered gradient map values.
A method of estimating motion between a pair of image frames of a given scene comprises calculating respective integral images for each of the image frames and selecting at least one corresponding region of interest within each frame. For each region of interest, an integral image profile from each integral image is calculated, each profile comprising an array of elements, each element comprising a sum of pixel intensities from successive swaths of the region of interest for the frame. Integral image profiles are correlated to determine a relative displacement of the region of interest between the pair of frames. Each region of interest is divided into a plurality of further regions of interest before repeating until a required hierarchy of estimated motion for successively divided regions of interest is provided.
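A single-level sketch of the profile correlation (the method itself recurses into successively divided regions of interest); the sign convention of the returned displacement follows numpy's correlate, and all names are illustrative.

```python
import numpy as np

def integral_profile(frame, axis=1):
    """Column (axis=1) or row (axis=0) sums read from an integral image."""
    ii = frame.astype(np.int64).cumsum(0).cumsum(1)     # integral image
    edge = ii[-1, :] if axis == 1 else ii[:, -1]        # cumulative sums
    return np.diff(np.concatenate(([0], edge)))         # per-swath sums

def estimate_shift(prev, curr):
    """Correlate the profiles of two frames to find the displacement."""
    p, c = integral_profile(prev), integral_profile(curr)
    p = p - p.mean()
    c = c - c.mean()
    corr = np.correlate(c, p, mode="full")
    return int(np.argmax(corr)) - (len(p) - 1)          # displacement in pixels
```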
An iris image acquisition system for a mobile device, comprises a lens assembly arranged along an optical axis and configured for forming an image comprising at least one iris of a subject disposed frontally to the lens assembly; and an image sensor configured to acquire the formed image. The lens assembly comprises a first lens refractive element and at least one second lens element for converging incident radiation to the first refractive element. The first refractive element has a variable thickness configured to counteract a shift of the formed image along the optical axis induced by change in iris-lens assembly distance, such that different areas of the image sensor on which irises at different respective iris-lens assembly distances are formed are in focus within a range of respective iris-lens assembly distances at which iris detail is provided at sufficient contrast to be recognised.
G02B 13/18 - Optical objectives specially designed for the purposes specified below with lenses having one or more non-spherical faces, e.g. for reducing geometrical aberration
A method of iris recognition comprises detecting a body region larger than and comprising at least one iris in an image and performing a first eye modelling on the detected body region. If successful, the result of first iris segmentation based on the first eye model is chosen. Otherwise, a first iris identification is performed on the detected body region. If successful, the result of second iris segmentation based on a second eye modelling is chosen. Otherwise, second iris identification is performed on the image, third eye modelling is performed on the result of the second iris identification, and third iris segmentation is performed on the result of the third eye modelling. If successful, the result of third iris segmentation based on a third eye modelling is chosen. An iris code is extracted from any selected iris segment of the image.
A method for compensating for off-axis tilting of a lens relative to an image sensor in an image acquisition device comprises acquiring a set of calibrated parameters, where x′, y′ indicate a coordinate of a pixel in an acquired image. Image information is mapped from the acquired image to a lens tilt compensated image according to mapping formulae (not reproduced in this abstract) in which s comprises a scale factor and x, y indicate the location of a pixel in the lens tilt compensated image.
Techniques for serving a manifest file of an adaptive streaming video include receiving a request for the manifest file from a user device. The video is encoded at different reference bitrates and each encoded reference bitrate is divided into segments to generate video segment files. The manifest file includes an ordered list of universal resource locators (URLs) that reference a set of video segment files encoded at a particular reference bitrate. A source manifest file that indicates the set of video segment files is identified based on the request. An issued manifest file that includes a first URL and a second URL is generated based on the source manifest file. The first URL references a first domain and the second URL references a second domain that is different from the first domain. The issued manifest file is transmitted to the user device as a response to the request.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
59.
Method for producing a histogram of oriented gradients
A method for producing a histogram of oriented gradients (HOG) for at least a portion of an image comprises dividing the image portion into cells, each cell comprising a plurality of image pixels. Then, for each image pixel of a cell, obtaining a horizontal gradient component, gx, and a vertical gradient component, gy, based on differences in pixel values along at least a row of the image and a column of the image respectively including the pixel; and allocating a gradient to one of a plurality of sectors, where n is a sector index, each sector extending through a range of orientation angles and at least some of the sectors being divided from adjacent sectors according to a set of inequalities in gx and gy.
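A sketch of the per-cell computation, with a uniform angular split standing in for the inequality-based sector boundaries, since those inequalities are not reproduced in this abstract; all names are illustrative.

```python
import numpy as np

def hog_cell_histogram(cell, n_sectors=16):
    """cell: 2-D array of pixel values for one cell."""
    cell = cell.astype(np.float32)
    gx = np.zeros_like(cell)
    gy = np.zeros_like(cell)
    gx[:, 1:-1] = cell[:, 2:] - cell[:, :-2]       # horizontal component gx
    gy[1:-1, :] = cell[2:, :] - cell[:-2, :]       # vertical component gy

    magnitude = np.hypot(gx, gy)
    angle = np.arctan2(gy, gx) % (2 * np.pi)       # orientation in [0, 2*pi)
    sector = (angle / (2 * np.pi) * n_sectors).astype(int) % n_sectors

    hist = np.zeros(n_sectors)
    np.add.at(hist, sector, magnitude)             # magnitude-weighted votes
    return hist
```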
Techniques for serving a manifest file of an adaptive streaming video include receiving a request for the manifest file from a user device. The video is encoded at different reference bitrates and each encoded reference bitrate is divided into segments to generate video segment files. The manifest file includes an ordered list of universal resource locators (URLs) that reference a set of video segment files encoded at a particular reference bitrate. A source manifest file that indicates the set of video segment files is identified based on the request. An issued manifest file that includes a first URL and a second URL is generated based on the source manifest file. The first URL references a first domain and the second URL references a second domain that is different from the first domain. The issued manifest file is transmitted to the user device as a response to the request.
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
Techniques for serving a manifest file of an adaptive streaming video include receiving a request for the manifest file from a user device. The video is encoded at different reference bitrates and each encoded reference bitrate is divided into segments to generate video segment files. The manifest file includes an ordered list of universal resource locators (URLs) that reference a set of video segment files encoded at a particular reference bitrate. A source manifest file that indicates the set of video segment files is identified based on the request. An issued manifest file that includes a first URL and a second URL is generated based on the source manifest file. The first URL references a first domain and the second URL references a second domain that is different from the first domain. The issued manifest file is transmitted to the user device as a response to the request.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/2662 - Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
H04N 21/222 - Secondary servers, e.g. proxy server or cable television Head-end
H04N 21/858 - Linking data to content, e.g. by linking an URL to a video object or by creating a hotspot
H04L 67/02 - Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
H04L 65/612 - Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for unicast
62.
Network monitoring to determine performance of infrastructure service providers
Techniques for serving a manifest file of an adaptive streaming video include receiving a request for the manifest file from a user device. The video is encoded at different reference bitrates and each encoded reference bitrate is divided into segments to generate video segment files. The manifest file includes an ordered list of universal resource locators (URLs) that reference a set of video segment files encoded at a particular reference bitrate. A source manifest file that indicates the set of video segment files is identified based on the request. An issued manifest file that includes a first URL and a second URL is generated based on the source manifest file. The first URL references a first domain and the second URL references a second domain that is different from the first domain. The issued manifest file is transmitted to the user device as a response to the request.
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
H04L 29/08 - Transmission control procedure, e.g. data link level control procedure
Calibration corrections to an initial lens actuator setting (INIT) are determined for each of the first and second determined distances; and the stored calibrated lens actuator settings are adjusted according to the determined calibration corrections.
G03B 43/00 - Testing correct operation of photographic apparatus or parts thereof
G02B 7/08 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses with mechanism for focusing or varying magnification adapted to co-operate with a remote control mechanism
An image processing apparatus downsamples each tile of a region of interest (ROI) within an acquired image into a buffer at an integer scale within a factor D<2 of a required scale for a normalised version of the ROI. The apparatus then fractionally downsamples and rotates downsampled information for a tile within the buffer to produce a respective normalised portion of the ROI at the required scale for the normalised ROI. Downsampled and rotated information is accumulated for each tile within a normalised ROI buffer for subsequent processing by the image processing apparatus.
A method and system for generating an adjustment parameter value for a control parameter to enhance a new image. A neural network, trained to restore the image quality of a derivative image to that of an earlier version of the derivative image, is configured to generate an output value in response to input of data derived from the new image. A control parameter of the new image is then changed by calculating the adjustment parameter value as an inverse of the output value and applying the adjustment parameter value to the control parameter of the new image so as to generate an enhanced image.
G06K 9/62 - Methods or arrangements for recognition using electronic means
G06K 9/66 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix references adjustable by an adaptive method, e.g. learning
G06N 3/00 - Computing arrangements based on biological models
66.
Systems and methods for authenticating a biometric device using a trusted coordinating smart device
Systems and methods for authenticating a biometric device using a trusted coordinating smart device in accordance with embodiments of the invention are disclosed. In one embodiment, a process for enrolling a configurable biometric device with a network service includes: obtaining a device identifier (ID) of the configurable biometric device using a coordinating smart device; communicating the device ID from the coordinating smart device to a network service; communicating a first challenge, based on a challenge-response authentication protocol, from the network service to the coordinating smart device; communicating the first challenge and a response uniform resource locator (URL) from the coordinating smart device to the configurable biometric device; generating a first response to the first challenge and communicating the first response to the network service using the response URL; receiving a secure channel key at the coordinating smart device from the network service; communicating the secure channel key from the coordinating smart device to the configurable biometric device; performing a biometric enrollment process using the configurable biometric device, including capturing biometric information from a user; and creating a secure communication link between the configurable biometric device and the network service using the secure channel key when the first response satisfies the challenge-response authentication protocol.
G06F 21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
G06F 21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 29/06 - Communication control; Communication processing characterised by a protocol
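The enrollment flow reads most clearly as message passing among the three parties. A minimal sketch, using an HMAC over a pre-provisioned device secret as a toy stand-in for the unspecified challenge-response scheme; every name below is illustrative:

    import hmac, hashlib, os

    device_secret = os.urandom(32)       # provisioned in the biometric device

    def device_respond(challenge: bytes) -> bytes:
        # Toy challenge-response: HMAC of the challenge under the device secret.
        return hmac.new(device_secret, challenge, hashlib.sha256).digest()

    # Network service issues a challenge via the coordinating smart device.
    challenge = os.urandom(16)
    expected = hmac.new(device_secret, challenge, hashlib.sha256).digest()

    # Smart device relays the challenge (and a response URL) to the device,
    # which answers the service directly.
    response = device_respond(challenge)

    # On success, the service releases a secure channel key via the smart
    # device, and the biometric device uses it for its link to the service.
    if hmac.compare_digest(response, expected):
        secure_channel_key = os.urandom(32)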
A method for producing framing information for a set of N source images, each comprising an object region R, comprises scaling, translating and/or rotating the source images so that the object region is aligned across the set. For a given image of the object aligned source images, at a given frame size, a given frame angle for a frame relative to the object aligned images and at a first candidate boundary position for the frame, the method determines if there is at least one position for a second boundary of the frame orthogonal to the first boundary where the frame lies within the image and the frame encloses the object region. If so, counters associated with the first candidate boundary position are incremented. Responsive to any counter meeting a threshold value K, K≤N, for the source images, framing is indicated as possible at the given frame size, frame angle, first candidate boundary position and any position for the second boundary associated with the threshold-meeting counter. Otherwise, another image can be chosen and the process repeated.
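The counting scheme is a vote across the aligned set: a candidate frame geometry is accepted once at least K of the N images admit it. A minimal sketch of the per-position test, assuming axis-aligned frames (frame angle 0) and object regions given as (left, top, right, bottom) boxes; all names are illustrative:

    def framing_possible(image_sizes, object_boxes, frame_w, frame_h, x_left, K):
        """Count images where a (frame_w x frame_h) frame whose left edge sits
        at x_left fits inside the image while enclosing the object box."""
        count = 0
        for (img_w, img_h), (l, t, r, b) in zip(image_sizes, object_boxes):
            if not (0 <= x_left <= l and r <= x_left + frame_w <= img_w):
                continue                  # first boundary cannot enclose object
            # Feasible top positions y satisfy: 0 <= y <= t and
            # b <= y + frame_h <= img_h.
            y_lo, y_hi = max(0, b - frame_h), min(t, img_h - frame_h)
            if y_lo <= y_hi:
                count += 1
        return count >= K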
A method operable by a computing device for configuring access for a limited user interface (UI) device to a network service via a local network access point is disclosed. The method comprises the steps of: obtaining from the limited-UI device a device identifier via a first out-of-band channel. The device identifier is provided to the network service via a secure network link. A zero knowledge proof (ZKP) challenge is received from the network service. Configuration information is provided to the limited-UI device via a second out-of-band channel, the configuration information including information sufficient to enable the limited-UI device to connect to the local network access point. The ZKP challenge is provided to the limited-UI device via the second out-of-band channel. A secure channel key is received from the network service indicating a successful response from the limited-UI device to the ZKP challenge; and provided to the limited-UI device enabling the limited-UI device to access the network service.
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04W 4/80 - Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
A method for automatically determining exposure settings for an image acquisition system comprises maintaining a plurality of look-up tables, each look-up table being associated with a corresponding light condition and storing image exposure settings associated with corresponding distance values between a subject and the image acquisition system. An image of a subject is acquired from a camera module; and a light condition occurring during the acquisition is determined based on the acquired image. A distance between the subject and the camera module during the acquisition is calculated. The method then determines whether a correction of the image exposure settings for the camera module is required based on the calculated distance and the determined light condition; and responsive to correction being required: selects image exposure settings corresponding to the calculated distance from the look-up table corresponding to the determined light condition; and acquires a new image using the selected image exposure settings.
H04N 5/235 - Circuitry for compensating for variation in the brightness of the object
G01S 3/00 - Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
H04N 5/243 - Circuitry for compensating for variation in the brightness of the object by influencing the picture signal
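A minimal sketch of the look-up structure: one table per light condition, keyed by calibrated subject distance, with the nearest calibrated distance selected at acquisition time. All condition names and values are illustrative:

    # exposure_luts[light_condition][distance_cm] = (exposure_time_ms, gain)
    exposure_luts = {
        "low":    {30: (33.0, 8.0), 60: (33.0, 4.0), 120: (16.0, 4.0)},
        "normal": {30: (16.0, 2.0), 60: (8.0, 2.0),  120: (8.0, 1.0)},
    }

    def select_exposure(light_condition: str, distance_cm: float):
        lut = exposure_luts[light_condition]
        nearest = min(lut, key=lambda d: abs(d - distance_cm))
        return lut[nearest]

    print(select_exposure("low", 75))    # settings stored for the 60 cm entry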
71.
Method for dynamically calibrating an image capture device
Calibration corrections to an initial lens actuator setting (INIT) are determined for each of the first and second determined distances; and the stored calibrated lens actuator settings are adjusted according to the determined calibration corrections.
A portable system providing augmented vision of surroundings. In one embodiment the system includes a helmet, a plurality of camera units and circuitry to generate a composite field of view from channels of video data. The helmet permits a user to receive a first field of view in the surroundings based on optical information received directly from the surroundings with the user's natural vision. The camera units are mounted about the helmet to generate the multiple channels of video data. Each camera channel captures a different field of view of a scene in a region surrounding the helmet.
A method of providing a sharpness measure for an image comprises detecting an object region within an image; obtaining meta-data for the image; and scaling the chosen object region to a fixed size. A gradient map is calculated for the scaled object region and compared against a threshold determined for the image to provide a filtered gradient map of values exceeding the threshold. The threshold for the image is a function of at least: a contrast level for the detected object region, a distance to the subject and an ISO/gain used for image acquisition. A sharpness measure for the object region is determined as a function of the filtered gradient map values, the sharpness measure being proportional to the filtered gradient map values.
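A minimal sketch of the measure for a grayscale object region, with an illustrative threshold that simply scales with contrast, subject distance and ISO (the abstract leaves the exact function open):

    import numpy as np

    def sharpness(region, distance_m, iso, size=64):
        """Scale the region to size x size, threshold its gradient map, and
        return a measure proportional to the surviving gradient values."""
        region = region.astype(float)
        rows = np.linspace(0, region.shape[0] - 1, size).astype(int)
        cols = np.linspace(0, region.shape[1] - 1, size).astype(int)
        patch = region[rows][:, cols]            # nearest-neighbour rescale
        gy, gx = np.gradient(patch)
        gmap = np.hypot(gx, gy)
        contrast = patch.max() - patch.min()
        # Illustrative threshold: grows with contrast, distance and ISO/gain.
        thresh = 0.02 * contrast * (1 + 0.1 * distance_m) * (iso / 100.0)
        kept = gmap[gmap > thresh]
        return kept.sum() / gmap.size if kept.size else 0.0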
A portrait lens configuration for meeting handheld device form factor constraints. First and second meniscus lenses each have a reflective surface to provide internal reflections for transmitting light toward a focal plane. A third lens is positioned between the meniscus lenses and the focal plane. The first lens includes an anterior concave surface having a reflective material extending over a portion thereof. Light received by the first meniscus lens can be transmitted therethrough. The reflective material is positioned along the anterior concave surface to receive light transmitted therethrough and reflected back from the second lens. In an associated method the first meniscus lens is positioned to receive light through a first of two opposing refractive surfaces. After each lens provides an internal reflection, reflected light is transmitted through the second of the two opposing surfaces and then through a bore positioned within the second lens to the third lens.
G02B 13/18 - Optical objectives specially designed for the purposes specified below with lenses having one or more non-spherical faces, e.g. for reducing geometrical aberration
G02B 7/04 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses with mechanism for focusing or varying magnification
G02B 13/00 - Optical objectives specially designed for the purposes specified below
G02B 15/14 - Optical objectives with means for varying the magnification by axial movement of one or more lenses or groups of lenses relative to the image plane for continuously varying the equivalent focal length of the objective
G02B 17/00 - Systems with reflecting surfaces, with or without refracting elements
G03B 17/17 - Bodies with reflectors arranged in beam forming the photographic image, e.g. for reducing dimensions of camera
77.
Portrait lens system formed with an adjustable meniscus lens
A portrait lens configuration for meeting handheld device form factor constraints. First and second meniscus lenses each have a reflective surface to provide internal reflections for transmitting light toward a focal plane. A third lens is positioned between the meniscus lenses and the focal plane. The first lens includes an anterior concave surface having a reflective material extending over a portion thereof. Light received by the first meniscus lens can be transmitted therethrough. The reflective material is positioned along the anterior concave surface to receive light transmitted therethrough and reflected back from the second lens. In an associated method the first meniscus lens is positioned to receive light through a first of two opposing refractive surfaces. After each lens provides an internal reflection, reflected light is transmitted through the second of the two opposing surfaces and then through a bore positioned within the second lens to the third lens.
G02B 13/18 - Optical objectives specially designed for the purposes specified below with lenses having one or more non-spherical faces, e.g. for reducing geometrical aberration
G02B 7/04 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses with mechanism for focusing or varying magnification
G02B 13/00 - Optical objectives specially designed for the purposes specified below
G02B 15/14 - Optical objectives with means for varying the magnification by axial movement of one or more lenses or groups of lenses relative to the image plane for continuously varying the equivalent focal length of the objective
G03B 17/17 - Bodies with reflectors arranged in beam forming the photographic image, e.g. for reducing dimensions of camera
78.
Image processing method and system for iris recognition
A method of iris recognition comprises detecting a body region larger than and comprising at least one iris in an image and performing a first eye modelling on the detected body region. If successful, the result of first iris segmentation based on the first eye model is chosen. Otherwise, a first iris identification is performed on the detected body region. If successful, the result of second iris segmentation based on a second eye modelling is chosen. Otherwise, second iris identification is performed on the image, third eye modelling is performed on the result of the second iris identification, and third iris segmentation is performed on the result of the third eye modelling. If successful, the result of third iris segmentation based on a third eye modelling is chosen. An iris code is extracted from any selected iris segment of the image.
An image processing method for iris recognition of a predetermined subject, comprises acquiring through an image sensor, a probe image illuminated by an infra-red (IR) illumination source, wherein the probe image comprises one or more eye regions and is overexposed until skin portions of the image are saturated. One or more iris regions are identified within the one or more eye regions of said probe image; and the identified iris regions are analyzed to detect whether they belong to the predetermined subject.
Training a target neural network comprises providing a first batch of samples of a given class to respective instances of a generative neural network, each instance providing a variant of the sample in accordance with the parameters of the generative network. Each variant produced by the generative network is compared with another sample of the class to provide a first loss function for the generative network. A second batch of samples is provided to the target neural network, at least some of the samples comprising variants produced by the generative network. A second loss function is determined for the target neural network by comparing outputs of instances of the target neural network to one or more targets for the neural network. The parameters for the target neural network are updated using the second loss function and the parameters for the generative network are updated using the first and second loss functions.
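A minimal sketch of one combined update step in PyTorch, assuming classification targets, an MSE similarity loss for the generator, and labels covering both the plain samples and the appended variants. Because the variants stay in the graph through the target network, a single backward pass gives the target network gradients from the second loss only, while the generator receives gradients from both losses, as the abstract requires:

    import torch
    import torch.nn.functional as F

    def train_step(gen, target, opt_gen, opt_target,
                   batch_a, batch_a_pairs, batch_b, labels):
        variants = gen(batch_a)                       # one variant per sample
        loss1 = F.mse_loss(variants, batch_a_pairs)   # vs. same-class samples

        out = target(torch.cat([batch_b, variants]))  # variants kept in graph
        loss2 = F.cross_entropy(out, labels)

        opt_gen.zero_grad(); opt_target.zero_grad()
        (loss1 + loss2).backward()    # target sees only loss2; gen sees both
        opt_gen.step(); opt_target.step()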
Synthesizing a neural network from a plurality of component neural networks is disclosed. The method comprises mapping each component network to a respective graph node where each node is first labelled in accordance with the structure of a corresponding layer of the component network and a distance of the node from one of a given input or output. The graphs for each component network are merged into a single merged graph by merging nodes from component network graphs having the same first structural label. Each node of the merged graph is second labelled in accordance with the structure of the corresponding layer of the component network and a distance of the node from the other of a given input or output. The merged graph is contracted by merging nodes of the merged graph having the same second structural label. The contracted-merged graph is mapped to a synthesized neural network.
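A minimal sketch of the merge-by-label step with plain dictionaries, where a label is a (layer_structure, distance) pair; the tiny component graphs are illustrative:

    def merge_graphs(graphs):
        """Merge component-network graphs: nodes sharing a structural label
        collapse into one node; edges are re-pointed to the merged nodes."""
        nodes_out, edges_out = set(), set()
        for nodes, edges in graphs:       # nodes: {id: label}; edges: {(a, b)}
            nodes_out.update(nodes.values())
            edges_out.update((nodes[a], nodes[b]) for a, b in edges)
        return nodes_out, edges_out

    # Two component nets sharing a conv layer at distance 0 from the input:
    g1 = ({1: ("conv3x3", 0), 2: ("fc10", 1)}, {(1, 2)})
    g2 = ({1: ("conv3x3", 0), 2: ("fc2", 1)},  {(1, 2)})
    nodes, edges = merge_graphs([g1, g2])  # one shared conv node, two heads

The same pass, run again with labels computed from the opposite end of the network, gives the contraction step described above.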
Systems and methods of detecting an unauthorized data insertion into a stream of data segments extending between electronic modules or between electronic components within a module, wherein a Secret embedded into the data stream is compared to a Replica Secret upon receipt to confirm data transmission integrity.
Systems and methods of detecting an unauthorized data insertion into a stream of data segments extending between electronic modules or between electronic components within a module, wherein a data stream is encrypted with a secure encryption key for transmission, then decrypted upon receipt using a corresponding secure decryption key to confirm data transmission integrity.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06F 21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
G06F 21/64 - Protecting data integrity, e.g. using checksums, certificates or signatures
84.
Systems and methods for detecting data insertions in biometric authentication systems using pseudo data segments
Systems and methods of detecting an unauthorized data insertion into a stream of data segments extending between electronic modules or between electronic components within a module, wherein a pseudo data segment included in the data stream upon transmission is detected upon receipt to confirm data transmission integrity.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06F 21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
G06F 21/64 - Protecting data integrity, e.g. using checksums, certificates or signatures
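A minimal sketch of the mechanism: the transmitter interleaves marker segments derived from a shared secret at known positions, and the receiver checks each expected position; an inserted segment shifts the stream and fails the check. The marker scheme and interval are illustrative:

    import hashlib, os

    SHARED = os.urandom(16)              # secret known to both endpoints

    def pseudo_segment(index: int) -> bytes:
        return hashlib.sha256(SHARED + index.to_bytes(4, "big")).digest()[:8]

    def transmit(segments, every=4):
        out = []
        for i, seg in enumerate(segments):
            out.append(seg)
            if (i + 1) % every == 0:     # marker after every 4 data segments
                out.append(pseudo_segment(i))
        return out

    def receive(stream, every=4):
        data, i, it = [], 0, iter(stream)
        for seg in it:
            data.append(seg); i += 1
            if i % every == 0 and next(it, None) != pseudo_segment(i - 1):
                raise ValueError("data insertion detected")
        return data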
In an embodiment, a method comprises determining, by a portable device, image capturing conditions by analyzing contents of a particular digital image of one or more first digital images captured by one or more image capturing devices. It is also determined whether the image capturing conditions indicate indoor image capturing conditions. If the image capturing conditions indicate indoor image capturing conditions, then it is determined whether the particular digital image includes a depiction of at least one eye. If so, an iris region, depicting the at least one eye, is segmented in the particular digital image. If the segmented iris region does not include valid iris information, then the one or more image capturing devices capture one or more second digital images having an enhanced contrast. If the one or more second digital images include valid iris information, then an iris code is extracted from the one or more second digital images.
Systems and methods for authenticating a biometric device using a trusted coordinating smart device in accordance with embodiments of the invention are disclosed. In one embodiment, a process for enrolling a configurable biometric device with a network service includes: obtaining a device identifier (ID) of the configurable biometric device using a coordinating smart device; communicating the device ID from the coordinating smart device to a network service; communicating a first challenge, based on a challenge-response authentication protocol, from the network service to the coordinating smart device; communicating the first challenge and a response uniform resource locator (URL) from the coordinating smart device to the configurable biometric device; generating a first response to the first challenge and communicating the first response to the network service using the response URL; receiving a secure channel key at the coordinating smart device from the network service; communicating the secure channel key from the coordinating smart device to the configurable biometric device; performing a biometric enrollment process using the configurable biometric device, including capturing biometric information from a user; and creating a secure communication link between the configurable biometric device and the network service using the secure channel key when the first response satisfies the challenge-response authentication protocol.
G06F 21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
G06F 21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 29/06 - Communication control; Communication processing characterised by a protocol
A method for improving accuracy of depth map information derived from image data descriptive of a scene. In one embodiment Mutual Feature Map data are created based on initial disparity map data values and the image data descriptive of the scene. The Mutual Feature Map data are applied to create a series of weighting functions representing structural details that can be transferred to the first disparity values to restore degraded features or replace some of the first disparity values with values more representative of structural features present in the image data descriptive of the scene.
H04N 13/271 - Image signal generators wherein the generated image signals comprise depth maps or disparity maps
H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
G06K 9/46 - Extraction of features or characteristics of the image
G06K 9/62 - Methods or arrangements for recognition using electronic means
H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
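A minimal sketch of one way such weights can transfer image structure into the disparity map: a joint-bilateral pass whose weights mix image similarity and initial-disparity similarity, standing in for the unspecified Mutual Feature Map:

    import numpy as np

    def refine_disparity(disp, img, radius=3, sigma_i=10.0, sigma_d=2.0):
        """Weighted average of neighbouring disparities, weighted by joint
        similarity in the image and in the initial disparity map."""
        h, w = disp.shape
        out = disp.astype(float).copy()
        for y in range(radius, h - radius):
            for x in range(radius, w - radius):
                pd = disp[y-radius:y+radius+1, x-radius:x+radius+1].astype(float)
                pi = img[y-radius:y+radius+1, x-radius:x+radius+1].astype(float)
                wgt = np.exp(-((pi - img[y, x]) ** 2) / (2 * sigma_i ** 2)
                             - ((pd - disp[y, x]) ** 2) / (2 * sigma_d ** 2))
                out[y, x] = (wgt * pd).sum() / wgt.sum()
        return out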
88.
Manifest file configuration based on manifest request characteristics
Techniques for serving a manifest file of an adaptive streaming video include receiving a request for the manifest file from a user device. The video is encoded at different reference bitrates and each encoded reference bitrate is divided into segments to generate video segment files. The manifest file includes an ordered list of universal resource locators (URLs) that reference a set of video segment files encoded at a particular reference bitrate. A source manifest file that indicates the set of video segment files is identified based on the request. An issued manifest file that includes a first URL and a second URL is generated based on the source manifest file. The first URL references a first domain and the second URL references a second domain that is different from the first domain. The issued manifest file is transmitted to the user device as a response to the request.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
89.
Redirects during manifest file configuration and serving of video segment files
Techniques for serving a manifest file of an adaptive streaming video include receiving a request for the manifest file from a user device. The video is encoded at different reference bitrates and each encoded reference bitrate is divided into segments to generate video segment files. The manifest file includes an ordered list of universal resource locators (URLs) that reference a set of video segment files encoded at a particular reference bitrate. A source manifest file that indicates the set of video segment files is identified based on the request. An issued manifest file that includes a first URL and a second URL is generated based on the source manifest file. The first URL references a first domain and the second URL references a second domain that is different from the first domain. The issued manifest file is transmitted to the user device as a response to the request.
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
H04L 29/06 - Communication control; Communication processing characterised by a protocol
90.
Image processing method and system for iris recognition
An image processing method for iris recognition of a predetermined subject, comprises acquiring through an image sensor, a probe image illuminated by an infra-red (IR) illumination source, wherein the probe image comprises one or more eye regions and is overexposed until skin portions of the image are saturated. One or more iris regions are identified within the one or more eye regions of said probe image; and the identified iris regions are analysed to detect whether they belong to the predetermined subject.
A method of tracking an object across a stream of images comprises determining a region of interest (ROI) bounding the object in an initial frame of an image stream. A HOG map is provided for the ROI by: dividing the ROI into an array of M×N cells, each cell comprising a plurality of image pixels; and determining a HOG for each of the cells. The HOG map is stored as indicative of the features of the object. Subsequent frames are acquired from the stream of images. The frames are scanned ROI by ROI to identify a candidate ROI having a HOG map best matching the stored HOG map features. If the match meets a threshold, the stored HOG map indicative of the features of the object is updated according to the HOG map for the best matching candidate ROI.
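A minimal sketch of the scan-and-update loop for a grayscale frame, collapsing the M×N cell map to a single whole-ROI histogram for brevity; the search radius, match score and update threshold are illustrative:

    import numpy as np

    def hog_descriptor(patch, bins=8):
        gy, gx = np.gradient(patch.astype(float))
        ang = np.arctan2(gy, gx) % np.pi         # unsigned orientation
        hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi),
                               weights=np.hypot(gx, gy))
        return hist / (hist.sum() + 1e-9)

    def track(frame, template, roi, search=16, step=4, thresh=0.8):
        x0, y0, w, h = roi
        H, W = frame.shape
        best, best_roi = -1.0, roi
        for y in range(max(0, y0 - search), min(H - h, y0 + search) + 1, step):
            for x in range(max(0, x0 - search), min(W - w, x0 + search) + 1, step):
                cand = hog_descriptor(frame[y:y+h, x:x+w])
                score = 1.0 - 0.5 * np.abs(cand - template).sum()
                if score > best:
                    best, best_roi = score, (x, y, w, h)
        if best >= thresh:                        # refresh stored features
            x, y, w, h = best_roi
            template = hog_descriptor(frame[y:y+h, x:x+w])
        return best_roi, template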
Techniques for serving a manifest file of an adaptive streaming video include receiving a request for the manifest file from a user device. The video is encoded at different reference bitrates and each encoded reference bitrate is divided into segments to generate video segment files. The manifest file includes an ordered list of universal resource locators (URLs) that reference a set of video segment files encoded at a particular reference bitrate. A source manifest file that indicates the set of video segment files is identified based on the request. An issued manifest file that includes a first URL and a second URL is generated based on the source manifest file. The first URL references a first domain and the second URL references a second domain that is different from the first domain. The issued manifest file is transmitted to the user device as a response to the request.
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
A biometric recognition system for a hand held computing device incorporating an inertial measurement unit (IMU) comprising a plurality of accelerometers and at least one gyroscope is disclosed. A tremor analysis component is arranged to: obtain from the IMU, accelerometer signals indicating device translational acceleration along each of X, Y and Z axes as well as a gyroscope signal indicating rotational velocity about the Y axis during a measurement window. Each of the IMU signals is filtered to provide filtered frequency components for the signals during the measurement window. The accelerometer signals are combined to provide a combined filtered accelerometer magnitude signal for the measurement window. A spectral density estimation is provided for each of the combined filtered accelerometer magnitude signal and the filtered gyroscope signal. An irregularity is determined for each spectral density estimation; and based on the determined irregularities, the tremor analysis component attempts to authenticate a user of the device.
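A minimal sketch of the signal path with scipy, using Welch's method for the spectral density estimate and a sum of second differences as the irregularity measure; the tremor band and both choices are assumptions, not fixed by the abstract:

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, welch

    def tremor_features(acc_xyz, gyro_y, fs=100.0):
        """acc_xyz: (3, N) accelerometer samples; gyro_y: (N,) rotational
        velocity about the Y axis, over one measurement window."""
        sos = butter(4, [4, 12], btype="bandpass", fs=fs, output="sos")
        acc_f = sosfiltfilt(sos, acc_xyz, axis=1)
        mag = np.linalg.norm(acc_f, axis=0)      # combined accelerometer magnitude
        gyr_f = sosfiltfilt(sos, gyro_y)

        def irregularity(sig):
            _, psd = welch(sig, fs=fs, nperseg=256)
            return np.abs(np.diff(psd, n=2)).sum()   # spectral roughness

        return irregularity(mag), irregularity(gyr_f)

The two irregularity values would then feed whatever decision rule the device uses to accept or reject the user.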
Techniques for serving a manifest file of an adaptive streaming video include receiving a request for the manifest file from a user device. The video is encoded at different reference bitrates and each encoded reference bitrate is divided into segments to generate video segment files. The manifest file includes an ordered list of universal resource locators (URLs) that reference a set of video segment files encoded at a particular reference bitrate. A source manifest file that indicates the set of video segment files is identified based on the request. An issued manifest file that includes a first URL and a second URL is generated based on the source manifest file. The first URL references a first domain and the second URL references a second domain that is different from the first domain. The issued manifest file is transmitted to the user device as a response to the request.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
H04N 21/2662 - Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
A dynamically reconfigurable heterogeneous systolic array is configured to process a first image frame, generate image processing primitives from the frame, and store the primitives and the corresponding image frame in a memory store. A characteristic of the image frame is determined. Based on the characteristic, the array is reconfigured to process a following image frame.
G06T 1/20 - Processor architecturesProcessor configuration, e.g. pipelining
G06F 15/80 - Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors
An image processing system comprises a template matching engine (TME). The TME reads an image from the memory; and as each pixel of the image is being read, calculates a respective feature value of a plurality of feature maps as a function of the pixel value. A pre-filter is responsive to a current pixel location comprising a node within a limited detector cascade to be applied to a window within the image to: compare a feature value from a selected one of the plurality of feature maps corresponding to the pixel location to a threshold value; and responsive to pixels for all nodes within a limited detector cascade to be applied to the window having been read, determine a score for the window. A classifier, responsive to the pre-filter indicating that a score for a window is below a window threshold, does not apply a longer detector cascade to the window before indicating that the window does not comprise an object to be detected.
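A minimal sketch of the pre-filter decision: a short cascade scores each window from the already-computed feature maps, and only windows that clear the window threshold are handed to the longer cascade; the node layout is illustrative:

    def prefilter(feature_maps, windows, limited_cascade, window_thresh):
        """limited_cascade: list of (map_idx, dx, dy, node_thresh, pass_w,
        fail_w) nodes; returns windows that merit the longer cascade."""
        survivors = []
        for (wx, wy) in windows:
            score = 0.0
            for map_idx, dx, dy, node_thresh, pass_w, fail_w in limited_cascade:
                value = feature_maps[map_idx][wy + dy][wx + dx]
                score += pass_w if value > node_thresh else fail_w
            if score >= window_thresh:
                survivors.append((wx, wy))
        return survivors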
A method operable by a computing device for configuring access for a limited user interface (UI) device to a network service via a local network access point is disclosed. The method comprises the steps of: obtaining from the limited-UI device a device identifier via a first out-of-band channel. The device identifier is provided to the network service via a secure network link. A zero knowledge proof (ZKP) challenge is received from the network service. Configuration information is provided to the limited-UI device via a second out-of-band channel, the configuration information including information sufficient to enable the limited-UI device to connect to the local network access point. The ZKP challenge is provided to the limited-UI device via the second out-of-band channel. A secure channel key is received from the network service indicating a successful response from the limited-UI device to the ZKP challenge; and provided to the limited-UI device enabling the limited-UI device to access the network service.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04W 4/80 - Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
Gradients are allocated to sectors divided from adjacent sectors according to inequalities comparing gx with n·gy, where n is any integer value with a magnitude greater than or equal to 1. At least one sector is associated with a bin; and a count of each instance of a pixel gradient of a cell associated with a bin is performed to provide a HOG for said cell.
A method of tracking an object across a stream of images comprises determining a region of interest (ROI) bounding the object in an initial frame of an image stream. A HOG map is provided for the ROI by: dividing the ROI into an array of M×N cells, each cell comprising a plurality of image pixels; and determining a HOG for each of the cells. The HOG map is stored as indicative of the features of the object. Subsequent frames are acquired from the stream of images. The frames are scanned ROI by ROI to identify a candidate ROI having a HOG map best matching the stored HOG map features. If the match meets a threshold, the stored HOG map indicative of the features of the object is updated according to the HOG map for the best matching candidate ROI.