Adeia Media Holdings Inc.

United States of America


1-100 of 139 results for Adeia Media Holdings Inc.

Aggregations
Date
2025 (YTD) 1
2024 4
2023 4
2022 8
2021 14
IPC Class
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints 37
H04N 5/232 - Devices for controlling television cameras, e.g. remote control 28
G06T 5/00 - Image enhancement or restoration 21
H04L 29/06 - Communication control; Communication processing characterised by a protocol 18
G06K 9/46 - Extraction of features or characteristics of the image 14
Status
Pending 3
Registered / In Force 136

1.

SYSTEM FOR GENERATING MOTION BLUR

      
Application Number 18977996
Status Pending
Filing Date 2024-12-12
First Publication Date 2025-04-03
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Stec, Piotr
  • Bigioi, Petronel

Abstract

A system for generating motion blur comprises: a frame camera, an event camera and an accumulator for accumulating event information from a plurality of events occurring within a window around the exposure time of an image frame in a plurality of event frames. A processor determines from the events in at least a first of the plurality of event frames, one or more areas of movement within the field of view of the event camera; determines from the events in at least a second of the plurality of event frames, a direction of movement for the one or more areas of movement; and applies blur in one or more areas of the image frame corresponding to the one or more determined areas of movement in accordance with at least the direction of movement for each of the one or more areas of movement to produce a blurred image.
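
The blur-synthesis pipeline described in this abstract can be pictured with a small sketch. This is a minimal illustration only, assuming events have already been binned into per-frame 2D arrays; the 16x16 cell grid, the centroid-shift direction estimate and the roll-and-average blur are invented placeholders, not the patented method.

    import numpy as np

    def areas_of_movement(event_frame, threshold=4):
        # Cells of an accumulated event frame with enough events are treated
        # as areas of movement (hypothetical 16x16 cell grid).
        h, w = event_frame.shape
        cells = event_frame[:h - h % 16, :w - w % 16].reshape(h // 16, 16, w // 16, 16)
        counts = np.abs(cells).sum(axis=(1, 3))
        return np.argwhere(counts > threshold)              # (row, col) cell indices

    def movement_direction(first_frame, second_frame, cell):
        # Direction taken from the shift of the event centroid between two
        # event frames within the same cell.
        r, c = cell
        def centroid(frame):
            patch = np.abs(frame[r*16:(r+1)*16, c*16:(c+1)*16]).astype(float)
            if patch.sum() == 0:
                return np.zeros(2)
            ys, xs = np.indices(patch.shape)
            return np.array([(ys * patch).sum(), (xs * patch).sum()]) / patch.sum()
        return centroid(second_frame) - centroid(first_frame)

    def apply_directional_blur(image, cell, direction, steps=5):
        # Rough motion blur: average the image patch with copies of itself
        # shifted along the estimated direction of movement.
        r, c = cell
        patch = image[r*16:(r+1)*16, c*16:(c+1)*16].astype(float)
        acc = np.zeros_like(patch)
        for i in range(steps):
            dy, dx = np.round(direction * i / steps).astype(int)
            acc += np.roll(patch, (int(dy), int(dx)), axis=(0, 1))
        image[r*16:(r+1)*16, c*16:(c+1)*16] = acc / steps
        return image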

IPC Classes

  • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
  • G06T 5/70 - Denoising; Smoothing
  • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
  • H04N 5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects
  • H04N 25/47 - Image sensors with pixel address output; Event-driven image sensors; Selection of pixels to be read out based on image data

2.

System for generating motion blur

      
Application Number 18305384
Grant Number 12254599
Status In Force
Filing Date 2023-04-24
First Publication Date 2024-10-24
Grant Date 2025-03-18
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Stec, Piotr
  • Bigioi, Petronel

Abstract

A system for generating motion blur comprises: a frame camera, an event camera and an accumulator for accumulating event information from a plurality of events occurring within a window around the exposure time of an image frame in a plurality of event frames. A processor determines from the events in at least a first of the plurality of event frames, one or more areas of movement within the field of view of the event camera; determines from the events in at least a second of the plurality of event frames, a direction of movement for the one or more areas of movement; and applies blur in one or more areas of the image frame corresponding to the one or more determined areas of movement in accordance with at least the direction of movement for each of the one or more areas of movement to produce a blurred image.

IPC Classes

  • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
  • G06T 5/70 - Denoising; Smoothing
  • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
  • H04N 5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects
  • H04N 25/47 - Image sensors with pixel address output; Event-driven image sensors; Selection of pixels to be read out based on image data

3.

PROVIDING THIRD-PARTY DYNAMIC CONTENT WITHIN ADAPTIVE STREAMING VIDEO

      
Application Number 18448769
Status Pending
Filing Date 2023-08-11
First Publication Date 2024-07-04
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Gordon, Michael
  • Morel, David

Abstract

Techniques for serving a manifest file of an adaptive streaming video include receiving a request for the manifest file from a user device. The video is encoded at different reference bitrates and each encoded reference bitrate is divided into segments to generate video segment files. The manifest file includes an ordered list of universal resource locators (URLs) that reference a set of video segment files encoded at a particular reference bitrate. A source manifest file that indicates the set of video segment files is identified based on the request. An issued manifest file that includes a first URL and a second URL is generated based on the source manifest file. The first URL references a first domain and the second URL references a second domain that is different from the first domain. The issued manifest file is transmitted to the user device as a response to the request.
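
A minimal sketch of the manifest issuing step described here, assuming a plain list-of-URLs manifest format; the CDN host names, segment naming and the rule for splitting segments across the two domains are illustrative assumptions, not the actual service behaviour.

    from urllib.parse import urljoin

    PRIMARY_CDN = "https://cdn-a.example.com/"        # hypothetical first domain
    SECONDARY_CDN = "https://cdn-b.example.net/"      # hypothetical second domain

    def issue_manifest(source_segments, switch_index):
        # source_segments: ordered segment paths for one reference bitrate.
        # Segments before switch_index are referenced on the first domain,
        # the rest on the second, so the issued manifest mixes two domains.
        urls = []
        for i, path in enumerate(source_segments):
            base = PRIMARY_CDN if i < switch_index else SECONDARY_CDN
            urls.append(urljoin(base, path))
        return "\n".join(urls)

    if __name__ == "__main__":
        segments = [f"video/720p/seg{i:04d}.ts" for i in range(4)]
        print(issue_manifest(segments, switch_index=2))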

IPC Classes

  • H04L 65/75 - Media network packet handling
  • H04L 9/40 - Network security protocols
  • H04L 65/612 - Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for unicast
  • H04L 65/80 - Responding to QoS
  • H04L 67/02 - Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
  • H04N 21/222 - Secondary servers, e.g. proxy server or cable television Head-end
  • H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
  • H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
  • H04N 21/2662 - Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
  • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
  • H04N 21/858 - Linking data to content, e.g. by linking an URL to a video object or by creating a hotspot

4.

NETWORK MONITORING TO DETERMINE PERFORMANCE OF INFRASTRUCTURE SERVICE PROVIDERS

      
Application Number 18448687
Status Pending
Filing Date 2023-08-11
First Publication Date 2024-06-06
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor Gordon, Michael

Abstract

Techniques for serving a manifest file of an adaptive streaming video include receiving a request for the manifest file from a user device. The video is encoded at different reference bitrates and each encoded reference bitrate is divided into segments to generate video segment files. The manifest file includes an ordered list of universal resource locators (URLs) that reference a set of video segment files encoded at a particular reference bitrate. A source manifest file that indicates the set of video segment files is identified based on the request. An issued manifest file that includes a first URL and a second URL is generated based on the source manifest file. The first URL references a first domain and the second URL references a second domain that is different from the first domain. The issued manifest file is transmitted to the user device as a response to the request.

IPC Classes

  • H04L 65/75 - Media network packet handling
  • H04L 65/80 - Responding to QoS
  • H04L 67/02 - Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
  • H04L 67/10 - Protocols in which an application is distributed across nodes in the network
  • H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
  • H04L 67/146 - Markers for unambiguous identification of a particular session, e.g. session cookie or URL-encoding
  • H04L 67/563 - Data redirection of data network streams
  • H04L 67/566 - Grouping or aggregating service requests, e.g. for unified processing
  • H04L 69/40 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
  • H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
  • H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
  • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments

5.

Method and system for camera motion blur reduction

      
Application Number 18377040
Grant Number 12143723
Status In Force
Filing Date 2023-10-05
First Publication Date 2024-01-25
Grant Date 2024-11-12
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Condorovici, Razvan
  • Petcu, Bogdan
  • Stec, Piotr

Abstract

A method for reducing camera motion blur comprises, before acquiring an image frame for a video stream, a camera measurement unit measuring data related to a camera module motion during a time window; determining camera module motion based on the measured data and predicting a camera motion blur during acquisition of the image frame based at least on the determined camera module motion and the lens projection model; determining whether the predicted camera motion blur exceeds a threshold; in response to determining that the predicted camera motion blur exceeds the threshold, determining a reduction of the provisional exposure time determined to acquire the image frame so that the predicted camera motion blur reaches the threshold, determining whether a corresponding increase in the provisional gain determined to acquire the image frame is below a maximum gain value, adjusting the provisional exposure time and gain, and acquiring the image frame.
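
The exposure/gain trade-off in this abstract reduces to a small calculation; the sketch below assumes the predicted blur scales linearly with exposure time, and the threshold and gain limit are invented values.

    def adjust_exposure_for_blur(predicted_blur_px, exposure_ms, gain,
                                 blur_threshold_px=2.0, max_gain=8.0):
        # Shorten the exposure until the predicted blur just meets the
        # threshold, and raise the gain to compensate, but only when the
        # raised gain stays below the maximum allowed value.
        if predicted_blur_px <= blur_threshold_px:
            return exposure_ms, gain                  # no correction needed
        scale = blur_threshold_px / predicted_blur_px
        new_gain = gain / scale
        if new_gain > max_gain:
            return exposure_ms, gain                  # correction not applied
        return exposure_ms * scale, new_gain

    # Example: 6 px of predicted blur at 30 ms exposure, unity gain.
    print(adjust_exposure_for_blur(6.0, 30.0, 1.0))   # -> (10.0, 3.0)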

IPC Classes

  • H04N 23/68 - Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations

6.

Data decompression apparatus

      
Application Number 17686367
Grant Number 12032524
Status In Force
Filing Date 2022-03-03
First Publication Date 2023-09-07
Grant Date 2024-07-09
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Bartha, Lorant
  • Lajtai, Norbert

Abstract

A decompression apparatus comprises a number of stages including: a first stage which always reads a binary symbol from a first stage indicator file for each symbol which is to be decoded; one or more mid stages which conditionally read a binary symbol from successive indicator files based on the value of the last symbol read from a previous indicator file; and a final stage which conditionally reads a symbol from a reduced file based on the value of the last symbol read from the last stage indicator file.
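
A minimal sketch of the staged, indicator-driven decoding this abstract describes, with each indicator file and the reduced file modelled as Python iterators; the two-stage layout, symbol values and the default output are assumptions for illustration.

    def decompress(first_stage, mid_stages, reduced, default=0):
        # first_stage: iterator of 0/1 indicators, always read once per symbol.
        # mid_stages:  list of iterators, each read only when the last symbol
        #              read from the previous stage was 1.
        # reduced:     iterator of literal symbols, read only when the last
        #              indicator read was 1.
        out = []
        for bit in first_stage:
            last = bit
            for stage in mid_stages:
                if last != 1:
                    break
                last = next(stage)
            out.append(next(reduced) if last == 1 else default)
        return out

    # Two symbols survive every stage, the others decode to the default.
    print(decompress(iter([1, 0, 1, 1]), [iter([1, 1, 0])], iter([7, 9])))  # [7, 0, 9, 0]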

IPC Classes

  • G06F 16/174 - Redundancy elimination performed by the file system
  • G06N 3/02 - Neural networks
  • H03M 7/30 - Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction

7.

Method and system for camera motion blur reduction

      
Application Number 17541222
Grant Number 11812146
Status In Force
Filing Date 2021-12-02
First Publication Date 2023-06-08
Grant Date 2023-11-07
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Condorovici, Razvan
  • Petcu, Bogdan
  • Stec, Piotr

Abstract

A method for reducing camera motion blur comprises, before acquiring an image frame for a video stream, a camera measurement unit measuring data related to a camera module motion during a time window; determining camera module motion based on the measured data and predicting a camera motion blur during acquisition of the image frame based at least on the determined camera module motion and the lens projection model; determining whether the predicted camera motion blur exceeds a threshold; in response to determining that the predicted camera motion blur exceeds the threshold, determining a reduction of the provisional exposure time determined to acquire the image frame so that the predicted camera motion blur reaches the threshold, determining whether a corresponding increase in the provisional gain determined to acquire the image frame is below a maximum gain value, adjusting the provisional exposure time and gain, and acquiring the image frame.

IPC Classes

  • H04N 23/68 - Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations

8.

Gaze repositioning during a video conference

      
Application Number 17495800
Grant Number 11722329
Status In Force
Filing Date 2021-10-06
First Publication Date 2023-04-06
Grant Date 2023-08-08
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Condorovici, Razvan
  • Stan, Andra

Abstract

A method at a first participant's client conferencing system in a videoconference comprises receiving, from a second client conferencing system, at least one first video frame of a first video signal including an image of the second participant looking at a third participant, and first metadata associated with the first video frame and including an identity of the third participant. The image of the second participant is modified in the first video frame so that the first video frame is displayed on a first area of the client conferencing system with the second participant looking at a second area of the first display configured for displaying a second video signal of the third participant identified by the first metadata.

IPC Classes

  • H04L 12/18 - Arrangements for providing special services to substations for broadcast or conference
  • G06V 20/40 - Scenes; Scene-specific elements in video content
  • G06V 40/18 - Eye characteristics, e.g. of the iris

9.

Method for generating a composite image

      
Application Number 17411887
Grant Number 11895427
Status In Force
Filing Date 2021-08-25
First Publication Date 2023-03-02
Grant Date 2024-02-06
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Condorovici, Razvan
  • Stan, Andra
  • Stan, Cosmin

Abstract

A method for generating a composite image comprises: detecting a color temperature of a background image; acquiring from a camera through an image signal processor, ISP, performing white balance correction of acquired image data, an image including a foreground region including face of a user; and detecting a color temperature of the foreground region. Responsive to the color temperature for the foreground region differing from that of the background image by more than a threshold amount, a color temperature for white balance correction of a subsequently acquired image is set which causes skin pixels within the foreground region of the subsequently acquired image to have a color temperature closer to the color temperature for the background image. Pixel values of the foreground region are combined with pixel values of the background image corresponding to a background region of the acquired image to provide the composite image.
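
A minimal sketch of the control loop in this abstract, assuming the colour temperature of each region is already estimated in kelvin; the difference threshold, the proportional correction step and the mask-based compositing are simplified stand-ins for the described white-balance adjustment.

    import numpy as np

    def next_awb_target(fg_temp_k, bg_temp_k, current_awb_k, threshold_k=500):
        # When the foreground (face) colour temperature differs from the
        # background by more than the threshold, move the white-balance
        # target for the next acquired frame part-way toward the background.
        if abs(fg_temp_k - bg_temp_k) <= threshold_k:
            return current_awb_k
        return current_awb_k + 0.5 * (bg_temp_k - fg_temp_k)

    def composite(foreground, background, mask):
        # mask: 1 where the acquired frame shows the user, 0 elsewhere.
        mask = mask[..., None].astype(float)
        return (mask * foreground + (1 - mask) * background).astype(np.uint8)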

IPC Classes

  • H04N 5/272 - Means for inserting a foreground image in a background image, i.e. inlay, outlay
  • G06T 7/194 - Segmentation; Edge detection involving foreground-background segmentation
  • H04N 23/88 - Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control

10.

Event camera hardware

      
Application Number 17851387
Grant Number 11818495
Status In Force
Filing Date 2022-06-28
First Publication Date 2022-10-13
Grant Date 2023-11-14
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Bartha, Lorant
  • Zaharia, Corneliu
  • Georgescu, Vlad
  • Lemley, Joe

Abstract

A method of producing an image frame from event packets received from an event camera comprises: forming a tile buffer sized to accumulate event information for a subset of image tiles, the tile buffer having an associated tile table that determines a mapping between each tile of the image frame for which event information is accumulated in the tile buffer and the image frame. For each event packet: an image tile corresponding to the pixel location of the event packet is identified; responsive to the tile buffer storing information for one other event corresponding to the image tile, event information is added to the tile buffer; and responsive to the tile buffer not storing information for another event corresponding to the image tile and responsive to the tile buffer being capable of accumulating event information for at least one more tile, the image tile is added to the tile buffer.
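
A minimal sketch of the tile-buffer accumulation rule from this abstract, with a dict keyed by tile index standing in for the hardware tile table; the tile size and buffer capacity are illustrative.

    TILE = 32          # assumed tile size in pixels
    CAPACITY = 8       # assumed number of tiles the buffer can track at once

    def accumulate(events, frame_width):
        # events: iterable of (x, y, polarity) packets from the event camera.
        tile_buffer = {}                        # tile index -> accumulated events
        for x, y, pol in events:
            tile = (y // TILE) * (frame_width // TILE) + (x // TILE)
            if tile in tile_buffer:             # tile already tracked: append
                tile_buffer[tile].append((x, y, pol))
            elif len(tile_buffer) < CAPACITY:   # room to start tracking one more tile
                tile_buffer[tile] = [(x, y, pol)]
            # otherwise the event cannot be accumulated in this pass
        return tile_buffer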

IPC Classes

  • H04N 5/77 - Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
  • B60R 11/04 - Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
  • H04N 5/14 - Picture signal circuitry for video frequency region
  • H04N 5/91 - Television signal processing therefor
  • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
  • H04N 25/00 - Circuitry of solid-state image sensors [SSIS]; Control thereof
  • H04N 25/772 - Pixel circuitry, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components comprising A/D, V/T, V/F, I/T or I/F converters
  • B60R 11/00 - Arrangements for holding or mounting articles, not otherwise provided for

11.

Method of controlling a camera

      
Application Number 17214424
Grant Number 11610338
Status In Force
Filing Date 2021-03-26
First Publication Date 2022-09-29
Grant Date 2023-03-21
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor Stec, Piotr

Abstract

A method comprises displaying a first image acquired from a camera having an input camera projection model including a first focal length and an optical axis parameter value. A portion of the first image is selected as a second image associated with an output camera projection model in which either a focal length and/or an optical axis parameter value differ from the parameters of the input camera projection model. The method involves iteratively: adjusting either the focal length and/or an optical axis parameter value for the camera lens so that it approaches the corresponding value of the output camera projection model; acquiring a subsequent image using the adjusted focal length or optical axis parameter value; mapping pixel coordinates in the second image, through a normalized 3D coordinate system to respective locations in the subsequent image to determine respective values for the pixel coordinates; and displaying the second image.

IPC Classes

  • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
  • H04N 5/232 - Devices for controlling television cameras, e.g. remote control

12.

Method and system for tracking an object

      
Application Number 17827574
Grant Number 11948350
Status In Force
Filing Date 2022-05-27
First Publication Date 2022-09-15
Grant Date 2024-04-02
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Dinu, Dragos
  • Munteanu, Mihai Constantin
  • Caliman, Alexandru

Abstract

A method of tracking an object across a stream of images comprises determining a region of interest (ROI) bounding the object in an initial frame of an image stream. A HOG map is provided for the ROI by: dividing the ROI into an array of M×N cells, each cell comprising a plurality of image pixels; and determining a HOG for each of the cells. The HOG map is stored as indicative of the features of the object. Subsequent frames are acquired from the stream of images. The frames are scanned ROI by ROI to identify a candidate ROI having a HOG map best matching the stored HOG map features. If the match meets a threshold, the stored HOG map indicative of the features of the object is updated according to the HOG map for the best matching candidate ROI.
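
A minimal numpy sketch of the HOG-map template matching and conditional update this abstract describes; the cell size, bin count, normalised-correlation score and update threshold are simplified assumptions.

    import numpy as np

    def hog_map(roi, cell=8, bins=9):
        # Histogram of gradient orientations, weighted by magnitude, per cell.
        gy, gx = np.gradient(roi.astype(float))
        mag, ang = np.hypot(gx, gy), np.mod(np.arctan2(gy, gx), np.pi)
        rows, cols = roi.shape[0] // cell, roi.shape[1] // cell
        out = np.zeros((rows, cols, bins))
        for r in range(rows):
            for c in range(cols):
                sl = (slice(r*cell, (r+1)*cell), slice(c*cell, (c+1)*cell))
                out[r, c], _ = np.histogram(ang[sl], bins=bins, range=(0, np.pi),
                                            weights=mag[sl])
        return out

    def match_score(stored, candidate):
        a, b = stored.ravel(), candidate.ravel()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def maybe_update_template(stored, candidate, threshold=0.7):
        # The stored HOG map is replaced only when the best candidate ROI
        # matches well enough, as in the abstract.
        return candidate if match_score(stored, candidate) >= threshold else stored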

IPC Classes

  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
  • G06F 18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06N 3/08 - Learning methods
  • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
  • G06T 7/269 - Analysis of motion using gradient-based methods
  • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
  • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects

13.

Neural network image processing apparatus

      
Application Number 17677320
Grant Number 11699293
Status In Force
Filing Date 2022-02-22
First Publication Date 2022-06-02
Grant Date 2023-07-11
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Lemley, Joseph
  • Dutu, Liviu-Cristian
  • Mathe, Stefan
  • Dumitru-Guzu, Madalin
  • Filip, Dan

Abstract

A neural network image processing apparatus arranged to acquire images from an image sensor and to: identify a ROI containing a face region in an image; determine a plurality of facial landmarks in the face region; use the facial landmarks to transform the face region within the ROI into a face region having a given pose; and use transformed landmarks within the transformed face region to identify a pair of eye regions within the transformed face region. Each identified eye region is fed to a respective first and second convolutional neural network, each network configured to produce a respective feature vector. Each feature vector is fed to respective eyelid opening level neural networks to obtain respective measures of eyelid opening for each eye region. The feature vectors are combined and fed to a gaze angle neural network to generate gaze yaw and pitch values substantially simultaneously with the eyelid opening values.

IPC Classes

  • G06V 20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06N 3/08 - Learning methods
  • G06V 40/19 - Sensors therefor
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
  • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
  • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

14.

Network monitoring to determine performance of infrastructure service providers

      
Application Number 17384532
Grant Number 11765219
Status In Force
Filing Date 2021-07-23
First Publication Date 2022-05-19
Grant Date 2023-09-19
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor Gordon, Michael

Abstract

Techniques for serving a manifest file of an adaptive streaming video include receiving a request for the manifest file from a user device. The video is encoded at different reference bitrates and each encoded reference bitrate is divided into segments to generate video segment files. The manifest file includes an ordered list of universal resource locators (URLs) that reference a set of video segment files encoded at a particular reference bitrate. A source manifest file that indicates the set of video segment files is identified based on the request. An issued manifest file that includes a first URL and a second URL is generated based on the source manifest file. The first URL references a first domain and the second URL references a second domain that is different from the first domain. The issued manifest file is transmitted to the user device as a response to the request.

IPC Classes

  • H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
  • H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
  • H04L 65/75 - Media network packet handling
  • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
  • H04L 65/80 - Responding to QoS
  • H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
  • H04L 67/02 - Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
  • H04L 67/10 - Protocols in which an application is distributed across nodes in the network
  • H04L 67/146 - Markers for unambiguous identification of a particular session, e.g. session cookie or URL-encoding
  • H04L 69/40 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
  • H04L 67/563 - Data redirection of data network streams
  • H04L 67/566 - Grouping or aggregating service requests, e.g. for unified processing

15.

Video super resolution method

      
Application Number 17643984
Grant Number 11727541
Status In Force
Filing Date 2021-12-13
First Publication Date 2022-03-31
Grant Date 2023-08-15
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Ryan, Cian
  • Blythman, Richard

Abstract

A video super resolution method comprises successively executing instances of a first plurality of layers (SISR) of a neural network for generating a first image (St) at a higher resolution than an input image frame (Xt); successively executing a second plurality of layers (VSR) of the neural network for generating a second image (Vt) at the higher resolution, at least one of the second plurality of layers generating intermediate output information (Ht), the second plurality of layers taking into account an output image (Yt−1) at the higher resolution generated by a previous instance of the network from a previous input image frame (Xt−1) and intermediate output information (Ht−1) generated by the second plurality of layers of the previous instance, and executing a third plurality of layers for combining the first (St) and second (Vt) images to produce an output image (Yt) for the instance of the network.
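
The recurrent structure described here can be illustrated with placeholder functions standing in for the three groups of layers; the nearest-neighbour upscaling and simple blending below are not the trained network, they only show how the previous output Y(t-1) and intermediate information H(t-1) are threaded into the next instance.

    import numpy as np

    def sisr(x, scale=2):
        # Placeholder single-image branch: nearest-neighbour upscale.
        return np.kron(x, np.ones((scale, scale)))

    def vsr(x, prev_out, prev_state, scale=2):
        # Placeholder video branch: blends the upscaled frame with the previous
        # output and state, and returns new intermediate information H_t.
        up = np.kron(x, np.ones((scale, scale)))
        return 0.5 * up + 0.5 * prev_out, 0.5 * up + 0.5 * prev_state

    def super_resolve(frames, scale=2):
        h, w = frames[0].shape
        prev_out = np.zeros((h * scale, w * scale))
        prev_state = np.zeros_like(prev_out)
        outputs = []
        for x in frames:                        # one network instance per frame
            s = sisr(x, scale)
            v, prev_state = vsr(x, prev_out, prev_state, scale)
            prev_out = 0.5 * (s + v)            # placeholder combining layers
            outputs.append(prev_out)
        return outputs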

IPC Classes

  • G06T 7/00 - Image analysis
  • G06T 5/00 - Image enhancement or restoration
  • G06T 5/10 - Image enhancement or restoration using non-spatial domain filtering
  • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
  • H04N 23/951 - Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio

16.

Event camera hardware

      
Application Number 17016133
Grant Number 11405580
Status In Force
Filing Date 2020-09-09
First Publication Date 2022-03-10
Grant Date 2022-08-02
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Bartha, Lorant
  • Zaharia, Corneliu
  • Georgescu, Vlad
  • Lemley, Joe

Abstract

A method of producing an image frame from event packets received from an event camera comprises: forming a tile buffer sized to accumulate event information for a subset of image tiles, the tile buffer having an associated tile table that determines a mapping between each tile of the image frame for which event information is accumulated in the tile buffer and the image frame. For each event packet: an image tile corresponding to the pixel location of the event packet is identified; responsive to the tile buffer storing information for one other event corresponding to the image tile, event information is added to the tile buffer; and responsive to the tile buffer not storing information for another event corresponding to the image tile and responsive to the tile buffer being capable of accumulating event information for at least one more tile, the image tile is added to the tile buffer.

IPC Classes

  • H04N 5/77 - Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
  • B60R 11/04 - Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
  • H04N 5/14 - Picture signal circuitry for video frequency region
  • H04N 5/91 - Television signal processing therefor
  • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
  • B60R 11/00 - Arrangements for holding or mounting articles, not otherwise provided for

17.

Measurement of an image sensor point spread function (PSF)

      
Application Number 17405398
Grant Number 11985300
Status In Force
Filing Date 2021-08-18
First Publication Date 2022-02-24
Grant Date 2024-05-14
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor Stec, Piotr

Abstract

Techniques and arrangements that utilize speckle imaging and autocorrelation to estimate the PSF of an image sensor for a digital imaging apparatus, e.g., a camera or a scanner. In particular, a system of components described herein is a simple arrangement that does not require a complex setup. Therefore, the system is portable and easy to set up. Additionally, by utilizing autocorrelation, the calculations of PSF using data obtained by the system are simplified.
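
A minimal sketch of the autocorrelation step, computed from a captured speckle image with the Wiener-Khinchin relation in numpy; the normalisation and the mapping from the autocorrelation peak back to a PSF estimate are simplified.

    import numpy as np

    def speckle_autocorrelation(img):
        # Autocorrelation via FFT: inverse transform of the power spectrum.
        f = np.fft.fft2(img - img.mean())
        ac = np.fft.fftshift(np.fft.ifft2(np.abs(f) ** 2).real)
        return ac / ac.max()

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        speckle = rng.random((256, 256))        # stand-in for a captured speckle image
        ac = speckle_autocorrelation(speckle)
        print(np.unravel_index(ac.argmax(), ac.shape))   # zero-lag peak at (128, 128)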

IPC Classes

  • H04N 17/02 - Diagnosis, testing or measuring for television systems or their details for colour television signals
  • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
  • H04N 17/00 - Diagnosis, testing or measuring for television systems or their details
  • H04N 23/80 - Camera processing pipelines; Components thereof

18.

Image processing system

      
Application Number 17243423
Grant Number 11875573
Status In Force
Filing Date 2021-04-28
First Publication Date 2021-11-04
Grant Date 2024-01-16
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Toca, Cosmin
  • Sandoi, Bogdan
  • Bigioi, Petronel

Abstract

An image processing system is configured to receive a first high resolution stream of images and a second lower resolution stream of images from image sources with substantially the same field of view. The system comprises a localizer component configured to provide a location for any object of interest independently of class within successive images of the second stream of images; a classifier configured to: receive one or more locations selectively provided by the localizer, identify a corresponding portion of an image acquired from the first stream at substantially the same time as the image from the second stream in which an object of interest was identified, and return a classification for the type of object within the identified portion of the image from the first stream; and a tracker configured to associate the classification with the location through acquisition of successive images in the second stream.

IPC Classes

  • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
  • G06T 7/292 - Multi-camera tracking
  • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
  • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
  • G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
  • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
  • G06F 18/24 - Classification techniques
  • G06F 18/25 - Fusion techniques
  • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

19.

Systems and methods for detecting data insertions in biometric authentication systems utilizing a secret

      
Application Number 17360190
Grant Number 11870896
Status In Force
Filing Date 2021-06-28
First Publication Date 2021-10-21
Grant Date 2024-01-09
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Corcoran, Peter
  • Drimbarean, Alexandru

Abstract

Systems and methods of detecting an unauthorized data insertion into a stream of data segments extending between electronic modules or between electronic components within a module, wherein a Secret embedded into the data stream is compared to a Replica Secret upon receipt to confirm data transmission integrity.
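
A minimal sketch of this integrity check, using an HMAC over each data segment as the embedded Secret and recomputing a Replica Secret on receipt; the shared key, segment framing and tag length are illustrative assumptions rather than the described hardware design.

    import hmac, hashlib

    KEY = b"shared-device-key"                   # hypothetical shared secret key

    def embed_secret(segment: bytes) -> bytes:
        tag = hmac.new(KEY, segment, hashlib.sha256).digest()
        return segment + tag                     # Secret appended to the segment

    def verify_segment(wire: bytes) -> bool:
        segment, tag = wire[:-32], wire[-32:]
        replica = hmac.new(KEY, segment, hashlib.sha256).digest()
        return hmac.compare_digest(tag, replica) # mismatch => data was inserted or altered

    frame = embed_secret(b"sensor frame 0001")
    print(verify_segment(frame))                          # True
    print(verify_segment(frame[:-32] + b"\x00" * 32))     # False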

IPC Classes

  • H04L 9/08 - Key distribution
  • H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
  • G06T 1/00 - General purpose image data processing
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
  • G06V 40/18 - Eye characteristics, e.g. of the iris
  • G06V 40/40 - Spoof detection, e.g. liveness detection
  • G06F 21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
  • G06F 21/64 - Protecting data integrity, e.g. using checksums, certificates or signatures

20.

Configuring access for internet-of-things and limited user interface devices

      
Application Number 17243236
Grant Number 11863556
Status In Force
Filing Date 2021-04-28
First Publication Date 2021-10-21
Grant Date 2024-01-02
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Corcoran, Peter
  • Raducan, Ilariu
  • Bigioi, Petronel

Abstract

A method operable by a computing device for configuring access for a limited user interface (UI) device to a network service via a local network access point is disclosed. The method comprises the steps of: obtaining from the limited UI device a device identifier via a first out-of-band channel. The device identifier is provided to the network service via a secure network link. A zero knowledge proof (ZKP) challenge is received from the network service. Configuration information is provided to the limited-UI device via a second out-of-band channel, the configuration information including information sufficient to enable the limited-UI device to connect to the local network access point. The ZKP challenge is provided to the limited-UI device via the second out-of-band channel. A secure channel key is received from the network service indicating a successful response from the limited-UI device to the ZKP challenge; and provided to the limited-UI device enabling the limited-UI device to access the network service.

IPC Classes

  • H04L 29/06 - Communication control; Communication processing characterised by a protocol
  • H04W 12/06 - Authentication
  • H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
  • H04W 4/80 - Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
  • H04W 76/10 - Connection setup
  • H04W 12/50 - Secure pairing of devices
  • H04L 9/40 - Network security protocols
  • H04W 84/12 - WLAN [Wireless Local Area Networks]
  • H04W 4/70 - Services for machine-to-machine communication [M2M] or machine type communication [MTC]
  • H04W 12/65 - Environment-dependent, e.g. using captured environmental data
  • H04W 12/77 - Graphical identity
  • H04W 12/04 - Key management, e.g. using generic bootstrapping architecture [GBA]

21.

Method for stabilizing a camera frame of a video sequence

      
Application Number 17233546
Grant Number 11531211
Status In Force
Filing Date 2021-04-19
First Publication Date 2021-09-30
Grant Date 2022-12-20
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • O'Sullivan, Brian
  • Stec, Piotr

Abstract

A method for stabilizing a video sequence comprises: obtaining an indication of camera movement from acquisition of a previous camera frame to acquisition of a current camera frame; determining an orientation for the camera at a time of acquiring the current camera frame; and determining a candidate orientation for a crop frame for the current camera frame by adjusting an orientation of a crop frame associated with the previous camera frame according to the determined orientation. A boundary of one of the camera frame or crop frame is traversed to determine if a specific point on the boundary of the crop frame exceeds a boundary of the camera frame. If so, a rotation of the specific point location which would bring the specific point location onto the boundary of the crop frame is determined and the candidate crop frame orientation updated accordingly before the crop frame is displayed.

IPC Classes

  • G02B 27/64 - Imaging systems using optical elements for stabilisation of the lateral and angular position of the image
  • H04N 5/232 - Devices for controlling television cameras, e.g. remote control

22.

Video super resolution method

      
Application Number 16803062
Grant Number 11200644
Status In Force
Filing Date 2020-02-27
First Publication Date 2021-09-02
Grant Date 2021-12-14
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Ryan, Cian
  • Blythman, Richard

Abstract

A video super resolution method comprises successively executing instances of a first plurality of layers (SISR) of a neural network for generating a first image (St) at a higher resolution than an input image frame (Xt); successively executing a second plurality of layers (VSR) of the neural network for generating a second image (Vt) at the higher resolution, at least one of the second plurality of layers generating intermediate output information (Ht), the second plurality of layers taking into account an output image (Yt−1) at the higher resolution generated by a previous instance of the network from a previous input image frame (Xt−1) and intermediate output information (Ht−1) generated by the second plurality of layers of the previous instance, and executing a third plurality of layers for combining the first (St) and second (Vt) images to produce an output image (Yt) for the instance of the network.

IPC Classes

  • G06T 7/00 - Image analysis
  • G06T 5/00 - Image enhancement or restoration
  • G06T 5/10 - Image enhancement or restoration using non-spatial domain filtering
  • H04N 5/232 - Devices for controlling television cameras, e.g. remote control
  • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction

23.

Multi-perspective eye acquisition

      
Application Number 16797294
Grant Number 11300784
Status In Force
Filing Date 2020-02-21
First Publication Date 2021-08-26
Grant Date 2022-04-12
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Rotariu, Cosmin Nicolae
  • Andorko, Istvan

Abstract

A device, such as a head-mounted device (HMD), may include a frame and a plurality of mirrors coupled to an interior portion of the frame. An imaging device may be coupled to the frame at a position to capture images of an eye of the wearer reflected from the mirrors. The HMD may also include a mirror angle adjustment device to adjust an angle of one or more of the mirrors relative to the imaging device so that the mirror(s) reflect the eye of the wearer to the imaging device.

IPC Classes

  • G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
  • G02B 27/09 - Beam shaping, e.g. changing the cross-sectional area, not otherwise provided for
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G02B 27/01 - Head-up displays

24.

Hardware-implemented argmax layer

      
Application Number 16778525
Grant Number 11429771
Status In Force
Filing Date 2020-01-31
First Publication Date 2021-08-05
Grant Date 2022-08-30
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor Pop, Tudor Mihail

Abstract

A hardware acceleration module may generate a channel-wise argmax map using a predefined set of hardware-implemented operations. In some examples, a hardware acceleration module may receive a set of feature maps for different image channels. The hardware acceleration module may execute a sequence of hardware operations, including a portion(s) of hardware for executing a convolution, rectified linear unit (ReLU) activation, and/or layer concatenation, to determine a maximum channel feature value and/or argument maxima (argmax) value for a set of associated locations within the feature maps. An argmax map may be generated based at least in part on the argument maximum for a set of associated locations.
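
A minimal numpy sketch of building a channel-wise argmax map from elementwise compare/select operations, the kind of primitive an accelerator exposes; the decomposition into convolution, ReLU and concatenation stages from the abstract is not reproduced here.

    import numpy as np

    def channelwise_argmax(feature_maps):
        # feature_maps: array of shape (C, H, W), one feature map per channel.
        best_val = feature_maps[0].copy()
        best_idx = np.zeros(feature_maps.shape[1:], dtype=np.int32)
        for ch in range(1, feature_maps.shape[0]):
            better = feature_maps[ch] > best_val             # elementwise compare
            best_val = np.where(better, feature_maps[ch], best_val)
            best_idx = np.where(better, ch, best_idx)
        return best_idx, best_val

    maps = np.random.default_rng(1).random((4, 3, 3))
    idx, _ = channelwise_argmax(maps)
    assert np.array_equal(idx, maps.argmax(axis=0))          # sanity check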

IPC Classes

  • G06F 30/331 - Design verification, e.g. functional simulation or model checking using simulation with hardware acceleration, e.g. by using field programmable gate array [FPGA] or emulation
  • G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
  • G06F 7/483 - Computations with numbers represented by a non-linear combination of denominational numbers, e.g. rational numbers, logarithmic number system or floating-point numbers
  • G06K 9/62 - Methods or arrangements for recognition using electronic means

25.

Iris recognition workflow

      
Application Number 17135654
Grant Number 11678180
Status In Force
Filing Date 2020-12-28
First Publication Date 2021-07-01
Grant Date 2023-06-13
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Andorko, Istvan
  • Drimbarean, Alexandru

Abstract

A method includes determining, by a portable device, image capturing conditions based on an analysis of contents of a first digital image of a group of digital images captured by an image capturing device, and determining, by the portable device, whether the image capturing conditions determined for the first digital image indicate outdoor image capturing conditions. Based at least in part on a determination that the image capturing conditions determined for the first digital image indicate outdoor image capturing conditions, the method includes displaying a first indication that the first digital image must be captured in indoor image capturing conditions for an iris code enrollment process, and displaying a second indication of resumption of the iris code enrollment process when the image capturing conditions, determined for the first digital image, indicate the indoor image capturing conditions.

IPC Classes

  • H04W 12/06 - Authentication
  • G06V 40/19 - Sensors therefor
  • G06V 40/50 - Maintenance of biometric data or enrolment thereof
  • G06V 40/18 - Eye characteristics, e.g. of the iris
  • G06V 20/00 - Scenes; Scene-specific elements
  • G06T 7/11 - Region-based segmentation
  • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
  • G06T 5/00 - Image enhancement or restoration
  • G06F 21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
  • H04L 9/40 - Network security protocols
  • G06V 10/98 - Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns

26.

Configuring manifest files including redirect uniform resource locators

      
Application Number 17161496
Grant Number 11936708
Status In Force
Filing Date 2021-01-28
First Publication Date 2021-05-27
Grant Date 2024-03-19
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor Gordon, Michael

Abstract

Techniques for serving a manifest file of an adaptive streaming video include receiving a request for the manifest file from a user device. The video is encoded at different reference bitrates and each encoded reference bitrate is divided into segments to generate video segment files. The manifest file includes an ordered list of universal resource locators (URLs) that reference a set of video segment files encoded at a particular reference bitrate. A source manifest file that indicates the set of video segment files is identified based on the request. An issued manifest file that includes a first URL and a second URL is generated based on the source manifest file. The first URL references a first domain and the second URL references a second domain that is different from the first domain. The issued manifest file is transmitted to the user device as a response to the request.

IPC Classes

  • H04L 29/06 - Communication control; Communication processing characterised by a protocol
  • H04L 65/75 - Media network packet handling
  • H04L 65/80 - Responding to QoS
  • H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
  • H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
  • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments

27.

Image acquisition system for off-axis eye images

      
Application Number 17163731
Grant Number 11544966
Status In Force
Filing Date 2021-02-01
First Publication Date 2021-05-20
Grant Date 2023-01-03
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • O'Sullivan, Brian
  • Mccullagh, Barry
  • Carateev, Serghei
  • Andorko, Istvan
  • Bigioi, Petronel

Abstract

An image acquisition system determines first and second sets of points defining an iris-pupil boundary and an iris-sclera boundary in an acquired image; determines respective ellipses fitting the first and second sets of points; determines a transformation to transform one of the ellipses into a circle on a corresponding plane; using the determined transformation, transforms the selected ellipse into a circle on the plane; using the determined transformation, transforms the other ellipse into a transformed ellipse on the plane; determines a plurality of ellipses on the plane for defining an iris grid, by interpolating a plurality of ellipses between the circle and the transformed ellipse; moves the determined grid ellipses onto the iris in the image using an inverse transformation of the determined transformation; and extracts an iris texture by unwrapping the iris and interpolating image pixel values at each grid point defined along each of the grid ellipses.

IPC Classes

  • G06V 40/18 - Eye characteristics, e.g. of the iris
  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • H04N 5/225 - Television cameras

28.

Handheld computing device

      
Application Number 16672766
Grant Number 11627262
Status In Force
Filing Date 2019-11-04
First Publication Date 2021-05-06
Grant Date 2023-04-11
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Stec, Piotr
  • Bigioi, Petronel
  • Andorko, Istvan

Abstract

A handheld computing device comprises a display comprising an array of pixels illuminated by a plurality of visible light sources, and a plurality of infra-red light sources interleaved between the visible light sources, the IR light sources being actuable to emit diffuse IR light with a first intensity. A camera has an image sensor comprising an array of pixels responsive to infra-red light and a lens assembly with an optical axis extending from the image sensor through the surface of the display. A dedicated illumination source is located outside the display and is actuable to emit infra-red light with a second greater intensity. A processor is configured to switch between an iris region processing mode in which a subject is illuminated at least by the dedicated light source and a face region processing mode in which a subject is illuminated by the plurality of IR light sources.

IPC Classes

  • H04N 5/33 - Transforming infrared radiation
  • H04N 5/225 - Television cameras
  • G06F 21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
  • G06V 40/19 - Sensors therefor
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions

29.

Event-sensor camera

      
Application Number 16674378
Grant Number 11303811
Status In Force
Filing Date 2019-11-05
First Publication Date 2021-05-06
Grant Date 2022-04-12
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor Stec, Piotr

Abstract

A camera comprises a lens assembly coupled to an event-sensor, the lens assembly being configured to focus a light field onto a surface of the event-sensor, the event-sensor surface comprising a plurality of light sensitive-pixels, each of which cause an event to be generated when there is a change in light intensity greater than a threshold amount incident on the pixel. The camera further includes an actuator which can be triggered to cause a change in the light field incident on the surface of the event-sensor and to generate a set of events from a sub-set of pixels distributed across the surface of the event-sensor.

IPC Classes

  • H04N 5/345 - Extracting pixel data from an image sensor by controlling scanning circuits, e.g. by modifying the number of pixels having been sampled or to be sampled by partially reading an SSIS array
  • H04N 5/232 - Devices for controlling television cameras, e.g. remote control
  • G02B 27/64 - Imaging systems using optical elements for stabilisation of the lateral and angular position of the image
  • H04N 5/225 - Television cameras

30.

Method for stabilizing a camera frame of a video sequence

      
Application Number 16575748
Grant Number 10983363
Status In Force
Filing Date 2019-09-19
First Publication Date 2021-03-25
Grant Date 2021-04-20
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • O'Sullivan, Brian
  • Stec, Piotr

Abstract

A method for stabilizing a video sequence comprises: obtaining an indication of camera movement from acquisition of a previous camera frame to acquisition of a current camera frame; determining an orientation for the camera at a time of acquiring the current camera frame; and determining a candidate orientation for a crop frame for the current camera frame by adjusting an orientation of a crop frame associated with the previous camera frame according to the determined orientation. A boundary of one of the camera frame or crop frame is traversed to determine if a specific point on the boundary of the crop frame exceeds a boundary of the camera frame. If so, a rotation of the specific point location which would bring the specific point location onto the boundary of the crop frame is determined and the candidate crop frame orientation updated accordingly before the crop frame is displayed.

IPC Classes

  • G02B 27/64 - Imaging systems using optical elements for stabilisation of the lateral and angular position of the image
  • H04N 5/232 - Devices for controlling television cameras, e.g. remote control

31.

Method of image processing using a neural network

      
Application Number 16544238
Grant Number 11302009
Status In Force
Filing Date 2019-08-19
First Publication Date 2021-02-25
Grant Date 2022-04-12
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Vranceanu, Ruxandra
  • Pop, Tudor Mihail
  • Parvan-Cernatescu, Oana
  • Mangapuram, Sathish

Abstract

A method of generating landmark locations for an image crop comprises: processing the crop through an encoder-decoder to provide a plurality of N output maps of comparable spatial resolution to the crop, each output map corresponding to a respective landmark of an object appearing in the image crop; processing an output map from the encoder through a plurality of feed forward layers to provide a feature vector comprising N elements, each element including an (x,y) location for a respective landmark. Any landmark locations from the feature vector having an x or a y location outside a range for a respective row or column of the crop are selected for a final set of landmark locations, with remaining landmark locations tending to be selected from the N (x,y) landmark locations from the plurality of N output maps.
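
A minimal sketch of the selection rule in this abstract, assuming the decoder output maps and the regressed feature vector are already available as numpy arrays; the crop bounds test is simplified.

    import numpy as np

    def select_landmarks(heatmaps, regressed, crop_w, crop_h):
        # heatmaps:  (N, H, W) output maps, one per landmark.
        # regressed: (N, 2) array of (x, y) locations from the feed-forward head.
        final = []
        for hm, (x, y) in zip(heatmaps, regressed):
            if 0 <= x < crop_w and 0 <= y < crop_h:
                # In-range landmarks tend to come from the output-map peak.
                py, px = np.unravel_index(hm.argmax(), hm.shape)
                final.append((float(px), float(py)))
            else:
                # Out-of-range landmarks keep the regressed location.
                final.append((float(x), float(y)))
        return final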

IPC Classes

  • G06T 7/11 - Region-based segmentation
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06F 17/15 - Correlation function computation

32.

Automatic exposure module for an image acquisition system

      
Application Number 16915987
Grant Number 11375133
Status In Force
Filing Date 2020-06-29
First Publication Date 2020-12-24
Grant Date 2022-06-28
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor Andorko, Istvan

Abstract

A method for automatically determining exposure settings for an image acquisition system comprises maintaining a plurality of look-up tables, each look-up table being associated with a corresponding light condition and storing image exposure settings associated with corresponding distance values between a subject and the image acquisition system. An image of a subject is acquired from a camera module; and a light condition occurring during the acquisition is determined based on the acquired image. A distance between the subject and the camera module during the acquisition is calculated. The method then determines whether a correction of the image exposure settings for the camera module is required based on the calculated distance and the determined light condition; and responsive to correction being required: selects image exposure settings corresponding to the calculated distance from the look-up table corresponding to the determined light condition; and acquires a new image using the selected image exposure settings.
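
A minimal sketch of the per-light-condition lookup tables this abstract describes; the condition names, distance breakpoints and exposure/gain values are invented for illustration.

    import bisect

    # Hypothetical tables: distance breakpoints (cm) -> (exposure_ms, gain).
    EXPOSURE_LUTS = {
        "indoor":  ([20, 40, 80], [(33.0, 4.0), (25.0, 3.0), (16.0, 2.0)]),
        "outdoor": ([20, 40, 80], [(8.0, 1.5),  (6.0, 1.2),  (4.0, 1.0)]),
    }

    def exposure_settings(light_condition, subject_distance_cm):
        distances, settings = EXPOSURE_LUTS[light_condition]
        i = min(bisect.bisect_left(distances, subject_distance_cm), len(settings) - 1)
        return settings[i]

    print(exposure_settings("indoor", 35))       # -> (25.0, 3.0)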

IPC Classes

  • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
  • H04N 5/235 - Circuitry for compensating for variation in the brightness of the object
  • G01S 3/00 - Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
  • H04N 5/232 - Devices for controlling television cameras, e.g. remote control
  • H04N 5/243 - Circuitry for compensating for variation in the brightness of the object by influencing the picture signal
  • G06V 40/19 - Sensors therefor
  • G08B 13/196 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras

33.

Method for determining bias in an inertial measurement unit of an image acquisition device

      
Application Number 17000698
Grant Number 11223764
Status In Force
Filing Date 2020-08-24
First Publication Date 2020-12-10
Grant Date 2022-01-11
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor Stec, Piotr

Abstract

e is used to update a bias component value.

IPC Classes

  • H04N 5/232 - Devices for controlling television cameras, e.g. remote control
  • G01C 25/00 - Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
  • G06T 7/20 - Analysis of motion

34.

Image processing system

      
Application Number 16995451
Grant Number 11532148
Status In Force
Filing Date 2020-08-17
First Publication Date 2020-12-03
Grant Date 2022-12-20
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Nicoara, Nicolae
  • Raceala, Cristina
  • Zaharia, Corneliu
  • Fulop, Szabolcs
  • Iovita, Oana

Abstract

An image processing system comprises a template matching engine (TME). The TME reads an image from the memory; and as each pixel of the image is being read, calculates a respective feature value of a plurality of feature maps as a function of the pixel value. A pre-filter is responsive to a current pixel location comprising a node within a limited detector cascade to be applied to a window within the image to: compare a feature value from a selected one of the plurality of feature maps corresponding to the pixel location to a threshold value; and responsive to pixels for all nodes within a limited detector cascade to be applied to the window having been read, determine a score for the window. A classifier, responsive to the pre-filter indicating that a score for a window is below a window threshold, does not apply a longer detector cascade to the window before indicating that the window does not comprise an object to be detected.
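
A toy rendering of the two-stage idea, in which a short cascade cheaply scores a window before the longer detector is applied; the node structure and thresholds below are invented for illustration and are not the hardware design described above:

def classify_window(window_features, limited_cascade, full_cascade, window_threshold):
    """Score the window with the limited cascade first; only run the longer
    detector when that score clears the window threshold."""
    score = sum(weight for feature_id, node_threshold, weight in limited_cascade
                if window_features[feature_id] > node_threshold)
    if score < window_threshold:
        return False                      # cheap rejection, long cascade skipped
    return full_cascade(window_features)  # only promising windows reach the full detector

# limited_cascade = [(0, 0.3, 1.0), (5, 0.1, 0.5)]  # (feature index, node threshold, weight)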

IPC Classes  ?

  • G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video featuresCoarse-fine approaches, e.g. multi-scale approachesImage or video pattern matchingProximity measures in feature spaces using context analysisSelection of dictionaries
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06V 10/94 - Hardware or software architectures specially adapted for image or video understanding
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions

35.

Image acquisition system for off-axis eye images

      
Application Number 16410559
Grant Number 10909363
Status In Force
Filing Date 2019-05-13
First Publication Date 2020-11-19
Grant Date 2021-02-02
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • O'Sullivan, Brian
  • Mccullagh, Barry
  • Carateev, Serghei
  • Andorko, Istvan
  • Bigioi, Petronel

Abstract

An image acquisition system determines first and second sets of points defining an iris-pupil boundary and an iris-sclera boundary in an acquired image; determines respective ellipses fitting the first and second sets of points; determines a transformation to transform one of the ellipses into a circle on a corresponding plane; using the determined transformation, transforms the selected ellipse into a circle on the plane; using the determined transformation, transforms the other ellipse into a transformed ellipse on the plane; determines a plurality of ellipses on the plane for defining an iris grid, by interpolating a plurality of ellipses between the circle and the transformed ellipse; moves the determined grid ellipses onto the iris in the image using an inverse transformation of the determined transformation; and extracts an iris texture by unwrapping the iris and interpolating image pixel values at each grid point defined along each of the grid ellipses.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • H04N 5/225 - Television cameras

36.

Method of providing a sharpness measure for an image

      
Application Number 16875656
Grant Number 11244429
Status In Force
Filing Date 2020-05-15
First Publication Date 2020-09-17
Grant Date 2022-02-08
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Nanu, Florin
  • Bobei, Adrian
  • Malaescu, Alexandru
  • Clapon, Cosmin

Abstract

A method of providing a sharpness measure for an image comprises detecting an object region within an image; obtaining meta-data for the image; and scaling the chosen object region to a fixed size. A gradient map is calculated for the scaled object region and compared against a threshold determined for the image to provide a filtered gradient map of values exceeding the threshold. The threshold for the image is a function of at least: a contrast level for the detected object region, a distance to the subject and an ISO/gain used for image acquisition. A sharpness measure for the object region is determined as a function of the filtered gradient map values, the sharpness measure being proportional to the filtered gradient map values.
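
A hedged numpy sketch of such a measure; the nearest-neighbour rescaling and the weights in the threshold function are made-up examples, not the values used by the method:

import numpy as np

def sharpness_measure(region, contrast, distance_m, iso, size=128):
    """Gradient magnitudes above an image-dependent threshold, averaged over the
    scaled object region (illustrative form only)."""
    ys = np.linspace(0, region.shape[0] - 1, size).astype(int)
    xs = np.linspace(0, region.shape[1] - 1, size).astype(int)
    scaled = region[np.ix_(ys, xs)].astype(float)       # scale to a fixed size
    gy, gx = np.gradient(scaled)
    grad = np.hypot(gx, gy)                              # gradient map
    # Threshold as a function of contrast, subject distance and ISO/gain (assumed weights).
    threshold = 0.05 * contrast + 0.02 * distance_m + 0.0001 * iso
    filtered = grad[grad > threshold]                    # filtered gradient map
    return float(filtered.mean()) if filtered.size else 0.0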

IPC Classes  ?

  • G06T 5/00 - Image enhancement or restoration
  • G06T 7/00 - Image analysis
  • G06T 7/13 - Edge detection
  • G06T 7/42 - Analysis of texture based on statistical description of texture using transform domain methods
  • H04N 5/232 - Devices for controlling television cameras, e.g. remote control
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/44 - Smoothing or thinning of the pattern
  • G06K 9/46 - Extraction of features or characteristics of the image

37.

Biometrics-enabled portable storage device

      
Application Number 16800968
Grant Number 11409854
Status In Force
Filing Date 2020-02-25
First Publication Date 2020-08-27
Grant Date 2022-08-09
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Andorko, Istvan
  • Bigioi, Petronel
  • Ballesty, Darragh

Abstract

A biometrics-enabled portable storage device may store and secure data via biometrics related to a user's iris. The biometrics-enabled portable storage device may include a camera that captures image data related a user's iris and stores the image data to enroll the user for use of the biometrics-enabled portable storage device. To unlock the data, a user aligns the camera with their iris using a hot mirror and the camera captures iris data for comparison with the iris image data stored during enrollment. If the two sets of image data match, the biometrics-enabled portable storage device may be unlocked and the user may access data stored on the biometrics-enabled portable storage device. If the two sets of image data do not match, then the biometrics-enabled portable storage device remains locked.

IPC Classes  ?

  • G06F 21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
  • G06V 10/143 - Sensing or illuminating at different wavelengths
  • G06V 40/50 - Maintenance of biometric data or enrolment thereof
  • G06V 40/18 - Eye characteristics, e.g. of the iris

38.

Method and system for tracking an object

      
Application Number 16746430
Grant Number 11379719
Status In Force
Filing Date 2020-01-17
First Publication Date 2020-07-16
Grant Date 2022-07-05
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Dinu, Dragos
  • Munteanu, Mihai Constantin
  • Caliman, Alexandru

Abstract

A method of tracking an object across a stream of images comprises determining a region of interest (ROI) bounding the object in an initial frame of an image stream. A HOG map is provided for the ROI by: dividing the ROI into an array of M×N cells, each cell comprising a plurality of image pixels; and determining a HOG for each of the cells. The HOG map is stored as indicative of the features of the object. Subsequent frames are acquired from the stream of images. The frames are scanned ROI by ROI to identify a candidate ROI having a HOG map best matching the stored HOG map features. If the match meets a threshold, the stored HOG map indicative of the features of the object is updated according to the HOG map for the best matching candidate ROI.
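
The matching step might look roughly like this numpy sketch; the cell grid, bin count, stride and threshold are arbitrary choices, and a real implementation would also update the stored map after a good match:

import numpy as np

def hog_map(roi, cells=(4, 4), bins=8):
    """Toy HOG map: one orientation histogram per cell of an M x N grid."""
    gy, gx = np.gradient(roi.astype(float))
    mag, ang = np.hypot(gx, gy), np.arctan2(gy, gx) % np.pi
    ch, cw = roi.shape[0] // cells[0], roi.shape[1] // cells[1]
    hogs = np.zeros(cells + (bins,))
    for i in range(cells[0]):
        for j in range(cells[1]):
            m = mag[i*ch:(i+1)*ch, j*cw:(j+1)*cw].ravel()
            a = ang[i*ch:(i+1)*ch, j*cw:(j+1)*cw].ravel()
            hogs[i, j], _ = np.histogram(a, bins=bins, range=(0, np.pi), weights=m)
    return hogs / (np.linalg.norm(hogs) + 1e-9)

def track(stored_hog, frame, roi_size, match_threshold=0.7, stride=8):
    """Scan the frame ROI by ROI and return the best-matching candidate position."""
    best_score, best_xy = -1.0, None
    h, w = roi_size
    for y in range(0, frame.shape[0] - h, stride):
        for x in range(0, frame.shape[1] - w, stride):
            score = float(np.sum(stored_hog * hog_map(frame[y:y+h, x:x+w])))
            if score > best_score:
                best_score, best_xy = score, (x, y)
    return (best_xy, best_score) if best_score >= match_threshold else (None, best_score)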

IPC Classes  ?

  • G06N 3/08 - Learning methods
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
  • G06T 7/269 - Analysis of motion using gradient-based methods
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersectionsConnectivity analysis, e.g. of connected components

39.

Depth sensing camera system

      
Application Number 16389895
Grant Number 11902496
Status In Force
Filing Date 2019-04-19
First Publication Date 2020-06-11
Grant Date 2024-02-13
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Bigioi, Petronel
  • Stec, Piotr

Abstract

A depth sensing camera system comprises one or more fisheye lenses and infrared and/or near-infrared image sensors. In some examples, the image sensors may generate output signals based at least in part on receiving radiation via the fisheye lenses. A depth measurement may be calculated based at least in part on the output signals. For example, these output signals may be provided as input to a depth model, which may determine the depth measurement. In some examples, such a depth model may be integrated into an application-specific integrated circuit and/or may be operated by an application processor.

IPC Classes  ?

  • H04N 13/271 - Image signal generators wherein the generated image signals comprise depth maps or disparity maps
  • H04N 13/254 - Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
  • G06T 7/55 - Depth or shape recovery from multiple images
  • H04N 5/33 - Transforming infrared radiation

40.

Lens system for a camera module

      
Application Number 16368540
Grant Number 10656391
Status In Force
Filing Date 2019-04-02
First Publication Date 2020-05-19
Grant Date 2020-05-19
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor Cognard, Timothee

Abstract

−2.

IPC Classes  ?

  • G02B 13/00 - Optical objectives specially designed for the purposes specified below
  • H04N 5/225 - Television cameras
  • G02B 7/02 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses
  • G02B 9/64 - Optical objectives characterised both by the number of the components and their arrangements according to their sign, i.e. + or – having more than six components

41.

Method for correcting an acquired image

      
Application Number 16724877
Grant Number 11257192
Status In Force
Filing Date 2019-12-23
First Publication Date 2020-05-07
Grant Date 2022-02-22
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor Stec, Piotr

Abstract

G.

IPC Classes  ?

  • G06T 5/00 - Image enhancement or restoration
  • H04N 5/232 - Devices for controlling television cameras, e.g. remote control
  • G06T 7/20 - Analysis of motion

42.

Image acquisition method and apparatus

      
Application Number 16708251
Grant Number 10999526
Status In Force
Filing Date 2019-12-09
First Publication Date 2020-04-23
Grant Date 2021-05-04
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Vranceanu, Ruxandra
  • Condorovici, Razvan G.

Abstract

A system includes an image sensor, an adjustable aperture, and a memory. The memory includes computer executable instructions that, when executed by a processor, cause the system to perform operations including obtaining a first image via the image sensor based at least in part on a first aperture stop of the adjustable aperture, identifying a first pixel of the first image, identifying a second pixel of the first image, determining a second aperture stop of the adjustable aperture based at least in part on the first pixel, determining a third aperture stop of the adjustable aperture based at least in part on the second pixel, obtaining a second image via the image sensor based at least in part on the second aperture stop, and obtaining a third image via the image sensor based at least in part on the third aperture stop.

IPC Classes  ?

  • H04N 5/235 - Circuitry for compensating for variation in the brightness of the object
  • G06T 5/20 - Image enhancement or restoration using local operators
  • G06T 5/40 - Image enhancement or restoration using histogram techniques

43.

Image processing method and system for iris recognition

      
Application Number 16679041
Grant Number 10891479
Status In Force
Filing Date 2019-11-08
First Publication Date 2020-03-05
Grant Date 2021-01-12
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Andorko, Istvan
  • Drimbarean, Alexandru
  • Corcoran, Peter

Abstract

An image processing method for iris recognition of a predetermined subject comprises acquiring, through an image sensor, a probe image illuminated by an infra-red (IR) illumination source, wherein the probe image comprises one or more eye regions and is overexposed until skin portions of the image are saturated. One or more iris regions are identified within the one or more eye regions of said probe image; and the identified iris regions are analysed to detect whether they belong to the predetermined subject.

IPC Classes  ?

  • H04N 5/33 - Transforming infrared radiation
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/20 - Image acquisition
  • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
  • G06K 9/46 - Extraction of features or characteristics of the image
  • G06T 7/11 - Region-based segmentation
  • G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods

44.

Apparatus for processing a neural network

      
Application Number 16542138
Grant Number 11676371
Status In Force
Filing Date 2019-08-15
First Publication Date 2020-02-20
Grant Date 2023-06-13
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor Lovin, Ciprian Alexandru

Abstract

An apparatus for processing a neural network comprises an image memory into which an input image is written tile-by-tile, each tile overlapping a previous tile to a limited extent; a weights memory for storing weight information for a plurality of convolutional layers of a neural network, including at least two pooling layers; and a layer processing engine configured to combine information from the image and weights memories to generate an output map and to write the output map to image memory. The apparatus is configured to store a limited number of values from adjacent a boundary of an output map for a given layer. The layer processing engine is configured to combine the output map values from a previously processed image tile with the information from the image memory and the weights when generating an output map for a layer of the neural network following the given layer.
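
A simplified illustration of the overlapping-tile traversal; the tile width and overlap are example values, and the caching of boundary output-map values between tiles described above is only indicated by a comment:

import numpy as np

def overlapping_tiles(image, tile_w=64, overlap=8):
    """Yield vertical strips of the input, each overlapping the previous one, so a
    layer processed tile-by-tile still sees the pixels a convolution kernel needs
    near the tile boundary."""
    x = 0
    while x < image.shape[1]:
        start = max(0, x - overlap)
        yield start, image[:, start:min(x + tile_w, image.shape[1])]
        x += tile_w

image = np.zeros((128, 200))
for start, tile in overlapping_tiles(image):
    pass  # run the convolutional layers on `tile`, reusing cached boundary outputs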

IPC Classes  ?

  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06V 10/94 - Hardware or software architectures specially adapted for image or video understanding
  • G06N 3/08 - Learning methods
  • G06F 18/21 - Design or setup of recognition systems or techniquesExtraction of features in feature spaceBlind source separation
  • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

45.

Real-time video frame pre-processing hardware

      
Application Number 16570478
Grant Number 10930253
Status In Force
Filing Date 2019-09-13
First Publication Date 2020-01-02
Grant Date 2021-02-23
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Zaharia, Corneliu
  • Bigioi, Petronel
  • Corcoran, Peter

Abstract

A dynamically reconfigurable heterogeneous systolic array is configured to process a first image frame, and to generate image processing primitives from the image frame, and to store the primitives and the corresponding image frame in a memory store. A characteristic of the image frame is determined. Based on the characteristic, the array is reconfigured to process a following image frame.

IPC Classes  ?

  • G06T 1/00 - General purpose image data processing
  • G06T 1/20 - Processor architecturesProcessor configuration, e.g. pipelining
  • G06T 1/60 - Memory management
  • G06T 5/00 - Image enhancement or restoration
  • G09G 5/393 - Arrangements for updating the contents of the bit-mapped memory
  • H04N 5/232 - Devices for controlling television cameras, e.g. remote control
  • H04N 9/67 - Circuits for processing colour signals for matrixing
  • H04N 5/335 - Transforming light or analogous information into electric information using solid-state image sensors [SSIS]
  • G06F 15/80 - Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors

46.

Anonymizing facial expression data with a smart-cam

      
Application Number 16551415
Grant Number 11727426
Status In Force
Filing Date 2019-08-26
First Publication Date 2019-12-19
Grant Date 2023-08-15
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Bigioi, Petronel
  • Corcoran, Peter

Abstract

A method of responding to a criterion-based request for information collected from users meeting the criterion while complying with a user-requested privacy requirement. In one embodiment, a request is received for data comprising facial or audio expressions for users who meet the criterion. A program monitors activities indicative of user attention or user reaction based on face tracking, face detection, face feature detection, eye gaze determination, eye tracking, audio expression determination, or determination of an emotional state. When a user requests a high level of privacy, the timestream data collected for the user is aggregated with timestream data collected for other users into a statistical dataset by processing the timestreams to ensure the high level of privacy in the statistical dataset, which is provided to a content provider without providing data collected for the user who has requested the high level of privacy.

IPC Classes  ?

  • G06Q 30/0217 - Discounts or incentives, e.g. coupons or rebates involving input on products or services in exchange for incentives or rewards
  • G06Q 30/0201 - Market modellingMarket analysisCollecting market data

47.

Systems and methods for conditional generative models

      
Application Number 16194211
Grant Number 11797864
Status In Force
Filing Date 2018-11-16
First Publication Date 2019-12-19
Grant Date 2023-10-24
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Bazrafkan, Shabab
  • Corcoran, Peter

Abstract

Systems and methods for training a conditional generator model are described. Methods receive a sample, and determine a discriminator loss for the received sample. The discriminator loss is based on an ability to determine whether the sample is generated by the conditional generator model or is a ground truth sample. The method determines a secondary loss for the generated sample and updates the conditional generator model based on an aggregate of the discriminator loss and the secondary loss.
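
As a hedged sketch of the aggregate objective, using the common non-saturating adversarial term plus a weighted secondary term (the weights and the choice of secondary loss are assumptions, not taken from the patent):

import math

def generator_loss(disc_logit_on_generated, secondary_loss, adv_weight=1.0, sec_weight=1.0):
    """Aggregate of an adversarial term (push the discriminator towards labelling
    the generated sample as real) and a secondary term such as an L1 distance to
    the ground truth sample."""
    adversarial = -math.log(1.0 / (1.0 + math.exp(-disc_logit_on_generated)))  # -log(sigmoid(logit))
    return adv_weight * adversarial + sec_weight * secondary_loss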

IPC Classes  ?

  • G06N 20/00 - Machine learning
  • G06N 3/094 - Adversarial learning
  • G06N 7/00 - Computing arrangements based on specific mathematical models
  • G06N 3/08 - Learning methods
  • G06N 3/084 - Backpropagation, e.g. using gradient descent
  • G06N 3/088 - Non-supervised learning, e.g. competitive learning
  • G06N 3/045 - Combinations of networks
  • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
  • G06V 10/774 - Generating sets of training patternsBootstrap methods, e.g. bagging or boosting
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
  • G06F 18/214 - Generating training patternsBootstrap methods, e.g. bagging or boosting
  • G06F 18/2431 - Multiple classes

48.

Neural network image processing apparatus

      
Application Number 16005610
Grant Number 10684681
Status In Force
Filing Date 2018-06-11
First Publication Date 2019-12-12
Grant Date 2020-06-16
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Lemley, Joseph
  • Dutu, Liviu-Cristian
  • Mathe, Stefan
  • Dumitru-Guzu, Madalin

Abstract

A neural network image processing apparatus arranged to acquire images from an image sensor and to: identify a ROI containing a face region in an image; determine a plurality of facial landmarks in the face region; use the facial landmarks to transform the face region within the ROI into a face region having a given pose; and use transformed landmarks within the transformed face region to identify a pair of eye regions within the transformed face region. Each identified eye region is fed to a respective first and second convolutional neural network, each network configured to produce a respective feature vector. Each feature vector is fed to respective eyelid opening level neural networks to obtain respective measures of eyelid opening for each eye region. The feature vectors are combined and fed to a gaze angle neural network to generate gaze yaw and pitch values substantially simultaneously with the eyelid opening values.
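
The dataflow can be sketched with stand-in numpy projections in place of the trained networks; every function below is a placeholder for a learned module and is not the patented architecture:

import numpy as np

rng = np.random.default_rng(0)

def conv_branch(eye_crop, dim=32):
    # Placeholder for a convolutional branch producing a feature vector per eye.
    w = rng.standard_normal((eye_crop.size, dim)) * 0.01
    return eye_crop.ravel() @ w

def eyelid_head(features):
    # Placeholder per-eye eyelid-opening regressor.
    return float(features @ (rng.standard_normal(features.size) * 0.01))

def gaze_head(combined):
    # Placeholder gaze regressor producing (yaw, pitch) from both eye features.
    return combined @ (rng.standard_normal((combined.size, 2)) * 0.01)

left_eye, right_eye = rng.random((24, 24)), rng.random((24, 24))
f_left, f_right = conv_branch(left_eye), conv_branch(right_eye)
left_opening, right_opening = eyelid_head(f_left), eyelid_head(f_right)
yaw, pitch = gaze_head(np.concatenate([f_left, f_right]))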

IPC Classes  ?

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06K 9/66 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix references adjustable by an adaptive method, e.g. learning
  • G06N 3/08 - Learning methods

49.

Method and system for tracking an object

      
Application Number 16532059
Grant Number 10540586
Status In Force
Filing Date 2019-08-05
First Publication Date 2019-11-21
Grant Date 2020-01-21
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Dinu, Dragos
  • Munteanu, Mihai Constantin
  • Caliman, Alexandru

Abstract

A method of tracking an object across a stream of images comprises determining a region of interest (ROI) bounding the object in an initial frame of an image stream. A HOG map is provided for the ROI by: dividing the ROI into an array of M×N cells, each cell comprising a plurality of image pixels; and determining a HOG for each of the cells. The HOG map is stored as indicative of the features of the object. Subsequent frames are acquired from the stream of images. The frames are scanned ROI by ROI to identify a candidate ROI having a HOG map best matching the stored HOG map features. If the match meets a threshold, the stored HOG map indicative of the features of the object is updated according to the HOG map for the best matching candidate ROI.

IPC Classes  ?

  • G06N 3/08 - Learning methods
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06T 7/269 - Analysis of motion using gradient-based methods
  • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
  • G06K 9/46 - Extraction of features or characteristics of the image

50.

Systems and methods for providing depth map information

      
Application Number 16508023
Grant Number 10839535
Status In Force
Filing Date 2019-07-10
First Publication Date 2019-10-31
Grant Date 2020-11-17
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Javidnia, Hossein
  • Corcoran, Peter

Abstract

A method for providing depth map information based on image data descriptive of a scene. In one embodiment, after generating an initial sequence of disparity map data, a smoothing operation or an interpolation is performed to remove artifacts introduced in the disparity map data as a result of segmenting the image data into superpixels.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06T 7/50 - Depth or shape recovery
  • G06T 7/10 - SegmentationEdge detection
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06T 5/00 - Image enhancement or restoration
  • G06T 5/20 - Image enhancement or restoration using local operators
  • G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting

51.

Method for compensating for off-axis tilting of a lens

      
Application Number 16503034
Grant Number 10701293
Status In Force
Filing Date 2019-07-03
First Publication Date 2019-10-24
Grant Date 2020-06-30
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Stec, Piotr
  • O'Sullivan, Brian

Abstract

A method for compensating for off-axis tilting of a lens relative to an image sensor in an image acquisition device comprises acquiring a set of calibrated parameters y indicate a coordinate of a pixel in an acquired image. Image information is mapped from the acquired image to a lens tilt compensated image according to the formulae: where s comprises a scale factor given by y indicate the location of a pixel in the lens tilt compensated image.

IPC Classes  ?

  • H04N 5/357 - Noise processing, e.g. detecting, correcting, reducing or removing noise
  • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

52.

Facial features tracker with advanced training for natural rendering of human faces in real-time

      
Application Number 15912946
Grant Number 10706577
Status In Force
Filing Date 2018-03-06
First Publication Date 2019-09-12
Grant Date 2020-07-07
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Ciuc, Mihai
  • Petrescu, Stefan
  • Haller, Emanuela
  • Oprea, Florin
  • Nicolaescu, Alexandru
  • Nanu, Florin
  • Palade, Iulian

Abstract

Tracking units for facial features with advanced training for natural rendering of human faces in real-time are provided. An example device receives a video stream, and upon detecting a visual face, selects a 3D model from a comprehensive set of head orientation classes. The device determines modifications to the selected 3D model to describe the face, then projects a 2D model of tracking points of facial features based on the 3D model, and controls, actuates, or animates hardware based on the facial features tracking points. The device can switch among an example comprehensive set of 35 different head orientation classes for each video frame, based on suggestions computed from a previous video frame or from yaw and pitch angles of the visual head orientation. Each class of the comprehensive set is trained separately based on a respective collection of automatically marked images for that head orientation class.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
  • G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings

53.

Method of providing a sharpness measure for an image

      
Application Number 16426243
Grant Number 10657628
Status In Force
Filing Date 2019-05-30
First Publication Date 2019-09-12
Grant Date 2020-05-19
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Nanu, Florin
  • Bobei, Adrian
  • Malaescu, Alexandru
  • Clapon, Cosmin

Abstract

A method of providing a sharpness measure for an image comprises detecting an object region within an image; obtaining meta-data for the image; and scaling the chosen object region to a fixed size. A gradient map is calculated for the scaled object region and compared against a threshold determined for the image to provide a filtered gradient map of values exceeding the threshold. The threshold for the image is a function of at least: a contrast level for the detected object region, a distance to the subject and an ISO/gain used for image acquisition. A sharpness measure for the object region is determined as a function of the filtered gradient map values, the sharpness measure being proportional to the filtered gradient map values.

IPC Classes  ?

  • G06T 5/00 - Image enhancement or restoration
  • G06T 7/00 - Image analysis
  • G06T 7/13 - Edge detection
  • G06T 7/42 - Analysis of texture based on statistical description of texture using transform domain methods
  • H04N 5/232 - Devices for controlling television cameras, e.g. remote control
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/44 - Smoothing or thinning of the pattern
  • G06K 9/46 - Extraction of features or characteristics of the image

54.

Method and apparatus for motion estimation

      
Application Number 16298657
Grant Number 10587806
Status In Force
Filing Date 2019-03-11
First Publication Date 2019-09-05
Grant Date 2020-03-10
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Bigioi, Petronel
  • Corcoran, Peter
  • Stec, Piotr

Abstract

A method of estimating motion between a pair of image frames of a given scene comprises calculating respective integral images for each of the image frames and selecting at least one corresponding region of interest within each frame. For each region of interest, an integral image profile from each integral image is calculated, each profile comprising an array of elements, each element comprising a sum of pixel intensities from successive swaths of the region of interest for the frame. Integral image profiles are correlated to determine a relative displacement of the region of interest between the pair of frames. Each region of interest is divided into a plurality of further regions of interest before repeating until a required hierarchy of estimated motion for successively divided regions of interest is provided.
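
A compact numpy illustration of profile correlation for one region of interest; the search range, the normalisation and the use of plain swath sums rather than integral-image differences are simplifications:

import numpy as np

def integral_profile(roi, axis):
    """Sum of pixel intensities for each column swath (axis=0) or row swath (axis=1)."""
    return roi.astype(float).sum(axis=axis)

def estimate_shift(profile_a, profile_b, max_shift=32):
    """Correlate two profiles over a range of displacements and return the
    displacement with the highest normalised correlation."""
    best, best_shift = -np.inf, 0
    for s in range(-max_shift, max_shift + 1):
        a = profile_a[max(0, s):len(profile_a) + min(0, s)]
        b = profile_b[max(0, -s):len(profile_b) + min(0, -s)]
        score = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
        if score > best:
            best, best_shift = score, s
    return best_shift

# Horizontal and vertical displacement of a region of interest between two frames:
# dx = estimate_shift(integral_profile(roi_prev, 0), integral_profile(roi_curr, 0))
# dy = estimate_shift(integral_profile(roi_prev, 1), integral_profile(roi_curr, 1))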

IPC Classes  ?

  • H04N 19/53 - Multi-resolution motion estimationHierarchical motion estimation
  • H04N 5/232 - Devices for controlling television cameras, e.g. remote control
  • G06T 7/223 - Analysis of motion using block-matching

55.

Iris image acquisition system

      
Application Number 15973359
Grant Number 11209633
Status In Force
Filing Date 2018-05-07
First Publication Date 2019-08-29
Grant Date 2021-12-28
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Fitzgerald, Niamh
  • Dainty, Christopher
  • Goncharov, Alexander

Abstract

An iris image acquisition system for a mobile device, comprises a lens assembly arranged along an optical axis and configured for forming an image comprising at least one iris of a subject disposed frontally to the lens assembly; and an image sensor configured to acquire the formed image. The lens assembly comprises a first lens refractive element and at least one second lens element for converging incident radiation to the first refractive element. The first refractive element has a variable thickness configured to counteract a shift of the formed image along the optical axis induced by change in iris-lens assembly distance, such that different areas of the image sensor on which irises at different respective iris-lens assembly distances are formed are in focus within a range of respective iris-lens assembly distances at which iris detail is provided at sufficient contrast to be recognised.

IPC Classes  ?

  • G02B 13/18 - Optical objectives specially designed for the purposes specified below with lenses having one or more non-spherical faces, e.g. for reducing geometrical aberration
  • H04N 5/225 - Television cameras
  • G02B 27/00 - Optical systems or apparatus not provided for by any of the groups ,
  • G06K 9/20 - Image acquisition
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G02B 5/00 - Optical elements other than lenses

56.

Image processing method and system for iris recognition

      
Application Number 16380796
Grant Number 10726259
Status In Force
Filing Date 2019-04-10
First Publication Date 2019-08-01
Grant Date 2020-07-28
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Costache, Gabriel
  • Mccullagh, Barry
  • Carateev, Serghei

Abstract

A method of iris recognition comprises detecting a body region larger than and comprising at least one iris in an image and performing a first eye modelling on the detected body region. If successful, the result of first iris segmentation based on the first eye model is chosen. Otherwise, a first iris identification is performed on the detected body region. If successful, the result of second iris segmentation based on a second eye modelling is chosen. Otherwise, second iris identification is performed on the image, third eye modelling is performed on the result of the second iris identification, and third iris segmentation is performed on the result of the third eye modelling. If successful, the result of third iris segmentation based on a third eye modelling is chosen. An iris code is extracted from any selected iris segment of the image.
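
The fallback logic reads naturally as a short chain of attempts; the following sketch uses assumed callables as hooks rather than any real detector API:

def segment_iris(image, detect_eye_region, model_eye, identify_iris, segment):
    """Illustrative fallback chain mirroring the three-stage strategy described above."""
    region = detect_eye_region(image)          # body region containing the iris
    model = model_eye(region)                  # first eye modelling
    if model is not None:
        return segment(model)                  # first segmentation chosen
    iris = identify_iris(region)               # first iris identification
    if iris is not None:
        return segment(model_eye(iris))        # second modelling + segmentation
    iris = identify_iris(image)                # second identification on the whole image
    return segment(model_eye(iris))            # third modelling + segmentation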

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06T 7/11 - Region-based segmentation

57.

Method for compensating for off-axis tilting of a lens

      
Application Number 15904858
Grant Number 10356346
Status In Force
Filing Date 2018-02-26
First Publication Date 2019-07-16
Grant Date 2019-07-16
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Stec, Piotr
  • O'Sullivan, Brian

Abstract

A method for compensating for off-axis tilting of a lens relative to an image sensor in an image acquisition device comprises acquiring a set of calibrated parameters y′ indicate a coordinate of a pixel in an acquired image. Image information is mapped from the acquired image to a lens tilt compensated image according to the formulae: where s comprises a scale factor given by y indicate the location of a pixel in the lens tilt compensated image.

IPC Classes  ?

  • H04N 5/357 - Noise processing, e.g. detecting, correcting, reducing or removing noise
  • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

58.

Configuring manifest files including redirect uniform resource locators

      
Application Number 16358063
Grant Number 10911509
Status In Force
Filing Date 2019-03-19
First Publication Date 2019-07-11
Grant Date 2021-02-02
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor Gordon, Michael

Abstract

Techniques for serving a manifest file of an adaptive streaming video include receiving a request for the manifest file from a user device. The video is encoded at different reference bitrates and each encoded reference bitrate is divided into segments to generate video segment files. The manifest file includes an ordered list of universal resource locators (URLs) that reference a set of video segment files encoded at a particular reference bitrate. A source manifest file that indicates the set of video segment files is identified based on the request. An issued manifest file that includes a first URL and a second URL is generated based on the source manifest file. The first URL references a first domain and the second URL references a second domain that is different from the first domain. The issued manifest file is transmitted to the user device as a response to the request.
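
A toy example of issuing a manifest whose segment URLs point at two different domains; the HLS-style syntax, domain names and segment names are illustrative, not the patented format:

# Minimal sketch, assuming two hypothetical CDN domains shared round-robin.
def issue_manifest(source_segments, domains=("cdn-a.example.com", "cdn-b.example.com")):
    lines = ["#EXTM3U", "#EXT-X-TARGETDURATION:6"]
    for i, segment in enumerate(source_segments):
        lines.append("#EXTINF:6.0,")
        lines.append(f"https://{domains[i % len(domains)]}/video/{segment}")
    lines.append("#EXT-X-ENDLIST")
    return "\n".join(lines)

print(issue_manifest(["seg_000.ts", "seg_001.ts", "seg_002.ts"]))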

IPC Classes  ?

  • G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
  • H04L 29/06 - Communication control; Communication processing characterised by a protocol
  • H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
  • H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
  • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments

59.

Method for producing a histogram of oriented gradients

      
Application Number 16303071
Grant Number 10839247
Status In Force
Filing Date 2017-05-19
First Publication Date 2019-07-04
Grant Date 2020-11-17
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Munteanu, Mihai Constantin
  • Georgescu, Vlad
  • Zaharia, Corneliu
  • Suciu, Iulia

Abstract

A method for producing a histogram of oriented gradients (HOG) for at least a portion of an image comprises dividing the image portion into cells, each cell comprising a plurality of image pixels. Then, for each image pixel of a cell, obtaining a horizontal gradient component, gx, and a vertical gradient component, gy, based on differences in pixel values along at least a row of the image and a column of the image respectively including the pixel; and allocating a gradient to one of a plurality of sectors, where n is a sector index, each sector extending through a range of orientation angles and at least some of the sectors being divided from adjacent sectors according to the inequalities: b*16

IPC Classes  ?

  • G06K 9/46 - Extraction of features or characteristics of the image
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/48 - Extraction of features or characteristics of the image by coding the contour of the pattern
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06T 1/20 - Processor architecturesProcessor configuration, e.g. pipelining

60.

Manifest file configuration with direct selection of video segment file servers

      
Application Number 16173043
Grant Number 11533352
Status In Force
Filing Date 2018-10-29
First Publication Date 2019-06-13
Grant Date 2022-12-20
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor Gordon, Michael

Abstract

Techniques for serving a manifest file of an adaptive streaming video include receiving a request for the manifest file from a user device. The video is encoded at different reference bitrates and each encoded reference bitrate is divided into segments to generate video segment files. The manifest file includes an ordered list of universal resource locators (URLs) that reference a set of video segment files encoded at a particular reference bitrate. A source manifest file that indicates the set of video segment files is identified based on the request. An issued manifest file that includes a first URL and a second URL is generated based on the source manifest file. The first URL references a first domain and the second URL references a second domain that is different from the first domain. The issued manifest file is transmitted to the user device as a response to the request.

IPC Classes  ?

  • H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
  • H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
  • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
  • H04L 65/75 - Media network packet handling
  • H04L 65/80 - Responding to QoS

61.

Providing third-party dynamic content within adaptive streaming video

      
Application Number 16130637
Grant Number 11757964
Status In Force
Filing Date 2018-09-13
First Publication Date 2019-06-06
Grant Date 2023-09-12
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Gordon, Michael
  • Morel, David

Abstract

Techniques for serving a manifest file of an adaptive streaming video include receiving a request for the manifest file from a user device. The video is encoded at different reference bitrates and each encoded reference bitrate is divided into segments to generate video segment files. The manifest file includes an ordered list of universal resource locators (URLs) that reference a set of video segment files encoded at a particular reference bitrate. A source manifest file that indicates the set of video segment files is identified based on the request. An issued manifest file that includes a first URL and a second URL is generated based on the source manifest file. The first URL references a first domain and the second URL references a second domain that is different from the first domain. The issued manifest file is transmitted to the user device as a response to the request.

IPC Classes  ?

  • G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
  • H04L 65/75 - Media network packet handling
  • H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
  • H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
  • H04L 65/80 - Responding to QoS
  • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
  • H04L 9/40 - Network security protocols
  • H04N 21/2662 - Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
  • H04N 21/222 - Secondary servers, e.g. proxy server or cable television Head-end
  • H04N 21/858 - Linking data to content, e.g. by linking an URL to a video object or by creating a hotspot
  • H04L 67/02 - Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
  • H04L 65/612 - Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for unicast

62.

Network monitoring to determine performance of infrastructure service providers

      
Application Number 16173104
Grant Number 11075970
Status In Force
Filing Date 2018-10-29
First Publication Date 2019-06-06
Grant Date 2021-07-27
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor Gordon, Michael

Abstract

Techniques for serving a manifest file of an adaptive streaming video include receiving a request for the manifest file from a user device. The video is encoded at different reference bitrates and each encoded reference bitrate is divided into segments to generate video segment files. The manifest file includes an ordered list of universal resource locators (URLs) that reference a set of video segment files encoded at a particular reference bitrate. A source manifest file that indicates the set of video segment files is identified based on the request. An issued manifest file that includes a first URL and a second URL is generated based on the source manifest file. The first URL references a first domain and the second URL references a second domain that is different from the first domain. The issued manifest file is transmitted to the user device as a response to the request.

IPC Classes  ?

  • H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
  • H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
  • H04L 29/06 - Communication control; Communication processing characterised by a protocol
  • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
  • H04L 29/08 - Transmission control procedure, e.g. data link level control procedure
  • H04L 29/14 - Counter-measures to a fault

63.

Method for dynamically calibrating an image capture device

      
Application Number 16208091
Grant Number 10560690
Status In Force
Filing Date 2018-12-03
First Publication Date 2019-05-23
Grant Date 2020-02-11
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Malaescu, Alexandru
  • Nanu, Florin

Abstract

INIT) for each of the first and second determined distances; and the stored calibrated lens actuator settings are adjusted according to the determined calibration corrections.

IPC Classes  ?

  • H04N 17/00 - Diagnosis, testing or measuring for television systems or their details
  • H04N 5/232 - Devices for controlling television cameras, e.g. remote control
  • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
  • G06T 7/70 - Determining position or orientation of objects or cameras
  • G03B 13/36 - Autofocus systems
  • G03B 43/00 - Testing correct operation of photographic apparatus or parts thereof
  • G02B 7/08 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses with mechanism for focusing or varying magnification adapted to co-operate with a remote control mechanism

64.

Image processing apparatus

      
Application Number 16171032
Grant Number 11106894
Status In Force
Filing Date 2018-10-25
First Publication Date 2019-05-02
Grant Date 2021-08-31
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Georgescu, Vlad
  • Munteanu, Mihai Constantin
  • Bigioi, Petronel
  • Zaharia, Corneliu
  • Fulop, Szabolcs
  • Simon, Gyorgy

Abstract

D<2 of a required scale for a normalised version of the ROI. The apparatus then fractionally downsamples and rotates downsampled information for a tile within the buffer to produce a respective normalised portion of the ROI at the required scale for the normalised ROI. Downsampled and rotated information is accumulated for each tile within a normalised ROI buffer for subsequent processing by the image processing apparatus.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06T 3/60 - Rotation of whole images or parts thereof
  • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
  • G06T 7/11 - Region-based segmentation

65.

System and method for estimating optimal parameters

      
Application Number 16152364
Grant Number 11030722
Status In Force
Filing Date 2018-10-04
First Publication Date 2019-04-11
Grant Date 2021-06-08
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor Condorovici, Razvan G.

Abstract

A method and system of generating an adjustment parameter value for a control parameter to enhance a new image. The method includes configuring a neural network, trained to restore the image quality of a derivative image to that of an earlier version of the derivative image, to generate the adjustment parameter value for the control parameter as an output in response to input of data derived from the new image; and changing a control parameter of the new image by generating the adjustment parameter value as the inverse of the network's output value and applying it to the control parameter of the new image so as to generate an enhanced image.
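
In the simplest multiplicative reading of the abstract, the enhancement gain is just the reciprocal of the degradation factor estimated by the network (an assumed form, for illustration only):

def adjustment_parameter(network_output):
    """Reciprocal of the degradation factor estimated by the network
    (assumed multiplicative model)."""
    return 1.0 / network_output

# e.g. if the network estimates the image was brightened by a factor of 1.25,
# the enhancement step applies a gain of adjustment_parameter(1.25) == 0.8.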

IPC Classes  ?

  • G06T 5/00 - Image enhancement or restoration
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06K 9/66 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix references adjustable by an adaptive method, e.g. learning
  • G06N 3/00 - Computing arrangements based on biological models

66.

Systems and methods for authenticating a biometric device using a trusted coordinating smart device

      
Application Number 16207666
Grant Number 10586032
Status In Force
Filing Date 2018-12-03
First Publication Date 2019-04-04
Grant Date 2020-03-10
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor Corcoran, Peter

Abstract

Systems and methods for authenticating a biometric device using a trusted coordinating smart device in accordance with embodiments of the invention are disclosed. In one embodiment, a process for enrolling a configurable biometric device with a network service includes obtaining a device identifier (ID) of the configurable biometric device using a coordinating smart device, communicating the device ID from the coordinating smart device to a network service, communicating a first challenge based on a challenge-response authentication protocol from the network service to the coordinating smart device, communicating the first challenge and a response uniform resource locator (URL) from the coordinating smart device to the configurable biometric device, generating a first response to the first challenge and communicating the first response to the network service utilizing the response URL, receiving a secure channel key by the coordinating smart device from the network service, communicating the secure channel key from the coordinating smart device to the configurable biometric device, performing a biometric enrollment process using the configurable biometric device including capturing biometric information from a user, and creating a secure communication link between the configurable biometric device and the network service using the secure channel key when the first response satisfies the challenge-response authentication protocol.
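
The challenge-response exchange at the heart of the enrollment can be illustrated with a standard HMAC construction; this is a generic sketch of challenge-response authentication, not the patented protocol, and the key provisioning is assumed:

import hashlib, hmac, secrets

shared_secret = secrets.token_bytes(32)          # assumed to be provisioned on the biometric device

def issue_challenge():
    return secrets.token_bytes(16)               # generated by the network service

def device_response(challenge):
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

def service_verify(challenge, response):
    expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = issue_challenge()
assert service_verify(challenge, device_response(challenge))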

IPC Classes  ?

  • G06F 21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
  • G06F 21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • H04L 9/32 - Arrangements for secret or secure communicationsNetwork security protocols including means for verifying the identity or authority of a user of the system
  • H04L 29/06 - Communication control; Communication processing characterised by a protocol
  • H04L 9/08 - Key distribution
  • H04L 29/14 - Counter-measures to a fault
  • H04W 12/06 - Authentication

67.

Method for determining bias in an inertial measurement unit of an image acquisition device

      
Application Number 16154450
Grant Number 10757333
Status In Force
Filing Date 2018-10-08
First Publication Date 2019-02-14
Grant Date 2020-08-25
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor Stec, Piotr

Abstract

e is used to update a bias component value.

IPC Classes  ?

  • H04N 5/232 - Devices for controlling television cameras, e.g. remote control
  • G01C 25/00 - Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
  • G06T 7/20 - Analysis of motion

68.

Method for producing framing information for a set of images

      
Application Number 15630744
Grant Number 10339626
Status In Force
Filing Date 2017-06-22
First Publication Date 2018-12-27
Grant Date 2019-07-02
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor Pit Rada, Cosmin

Abstract

A method for producing framing information for a set of N source images, each comprising an object region R, comprises scaling, translating and/or rotating images of the source images so that the object region is aligned. For a given image of the object aligned source images, at a given frame size, a given frame angle for a frame relative to the object aligned images and at a first candidate boundary position for the frame, the method determines if there is at least one position for a second boundary of the frame orthogonal to the first boundary where the frame lies within the image and the frame encloses the object region. If so, counters associated with the first candidate boundary position are incremented. Responsive to any counter meeting a threshold value, K≤N, for the source images, framing is indicated as possible at the given frame size, frame angle, first candidate boundary position and any position for the second boundary associated with the threshold meeting counter. Otherwise, another image can be chosen and the process repeated.

IPC Classes  ?

  • G06T 3/00 - Geometric image transformations in the plane of the image
  • G06T 7/11 - Region-based segmentation
  • G06T 7/136 - SegmentationEdge detection involving thresholding
  • G06T 7/194 - SegmentationEdge detection involving foreground-background segmentation
  • G06T 7/174 - SegmentationEdge detection involving the use of two or more images

69.

Method for configuring access for a limited user interface (UI) device

      
Application Number 16107910
Grant Number 10999275
Status In Force
Filing Date 2018-08-21
First Publication Date 2018-12-13
Grant Date 2021-05-04
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Corcoran, Peter
  • Bigioi, Petronel
  • Raducan, Ilariu

Abstract

A method operable by a computing device for configuring access for a limited user interface (UI) device to a network service via a local network access point is disclosed. The method comprises the steps of: obtaining from the limited UI device a device identifier via a first out-of-band channel. The device identifier is provided to the network service via a secure network link. A zero knowledge proof (ZKP) challenge is received from the network service. Configuration information is provided to the limited-UI device via a second out-of-band channel, the configuration information including information sufficient to enable the limited-UI device to connect to the local network access point. The ZKP challenge is provided to the limited-UI device via the second out-of-band channel. A secure channel key is received from the network service indicating a successful response from the limited-UI device to the ZKP challenge; and provided to the limited-UI device enabling the limited-UI device to access the network service.

IPC Classes  ?

  • H04L 29/06 - Communication control; Communication processing characterised by a protocol
  • H04L 9/32 - Arrangements for secret or secure communicationsNetwork security protocols including means for verifying the identity or authority of a user of the system
  • H04W 4/80 - Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
  • H04W 76/10 - Connection setup
  • H04W 12/06 - Authentication
  • H04W 4/70 - Services for machine-to-machine communication [M2M] or machine type communication [MTC]
  • H04W 12/04 - Key management, e.g. using generic bootstrapping architecture [GBA]
  • H04W 84/12 - WLAN [Wireless Local Area Networks]
  • H04W 12/00 - Security arrangementsAuthenticationProtecting privacy or anonymity

70.

Automatic exposure module for an image acquisition system

      
Application Number 15609314
Grant Number 10701277
Status In Force
Filing Date 2017-05-31
First Publication Date 2018-12-06
Grant Date 2020-06-30
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor Andorko, Istvan

Abstract

A method for automatically determining exposure settings for an image acquisition system comprises maintaining a plurality of look-up tables, each look-up table being associated with a corresponding light condition and storing image exposure settings associated with corresponding distance values between a subject and the image acquisition system. An image of a subject is acquired from a camera module; and a light condition occurring during the acquisition is determined based on the acquired image. A distance between the subject and the camera module during the acquisition is calculated. The method then determines whether a correction of the image exposure settings for the camera module is required based on the calculated distance and the determined light condition; and responsive to correction being required: selects image exposure settings corresponding to the calculated distance from the look-up table corresponding to the determined light condition; and acquires a new image using the selected image exposure settings.

IPC Classes  ?

  • H04N 5/00 - Details of television systems
  • H04N 5/235 - Circuitry for compensating for variation in the brightness of the object
  • G01S 3/00 - Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
  • H04N 5/232 - Devices for controlling television cameras, e.g. remote control
  • H04N 5/243 - Circuitry for compensating for variation in the brightness of the object by influencing the picture signal

71.

Method for dynamically calibrating an image capture device

      
Application Number 15605159
Grant Number 10148945
Status In Force
Filing Date 2017-05-25
First Publication Date 2018-11-29
Grant Date 2018-12-04
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Malaescu, Alexandru
  • Nanu, Florin

Abstract

INIT) for each of the first and second determined distances; and the stored calibrated lens actuator settings are adjusted according to the determined calibration corrections.

IPC Classes  ?

  • H04N 5/232 - Devices for controlling television cameras, e.g. remote control
  • H04N 17/00 - Diagnosis, testing or measuring for television systems or their details
  • G03B 13/36 - Autofocus systems
  • G03B 43/00 - Testing correct operation of photographic apparatus or parts thereof
  • G06T 7/70 - Determining position or orientation of objects or cameras
  • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

72.

Method for correcting an acquired image

      
Application Number 15942789
Grant Number 10515439
Status In Force
Filing Date 2018-04-02
First Publication Date 2018-11-15
Grant Date 2019-12-24
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor Stec, Piotr

Abstract

G.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06T 5/00 - Image enhancement or restoration
  • H04N 5/232 - Devices for controlling television cameras, e.g. remote control
  • G06T 7/20 - Analysis of motion

73.

Portable system providing augmented vision of surroundings

      
Application Number 15654465
Grant Number 10491819
Status In Force
Filing Date 2017-07-19
First Publication Date 2018-11-15
Grant Date 2019-11-26
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor Corcoran, Peter

Abstract

A portable system providing augmented vision of surroundings. In one embodiment the system includes a helmet, a plurality of camera units and circuitry to generate a composite field of view from channels of video data. The helmet permits a user to receive a first field of view in the surroundings based on optical information received directly from the surroundings with the user's natural vision. The camera units are mounted about the helmet to generate the multiple channels of video data. Each camera channel captures a different field of view of a scene in a region surrounding the helmet.

IPC Classes  ?

  • G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
  • H04N 5/232 - Devices for controlling television cameras, e.g. remote control
  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
  • H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
  • H04N 13/376 - Image reproducers using viewer tracking for tracking left-right translational head movements, i.e. lateral movements
  • H04N 13/378 - Image reproducers using viewer tracking for tracking rotational head movements around an axis perpendicular to the screen
  • H04N 13/38 - Image reproducers using viewer tracking for tracking vertical translational head movements
  • H04N 13/398 - Synchronisation thereof; Control thereof
  • A42B 3/04 - Parts, details or accessories of helmets
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06T 11/60 - Editing figures and textCombining figures or text
  • H04N 5/247 - Arrangement of television cameras
  • H04N 5/33 - Transforming infrared radiation
  • G02B 27/01 - Head-up displays
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • H04N 13/243 - Image signal generators using stereoscopic image cameras using three or more 2D image sensors

74.

Method for determining bias in an inertial measurement unit of an image acquisition device

      
Application Number 15468409
Grant Number 10097757
Status In Force
Filing Date 2017-03-24
First Publication Date 2018-09-27
Grant Date 2018-10-09
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor Stec, Piotr

Abstract

e is used to update a bias component value.

IPC Classes  ?

  • H04N 5/228 - Circuit details for pick-up tubes
  • H04N 5/232 - Devices for controlling television cameras, e.g. remote control
  • G06T 7/20 - Analysis of motion

75.

Method of providing a sharpness measure for an image

      
Application Number 15872873
Grant Number 10311554
Status In Force
Filing Date 2018-01-16
First Publication Date 2018-09-06
Grant Date 2019-06-04
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Nanu, Florin
  • Bobei, Adrian
  • Malaescu, Alexandru
  • Clapon, Cosmin

Abstract

A method of providing a sharpness measure for an image comprises detecting an object region within an image; obtaining meta-data for the image; and scaling the chosen object region to a fixed size. A gradient map is calculated for the scaled object region and compared against a threshold determined for the image to provide a filtered gradient map of values exceeding the threshold. The threshold for the image is a function of at least: a contrast level for the detected object region, a distance to the subject and an ISO/gain used for image acquisition. A sharpness measure for the object region is determined as a function of the filtered gradient map values, the sharpness measure being proportional to the filtered gradient map values.
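
A rough numpy sketch of that pipeline (scale the region, compute a gradient map, threshold it, derive a score) follows. The threshold formula combining contrast, distance and ISO/gain is an invented placeholder for the unspecified function, and the fixed size and weights are illustrative.

```python
# Sketch only: the threshold weights and fixed size are assumptions.
import numpy as np

def sharpness_measure(region, distance_m, iso, fixed_size=64):
    # Nearest-neighbour rescale of the detected object region to a fixed size.
    ys = np.linspace(0, region.shape[0] - 1, fixed_size).astype(int)
    xs = np.linspace(0, region.shape[1] - 1, fixed_size).astype(int)
    scaled = region[np.ix_(ys, xs)].astype(float)

    # Gradient magnitude map of the scaled region.
    gy, gx = np.gradient(scaled)
    grad = np.hypot(gx, gy)

    # Placeholder threshold as a function of contrast, subject distance and ISO/gain.
    contrast = scaled.max() - scaled.min()
    threshold = 0.05 * contrast + 0.01 * distance_m + 0.0001 * iso

    # Sharpness proportional to the gradient values that exceed the threshold.
    filtered = grad[grad > threshold]
    return filtered.mean() if filtered.size else 0.0

# Example on a synthetic edge: a sharp step produces a non-zero score.
sharp = np.zeros((64, 64)); sharp[:, 32:] = 255.0
print(round(sharpness_measure(sharp, distance_m=0.5, iso=100), 2))
```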

IPC Classes  ?

  • G06K 9/46 - Extraction of features or characteristics of the image
  • G06T 5/00 - Image enhancement or restoration
  • G06T 7/00 - Image analysis
  • G06T 7/13 - Edge detection
  • G06T 7/42 - Analysis of texture based on statistical description of texture using transform domain methods
  • H04N 5/232 - Devices for controlling television cameras, e.g. remote control

76.

Portrait lens system suitable for use in a mobile camera

      
Application Number 15913597
Grant Number 10606050
Status In Force
Filing Date 2018-03-06
First Publication Date 2018-09-06
Grant Date 2020-03-31
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Dainty, Christopher
  • Goncharov, Alexander
  • Sheil, Conor J.

Abstract

A portrait lens configuration for meeting handheld device form factor constraints. First and second meniscus lenses each have a reflective surface to provide internal reflections for transmitting light toward a focal plane. A third lens is positioned between the meniscus lenses and the focal plane. The first lens includes an anterior concave surface having a reflective material extending over a portion thereof. Light received by the first meniscus lens can be transmitted therethrough. The reflective material is positioned along the anterior concave surface to receive light transmitted therethrough and reflected back from the second lens. In an associated method the first meniscus lens is positioned to receive light through a first of two opposing refractive surfaces. After each lens provides an internal reflection, reflected light is transmitted through the second of the two opposing surfaces and then through a bore positioned within the second lens to the third lens.

IPC Classes  ?

  • G02B 17/08 - Catadioptric systems
  • G02B 13/18 - Optical objectives specially designed for the purposes specified below with lenses having one or more non-spherical faces, e.g. for reducing geometrical aberration
  • G02B 7/04 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses with mechanism for focusing or varying magnification
  • G03B 17/56 - Accessories
  • G02B 13/00 - Optical objectives specially designed for the purposes specified below
  • G02B 15/14 - Optical objectives with means for varying the magnification by axial movement of one or more lenses or groups of lenses relative to the image plane for continuously varying the equivalent focal length of the objective
  • G02B 17/00 - Systems with reflecting surfaces, with or without refracting elements
  • G03B 17/17 - Bodies with reflectors arranged in beam forming the photographic image, e.g. for reducing dimensions of camera

77.

Portrait lens system formed with an adjustable meniscus lens

      
Application Number 15913659
Grant Number 10514529
Status In Force
Filing Date 2018-03-06
First Publication Date 2018-09-06
Grant Date 2019-12-24
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Dainty, Christopher
  • Goncharov, Alexander
  • Sheil, Conor J.

Abstract

A portrait lens configuration for meeting handheld device form factor constraints. First and second meniscus lenses each have a reflective surface to provide internal reflections for transmitting light toward a focal plane. A third lens is positioned between the meniscus lenses and the focal plane. The first lens includes an anterior concave surface having a reflective material extending over a portion thereof. Light received by the first meniscus lens can be transmitted therethrough. The reflective material is positioned along the anterior concave surface to receive light transmitted therethrough and reflected back from the second lens. In an associated method the first meniscus lens is positioned to receive light through a first of two opposing refractive surfaces. After each lens provides an internal reflection, reflected light is transmitted through the second of the two opposing surfaces and then through a bore positioned within the second lens to the third lens.

IPC Classes  ?

  • G02B 17/00 - Systems with reflecting surfaces, with or without refracting elements
  • G02B 17/08 - Catadioptric systems
  • G02B 13/18 - Optical objectives specially designed for the purposes specified below with lenses having one or more non-spherical faces, e.g. for reducing geometrical aberration
  • G02B 7/04 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses with mechanism for focusing or varying magnification
  • G03B 17/56 - Accessories
  • G02B 13/00 - Optical objectives specially designed for the purposes specified below
  • G02B 15/14 - Optical objectives with means for varying the magnification by axial movement of one or more lenses or groups of lenses relative to the image plane for continuously varying the equivalent focal length of the objective
  • G03B 17/17 - Bodies with reflectors arranged in beam forming the photographic image, e.g. for reducing dimensions of camera

78.

Image processing method and system for iris recognition

      
Application Number 15427904
Grant Number 10275648
Status In Force
Filing Date 2017-02-08
First Publication Date 2018-08-09
Grant Date 2019-04-30
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Costache, Gabriel
  • Mccullagh, Barry
  • Carateev, Serghei

Abstract

A method of iris recognition comprises detecting a body region larger than and comprising at least one iris in an image and performing a first eye modelling on the detected body region. If successful, the result of first iris segmentation based on the first eye model is chosen. Otherwise, a first iris identification is performed on the detected body region. If successful, the result of second iris segmentation based on a second eye modelling is chosen. Otherwise, second iris identification is performed on the image, third eye modelling is performed on the result of the second iris identification, and third iris segmentation is performed on the result of the third eye modelling. If successful, the result of third iris segmentation based on a third eye modelling is chosen. An iris code is extracted from any selected iris segment of the image.
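
The fallback ordering reads as a small decision procedure. In the sketch below every detector and model is a placeholder callable supplied by the caller; only the order of attempts is taken from the abstract.

```python
# Hypothetical control-flow sketch; all callables are stand-ins.
def segment_iris(image, detect_body, model_eye, segment, identify_iris):
    body = detect_body(image)               # region larger than the iris (e.g. a face)
    model = model_eye(body)                 # first eye modelling on the body region
    if model is not None:
        return segment(model)               # first segmentation chosen

    iris = identify_iris(body)              # fall back: iris identification in the body region
    if iris is not None:
        model = model_eye(iris)             # second eye modelling
        if model is not None:
            return segment(model)           # second segmentation chosen

    iris = identify_iris(image)             # last resort: search the whole image
    if iris is not None:
        model = model_eye(iris)             # third eye modelling
        if model is not None:
            return segment(model)           # third segmentation chosen
    return None                             # no iris segment; no code can be extracted

# Trivial stand-ins: the first eye modelling fails, the fallback identification succeeds.
print(segment_iris("img",
                   detect_body=lambda image: "face-region",
                   model_eye=lambda region: "eye-model" if region == "iris-region" else None,
                   segment=lambda model: "iris-segment",
                   identify_iris=lambda region: "iris-region"))
```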

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06T 7/11 - Region-based segmentation

79.

Image processing method and system for iris recognition

      
Application Number 15948199
Grant Number 10474894
Status In Force
Filing Date 2018-04-09
First Publication Date 2018-08-09
Grant Date 2019-11-12
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Andorko, Istvan
  • Drimbarean, Alexandru
  • Corcoran, Peter

Abstract

An image processing method for iris recognition of a predetermined subject, comprises acquiring through an image sensor, a probe image illuminated by an infra-red (IR) illumination source, wherein the probe image comprises one or more eye regions and is overexposed until skin portions of the image are saturated. One or more iris regions are identified within the one or more eye regions of said probe image; and the identified iris regions are analyzed to detect whether they belong to the predetermined subject.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/46 - Extraction of features or characteristics of the image
  • H04N 5/33 - Transforming infrared radiation
  • G06T 7/11 - Region-based segmentation
  • G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
  • G06K 9/20 - Image acquisition
  • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods

80.

Method of training a neural network

      
Application Number 15413312
Grant Number 10915817
Status In Force
Filing Date 2017-01-23
First Publication Date 2018-07-26
Grant Date 2021-02-09
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Bazrafkan, Shabab
  • Lemley, Joe

Abstract

Training a target neural network comprises providing a first batch of samples of a given class to respective instances of a generative neural network, each instance providing a variant of the sample in accordance with the parameters of the generative network. Each variant produced by the generative network is compared with another sample of the class to provide a first loss function for the generative network. A second batch of samples is provided to the target neural network, at least some of the samples comprising variants produced by the generative network. A second loss function is determined for the target neural network by comparing outputs of instances of the target neural network to one or more targets for the neural network. The parameters for the target neural network are updated using the second loss function and the parameters for the generative network are updated using the first and second loss functions.
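
A compact PyTorch sketch of that joint update is shown below, assuming torch is available. The tiny network sizes, the MSE/cross-entropy loss pairing and the 28x28 input shape are illustrative assumptions, not details from the patent.

```python
# Sketch: generator produces variants, target net trains on them, generator
# is updated with both losses. Architectures and losses are assumptions.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Flatten(), nn.Linear(784, 784), nn.Sigmoid())
target_net = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_t = torch.optim.Adam(target_net.parameters(), lr=1e-3)

def training_step(class_samples, other_samples, mixed_batch, labels):
    # First loss: each generated variant is compared with another sample of the class.
    variants = generator(class_samples).view_as(class_samples)
    loss_g = nn.functional.mse_loss(variants, other_samples)

    # Second loss: the target network trains on a batch that includes the variants.
    batch = torch.cat([mixed_batch, variants.detach()])
    batch_labels = torch.cat([labels, labels[: variants.shape[0]]])
    loss_t = nn.functional.cross_entropy(target_net(batch), batch_labels)
    opt_t.zero_grad(); loss_t.backward(); opt_t.step()          # update target net with loss_t

    # The generator is updated using both losses (target loss re-evaluated on live variants).
    loss_t_for_g = nn.functional.cross_entropy(target_net(variants), labels[: variants.shape[0]])
    opt_g.zero_grad(); (loss_g + loss_t_for_g).backward(); opt_g.step()
    return loss_g.item(), loss_t.item()

x = torch.rand(8, 1, 28, 28)
print(training_step(x, torch.rand_like(x), torch.rand_like(x), torch.randint(0, 10, (8,))))
```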

IPC Classes  ?

81.

Method for synthesizing a neural network

      
Application Number 15413283
Grant Number 10546231
Status In Force
Filing Date 2017-01-23
First Publication Date 2018-07-26
Grant Date 2020-01-28
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Bazrafkan, Shabab
  • Lemley, Joe

Abstract

Synthesizing a neural network from a plurality of component neural networks is disclosed. The method comprises mapping each component network to a respective graph node where each node is first labelled in accordance with the structure of a corresponding layer of the component network and a distance of the node from one of a given input or output. The graphs for each component network are merged into a single merged graph by merging nodes from component network graphs having the same first structural label. Each node of the merged graph is second labelled in accordance with the structure of the corresponding layer of the component network and a distance of the node from the other of a given input or output. The merged graph is contracted by merging nodes of the merged graph having the same second structural label. The contracted-merged graph is mapped to a synthesized neural network.
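
A toy plain-Python sketch of the dual-labelling and contraction idea follows. The layer names and example component networks are invented, and flattening the contracted graph back to a serial layer list is a simplification of what a merged graph would really describe.

```python
# Toy illustration only: labels each layer by (structure, distance from input)
# and (structure, distance from output), merges on the first, contracts on the second.
def synthesize(component_networks):
    nodes = {}                                            # first label -> set of second labels
    for layers in component_networks:
        depth = len(layers)
        for i, structure in enumerate(layers):
            first_label = (structure, i)                  # distance from the input
            second_label = (structure, depth - 1 - i)     # distance from the output
            nodes.setdefault(first_label, set()).add(second_label)

    # Merging on the first label happens via the shared dict keys above;
    # contraction then merges nodes that share a second label.
    contracted = {}
    for first_label, second_labels in nodes.items():
        for second_label in second_labels:
            contracted.setdefault(second_label, set()).add(first_label)

    # Map the contracted graph back to an ordered layer list (farthest from output first).
    ordered = sorted(contracted, key=lambda label: -label[1])
    return [structure for structure, _ in ordered]

net_a = ["conv3x3", "conv3x3", "dense"]
net_b = ["conv3x3", "pool", "dense"]
print(synthesize([net_a, net_b]))    # ['conv3x3', 'conv3x3', 'pool', 'dense']
```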

IPC Classes  ?

  • G06N 3/04 - Architecture, e.g. interconnection topology

82.

Systems and methods for detecting data insertions in biometric authentication systems utilizing a secret

      
Application Number 15836719
Grant Number 11049210
Status In Force
Filing Date 2017-12-08
First Publication Date 2018-06-28
Grant Date 2021-06-29
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Corcoran, Peter
  • Drimbarean, Alexandru

Abstract

Systems and methods of detecting an unauthorized data insertion into a stream of data segments extending between electronic modules or between electronic components within a module, wherein a Secret embedded into the data stream is compared to a Replica Secret upon receipt to confirm data transmission integrity.

IPC Classes  ?

  • H04L 29/06 - Communication control; Communication processing characterised by a protocol
  • G06T 1/00 - General purpose image data processing
  • G06F 21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • H04L 9/08 - Key distribution

83.

Systems and methods for detecting data insertions in biometric authentication systems using encryption

      
Application Number 15836755
Grant Number 10615973
Status In Force
Filing Date 2017-12-08
First Publication Date 2018-06-28
Grant Date 2020-04-07
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Corcoran, Peter
  • Drimbarean, Alexandru

Abstract

Systems and methods of detecting an unauthorized data insertion into a stream of data segments extending between electronic modules or between electronic components within a module, wherein a data stream is encrypted with a secure encryption key for transmission, then decrypted upon receipt using a corresponding secure decryption key to confirm data transmission integrity.
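
A minimal sketch of the idea is shown below using the `cryptography` package's Fernet (assumed installed) as a stand-in authenticated cipher: each segment is encrypted before it enters the stream, and any segment that fails decryption at the receiver is treated as an unauthorized insertion. The patent does not specify this particular cipher.

```python
# Sketch only; Fernet is an assumed stand-in for the unspecified secure keys.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()
sender, receiver = Fernet(key), Fernet(key)

segments = [b"frame-0001", b"frame-0002", b"frame-0003"]
stream = [sender.encrypt(segment) for segment in segments]

# An attacker splices a segment into the stream that the sender never produced.
stream.insert(1, b"injected-segment")

for token in stream:
    try:
        print("accepted:", receiver.decrypt(token))
    except InvalidToken:
        print("rejected: unauthorized insertion detected")
```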

IPC Classes  ?

  • H04L 9/08 - Key distribution
  • H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06F 21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
  • G06F 21/64 - Protecting data integrity, e.g. using checksums, certificates or signatures

84.

Systems and methods for detecting data insertions in biometric authentication systems using pseudo data segments

      
Application Number 15836759
Grant Number 11075750
Status In Force
Filing Date 2017-12-08
First Publication Date 2018-06-28
Grant Date 2021-07-27
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Corcoran, Peter
  • Drimbarean, Alexandru

Abstract

Systems and methods of detecting an unauthorized data insertion into a stream of data segments extending between electronic modules or between electronic components within a module, wherein a pseudo data segment included in the data stream upon transmission is detected upon receipt to confirm data transmission integrity.

IPC Classes  ?

  • G06F 21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
  • H04L 9/08 - Key distribution
  • H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06F 21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
  • G06F 21/64 - Protecting data integrity, e.g. using checksums, certificates or signatures

85.

Iris recognition workflow

      
Application Number 15811494
Grant Number 10880742
Status In Force
Filing Date 2017-11-13
First Publication Date 2018-06-21
Grant Date 2020-12-29
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Andorko, Istvan
  • Drimbarean, Alexandru

Abstract

In an embodiment, a method comprises determining, by a portable device, image capturing conditions by analyzing contents of a particular digital image of one or more first digital images captured by one or more image capturing devices. It is also determined whether the image capturing conditions indicate indoor image capturing conditions. If the image capturing conditions indicate indoor image capturing conditions, then it is determined whether the particular digital image includes a depiction of at least one eye. If so, an iris region, depicting the at least one eye, is segmented in the particular digital image. If the segmented iris region does not include valid iris information, then one or more image capturing devices capture one or more second digital images having an enhanced contrast. If the one or more second digital images include valid iris information, then an iris code is extracted from the one or more second digital images.
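
Read as a capture workflow, the decisions can be sketched as below. Every helper (condition estimator, eye detector, segmenter, validity check, code extractor) is a placeholder for an unspecified component; only the ordering of decisions follows the abstract.

```python
# Hypothetical workflow sketch; all helpers are caller-supplied stand-ins.
def extract_iris_code(first_images, camera, estimate_conditions, detect_eye,
                      segment_iris, has_valid_info, extract_code):
    probe = first_images[0]
    if estimate_conditions(probe) != "indoor":      # workflow applies to indoor captures
        return None
    if not detect_eye(probe):                       # no eye depicted: nothing to segment
        return None
    iris_region = segment_iris(probe)
    if has_valid_info(iris_region):
        return extract_code(iris_region)

    # Otherwise re-capture with enhanced contrast and retry on the second images.
    for image in camera.capture(contrast="enhanced"):
        iris_region = segment_iris(image)
        if has_valid_info(iris_region):
            return extract_code(iris_region)
    return None

# Trivial stand-in components to exercise the control flow.
class Camera:
    def capture(self, contrast):
        return ["enhanced-image"]

code = extract_iris_code(
    ["probe-image"], Camera(),
    estimate_conditions=lambda img: "indoor",
    detect_eye=lambda img: True,
    segment_iris=lambda img: img,
    has_valid_info=lambda region: region == "enhanced-image",
    extract_code=lambda region: "iris-code-bits")
print(code)   # falls through to the enhanced-contrast capture and prints 'iris-code-bits'
```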

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • H04W 12/06 - Authentication
  • G06T 7/11 - Region-based segmentation
  • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
  • G06T 5/00 - Image enhancement or restoration
  • G06F 21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
  • H04L 29/06 - Communication control; Communication processing characterised by a protocol
  • G06K 9/03 - Detection or correction of errors, e.g. by rescanning the pattern

86.

Systems and methods for authenticating a biometric device using a trusted coordinating smart device

      
Application Number 15841639
Grant Number 10146924
Status In Force
Filing Date 2017-12-14
First Publication Date 2018-06-14
Grant Date 2018-12-04
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor Corcoran, Peter

Abstract

Systems and methods for authenticating a biometric device using a trusted coordinating smart device in accordance with embodiments of the invention are disclosed. In one embodiment, a process for enrolling a configurable biometric device with a network service includes obtaining a device identifier (ID) of the configurable biometric device using a coordinating smart device, communicating the device ID from the coordinating smart device to a network service, communicating a first challenge based on a challenge-response authentication protocol from the network service to the coordinating smart device, communicating the first challenge and a response uniform resource locator (URL) from the coordinating smart device to the configurable biometric device, generating a first response to the first challenge and communicating the first response to the network service utilizing the response URL, receiving a secure channel key by the coordinating smart device from the network service, communicating the secure channel key from the coordinating smart device to the configurable biometric device, performing a biometric enrollment process using the configurable biometric device including capturing biometric information from a user, and creating a secure communication link between the configurable biometric device and the network service using the secure channel key when the first response satisfies the challenge-response authentication protocol.
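
A hypothetical outline of that exchange, with the coordinating smart device acting as the relay, is sketched below. Every name is invented, a plain HMAC stands in for the unspecified challenge-response scheme, and the response URL is shown only to mark where the biometric device answers the service directly.

```python
# Sketch of the relay sequence only; not the patented enrollment protocol.
import hmac, hashlib, os, secrets

DEVICE_SECRET = b"biometric-device-secret"          # assumed pre-provisioned

def service_issue_challenge(device_id):
    return {"challenge": secrets.token_bytes(16),
            "response_url": f"https://service.example/enroll/{device_id}/response"}

def biometric_device_respond(challenge):
    return hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).hexdigest()

def service_verify(challenge, response):
    expected = hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).hexdigest()
    return os.urandom(32) if hmac.compare_digest(expected, response) else None

device_id = "bio-device-42"
msg = service_issue_challenge(device_id)                 # smart device forwards the device ID
response = biometric_device_respond(msg["challenge"])    # challenge + response URL relayed on
channel_key = service_verify(msg["challenge"], response) # device answers at the response URL
print("enrolled over secure channel" if channel_key else "challenge failed")
```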

IPC Classes  ?

  • G06F 21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
  • G06F 21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
  • H04L 29/06 - Communication control; Communication processing characterised by a protocol
  • H04L 9/08 - Key distribution
  • H04L 29/14 - Counter-measures to a fault
  • H04W 12/06 - Authentication

87.

Systems and methods for estimating and refining depth maps

      
Application Number 15654693
Grant Number 10462445
Status In Force
Filing Date 2017-07-19
First Publication Date 2018-01-25
Grant Date 2019-10-29
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Javidnia, Hossein
  • Corcoran, Peter

Abstract

A method for improving accuracy of depth map information derived from image data descriptive of a scene. In one embodiment Mutual Feature Map data are created based on initial disparity map data values and the image data descriptive of the scene. The Mutual Feature Map data are applied to create a series of weighting functions representing structural details that can be transferred to the first disparity values to restore degraded features or replace some of the first disparity values with values more representative of structural features present in the image data descriptive of the scene.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • H04N 13/128 - Adjusting depth or disparity
  • G06T 5/20 - Image enhancement or restoration using local operators
  • G06T 7/593 - Depth or shape recovery from multiple images from stereo images
  • H04N 13/257 - Colour aspects
  • H04N 13/271 - Image signal generators wherein the generated image signals comprise depth maps or disparity maps
  • H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
  • G06K 9/46 - Extraction of features or characteristics of the image
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof

88.

Manifest file configuration based on manifest request characteristics

      
Application Number 15713179
Grant Number 10264042
Status In Force
Filing Date 2017-09-22
First Publication Date 2018-01-11
Grant Date 2019-04-16
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor Gordon, Michael

Abstract

Techniques for serving a manifest file of an adaptive streaming video include receiving a request for the manifest file from a user device. The video is encoded at different reference bitrates and each encoded reference bitrate is divided into segments to generate video segment files. The manifest file includes an ordered list of universal resource locators (URLs) that reference a set of video segment files encoded at a particular reference bitrate. A source manifest file that indicates the set of video segment files is identified based on the request. An issued manifest file that includes a first URL and a second URL is generated based on the source manifest file. The first URL references a first domain and the second URL references a second domain that is different from the first domain. The issued manifest file is transmitted to the user device as a response to the request.
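
An illustrative sketch of issuing a manifest whose segment URLs alternate between two domains follows; the HLS-style manifest text and the domain names are invented examples, not material from the patent.

```python
# Sketch: rewrite relative segment entries so consecutive segments reference
# different (assumed) CDN domains.
SOURCE_MANIFEST = """#EXTM3U
#EXTINF:6.0,
segment_000.ts
#EXTINF:6.0,
segment_001.ts
#EXTINF:6.0,
segment_002.ts
"""

DOMAINS = ["https://cdn-a.example.com/v1", "https://cdn-b.example.net/v1"]

def issue_manifest(source_manifest, domains):
    issued, segment_index = [], 0
    for line in source_manifest.splitlines():
        if line and not line.startswith("#"):
            domain = domains[segment_index % len(domains)]   # spread segments across domains
            line = f"{domain}/{line}"
            segment_index += 1
        issued.append(line)
    return "\n".join(issued)

print(issue_manifest(SOURCE_MANIFEST, DOMAINS))
```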

IPC Classes  ?

  • G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
  • H04L 29/06 - Communication control; Communication processing characterised by a protocol
  • H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
  • H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
  • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments

89.

Redirects during manifest file configuration and serving of video segment files

      
Application Number 15713188
Grant Number 10116720
Status In Force
Filing Date 2017-09-22
First Publication Date 2018-01-11
Grant Date 2018-10-30
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor Gordon, Michael

Abstract

Techniques for serving a manifest file of an adaptive streaming video include receiving a request for the manifest file from a user device. The video is encoded at different reference bitrates and each encoded reference bitrate is divided into segments to generate video segment files. The manifest file includes an ordered list of universal resource locators (URLs) that reference a set of video segment files encoded at a particular reference bitrate. A source manifest file that indicates the set of video segment files is identified based on the request. An issued manifest file that includes a first URL and a second URL is generated based on the source manifest file. The first URL references a first domain and the second URL references a second domain that is different from the first domain. The issued manifest file is transmitted to the user device as a response to the request.

IPC Classes  ?

  • H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
  • H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
  • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
  • H04L 29/06 - Communication control; Communication processing characterised by a protocol

90.

Image processing method and system for iris recognition

      
Application Number 15192186
Grant Number 09940519
Status In Force
Filing Date 2016-06-24
First Publication Date 2017-12-28
Grant Date 2018-04-10
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Andorko, Istvan
  • Drimbarean, Alexandru
  • Corcoran, Peter

Abstract

An image processing method for iris recognition of a predetermined subject, comprises acquiring through an image sensor, a probe image illuminated by an infra-red (IR) illumination source, wherein the probe image comprises one or more eye regions and is overexposed until skin portions of the image are saturated. One or more iris regions are identified within the one or more eye regions of said probe image; and the identified iris regions are analysed to detect whether they belong to the predetermined subject.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06T 7/00 - Image analysis
  • G06K 9/46 - Extraction of features or characteristics of the image
  • H04N 5/33 - Transforming infrared radiation

91.

Method and system for tracking an object

      
Application Number 15426413
Grant Number 10373052
Status In Force
Filing Date 2017-02-07
First Publication Date 2017-12-14
Grant Date 2019-08-06
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Dinu, Dragos
  • Munteanu, Mihai Constantin
  • Caliman, Alexandru

Abstract

A method of tracking an object across a stream of images comprises determining a region of interest (ROI) bounding the object in an initial frame of an image stream. A HOG map is provided for the ROI by: dividing the ROI into an array of M×N cells, each cell comprising a plurality of image pixels; and determining a HOG for each of the cells. The HOG map is stored as indicative of the features of the object. Subsequent frames are acquired from the stream of images. The frames are scanned ROI by ROI to identify a candidate ROI having a HOG map best matching the stored HOG map features. If the match meets a threshold, the stored HOG map indicative of the features of the object is updated according to the HOG map for the best matching candidate ROI.
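
A small numpy sketch of a per-cell HOG map and a correlation-style match score is given below. The cell size, bin count and similarity measure are illustrative choices, not parameters from the patent.

```python
# Sketch only: per-cell orientation histograms plus a normalized-correlation match.
import numpy as np

def hog_map(roi, cells=(4, 4), bins=8):
    """Return a (cells_y, cells_x, bins) histogram-of-oriented-gradients map."""
    gy, gx = np.gradient(roi.astype(float))
    magnitude, angle = np.hypot(gx, gy), np.mod(np.arctan2(gy, gx), np.pi)
    cell_h, cell_w = roi.shape[0] // cells[0], roi.shape[1] // cells[1]
    hist = np.zeros((cells[0], cells[1], bins))
    for cy in range(cells[0]):
        for cx in range(cells[1]):
            m = magnitude[cy*cell_h:(cy+1)*cell_h, cx*cell_w:(cx+1)*cell_w]
            a = angle[cy*cell_h:(cy+1)*cell_h, cx*cell_w:(cx+1)*cell_w]
            idx = np.minimum((a / np.pi * bins).astype(int), bins - 1)
            for b in range(bins):
                hist[cy, cx, b] = m[idx == b].sum()     # magnitude-weighted orientation bins
    return hist

def match_score(stored, candidate):
    """Normalized correlation between a stored HOG map and a candidate ROI's map."""
    s, c = stored.ravel(), candidate.ravel()
    return float(s @ c / (np.linalg.norm(s) * np.linalg.norm(c) + 1e-9))

frame = np.random.rand(64, 64)
stored = hog_map(frame[10:42, 10:42])                   # HOG map of the initial ROI
print(match_score(stored, hog_map(frame[12:44, 12:44])) > 0.5)   # nearby candidate matches well
```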

IPC Classes  ?

  • G06N 3/08 - Learning methods
  • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06T 7/269 - Analysis of motion using gradient-based methods
  • G06K 9/46 - Extraction of features or characteristics of the image
  • G06K 9/62 - Methods or arrangements for recognition using electronic means

92.

Determining manifest file data used in adaptive streaming video delivery

      
Application Number 15683462
Grant Number 10142386
Status In Force
Filing Date 2017-08-22
First Publication Date 2017-12-07
Grant Date 2018-11-27
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor Gordon, Michael

Abstract

Techniques for serving a manifest file of an adaptive streaming video include receiving a request for the manifest file from a user device. The video is encoded at different reference bitrates and each encoded reference bitrate is divided into segments to generate video segment files. The manifest file includes an ordered list of universal resource locators (URLs) that reference a set of video segment files encoded at a particular reference bitrate. A source manifest file that indicates the set of video segment files is identified based on the request. An issued manifest file that includes a first URL and a second URL is generated based on the source manifest file. The first URL references a first domain and the second URL references a second domain that is different from the first domain. The issued manifest file is transmitted to the user device as a response to the request.

IPC Classes  ?

  • H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
  • H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
  • H04L 29/06 - Communication control; Communication processing characterised by a protocol
  • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments

93.

Biometric recognition system

      
Application Number 15472624
Grant Number 10628568
Status In Force
Filing Date 2017-03-29
First Publication Date 2017-10-05
Grant Date 2020-04-21
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Zamfir, Adrian
  • Miu, Andreea Oana
  • Florea, Corneliu

Abstract

A biometric recognition system for a hand held computing device incorporating an inertial measurement unit (IMU) comprising a plurality of accelerometers and at least one gyroscope is disclosed. A tremor analysis component is arranged to: obtain from the IMU, accelerometer signals indicating device translational acceleration along each of X, Y and Z axes as well as a gyroscope signal indicating rotational velocity about the Y axis during a measurement window. Each of the IMU signals is filtered to provide filtered frequency components for the signals during the measurement window. The accelerometer signals are combined to provide a combined filtered accelerometer magnitude signal for the measurement window. A spectral density estimation is provided for each of the combined filtered accelerometer magnitude signal and the filtered gyroscope signal. An irregularity is determined for each spectral density estimation; and based on the determined irregularities, the tremor analysis component attempts to authenticate a user of the device.
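
A sketch of that signal path (band-pass filtering, combining the accelerometer axes into a magnitude, spectral density estimation, and an irregularity score) is shown below, assuming numpy and scipy are available. The 4-12 Hz tremor band, the Welch parameters and the irregularity formula are illustrative choices rather than values from the patent.

```python
# Sketch only: filter IMU signals, combine axes, estimate PSD, score irregularity.
import numpy as np
from scipy.signal import butter, filtfilt, welch

FS = 200.0                                        # assumed IMU sample rate (Hz)

def bandpass(signal, low=4.0, high=12.0):
    b, a = butter(4, [low / (FS / 2), high / (FS / 2)], btype="band")
    return filtfilt(b, a, signal)

def irregularity(signal):
    _, psd = welch(signal, fs=FS, nperseg=256)
    psd = psd / (psd.sum() + 1e-12)
    return float(np.sum(np.diff(psd) ** 2))       # jaggedness of the normalized spectrum

def tremor_features(ax, ay, az, gyro_y):
    accel_mag = np.sqrt(sum(bandpass(s) ** 2 for s in (ax, ay, az)))
    return irregularity(accel_mag), irregularity(bandpass(gyro_y))

t = np.arange(0, 5, 1 / FS)
ax = 0.02 * np.sin(2 * np.pi * 8 * t) + 0.01 * np.random.randn(t.size)
print(tremor_features(ax, np.roll(ax, 3), 0.5 * ax, 0.3 * ax))
```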

IPC Classes  ?

  • G06F 21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G01P 15/18 - Measuring acceleration; Measuring deceleration; Measuring shock, i.e. sudden change of acceleration in two or more dimensions
  • G01C 19/5776 - Signal processing not specific to any of the devices covered by groups

94.

Generating and using manifest files including content delivery network authentication data

      
Application Number 15615073
Grant Number 10084838
Status In Force
Filing Date 2017-06-06
First Publication Date 2017-09-21
Grant Date 2018-09-25
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Gordon, Michael
  • Morel, David

Abstract

Techniques for serving a manifest file of an adaptive streaming video include receiving a request for the manifest file from a user device. The video is encoded at different reference bitrates and each encoded reference bitrate is divided into segments to generate video segment files. The manifest file includes an ordered list of universal resource locators (URLs) that reference a set of video segment files encoded at a particular reference bitrate. A source manifest file that indicates the set of video segment files is identified based on the request. An issued manifest file that includes a first URL and a second URL is generated based on the source manifest file. The first URL references a first domain and the second URL references a second domain that is different from the first domain. The issued manifest file is transmitted to the user device as a response to the request.

IPC Classes  ?

  • G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
  • H04L 29/06 - Communication control; Communication processing characterised by a protocol
  • H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
  • H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
  • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
  • H04N 21/2662 - Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities

95.

Real-time video frame pre-processing hardware

      
Application Number 15469392
Grant Number 10418001
Status In Force
Filing Date 2017-03-24
First Publication Date 2017-09-07
Grant Date 2019-09-17
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Zaharia, Corneliu
  • Bigioi, Petronel
  • Corcoran, Peter

Abstract

A dynamically reconfigurable heterogeneous systolic array is configured to process a first image frame, and to generate image processing primitives from the image frame, and to store the primitives and the corresponding image frame in a memory store. A characteristic of the image frame is determined. Based on the characteristic, the array is reconfigured to process a following image frame.

IPC Classes  ?

  • G06T 1/20 - Processor architectures; Processor configuration, e.g. pipelining
  • G06F 15/80 - Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors
  • G06T 5/00 - Image enhancement or restoration
  • G09G 5/393 - Arrangements for updating the contents of the bit-mapped memory
  • G06T 1/00 - General purpose image data processing
  • H04N 5/232 - Devices for controlling television cameras, e.g. remote control
  • H04N 5/335 - Transforming light or analogous information into electric information using solid-state image sensors [SSIS]
  • H04N 9/67 - Circuits for processing colour signals for matrixing
  • G06T 1/60 - Memory management

96.

Method for correcting an acquired image

      
Application Number 15048224
Grant Number 09934559
Status In Force
Filing Date 2016-02-19
First Publication Date 2017-08-24
Grant Date 2018-04-03
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor Stec, Piotr

Abstract

G.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06T 5/00 - Image enhancement or restoration
  • G06T 7/20 - Analysis of motion
  • H04N 5/232 - Devices for controlling television cameras, e.g. remote control

97.

Image processing system

      
Application Number 15380906
Grant Number 10460198
Status In Force
Filing Date 2016-12-15
First Publication Date 2017-06-29
Grant Date 2019-10-29
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Nicoara, Nicolae
  • Raceala, Cristina
  • Zaharia, Corneliu
  • Fulop, Szabolcs
  • Iovita, Oana

Abstract

An image processing system comprises a template matching engine (TME). The TME reads an image from the memory; and as each pixel of the image is being read, calculates a respective feature value of a plurality of feature maps as a function of the pixel value. A pre-filter is responsive to a current pixel location comprising a node within a limited detector cascade to be applied to a window within the image to: compare a feature value from a selected one of the plurality of feature maps corresponding to the pixel location to a threshold value; and responsive to pixels for all nodes within a limited detector cascade to be applied to the window having been read, determine a score for the window. A classifier, responsive to the pre-filter indicating that a score for a window is below a window threshold, does not apply a longer detector cascade to the window before indicating that the window does not comprise an object to be detected.
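
A simplified sketch of the two-stage idea follows: a cheap limited cascade scores each window, and the longer cascade runs only on windows that pass. The feature definitions, node positions and thresholds below are invented; only the skip-if-below-threshold structure follows the abstract.

```python
# Sketch only: pre-filter windows with a few feature tests before the full detector.
import numpy as np

def feature_maps(image):
    gy, gx = np.gradient(image.astype(float))
    return {"intensity": image.astype(float), "grad_mag": np.hypot(gx, gy)}

# Each node of the limited cascade: (feature map name, offset in window, node threshold).
LIMITED_CASCADE = [("grad_mag", (4, 4), 5.0), ("intensity", (8, 8), 100.0),
                   ("grad_mag", (12, 12), 5.0)]
WINDOW_THRESHOLD = 2          # at least this many nodes must fire

def prefilter_pass(maps, y, x):
    score = sum(1 for name, (dy, dx), thr in LIMITED_CASCADE
                if maps[name][y + dy, x + dx] > thr)
    return score >= WINDOW_THRESHOLD

def detect(image, window=16, full_cascade=lambda window_pixels: True):
    maps, hits = feature_maps(image), []
    for y in range(0, image.shape[0] - window, 4):
        for x in range(0, image.shape[1] - window, 4):
            if not prefilter_pass(maps, y, x):
                continue                           # cheap rejection: skip the long cascade
            if full_cascade(image[y:y + window, x:x + window]):
                hits.append((y, x))
    return hits

img = np.zeros((64, 64), dtype=np.uint8)
img[20:40, 20:40] = 200                            # a bright square that should pass
print(detect(img)[:3])
```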

IPC Classes  ?

  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

98.

Method for configuring access for a limited user interface (UI) device

      
Application Number 14936457
Grant Number 10057261
Status In Force
Filing Date 2015-11-09
First Publication Date 2017-05-11
Grant Date 2018-08-21
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Corcoran, Peter
  • Bigioi, Petronel
  • Raducan, Ilariu

Abstract

A method operable by a computing device for configuring access for a limited user interface (UI) device to a network service via a local network access point is disclosed. The method comprises the steps of: obtaining from the limited UI device a device identifier via a first out-of-band channel. The device identifier is provided to the network service via a secure network link. A zero knowledge proof (ZKP) challenge is received from the network service. Configuration information is provided to the limited-UI device via a second out-of-band channel, the configuration information including information sufficient to enable the limited-UI device to connect to the local network access point. The ZKP challenge is provided to the limited-UI device via the second out-of-band channel. A secure channel key is received from the network service indicating a successful response from the limited-UI device to the ZKP challenge; and provided to the limited-UI device enabling the limited-UI device to access the network service.

IPC Classes  ?

  • H04L 29/06 - Communication control; Communication processing characterised by a protocol
  • H04W 76/02 - Connection set-up
  • H04L 29/08 - Transmission control procedure, e.g. data link level control procedure
  • H04L 12/24 - Arrangements for maintenance or administration
  • H04W 12/06 - Authentication
  • H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
  • H04W 4/80 - Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
  • H04W 76/10 - Connection setup
  • H04W 84/12 - WLAN [Wireless Local Area Networks]
  • H04W 12/04 - Key management, e.g. using generic bootstrapping architecture [GBA]
  • H04W 4/70 - Services for machine-to-machine communication [M2M] or machine type communication [MTC]

99.

Method for producing a histogram of oriented gradients

      
Application Number 15160835
Grant Number 09977985
Status In Force
Filing Date 2016-05-20
First Publication Date 2017-04-06
Grant Date 2018-05-22
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Munteanu, Mihai Constantin
  • Georgescu, Vlad
  • Zaharia, Corneliu
  • Suciu, Iulia

Abstract

n·gy, where n is any integer value with a magnitude greater than or equal to 1. At least one sector is associated with a bin; and a count of each instance of a pixel gradient of a cell associated with a bin is performed to provide a HOG for said cell.

IPC Classes  ?

  • G06K 9/46 - Extraction of features or characteristics of the image
  • G06K 9/48 - Extraction of features or characteristics of the image by coding the contour of the pattern
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06T 1/20 - Processor architectures; Processor configuration, e.g. pipelining

100.

Method and system for tracking an object

      
Application Number 15280916
Grant Number 10275709
Status In Force
Filing Date 2016-09-29
First Publication Date 2017-03-30
Grant Date 2019-04-30
Owner ADEIA MEDIA HOLDINGS INC. (USA)
Inventor
  • Munteanu, Mihai Constantin
  • Caliman, Alexandru
  • Dinu, Dragos

Abstract

A method of tracking an object across a stream of images comprises determining a region of interest (ROI) bounding the object in an initial frame of an image stream. A HOG map is provided for the ROI by: dividing the ROI into an array of M×N cells, each cell comprising a plurality of image pixels; and determining a HOG for each of the cells. The HOG map is stored as indicative of the features of the object. Subsequent frames are acquired from the stream of images. The frames are scanned ROI by ROI to identify a candidate ROI having a HOG map best matching the stored HOG map features. If the match meets a threshold, the stored HOG map indicative of the features of the object is updated according to the HOG map for the best matching candidate ROI.

IPC Classes  ?

  • G06N 3/08 - Learning methods
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
  • G06T 7/269 - Analysis of motion using gradient-based methods
  • G06K 9/46 - Extraction of features or characteristics of the image
  • G06K 9/62 - Methods or arrangements for recognition using electronic means