Perceptive Automata LLC

United States of America


1-27 of 27 for Perceptive Automata LLC
Aggregations
Date
2023 4
2022 1
2021 14
2020 6
Before 2020 2
IPC Class
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles 19
G06N 3/08 - Learning methods 14
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots 13
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads 10
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle 9
Status
Pending 3
Registered / In Force 24

1.

FRAMEWORK FOR EVALUATION OF MACHINE LEARNING BASED MODEL USED FOR AUTONOMOUS VEHICLE

      
Application Number 18308622
Status Pending
Filing Date 2023-04-27
First Publication Date 2023-11-02
Owner PERCEPTIVE AUTOMATA LLC (USA)
Inventor
  • Poltoraski, Sonia
  • Hartman, Till S.
  • Zaremba, Jeffrey Donald
  • Anthony, Samuel English
  • Goh, Chuan Yen Ian
  • Al Assad, Omar

Abstract

A system evaluates modifications to components of an autonomous vehicle (AV) stack. The system receives driving recommendations for traffic scenarios based on user annotations of video frames showing each traffic scenario. For each traffic scenario, the system predicts driving recommendations based on the AV stack. The system determines a measure of quality of driving recommendation by comparing the predicted driving recommendations based on the AV stack with the driving recommendations received for the traffic scenario. The measure of quality of driving recommendation is used for evaluating components of the AV stack. The system determines a driving recommendation for an AV corresponding to ranges of SOMAI (state of mind) score and sends signals to controls of the autonomous vehicle to navigate the autonomous vehicle according to the driving recommendation. The system identifies additional training data for training a machine learning model based on the measure of driving quality.
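
The evaluation loop described in this abstract can be sketched in Python. The function names, the match-rate metric, and the SOMAI score bands below are illustrative assumptions for demonstration, not the patented method.

```python
# Illustrative sketch only: the quality metric and the SOMAI score bands
# are assumptions for demonstration, not the patented formulas.

def recommendation_quality(predicted, annotated):
    """Measure of driving-recommendation quality: here, the fraction of
    traffic scenarios where the AV stack's predicted recommendation
    matches the human-annotated one."""
    matches = sum(p == a for p, a in zip(predicted, annotated))
    return matches / len(predicted)

def recommendation_for_somai(score):
    """Map a SOMAI (state of mind) score to a driving recommendation
    using hypothetical score ranges."""
    if score < 0.3:
        return "proceed"
    if score < 0.7:
        return "slow"
    return "yield"
```

Scenarios whose predicted recommendation disagrees with the annotation would, per the abstract, also flag candidates for additional training data.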

IPC Classes

  • G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
  • B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
  • G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
  • G06V 20/40 - Scenes; Scene-specific elements in video content

2.

SCENARIO BASED MONITORING AND CONTROL OF AUTONOMOUS VEHICLES

      
Application Number 18308626
Status Pending
Filing Date 2023-04-27
First Publication Date 2023-11-02
Owner PERCEPTIVE AUTOMATA LLC (USA)
Inventor
  • Zaremba, Jeffrey Donald
  • Goh, Chuan Yen Ian
  • Al Assad, Omar
  • Hartman, Till S.
  • Poltoraski, Sonia
  • Anthony, Samuel English

Abstract

A system evaluates modifications to components of an autonomous vehicle (AV) stack. The system receives driving recommendations for traffic scenarios based on user annotations of video frames showing each traffic scenario. For each traffic scenario, the system predicts driving recommendations based on the AV stack. The system determines a measure of quality of driving recommendation by comparing the predicted driving recommendations based on the AV stack with the driving recommendations received for the traffic scenario. The measure of quality of driving recommendation is used for evaluating components of the AV stack. The system determines a driving recommendation for an AV corresponding to ranges of SOMAI (state of mind) score and sends signals to controls of the autonomous vehicle to navigate the autonomous vehicle according to the driving recommendation. The system identifies additional training data for training a machine learning model based on the measure of driving quality.

IPC Classes

  • B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
  • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
  • B60W 40/04 - Traffic conditions

3.

EVALUATION OF COMPONENTS OF AUTONOMOUS VEHICLES BASED ON DRIVING RECOMMENDATIONS

      
Application Number 18308634
Status Pending
Filing Date 2023-04-27
First Publication Date 2023-11-02
Owner PERCEPTIVE AUTOMATA LLC (USA)
Inventor
  • Zaremba, Jeffrey Donald
  • Goh, Chuan Yen Ian
  • Al Assad, Omar
  • Hartman, Till S.
  • Poltoraski, Sonia
  • Anthony, Samuel English
  • Gowers, James

Abstract

A system evaluates modifications to components of an autonomous vehicle (AV) stack. The system receives driving recommendations for traffic scenarios based on user annotations of video frames showing each traffic scenario. For each traffic scenario, the system predicts driving recommendations based on the AV stack. The system determines a measure of quality of driving recommendation by comparing the predicted driving recommendations based on the AV stack with the driving recommendations received for the traffic scenario. The measure of quality of driving recommendation is used for evaluating components of the AV stack. The system determines a driving recommendation for an AV corresponding to ranges of SOMAI (state of mind) score and sends signals to controls of the autonomous vehicle to navigate the autonomous vehicle according to the driving recommendation. The system identifies additional training data for training a machine learning model based on the measure of driving quality.

IPC Classes

  • B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
  • G05B 13/02 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
  • G08G 1/01 - Detecting movement of traffic to be counted or controlled
  • B60W 40/06 - Road conditions
  • B60W 40/12 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to parameters of the vehicle itself

4.

Generating training data for machine learning based models for autonomous vehicles

      
Application Number 17730048
Grant Number 12097878
Status In Force
Filing Date 2022-04-26
First Publication Date 2023-10-26
Grant Date 2024-09-24
Owner PERCEPTIVE AUTOMATA LLC (USA)
Inventor Gaffin-Cahn, Elon

Abstract

A system receives information describing paths traversed by vehicles of a vehicle type, for example, a bicycle or a motorcycle. The system determines locations along the paths. For each location, the system determines a measure of likelihood of encountering vehicles of the vehicle type in traffic at the location. The system selects a subset of locations based on the measure of likelihood and obtains sensor data captured at the subset of locations. The system uses the sensor data as a training dataset for training a machine learning based model configured to receive input sensor data describing traffic and output a score used for navigation of autonomous vehicles. The machine learning model is provided to a vehicle, for example an autonomous vehicle, for navigation of the vehicle.
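
The location-selection step might look like the following sketch, where a raw encounter count stands in for the patent's measure of likelihood; the function name and data shapes are assumptions for demonstration.

```python
# Illustrative sketch: rank capture locations by how often the target
# vehicle type (e.g. bicycles) was encountered there, and keep the top k.
def select_locations(encounter_counts, k):
    """encounter_counts: {location_id: observed encounters with the target
    vehicle type}. The count is a stand-in for the likelihood measure;
    sensor data captured at the returned locations would seed training."""
    ranked = sorted(encounter_counts, key=encounter_counts.get, reverse=True)
    return ranked[:k]
```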

IPC Classes

  • B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
  • B60W 40/04 - Traffic conditions

5.

System and method of predicting human interaction with vehicles

      
Application Number 17468516
Grant Number 11753046
Status In Force
Filing Date 2021-09-07
First Publication Date 2022-05-05
Grant Date 2023-09-12
Owner PERCEPTIVE AUTOMATA LLC (USA)
Inventor
  • Anthony, Samuel English
  • Misra, Kshitij
  • Faller, Avery Wagner

Abstract

Systems and methods for predicting user interaction with vehicles. A computing device receives an image and a video segment of a road scene, each taken from the perspective of a participant in the road scene, and then generates stimulus data based on the image and the video segment. The stimulus data is transmitted to a user interface and response data is received, which includes at least one of an action and a likelihood of the action corresponding to another participant in the road scene. The computing device aggregates a subset of the response data to form statistical data, and a model is created based on the statistical data. The model is applied to another image or video segment, and a prediction of user behavior in that image or video segment is generated.
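
The aggregation of response data into statistical data could be sketched as follows; the particular summary statistics and field names here are assumptions, not the patent's specific model.

```python
# Illustrative sketch: collapse per-annotator responses into simple
# summary statistics that a model could later be fit against.
from statistics import mean, pstdev

def aggregate_responses(responses):
    """responses: per-annotator likelihoods (0..1) that another road-scene
    participant will take a given action."""
    return {"mean": mean(responses), "stdev": pstdev(responses), "n": len(responses)}
```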

IPC Classes

  • G06N 3/08 - Learning methods
  • B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G08G 1/16 - Anti-collision systems
  • G08G 1/04 - Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
  • G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
  • B60W 30/00 - Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
  • G06N 3/084 - Backpropagation, e.g. using gradient descent
  • G06V 20/40 - Scenes; Scene-specific elements in video content
  • G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition
  • G06F 18/40 - Software arrangements specially adapted for pattern recognition, e.g. user interfaces or toolboxes therefor
  • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
  • G06V 10/778 - Active pattern-learning, e.g. online learning of image or video features
  • G06N 20/10 - Machine learning using kernel methods, e.g. support vector machines [SVM]
  • G06N 5/01 - Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound

6.

Navigation of autonomous vehicles using turn aware machine learning based models for prediction of behavior of a traffic entity

      
Application Number 17321253
Grant Number 11518413
Status In Force
Filing Date 2021-05-14
First Publication Date 2021-11-18
Grant Date 2022-12-06
Owner PERCEPTIVE AUTOMATA LLC (USA)
Inventor
  • Anthony, Samuel English
  • Hartmann, Till S.
  • Maat, Jacob Reinier
  • Rose, Dylan James
  • Sylvestre, Kevin W.

Abstract

An autonomous vehicle collects sensor data of an environment surrounding the autonomous vehicle including traffic entities such as pedestrians, bicyclists, or other vehicles. The sensor data is provided to a machine learning based model along with an expected turn direction of the autonomous vehicle to determine a hidden context attribute of a traffic entity given the expected turn direction of the autonomous vehicle. The hidden context attribute of the traffic entity represents factors that affect the behavior of the traffic entity, and the hidden context attribute is used to predict future behavior of the traffic entity. Instructions to control the autonomous vehicle are generated based on the hidden context attribute.
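
One common way to condition a prediction on the vehicle's expected turn, consistent with the abstract but not necessarily the patented architecture, is to append a one-hot turn encoding to the image features before the prediction head:

```python
# Illustrative sketch: a turn-aware model input. The feature layout and
# turn categories are assumptions for demonstration.
TURNS = ("left", "straight", "right")

def model_input(image_features, turn):
    """Append a one-hot encoding of the expected turn direction to the
    image feature vector, letting the model learn that a traffic entity's
    hidden context depends on where the AV is about to go."""
    one_hot = [1.0 if t == turn else 0.0 for t in TURNS]
    return list(image_features) + one_hot
```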

IPC Classes

  • B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
  • B60W 40/04 - Traffic conditions
  • G06N 3/08 - Learning methods
  • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

7.

Ground truth based metrics for evaluation of machine learning based models for predicting attributes of traffic entities for navigating autonomous vehicles

      
Application Number 17321297
Grant Number 11840261
Status In Force
Filing Date 2021-05-14
First Publication Date 2021-11-18
Grant Date 2023-12-12
Owner PERCEPTIVE AUTOMATA LLC (USA)
Inventor
  • Hartmann, Till S.
  • Zaremba, Jeffrey D.
  • Anthony, Samuel English

Abstract

A system uses a machine learning based model to determine attributes describing states of mind and behavior of traffic entities in video frames captured by an autonomous vehicle. The system classifies video frames according to traffic scenarios depicted, where each scenario is associated with a filter based on vehicle attributes, traffic attributes, and road attributes. The system identifies a set of video frames associated with ground truth scenarios for validating the accuracy of the machine learning based model and predicts attributes of traffic entities in the video frames. The system analyzes video frames captured after the set of video frames to determine actual attributes of the traffic entities. Based on a comparison of the predicted attributes and actual attributes, the system determines a likelihood of the machine learning based model making accurate predictions and uses the likelihood to generate a navigation action table for controlling the autonomous vehicle.

IPC Classes

  • G05D 1/02 - Control of position or course in two dimensions
  • G06N 20/00 - Machine learning
  • G06N 5/04 - Inference or reasoning models
  • G06N 3/08 - Learning methods
  • G06V 20/40 - Scenes; Scene-specific elements in video content
  • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
  • G06F 18/40 - Software arrangements specially adapted for pattern recognition, e.g. user interfaces or toolboxes therefor
  • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
  • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
  • G06F 18/2113 - Selection of the most significant subset of features by ranking or filtering the set of features, e.g. using a measure of variance or of feature cross-correlation
  • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
  • G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition
  • B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles

8.

Scenario identification for validation and training of machine learning based models for autonomous vehicles

      
Application Number 17321309
Grant Number 11919545
Status In Force
Filing Date 2021-05-14
First Publication Date 2021-11-18
Grant Date 2024-03-05
Owner PERCEPTIVE AUTOMATA LLC (USA)
Inventor
  • Zaremba, Jeffrey D.
  • Hartmann, Till S.
  • Anthony, Samuel English

Abstract

A system uses a machine learning based model to determine attributes describing states of mind and behavior of traffic entities in video frames captured by an autonomous vehicle. The system classifies video frames according to traffic scenarios depicted, where each scenario is associated with a filter based on vehicle attributes, traffic attributes, and road attributes. The system identifies a set of video frames associated with ground truth scenarios for validating the accuracy of the machine learning based model and predicts attributes of traffic entities in the video frames. The system analyzes video frames captured after the set of video frames to determine actual attributes of the traffic entities. Based on a comparison of the predicted attributes and actual attributes, the system determines a likelihood of the machine learning based model making accurate predictions and uses the likelihood to generate a navigation action table for controlling the autonomous vehicle.

IPC Classes

  • B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
  • G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
  • G05D 1/02 - Control of position or course in two dimensions
  • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
  • G06F 18/2113 - Selection of the most significant subset of features by ranking or filtering the set of features, e.g. using a measure of variance or of feature cross-correlation
  • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
  • G06F 18/40 - Software arrangements specially adapted for pattern recognition, e.g. user interfaces or toolboxes therefor
  • G06N 3/08 - Learning methods
  • G06N 5/04 - Inference or reasoning models
  • G06N 20/00 - Machine learning
  • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
  • G06V 20/40 - Scenes; Scene-specific elements in video content
  • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
  • G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition

9.

Display panel of a programmed computer system with a graphical user interface

      
Application Number 29694709
Grant Number D0928804
Status In Force
Filing Date 2019-06-12
First Publication Date 2021-08-24
Grant Date 2021-08-24
Owner PERCEPTIVE AUTOMATA LLC (USA)
Inventor
  • Faller, Avery Wagner
  • Anthony, Samuel English

10.

Display panel of a programmed computer system with a graphical user interface

      
Application Number 29694706
Grant Number D0928803
Status In Force
Filing Date 2019-06-12
First Publication Date 2021-08-24
Grant Date 2021-08-24
Owner PERCEPTIVE AUTOMATA LLC (USA)
Inventor
  • Faller, Avery Wagner
  • Anthony, Samuel English

11.

Display panel of a programmed computer system with a graphical user interface

      
Application Number 29694707
Grant Number D0928177
Status In Force
Filing Date 2019-06-12
First Publication Date 2021-08-17
Grant Date 2021-08-17
Owner PERCEPTIVE AUTOMATA LLC (USA)
Inventor
  • Faller, Avery Wagner
  • Anthony, Samuel English

12.

System and method of predicting human interaction with vehicles

      
Application Number 17190631
Grant Number 11981352
Status In Force
Filing Date 2021-03-03
First Publication Date 2021-06-17
Grant Date 2024-05-14
Owner PERCEPTIVE AUTOMATA LLC (USA)
Inventor
  • Anthony, Samuel English
  • Misra, Kshitij
  • Faller, Avery Wagner

Abstract

Systems and methods for predicting user interaction with vehicles. A computing device receives an image and a video segment of a road scene, each taken from the perspective of a participant in the road scene, and then generates stimulus data based on the image and the video segment. The stimulus data is transmitted to a user interface and response data is received, which includes at least one of an action and a likelihood of the action corresponding to another participant in the road scene. The computing device aggregates a subset of the response data to form statistical data, and a model is created based on the statistical data. The model is applied to another image or video segment, and a prediction of user behavior in that image or video segment is generated.

IPC Classes

  • G06N 3/04 - Architecture, e.g. interconnection topology
  • B60W 30/00 - Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
  • B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
  • G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
  • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
  • G06F 18/40 - Software arrangements specially adapted for pattern recognition, e.g. user interfaces or toolboxes therefor
  • G06N 3/08 - Learning methods
  • G06N 3/084 - Backpropagation, e.g. using gradient descent
  • G06V 10/778 - Active pattern-learning, e.g. online learning of image or video features
  • G06V 20/40 - Scenes; Scene-specific elements in video content
  • G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition
  • G08G 1/04 - Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
  • G08G 1/16 - Anti-collision systems
  • G06N 5/01 - Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
  • G06N 20/10 - Machine learning using kernel methods, e.g. support vector machines [SVM]

13.

System and method of predicting human interaction with vehicles

      
Application Number 17190619
Grant Number 11987272
Status In Force
Filing Date 2021-03-03
First Publication Date 2021-06-17
Grant Date 2024-05-21
Owner PERCEPTIVE AUTOMATA LLC (USA)
Inventor
  • Anthony, Samuel English
  • Misra, Kshitij
  • Faller, Avery Wagner

Abstract

Systems and methods for predicting user interaction with vehicles. A computing device receives an image and a video segment of a road scene, each taken from the perspective of a participant in the road scene, and then generates stimulus data based on the image and the video segment. The stimulus data is transmitted to a user interface and response data is received, which includes at least one of an action and a likelihood of the action corresponding to another participant in the road scene. The computing device aggregates a subset of the response data to form statistical data, and a model is created based on the statistical data. The model is applied to another image or video segment, and a prediction of user behavior in that image or video segment is generated.

IPC Classes

  • G06N 3/08 - Learning methods
  • B60W 30/00 - Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
  • B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
  • G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
  • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
  • G06F 18/40 - Software arrangements specially adapted for pattern recognition, e.g. user interfaces or toolboxes therefor
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06N 3/084 - Backpropagation, e.g. using gradient descent
  • G06V 10/778 - Active pattern-learning, e.g. online learning of image or video features
  • G06V 20/40 - Scenes; Scene-specific elements in video content
  • G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition
  • G08G 1/04 - Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
  • G08G 1/16 - Anti-collision systems
  • G06N 5/01 - Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
  • G06N 20/10 - Machine learning using kernel methods, e.g. support vector machines [SVM]
  • G06V 10/62 - Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking

14.

Generating training datasets for training machine learning based models for predicting behavior of traffic entities for navigating autonomous vehicles

      
Application Number 17081202
Grant Number 12012118
Status In Force
Filing Date 2020-10-27
First Publication Date 2021-05-06
Grant Date 2024-06-18
Owner PERCEPTIVE AUTOMATA LLC (USA)
Inventor Faller, Avery Wagner

Abstract

A vehicle collects video data of an environment surrounding the vehicle including traffic entities, e.g., pedestrians, bicyclists, or other vehicles. The captured video data is sampled and the sampled video frames are presented to users to provide input on a traffic entity's state of mind. The system determines an attribute value that describes a statistical distribution of user responses for the traffic entity. If the attribute for a sampled video frame is within a threshold of the attribute of another video frame, the system interpolates the attribute for a third video frame between the two sampled video frames. Otherwise, the system requests further user input for a video frame captured between the two sampled video frames. The interpolated and/or user-based attributes are used to train a machine learning based model that predicts a hidden context of the traffic entity. The trained model is used for navigation of autonomous vehicles.
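
The interpolate-or-ask decision described above can be sketched as follows; the linear interpolation and the threshold value are illustrative assumptions, not the patented procedure.

```python
# Illustrative sketch: interpolate an annotation attribute for a frame
# lying between two user-annotated frames, or signal that fresh user
# input is needed when the two annotations differ too much.
def attribute_for_frame(frame, f0, v0, f1, v1, threshold=0.1):
    """f0/f1: sampled frame indices; v0/v1: their attribute values.
    Returns an interpolated value, or None to request new annotations."""
    if abs(v1 - v0) > threshold:
        return None  # attributes too far apart: ask users about an in-between frame
    t = (frame - f0) / (f1 - f0)
    return v0 + t * (v1 - v0)
```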

IPC Classes

  • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
  • B60W 30/095 - Predicting travel path or likelihood of collision
  • B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
  • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
  • G06N 20/00 - Machine learning
  • G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
  • G06V 10/778 - Active pattern-learning, e.g. online learning of image or video features
  • G06V 20/40 - Scenes; Scene-specific elements in video content
  • G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition

15.

Adaptive sampling of stimuli for training of machine learning based models for predicting hidden context of traffic entities for navigating autonomous vehicles

      
Application Number 17081211
Grant Number 11615266
Status In Force
Filing Date 2020-10-27
First Publication Date 2021-05-06
Grant Date 2023-03-28
Owner PERCEPTIVE AUTOMATA LLC (USA)
Inventor Faller, Avery Wagner

Abstract

A vehicle collects video data of an environment surrounding the vehicle including traffic entities, e.g., pedestrians, bicyclists, or other vehicles. The captured video data is sampled and presented to users to provide input on a traffic entity's state of mind. The user responses on the captured video data are used to generate a training dataset. A machine learning based model configured to predict a traffic entity's state of mind is trained with the training dataset. The system determines input video frames and associated dimension attributes for which the model performs poorly. The dimension attributes characterize stimuli and/or an environment shown in the input video frames. The system generates a second training dataset based on video frames that have the dimension attributes for which the model performed poorly. The model is retrained using the second training dataset and provided to an autonomous vehicle to assist with navigation in traffic.
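
Identifying the dimension attributes on which the model performs poorly might be sketched like this; the error threshold and the attribute names are assumptions for demonstration.

```python
# Illustrative sketch: find dimension attributes (e.g. "night", "rain")
# whose frames the model handles worst; frames tagged with these
# attributes would seed the second training dataset.
def weak_attributes(errors_by_attribute, max_error=0.2):
    """errors_by_attribute: {dimension attribute: mean model error on
    frames carrying that attribute}. Returns attributes over threshold,
    sorted for deterministic output."""
    return sorted(a for a, e in errors_by_attribute.items() if e > max_error)
```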

IPC Classes

  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G08G 1/01 - Detecting movement of traffic to be counted or controlled
  • G06N 20/00 - Machine learning
  • G05D 1/02 - Control of position or course in two dimensions
  • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
  • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
  • G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots

16.

Neural networks for navigation of autonomous vehicles based upon predicted human intents

      
Application Number 17071115
Grant Number 11993291
Status In Force
Filing Date 2020-10-15
First Publication Date 2021-04-22
Grant Date 2024-05-28
Owner PERCEPTIVE AUTOMATA LLC (USA)
Inventor Mccurrie, Mel

Abstract

A system uses neural networks to determine intents of traffic entities (e.g., pedestrians, bicycles, vehicles) in an environment surrounding a vehicle (e.g., an autonomous vehicle) and generates commands to control the vehicle based on the determined intents. The system receives images of the environment captured by sensors on the vehicle, and processes the images using neural network models to determine overall intents or predicted actions of the one or more traffic entities within the images. The system generates commands to control the vehicle based on the determined overall intents of the traffic entities.

IPC Classes

  • B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
  • G06N 3/045 - Combinations of networks
  • G06V 10/20 - Image preprocessing
  • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
  • G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestriansRecognition of traffic objects, e.g. traffic signs, traffic lights or roads
  • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

17.

Visualizing machine learning predictions of human interaction with vehicles

      
Application Number 17067470
Grant Number 11551030
Status In Force
Filing Date 2020-10-09
First Publication Date 2021-04-15
Grant Date 2023-01-10
Owner PERCEPTIVE AUTOMATA LLC (USA)
Inventor Cope, Stephen

Abstract

A computing device accesses video data displaying one or more traffic entities and generates a plurality of sequences from the video data. For each sequence, the computing device identifies a plurality of stimuli in the sequence and applies a machine learning model to generate an output describing the traffic entity. The computing device generates a data structure for storing, for each sequence, information describing the sequence and linking frame indexes of stimuli from the sequence to outputs of the machine learning model. The computing device stores the data structure in association with the video data. Responsive to receiving a selection of a sequence, the computing device loads video data for the sequence. Responsive to receiving a selection of a traffic entity within the video data, the computing device generates a graphical display element including the machine learning model output for the selected traffic entity.
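
The data structure linking frame indexes of stimuli to model outputs could be sketched as a nested mapping; the exact layout and field names here are assumptions, not the patented structure.

```python
# Illustrative sketch: index machine learning outputs by sequence and
# frame so a selected traffic entity's output can be looked up for display.
def build_index(sequences):
    """sequences: {sequence_id: [(frame_index, model_output), ...]}.
    Returns {sequence_id: {frame_index: model_output}}."""
    return {sid: dict(frames) for sid, frames in sequences.items()}
```

A visualization layer could then answer "what did the model say about this entity at this frame?" with a constant-time lookup.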

IPC Classes

  • G06V 20/40 - Scenes; Scene-specific elements in video content
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06T 11/20 - Drawing from basic elements, e.g. lines or circles
  • G06F 3/04842 - Selection of displayed objects or displayed text elements
  • G06N 20/00 - Machine learning
  • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition

18.

Neural network based prediction of hidden context of traffic entities for autonomous vehicles

      
Application Number 16932680
Grant Number 11572083
Status In Force
Filing Date 2020-07-17
First Publication Date 2021-01-28
Grant Date 2023-02-07
Owner PERCEPTIVE AUTOMATA LLC (USA)
Inventor Maat, Jacob Reinier

Abstract

An autonomous vehicle uses machine learning based models such as neural networks to predict hidden context attributes associated with traffic entities. The hidden context represents behavior of the traffic entities in the traffic. The machine learning based model is configured to receive a video frame as input and output likelihoods of receiving user responses having particular ordinal values. The system uses a loss function based on a cumulative histogram of user responses corresponding to various ordinal values. The system identifies user responses that are unlikely to be valid user responses to generate training data for training the machine learning model. The system identifies invalid user responses based on response time of the user responses.
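
The abstract names a loss over cumulative histograms of ordinal responses but gives no formula. One common form, shown below as an assumption, compares the model's cumulative distribution with the empirical cumulative histogram of user responses via a squared difference; the patent's actual loss may differ.

```python
# Hedged sketch of a cumulative-histogram (CDF-difference) loss over
# ordinal user responses. The squared-difference form is an assumption,
# not the loss specified in the patent.

def cumulative(xs):
    total, out = 0.0, []
    for x in xs:
        total += x
        out.append(total)
    return out

def cdf_loss(predicted_probs, response_counts):
    """predicted_probs: model likelihoods per ordinal value (sums to 1).
    response_counts: user-response histogram per ordinal value."""
    n = sum(response_counts)
    empirical = [c / n for c in response_counts]
    p_cdf = cumulative(predicted_probs)
    e_cdf = cumulative(empirical)
    return sum((p - e) ** 2 for p, e in zip(p_cdf, e_cdf))

# A perfect match between prediction and responses gives zero loss.
loss0 = cdf_loss([0.25, 0.25, 0.5], [1, 1, 2])
```

Comparing CDFs rather than raw histograms penalizes predictions that miss by several ordinal steps more than near misses, which suits ordered response scales.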

IPC Classes

  • G06V 10/50 - Extraction of image or video features by performing operations within image blocks; Extraction of image or video features by using histograms, e.g. histogram of oriented gradients [HoG]; Extraction of image or video features by summing image-intensity values; Projection analysis
  • G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
  • B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
  • B60W 40/02 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to ambient conditions
  • G06N 3/08 - Learning methods
  • B60W 40/09 - Driving style or behaviour
  • G05D 1/02 - Control of position or course in two dimensions
  • G06N 3/04 - Architecture, e.g. interconnection topology

19.

Filtering user responses for generating training data for machine learning based models for navigation of autonomous vehicles

      
Application Number 16932681
Grant Number 11763163
Status In Force
Filing Date 2020-07-17
First Publication Date 2021-01-28
Grant Date 2023-09-19
Owner PERCEPTIVE AUTOMATA LLC (USA)
Inventor Maat, Jacob Reinier

Abstract

An autonomous vehicle uses machine learning based models such as neural networks to predict hidden context attributes associated with traffic entities. The hidden context represents behavior of the traffic entities in the traffic. The machine learning based model is configured to receive a video frame as input and output likelihoods of receiving user responses having particular ordinal values. The system uses a loss function based on a cumulative histogram of user responses corresponding to various ordinal values. The system identifies user responses that are unlikely to be valid user responses to generate training data for training the machine learning model. The system identifies invalid user responses based on response time of the user responses.
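
The abstract's response-time filter can be sketched as a simple range check: responses faster than a plausible human reaction time, or implausibly slow, are dropped before training. The thresholds below are hypothetical, not values from the patent.

```python
# Illustrative filter for likely-invalid user responses based on response
# time. Threshold values are hypothetical.

MIN_MS = 300    # faster than a plausible human reaction -> likely invalid
MAX_MS = 20000  # implausibly slow -> likely inattentive

def filter_responses(responses):
    """responses: list of (ordinal_value, response_time_ms) tuples."""
    return [r for r in responses if MIN_MS <= r[1] <= MAX_MS]

kept = filter_responses([(3, 120), (4, 900), (2, 45000), (5, 1500)])
```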

IPC Classes

  • B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
  • B60W 40/02 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to ambient conditions
  • B60W 40/09 - Driving style or behaviour
  • G06N 3/084 - Backpropagation, e.g. using gradient descent
  • G06N 3/08 - Learning methods
  • G05D 1/02 - Control of position or course in two dimensions
  • G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
  • G06N 3/048 - Activation functions
  • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
  • G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
  • G06V 10/776 - Validation; Performance evaluation
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
  • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
  • G06V 10/50 - Extraction of image or video features by performing operations within image blocks; Extraction of image or video features by using histograms, e.g. histogram of oriented gradients [HoG]; Extraction of image or video features by summing image-intensity values; Projection analysis

20.

Machine learning based prediction of human interactions with autonomous vehicles

      
Application Number 16828823
Grant Number 11126889
Status In Force
Filing Date 2020-03-24
First Publication Date 2020-09-17
Grant Date 2021-09-21
Owner PERCEPTIVE AUTOMATA LLC (USA)
Inventor
  • Anthony, Samuel English
  • Misra, Kshitij
  • Faller, Avery Wagner

Abstract

Systems and methods for predicting user interaction with vehicles. A computing device receives an image and a video segment of a road scene, the first at least one of an image and a video segment being taken from a perspective of a participant in the road scene and then generates stimulus data based on the image and the video segment. Stimulus data is transmitted to a user interface and response data is received, which includes at least one of an action and a likelihood of the action corresponding to another participant in the road scene. The computing device aggregates a subset of the plurality of response data to form statistical data and a model is created based on the statistical data. The model is applied to another image or video segment and a prediction of user behavior in the another image or video segment is generated.
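
The aggregation step the abstract describes — collecting a subset of response data into statistical data per stimulus — can be sketched as a count-and-mean summary. The pair format and field names here are illustrative assumptions.

```python
# Minimal sketch of aggregating per-stimulus response data into summary
# statistics. The (stimulus_id, likelihood) format is an assumption.

def aggregate(responses):
    """responses: list of (stimulus_id, likelihood) pairs."""
    stats = {}
    for sid, p in responses:
        s = stats.setdefault(sid, {"n": 0, "sum": 0.0})
        s["n"] += 1
        s["sum"] += p
    return {sid: {"n": s["n"], "mean": s["sum"] / s["n"]}
            for sid, s in stats.items()}

stats = aggregate([("s1", 0.8), ("s1", 0.6), ("s2", 0.1)])
```

A model trained on such summaries learns to predict the distribution of human judgments for a stimulus rather than a single annotator's answer.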

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06N 3/08 - Learning methods
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G08G 1/16 - Anti-collision systems
  • G08G 1/04 - Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
  • G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
  • B60W 30/00 - Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
  • G06N 5/00 - Computing arrangements using knowledge-based models
  • G06N 20/10 - Machine learning using kernel methods, e.g. support vector machines [SVM]

21.

Symbolic modeling and simulation of non-stationary traffic objects for testing and development of autonomous vehicle systems

      
Application Number 16709788
Grant Number 11667301
Status In Force
Filing Date 2019-12-10
First Publication Date 2020-08-06
Grant Date 2023-06-06
Owner PERCEPTIVE AUTOMATA LLC (USA)
Inventor
  • Misra, Kshitij
  • Anthony, Samuel English

Abstract

A system performs modeling and simulation of non-stationary traffic entities for testing and development of modules used in an autonomous vehicle system. The system uses a machine learning based model that predicts hidden context attributes for traffic entities that may be encountered by a vehicle in traffic. The system generates simulation data for testing and development of modules that help navigate autonomous vehicles. The generated simulation data may be image or video data including representations of traffic entities, for example, pedestrians, bicyclists, and other vehicles. The system may generate simulation data using generative adversarial neural networks.

IPC Classes

  • G01C 22/00 - Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers or using pedometers
  • G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
  • B60W 50/00 - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit
  • B60W 40/04 - Traffic conditions
  • B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
  • G06N 3/08 - Learning methods
  • G08G 1/01 - Detecting movement of traffic to be counted or controlled
  • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

22.

Probabilistic neural network for predicting hidden context of traffic entities for autonomous vehicles

      
Application Number 16783845
Grant Number 11467579
Status In Force
Filing Date 2020-02-06
First Publication Date 2020-08-06
Grant Date 2022-10-11
Owner PERCEPTIVE AUTOMATA LLC (USA)
Inventor
  • Maat, Jacob Reinier
  • Anthony, Samuel English

Abstract

An autonomous vehicle uses probabilistic neural networks to predict hidden context attributes associated with traffic entities. The hidden context represents behavior of the traffic entities in the traffic. The probabilistic neural network is configured to receive an image of traffic as input and generate output representing hidden context for a traffic entity displayed in the image. The system executes the probabilistic neural network to generate output representing hidden context for traffic entities encountered while navigating through traffic. The system determines a measure of uncertainty for the output values. The autonomous vehicle uses the measure of uncertainty generated by the probabilistic neural network during navigation.
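
One standard way to obtain the uncertainty measure the abstract mentions is to run repeated stochastic forward passes (e.g. Monte Carlo dropout) and take the variance of the outputs. The sketch below replaces the real network with a stand-in sampler and only shows the mean/variance bookkeeping; it is an assumption about the mechanism, not the patented method.

```python
# Sketch of deriving an uncertainty measure from repeated stochastic
# forward passes of a probabilistic network. The network itself is
# replaced by a hypothetical stand-in sampler.

import random

def stochastic_forward(rng):
    # Stand-in for one stochastic pass of the network on a fixed image.
    return 0.7 + rng.uniform(-0.05, 0.05)

def predict_with_uncertainty(n_samples=200, seed=0):
    rng = random.Random(seed)
    samples = [stochastic_forward(rng) for _ in range(n_samples)]
    mean = sum(samples) / n_samples
    var = sum((s - mean) ** 2 for s in samples) / n_samples
    return mean, var

mean, var = predict_with_uncertainty()
```

A planner can then treat a high-variance hidden-context estimate more conservatively than a confident one.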

IPC Classes

  • G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
  • G06N 7/00 - Computing arrangements based on specific mathematical models
  • G06N 3/08 - Learning methods
  • G05D 1/02 - Control of position or course in two dimensions
  • B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles

23.

Automatic braking of autonomous vehicles using machine learning based prediction of behavior of a traffic entity

      
Application Number 16777386
Grant Number 11733703
Status In Force
Filing Date 2020-01-30
First Publication Date 2020-07-30
Grant Date 2023-08-22
Owner PERCEPTIVE AUTOMATA LLC (USA)
Inventor Anthony, Samuel English

Abstract

An autonomous vehicle uses machine learning based models to predict hidden context attributes associated with traffic entities. The system uses the hidden context to predict behavior of people near a vehicle in a way that more closely resembles how human drivers would judge the behavior. The system determines an activation threshold value for a braking system of the autonomous vehicle based on the hidden context. The system modifies a world model based on the hidden context predicted by the machine learning based model. The autonomous vehicle is safely navigated, such that the vehicle stays at least a threshold distance away from traffic entities.
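
How a hidden-context score might modulate a braking activation threshold can be sketched as below. The linear mapping and all constants are purely illustrative assumptions; the patent does not disclose this formula here.

```python
# Hypothetical sketch: a hidden-context (e.g. pedestrian intent) score
# raises the braking activation threshold so the vehicle brakes earlier.
# Constants and the linear mapping are illustrative only.

BASE_THRESHOLD_M = 10.0  # brake when an entity is within this distance

def activation_threshold(intent_score):
    """Higher predicted intent to cross -> brake earlier (larger threshold)."""
    return BASE_THRESHOLD_M * (1.0 + intent_score)

def should_brake(distance_m, intent_score):
    return distance_m <= activation_threshold(intent_score)

cautious = should_brake(15.0, 0.9)  # high intent: threshold 19 m -> brake
relaxed = should_brake(15.0, 0.1)   # low intent: threshold 11 m -> no brake
```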

IPC Classes

  • G05D 1/02 - Control of position or course in two dimensions
  • B60W 30/095 - Predicting travel path or likelihood of collision
  • G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
  • B60W 40/09 - Driving style or behaviour
  • G06N 20/00 - Machine learning
  • B60W 30/09 - Taking automatic action to avoid collision, e.g. braking and steering
  • B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles

24.

Navigating autonomous vehicles based on modulation of a world model representing traffic entities

      
Application Number 16777673
Grant Number 11520346
Status In Force
Filing Date 2020-01-30
First Publication Date 2020-07-30
Grant Date 2022-12-06
Owner PERCEPTIVE AUTOMATA LLC (USA)
Inventor Anthony, Samuel English

Abstract

An autonomous vehicle uses machine learning based models to predict hidden context attributes associated with traffic entities. The system uses the hidden context to predict behavior of people near a vehicle in a way that more closely resembles how human drivers would judge the behavior. The system determines an activation threshold value for a braking system of the autonomous vehicle based on the hidden context. The system modifies a world model based on the hidden context predicted by the machine learning based model. The autonomous vehicle is safely navigated, such that the vehicle stays at least a threshold distance away from traffic entities.

IPC Classes

  • G05D 1/02 - Control of position or course in two dimensions
  • B60W 30/095 - Predicting travel path or likelihood of collision
  • G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
  • B60W 40/09 - Driving style or behaviour
  • G06N 20/00 - Machine learning
  • B60W 30/09 - Taking automatic action to avoid collision, e.g. braking and steering
  • B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles

25.

Neural network based modeling and simulation of non-stationary traffic objects for testing and development of autonomous vehicle systems

      
Application Number 16709790
Grant Number 11772663
Status In Force
Filing Date 2019-12-10
First Publication Date 2020-06-11
Grant Date 2023-10-03
Owner PERCEPTIVE AUTOMATA LLC (USA)
Inventor Anthony, Samuel English

Abstract

A system performs modeling and simulation of non-stationary traffic entities for testing and development of modules used in an autonomous vehicle system. The system uses a machine learning based model that predicts hidden context attributes for traffic entities that may be encountered by a vehicle in traffic. The system generates simulation data for testing and development of modules that help navigate autonomous vehicles. The generated simulation data may be image or video data including representations of traffic entities, for example, pedestrians, bicyclists, and other vehicles. The system may generate simulation data using generative adversarial neural networks.

IPC Classes

  • B60W 50/00 - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit
  • G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
  • B60W 40/04 - Traffic conditions
  • B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
  • G06N 3/08 - Learning methods
  • G08G 1/01 - Detecting movement of traffic to be counted or controlled
  • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

26.

System and method of predicting human interaction with vehicles

      
Application Number 16512560
Grant Number 10614344
Status In Force
Filing Date 2019-07-16
First Publication Date 2019-11-07
Grant Date 2020-04-07
Owner PERCEPTIVE AUTOMATA LLC (USA)
Inventor
  • Anthony, Samuel English
  • Misra, Kshitij
  • Faller, Avery Wagner

Abstract

A computing device receives an image and a video segment of a road scene, the first at least one of an image and a video segment being taken from a perspective of a participant in the road scene and then generates stimulus data based on the image and the video segment. Stimulus data is transmitted to a user interface and response data is received, which includes at least one of an action and a likelihood of the action corresponding to another participant in the road scene. The computing device aggregates a subset of the plurality of response data to form statistical data and a model is created based on the statistical data. The model is applied to another image or video segment and a prediction of user behavior in the another image or video segment is generated.

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06N 3/08 - Learning methods
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G08G 1/16 - Anti-collision systems
  • G08G 1/04 - Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
  • G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
  • B60W 30/00 - Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
  • G06N 5/00 - Computing arrangements using knowledge-based models
  • G06N 20/10 - Machine learning using kernel methods, e.g. support vector machines [SVM]

27.

System and method of predicting human interaction with vehicles

      
Application Number 15830549
Grant Number 10402687
Status In Force
Filing Date 2017-12-04
First Publication Date 2019-01-10
Grant Date 2019-09-03
Owner PERCEPTIVE AUTOMATA LLC (USA)
Inventor
  • Anthony, Samuel English
  • Misra, Kshitij
  • Faller, Avery Wagner

Abstract

Systems and methods for predicting user interaction with vehicles. A computing device receives an image and a video segment of a road scene, the first at least one of an image and a video segment being taken from a perspective of a participant in the road scene and then generates stimulus data based on the image and the video segment. Stimulus data is transmitted to a user interface and response data is received, which includes at least one of an action and a likelihood of the action corresponding to another participant in the road scene. The computing device aggregates a subset of the plurality of response data to form statistical data and a model is created based on the statistical data. The model is applied to another image or video segment and a prediction of user behavior in the another image or video segment is generated.

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06N 3/08 - Learning methods
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G08G 1/16 - Anti-collision systems
  • G08G 1/04 - Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
  • G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots