Wide-angle optical systems with protective and heating layers are disclosed. A wide-angle optical system can include a fisheye lens, a protective layer disposed over an outer surface of the fisheye lens, and a heating layer disposed on an inner surface of the protective layer facing the fisheye lens. The heating layer may comprise a transparent conductive layer configured to provide heat to the protective layer in response to receiving electric current from an electric power supply.
G02B 1/14 - Protective coatings, e.g. hard coatings
G02B 1/16 - Optical coatings produced by application to, or surface treatment of, optical elements having an anti-static effect, e.g. electrically conducting coatings
G02B 7/02 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses
Provided are methods for object orientation determination, which can include obtaining map parameters and group parameters and determining orientation data using said map and group parameters. Some methods described also include obtaining sensor data and using the sensor data for the determination of orientation data. Systems and computer program products are also provided.
B60W 40/02 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to ambient conditions
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
3.
METHODS FOR PASSENGER AUTHENTICATION AND DOOR OPERATION FOR AUTONOMOUS VEHICLES
Disclosed are computer systems and techniques for authenticating a passenger of an autonomous vehicle and operating doors of the autonomous vehicle. For passenger authentication, the computer system is configured to receive a ride request, generate a passcode, transmit the passcode to a user account and the autonomous vehicle, authenticate the user using the passcode, and enable departure of the autonomous vehicle. For door operation, the computer system is configured to detect environmental conditions surrounding an autonomous vehicle, determine based on a set of operational conditions whether one or more doors of the autonomous vehicle are desirable for use, and operate a door if the door is safe to operate and desirable for operation.
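As a rough sketch of the passcode flow described in this abstract, the following models the generate, transmit, and authenticate steps in Python. All names (`RideService`, `send_to_user`, `enable_departure`) and the six-digit code format are illustrative assumptions, not details from the disclosure.

```python
import secrets

class RideService:
    """Minimal sketch of the passcode handshake; not the disclosed implementation."""

    def __init__(self):
        self.pending = {}  # ride_id -> one-time passcode

    def request_ride(self, ride_id, user_account, vehicle_id):
        # Generate a passcode and transmit it to both the user account and the AV.
        passcode = f"{secrets.randbelow(10**6):06d}"
        self.pending[ride_id] = passcode
        self.send_to_user(user_account, passcode)
        self.send_to_vehicle(vehicle_id, passcode)

    def authenticate(self, ride_id, entered_code):
        # The AV compares the passenger's entry with the shared passcode;
        # departure is enabled only on a match.
        if secrets.compare_digest(self.pending.get(ride_id, ""), entered_code):
            self.enable_departure(ride_id)
            return True
        return False

    # Hypothetical transport and actuation stubs.
    def send_to_user(self, account, code): ...
    def send_to_vehicle(self, vehicle, code): ...
    def enable_departure(self, ride_id): ...
```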
B60R 25/24 - Means to switch the anti-theft system on or off using electronic identifiers containing a code not memorised by the user
B60R 13/00 - Elements for body-finishing, identifying, or decorating; Arrangements or adaptations for advertising purposes
B60R 25/01 - Fittings or systems for preventing or indicating unauthorised use or theft of vehicles operating on vehicle systems or fittings, e.g. on doors, seats or windscreens
B60R 25/25 - Means to switch the anti-theft system on or off using biometry
B60R 25/31 - Detection related to theft or to other events relevant to anti-theft systems of human presence inside or outside the vehicle
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G08G 1/00 - Traffic control systems for road vehicles
Provided are methods for lateral path commitment, which can include obtaining scene data associated with a scene of an autonomous vehicle, the autonomous vehicle navigating the scene according to a first trajectory, detecting a change to the scene based on the scene data, and based on the change to the scene, generating a plurality of second trajectories, wherein the plurality of second trajectories are different from the first trajectory. Some methods described also include comparing the plurality of second trajectories with the first trajectory, selecting a particular trajectory of the plurality of second trajectories for the autonomous vehicle based on a lateral movement plan and the comparing of the plurality of second trajectories with the first trajectory, and navigating the autonomous vehicle according to the particular trajectory. Systems and computer program products are also provided.
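A minimal sketch of the compare-and-select step, assuming each trajectory is an (N, 2) array of waypoints of common length and that the lateral movement plan reduces to a desired lateral offset; the additive scoring is an illustrative stand-in for the disclosed comparison.

```python
import numpy as np

def select_second_trajectory(first_traj, second_trajs, desired_lateral_offset):
    """Compare candidate second trajectories with the committed first
    trajectory and a lateral movement plan; return the best candidate."""
    def score(traj):
        deviation = np.linalg.norm(traj - first_traj, axis=1).mean()  # closeness to commitment
        lateral_error = abs(traj[-1, 1] - desired_lateral_offset)     # lateral-plan adherence
        return deviation + lateral_error
    return min(second_trajs, key=score)
```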
Provided are methods for customized tags for annotating sensor data, which can include receiving vehicle data associated with a vehicle, identifying a location of the vehicle based at least in part on the vehicle data, identifying a feature, in a map based coordinate system, associated with the location of the vehicle, transposing a location of the feature from the map based coordinate system to an image sensor based coordinate system, identifying a region of interest in the image sensor based coordinate system based at least in part on the feature, and routing the region of interest to an image sensor for image signal processing of sensor data based on the region of interest. Systems and computer program products are also provided.
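The transposition step maps naturally onto a rigid transform followed by a pinhole projection. The sketch below assumes a 4x4 camera-from-map extrinsic `T_cam_from_map` and a 3x3 intrinsic matrix `K`, and pads the projected feature into a rectangular region of interest; both names and the fixed padding are hypothetical.

```python
import numpy as np

def feature_to_roi(feature_xyz, T_cam_from_map, K, pad_px=20):
    """Transpose a map-frame feature into the image sensor frame and
    derive a rectangular region of interest around its projection."""
    p_cam = T_cam_from_map @ np.append(feature_xyz, 1.0)  # map -> camera frame
    if p_cam[2] <= 0:
        return None                         # feature behind the camera: no ROI
    u, v, w = K @ p_cam[:3]                 # pinhole projection
    u, v = u / w, v / w
    return (u - pad_px, v - pad_px, u + pad_px, v + pad_px)
```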
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations
H04N 23/88 - Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
6.
ENRICHING FEATURE MAPS USING MULTIPLE PLURALITIES OF WINDOWS TO GENERATE BOUNDING BOXES
A perception system may be used to generate bounding boxes for objects in a vehicle scene. The perception system may receive images and feature maps corresponding to the received images. The perception system may generate multiple pluralities of windows and use the multiple pluralities of windows to enrich semantic data of the feature maps. The perception system may use the enriched semantic data to generate one or more bounding boxes for objects in the vehicle scene.
G06V 10/77 - Processing image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
B60R 1/28 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with an adjustable field of view
G06T 3/16 - Spatio-temporal transformations, e.g. video cubism
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
H04N 5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects
7.
MOTION FORECASTING IN AUTONOMOUS VEHICLES USING A MACHINE LEARNING MODEL TRAINED WITH CYCLE CONSISTENCY LOSS
Systems and methods are disclosed for motion forecasting in autonomous vehicles using a machine learning model trained with cycle consistency loss. Some machine learning models are trained to predict future object motion based on past observed motion, using ground truth knowledge of future object motion. In practice, such models are often inaccurate and thus unsuitable for safety-critical operations. Disclosed herein is an improved training mechanism for an object prediction model, which training mechanism utilizes cycle consistency loss. This loss can be calculated using an inverted motion prediction: that is, given observed motion and a predicted future motion, how likely it is that the predicted future motion, if passed through the model as if it were historical data, would result in a prediction of the observed motion. Training based on inverted or backward motion prediction can improve the ability of a machine learning model to accurately predict future motion based on observed motion.
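One plausible reading of the cycle term, sketched with PyTorch: the time-reversed prediction is fed back through the model as if it were history, and the backward pass is penalized for failing to recover the (reversed) observed motion. The tensor layout, equal-length history and future windows, and the MSE formulation are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def reverse_time(traj):
    # traj: (batch, time, 2) waypoints; flip along the time axis.
    return torch.flip(traj, dims=[1])

def loss_with_cycle_consistency(model, history, future_gt, cycle_weight=1.0):
    # Assumes the model accepts equal-length history and future windows.
    forward_pred = model(history)                      # past -> predicted future
    backward_pred = model(reverse_time(forward_pred))  # reversed prediction as "history"
    forward_loss = F.mse_loss(forward_pred, future_gt)
    cycle_loss = F.mse_loss(backward_pred, reverse_time(history))
    return forward_loss + cycle_weight * cycle_loss
```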
Various methods and systems are disclosed to reduce false alarms in lidars having a lidar window surrounding the lidar system, by increasing the propagation loss for a portion of incident light that propagates within the thickness of the lidar window and may be guided toward the detection system of the lidar via an indirect path. Alternatively, or in addition, the shape of the lidar window may be tailored to direct the portion of incident light that propagates within the thickness of the lidar window away from the lidar detection system.
Among other things, techniques are described for controlling, using a control circuit, motion of a vehicle based on objects identified using LiDAR. For example, respective classes of points of a point cloud are determined, and based on the determined respective classes of the points of the point cloud, objects in the vicinity of the vehicle are identified.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G01S 7/48 - Details of systems according to groups G01S 13/00, G01S 15/00 or G01S 17/00 of systems according to group G01S 17/00
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
G01S 17/931 - Lidar systems, specially adapted for specific applications for anti-collision purposes of land vehicles
G05B 13/02 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
10.
NAVIGATING MULTI-WAY STOP INTERSECTIONS WITH AN AUTONOMOUS VEHICLE
The subject matter described in this specification is directed to a system and techniques for operating an autonomous vehicle (AV) at a multi-way stop intersection. After detecting the AV is at a primary stopline of the multi-way stop intersection, a planned travel path through the multi-way stop intersection is obtained. If the planned travel path of the AV through the multi-way stop intersection satisfies a set of one or more clearance criteria, the AV proceeds past the primary stopline. The clearance criteria include a criterion that is satisfied in response to detecting the AV is clear to safely merge into a travel lane corresponding to the planned travel path.
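The clearance gate reduces to a conjunction of criteria over the planned path. In the sketch below, the individual predicates (`path_is_clear`, `can_merge_safely`, `has_precedence`) are hypothetical names standing in for whatever criteria the system actually evaluates.

```python
def may_proceed(at_primary_stopline, planned_path, perception):
    """The AV proceeds past the primary stopline only if every
    clearance criterion on the planned travel path is satisfied."""
    if not at_primary_stopline:
        return False
    criteria = (
        perception.path_is_clear(planned_path),               # no conflicting agents on the path
        perception.can_merge_safely(planned_path.exit_lane),  # safe merge into the target lane
        perception.has_precedence(planned_path),              # stop-order precedence at the intersection
    )
    return all(criteria)
```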
Provided are methods for obstacle representation, which can include obtaining sensor data, determining a dynamic associated with an agent, generating obstacle data, and generating constraints based on obstacle data. Some methods described also include providing data to cause operation of an autonomous vehicle. Systems and computer program products are also provided.
Provided are methods for trajectory generation using a plurality of trajectories, which can include generating a first trajectory for a vehicle based on a first set of criteria, generating a second trajectory for the vehicle based on a second set of criteria, determining at least one of a probability of or a distance from a violation of a behavioral rule by the vehicle, selecting a particular trajectory from the first trajectory and the second trajectory based on the at least one of the probability of or the distance from the violation of the behavioral rule, and determining a path for the vehicle to operate along based on the particular trajectory. Systems and computer program products are also provided.
Provided are methods for location based parameters for an image sensor, which can include determining the geographic location of the vehicle, adjusting the parameters of the image sensor of the vehicle from a first setting to a second setting based on the geographic location of the vehicle, receiving sensor data associated with the image sensor based on the second setting, and processing the sensor data to generate an image. Systems and computer program products are also provided.
G01S 13/931 - Radar or analogous systems, specially adapted for specific applications for anti-collision purposes of land vehicles
G01S 17/931 - Lidar systems, specially adapted for specific applications for anti-collision purposes of land vehicles
G05D 1/249 - Arrangements for determining position or orientation using signals provided by artificial sources external to the vehicle, e.g. navigation beacons from positioning sensors located off-board the vehicle, e.g. from cameras
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
Among other things, we describe techniques for estimating a speed profile for a proposed trajectory for a vehicle and operating the vehicle along the proposed trajectory according to the speed profile, including a method for: obtaining, by a planning circuit on a vehicle, a proposed trajectory for the vehicle in response to a driving scenario; obtaining, by the planning circuit, an estimated speed profile, and a confidence score, wherein the confidence score represents a similarity of the estimated speed profile to an actual speed profile that would be generated by a control circuit for the proposed trajectory; determining whether the confidence score meets a confidence threshold; and in accordance with a determination that the confidence score exceeds the confidence threshold, operating, by a control circuit on the vehicle, the vehicle along the proposed trajectory.
Provided are methods for controlling the operation of an autonomous vehicle based on detected passenger behavior, which can include receiving, by one or more processors, sensor data from one or more sensors regarding a passenger compartment of a vehicle; determining, by the one or more processors, and based on the sensor data, that an object entered a zone of the passenger compartment while the vehicle is in motion; and causing, by the one or more processors, a modification to an operation of the vehicle based on the determination that the object entered the zone of the passenger compartment while the vehicle is in motion. Systems and computer program products are also provided.
Provided are methods for end-to-end perception system training using fused images, which can include fusing different types of images to form a fused image, extracting features from the fused image, calculating a loss, and modifying at least one network parameter of an image semantic network based on the loss. Systems and computer program products are also provided.
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
Provided are methods for managing traffic light detections, which can include: deriving a first state of a traffic light at an intersection a vehicle is approaching, according to first detection data acquired by a first traffic light detection (TLD) system; deriving a second state of the traffic light at the intersection, according to second detection data acquired by a second TLD system that is independent from the first TLD system; determining traffic light information at the intersection based on at least one of (i) the first state or (ii) a result of checking whether the first state is the same as the second state; and causing the vehicle to operate in accordance with the determined traffic light information at the intersection. Systems and computer program products are also provided.
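A compact sketch of the cross-check between the two independent detectors. The fallback policy on disagreement (take the more restrictive state) is an illustrative assumption; the abstract only requires that the comparison result feed the determination.

```python
def traffic_light_state(primary_state, secondary_state):
    """Arbitrate between two independently derived traffic light states."""
    if primary_state == secondary_state:
        return primary_state           # agreeing detectors: accept the state
    # Disagreement: fall back to the more restrictive interpretation
    # (illustrative policy, not taken from the disclosure).
    restrictiveness = {"red": 0, "yellow": 1, "green": 2}
    return min(primary_state, secondary_state, key=restrictiveness.__getitem__)
```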
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Various imaging systems and methods are disclosed for identifying and eliminating spots associated with infrared light beams emitted by range finding and detection systems, on a digital image of a scene. An imaging system generates images of a single scene on image sensors having different spectral responses and uses the resulting image signals to reduce or eliminate spots on a modified digital image. Additionally, the imaging system uses the image signals to monitor the performance of the image sensors and generate alert signals when an indication of an image sensor malfunction is detected.
H04N 23/11 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths for generating image signals from visible and infrared light wavelengths
G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
H04N 1/387 - Composing, repositioning or otherwise modifying originals
H04N 23/45 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
H04N 23/70 - Circuitry for compensating brightness variation in the scene
Provided are methods for addressing drivable artifacts for motion planning, which can include obtaining perception data characterizing an environment; detecting a presence of a drivable artifact in a portion of the environment that a vehicle is to traverse; and determining adjustments to one or more parameters of movement of the vehicle in the portion of the environment in response to detecting the presence of the drivable artifact. Some methods described include determining a plurality of candidate trajectories of a vehicle to traverse through an environment having a drivable artifact; computing respective costs associated with each of the plurality of candidate trajectories, the respective costs based at least on distances between the plurality of candidate trajectories and the drivable artifact; and selecting a particular trajectory from the plurality of candidate trajectories based on the respective costs. Systems and computer program products are also provided.
Provided are methods for the distribution of lane changes using a multilayer lane graph. Some methods include evaluating pairs of nodes to construct a multilayer lane graph and inserting edges between nodes of a multilayer lane graph based on available lane operations between road segments. Some methods include assigning costs to respective edges of the multilayer lane graph, and navigating a route representing a lowest cost path through the multilayer lane graph. Systems and computer program products are also provided.
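Navigating the lowest-cost path through such a graph is a standard shortest-path query. Below is a dependency-free Dijkstra sketch in which `graph[node]` yields `(neighbor, edge_cost)` pairs, with lane-change edges simply carrying their assigned costs; the adjacency format is an assumption.

```python
import heapq

def lowest_cost_route(graph, start, goal):
    """Dijkstra search over a multilayer lane graph."""
    frontier = [(0.0, start, [start])]   # (accumulated cost, node, path so far)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, ()):
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + edge_cost, neighbor, path + [neighbor]))
    return float("inf"), []              # goal unreachable
```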
Provided are methods for hierarchical multi-object tracker for auto-labeling. Some methods include generating tracklets based on detections in sensor data. A weighted track graph is generated based on the tracklets, and the weighted track graph is converted to a bipartite graph. The bipartite graph is solved to determine at least one best path cover corresponding to respective observed objects, and the detections in the sensor data are labeled with a track identifier corresponding to the track representing the best path cover of a respective observed object. Systems and computer program products are also provided.
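The bipartite solving step can be illustrated with a standard assignment solver: given an affinity matrix scoring how well the end of one tracklet continues into the start of another, a maximum-affinity matching stitches tracklets into per-object tracks. The use of `scipy.optimize.linear_sum_assignment` and the positive-affinity cutoff are tooling assumptions, not the disclosed method.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_tracklets(affinity):
    """affinity[i, j] scores joining the end of tracklet i to the start of
    tracklet j; return the links chosen by a maximum-affinity matching."""
    row_ind, col_ind = linear_sum_assignment(-affinity)   # negate to maximise
    return [(i, j) for i, j in zip(row_ind, col_ind) if affinity[i, j] > 0]

# Example: tracklet 0 continues into tracklet 2, and tracklet 1 into tracklet 0.
print(link_tracklets(np.array([[0.0, 0.1, 0.9],
                               [0.8, 0.0, 0.2],
                               [0.1, 0.0, 0.0]])))   # -> [(0, 2), (1, 0)]
```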
Provided are methods for evaluating a machine learning model's obstacle prediction, which can include determining that the machine learning model accurately predicted an obstacle based on a determination that at least one predicted agent trajectory intersects with the predicted ego path at an intersection point and an indication from the ground truth data that the agent arrives at the intersection point before the ego vehicle.
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
23.
AUTONOMOUS VEHICLE OPERATION USING LINEAR TEMPORAL LOGIC
Techniques are provided for autonomous vehicle operation using linear temporal logic. The techniques include using one or more processors of a vehicle to store a linear temporal logic expression defining an operating constraint for operating the vehicle. The vehicle is located at a first spatiotemporal location. The one or more processors are used to receive a second spatiotemporal location for the vehicle. The one or more processors are used to identify a motion segment for operating the vehicle from the first spatiotemporal location to the second spatiotemporal location. The one or more processors are used to determine a value of the linear temporal logic expression based on the motion segment. The one or more processors are used to generate an operational metric for operating the vehicle in accordance with the motion segment based on the determined value of the linear temporal logic expression.
Provided are methods for triggering cameras in an autonomous vehicle (AV). The methods include receiving, by a programmable circuit, Generalized Precision Time Protocol (gPTP) information for synchronizing the programmable circuit with a gPTP Grand Master (GM); generating, by the programmable circuit, a pulse per second (PPS) signal having a period of one second based on the gPTP information; generating, by the programmable circuit, a synchronizing signal having a period less than the period of the PPS signal; generating, by the programmable circuit, a triggering signal based on the PPS signal, wherein a period of the triggering signal is the same as the period of the synchronizing signal; and sending, by the programmable circuit and to one or more cameras of an AV, the triggering signal for triggering the one or more cameras to capture one or more images. Systems and computer program products are also provided.
Provided are methods for an extended beamspace in massive MIMO radar, which can include determining beamformed phase centers for non-uniform groups of transmit elements corresponding to a center of gravity of transmit elements in respective groups. The method may also include activating respective groups of transmit elements to output coherent signals, extracting received signals reflected from an environment to form an extended beamspace, wherein the received signals are obtained by receive elements. Systems and computer program products are also provided.
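The phase-center definition in the abstract, the center of gravity of the transmit elements in each non-uniform group, is directly computable from the array geometry; the linear half-wavelength layout below is illustrative.

```python
import numpy as np

def beamformed_phase_centers(element_positions, groups):
    """Phase center of each transmit group = center of gravity of the
    positions of the elements in that group."""
    return np.array([element_positions[list(g)].mean(axis=0) for g in groups])

# Example: eight elements on a line at half-wavelength pitch, grouped non-uniformly.
elements = np.column_stack([0.5 * np.arange(8), np.zeros(8)])
print(beamformed_phase_centers(elements, [(0, 1, 2), (3, 4), (5, 6, 7)]))
# -> phase centers at x = 0.5, 1.75, 3.0
```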
Provided are methods for autonomously navigating a road intersection by a vehicle, which can include determining a candidate trajectory for an autonomous vehicle to traverse a road intersection; identifying one or more additional vehicles in an environment of the autonomous vehicle; determining one or more predicted trajectories of the one or more additional vehicles through the road intersection; selecting at least some of the predicted trajectories based on a proximity between the candidate trajectory and each of the one or more predicted trajectories; determining a precedence order based on the candidate trajectory and the selected trajectories; and causing the autonomous vehicle to traverse through the road intersection based on the candidate trajectory and the precedence order.
Provided are methods for homotopy extraction using a machine learning model, which can include obtaining sensor data and route data and determining homotopy data comprising constraint data, for example, based on sensor data and/or route data. Some methods described also include providing operation data associated with the homotopy data to cause the vehicle to operate based on the constraint data. Systems and computer program products are also provided.
This disclosure describes the use of optical sensors to detect and characterize the state of traffic lights to assist with the navigation of autonomous vehicles. In particular, a specific optical configuration is shown that includes both a fixed-exposure sensor and an auto-exposure sensor. Imagery from the two sensor types can be combined to more accurately characterize the state of traffic signals at any particular intersection. Systems and methods for analyzing only select regions of the imagery captured by the traffic light detection system are also described.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G08G 1/0962 - Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
H04N 23/71 - Circuitry for evaluating the brightness variation
H04N 23/743 - Bracketing, i.e. taking a series of images with varying exposure conditions
30.
Determining object mobility parameters using an object sequence
A system can use semantic images, lidar images, and/or 3D bounding boxes to determine mobility parameters for objects in the semantic image. In some cases, the system can generate virtual points for an object in a semantic image and associate the virtual points with lidar points to form denser point clouds for the object. The denser point clouds can be used to estimate the mobility parameters for the object. In certain cases, the system can use semantic images, lidar images, and/or 3D bounding boxes to determine an object sequence for an object. The object sequence can indicate a location of the particular object at different times. The system can use the object sequence to estimate the mobility parameters for the object.
G06V 30/262 - Techniques for post-processing, e.g. correcting the recognition result using context analysis, e.g. lexical, syntactic or semantic context
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G01S 17/42 - Simultaneous measurement of distance and other coordinates
G01S 17/58 - Velocity or trajectory determination systems; Sense-of-movement determination systems
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
G06T 7/70 - Determining position or orientation of objects or cameras
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
In an example implementation, an apparatus includes a color filter array including a plurality of units arranged according to a spatially repeating pattern. Each of the units includes a color filter element having a first area, and an optically clear element having a second area, where the first area is less than the second area. The color filter array is configured to receive light from an environment of the apparatus, filter at least a portion of the light using the units, and transmit at least a portion of the filtered light to an image sensor.
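As a toy illustration of the repeating unit, the mask below tiles a 2x2 unit with one color-filtered element and three optically clear elements (consistent with the filtered element having the smaller total area); the unit size and layout are assumptions, not the disclosed pattern.

```python
import numpy as np

def color_filter_mask(height, width):
    """Label map for a mosaic whose 2x2 repeating unit has one filtered
    element ('F') and three optically clear elements ('C')."""
    mask = np.full((height, width), "C", dtype="<U1")
    mask[::2, ::2] = "F"        # one filtered element per 2x2 unit
    return mask

print(color_filter_mask(4, 4))
# [['F' 'C' 'F' 'C']
#  ['C' 'C' 'C' 'C']
#  ['F' 'C' 'F' 'C']
#  ['C' 'C' 'C' 'C']]
```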
Provided are methods for 360 degree LiDAR cropping, which can include receiving, using at least one processor, data characterizing detection of a target by a LiDAR sensor, wherein the target is located within a field of view of detection of the sensor, and wherein the target and the sensor are coupled to or adjacent to a portion of the vehicle, determining, using the at least one processor, a modified field of view of the sensor, wherein the modified field of view is narrower than the field of view and wherein the determining is based on at least identifying one or more regions in the field of view that include the target, and providing, using the at least one processor, a calibration dataset that includes data associated with the modified field of view. Systems and computer program products are also provided.
A perception system may be used to generate bounding boxes for objects in a vehicle scene. The perception system may receive images and feature maps corresponding to the received images. The perception system may use radar and vision based images to generate one or more bounding boxes for objects in the vehicle scene.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G01S 13/86 - Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
G01S 13/931 - Radar or analogous systems, specially adapted for specific applications for anti-collision purposes of land vehicles
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Some systems, methods, and computer program products for augmenting bird's eye view object proposals described include: receiving camera images representing a perspective view of an environment relative to a vehicle; extracting image features from the camera images; generating one or more perspective bounding boxes containing one or more detected objects based on the image features; selecting one or more first object proposals represented in a bird's eye view (BEV) of the environment based on the one or more perspective bounding boxes; and generating one or more BEV bounding boxes containing the one or more detected objects based on: the image features represented in the perspective view of the environment, the one or more first object proposals represented in the BEV of the environment, and one or more generic object proposals represented in the BEV of the environment. Systems and computer program products are also provided.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Provided are methods for object detection using radar and lidar fusion, which can include generating combined clusters from clusters of radar point clouds and lidar point clouds, respectively, from which fused features are determined using a deep learning model. Systems and computer program products are also provided.
G01S 13/86 - Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
G01S 7/41 - Details of systems according to groups G01S 13/00, G01S 15/00 or G01S 17/00 of systems according to group G01S 13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
G01S 13/42 - Simultaneous measurement of distance and other coordinates
A perception system may be used to generate bounding boxes for objects in a vehicle scene. The perception system may receive images of various modalities and feature maps corresponding to the received images. The perception system may generate object queries. The perception system may use the generated object queries to generate one or more bounding boxes for objects in the vehicle scene.
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
38.
TRAJECTORY PLANNING UTILIZING A STATEFUL PLANNER AND A STATELESS PLANNER
Provided are methods for trajectory generation using an adjusted plurality of trajectories, which can include generating a plurality of trajectories for a vehicle from a plurality of poses, combinations of trajectories of the plurality of trajectories representing a plurality of paths for the vehicle through an environment, adjusting the plurality of trajectories to obtain an adjusted plurality of trajectories based on component data associated with the vehicle, selecting a first trajectory from the adjusted plurality of trajectories, and determining a path for the vehicle to operate along based on the first trajectory. Systems and computer program products are also provided.
Provided are methods for high definition map fusion for 3D object detection. Some methods described also include obtaining, with at least one processor, raster maps, vector maps, and point cloud data and extracting, with the at least one processor, features from the raster maps, vector maps, and point cloud data to generate respective bird's eye view (BEV) representations. The methods also include fusing, with the at least one processor, the BEV representation of the raster map features, the BEV representation of the vector map features, and the BEV representation of the point cloud features into a fused BEV image. Additionally, the methods include detecting, with the at least one processor, objects in the fused BEV image. Systems and computer program products are also provided.
G06F 16/587 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
40.
PRE-TRAINING MACHINE LEARNING MODELS WITH CONTRASTIVE LEARNING
Provided are methods for pre-training machine learning models with contrastive learning. Some methods described also include generating, with at least one processor, a perturbed dataset from a real dataset. The method includes pre-training, with the at least one processor, at least one component of a machine learning model to perform an alternative task, wherein the machine learning model performs a primary task. The method also includes inserting, with the at least one processor, the pre-trained at least one component into the machine learning model that performs the primary task. Additionally, the method includes training, with the at least one processor, the machine learning model comprising the pre-trained at least one component to perform the primary task. Systems and computer program products are also provided.
Provided are methods for handling lane closures, which can include obtaining a lane graph including a plurality of nodes and a plurality of edges. A route from a first location to a second location is represented on the lane graph. Some methods described also include obtaining at least one node corresponding to a lane segment comprising a lane closure along the route and pruning the lane graph based on the at least one node. Some methods described also include determining an updated route from a current location to the second location using the pruned lane graph and causing the vehicle to navigate along the updated route avoiding the lane closure. Systems and computer program products are also provided.
G01C 21/36 - Input/output arrangements for on-board computers
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
Disclosed is data augmentation for obstruction learning. In some embodiments, a method comprises: obtaining images of obstacles that obstruct an object of interest; randomly scaling the images; extracting portions of the randomly scaled images at random positions in the randomly scaled images; and training a machine learning model using the extracted portions. In some embodiments, a method comprises: capturing images of obstacles that are at least partially obstructing an object of interest (e.g., a traffic light); processing the images using a machine learning model that is trained on a dataset that includes selected portions of augmented images of obstacles (e.g., large vehicles) that obstruct the object of interest; predicting a state of the object of interest based on output of the machine learning model; planning a trajectory for a vehicle; and causing the vehicle to travel the trajectory.
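A minimal numpy sketch of the augmentation recipe: random scaling followed by extraction of fixed-size portions at random positions. Nearest-neighbour index sampling stands in for a real image resize so the sketch stays dependency-free; the crop size and scale range are illustrative.

```python
import numpy as np

def random_obstruction_crops(image, num_crops=4, scale_range=(0.5, 1.5), crop=64, rng=None):
    """Randomly scale an obstacle image, then extract crop x crop portions
    at random positions in each scaled image."""
    rng = rng or np.random.default_rng()
    crops = []
    for _ in range(num_crops):
        s = rng.uniform(*scale_range)
        h = max(crop, int(image.shape[0] * s))
        w = max(crop, int(image.shape[1] * s))
        ys = np.linspace(0, image.shape[0] - 1, h).astype(int)  # nearest-neighbour resize
        xs = np.linspace(0, image.shape[1] - 1, w).astype(int)
        scaled = image[ys][:, xs]
        y0 = rng.integers(0, h - crop + 1)                      # random position
        x0 = rng.integers(0, w - crop + 1)
        crops.append(scaled[y0:y0 + crop, x0:x0 + crop])
    return crops
```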
Provided are methods for item loading assignment for autonomous vehicle (AV) delivery, which can include: receiving a message indicating arrival of an AV; in response to the reassign countdown duration being unexpired, sending a message requesting unlocking of the AV upon its arrival; receiving a message comprising an unlock confirmation, wherein the unlock confirmation includes a relock countdown duration in response to unlocking the AV; receiving a message comprising loading information, wherein the loading information includes a loading countdown duration in response to opening the AV; and receiving a message comprising order information indicating a status of the order or an inquiry of the status of the order. In response to the reassign countdown duration being expired, a message is received indicating that the assignment for the order is canceled for the AV. Systems and computer program products are also provided.
A perception system may be used to generate bounding boxes for objects in a vehicle scene. The perception system may receive images and feature maps corresponding to the received images. The perception system may correlate object queries from previous time steps with object queries from the current time step.
Provided are methods for processing radar data, which can include accessing radar analog-to-digital converter (ADC) data from at least one radar sensor of a vehicle, determining a range-Doppler (RD) data set based on the radar ADC data, determining a range-azimuth-Doppler (RAD) data set based on the RD data set, and determining at least one of (i) object data representing at least one object in an environment of the vehicle or (ii) a segmentation map representing the environment based on the RAD data set.
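The RD step is classical FMCW processing: an FFT over fast time (samples within a chirp) resolves range, and an FFT over slow time (across chirps) resolves Doppler. The single-channel cube below is a simplification; forming the RAD set would add a further FFT across receive channels for azimuth.

```python
import numpy as np

def range_doppler_map(adc_cube):
    """adc_cube: (num_chirps, num_samples) ADC data for one receive channel."""
    range_fft = np.fft.fft(adc_cube, axis=1)                      # fast time -> range bins
    rd = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)   # slow time -> Doppler bins
    return 20 * np.log10(np.abs(rd) + 1e-12)                      # magnitude in dB

rd_map = range_doppler_map(np.random.randn(128, 256))   # 128 chirps x 256 samples
```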
G01S 7/41 - Details of systems according to groups G01S 13/00, G01S 15/00 or G01S 17/00 of systems according to group G01S 13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
G01S 13/58 - Velocity or trajectory determination systems; Sense-of-movement determination systems
G01S 13/931 - Radar or analogous systems, specially adapted for specific applications for anti-collision purposes of land vehicles
A perception system may be used to generate bounding boxes for objects in a vehicle scene. The perception system may receive images and feature maps corresponding to the received images. The perception system may link bounding boxes to bounding boxes from previous time steps and identify false positive bounding boxes. The system can link 3D boxes of the same object from different frames by taking the 3D boxes in a time step as input. The system can use transformer self-attention to exchange information between 3D boxes to learn globally informative box embeddings. Similarity between these learned embeddings can be used to link the boxes of the same object.
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
B60W 30/095 - Predicting travel path or likelihood of collision
G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
The present disclosure describes Camera-to-LiDAR (C2L) extrinsic calibration that leverages self-attention mechanisms through a Transformer-based backbone network to infer the 6-degree of freedom (DoF) relative transformation between sensors.
G01S 7/41 - Details of systems according to groups G01S 13/00, G01S 15/00 or G01S 17/00 of systems according to group G01S 13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
Techniques for operation of a vehicle using machine learning with motion planning include storing, using one or more processors of a vehicle located within an environment, a plurality of constraints for operating the vehicle within the environment. One or more sensors of the vehicle receive sensor data describing the environment. The one or more processors extract a feature vector from the stored plurality of constraints and the received sensor data. The feature vector includes a first feature describing an object located within the environment. A machine learning circuit of the vehicle is used to generate a first motion segment based on the feature vector, such that a number of violations of the stored plurality of constraints is below a threshold. The one or more processors operate the vehicle in accordance with the generated first motion segment.
Provided are methods for training and evaluating machine learning networks. The methods can include obtaining ground truth data representing images of an environment of a vehicle. The methods can include determining a first plurality of subsets of the ground truth data. The methods can include mapping the first plurality of subsets to a plurality of sensors. The methods can include determining a second plurality of subsets of the ground truth data by removing at least one selected subset from the first plurality of subsets. The methods can include inputting the second plurality of subsets to at least one machine learning network. The methods can include predicting a surrounding view of the environment using the at least one machine learning network. Vehicles and non-transitory computer-readable storage media are also provided.
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Provided are methods for image depth estimation, which can include obtaining an image associated with a scene of an autonomous vehicle, determining a first estimated depth for a plurality of points in the image, and generating a plurality of groups of points based on the first estimated depth for the plurality of points. Some methods described also include determining a second estimated depth for at least one point using a range specific depth estimation head, determining at least one object classification for the at least one point, and causing the autonomous vehicle to be navigated based on the second estimated depth for the at least one point and the at least one object classification for the at least one point. Systems and computer program products are also provided.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
A system receives a 3D image having multiple data points, and uses one or more filters, such as a distance filter, map filter, and/or height filter to remove certain 3D data points from the image. The system may group the data points and annotate them to identify unknown or unclassified objects within the image.
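A compact sketch of the filter cascade, assuming an (N, 3) point array in the vehicle frame and an optional callable standing in for the map-based drivable-area lookup; the thresholds are illustrative.

```python
import numpy as np

def filter_points(points, max_range=80.0, height_band=(0.2, 3.0), on_drivable=None):
    """Apply distance, height, and (optionally) map filters to 3D points."""
    keep = np.linalg.norm(points[:, :2], axis=1) <= max_range                     # distance filter
    keep &= (points[:, 2] >= height_band[0]) & (points[:, 2] <= height_band[1])  # height filter
    if on_drivable is not None:
        keep &= on_drivable(points[:, :2])                                       # map filter
    return points[keep]
```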
G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
52.
CONFLICT ANALYSIS BETWEEN OCCUPANCY GRIDS AND SEMANTIC SEGMENTATION MAPS
Provided are methods, systems, and computer program products for programmatic detection of objects with an unknown or uncertain semantic class, and generation of a training data set for such objects that can facilitate further development of machine learning models trained to extract semantic information. Embodiments of the present disclosure can detect discrepancies between sensor data corresponding to multiple representations of an environment, and utilize the identified discrepancies to programmatically select a portion of the sensor data for use as a training data set, e.g., for input to a machine learning model.
Disclosed is an improved motion planner that safely and proactively considers worst-case agent behavior by generating a worst-case homotopy for every nominal homotopy. In some embodiments, a method comprises: generating a first set of maneuvers to be performed by a vehicle in a scenario, the first set of maneuvers based on an expected behavior of at least one agent proximate to the vehicle; generating a second set of maneuvers to be performed by the vehicle, the second set of maneuvers based on worst case behavior of the at least one agent proximate to the vehicle; generating a set of candidate trajectories based on the first set of maneuvers and the second set of maneuvers; selecting a trajectory from the set of candidate trajectories; and generating, with the at least one processor, at least one control signal to operate the vehicle based on the selected trajectory.
G05B 13/04 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators
54.
GENERATING WORST-CASE CONSTRAINTS FOR AUTONOMOUS VEHICLE MOTION PLANNING
Disclosed is an improved motion planner that safely and proactively considers worst-case agent behavior by generating a worst-case homotopy for every nominal homotopy. In some embodiments, a method comprises: generating a first set of maneuvers to be performed by a vehicle in a scenario, the first set of maneuvers based on an expected behavior of at least one agent proximate to the vehicle; generating a second set of maneuvers to be performed by the vehicle, the second set of maneuvers based on worst case behavior of the at least one agent proximate to the vehicle; generating a set of candidate trajectories based on the first set of maneuvers and the second set of maneuvers; selecting a trajectory from the set of candidate trajectories; and generating, with the at least one processor, at least one control signal to operate the vehicle based on the selected trajectory.
Provided are methods for determining a trajectory, which can include obtaining, using the at least one processor, sensor data associated with an environment in which a vehicle is operating, wherein the environment comprises one or more agents including a first agent; determining, using the at least one processor, based on the sensor data, a first prediction associated with the first agent; determining, using at least one processor, based on the first prediction, a primary homotopy; determining, using the at least one processor, based on the primary homotopy and the first prediction, one or more contingency homotopies associated with a contingency; determining, using the at least one processor, based on the primary homotopy and the one or more contingency homotopies, a primary trajectory; and providing, using the at least one processor, operation data associated with the primary trajectory to cause the vehicle to operate based on the primary trajectory.
The current disclosure provides techniques for using human driving behavior to assist in decision making of an autonomous vehicle as the autonomous vehicle encounters various scenarios on the road. For each scenario, a model may be generated based on human driving behavior that governs how an autonomous vehicle maneuvers in that scenario. As a result of using these models, reliability and safety of the autonomous vehicle may be improved. In addition, because the model is programmed into the autonomous vehicle, the autonomous vehicle, in many instances, need not consume resources to implement complex calculations to determine driving behavior in real-time.
Among other things, stored data is maintained indicative of potential stopping places that are currently feasible stopping places for a vehicle within a region. The potential stopping places are identified as part of static map data for the region. Current signals are received from sensors or one or more other sources, the current signals representing perceptions of actual conditions at one or more of the potential stopping places. The stored data is updated based on changes in the perceptions of actual conditions. The updated stored data is exposed to a process that selects a stopping place for the vehicle from among the currently feasible stopping places.
Provided are methods for travel path determination, which can include obtaining mapping data characterizing an environment, the mapping data indicating boundaries of a first road lane in the environment; identifying a portion of the first road lane as a narrowed road lane, the narrowed road lane having a reduced width in at least a portion of the narrowed road lane compared to a width of the first road lane; evaluating a plurality of candidate travel paths in a search space that includes the narrowed road lane and excludes at least a portion of the first road lane that is not included in the narrowed road lane; and determining a particular travel path for a vehicle through the narrowed road lane based on the evaluation of the plurality of candidate travel paths. The plurality of candidate travel paths include the particular travel path.
Provided are methods for route traversal using remote vehicle assistance, which can include a method. The method includes: identifying a condition on a route traveled by a vehicle that inhibits movement of the vehicle along the route; sending an assistance request to a remote operator; receiving an updated trajectory for the vehicle to travel along the route, the updated trajectory including one or more segments; evaluating whether the vehicle can be controlled to safely traverse a first segment with a safe stop upon completion of traversal of the first segment; determining that the vehicle can be controlled to safely traverse the first segment with a safe stop; and upon determining that the vehicle can be controlled to traverse the first segment with a safe stop upon completion of the traversal of the first segment, controlling the vehicle to traverse the first segment. Systems and computer program products are also provided.
Provided are methods, systems, and computer program products for defining and testing evolving event sequences. Some methods include specifying an event sequence tunnel in a scenario, wherein an entry space and an exit space of the event sequence tunnel are identified for a simulated agent. Dimensions of the event sequence tunnel are determined, and at least one factor is applied to dimensions of the event sequence tunnel at the entry space and propagated through the event sequence tunnel. The simulated agent is evaluated at the entry space until the exit space of the event sequence tunnel in a simulation. At least one consistent characteristic associated with the simulated agent is determined at the entry space, evolved, and replicated throughout respective event sequence tunnels. A response of an autonomous system to simulations of the scenario is evaluated in view of the at least one consistent characteristic of the simulated agent.
Provided are methods, systems, and computer program products for defining and testing evolving event sequences. Some methods include specifying an event sequence tunnel in a scenario, wherein an entry space and an exit space of the event sequence tunnel are identified for a simulated agent. Dimensions of the event sequence tunnel are determined, and at least one factor is applied to dimensions of the event sequence tunnel at the entry space and propagated through the event sequence tunnel. The simulated agent is evaluated at the entry space until the exit space of the event sequence tunnel in a simulation. At least one consistent characteristic associated with the simulated agent is determined at the entry space, evolved, and replicated throughout respective event sequence tunnels. A response of an autonomous system to simulations of the scenario is evaluated in view of the at least one consistent characteristic of the simulated agent.
G05B 13/04 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators
G05D 1/02 - Control of position or course in two dimensions
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
62.
Processing a request signal regarding operation of an autonomous vehicle
Among other things, a vehicle drives autonomously on a trajectory through a road network to a goal location based on an automatic process for planning the trajectory without human intervention; and an automatic process alters the planning of the trajectory to reach a target location based on a request received from an occupant of the vehicle to engage in a speed-reducing maneuver.
B60W 50/10 - Interpretation of driver requests or demands
B60K 35/10 - Input arrangements, i.e. from user to vehicle, associated with vehicle functions or specially adapted therefor
B60K 35/28 - Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor characterised by the type of the output information, e.g. video entertainment or vehicle dynamics information; Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor characterised by the purpose of the output information, e.g. for attracting the attention of the driver
Among other things, techniques are described for path-based trajectory prediction (e.g., for autonomous driving). The technique includes: obtaining information about one or more agents in a portion of an environment where a vehicle is operating, the information including data associated with locations of the one or more agents in the portion of the environment; for each of at least one agent of the one or more agents, determining a reference path for the agent based on a location of the agent in the environment, and predicting a future trajectory of the agent based on the reference path; and operating the vehicle based on one or more predicted future trajectories of the at least one agent of the one or more agents.
Provided are methods for turn signal assignment in complex maneuvers, which can include receiving position information associated with a position of a vehicle. Some methods described also include determining, hierarchically and based at least on the position information, whether to activate a turn signal of a vehicle prior to activation of the turn signal based on a turn of the vehicle at a roadway intersection. Some methods described also include determining a time to activate the turn signal prior to the turn of the vehicle at the roadway intersection. Some methods described also include transmitting a control signal to activate the turn signal at the determined time. Systems and computer program products are also provided.
B60Q 1/26 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to indicate the vehicle, or parts thereof, or to give signals, to other traffic
B60Q 1/34 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to indicate the vehicle, or parts thereof, or to give signals, to other traffic for indicating change of drive direction
B60Q 1/50 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to indicate the vehicle, or parts thereof, or to give signals, to other traffic for indicating other intentions or conditions, e.g. request for waiting or overtaking
65.
DATA PIPELINE FOR CAMERAS WITH MULTIPLE COLOR CHANNELS
Provided are methods for processing raw data for sensors with multiple color channels, which can include receiving raw image data associated with an image sensor, wherein the raw image data includes a plurality of groups of pixels including a first group of pixels associated with a first filter and a second group of pixels associated with a second filter. The methods may also include generating, from the raw image data, a plurality of wavelength-based data groups including a first wavelength-based data group and a second wavelength-based data group. The first wavelength-based data group may include first pixels corresponding to the first group of pixels and the second wavelength-based data group may include second pixels corresponding to the second group of pixels. The methods may also include encoding the first wavelength-based data group and the second wavelength-based data group. Systems and computer program products are also provided.
G02F 1/1335 - Structural association of cells with optical devices, e.g. polarisers or reflectors
G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
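The channel-splitting step above lends itself to a short sketch. The following assumes a 2x2 repeating filter pattern (e.g. RGGB) and uses zlib as a stand-in encoder; the actual filters and codec are not specified by the abstract.

```python
# A minimal sketch of splitting raw mosaic data into wavelength-based
# groups and encoding each group independently, under the assumptions above.
import zlib
import numpy as np

def split_wavelength_groups(raw, pattern=("R", "G", "G", "B")):
    """Return {filter_name: pixel plane} for a 2x2 repeating mosaic."""
    offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]
    groups = {}
    for name, (dy, dx) in zip(pattern, offsets):
        # Pixels sharing a filter sit on the same 2x2 sub-lattice.
        groups.setdefault(name, []).append(raw[dy::2, dx::2])
    return {k: np.concatenate(v, axis=0) for k, v in groups.items()}

def encode_groups(groups):
    """Encode each wavelength-based data group (zlib as a stand-in codec)."""
    return {k: zlib.compress(v.tobytes()) for k, v in groups.items()}

raw = np.random.randint(0, 4096, size=(8, 8), dtype=np.uint16)  # 12-bit raw
encoded = encode_groups(split_wavelength_groups(raw))
print({k: len(v) for k, v in encoded.items()})
```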
66.
GUIDED GENERATION OF TRAJECTORIES FOR REMOTE VEHICLE ASSISTANCE
Provided are methods for generating a proposed trajectory for a vehicle at a remote vehicle assistance system. One or more user inputs defining the proposed trajectory may be received at the remote vehicle assistance system. The proposed trajectory may be validated for compliance with one or more constraints imposed by a motion planner of the vehicle. Examples of constraints include the proposed trajectory being within a drivable surface and not colliding with one or more objects present in a surrounding environment of the vehicle. If the remote vehicle assistance system is unable to validate the proposed trajectory, the proposed trajectory may be redrawn instead of being sent to the vehicle. Systems and computer program products are also provided.
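A minimal sketch of the two example constraints named above, using the shapely geometry library; the drivable surface, obstacle shapes, and clearance value are illustrative assumptions, not the validator the abstract describes.

```python
# Validate an operator-drawn trajectory: stay on the drivable surface,
# keep clear of known objects. Geometry and tolerances are assumptions.
from shapely.geometry import LineString, Point, Polygon

def validate_proposed_trajectory(waypoints, drivable, obstacles, clearance=0.5):
    """Return (ok, reason); reason explains the first violated constraint."""
    path = LineString(waypoints)
    if not drivable.contains(path):
        return False, "trajectory leaves the drivable surface"
    for xy in obstacles:
        # Inflate each obstacle point by the required clearance.
        if path.intersects(Point(xy).buffer(clearance)):
            return False, f"trajectory passes within {clearance} m of {xy}"
    return True, "valid"

drivable = Polygon([(0, 0), (50, 0), (50, 8), (0, 8)])
ok, reason = validate_proposed_trajectory(
    waypoints=[(1, 4), (25, 4), (49, 4)],
    drivable=drivable,
    obstacles=[(25, 4.2)],
)
print(ok, reason)  # False: too close to the object at (25, 4.2)
```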
Provided are methods for turn signal assignment in complex maneuvers, which can include receiving position information associated with a position of a vehicle (102). Some methods described also include determining, hierarchically and based at least on the position information, whether to activate a turn signal of a vehicle (102) prior to activation of the turn signal based on a turn of the vehicle (102) at a roadway intersection. Some methods described also include determining a time to activate the turn signal prior to the turn of the vehicle (102) at the roadway intersection. Some methods described also include transmitting a control signal (516, 518, 520, 522, 524) to activate the turn signal at the determined time. Systems and computer program products are also provided.
B60Q 1/34 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to indicate the vehicle, or parts thereof, or to give signals, to other traffic for indicating change of drive direction
B60Q 1/40 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to indicate the vehicle, or parts thereof, or to give signals, to other traffic for indicating change of drive direction having automatic return to inoperative position
68.
IDENTIFYING NEW CLASSES OF OBJECTS IN ENVIRONMENTS OF VEHICLES
A method may include applying a first machine learning model trained to perform an open-set detection by at least identifying, based at least on image data indicative of one or more objects present in at least one environment in which one or more vehicles operate, at least one new class of objects. A dataset including a plurality of classes of objects may be updated to include the at least one new class of objects. In some cases, the dataset may be further updated to include a label associated with the at least one new class of objects. A second machine learning model may be trained, or in some cases updated, based at least on the updated dataset including the at least one new class of objects. Related systems and computer program products are also provided.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
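The dataset-update loop above can be sketched with stand-in models; the scoring stub, threshold, and labelling callback below are illustrative assumptions, not the trained networks the abstract describes.

```python
# A minimal sketch: low-scoring (unknown) detections become a new labelled
# class, and a second closed-set model would then retrain on the dataset.
from dataclasses import dataclass, field

@dataclass
class Dataset:
    classes: set = field(default_factory=lambda: {"car", "pedestrian"})
    samples: list = field(default_factory=list)

def open_set_scores(detections):
    """Stand-in for the first model: best known-class score per detection."""
    return [(det, det.get("known_score", 0.0)) for det in detections]

def update_with_new_class(dataset, detections, label_fn, threshold=0.5):
    """Route detections that match no known class into a new labelled class."""
    unknowns = [d for d, s in open_set_scores(detections) if s < threshold]
    if unknowns:
        label = label_fn(unknowns)             # e.g. a human-in-the-loop label
        dataset.classes.add(label)
        dataset.samples.extend((label, u) for u in unknowns)
    return dataset  # a second, closed-set model would now retrain on this

ds = update_with_new_class(
    Dataset(),
    detections=[{"known_score": 0.1}, {"known_score": 0.9}],
    label_fn=lambda xs: "e-scooter",
)
print(ds.classes)  # {'car', 'pedestrian', 'e-scooter'}
```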
Provided are methods for optimizing alerts for vehicles experiencing stuck conditions, which can include receiving, using at least one processor, data associated with a distance between a location of a vehicle and a destination; determining, using the at least one processor, a derivative of the distance between the location of the vehicle and the destination with respect to a window of time; determining, using the at least one processor, a threshold based on the data associated with the distance between the location of the vehicle and the destination; comparing the derivative to the threshold; and based on the comparison, generating data representing at least one alert indicative of a stuck condition of the vehicle. Systems and computer program products are also provided.
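The derivative-and-threshold test above lends itself to a short sketch; the window length, sampling period, and threshold scaling below are illustrative assumptions.

```python
# A minimal stuck-condition check: estimate the time derivative of the
# remaining distance over a sliding window and alert when progress stalls.
from collections import deque

class StuckDetector:
    def __init__(self, window_s=30.0, sample_dt=1.0):
        self.window = deque(maxlen=int(window_s / sample_dt))
        self.sample_dt = sample_dt

    def update(self, distance_to_destination_m):
        """Feed one distance sample; return an alert dict or None."""
        self.window.append(distance_to_destination_m)
        if len(self.window) < self.window.maxlen:
            return None  # not enough history yet
        elapsed = (len(self.window) - 1) * self.sample_dt
        # Derivative of remaining distance over the window (m/s);
        # negative values mean the vehicle is making progress.
        derivative = (self.window[-1] - self.window[0]) / elapsed
        # Threshold derived from the remaining distance: far from the goal
        # we expect steady progress; near the goal slow progress is normal.
        threshold = -0.01 * max(self.window[-1], 10.0) / 10.0
        if derivative > threshold:
            return {"stuck": True, "derivative_m_per_s": derivative}
        return None

det = StuckDetector(window_s=5.0, sample_dt=1.0)
for d in [500.0, 500.2, 499.9, 500.1, 500.0]:  # no progress for 5 s
    alert = det.update(d)
print(alert)
```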
Some methods described include: receiving, with at least one processor, a point cloud from a pose graph; receiving, with the at least one processor, an image from a driving log of a vehicle, the image corresponding to the point cloud from the pose graph; obtaining, with the at least one processor, image pixel labels for the image; projecting, with the at least one processor, the point cloud in a point cloud coordinate system to an image coordinate system based on the image pixel labels; generating, with the at least one processor, a six-dimensional colored point cloud by combining the point cloud and color information from the image; and transforming, with the at least one processor, the six-dimensional colored point cloud into five-dimensional map tiles to form a colored map layer. Systems and computer program products are also provided.
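The projection and colorization steps above can be sketched with a pinhole model; the intrinsics K and the cloud-to-camera extrinsic T below are illustrative assumptions, and the tiling into map layers is omitted.

```python
# A minimal sketch of colorizing a point cloud with camera pixels to form
# six-dimensional [x, y, z, r, g, b] points, under the stated assumptions.
import numpy as np

def colorize_point_cloud(points_xyz, image_rgb, K, T):
    """Return an N x 6 array for points that project into the image."""
    n = points_xyz.shape[0]
    homog = np.hstack([points_xyz, np.ones((n, 1))])     # N x 4
    cam = (T @ homog.T).T[:, :3]                         # camera frame
    keep = cam[:, 2] > 0.0                               # in front of camera
    pts, cam = points_xyz[keep], cam[keep]
    uvw = (K @ cam.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]                        # pixel coordinates
    h, w = image_rgb.shape[:2]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = image_rgb[v[inside], u[inside]]
    return np.hstack([pts[inside], colors])

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
T = np.eye(4)                                            # cloud == camera frame
cloud = np.array([[0.0, 0.0, 10.0], [100.0, 0.0, 1.0]])  # 2nd point off-image
img = np.random.randint(0, 255, (480, 640, 3))
print(colorize_point_cloud(cloud, img, K, T))            # one 6-D point
```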
Provided are methods for communicating precedence using vehicle to everything (V2X) messages, which include broadcasting at least one outgoing message associated with an upcoming intersection, and receiving at least one incoming message from at least one other vehicle approaching the upcoming intersection. Some methods described also include determining that a priority conflict exists, and updating the respective priority number based on the respective stopped time and the at least one other stopped time. Some methods described also include updating the respective conflict flag based on the updated respective priority number, and determining whether the priority conflict exists based on the updated respective conflict flag. Some methods described also include determining a consensus priority order in response to determining that the priority conflict does not exist. Systems and computer program products are also provided.
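One plausible reading of the priority and conflict bookkeeping is sketched below; the message fields and the vehicle-id tie-break are assumptions, since the abstract does not fix them.

```python
# A minimal sketch: priority from stopped time, a conflict flag on ties,
# and a deterministic tie-break yielding a consensus priority order.
from dataclasses import dataclass

@dataclass
class PrecedenceMsg:
    vehicle_id: str
    stopped_time_s: float   # longer stopped => higher priority
    priority: int = 0
    conflict: bool = False

def assign_priorities(msgs):
    """Order vehicles by stopped time; flag equal stopped times as conflicts."""
    ranked = sorted(msgs, key=lambda m: m.stopped_time_s, reverse=True)
    for rank, msg in enumerate(ranked):
        msg.priority = rank
    for a, b in zip(ranked, ranked[1:]):
        if a.stopped_time_s == b.stopped_time_s:
            a.conflict = b.conflict = True
    return ranked

def resolve_conflicts(ranked):
    """Tie-break conflicting pairs deterministically, then clear flags."""
    ranked.sort(key=lambda m: (-m.stopped_time_s, m.vehicle_id))
    for rank, msg in enumerate(ranked):
        msg.priority, msg.conflict = rank, False
    return ranked  # consensus priority order

msgs = [PrecedenceMsg("av-2", 4.0), PrecedenceMsg("av-1", 4.0),
        PrecedenceMsg("truck-9", 1.5)]
order = resolve_conflicts(assign_priorities(msgs))
print([(m.vehicle_id, m.priority) for m in order])
```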
Provided are methods for generating a proposed trajectory for a vehicle at a remote vehicle assistance system. One or more user inputs defining the proposed trajectory may be received at the remote vehicle assistance system. The proposed trajectory may be validated for compliance with one or more constraints imposed by a motion planner of the vehicle. Examples of constraints include the proposed trajectory being within a drivable surface and not colliding with one or more objects present in a surrounding environment of the vehicle. If the remote vehicle assistance system is unable to validate the proposed trajectory, the proposed trajectory may be redrawn instead of being sent to the vehicle. Systems and computer program products are also provided.
Provided are methods for a track refinement network. In examples, center boxes are obtained from a record of driving data, wherein a center box is a center of a sequence of boxes along a track, and the track is associated with a tracked object detected within the sequence of boxes, each respective box comprising a center, a size, and an orientation. Track windows are generated around respective center boxes, wherein a track window corresponds to a respective center box along the track. Track windows are cropped and normalized with respect to center boxes to enable a single refinement model for multiple object classes. Point cloud features and trajectory features are extracted from the cropped and normalized track windows. The point cloud features and trajectory features are input into a track refinement network, wherein the track refinement network uses features from the entire track to output a refined center, a refined size, and a refined orientation of each respective center box.
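The crop-and-normalize step can be sketched directly: expressing each window in the local frame of its center box is what allows one refinement model to serve all object classes. Shapes and field names below are assumptions.

```python
# A minimal sketch of cropping a window of boxes along a track and
# normalizing it with respect to the center box (translate, then rotate).
import numpy as np

def crop_track_window(track_boxes, idx, half_window=2):
    """Take boxes around index `idx` along the track (clipped at the ends)."""
    lo = max(0, idx - half_window)
    hi = min(len(track_boxes), idx + half_window + 1)
    return track_boxes[lo:hi]

def normalize_track_window(window_points, center_xy, center_yaw):
    """Map window points into the center box's local frame."""
    c, s = np.cos(-center_yaw), np.sin(-center_yaw)
    rot = np.array([[c, -s], [s, c]])
    return (window_points - center_xy) @ rot.T

# Track of (x, y) box centers; the center box is the middle one.
track = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.0]])
window = crop_track_window(track, idx=2)
local = normalize_track_window(window, center_xy=track[2], center_yaw=np.pi / 2)
print(local)  # the center box maps to the origin; neighbors rotate around it
```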
Provided are methods for generation of representations of radar data. Some methods described include: receiving ADC raw data of a radar sensor of a vehicle; performing range FFT, Doppler FFT, and azimuth FFT on the ADC raw data; generating a 1D range heat map tensor representing the range FFT, a 2D RD heat map tensor representing a combination of the range FFT and the Doppler FFT, a 2D RA heat map tensor representing a combination of the range FFT and the azimuth FFT, or a 3D RAD matrix tensor representing a combination of the range FFT, the Doppler FFT, and the azimuth FFT; and inputting at least one of the 1D range heat map tensor, the 2D RD heat map tensor, the 2D RA heat map tensor, or the 3D RAD matrix tensor, to a machine learning model for detecting objects on a road network around the vehicle.
G01S 7/41 - Details of systems according to groups , , of systems according to group using analysis of echo signal for target characterisation; Target signature; Target cross-section
G01S 13/86 - Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
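The tensor construction above reduces to FFTs over the axes of the ADC data cube. The sketch below assumes the cube is organized as (samples per chirp, chirps, antennas) so that FFTs along axes 0, 1, and 2 give range, Doppler, and azimuth respectively; sizes are illustrative.

```python
# A minimal sketch of building the 1D range, 2D RD, 2D RA, and 3D RAD
# heat-map tensors from radar ADC raw data, under the stated layout.
import numpy as np

def radar_heat_maps(adc):                      # adc: (range, doppler, azimuth)
    r = np.fft.fft(adc, axis=0)                # range FFT
    rd = np.fft.fft(r, axis=1)                 # + Doppler FFT
    ra = np.fft.fft(r, axis=2)                 # + azimuth FFT
    rad = np.fft.fft(rd, axis=2)               # range + Doppler + azimuth
    return {
        "range_1d": np.abs(r).mean(axis=(1, 2)),   # 1D range heat map
        "rd_2d": np.abs(rd).mean(axis=2),          # 2D range-Doppler map
        "ra_2d": np.abs(ra).mean(axis=1),          # 2D range-azimuth map
        "rad_3d": np.abs(rad),                     # 3D RAD matrix tensor
    }

adc = np.random.randn(256, 64, 8) + 1j * np.random.randn(256, 64, 8)
for name, t in radar_heat_maps(adc).items():
    print(name, t.shape)   # any of these can feed the detection model
```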
75.
METHODS AND SYSTEMS FOR TRAFFIC LIGHT LABELLING VIA MOTION INFERENCE
Provided are methods for offline perception motion inference, which can include obtaining map data indicative of an environment and obtaining data associated with at least one agent. The method can include determining a trajectory for the agent and matching the trajectory of the agent with a lane connector. The method can also include determining a traffic light parameter. Systems and computer program products are also provided.
Provided are devices for camera cleaning and flare reduction for vehicles, which can include a cleaning device for cleaning a transparent window of a housing containing a camera system. The devices can include optical panels with different properties that can be moved in front of the transparent window for flare reduction. Methods are provided which can include analyzing at least one image to determine a presence of an optical flare within the at least one image, and, based on detecting the optical flare, causing at least one optical panel to be moved into a position in front of a lens of the imaging device. Computer program products are also provided.
Provided are methods for graph forward search exploration, which can include detecting a plurality of obstacles along a first trajectory of a vehicle. Some methods described also include determining a plurality of valid combinations of a plurality of trajectories to handle the plurality of obstacles. Some methods described also include generating a reduced decision tree based at least on the valid combinations of the plurality of trajectories by at least excluding a second trajectory of the plurality of trajectories associated with an obstacle of a plurality of obstacles based on a position of the obstacle being outside of a corridor defined by a spatial range and/or a temporal range. Some methods described also include selecting an optimal trajectory of the vehicle from the plurality of trajectories of the reduced decision tree. Systems and computer program products are also provided.
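The corridor-based pruning rule above can be sketched as a filter applied before enumerating decision combinations; the corridor bounds and the yield/pass options below are illustrative assumptions.

```python
# A minimal sketch: obstacles outside the spatial/temporal corridor are
# dropped, shrinking the decision tree the planner must search.
from itertools import product

def in_corridor(obstacle, s_range=(0.0, 50.0), t_range=(0.0, 6.0)):
    """Keep obstacles within the corridor's station and time bounds."""
    return (s_range[0] <= obstacle["s"] <= s_range[1]
            and t_range[0] <= obstacle["t"] <= t_range[1])

def reduced_decision_tree(obstacles, options=("yield", "pass")):
    """Enumerate per-obstacle decisions only for in-corridor obstacles."""
    relevant = [o for o in obstacles if in_corridor(o)]
    # Each leaf is one valid combination of decisions over the obstacles.
    return relevant, list(product(options, repeat=len(relevant)))

obstacles = [
    {"id": "ped-1", "s": 12.0, "t": 2.0},   # inside the corridor
    {"id": "car-7", "s": 80.0, "t": 1.0},   # beyond the spatial range: pruned
]
relevant, leaves = reduced_decision_tree(obstacles)
print([o["id"] for o in relevant], len(leaves))  # ['ped-1'] 2 instead of 4
```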
Provided are vehicle sensor lens hoods for reduction of stray light on an image sensor lens, which can include a body, a first aperture, and a second aperture. The lens hood can further be associated with a structure, such as a fascia, of a vehicle. Vehicles and autonomous vehicle products are also provided.
Provided are methods for dynamic orchestration of modems in an autonomous vehicle, which can include an SoC determining performance parameters from a plurality of modems and controlling communication of data between ECUs and the modems. Some methods described also include an SoC acting as a forwarding daemon, communicating with a switch, and/or providing configuration data. Systems and computer program products are also provided.
This application relates to sensor systems and related devices and methods. A sensor device can include an image sensor configured to generate image data based on infrared light received at the image sensor, a primary lens positioned to direct light from a scene onto the image sensor, and an array of microlenses positioned between the image sensor and the primary lens, each microlens configured to focus the light from the scene onto a corresponding portion of the image sensor. A method can include receiving plenoptic image data based on infrared light, generating a two-dimensional image of a scene, and determining a depth to an object represented by a pixel of the two-dimensional image based on the plenoptic image data.
H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
H04N 23/11 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths for generating image signals from visible and infrared light wavelengths
Provided are methods for generating a colored map layer. Some methods described include: receiving, with at least one processor, a point cloud from a pose graph; receiving, with the at least one processor, an image from a driving log of a vehicle, the image corresponding to the point cloud from the pose graph; obtaining, with the at least one processor, image pixel labels for the image; projecting, with the at least one processor, the point cloud in a point cloud coordinate system to an image coordinate system based on the image pixel labels; generating, with the at least one processor, a six-dimensional colored point cloud by combining the point cloud and color information from the image; and transforming, with the at least one processor, the six-dimensional colored point cloud into five-dimensional map tiles to form a colored map layer. Systems and computer program products are also provided.
Provided are methods for camera-assisted LiDAR data verification. A vehicle (such as an autonomous vehicle) has multiple sensors mounted at various locations on the vehicle. Data from these sensors can be used for object detection. In object detection, sensor data is analyzed to annotate portions of the sensor data with confidence scores that indicate the presence of a particular object class instance within a respective portion of the data captured by a sensor. Systems and computer program products are also provided.
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations
Among other things, techniques are described for path-based trajectory prediction (e.g., for autonomous driving). The technique includes: obtaining information about one or more agents in a portion of an environment where a vehicle is operating, the information including data associated with locations of the one or more agents in the portion of the environment; for each of at least one agent of the one or more agents, determining a reference path for the agent based on a location of the agent in the environment, and predicting a future trajectory of the agent based on the reference path; and operating the vehicle based on one or more predicted future trajectories of the at least one agent of the one or more agents.
Provided are systems, methods, and computer program products for ensemble based vehicle motion planning. A model ensemble including a routing model and multiple planning models may be trained and applied to generate a trajectory for navigating a vehicle in a scenario. In some cases, the routing model may select, from multiple candidate trajectories generated by the planning models based on the scenario, the trajectory generated by the best performing planning model. Alternatively, the routing model may successively activate one or more of the planning models to generate one or more candidate trajectories based on the scenario until the routing model identifies a trajectory satisfying one or more criteria.
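Both routing behaviors the abstract names reduce to a few lines; the planner stubs, scoring function, and threshold below are illustrative assumptions, not the trained model ensemble.

```python
# A minimal sketch of the two routing modes: pick the best candidate from
# all planners, or activate planners in turn until one result is acceptable.
def route_best_of_all(planners, scenario, score):
    """Run every planner; return the highest-scoring trajectory."""
    candidates = [plan(scenario) for plan in planners]
    return max(candidates, key=score)

def route_until_satisfied(planners, scenario, score, threshold):
    """Activate planners in order; stop at the first acceptable result."""
    for plan in planners:
        trajectory = plan(scenario)
        if score(trajectory) >= threshold:
            return trajectory
    return None  # no planner produced an acceptable trajectory

# Stand-in "planners" that return (name, quality-score) trajectories.
planners = [lambda s: ("conservative", 0.6), lambda s: ("nominal", 0.9)]
score = lambda traj: traj[1]
print(route_best_of_all(planners, {}, score))           # ('nominal', 0.9)
print(route_until_satisfied(planners, {}, score, 0.5))  # ('conservative', 0.6)
```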
Provided are methods for graph forward search exploration, which can include detecting a plurality of obstacles along a first trajectory of a vehicle. Some methods described also include determining a plurality of valid combinations of a plurality of trajectories to handle the plurality of obstacles. Some methods described also include generating a reduced decision tree based at least on the valid combinations of the plurality of trajectories by at least excluding a second trajectory of the plurality of trajectories associated with an obstacle of a plurality of obstacles based on a position of the obstacle being outside of a corridor defined by a spatial range and/or a temporal range. Some methods described also include selecting an optimal trajectory of the vehicle from the plurality of trajectories of the reduced decision tree. Systems and computer program products are also provided.
Provided are methods for a track refinement network. In examples, center boxes are obtained from a record of driving data, wherein a center box is a center of a sequence of boxes along a track, and the track is associated with a tracked object detected within the sequence of boxes, each respective box comprising a center, a size, and an orientation. Track windows are generated around respective center boxes, wherein a track window corresponds to a respective center box along the track. Track windows are cropped and normalized with respect to center boxes to enable a single refinement model for multiple object classes. Point cloud features and trajectory features are extracted from the cropped and normalized track windows. The point cloud features and trajectory features are input into a track refinement network, wherein the track refinement network uses features from the entire track to output a refined center, a refined size, and a refined orientation of each respective center box.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations
87.
DEVICES, SYSTEMS, AND METHODS FOR CAMERA CLEANING AND FLARE REDUCTION FOR VEHICLES
Provided are devices for camera cleaning and flare reduction for vehicles, which can include a cleaning device for cleaning a transparent window of a housing containing a camera system. The devices can include optical panels with different properties that can be moved in front of the transparent window for flare reduction. Methods are provided which can include analyzing at least one image to determine a presence of an optical flare within the at least one image, and, based on detecting the optical flare, causing at least one optical panel to be moved into a position in front of a lens of the imaging device. Computer program products are also provided.
Provided are methods for customized tags for annotating sensor data, which can include receiving vehicle data associated with a vehicle, identifying a location of the vehicle based at least in part on the vehicle data, identifying a feature, in a map based coordinate system, associated with the location of the vehicle, transposing a location of the feature from the map based coordinate system to an image sensor based coordinate system, identifying a region of interest in the image sensor based coordinate system based at least in part on the feature, and routing the region of interest to an image sensor for image signal processing of sensor data based on the region of interest. Systems and computer program products are also provided.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
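The transposition from map coordinates to image sensor coordinates can be sketched with a pinhole model; K, the map-to-camera transform, and the ROI padding below are illustrative assumptions.

```python
# A minimal sketch: project a map-frame 3-D feature into pixels and pad
# the projection into a region of interest for the image signal processor.
import numpy as np

def feature_to_roi(feature_map_xyz, map_to_cam, K, pad_px=24):
    """Project a map-frame feature into pixels; return an ROI box or None."""
    p = map_to_cam @ np.append(feature_map_xyz, 1.0)   # camera frame
    if p[2] <= 0:
        return None                                    # behind the camera
    uvw = K @ p[:3]
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
    # Pad the projected point into a box the ISP can prioritize.
    return (int(u - pad_px), int(v - pad_px), int(u + pad_px), int(v + pad_px))

K = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]])
map_to_cam = np.eye(4)            # vehicle at the map origin, axis-aligned
roi = feature_to_roi(np.array([2.0, 0.5, 20.0]), map_to_cam, K)
print(roi)  # e.g. an ROI around a mapped traffic light for the ISP
```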
Provided are methods for rolling shutter compensation, which can include causing a rolling shutter camera to capture an image of an environment; determining a first position in the environment, the first position captured in the image; and determining, by the at least one processor, a second position in the image that corresponds to the first position in the environment. Determining the second position includes applying a first approximation that relative movement between the rolling shutter camera and the first position is linear during the capture of the image, applying a second approximation that a distortion function characterizing the rolling shutter camera is linear, and based at least on the first approximation and the second approximation, determining a particular time at which light representative of the first position in the environment is detected by the rolling shutter camera. Systems and computer program products are also provided.
G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
B60R 1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
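The two linear approximations admit a closed form, worked below: with readout time t(row) = t0 + row * dt and the imaged row of the world point moving linearly, row(t) = r0 + v_r * t, the detection time is the fixed point of the two relations. Names and values are illustrative.

```python
# A worked sketch of the fixed point implied by the linear approximations:
#   t* = t0 + (r0 + v_r * t*) * dt   =>   t* = (t0 + r0 * dt) / (1 - v_r * dt)
def rolling_shutter_detection_time(t0, dt, r0, v_r):
    """Solve for when the moving image row coincides with the readout row."""
    denom = 1.0 - v_r * dt
    if abs(denom) < 1e-12:
        raise ValueError("row motion matches readout rate; no fixed point")
    return (t0 + r0 * dt) / denom

# 10 us per row, point initially at row 480, drifting 2000 rows/s upward.
t_star = rolling_shutter_detection_time(t0=0.0, dt=10e-6, r0=480.0, v_r=-2000.0)
print(f"detected {t_star * 1e3:.3f} ms into the frame")  # ~4.706 ms
```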
90.
ADAPTIVE READOUT CIRCUIT FOR TDC-BASED TIME-OF-FLIGHT (TOF) LIDAR
Various methods and systems are disclosed to improve detection probability of time of flight lidar systems by adaptively controlling the characteristics of a filter that filters signals received from a sensor of a detection system of the lidar, prior to event detection. A lidar system can generate background signals for individual pixels of the sensor and use the background signals to adaptively control the filter to improve the detection probability and reduce the false alarm rate of the lidar system.
Provided are methods, systems, and storage media for random traffic generation. Methods include determining parameters of a simulation including a volume, simulated agent types, and a simulated agent density, and initiating the simulation with a seed that identifies at least a starting location and a goal location of the simulation. Methods also include assigning goals to simulated agents within the volume, and executing the simulation, wherein the volume is updated responsive to motion of the simulated vehicle.
Provided are methods and systems for lane change and intent determination. A method for operating an autonomous vehicle is provided. The method includes obtaining first scene data associated with a scene of an autonomous vehicle. The method includes generating a plurality of trajectories for the autonomous vehicle. The method includes selecting a trajectory from the plurality of trajectories. The method includes determining a vehicle action intent of the selected trajectory. The method includes combining the vehicle action intent of the selected trajectory with a set of vehicle action intents to form a plurality of vehicle action intents. The set of vehicle action intents correspond to a set of trajectories generated prior to the selected trajectory from second scene data. The method includes selecting a vehicle action for the autonomous vehicle and causing the autonomous vehicle to initiate performance of the vehicle action based on the selecting of the vehicle action.
B60W 30/00 - Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
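The combination of the newest trajectory's intent with intents from previously generated trajectories can be sketched as a vote over a bounded history; the majority-vote combination rule below is an illustrative assumption.

```python
# A minimal sketch: pool the selected trajectory's action intent with
# prior intents and commit to the most consistent vehicle action.
from collections import Counter, deque

class IntentAggregator:
    def __init__(self, history=5):
        self.intents = deque(maxlen=history)  # intents of prior trajectories

    def select_action(self, new_intent):
        """Add the selected trajectory's intent, then vote over the set."""
        self.intents.append(new_intent)
        (action, votes), = Counter(self.intents).most_common(1)
        return action

agg = IntentAggregator(history=4)
for intent in ["keep_lane", "lane_change_left", "lane_change_left",
               "lane_change_left"]:
    action = agg.select_action(intent)
print(action)  # 'lane_change_left': commit once the intent is consistent
```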
Provided are vehicle sensor lens hoods for reduction of stray light on an image sensor lens, which can include a body, a first aperture, and a second aperture. The lens hood can further be associated with a structure, such as a fascia, of a vehicle. Vehicles and autonomous vehicle products are also provided.
G03B 11/04 - Hoods or caps for eliminating unwanted light from lenses, viewfinders, or focusing aids
B60R 11/04 - Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups ,
G03B 30/00 - Camera modules comprising integrated lens units and imaging units, specially adapted for being embedded in other devices, e.g. mobile phones or vehicles
Provided are methods for lateral path commitment, which can include obtaining scene data associated with a scene of an autonomous vehicle, the autonomous vehicle navigating the scene according to a first trajectory, detecting a change to the scene based on the scene data, and based on the change to the vehicle scene, generating a plurality of second trajectories, wherein the plurality of second trajectories are different from the first trajectory. Some methods described also include comparing the plurality of second trajectories with the first trajectory, selecting a particular trajectory of the plurality of second trajectories for the autonomous vehicle based on a lateral movement plan and the comparing the plurality of second trajectories with the first trajectory, and navigating the autonomous vehicle according to the particular trajectory. Systems and computer program products are also provided.
Various methods and systems are disclosed to improve detection probability of time of flight lidar systems by adaptively controlling the characteristics of optical probe signals emitted by the lidar system. A lidar system can generate background signals for individual pixels of a sensor of the lidar detection system and use the background signals to adaptively control the pulse coded optical probe signal to improve the detection probability and reduce the false alarm rate of the lidar system.
Provided are methods for dynamic orchestration of modems in an autonomous vehicle, which can include an SoC determining performance parameters from a plurality of modems and controlling communication of data between ECUs and the modems. Some methods described also include an SoC acting as a forwarding daemon, communicating with a switch, and/or providing configuration data. Systems and computer program products are also provided.
H04L 67/61 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources taking into account QoS or priority requirements
H04L 67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
H04L 67/60 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
Provided are methods for attribute verification to enable a destination, which can include obtaining a ride request requesting a ride for an item from a starting location to a destination, obtaining credential data indicative of one or more credentials indicative of the item, verifying the one or more credentials indicative of the item, and generating route data indicative of a route indicative of the ride by an autonomous vehicle based on the ride request. Systems and computer program products are also provided.
A perception system may be used to generate bounding boxes for objects in a vehicle scene. The perception system may receive images and feature maps corresponding to the received images. The perception system may generate scene dependent radar-based object queries. The perception system may use the generated scene dependent radar-based object queries and scene independent object queries to generate one or more bounding boxes for objects in the vehicle scene.
Provided are systems and methods for autonomous driving based on human-driven data, which can include obtaining sensor data associated with an environment in which a vehicle operates, determining a set of candidate trajectories, determining a human-driven trajectory, generating a trajectory score for one or more candidate trajectories of the set of candidate trajectories, and causing an output to be provided to a device based on the trajectory score associated with the one or more candidate trajectories, wherein the output comprises one or more of: the human-driven trajectory, the one or more candidate trajectories, and the one or more trajectory scores. Systems and computer program products are also provided.
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
B60W 30/165 - Control of distance between vehicles, e.g. keeping a distance to preceding vehicle automatically following the path of a preceding lead vehicle, e.g. "electronic tow-bar"
Provided are methods for gridlock prevention, which can include obtaining sensor data, obtaining an intersection parameter, and determining a constraint. Some methods described also include generating trajectories and providing data associated with a selected trajectory, such as for operation of an autonomous vehicle along the trajectory. Systems and computer program products are also provided.