Techniques for improving the performance of an autonomous vehicle (AV) by automatically annotating objects surrounding the AV are described herein. A system can obtain sensor data from a sensor coupled to the AV and generate an initial object trajectory for an object using the sensor data. Additionally, the system can determine a fixed value for the object size of the object based on the initial object trajectory. Moreover, the system can generate an updated initial object trajectory, wherein the object size corresponds to the fixed value. Furthermore, the system can determine, based on the sensor data and the updated initial object trajectory, a refined object trajectory. Subsequently, the system can generate a multi-dimensional label for the object based on the refined object trajectory. A motion plan for controlling the AV can be generated based on the multi-dimensional label.
Provided are systems and methods that perform multi-task and/or multi-sensor fusion for three-dimensional object detection in furtherance of, for example, autonomous vehicle perception and control. In particular, according to one aspect of the present disclosure, example systems and methods described herein exploit simultaneous training of a machine-learned model ensemble relative to multiple related tasks to learn to perform more accurate multi-sensor 3D object detection. For example, the present disclosure provides an end-to-end learnable architecture with multiple machine-learned models that interoperate to reason about 2D and/or 3D object detection as well as one or more auxiliary tasks. According to another aspect of the present disclosure, example systems and methods described herein can perform multi-sensor fusion (e.g., fusing features derived from image data, light detection and ranging (LIDAR) data, and/or other sensor modalities) at both the point-wise and region of interest (ROI)-wise level, resulting in fully fused feature representations.
G06T 7/55 - Depth or shape recovery from multiple images
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06T 11/60 - Editing figures and text; Combining figures or text
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
Techniques for improving the performance of an autonomous vehicle (AV) are described herein. A system can determine a plan for the AV in a driving scenario that optimizes an initial cost function of a control algorithm of the AV, the initial cost function comprising a plurality of cost dimensions and a plurality of weights applied to the plurality of cost dimensions. The system can obtain data describing an observed human driving path in the driving scenario. Additionally, the system can determine, for each cost dimension in the plurality of cost dimensions, a quantity that compares the estimated cost of the plan to the observed cost of the observed human driving path. Moreover, the system can determine a function of a sum of the quantities determined for each cost dimension in the plurality of cost dimensions. Subsequently, the system can use an optimization algorithm to adjust one or more weights of the plurality of weights applied to the plurality of cost dimensions to optimize the function of the sum of the quantities.
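As a rough illustration of the weight-tuning loop described above, the following Python sketch adjusts per-dimension weights so the planner's estimated costs track the observed human driving costs. The function name, the squared-sum objective, and the gradient-step update are assumptions for illustration, not taken from the disclosure.

```python
def tune_cost_weights(plan_costs, human_costs, weights, lr=0.1, steps=200):
    """Hypothetical sketch: adjust per-dimension cost weights so the plan's
    estimated costs approach the observed human driving costs."""
    w = list(weights)
    # Per-dimension gap between the plan's estimated cost and the human cost.
    diffs = [p - h for p, h in zip(plan_costs, human_costs)]
    for _ in range(steps):
        # Quantity per cost dimension: weighted cost gap; the "function of the
        # sum of the quantities" is modeled here as f(w) = (sum_i w_i * d_i)^2.
        total = sum(wi * di for wi, di in zip(w, diffs))
        # Gradient step on f(w); df/dw_i = 2 * total * d_i.
        w = [wi - lr * 2.0 * total * di for wi, di in zip(w, diffs)]
    return w
```

With a small enough step size the summed weighted gap shrinks toward zero, which is the behavior the optimization algorithm in the abstract is driving at, whatever its actual form.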
Systems and methods are provided for generating data indicative of a friction associated with a driving surface, and for using the friction data in association with one or more vehicles. In one example, a computing system can detect a stop associated with a vehicle and initiate a steering action of the vehicle during the stop. The steering action is associated with movement of at least one tire of the vehicle relative to a driving surface. The computing system can obtain operational data associated with the steering action during the stop of the vehicle. The computing system can determine a friction associated with the driving surface based at least in part on the operational data associated with the steering action. The computing system can generate data indicative of the friction associated with the driving surface.
B60T 8/1763 - Brake regulation specially adapted to prevent excessive wheel slip during vehicle deceleration, e.g. ABS responsive to the coefficient of friction between the wheels and the ground surface
B60T 8/171 - Detecting parameters used in the regulation; Measuring values used in the regulation
B60T 8/1755 - Brake regulation specially adapted to control the stability of the vehicle, e.g. taking into account yaw rate or transverse acceleration in a curve
A LIDAR system defining a first axis and a second axis is provided. The LIDAR system includes a first plurality of emitters and a second plurality of emitters. At least one of the first plurality of emitters is configured to emit a first laser beam at a first wavelength. Additionally, at least one of the second plurality of emitters is configured to emit a second laser beam at a second wavelength that is different than the first wavelength. The LIDAR system includes an optic configured to direct the first laser beam and the second laser beam in different directions to widen a field of view of the LIDAR system along the second axis.
Systems and methods of the present disclosure are directed to a computer-implemented method. The method can include obtaining a first plurality of testing parameters for an autonomous vehicle testing scenario associated with a plurality of performance metrics based at least in part on a first sampling rule. The method can include simulating the autonomous vehicle testing scenario using the first plurality of testing parameters to obtain a first scenario output. The method can include evaluating an optimization function that evaluates the first scenario output to obtain simulation error data that corresponds to a performance metric. The method can include determining a second sampling rule associated with the performance metric. The method can include obtaining a second plurality of testing parameters for the autonomous vehicle testing scenario based at least in part on the second sampling rule.
Systems and methods for generating object segmentations across videos are provided. An example system can enable an annotator to identify objects within a first image frame of a video sequence by clicking anywhere within the object. The system processes the first image frame and a second, subsequent, image frame to assign each pixel of the second image frame to one of the objects identified in the first image frame or the background. The system refines the resulting object masks for the second image frame using a recurrent attention module based on contextual features extracted from the second image frame. The system receives additional user input for the second image frame and uses the input, in combination with the object masks for the second image frame, to determine object masks for a third, subsequent, image frame in the video sequence. The process is repeated for each image in the video sequence.
A sensor platform for mounting a Lidar unit and a camera to an autonomous vehicle is provided. The sensor platform includes a Lidar mount couplable to the autonomous vehicle. The Lidar mount includes a first surface and a second surface that is different from the first surface. The sensor platform further includes a camera mount couplable to the Lidar mount. When the Lidar unit is coupled to the Lidar mount, the Lidar unit is coupled to the first surface of the Lidar mount. Furthermore, when the camera mount is coupled to the Lidar mount, the camera mount is coupled to the second surface of the Lidar mount.
Systems and methods for generating performance metrics for autonomous vehicle systems are provided. The performance metrics include two complementary metrics that evaluate a machine-learning object prediction model relative to a number of potential trajectories of an autonomous vehicle. The performance metrics include an avoidance metric that quantifies a probability that a region occupied by a real-world or simulated object is reached by the autonomous vehicle, the region is not blocked by another object, and the region is not blocked by a prediction output by the machine-learning object prediction model. The performance metrics also include an availability metric that quantifies a probability that a simulated or real-world object is not located within a region, the region is not blocked by another simulated or real-world object, and the autonomous vehicle is unnecessarily blocked by the prediction output before the autonomous vehicle reaches the region.
The present disclosure is directed to altering vehicle paths. In particular, a computing system can access map data for a geographic area. The computing system can obtain target zone data describing a target zone within the geographic area. The computing system can determine an altered nominal path to traverse the target zone. The computing system can designate a portion of the altered nominal path as a designated action region associated with the target zone. The computing system can generate a longitudinal plan for an autonomous vehicle through the geographic area based on the altered nominal path. The computing system can generate a target velocity for one or more portions of the nominal path within the designated action region. The computing system can generate a trajectory for the autonomous vehicle based on the target velocity and the altered nominal path.
Systems and methods for basis path generation are provided. In particular, a computing system can obtain a target nominal path. The computing system can determine a current pose for an autonomous vehicle. The computing system can determine, based at least in part on the current pose of the autonomous vehicle and the target nominal path, a lane change region. The computing system can determine one or more merge points on the target nominal path. The computing system can, for each respective merge point in the one or more merge points, generate a candidate basis path from the current pose of the autonomous vehicle to the respective merge point. The computing system can generate a suitability classification for each candidate basis path. The computing system can select one or more candidate basis paths based on the suitability classification for each respective candidate basis path in the plurality of candidate basis paths.
G01C 21/26 - Navigation; Navigational instruments not provided for in groups specially adapted for navigation in a road network
B60W 30/00 - Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
12.
LIGHT DETECTION AND RANGING (LIDAR) SYSTEM HAVING A POLARIZING BEAM SPLITTER
A LIDAR system includes a plurality of LIDAR units. Each of the LIDAR units includes a housing defining a cavity. Each of the LIDAR units further includes a plurality of emitters disposed within the cavity. Each of the plurality of emitters is configured to emit a laser beam. The LIDAR system includes a rotating mirror and a retarder. The retarder is configurable in at least a first mode and a second mode to control a polarization state of a plurality of laser beams emitted from each of the plurality of LIDAR units. The LIDAR system includes a polarizing beam splitter positioned relative to the retarder such that the polarizing beam splitter receives a plurality of laser beams exiting the retarder. The polarizing beam splitter is configured to transmit or reflect the plurality of laser beams exiting the retarder based on the polarization state of the laser beams exiting the retarder.
A LIDAR system is provided. The LIDAR system includes a plurality of emitters respectively configured to emit a light signal along a transmit path. The LIDAR system includes a plurality of optics positioned along the transmit path. The plurality of optics includes a collimator optic having a primary optical power along a first axis. The plurality of optics further include one or more transmit optics positioned along the transmit path between the plurality of emitters and the collimator optic. Furthermore, the one or more transmit optics have a primary optical power along a second axis.
A LIDAR system includes a LIDAR unit. The LIDAR unit includes a housing defining a cavity. The LIDAR unit further includes a plurality of emitters disposed on a circuit board within the cavity. The emitters respectively emit a laser beam along a transmit path. The LIDAR system further includes a first optic rotatable about a first axis at a first rotational speed and a second optic rotatable about a second axis at a second rotational speed that is faster than the first rotational speed. The first optic is positioned relative to the LIDAR unit such that a plurality of laser beams exiting the LIDAR unit pass through the first optic. The second optic is positioned relative to the first optic such that a plurality of refracted laser beams exiting the first optic reflect off of the second optic.
Systems and methods for a moveable cover panel of an autonomous vehicle are provided. A vehicle can include a front panel disposed proximate to the front end of the passenger compartment, a vehicle motion control device located at the front panel, and a cover panel located at the front panel. The cover panel is moveable relative to the front panel between an isolating position and an exposing position. The cover panel can isolate the vehicle motion control device from the passenger compartment when in the isolating position and expose the vehicle motion control device to the passenger compartment when in the exposing position. A method can include obtaining vehicle data identifying an operational mode, state, and/or status of the vehicle, determining a first position of the cover panel, and initiating a positional change for the cover panel based on the vehicle data and the first position.
B60R 13/02 - Trim mouldings; Ledges; Wall liners; Roof liners
B60R 21/205 - Arrangements for storing inflatable members in their non-use or deflated condition; Arrangement or mounting of air bag modules or components in dashboards
B62D 1/183 - Steering columns yieldable or adjustable, e.g. tiltable; adjustable between in-use and out-of-use positions, e.g. to improve access
16.
PASSENGER SEATS AND DOORS FOR AN AUTONOMOUS VEHICLE
An autonomous vehicle can include one or more configurable passenger seats to accommodate a plurality of different seating configurations. For instance, the one or more passenger seats can include a passenger seat defining a seating orientation. The passenger seat can be configurable in a first configuration in which the seating orientation is directed towards a forward end of the autonomous vehicle and a second configuration in which the seating orientation is directed towards a rear end of the autonomous vehicle. The passenger seat can include a seatback rotatable about a pivot point on a base of the passenger seat to switch between the first configuration and the second configuration. Alternatively, or additionally, the autonomous vehicle can include a door assembly pivotably fixed to a vehicle body of the autonomous vehicle such that a swept path of the door assembly when moving between an open position and a closed position is reduced.
B60N 2/01 - Arrangement of seats relative to one another
B60N 2/20 - Seats specially adapted for vehicles; Arrangement or mounting of seats in vehicles; the seat or part thereof being movable, e.g. adjustable; the back-rest being tiltable, e.g. to permit easy access
B60N 2/30 - Non-dismountable seats storable in a non-use position, e.g. foldable spare seats
B60N 2/856 - Head-rests movable or adjustable tiltable movable to an inoperative or stowed position
B60N 2/874 - Head-rests movable or adjustable movable to an inoperative or stowed position
17.
SYSTEMS AND METHODS FOR SEAT RECONFIGURATION FOR AUTONOMOUS VEHICLES
Systems and methods for reconfiguring seats of an autonomous vehicle are provided. The method includes obtaining service request data that includes a service selection and request characteristics. The method includes obtaining data describing an initial seat configuration for each of a plurality of seats of an autonomous vehicle assigned to the service request. The initial seat configuration can include a seat position and a seat orientation for each of the plurality of seats. The method includes generating, based on the initial seat configuration and the service request data, seat adjustment instructions configured to adjust the initial seat configuration of at least one of the seats. The method includes providing the seat adjustment instructions to the autonomous vehicle assigned to the service request.
A method of scanning a surrounding environment using a light detection and ranging (LIDAR) system is disclosed. The method comprises emitting a light signal from an emitter (220) through one or more first lenses (322) along a transmit path (320) of a channel (300) of the LIDAR system to provide a transmit signal (321) into a surrounding environment; imparting a first motion (350) to the one or more first lenses (322) to provide for steering of the transmit signal (321) in the surrounding environment; and detecting, at a receiver, a return signal (331) corresponding to a reflection of the transmit signal (321), the return signal (331) received via one or more second lenses disposed along a receive path (330). An actuator (340) used to impart motion to a first axis collimating lens (322) in the transmit path (320) can also be used to impart motion to a focus lens (338) in the receive path (330). The actuator can oscillate the first axis collimating lens (322) in the vertical direction so as to steer the transmit signal (321) in the surrounding environment along a vertical axis (10). An actuator (345) can impart an oscillating motion to a second axis lens (324) along, e.g., a horizontal axis (20). The LIDAR system can be implemented onboard an autonomous vehicle to generate more robust point clouds for enhanced object detection.
A LIDAR system is provided. The LIDAR system includes an emitter. The emitter includes a light source and one or more lenses positioned along a transmit path. The light source is configured to emit a primary laser beam through the one or more lenses in the transmit path to provide a transmit beam. The LIDAR system includes a receiver spaced apart from the emitter. The receiver includes one or more lenses positioned along a receive path such that the one or more lenses receive a reflected laser beam. The LIDAR system includes an optical element positioned along the transmit path. The optical element is configured to direct a portion of the primary laser beam in a direction towards the receive path as a secondary laser beam.
Systems and methods are directed to a method for assured autonomous vehicle compute processing. The method can include providing sensor data to first and second functional circuitry of an autonomy computing system. The first and second functional circuitry can be configured to generate first and second outputs associated with a first autonomous compute function. The method can include generating, by the first and second functional circuitry in response to the sensor data, first and second output data associated with the first autonomous compute function. The method can include generating, by monitoring circuitry of the autonomy computing system, comparative data associated with differences between the first output data and the second output data. The method can include generating one or more vehicle control signals for the autonomous vehicle based at least in part on the comparative data.
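The redundant-compute pattern above can be illustrated with a minimal Python sketch: two channels produce outputs for the same compute function, a monitor generates comparative data from their differences, and the control signal falls back to a safe command on disagreement. All names, the tolerance, and the zero-command fallback are illustrative assumptions, not details from the disclosure.

```python
def compare_outputs(primary, secondary, tolerance=1e-3):
    """Hypothetical monitoring step: produce comparative data describing
    differences between the two functional circuits' outputs."""
    diffs = [abs(a - b) for a, b in zip(primary, secondary)]
    return {"max_diff": max(diffs), "agree": all(d <= tolerance for d in diffs)}

def select_control_signal(primary, secondary, tolerance=1e-3):
    """Generate a vehicle control signal based on the comparative data:
    use the primary channel if the channels agree, else a placeholder
    safe (zero) command."""
    report = compare_outputs(primary, secondary, tolerance)
    return primary if report["agree"] else [0.0] * len(primary)
```

A real assured-compute design would likely use voting across more than two channels and a richer degraded-mode policy; this only shows the compare-then-gate shape.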
The present disclosure is directed to controlling state transitions in an autonomous vehicle. In particular, a computing system can initiate the autonomous vehicle into a no-authorization state upon startup. The computing system can receive an authorization request. The computing system can determine whether the authorization request includes a request to enter a first mode of operations or a second mode of operations, wherein the first mode of operations is associated with the autonomous vehicle being operated without a human operator and the second mode of operations is associated with the autonomous vehicle being operable by a human operator. The computing system can transition the autonomous vehicle from the no-authorization state into a standby state in response to determining the authorization request includes a request to enter the first mode of operations or into a manual-controlled state in response to determining the authorization request is a request to enter the second mode of operations.
Systems, devices, and methods for cleaning a sensor are provided. A sensor can include a housing, a rotatable disc, a plurality of nozzles, an inlet configured to receive a pressurized fluid, and one or more distribution channels configured to direct the pressurized fluid from the inlet to the rotatable disc. The housing can include a curved lens and a channel circumferentially positioned around a base of the housing. The rotatable disc can be positioned within the channel and configured to rotate around at least a portion of the housing within the channel. The plurality of nozzles can be positioned on the rotatable disc. The rotatable disc can direct the pressurized fluid onto at least a portion of the curved lens via the plurality of nozzles, thereby causing the rotatable disc to rotate around at least the portion of the housing.
In some examples, a control system for a vehicle can detect a set of ultrasonic signals generated by a mobile computing device of a user. Additionally, the control system can determine a pin code from the set of ultrasonic signals. Moreover, the control system can perform one or more vehicle operations to initiate fulfillment of a transport request that is associated with the determined pin code, upon the user being determined to be within a given proximity distance of the vehicle.
H04W 4/40 - Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
B60R 25/01 - Fittings or systems for preventing or indicating unauthorised use or theft of vehicles operating on vehicle systems or fittings, e.g. on doors, seats or windscreens
G01S 5/18 - Position-fixing by co-ordinating two or more direction or position-line determinations; Position-fixing by co-ordinating two or more distance determinations using ultrasonic, sonic, or infrasonic waves
24.
SYSTEMS AND METHODS FOR SERVICING MESSAGES IN AN AUTONOMOUS VEHICLE
Systems and methods for servicing vehicle messages utilizing a common communications infrastructure are provided. A method includes obtaining, by a vehicle computing system onboard an autonomous vehicle, a message including a cryptographic signature and contextual information, a request, or a command. The method includes determining an originating sender associated with the message based on the cryptographic signature and determining a message privilege for the message based on the originating sender. The originating sender is a process that generated the message. The method includes determining a response for the contextual information, the request, or the command based on the message privilege. The response is indicative of an acceptance or rejection of contextual information, an assignment of computing resources to a request, or an acceptance or rejection of a command. The method includes performing a vehicle action based on the response.
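A toy Python sketch of the verify-then-gate flow above: the originating sender is determined from the message's signature, a privilege is looked up for that sender, and the response is an acceptance or rejection. The shared-secret HMAC scheme, the registry dictionaries, and all identifiers here are illustrative assumptions; a production system would use asymmetric signatures and a proper key infrastructure.

```python
import hashlib
import hmac

# Hypothetical registries: signing key and privilege level per sender process.
SENDER_KEYS = {"perception": b"key-a", "remote-op": b"key-b"}
PRIVILEGES = {"perception": "context-only", "remote-op": "command"}

def service_message(sender_id, payload, signature):
    """Determine the originating sender from the cryptographic signature,
    look up the message privilege, and accept or reject accordingly."""
    key = SENDER_KEYS.get(sender_id)
    if key is None:
        return "reject"  # unknown sender: no privilege can be determined
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return "reject"  # signature does not match the claimed sender
    # The privilege gates what response the message can elicit.
    return "accept:" + PRIVILEGES[sender_id]
```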
Systems, methods, tangible non-transitory computer-readable media, and devices associated with radar validation and calibration are provided. For example, target positions for targets can be determined based on imaging devices. The targets can be located at respective predetermined positions relative to the imaging devices. Radar detections of the targets can be generated based on radar devices. The radar devices can be located at a predetermined position relative to the imaging devices. Filtered radar detections can be generated based on performance of filtering operations on the radar detections. A detection error can be determined for the radar devices based on calibration operations performed using the filtered radar detections and the target positions determined based on the one or more imaging devices. Furthermore, the radar devices can be calibrated based on the detection error.
Systems and methods of the present disclosure are directed to a method. The method can include obtaining simplified scenario data associated with a simulated scenario. The method can include determining, using a machine-learned perception-prediction simulation model, a simulated perception-prediction output based at least in part on the simplified scenario data. The method can include evaluating a loss function comprising a perception loss term and a prediction loss term. The method can include adjusting one or more parameters of the machine-learned perception-prediction simulation model based at least in part on the loss function.
Systems and methods for integrating radar and LIDAR data are disclosed. In particular, a computing system can access radar sensor data and LIDAR data for the area around the autonomous vehicle. The computing system can determine, using one or more machine-learned models, one or more objects in the area of the autonomous vehicle. The computing system can, for a respective object, select a plurality of radar points from the radar sensor data. The computing system can generate a similarity score for each selected radar point. The computing system can generate a weight associated with each radar point based on the similarity score. The computing system can calculate a predicted velocity for the respective object based on a weighted average of a plurality of velocities associated with the plurality of radar points. The computing system can generate a proposed motion plan based on the predicted velocity for the respective object.
G01S 7/41 - Details of systems according to groups G01S 13/00, G01S 15/00 or G01S 17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
G01S 13/58 - Velocity or trajectory determination systems; Sense-of-movement determination systems
G01S 13/86 - Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
G01S 13/931 - Radar or analogous systems, specially adapted for specific applications for anti-collision purposes of land vehicles
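The similarity-weighted velocity estimate described in the radar-LIDAR fusion entry above can be sketched as follows. The Gaussian similarity function, the velocity "hint", and the scalar-velocity simplification are illustrative assumptions; the actual models are machine-learned and operate on richer radar returns.

```python
import math

def fuse_radar_velocity(radar_points, object_velocity_hint):
    """Hypothetical sketch: score each radar point's similarity to the
    object, convert scores to weights, and return the weighted average
    of the per-point velocities as the predicted object velocity."""
    weights, velocities = [], []
    for point in radar_points:
        # Similarity: points whose velocity agrees with the hint score higher,
        # down-weighting mis-associated or noisy returns.
        dv = point["velocity"] - object_velocity_hint
        similarity = math.exp(-dv * dv)
        weights.append(similarity)
        velocities.append(point["velocity"])
    total = sum(weights)
    # Weighted average of radar-point velocities for the object.
    return sum(w * v for w, v in zip(weights, velocities)) / total
```

The effect is that an outlier radar point (e.g. a return actually belonging to a neighboring object) contributes almost nothing to the averaged velocity.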
28.
SYSTEMS AND METHODS FOR OBJECT DETECTION AND MOTION PREDICTION BY FUSING MULTIPLE SENSOR SWEEPS INTO A RANGE VIEW REPRESENTATION
Systems and methods for detecting objects and predicting their motion are provided. In particular, a computing system can obtain a plurality of sensor sweeps. The computing system can determine movement data associated with movement of the autonomous vehicle. For each sensor sweep, the computing system can generate an image associated with the sensor sweep. The computing system can extract, using the respective image as input to one or more machine-learned models, feature data from the respective image. The computing system can transform the feature data into a coordinate frame associated with a next time step. The computing system can generate a fused image. The computing system can generate a final fused image. The computing system can predict, based at least in part on the final fused representation of the plurality of sensor sweeps, movement associated with the feature data at one or more time steps in the future.
Systems and methods for cooling a vehicle computing system are provided. A computing system can include a cooling baseplate including a first planar cooling surface and a second planar cooling surface. The computing system can further include one or more computing devices including a processor blade positioned on the first planar cooling surface, a coprocessor blade positioned on the second planar cooling surface, and a flexible connector coupled between the processor blade and the coprocessor blade. The flexible connector can be configured to transfer at least one of data or electric power between the processor blade and the coprocessor blade. The first planar cooling surface can be configured to transfer heat from the processor blade to a cooling fluid via conduction. The second planar cooling surface can be configured to transfer heat from the coprocessor blade to the cooling fluid via conduction.
The present disclosure is directed to a system for calibrating cameras with a fixed focal point. In particular, a camera calibration system comprising one or more computing devices can project a plurality of fiducial markers on a target surface using the plurality of collimators. The camera calibration system can capture, using the camera, a plurality of images of the target surface with the camera, wherein the camera is rotated between each captured image in the plurality of images. The camera calibration system can compare the plurality of images with a ground truth projection. The camera calibration system can generate calibration data based on the comparison of the plurality of images with the ground truth projection. The camera calibration system can store the calibration data for use in rectifying the camera.
B60R 1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
H04N 17/00 - Diagnosis, testing or measuring for television systems or their details
31.
CONDITIONAL ENTROPY CODING FOR EFFICIENT VIDEO COMPRESSION
The present disclosure is directed to video compression using conditional entropy coding. An ordered sequence of image frames can be transformed to produce an entropy coding for each image frame. Each of the entropy codings provide a compressed form of image information based on a prior image frame and a current image frame (the current image frame occurring after the prior image frame). In this manner, the compression model can capture temporal relationships between image frames or encoded representations of the image frames using a conditional entropy encoder trained to approximate the joint entropy between frames in the image frame sequence.
H04N 19/463 - Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
H04N 19/91 - Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
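The idea behind the conditional entropy coding entry above is that a frame costs fewer bits to code when its symbols are predictable from the prior frame. The following toy Python estimate uses empirical symbol-pair counts as a stand-in for the learned conditional entropy model; the function and its inputs are illustrative assumptions, not the disclosed architecture.

```python
import math
from collections import Counter

def conditional_entropy_bits(prior_frame, current_frame):
    """Toy estimate of the coding cost (in bits) of the current frame
    given the prior frame, from empirical P(current | prior)."""
    pairs = Counter(zip(prior_frame, current_frame))
    priors = Counter(prior_frame)
    bits = 0.0
    for (p, c), n in pairs.items():
        # Empirical conditional probability of the current symbol
        # given the co-located prior symbol.
        prob = n / priors[p]
        # Ideal code length is -log2(prob) bits per occurrence.
        bits += n * -math.log2(prob)
    return bits
```

An unchanged frame costs zero bits under this estimate, while an unpredictable frame costs the full entropy, which mirrors how conditioning on the prior frame exploits temporal redundancy.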
32.
SYSTEMS AND METHODS FOR VEHICLE-TO-VEHICLE COMMUNICATIONS FOR IMPROVED AUTONOMOUS VEHICLE OPERATIONS
Systems and methods for vehicle-to-vehicle communications are provided. An example computer-implemented method includes obtaining, by a computing system onboard a first autonomous vehicle, sensor data associated with an environment of the first autonomous vehicle. The method includes determining, by the computing system, an intermediate environmental representation of at least a portion of the environment of the first autonomous vehicle based at least in part on the sensor data. The method includes generating, by the computing system, a compressed intermediate environmental representation by compressing the intermediate environmental representation of at least the portion of the environment of the first autonomous vehicle. The method includes communicating, by the computing system, the compressed intermediate environmental representation to a second autonomous vehicle.
H04W 4/46 - Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for vehicle-to-vehicle communication [V2V]
G08G 1/01 - Detecting movement of traffic to be counted or controlled
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
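The compress-then-communicate step can be sketched as follows, assuming the intermediate environmental representation can be flattened to floats in [-1, 1]; the int8 quantization and zlib deflate used here are illustrative stand-ins, not the codec described in the abstract:

```python
import struct
import zlib

def compress_representation(features, scale=127.0):
    """Quantize a list of floats in [-1, 1] to int8 and deflate the bytes."""
    q = bytes((max(-127, min(127, round(f * scale))) & 0xFF) for f in features)
    return zlib.compress(q)

def decompress_representation(blob, scale=127.0):
    """Inflate and de-quantize back to floats (lossy by the quantization step)."""
    q = zlib.decompress(blob)
    # Reinterpret each unsigned byte as a signed int8, then rescale.
    return [struct.unpack('b', bytes([b]))[0] / scale for b in q]

# Invented feature vector standing in for the intermediate representation.
features = [0.0, 0.5, -0.5] * 100
blob = compress_representation(features)
restored = decompress_representation(blob)
```

The receiving vehicle can decompress and fuse the representation with its own; the quantization error is bounded by half a quantization step (about 0.004 here).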
Systems and methods for vehicle motion planning based on uncertainty are provided. A method can include obtaining scene data descriptive of one or more objects within a surrounding environment of an autonomous vehicle. The method can include determining one or more subproblems based at least in part on the scene data. In some implementations, each of the one or more subproblems can correspond to at least one object within the surrounding environment of the autonomous vehicle. The method can include generating one or more branching policies based at least in part on the one or more subproblems. In some implementations, each of the one or more branching policies can include scene data associated with the autonomous vehicle and one or more objects within the surrounding environment of the autonomous vehicle. The method can include determining one or more costs associated with each of the one or more branching policies. The method can include selecting a motion plan based at least in part on the one or more costs associated with each of the one or more branching policies. The method can include providing the motion plan for use in controlling a motion of the autonomous vehicle.
G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
B60W 30/095 - Predicting travel path or likelihood of collision
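The cost-then-select step can be sketched as a weighted-sum scoring over candidate branching policies; the policy fields (`clearance`, `progress`) and the cost terms below are invented for illustration and are not the costs named in the abstract:

```python
def select_motion_plan(branching_policies, cost_terms):
    """Score each candidate branching policy with a weighted sum of cost
    terms and return the lowest-cost one."""
    def total_cost(policy):
        return sum(weight * term(policy) for term, weight in cost_terms)
    return min(branching_policies, key=total_cost)

# Hypothetical candidate policies for a single subproblem (one nearby object).
policies = [
    {'name': 'yield', 'clearance': 3.0, 'progress': 5.0},
    {'name': 'pass',  'clearance': 0.5, 'progress': 9.0},
]

# Hypothetical cost terms: penalize small clearance, reward progress.
cost_terms = [
    (lambda p: 1.0 / p['clearance'], 5.0),
    (lambda p: -p['progress'], 1.0),
]

best = select_motion_plan(policies, cost_terms)
```

With these weights the low-clearance "pass" policy is penalized heavily enough that the conservative "yield" policy is selected.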
Systems and methods for motion planning by a vehicle computing system of an autonomous vehicle are provided. The vehicle computing system can input sensor data to a machine-learned system including one or more machine-learned models. The computing system can obtain, as an output of the machine-learned model(s), motion prediction(s) associated with object(s) detected by the system. The system can convert a shape of the object(s) into a probability of occupancy by convolving an occupied area of the object(s) with a continuous uncertainty associated with the object(s). The system can determine a probability of future occupancy of a plurality of locations in the environment at future times based at least in part on the motion prediction(s) and the probability of occupancy of the object(s). The system can provide the motion prediction(s) and the probability of future occupancy of the plurality of locations to a motion planning system of the autonomous vehicle.
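The shape-to-probability conversion can be illustrated on a 1D grid: convolving a binary occupied indicator with a discretized Gaussian positional uncertainty yields a per-cell occupancy probability. This is a sketch under those assumptions; the system in the abstract operates on 2D object footprints with uncertainties produced by the machine-learned models:

```python
import math

def occupancy_probability(occupied_cells, grid_size, sigma, cell=1.0):
    """Convolve a binary occupancy indicator with a discretized Gaussian
    positional uncertainty to get a per-cell occupancy probability."""
    # Discretized, normalized Gaussian kernel over +/- 3 sigma.
    radius = max(1, int(math.ceil(3 * sigma / cell)))
    kernel = [math.exp(-0.5 * ((k * cell) / sigma) ** 2)
              for k in range(-radius, radius + 1)]
    norm = sum(kernel)
    kernel = [k / norm for k in kernel]

    prob = [0.0] * grid_size
    for c in occupied_cells:
        for k, w in zip(range(-radius, radius + 1), kernel):
            if 0 <= c + k < grid_size:
                prob[c + k] = min(1.0, prob[c + k] + w)
    return prob

# One occupied cell at index 5 on an 11-cell grid, 1-cell standard deviation.
p = occupancy_probability([5], 11, 1.0)
```

The probability mass peaks at the detected location and decays with distance, so the motion planner can cost nearby locations in proportion to how likely they are to be occupied.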
The present disclosure is directed to a system for generating customized command toolboxes for remote operators in a service system that includes autonomous vehicles. The system receives a request for remote assistance from an autonomous vehicle. The system determines, from a local storage location, vehicle data associated with the autonomous vehicle. The system selects a subset of remote assistance actions from a predetermined set of remote assistance actions. The system displays, in a remote assistance user interface, one or more user interface elements indicative of the subset of remote assistance actions. The system determines one or more remote assistance actions from the subset of remote assistance actions based at least in part on a user input associated with the one or more user interface elements. The system transmits one or more control signals associated with the one or more remote assistance actions.
Aspects of the present disclosure involve systems, methods, and devices for fault detection in a Lidar system. A fault detection system obtains incoming Lidar data output by a Lidar system during operation of an AV system. The incoming Lidar data includes one or more data points corresponding to a fault detection target on an exterior of a vehicle of the AV system. The fault detection system accesses historical Lidar data that is based on data previously output by the Lidar system. The historical Lidar data corresponds to the fault detection target. The fault detection system performs a comparison of the incoming Lidar data with the historical Lidar data to identify any differences between the two sets of data. The fault detection system detects a fault condition occurring at the Lidar system based on the comparison.
The present disclosure provides systems and methods that combine physics-based systems with machine learning to generate synthetic LiDAR data that accurately mimics a real-world LiDAR sensor system. In particular, aspects of the present disclosure combine physics-based rendering with machine-learned models such as deep neural networks to simulate both the geometry and intensity of the LiDAR sensor. As one example, a physics-based ray casting approach can be used on a three-dimensional map of an environment to generate an initial three-dimensional point cloud that mimics LiDAR data. According to an aspect of the present disclosure, a machine-learned model can predict one or more dropout probabilities for one or more of the points in the initial three-dimensional point cloud, thereby generating an adjusted three-dimensional point cloud which more realistically simulates real-world LiDAR data.
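The dropout step can be sketched as follows; `toy_dropout_model` is an invented stand-in for the machine-learned predictor, which in the abstract is a learned model conditioned on the ray-cast point cloud:

```python
import math
import random

def apply_dropout(point_cloud, dropout_model, rng):
    """Drop ray-cast points with a per-point predicted dropout probability,
    yielding a more realistic simulated LiDAR sweep."""
    return [p for p in point_cloud if rng.random() >= dropout_model(p)]

def toy_dropout_model(point):
    """Stand-in for the learned predictor: dropout probability grows with
    range, mimicking the weakening of real returns with distance."""
    x, y, z = point
    dist = math.sqrt(x * x + y * y + z * z)
    return min(0.95, dist / 100.0)

# Invented initial point cloud from physics-based ray casting:
# 150 far points (~80 m) and 50 near points (~5 m).
cloud = [(80.0, 0.0, 0.0)] * 150 + [(5.0, 0.0, 0.0)] * 50
rng = random.Random(0)
adjusted = apply_dropout(cloud, toy_dropout_model, rng)
```

Far points are dropped at a much higher rate than near points, which is the qualitative behavior the learned dropout model is trained to reproduce from real sweeps.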
A machine-learned image compression model includes a first encoder configured to generate a first image code based at least in part on first image data. The first encoder includes a first series of convolutional layers configured to generate a first series of respective feature maps based at least in part on the first image. A second encoder is configured to generate a second image code based at least in part on second image data and includes a second series of convolutional layers configured to generate a second series of respective feature maps based at least in part on the second image and disparity-warped feature data. Respective parametric skip functions associated with convolutional layers of the second series are configured to generate disparity-warped feature data based at least in part on disparity associated with the first series of respective feature maps and the second series of respective feature maps.
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
G06N 3/04 - Architecture, e.g. interconnection topology
Systems, methods, tangible non-transitory computer-readable media, and devices associated with depth estimation are provided. For example, a feature representation associated with stereo images including a first and second plurality of points can be accessed. Sparse disparity estimates associated with disparities between the first and second plurality of points can be determined. The sparse disparity estimates can be based on machine-learned models that estimate disparities based on comparisons of the first plurality of points to the second plurality of points. Confidence ranges associated with the disparities between the first and second plurality of points can be determined based on the sparse disparity estimates and the machine-learned models. A disparity map for the stereo images can be generated based on using the confidence ranges and machine-learned models to prune the disparities outside the confidence ranges. Furthermore, three-dimensional depth estimates associated with the stereo images can be generated based on the disparity map.
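Two of the steps lend themselves to a short sketch: pruning disparity hypotheses that fall outside their confidence range, and triangulating depth from the surviving disparities via Z = f·B/d. The per-pixel data layout below is invented for illustration; the abstract's pruning is performed by machine-learned models:

```python
def prune_disparities(disparities, confidence_ranges):
    """Keep only disparity hypotheses inside their per-pixel confidence
    range; hypotheses outside the range are pruned before depth estimation."""
    pruned = {}
    for px, d in disparities.items():
        lo, hi = confidence_ranges[px]
        if lo <= d <= hi:
            pruned[px] = d
    return pruned

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate metric depth from stereo disparity: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Hypothetical sparse estimates: pixel 1's disparity lies outside its range.
pruned = prune_disparities(
    {0: 50.0, 1: 200.0},
    {0: (40.0, 60.0), 1: (90.0, 110.0)},
)

# 1000 px focal length, 0.5 m baseline, 100 px disparity -> 5 m depth.
depth = depth_from_disparity(100.0, 1000.0, 0.5)
```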
Systems, methods, tangible non-transitory computer-readable media, and devices associated with object perception and prediction of object motion are provided. For example, a plurality of temporal instance representations can be generated. Each temporal instance representation can be associated with differences in the appearance and motion of objects over past time intervals. Past paths and candidate paths of a set of objects can be determined based on the temporal instance representations and current detections of objects. Predicted paths of the set of objects can be determined using a machine-learned model trained to determine the predicted paths based on the past paths and candidate paths. Path data that includes information associated with the predicted paths can be generated for each object of the set of objects respectively.
Systems and methods are directed to improvements for secure communications between client systems and a vehicle integration platform associated with a service provider entity. In one example, a communication infrastructure is provided which includes a vehicle integration platform that includes a plurality of application programming interfaces configured to facilitate communication among clients. The communication infrastructure includes a security integration system which is configured to receive and validate a client certificate forwarded to the vehicle integration platform from a client and determine an identity of the client and an origin of a request associated with the client certificate. The security integration system is configured to generate a certificate signing request associated with the client certificate based in part on the identity of the client and obtain an operational certificate for the client based in part on the certificate signing request to establish ability for client authentication within the vehicle integration platform.
Systems and methods for controlling autonomous vehicles are provided. Assisted autonomy tasks facilitated by operators for a plurality of autonomous vehicles can be tracked in order to generate operator attributes for each of a plurality of operators. The attributes for an operator can be based on tracking one or more respective assisted autonomy tasks facilitated by the operator. The operator attributes can be used to facilitate enhanced remote operations for autonomous vehicles. For example, request parameters can be obtained in response to a request for remote assistance associated with an autonomous vehicle. An operator can be selected to assist with autonomy tasks for the autonomous vehicle based at least in part on the operator attributes for the operator and the request parameters associated with the request. Remote assistance for the autonomous vehicle, facilitated by the selected operator, can be initiated in response to the request for remote assistance.
Various examples are directed to systems and methods for controlling an autonomous vehicle. For example, a navigator system at an autonomous vehicle may generate a plurality of local routes beginning at a vehicle location and extending to a plurality of local route end points. The navigator system may access general route cost data, the general route cost data describing general route costs from the plurality of local route end points to a trip end point. The navigator system may select a first local route of the plurality of local routes based at least in part on the general route cost data. A vehicle autonomy system at the autonomous vehicle may begin to control the autonomous vehicle along the first local route.
Aspects of the present disclosure include systems, methods, and devices to facilitate pick-up/drop-off zone (PDZ) handoffs between autonomous vehicles. Consistent with some embodiments, a PDZ is located based on detecting a first autonomous vehicle stopped at a stopping location. A system determines, based on one or more criteria, whether to request the first autonomous vehicle to remain stopped at the stopping location to create an opportunity for a second autonomous vehicle to claim the PDZ. An amount of time for the first autonomous vehicle to remain stopped at the stopping location is determined based on the one or more criteria. A request to remain stopped at the stopping location is transmitted to a vehicle autonomy system of the first autonomous vehicle based on satisfaction of the one or more criteria. The request specifies the amount of time for the first autonomous vehicle to remain at the stopping location.
The present disclosure is directed to systems and methods associated with a communication infrastructure. The communication infrastructure includes a vehicle integration platform that includes a plurality of application programming interfaces configured to facilitate message communication among clients. The communication infrastructure includes a registration authority system configured to receive certificate signing requests from the clients and to generate client-specific credentials for establishing a predetermined time period of ability for client authentication within the vehicle integration platform. The communication infrastructure includes a certificate authority system configured to normalize requests received from the clients via the registration authority system such that the client-specific credentials are established according to an approved hierarchy of licensing certificates. The communication infrastructure includes a security library accessible by each of the clients and by components of the vehicle integration platform. The security library is configured to provide implementation logic for signing messages and implementation logic for validating messages.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04W 4/44 - Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]
Disclosed are autonomous vehicles that may autonomously navigate at least a portion of a route defined by a service request allocator. The autonomous vehicle may, at a certain portion of the route, request remote assistance. In response to the request, an operator may provide input to a console that indicates control positions for one or more vehicle controls such as steering position, brake position, and/or accelerator position. A command is sent to the autonomous vehicle indicating how the vehicle should proceed along the route. When the vehicle reaches a location where remote assistance is no longer required, the autonomous vehicle is released from manual control and may then continue executing the route under autonomous control.
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
B60W 50/029 - Adapting to failures or work around with other constraints, e.g. circumvention by avoiding use of failed parts
G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
Various examples are directed to systems and methods for routing an autonomous vehicle. For example, a system may access temporal data comprising a first temporal data item. The first temporal data item may describe a first roadway condition, a first time, and a first location. The system may also access a routing graph that comprises a plurality of route components and determine that a first route component of the routing graph corresponds to the first location. The system may generate a constrained routing graph at least in part by modifying the first route component based at least in part on the first roadway condition. The system may additionally generate a route for an autonomous vehicle using the constrained routing graph and cause the autonomous vehicle to begin traversing the route.
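The constrain-then-route step can be sketched with a dictionary routing graph and plain Dijkstra; scaling an affected edge's cost by a penalty (or infinity to forbid it entirely) stands in for the abstract's route-component modification:

```python
import heapq

def constrain_graph(routing_graph, affected, penalty):
    """Return a copy of the routing graph with the cost of affected route
    components (edges) scaled by a penalty factor."""
    return {
        node: {
            nbr: cost * penalty if (node, nbr) in affected else cost
            for nbr, cost in edges.items()
        }
        for node, edges in routing_graph.items()
    }

def shortest_route(graph, start, goal):
    """Plain Dijkstra over the (possibly constrained) routing graph."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, edge_cost in graph[node].items():
            if nbr not in visited:
                heapq.heappush(frontier, (cost + edge_cost, nbr, path + [nbr]))
    return float('inf'), []

# Hypothetical routing graph: node -> {neighbor: traversal cost}.
g = {'A': {'B': 1.0, 'C': 3.0}, 'B': {'C': 1.0}, 'C': {}}
cost_free, path_free = shortest_route(g, 'A', 'C')

# A temporal data item flags roadway condition on route component A->B.
cg = constrain_graph(g, {('A', 'B')}, 10.0)
cost_con, path_con = shortest_route(cg, 'A', 'C')
```

Before the constraint the route runs A→B→C; after penalizing the flagged component, the planner diverts to the direct A→C component instead.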
Various examples described herein are directed to systems and methods for dispatching trips to a plurality of autonomous vehicles. For example, a dispatch system may access trip result data describing a plurality of trips executed by a set of autonomous vehicles of a first vehicle type. The dispatch system may determine a first routing constraint for autonomous vehicles of the first vehicle type using the trip result data and select an autonomous vehicle to execute a first new trip using the first routing constraint. The dispatch system may send a request to execute the first new trip to the selected autonomous vehicle.
G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
A network computing system can coordinate on-demand transport serviced by transport providers operating throughout a transport service region. The transport providers can comprise a set of internal autonomous vehicles (AVs) and a set of third-party AVs. The system can receive a transport request from a requesting user of the transport service region, where the transport request indicates a pick-up location and a destination. The system can determine a subset of the transport providers to service the transport request, and execute a selection process among the subset of the transport providers to select a transport provider to service the transport request. The system may then transmit a transport assignment to the selected transport provider to cause the selected transport provider to service the transport request.
Systems and methods are provided for generating data indicative of a friction associated with a driving surface, and for using friction data as part of controlling autonomous vehicle operations. In one example, a computing system can detect an event including at least one of an acceleration, a deceleration, or a stop associated with an autonomous vehicle and obtain, in response to detecting the event, operational data associated with the autonomous vehicle during the event. The computing system can determine, based at least in part on the operational data, data indicative of a friction associated with a surface upon which the autonomous vehicle is traveling during the event. The computing system can control the autonomous vehicle based at least in part on the data indicative of the friction associated with the surface.
B60W 50/00 - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit
B60T 8/1763 - Brake regulation specially adapted to prevent excessive wheel slip during vehicle deceleration, e.g. ABS responsive to the coefficient of friction between the wheels and the ground surface
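The friction estimate from a braking event can be sketched under a simplifying assumption: on a level surface with friction-limited braking, the coefficient of friction is approximately the measured deceleration divided by g. The real system would draw on richer operational data than the two speeds and duration assumed here:

```python
G = 9.81  # gravitational acceleration, m/s^2

def estimate_friction(speed_start_mps, speed_end_mps, duration_s):
    """Estimate the tire-road friction coefficient from a deceleration
    event, assuming friction-limited braking on a level surface: mu ~= a/g."""
    decel = (speed_start_mps - speed_end_mps) / duration_s
    return decel / G

# Hypothetical hard stop: 20 m/s to rest in 2.5 s (8 m/s^2 deceleration).
mu = estimate_friction(20.0, 0.0, 2.5)
```

The resulting coefficient (about 0.82 here) is consistent with dry pavement; markedly lower values would indicate a slippery surface and can inform more conservative vehicle control.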
51.
Driving surface friction estimations using vehicle steering
Systems and methods are provided for generating data indicative of a friction associated with a driving surface, and for using the friction data in association with one or more vehicles. In one example, a computing system can detect a stop associated with a vehicle and initiate a steering action of the vehicle during the stop. The steering action is associated with movement of at least one tire of the vehicle relative to a driving surface. The computing system can obtain operational data associated with the steering action during the stop of the vehicle. The computing system can determine a friction associated with the driving surface based at least in part on the operational data associated with the steering action. The computing system can generate data indicative of the friction associated with the driving surface.
G01C 21/00 - Navigation; Navigational instruments not provided for in groups
B60W 30/00 - Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
Systems and methods for generating sparse geographic data for autonomous vehicles are provided. In one example embodiment, a computing system can obtain sensor data associated with at least a portion of a surrounding environment of an autonomous vehicle. The computing system can identify a plurality of lane boundaries within the portion of the surrounding environment of the autonomous vehicle based at least in part on the sensor data and a first machine-learned model. The computing system can generate a plurality of polylines indicative of the plurality of lane boundaries based at least in part on a second machine-learned model. Each polyline of the plurality of polylines can be indicative of a lane boundary of the plurality of lane boundaries. The computing system can output a lane graph including the plurality of polylines.
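Turning dense lane-boundary detections into sparse polylines can be approximated with classic Ramer-Douglas-Peucker simplification; this is a geometric stand-in for the abstract's second machine-learned model, assuming boundary points arrive as ordered 2D coordinates:

```python
def simplify_polyline(points, tolerance):
    """Ramer-Douglas-Peucker: reduce an ordered chain of lane-boundary
    points to a sparse polyline deviating less than `tolerance` (meters)."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0
    # Find the interior point farthest from the end-to-end chord.
    best_i, best_d = 0, -1.0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        d = abs(dx * (y1 - py) - dy * (x1 - px)) / norm
        if d > best_d:
            best_i, best_d = i, d
    if best_d <= tolerance:
        return [points[0], points[-1]]
    # Keep the farthest point and recurse on both halves.
    left = simplify_polyline(points[:best_i + 1], tolerance)
    right = simplify_polyline(points[best_i:], tolerance)
    return left[:-1] + right

# A nearly straight boundary collapses to its two endpoints...
straight = simplify_polyline(
    [(0.0, 0.0), (1.0, 0.01), (2.0, 0.0), (3.0, 0.01), (4.0, 0.0)], 0.1)

# ...while a boundary with a corner keeps the corner vertex.
corner = simplify_polyline(
    [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (2.0, 1.0), (2.0, 2.0)], 0.1)
```

Each simplified chain is one polyline of the lane graph; sparsity keeps the geographic data compact while preserving boundary shape within the tolerance.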
Among other things, documentation of a crash involving a vehicle is generated automatically. Telematics data is received that has been produced by one or more sensors associated with a telematics device at the vehicle. Based on the telematics data, a vehicle crash period is determined that begins at a start time and ends at an end time of the vehicle crash. Based on the telematics data, one or more metrics associated with the vehicle during the vehicle crash period are determined. Based on the one or more metrics, human-readable documentation of the vehicle crash is generated automatically.
B60R 21/0132 - Electrical circuits for triggering safety arrangements in case of vehicle accidents or impending vehicle accidents including means for detecting collisions, impending collisions or roll-over responsive to vehicle motion parameters
G07C 5/00 - Registering or indicating the working of vehicles
B60R 21/01 - Electrical circuits for triggering safety arrangements in case of vehicle accidents or impending vehicle accidents
55.
Disabling onboard input devices in an autonomous vehicle
Systems and methods for disabling autonomous vehicle input devices are provided. In one example embodiment, a computer-implemented method includes identifying an operating mode of an autonomous vehicle. The method includes determining one or more vehicle input devices to be disabled based at least in part on the operating mode of the autonomous vehicle. The vehicle input devices are located onboard the autonomous vehicle. The method includes disabling the one or more vehicle input devices based at least in part on the identified operating mode of the autonomous vehicle such that an input by a user with respect to the one or more vehicle input devices does not affect an operation of the autonomous vehicle.