The present disclosure relates to autonomous driving scenario obstacle prediction, which is implemented by means of a trained prediction model. The trained prediction model comprises a global encoder, a first local encoder, and a first decoder. The method comprises: acquiring state information and environment information of each obstacle in a target scenario from the perspective of a vehicle; inputting the state information and the environment information of each obstacle in the target scenario into the global encoder to obtain first feature information; acquiring prompt information of a target obstacle; inputting the prompt information of the target obstacle into the first local encoder to obtain second feature information; and taking the first feature information as input information of the first decoder, taking the second feature information as prompt information of the first decoder, and inputting the input information and the prompt information into the first decoder to obtain prediction information of the target obstacle.
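The model structure described in this abstract can be pictured with a small neural-network sketch. The following is a minimal, illustrative sketch only: the layer types, feature sizes, the cross-attention fusion, and all module names (PromptedPredictor, global_encoder, local_encoder) are assumptions made for the sake of a runnable example, since the abstract does not disclose the network internals.

```python
# Minimal sketch of a prompt-conditioned prediction model in the spirit of the
# abstract above. Module names, sizes, and the attention-based fusion are assumptions.
import torch
import torch.nn as nn

class PromptedPredictor(nn.Module):
    def __init__(self, state_dim=8, env_dim=16, prompt_dim=4, hidden=64, horizon=12):
        super().__init__()
        # Global encoder: encodes state + environment of every obstacle in the scene.
        self.global_encoder = nn.Sequential(
            nn.Linear(state_dim + env_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        # First local encoder: encodes the prompt information of the target obstacle.
        self.local_encoder = nn.Sequential(
            nn.Linear(prompt_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        # First decoder: the prompt feature queries the global features, then a head
        # regresses a future (x, y) trajectory for the target obstacle.
        self.cross_attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.head = nn.Linear(hidden, horizon * 2)
        self.horizon = horizon

    def forward(self, obstacle_states, obstacle_env, target_prompt):
        # obstacle_states: (B, N, state_dim); obstacle_env: (B, N, env_dim);
        # target_prompt: (B, prompt_dim)
        first_feat = self.global_encoder(torch.cat([obstacle_states, obstacle_env], dim=-1))
        second_feat = self.local_encoder(target_prompt).unsqueeze(1)      # (B, 1, hidden)
        fused, _ = self.cross_attn(second_feat, first_feat, first_feat)   # prompt queries scene
        return self.head(fused.squeeze(1)).view(-1, self.horizon, 2)      # (B, horizon, 2)

model = PromptedPredictor()
pred = model(torch.randn(2, 5, 8), torch.randn(2, 5, 16), torch.randn(2, 4))
print(pred.shape)  # torch.Size([2, 12, 2])
```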
The present disclosure relates to a flight path determining method and apparatus, a storage medium, and an electronic device. According to an example, the method includes: determining, according to a flight origin and destination of an aircraft and a topological graph corresponding to a plurality of blocks in a target space, a block path that the aircraft passes through from the flight origin to the flight destination from the plurality of blocks, where the flight origin and destination are in the target space, there are a plurality of spatial points with position information in the target space, and each block includes a spatial point group obtained by grouping the plurality of spatial points; determining entrance and exit information of each block in the block path according to the position information of each spatial point and the block path; and determining a flight path of the aircraft according to the position information of each spatial point, position information of the flight origin and destination, connections between the plurality of spatial points, the block path, and the entrance and exit information of each block in the block path.
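The first step of the abstract above, finding a block path over the topological graph of blocks, is essentially a shortest-path search. Below is a minimal sketch under that reading, using a plain Dijkstra search from the Python standard library; the block IDs, edge costs, and the choice of Dijkstra itself are illustrative assumptions rather than details taken from the disclosure.

```python
# Sketch of the block-path step: shortest path over a topological graph of blocks
# from the block containing the flight origin to the block containing the destination.
import heapq

def block_path(graph, start_block, goal_block):
    """Dijkstra over the block graph. graph: {block: [(neighbor, cost), ...]}"""
    dist, prev = {start_block: 0.0}, {}
    heap = [(0.0, start_block)]
    while heap:
        d, block = heapq.heappop(heap)
        if block == goal_block:
            break
        if d > dist.get(block, float("inf")):
            continue  # stale heap entry
        for neighbor, cost in graph.get(block, []):
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor], prev[neighbor] = nd, block
                heapq.heappush(heap, (nd, neighbor))
    # Reconstruct the block sequence from the destination back to the origin.
    path, block = [goal_block], goal_block
    while block != start_block:
        block = prev[block]
        path.append(block)
    return list(reversed(path))

graph = {"A": [("B", 1.0), ("C", 4.0)], "B": [("C", 1.0)], "C": []}
print(block_path(graph, "A", "C"))  # ['A', 'B', 'C']
```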
A method for training a migration scene-based trajectory prediction model is provided, a first trajectory prediction model and a plurality of candidate training samples are obtained; for any candidate training sample, a reference value corresponding to the candidate training sample is determined according to at least one of a trajectory feature corresponding to the candidate training sample or a prediction result of the first trajectory prediction model for the candidate training sample; target training samples are selected from the plurality of candidate training samples according to the reference values corresponding to the plurality of candidate training samples; and the first trajectory prediction model is trained according to the target training samples, to obtain a second trajectory prediction model, where the second trajectory prediction model is configured to predict traveling trajectories of obstacles in a migration scene.
G05D 101/15 - Details of software or hardware architectures used for the control of position using artificial intelligence [AI] techniques using machine learning, e.g. neural networks
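A minimal sketch of the sample-selection idea in the trajectory-model training abstract above: each candidate sample receives a reference value built from the first model's prediction error and/or a trajectory feature, and the highest-valued samples become the target training samples. The specific scoring terms (mean displacement error plus a heading-change term), the 0.1 weight, the keep ratio, and the sample dictionary keys are assumptions for illustration.

```python
# Sketch of reference-value-based selection of target training samples.
import numpy as np

def reference_value(sample, first_model_predict):
    gt = np.asarray(sample["future_trajectory"], float)        # (T, 2) ground truth
    pred = np.asarray(first_model_predict(sample), float)      # (T, 2) model prediction
    prediction_error = np.linalg.norm(pred - gt, axis=1).mean()
    # Trajectory feature: total heading change of the history, a crude difficulty proxy.
    deltas = np.diff(np.asarray(sample["history_trajectory"], float), axis=0)
    headings = np.arctan2(deltas[:, 1], deltas[:, 0])
    heading_change = np.abs(np.diff(headings)).sum()
    return prediction_error + 0.1 * heading_change

def select_target_samples(candidates, first_model_predict, keep_ratio=0.3):
    scored = sorted(candidates,
                    key=lambda s: reference_value(s, first_model_predict),
                    reverse=True)
    return scored[: max(1, int(len(scored) * keep_ratio))]
```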
A foldable propeller and an unmanned aerial vehicle. The foldable propeller (10) comprises a blade adapter (1), blades (2), and first washers (3); the blade adapter is configured to be connected to a driving member (20); the blades are rotatably connected to the blade adapter; the first washers each comprise an elastic portion (31) and an additional portion (32) which are connected to each other; and one end surface of the elastic portion abuts against the blade adapter and the other end surface of the elastic portion abuts against the corresponding blade. The elastic modulus of the additional portion is greater than that of the elastic portion. The foldable propeller folds smoothly, is easy to maintain, and has a long service life.
A risk identification method for autonomous driving. The method comprises: acquiring a predicted spatio-temporal trajectory of a first obstacle and a planned spatio-temporal trajectory of an autonomous vehicle, wherein the first obstacle is a dynamic obstacle in the current scenario (step S11); establishing a spatio-temporal interaction scenario graph of the current scenario according to the predicted spatio-temporal trajectory of the first obstacle, the planned spatio-temporal trajectory of the autonomous vehicle and a first map parameter (step S12); and acquiring spatio-temporal interaction features of the current scenario according to the spatio-temporal interaction scenario graph of the current scenario, inputting the spatio-temporal interaction features of the current scenario into a pre-trained model, and determining the interaction safety of the autonomous vehicle in the current scenario by means of the pre-trained model, wherein the interaction safety is used for indicating a risk of autonomous driving (step S13).
B60W 30/095 - Predicting travel path or likelihood of collision
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
B60W 40/02 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to ambient conditions
B60W 40/10 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to vehicle motion
B60W 50/14 - Means for informing the driver, warning the driver or prompting a driver intervention
G05D 1/24 - Arrangements for determining position or orientation
6.
UNMANNED VEHICLE LOCALIZATION, AND AUTONOMOUS VEHICLE
An unmanned vehicle localization method, and an autonomous vehicle, which can prevent a sudden change in the localization trajectory caused by abnormal absolute pose information during the traveling process of an unmanned vehicle, thereby enhancing the robustness of a vehicle localization system. The unmanned vehicle localization method comprises: determining first relative motion data and second relative motion data, which are measured between a moment T1 and a moment T2, wherein the first relative motion data is obtained on the basis of a LiDAR-inertial odometry method, and the second relative motion data is determined on the basis of a plurality of groups of angular velocity and acceleration information measured by means of an inertial measurement unit between the moment T1 and the moment T2 (101); and according to a first pose at the moment T1, absolute pose information at the moment T2, the first relative motion data and the second relative motion data, performing state-cloning-based fusion localization on the unmanned vehicle, so as to obtain a second pose at the moment T2 (102).
The present disclosure provides an unmanned device control method and apparatus, a storage medium, and an electronic device. In the method, thrust force required for an unmanned device and a target rotation speed required for providing the thrust force to the unmanned device are determined according to state information and/or environment information of the unmanned device, a voltage inputted into a motor of the unmanned device is adjusted according to the target rotation speed, and the unmanned device is controlled by using an adjusted voltage.
This specification discloses a marker allocation method and apparatus in an unmanned aerial vehicle airport and an unmanned aerial vehicle landing method and apparatus. According to an airport shape and an airport size of an unmanned aerial vehicle airport and a standard shape and a standard size of a takeoff and landing point, a target layout of an unmanned aerial vehicle airport that includes multiple takeoff and landing points is determined. Further, an initial takeoff and landing point is determined from the multiple takeoff and landing points included in the target layout. Markers respectively allocated to the multiple takeoff and landing points are determined from a predetermined marker set that includes markers of different image content, by using the initial takeoff and landing point as a start point, according to a predetermined search algorithm, and with a constraint that similarity between a marker of any one of the multiple takeoff and landing points and markers of other takeoff and landing points in a specified neighborhood thereof is the lowest. In this method, a position and a correspondence between each takeoff and landing point and each marker in a range of the airport do not need to be manually determined, thereby improving efficiency of allocating a marker to an unmanned aerial vehicle.
G05D 1/244 - Arrangements for determining position or orientation using passive navigation aids external to the vehicle, e.g. markers, reflectors or magnetic means
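The marker-allocation search in the preceding abstract can be sketched as a breadth-first walk from the initial takeoff and landing point that greedily gives each point the marker least similar to the markers already assigned in its neighborhood. The neighborhood structure, the binary marker codes, and the character-overlap similarity below are illustrative assumptions; the disclosure only fixes the lowest-similarity constraint and the use of a predetermined search algorithm.

```python
# Sketch of greedy marker allocation under a lowest-similarity-to-neighbors constraint.
from collections import deque

def allocate_markers(neighbors, markers, similarity, start):
    """neighbors: {point: [points in its specified neighborhood]};
    markers: candidate marker codes; similarity: f(m1, m2) -> float in [0, 1]."""
    assignment, queue, seen = {}, deque([start]), {start}
    while queue:
        point = queue.popleft()
        assigned_nearby = [assignment[n] for n in neighbors[point] if n in assignment]
        # Pick the marker whose worst-case similarity to nearby assigned markers is lowest.
        assignment[point] = min(
            markers,
            key=lambda m: max((similarity(m, a) for a in assigned_nearby), default=0.0))
        for n in neighbors[point]:
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return assignment

# Toy usage: three takeoff and landing points in a row and four binary marker codes.
sim = lambda a, b: sum(x == y for x, y in zip(a, b)) / len(a)
neighbors = {"p0": ["p1"], "p1": ["p0", "p2"], "p2": ["p1"]}
print(allocate_markers(neighbors, ["0011", "0101", "1100", "1010"], sim, start="p0"))
```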
An unmanned device control method and apparatus, a storage medium, and an electronic device. An unmanned device is controlled to move according to a preplanned target path; current environment information of the unmanned device is obtained; according to the current environment information of the unmanned device, a target subpath on which the unmanned device is located is determined, from target subpaths included in the target path, as a designated subpath; and a control strategy is then determined according to a scenario type corresponding to the designated subpath, and a determined control strategy is used to control the unmanned device.
An unmanned aerial vehicle control process comprises: a signal receiving apparatus receiving electromagnetic wave signals periodically transmitted by a signal transmitting apparatus (S101); according to the electromagnetic wave signals, determining a Doppler frequency value at the current position where an unmanned aerial vehicle is located (S102); according to the Doppler frequency value at the current position, controlling the unmanned aerial vehicle to fly to a target position according to a target flight path (S103), the target flight path comprising flight paths respectively corresponding to the current flight direction and the target flight direction of the unmanned aerial vehicle, and the target flight direction and the current flight direction having an orthogonal relationship; according to a plurality of Doppler frequency values of the unmanned aerial vehicle determined in the target flight path, determining a target landing position corresponding to the unmanned aerial vehicle (S104), the position below the target landing position being the position where the signal transmitting apparatus or the signal receiving apparatus is located; and controlling the unmanned aerial vehicle to land from the target landing position (S105).
An outer rotor motor, comprising a stator assembly (10) and a rotor assembly (20). The rotor assembly (20) comprises a plurality of permanent magnets (21) arranged in a circumferential direction of the stator assembly (10), wherein two adjacent permanent magnets (21) have opposite polarities; each permanent magnet (21) comprises a first portion (211) and a second portion (212), which are symmetrically arranged along a preset axis; and a preset included angle is formed between the preset axis (213) and the direction in which each of the first portion (211) and the second portion (212) extends.
H02K 1/28 - Means for mounting or fastening rotating magnetic parts on to, or to, the rotor structures
H02K 9/06 - Arrangements for cooling or ventilating by ambient air flowing through the machine having means for generating a flow of cooling medium with fans or impellers driven by the machine shaft
H02K 5/20 - Casings or enclosures characterised by the shape, form or construction thereof with channels or ducts for flow of cooling medium
The present application discloses a rotor. The rotor comprises a blade; a plurality of protruding structures protrude from the surface of the blade; the plurality of protruding structures are sequentially arranged at intervals in the spanwise direction of the blade; and a height difference is formed between every two adjacent protruding structures.
A logistics box and an unmanned aerial vehicle distribution system. The logistics box comprises a box body (1), a transmission mechanism (2) and a bottom door (3). The box body (1) comprises a bottom opening (11), the transmission mechanism (2) is mounted on the box body (1), the bottom door (3) is slidably arranged at the bottom opening (11) and connected to the transmission mechanism (2), and the bottom door (3) is capable of opening or closing the bottom opening (11) under the driving of the transmission mechanism (2).
The present application discloses an orientation information determining method, comprising: acquiring point cloud data and image data obtained by photographing a target obstacle by an autonomous vehicle; performing first feature extraction on the point cloud data of the target obstacle to obtain a geometric feature of the target obstacle, and performing second feature extraction on the image data of the target obstacle to obtain an appearance feature of the target obstacle; on the basis of the geometric feature of the target obstacle, determining a candidate orientation set of the target obstacle, the candidate orientation set comprising a plurality of pieces of candidate orientation information, and the plurality of pieces of candidate orientation information indicating the orientations of a plurality of surfaces of the target obstacle; on the basis of the appearance feature of the target obstacle, determining first orientation information of the target obstacle; and determining second orientation information from the candidate orientation set as the orientation information of the target obstacle, wherein an orientation indicated by the second orientation information matches an orientation indicated by the first orientation information.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
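The final matching step of the orientation-determination abstract above reduces to picking, from the geometry-derived candidate orientations, the one closest to the appearance-derived first orientation. A minimal sketch follows; representing orientations as yaw angles in radians and using a wrapped angular distance are assumptions made for illustration.

```python
# Sketch of selecting the candidate orientation that matches the appearance-based one.
import numpy as np

def angular_distance(a, b):
    """Smallest absolute difference between two angles, wrapped to [0, pi]."""
    d = np.abs(a - b) % (2 * np.pi)
    return min(d, 2 * np.pi - d)

def select_orientation(candidate_orientations, first_orientation):
    return min(candidate_orientations,
               key=lambda c: angular_distance(c, first_orientation))

# A box-like obstacle yields four candidate face orientations 90 degrees apart.
candidates = [0.10, 0.10 + np.pi / 2, 0.10 + np.pi, 0.10 + 3 * np.pi / 2]
print(select_orientation(candidates, first_orientation=1.70))  # 0.10 + pi/2 (about 1.67)
```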
An autonomous driving decision planning method: in response to the existence of an obstacle vehicle in the environment of an autonomous vehicle, controlling the autonomous vehicle to drive according to a heuristic route; in the process of the autonomous vehicle driving according to the heuristic route, obtaining related information of the obstacle vehicle; according to the related information of the obstacle vehicle, determining a driving intention of the obstacle vehicle; and at least according to the driving intention of the obstacle vehicle, performing autonomous driving decision planning on the autonomous vehicle.
A positioning method, comprising the following steps: during a positioning process, estimating a local positioning pose of a robot according to a VIO algorithm (101); estimating a global positioning pose of the robot according to a map feature point matching algorithm (102); determining whether the estimation of the global positioning pose is consistent with that of the local positioning pose (103); if the estimations are consistent, executing pose fusion and update for the robot on the basis of a fused state vector (104), wherein the fused state vector comprises a VIO local positioning variable and a global positioning variable, the VIO local positioning variable comprises the speed of the robot and a sensor bias, and the speed and the sensor bias are kept as Schmidt states; and if the estimations are inconsistent, determining an observation error of the global positioning pose, performing coordinate system conversion on the fused state vector according to the observation error, and executing pose update of the robot according to the fused state vector, which has been subjected to coordinate system conversion (105).
This application discloses a beacon for guiding landing of an unmanned aerial vehicle. The beacon includes at least three levels of patterns: one first-level pattern and at least one second-level pattern, where the at least one second-level pattern is superposed above the first-level pattern, and an area of the second-level pattern is less than that of the first-level pattern.
B64U 70/95 - Means for guiding the landing UAV towards the platform, e.g. lighting means
G06K 19/06 - Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
A data calibration method, comprising: first determining, according to collected data, visual navigation data and satellite navigation data of an unmanned device at a current moment (S100); then determining, according to a solution type of the satellite navigation data at the current moment, at least one piece of calibration data for calibration from the satellite navigation data (S102); then determining a conversion relationship between a first coordinate system of the visual navigation data and a second coordinate system of the satellite navigation data under a constraint that the visual navigation data and the calibration data are the same in at least one of position or speed (S104); and then, according to the determined conversion relationship, the visual navigation data and the calibration data, determining a correction amount corresponding to the visual navigation data, and calibrating the visual navigation data according to the correction amount (S106).
G01C 25/00 - Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
G01S 19/48 - Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
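One plausible realization of the conversion-relationship step in the data-calibration abstract above is a least-squares (Kabsch-style) fit of a 2D rotation and translation that aligns visual-navigation positions with the satellite-derived calibration positions. The abstract does not fix the estimator, so the sketch below is an assumption-laden illustration rather than the disclosed method.

```python
# Sketch: estimate (R, t) with satellite ~= R @ visual + t from matched position samples.
import numpy as np

def fit_frame_transform(visual_xy, satellite_xy):
    """visual_xy, satellite_xy: arrays of shape (N, 2) of matched positions."""
    v, s = np.asarray(visual_xy, float), np.asarray(satellite_xy, float)
    v_c, s_c = v - v.mean(axis=0), s - s.mean(axis=0)
    u, _, vt = np.linalg.svd(v_c.T @ s_c)
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:                 # keep a proper rotation (no reflection)
        vt[-1] *= -1
        r = vt.T @ u.T
    t = s.mean(axis=0) - r @ v.mean(axis=0)
    return r, t

# Toy check: visual frame rotated by 30 degrees and shifted by (5, -2).
theta = np.deg2rad(30)
r_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
visual = np.random.rand(20, 2)
satellite = visual @ r_true.T + np.array([5.0, -2.0])
r_est, t_est = fit_frame_transform(visual, satellite)
print(np.allclose(r_est, r_true), np.round(t_est, 3))
```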
During a sensor data calibration process, image data, radar data and inertial data at multiple moments during a movement process of a test device are acquired (S100); a motion trajectory of the test device is determined according to the inertial data and the image data, and the movement velocity of the test device in an image reference frame is determined according to the motion trajectory (S102); and the Doppler velocity in the movement process of the test device is determined according to the radar data, and the Doppler velocity and the movement velocity of the test device in the image reference frame are registered so as to calibrate sensor data (S104).
A method and apparatus for allocating beacon codes in an unmanned aerial vehicle airport, and an unmanned aerial vehicle landing method and apparatus. The method for allocating beacon codes in an unmanned aerial vehicle airport comprises: determining, according to the airport shape and the airport size of an unmanned aerial vehicle airport and a standard shape and a standard size of each takeoff/landing point, a target layout of the unmanned aerial vehicle airport that contains a plurality of takeoff/landing points; then determining an initial takeoff/landing point from the plurality of takeoff/landing points contained in the target layout; and determining, from a preset beacon code set containing beacon codes of different image contents, the beacon codes to be respectively allocated to the plurality of takeoff/landing points by taking the initial takeoff/landing point as a starting point, using a preset search algorithm, and under the constraint that the similarity between the beacon code of any one of the plurality of takeoff/landing points and the beacon codes of the other takeoff/landing points in a specified neighboring region thereof is the lowest. By means of the method, there is no need to manually determine the positions of, and the correspondence between, the takeoff/landing points and the beacon codes within the airport range, thereby improving the efficiency of allocating beacon codes for unmanned aerial vehicles.
A propeller and an aircraft. The propeller comprises a propeller blade (1) and a spoiler portion (2). At least part of the spoiler portion (2) protrudes out of a suction surface of the propeller blade (1), and the spoiler portion (2) extends in a spanwise direction of the propeller blade (1).
An unmanned device control method and apparatus, a storage medium, and an electronic device. The method comprises: controlling an unmanned device to move along a pre-planned target path (S102); acquiring current environment information of the unmanned device (S104); determining, according to the current environment information of the unmanned device, from the target sub-paths included in the target path, the target sub-path on which the unmanned device is located as a specified sub-path (S106); and determining a control strategy according to the scenario type corresponding to the specified sub-path, and using the control strategy to control the unmanned device (S108).
Provided are a camera system and an unmanned device. The camera system comprises a camera and a circuit assembly, the camera is provided with an MIPI CSI-2 output interface, and the circuit assembly comprises a substrate and a repeater. An input interface is provided on the substrate, and is connected to the MIPI CSI-2 output interface. The repeater is provided on the substrate, the repeater is connected to the input interface, and the repeater is further configured to connect more than two processors.
An obstacle trajectory prediction method and apparatus are provided. In some embodiments, a global interaction feature under joint action of a vehicle and obstacles is determined according to historical status information and current status information of the vehicle, historical status information and current status information of the obstacles, and a future motion trajectory planned by the vehicle; an individual interaction feature of a to-be-predicted obstacle is determined according to the global interaction feature and current status information of the to-be-predicted obstacle; and a future motion trajectory of the to-be-predicted obstacle is predicted through the individual interaction feature and information about an environment around the vehicle.
B60W 50/00 - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit
G06N 3/049 - Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
26.
Visual positioning based on a plurality of image frames
A visual positioning method and apparatus are provided. In some embodiments, the method includes: acquiring a video captured by an image sensor; determining visual positioning information respectively corresponding to a plurality of key image frames in the video; determining a capture pose transformation relationship between each of the plurality of key image frames according to inertial navigation information of the image sensor recorded when taking the video; performing, according to the visual positioning information corresponding to each of the plurality of key image frames, graph optimization processing on the visual positioning information corresponding to each of the plurality of key image frames by using the capture pose transformation relationship between each of the plurality of key image frames as an edge constraint; and determining, according to a result of the graph optimization processing, a visual positioning result of the image sensor when taking the video.
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G01C 21/16 - Navigation; Navigational instruments not provided for in groups by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
The present disclosure relates to an unmanned aerial vehicle (UAV) landing platform and a UAV landing system. The UAV landing platform comprises: a positioning board, which is provided with an identification code for a UAV to identify a landing position; a light-emitting portion, which is used for emitting light toward the positioning board, wherein the light-emitting portion comprises a first light-emitting portion located at the bottom portion of the positioning board, and the positioning board is a light-scattering component that can be penetrated by the light emitted by the first light-emitting portion; an ambient light collection device, which is used for collecting light intensity information of the environment in which the positioning board is located; and a control device, which is coupled to the ambient light collection device and to a lighting member in the light-emitting portion and which is used for controlling, according to the light intensity information, the lighting member to be in an on state or an off state. In this way, the light emitted by the first light-emitting portion can light up the positioning board from the bottom portion. When ambient light is insufficient, the UAV can still obtain a clear image of the identification code on the positioning board, thereby accurately identifying the landing position and ensuring precise landing of the UAV in cloudy, night-time, and other low-light environments.
A map generation method, comprising: obtaining multiple frames of point clouds acquired by a point cloud acquisition device in a target area (S101); determining whether the signal strength of the target area meets a preset high-strength requirement (S102); determining a point cloud matching pair from the multiple frames of point clouds according to a determination result of whether the signal strength of the target area meets the preset high-strength requirement, and determining respective target pose information of the multiple frames of point clouds according to the point cloud matching pair (S103), wherein the two frames of point clouds that constitute the point cloud matching pair contain the same point; and generating a map of the target area according to the respective target pose information of the multiple frames of point clouds (S104). The constructed point cloud matching pair thus serves as the basis for determining the respective target pose information of the multiple frames of point clouds.
Provided are an unmanned device control method and apparatus, and a storage medium and an electronic device. The method comprises: by means of status information and/or environmental information of an unmanned device, determining a thrust required by the unmanned device, and a target rotation speed which is required to provide the thrust required by the unmanned device; and adjusting, according to the target rotation speed, a voltage input into an electric motor of the unmanned device, and then controlling the unmanned device by using the adjusted voltage.
A data fusion method is provided. In one embodiment, the method comprises: acquiring a rotation angle of a laser emitter of a lidar; selecting, according to a predetermined correspondence between rotation angle intervals and image sensors, an image sensor corresponding to the rotation angle interval in which the obtained rotation angle of the laser emitter is located as a specified image sensor; sending a trigger signal to the specified image sensor, to enable the specified image sensor to acquire an image; receiving the image and a point cloud that is acquired and returned by the lidar within the rotation angle interval in which the obtained rotation angle is located; and fusing information of pixels in the image and information of points in the point cloud according to pose change information of a vehicle in the process of acquiring the image and the point cloud.
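A minimal sketch of the angle-to-camera lookup in the data-fusion abstract above: the lidar's current rotation angle selects which image sensor to trigger, so the returned image can later be fused with the points scanned in that sector. The 90-degree sectors, sensor names, and callback-style trigger/collect functions are assumptions for illustration.

```python
# Sketch of selecting and triggering the image sensor for the current lidar sector.
ANGLE_INTERVALS = [  # (start_deg, end_deg, image_sensor_id)
    (0, 90, "cam_front"),
    (90, 180, "cam_left"),
    (180, 270, "cam_rear"),
    (270, 360, "cam_right"),
]

def select_image_sensor(rotation_angle_deg):
    angle = rotation_angle_deg % 360
    for start, end, sensor in ANGLE_INTERVALS:
        if start <= angle < end:
            return sensor
    return ANGLE_INTERVALS[-1][2]

def on_lidar_angle(rotation_angle_deg, trigger, collect_points):
    sensor = select_image_sensor(rotation_angle_deg)
    image = trigger(sensor)                       # ask the selected camera for a frame
    points = collect_points(rotation_angle_deg)   # lidar points returned for this sector
    return sensor, image, points                  # fused downstream using ego-pose change

print(select_image_sensor(135.0))  # cam_left
```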
The disclosure relates to a method for guiding landing of an unmanned aerial vehicle. The method for guiding landing of unmanned aerial vehicle includes: determining location information of the unmanned aerial vehicle over a target airdrome by using a plurality of position detectors in an airdrome auxiliary positioning system; generating correction guidance information according to an offset vector between the location information and target location information, where the target location information is information representing any location within signal coverage of a guidance beacon of the target airdrome; and sending the correction guidance information to the unmanned aerial vehicle, where the correction guidance information is used to guide the unmanned aerial vehicle to fly into the signal coverage of the guidance beacon.
The present disclosure relates to a method and system for splicing multiple channels of images, and a readable storage medium and an unmanned vehicle. The method is applied to a field programmable gate array platform, and the method comprises: for each of multiple channels of images to be spliced, executing the following processing to obtain multiple rows of row buffer data of each of said multiple channels of images: repeatedly executing the process of acquiring row data, and on the basis of a preset mapping table, writing pixel data comprised in the row data into a row buffer area corresponding to said channel of image to obtain row buffer data until a first preset condition is met, so as to obtain the multiple rows of row buffer data of said channel of image; writing the multiple rows of row buffer data of each of said multiple channels of images into a frame buffer area, and executing filling processing on the multiple rows of row buffer data of each said channel of image, so as to obtain spliced frame buffer data; and preprocessing the spliced frame buffer data, so as to obtain a spliced image.
A trajectory planning method and apparatus. The trajectory planning method is applied to an autonomous vehicle and comprises: establishing a prediction control model of a vehicle according to the current state information and control information of the vehicle in a vehicle traveling process; according to the central line of the current lane of a road where the vehicle is located, establishing a prediction state transformation model of the vehicle; establishing a prediction road constraint model of the vehicle according to the current map information of the vehicle and the current obstacle information of the road where the vehicle is located; and according to the prediction control model, the prediction state transformation model, and the prediction road constraint model, obtaining target state information of the vehicle at a plurality of future moments, and according to the target state information, obtaining a planned trajectory of the vehicle within a time period formed by the plurality of future moments.
A migration scene-based trajectory prediction model training method, comprising: (101) obtaining a first trajectory prediction model and multiple candidate training samples, the first trajectory prediction model being a trajectory prediction model obtained by training on training samples in a preset geographic scene; (102) for any one candidate training sample, determining a reference value corresponding to the candidate training sample according to a trajectory feature corresponding to the candidate training sample and/or a prediction result of the first trajectory prediction model for the candidate training sample; (103) selecting a target training sample from the multiple candidate training samples according to reference values corresponding to the multiple candidate training samples; and (104) training the first trajectory prediction model according to the target training sample to obtain a second trajectory prediction model, the second trajectory prediction model being configured to predict a traveling trajectory of an obstacle in a migration scene.
A path planning method and apparatus for an unmanned device, which method is applied to an unmanned device, such as an unmanned vehicle, in the field of unmanned driving. The process of path planning for an unmanned device involves: according to the direction of a planning path, determining, on the planning path, a calibration planning position whose direction is opposite to the speed direction of a reference path; determining a calibration reference position in the reference path according to the point closest to the calibration planning position; and then, with the calibration planning position as a starting point and according to the part of the reference path after the calibration reference position, replanning the planning path after the calibration planning position.
The present application relates to a battery mounting structure and a vehicle. The battery mounting structure comprises a mounting base body, a battery tray used for detachably mounting a battery, a pressing element, and a stopper; the mounting base body has an accommodating space used for accommodating the battery tray; the battery tray is detachably mounted in the accommodating space by means of a mounting port; the stopper is movably disposed on the battery tray to have a pressing position for pressing the battery and a release position for releasing the battery; the pressing element is fixedly disposed on the mounting base body and has a locking state and an unlocking state; when the battery tray is mounted in the accommodating space by means of the mounting port, the pressing element is in a locking state in which the stopper is locked at the pressing position; when the battery tray is moved out of the accommodating space by means of the mounting port, the pressing element can unlock the stopper, so that the stopper is moved to the release position. By means of the battery mounting structure, quick battery replacement can be achieved, and the battery mounting structure is particularly suitable for battery mounting of various vehicles using batteries as power such as unmanned vehicles.
H01M 50/244 - Secondary casings; Racks; Suspension devices; Carrying devices; Holders characterised by their mounting method
H01M 50/249 - Mountings; Secondary casings or frames; Racks, modules or packs; Suspension devices; Shock absorbers; Transport or carrying devices; Holders specially adapted for aircraft or vehicles, e.g. cars or trains
H01M 50/262 - Mountings; Secondary casings or frames; Racks, modules or packs; Suspension devices; Shock absorbers; Transport or carrying devices; Holders with fastening means, e.g. locks
H01M 50/264 - Mountings; Secondary casings or frames; Racks, modules or packs; Suspension devices; Shock absorbers; Transport or carrying devices; Holders with fastening means, e.g. locks for cells or batteries, e.g. straps, tie rods or peripheral frames
A method for lateral control of a driverless device, comprising: acquiring control data and state data of a driverless device at a plurality of historical moments, and acquiring state data of the driverless device at the current moment (S100); inputting the acquired control data and state data as inputs into an extreme learning machine to obtain a first weighting and a second weighting at the current moment (S102); and, on the basis of the first weighting, the second weighting, a portion of the control data at a historical moment, a portion of the state data at a historical moment, and the state data at the current moment, determining control data at the current moment and, on the basis of the control data at the current moment, controlling the driving of the driverless device (S104).
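The extreme learning machine used in the lateral-control abstract above is, in its usual form, a single hidden layer with fixed random input weights and output weights solved in closed form. The sketch below shows that structure under assumed feature sizes and a tanh activation; the actual feature layout (historical control data, historical state data, current state data) and the two output weightings are only hinted at by the abstract.

```python
# Sketch of an extreme learning machine: random hidden layer, pseudo-inverse readout.
import numpy as np

class ExtremeLearningMachine:
    def __init__(self, in_dim, hidden_dim, out_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(in_dim, hidden_dim))   # fixed random input weights
        self.b = rng.normal(size=hidden_dim)
        self.beta = np.zeros((hidden_dim, out_dim))      # trainable output weights

    def _hidden(self, x):
        return np.tanh(x @ self.w + self.b)

    def fit(self, x, y):
        h = self._hidden(x)
        self.beta = np.linalg.pinv(h) @ y                # closed-form least squares
        return self

    def predict(self, x):
        return self._hidden(x) @ self.beta

# x: concatenated historical control/state data; y: the two weightings at the current moment.
x = np.random.rand(200, 12)
y = np.random.rand(200, 2)
elm = ExtremeLearningMachine(12, 64, 2).fit(x, y)
print(elm.predict(x[:3]).shape)  # (3, 2)
```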
An unmanned device acquires sensing data of surrounding obstacles; determines, for each obstacle, at least one predicted track of the obstacle in a future period of time based on the sensing data; determines, for each moment in the future period of time and according to the predicted track corresponding to the obstacle, a collision probability that a collision with the obstacle occurs at each position in a target region at the moment; and determines a global collision probability that a collision with the obstacle occurs in the entire target region at the moment. According to the global collision probability corresponding to each obstacle at each moment, the unmanned device is controlled in the future period of time.
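The aggregation step of the preceding abstract, going from per-position collision probabilities to one global collision probability for the whole target region, can be sketched as follows. Treating the positions as independent and using 1 - prod(1 - p) is an assumption; the abstract does not state the aggregation rule.

```python
# Sketch: combine per-position collision probabilities into a global probability.
import numpy as np

def global_collision_probability(position_probabilities):
    p = np.clip(np.asarray(position_probabilities, float), 0.0, 1.0)
    return 1.0 - np.prod(1.0 - p)   # probability of colliding at at least one position

grid_probs = np.array([[0.00, 0.02, 0.00],
                       [0.10, 0.30, 0.05],
                       [0.00, 0.01, 0.00]])   # target region discretized into cells
print(round(float(global_collision_probability(grid_probs)), 3))  # ~0.419
```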
A trajectory planning method, applied to autonomous vehicles. The trajectory planning method comprises: acquiring current positioning information of a vehicle, map information, and static obstacle information of a road where the vehicle is located (S11); obtaining a preliminarily planned driving area of the vehicle according to the positioning information, the map information, and the static obstacle information, the preliminarily planned driving area comprising at least one transverse width section (S12); acquiring, at a plurality of future time points, position information of each dynamic obstacle on the road where the vehicle is located, and determining a drivable trajectory section of the vehicle at each future time point according to the preliminarily planned driving area, the position information of each dynamic obstacle at the plurality of future time points, and an obstacle avoidance decision (S13); and obtaining, according to the drivable trajectory sections, a target planned trajectory of the vehicle within a time period formed by the plurality of future time points (S14). By means of the present application, the vehicle passing efficiency and avoidance capability can be improved.
A laser radar synchronization method and apparatus, a readable storage medium, and an unmanned driving device. The method comprises: when a multi-laser radar system is started, determining, for each laser radar in the system, a deflection angle of a radar coordinate system of the laser radar relative to a target coordinate system (S101); determining a mapping value of a target orientation in the target coordinate system under the radar coordinate system of the laser radar according to the deflection angle, and taking the mapping value as an initial orientation of the laser radar under the radar coordinate system thereof (S102); and controlling each laser radar to synchronously perform rotary scanning with a respective initial orientation as a start (S103).
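A minimal sketch of the orientation-mapping step in the lidar-synchronization abstract above: the shared target orientation, defined in the target coordinate system, is mapped into each lidar's own radar coordinate system through that lidar's deflection angle, and every lidar starts its rotary scan from the mapped value. Treating the deflection as a yaw-only offset in degrees is a simplifying assumption.

```python
# Sketch of mapping a common target orientation into each lidar's radar frame.
def initial_orientation(target_orientation_deg, deflection_deg):
    """Orientation of the shared target direction, expressed in one radar frame."""
    return (target_orientation_deg - deflection_deg) % 360.0

lidar_deflections = {"roof_front": 0.0, "roof_left": 120.0, "roof_right": -120.0}
target_orientation = 90.0  # e.g. "due left" in the target (vehicle) coordinate system
for name, deflection in lidar_deflections.items():
    print(name, initial_orientation(target_orientation, deflection))
# roof_front 90.0, roof_left 330.0, roof_right 210.0
```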
A vehicle-mounted visual acquisition module and a cleaning device therefor, and an unmanned vehicle, wherein the cleaning device for the vehicle-mounted visual acquisition module comprises a stator base (1), a light-transmitting housing (2), a driving device (3), and a belt transmission device (4). The light-transmitting housing (2) is rotatably mounted on the stator base (1), and is used for covering a vehicle-mounted visual acquisition assembly (6). The driving device (3) is disposed on the stator base (1) and located outside the light-transmitting housing (2). An input end of the belt transmission device (4) is connected to the driving device (3), and an output end of the belt transmission device (4) is connected to the light-transmitting housing (2). The driving device (3) drives, by means of the belt transmission device (4), the light-transmitting housing (2) to rotate. The belt transmission device (4) is relatively simple in structure and low in cost, and can absorb part of vibration energy in transmission, so that vibration caused in the transmission can be reduced, and the light-transmitting housing (2) can rotate more stably.
The present application relates to the technical field of unmanned aerial vehicles, and discloses a beacon, a beacon generation method, a beacon generation apparatus, and a device. The beacon comprises patterns of at least three levels: a first-level pattern, at least two mutually different second-level patterns, and at least two mutually different third-level patterns. The at least two mutually different second-level patterns and the at least two mutually different third-level patterns are scattered and superimposed on the first-level pattern, and there is no overlap between any two of the at least two mutually different second-level patterns and the at least two mutually different third-level patterns. The area of each second-level pattern is smaller than that of the first-level pattern, and the area of each third-level pattern is smaller than that of the second-level pattern.
G06K 19/06 - Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
A control method and apparatus for an unmanned device. The control method comprises: a control center acquiring a delivery state corresponding to a first unmanned device (S101); when it is determined that the delivery state satisfies a preset condition, determining a target delivery article from at least one delivery article conveyed by the first unmanned device (S102); sending a first control instruction, wherein the first control instruction is used for controlling a second unmanned device, such that the second unmanned device conveys, according to a delivery route planned by the control center, the borne target delivery article to a delivery point corresponding to the target delivery article (S103), wherein the first unmanned device carries at least one second unmanned device; and when it is determined that the second unmanned device has travelled away from the first unmanned device, sending a second control instruction to the first unmanned device, wherein the second control instruction is used for controlling the first unmanned device to continue to travel (S104).
Provided in the present application is a traffic signal light detection method, comprising: monitoring the location of a driverless device (S100); on the basis of the location of the driverless device, the planned path of a task being executed, and a pre-constructed map, determining location information of a target signal light (S102); when the driverless device is within a preset range of the target signal light, on the basis of the monitored most recent location of the driverless device, preset camera external parameters, and the location information of the target signal light, determining an angle adjustment parameter of a target camera of the driverless device (S104); on the basis of the angle adjustment parameter, adjusting the collection angle of the target camera such that the target camera faces the target signal light (S106); and collecting images by means of the target camera and detecting information of the state of the target signal light in the images (S108).
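The angle-adjustment computation in the traffic-signal-light abstract above can be sketched as simple pan/tilt geometry: from the device's latest position and heading, the camera's mounting offset (extrinsic parameters), and the mapped position of the target signal light, compute the yaw and pitch needed to aim the camera at the light. The flat-ground, yaw-only heading and purely translational extrinsic below are simplifying assumptions.

```python
# Sketch of computing pan/tilt so the target camera faces the target signal light.
import numpy as np

def camera_adjustment(device_xyz, device_yaw, cam_offset_xyz, light_xyz):
    c, s = np.cos(device_yaw), np.sin(device_yaw)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    cam_pos = np.asarray(device_xyz) + rot @ np.asarray(cam_offset_xyz)
    d = np.asarray(light_xyz) - cam_pos                 # vector from camera to light
    target_yaw = np.arctan2(d[1], d[0])
    pan = (target_yaw - device_yaw + np.pi) % (2 * np.pi) - np.pi   # relative to heading
    tilt = np.arctan2(d[2], np.hypot(d[0], d[1]))
    return pan, tilt

pan, tilt = camera_adjustment(device_xyz=[10.0, 5.0, 0.0], device_yaw=np.deg2rad(90),
                              cam_offset_xyz=[0.5, 0.0, 1.5],
                              light_xyz=[12.0, 40.0, 6.0])
print(np.degrees([pan, tilt]).round(1))  # small pan to the right, slight upward tilt
```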
A remote control method for an unmanned vehicle (210), which belongs to the technical field of remote control. The method comprises: determining a travel area of an unmanned vehicle (210) according to positioning information of the unmanned vehicle (210) (step 110); determining, according to the communication quality of cloud servers (230) in the travel area with the unmanned vehicle (210), a cloud server (230) that establishes a communication connection with the unmanned vehicle (210) (step 120); selecting, according to a preset communication line length between a remote agent (220) and the cloud server (230), the remote agent (220) that controls the unmanned vehicle (210) (step 130); and establishing a communication connection between the unmanned vehicle (210), the cloud server (230), and the remote agent (220), and starting the remote agent (220) to remotely control the unmanned vehicle (210) by means of the cloud server (230) (step 140).
The present application discloses a fault detection method and apparatus, and a storage medium and an electronic device. According to embodiments of the present application, each piece of hardware or software on a driverless device is no longer tested independently; instead, the current comprehensive sensing results of multiple driverless devices are obtained, a first difference between the current comprehensive sensing result of each driverless device and the current comprehensive sensing results of the other driverless devices is determined, and whether a fault occurs in said driverless device is detected according to the first difference. The comprehensive sensing result is not a result independently output by a particular piece of hardware or software, but a final sensing result integrating all hardware and software; therefore, in the method, whether a fault that can affect the automatic driving function has occurred in said driverless device can be detected as a whole on the basis of the final sensing result of said driverless device, the maintenance costs of said driverless device can be reduced, and the use efficiency of said driverless device is improved.
An unmanned device control method and apparatus, a storage medium, and an electronic device. The unmanned device control method comprises: monitoring a motion state of a first unmanned device when performing a task according to a pre-planned path (S100); determining, according to the monitored motion state of the first unmanned device, whether an anomaly occurs when the first unmanned device performs the task (S102); if an anomaly occurs when the first unmanned device performs the task, scheduling a second unmanned device to perform an environmental survey (S104); re-planning a path for the first unmanned device according to the environmental survey result (S106); and controlling, according to a re-planned path, the first unmanned device to perform a task according to the re-planned path (S108).
An unmanned aerial vehicle delivery method and a cargo storage cabinet for unmanned aerial vehicle delivery. The delivery method comprises: in response to an application docking request of a first unmanned aerial vehicle, determining use states of all parking aprons comprised in a docking platform corresponding to the first unmanned aerial vehicle, wherein the application docking request is sent when it is detected that the first unmanned aerial vehicle is located within a set range of the docking platform; and if it is determined according to the use states that all the parking aprons comprised in the docking platform have been occupied, sending a waiting instruction to the first unmanned aerial vehicle so as to enable the first unmanned aerial vehicle to, according to the waiting instruction, fly to a waiting area for landing. The described method can effectively reduce the power consumption of the unmanned aerial vehicle when waiting for an available parking apron, while improving the safety of the unmanned aerial vehicle when waiting for an available parking apron.
An unmanned aerial vehicle monitoring system, method and apparatus, and a storage medium. An unmanned aerial vehicle (100) in the unmanned aerial vehicle monitoring system sends its current location information to a server (102), and the server (102) forwards the current location information to a terminal (104); the terminal (104) obtains route information of the unmanned aerial vehicle (100) from the server (102), and displays a planned flight path of the unmanned aerial vehicle (100) and the current location of the unmanned aerial vehicle (100) in a pre-constructed three-dimensional environment model.
An image acquisition method and apparatus are provided. By controlling a motion device, at least one of an image acquisition device or a photographed target object moves under the driving of the motion device, so that a sample image including the target object can be acquired in a preset designated acquisition pose after the movement, thereby improving the accuracy of the relative pose between the image acquisition device and the target object during acquisition, reducing human intervention during acquisition, improving the degree of automation, and enabling subsequent services that rely on sample images captured with relatively high pose accuracy.
Disclosed in the present description are a delivery method and apparatus using an unmanned aerial vehicle. According to an example of the method, the method comprises: after obtaining the order information of an order to be processed, determining, on the basis of the order information, a flight route for executing a delivery task corresponding to the order to be processed; according to delivery object information and the flight route in the order information, and the attribute information of each unmanned aerial vehicle in a starting point delivery station, determining an operation condition for executing the delivery task corresponding to the order to be processed; and according to the battery information of each unmanned aerial vehicle in the starting point delivery station and the operation condition for executing the delivery task, determining a target unmanned aerial vehicle for executing the delivery task, and allocating the order to the target unmanned aerial vehicle, so that the target unmanned aerial vehicle executes the delivery task.
A method for trajectory planning, an apparatus, a storage medium, and an electronic device are provided. A constraint set of a space including a target device is determined according to a velocity of an unmanned device and velocities of designated obstacles, so that during optimization of a preliminary reference trajectory, a solution can be obtained with the space in the constraint set as a solution space under the constraint of the constraint set, so as to ensure that the solution space is a convex space, and relatively satisfactory reference trajectory points can be solved.
A landing control method for an unmanned aerial vehicle. The steps of the method comprise: in response to receiving a landing request of a target unmanned aerial vehicle, acquiring flight information of a preceding unmanned aerial vehicle (S21), wherein the preceding unmanned aerial vehicle is an unmanned aerial vehicle sharing the same parking platform with the target unmanned aerial vehicle, and the flight information comprises the moment when the preceding unmanned aerial vehicle leaves the parking platform after completing landing; determining a target spatial sub-region, in which the target unmanned aerial vehicle is located, in a landing airspace corresponding to the parking platform (S22), wherein the landing airspace comprises a plurality of spatial sub-regions which do not overlap each other; and when the preceding unmanned aerial vehicle has not left the parking platform, controlling, according to the flight information, the target unmanned aerial vehicle to pass through the target spatial sub-region to land on the parking platform, so that the moment when the target unmanned aerial vehicle lands on the parking platform is later than the moment when the preceding unmanned aerial vehicle leaves the parking platform (S23).
An unmanned delivery vehicle, an unmanned delivery system, method and apparatus, and a medium, relating to the field of unmanned delivery. The unmanned delivery vehicle comprises a memory, a wireless communication component, a processor, and a reminder component. The memory is used for storing a first terminal identifier, and the first terminal identifier is used for identifying a mobile terminal carrying pickup information. The wireless communication component is used for monitoring a second terminal identifier, and the second terminal identifier is broadcasted and sent by the mobile terminal located on the peripheral side of the unmanned delivery vehicle. The processor is used for obtaining the first terminal identifier, and matching the first terminal identifier and the second terminal identifier. The reminder component is used for presenting pickup reminder information when the first terminal identifier matches the second terminal identifier, and the pickup reminder information is used for indicating that goods to be delivered matching the pickup reminder information are placed on the unmanned delivery vehicle.
A method and apparatus for controlling an unmanned driving device are provided. An unmanned driving device obtains a travel trajectory of the unmanned driving device as a first trajectory, obtains a travel trajectory of a target object around as a second trajectory, and determines a spatial relation between the unmanned driving device and the target object according to the first trajectory and the second trajectory. Subsequently, the unmanned driving device can code the spatial relation according to a preset coding manner to obtain spatial coding information, where each code in the spatial coding information is configured to represent a spatial relation between the unmanned driving device and the target object at a moment corresponding to the code.
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G05D 1/02 - Control of position or course in two dimensions
G05B 13/02 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
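A minimal sketch of the spatial-relation coding described in the preceding abstract: at each moment, the target object's position relative to the unmanned device is reduced to one code, and the sequence of codes forms the spatial coding information. The quadrant-style code book (front/back times left/right) is an assumption; the abstract leaves the coding manner open.

```python
# Sketch of encoding the per-moment spatial relation between device and target object.
import numpy as np

CODES = {(True, True): "FL", (True, False): "FR", (False, True): "BL", (False, False): "BR"}

def encode_spatial_relation(device_trajectory, target_trajectory, headings):
    """device/target_trajectory: (T, 2) positions; headings: (T,) device yaw in radians."""
    codes = []
    for (ex, ey), (ox, oy), yaw in zip(device_trajectory, target_trajectory, headings):
        d = np.array([ox - ex, oy - ey])
        forward = np.array([np.cos(yaw), np.sin(yaw)])
        left = np.array([-np.sin(yaw), np.cos(yaw)])
        codes.append(CODES[(float(d @ forward) >= 0, float(d @ left) >= 0)])
    return codes

device = np.array([[0, 0], [1, 0], [2, 0]], float)
target = np.array([[2, 1], [2, 1], [1, -1]], float)
print(encode_spatial_relation(device, target, headings=np.zeros(3)))  # ['FL', 'FL', 'BR']
```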
56.
DRIVING RISK IDENTIFICATION METHOD, STORAGE MEDIUM AND ELECTRONIC DEVICE
Embodiments provide for extracting, from environment information corresponding to an unmanned driving environment, information respectively corresponding to predetermined dimensions. In some embodiments, the information respectively corresponding to the dimensions is input into an identification model to obtain a driving feature. Then a risk value representing a driving risk degree of an unmanned device is determined, and a maximum variation of the information corresponding to at least one dimension is determined such that the variation of the driving feature remains less than a predetermined threshold. The maximum variation of the information corresponding to each dimension is used as a risk contribution feature. A variation representative value of the information corresponding to each dimension is determined from the risk contribution feature. According to the variation representative values of the dimensions, a driving risk factor corresponding to the risk value is determined based on the driving feature.
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
B60W 40/02 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to ambient conditions
B60W 50/00 - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06N 3/04 - Architecture, e.g. interconnection topology
A positioning method and apparatus for an unmanned driving device. A positioning deviation function is fitted in advance on the basis of the historical proportion of regions, among multiple regions, that do not obstruct satellite signals and the positioning deviations observed in those regions. During the positioning of an unmanned driving device, the confidence of the results of the satellite positioning currently performed can be determined according to the proportion of regions that do not obstruct satellite signals in a target region where the unmanned driving device is currently located and the pre-fitted positioning deviation function. Furthermore, according to the confidence of the results and the satellite positioning location determined by means of a satellite positioning method, fusion positioning is performed on the unmanned driving device.
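The fit-then-evaluate idea of the positioning abstract above can be sketched in a few lines: historical pairs of (open-sky proportion, positioning deviation) are fitted once, and at run time the current region's proportion yields an expected deviation that is mapped to a confidence. The quadratic fit, the toy historical numbers, and the 1/(1 + deviation) confidence mapping are assumptions for illustration.

```python
# Sketch: fit a positioning-deviation function offline, derive a confidence online.
import numpy as np

# Historical data: proportion of regions with unobstructed satellite signals, and the
# satellite-positioning deviation (meters) observed in those regions.
hist_proportion = np.array([0.9, 0.8, 0.6, 0.4, 0.2])
hist_deviation = np.array([0.3, 0.5, 1.2, 2.5, 4.0])

deviation_fn = np.polynomial.Polynomial.fit(hist_proportion, hist_deviation, deg=2)

def satellite_confidence(open_sky_proportion):
    expected_deviation = max(float(deviation_fn(open_sky_proportion)), 0.0)
    return 1.0 / (1.0 + expected_deviation)      # higher deviation -> lower confidence

# Current target region: 70% of its sub-regions do not obstruct satellite signals.
print(round(satellite_confidence(0.7), 3))
```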
A method and apparatus for acquiring a flight route, and a computer device and a readable storage medium, which belong to the technical field of airspace management. The method for acquiring a flight route comprises: step 201: acquiring the start location and the destination location of an aircraft, and determining a plurality of reference flight routes of the aircraft on the basis of the start location and the destination location; step 202: determining a cost value of each reference flight route on the basis of first information of each of the plurality of reference flight routes, wherein the first information of any reference flight route is used to indicate the degree of congestion of the ground mapping route corresponding to that reference flight route; and step 203: selecting a target flight route of the aircraft from the plurality of reference flight routes on the basis of the cost value of each reference flight route.
The present disclosure discloses a target object recognition method and apparatus. By extracting the regions corresponding to target objects in an acquired image, the decrease in recognition accuracy caused by noise is avoided. Moreover, by means of two-stage recognition, the region corresponding to each target object and the recognition result of each target object can be determined more accurately, and the final recognition result is then determined from the two-stage recognition results, so that the accuracy of target object recognition is improved.
A trajectory prediction method and apparatus, a storage medium, and an electronic device are provided. In embodiments of this disclosure, according to a historical trajectory of a designated target and a historical trajectory of each obstacle, a historical interaction feature between the designated target and each obstacle is determined, and a motion trajectory of the designated target is predicted to obtain an initial predicted trajectory. A future interaction feature between each obstacle and the designated target is then determined according to the initial predicted trajectory and a planned trajectory of each obstacle. According to the future interaction feature, a final predicted trajectory of the designated target is obtained.
B60W 50/00 - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit
G06N 3/04 - Architecture, e.g. interconnection topology
G16Y 20/10 - Information sensed or collected by the things relating to the environment, e.g. temperature; Information sensed or collected by the things relating to location
G16Y 40/50 - Safety; Security of things, users, data or systems
Disclosed in the present description are a method and apparatus for training a recognition model. The method comprises: performing blurring processing on an acquired unannotated target object image, so as to determine a first blurred image and a second blurred image; determining a recognition result of the second blurred image by means of a recognition model to be trained; determining annotations to the first blurred image according to the recognition result of the second blurred image; and according to the annotations to the first blurred image, training said recognition model by using the first blurred image as a first training sample. Accordingly, it is unnecessary to annotate a sample manually, thereby reducing the generation time and cost of a training sample, and improving the training efficiency of a recognition model.
An obstacle tracking method, a storage medium, and an electronic device are provided. In various embodiments, for the obstacles in every two frames of laser point clouds, the obstacles in the two frames are first matched according to their types. Next, obstacles that remain unmatched are matched according to their point cloud data. After the two rounds of matching, the motion states of the obstacles in the two frames are updated.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
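A minimal sketch of the two-pass matching described in the entry above, assuming each obstacle is a dict with an id, a type, and an Nx2 point array; the greedy nearest-centroid pairing and the distance gate are assumptions, not the disclosed matching rules.

    import numpy as np

    def centroid(ob):
        return np.mean(ob["points"], axis=0)

    def match_obstacles(prev, curr, max_dist=2.0):
        """Two-pass matching: pass 1 pairs obstacles of the same type (nearest centroid
        among that type), pass 2 pairs the remainder purely by centroid distance."""
        matches, rest_prev, rest_curr = [], list(prev), list(curr)

        def greedy(pool_prev, pool_curr, same_type):
            for p in list(pool_prev):
                candidates = [c for c in pool_curr
                              if (not same_type) or c["type"] == p["type"]]
                if not candidates:
                    continue
                c = min(candidates, key=lambda c: np.linalg.norm(centroid(p) - centroid(c)))
                if np.linalg.norm(centroid(p) - centroid(c)) <= max_dist:
                    matches.append((p["id"], c["id"]))
                    pool_prev.remove(p)
                    pool_curr.remove(c)

        greedy(rest_prev, rest_curr, same_type=True)    # pass 1: by type
        greedy(rest_prev, rest_curr, same_type=False)   # pass 2: by point cloud proximity
        return matches

    prev = [{"id": 1, "type": "car", "points": np.array([[0.0, 0.0], [1.0, 0.0]])},
            {"id": 2, "type": "person", "points": np.array([[5.0, 5.0]])}]
    curr = [{"id": 7, "type": "car", "points": np.array([[0.5, 0.1], [1.5, 0.1]])},
            {"id": 8, "type": "unknown", "points": np.array([[5.2, 5.1]])}]
    print(match_obstacles(prev, curr))   # [(1, 7), (2, 8)]

The motion state of each matched pair would then be updated from the two observations.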
A data noise reduction method and apparatus are provided. In some embodiments, the method includes: obtaining to-be-processed point cloud data; for each of points in the to-be-processed point cloud data, determining a point feature corresponding to the point, the point feature comprising at least one of a spatial distribution difference feature between the point and other points adjacent to the point or a point group distribution feature corresponding to a point group comprising all points in a local space in which the point is located; recognizing a noise point from the to-be-processed point cloud data according to point features corresponding to the points in the to-be-processed point cloud data; and performing noise reduction on the to-be-processed point cloud data according to the recognized noise point.
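As an illustrative sketch of one possible spatial-distribution-difference feature: the mean distance of each point to its k nearest neighbours, with points whose value is an outlier flagged as noise. The brute-force neighbour search, k, and sigma are assumptions, not the disclosed point features.

    import numpy as np

    def find_noise_points(points, k=8, sigma=2.0):
        """Flag points whose mean distance to their k nearest neighbours is an
        outlier relative to the whole cloud (brute force, adequate for small clouds)."""
        dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        knn_mean = np.sort(dists, axis=1)[:, 1:k + 1].mean(axis=1)   # skip the self-distance of 0
        mu, std = knn_mean.mean(), knn_mean.std()
        return np.where(knn_mean > mu + sigma * std)[0]              # indices of noise points

    cloud = np.vstack([np.random.randn(200, 3), [[15.0, 15.0, 15.0]]])  # one obvious stray point
    noisy = find_noise_points(cloud)
    denoised = np.delete(cloud, noisy, axis=0)
    print(noisy, denoised.shape)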
An obstacle tracking method and apparatus, a storage medium and an unmanned driving device are provided. An obstacle aggregation region in a first point cloud may be determined according to position information of each obstacle in the first point cloud acquired by the unmanned driving device. Then, aggregated obstacles and non-aggregated obstacles in a second point cloud acquired by the unmanned driving device are determined according to the obstacle aggregation region. In addition, a matching result of each aggregated obstacle is determined based on a group matching rule, and a matching result of each non-aggregated obstacle is determined based on a non-group matching rule. Finally, an obstacle tracking result is determined according to the matching result of each obstacle in the second point cloud.
A positioning method for an unmanned driving device is provided. First, environment image data acquired by the unmanned driving device is obtained, and for each piece of reference image data matching the environment image data, a predicted pose of the unmanned driving device when acquiring the environment image data is determined according to an actual pose corresponding to that piece of reference image data. Pose deviation representation information of the reference image data is then determined according to the predicted pose and the actual poses corresponding to the other pieces of reference image data. Finally, target image data is selected from the reference image data according to the pose deviation representation information, and the pose of the unmanned driving device when acquiring the environment image data is determined.
An unmanned aerial vehicle landing method and apparatus. The method comprises: performing semantic segmentation on a ground image acquired by an unmanned aerial vehicle to determine the type of each ground area corresponding to the ground image (S100); determining a landing area of the unmanned aerial vehicle (S102); controlling the unmanned aerial vehicle to reach a position above the landing area and then descend to the landing area (S104); and tracking the landing area by means of image stabilization according to images of the landing area continuously acquired in the descending process of the unmanned aerial vehicle, so as to adjust a landing trajectory of the unmanned aerial vehicle until the unmanned aerial vehicle reaches the landing area (S106). In the process of autonomous landing of the unmanned aerial vehicle, the landing area is determined by means of the image semantic segmentation, and the landing area is tracked by means of the image stabilization, thereby avoiding dependence on a positioning chip, and improving the landing efficiency and safety of the unmanned aerial vehicle.
The present disclosure discloses a communication encryption system, method and apparatus. An unmanned device in the system may generate a key by means of a symmetric encryption algorithm and send it to a control station. The control station in the system may encrypt target data to be sent according to a pre-stored device identifier of the unmanned device to determine first hybrid data, then re-encrypt the first hybrid data according to the received key to obtain first ciphertext, and send the first ciphertext to the unmanned device. After receiving the first ciphertext, the unmanned device may decrypt it by means of the key generated by the symmetric encryption algorithm to obtain the first hybrid data, and then decrypt the first hybrid data according to its own device identifier to obtain the target data. In the system, the two parties perform XOR encryption and decryption on the transmitted target data by means of the device identifier of the unmanned device, which further enhances the security of information transmission.
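The two-layer encrypt/decrypt flow can be sketched with plain XOR layers; this toy uses a SHA-256-derived keystream as a stand-in for the symmetric algorithm, is not secure, and is not the disclosed cipher, and the identifier and key values are invented for illustration.

    import hashlib
    from itertools import cycle

    def xor_bytes(data: bytes, key: bytes) -> bytes:
        # XOR each data byte with a repeating key; applying it twice restores the data.
        return bytes(b ^ k for b, k in zip(data, cycle(key)))

    def keystream(key: bytes, length: int) -> bytes:
        # Stand-in for a symmetric cipher: a SHA-256-derived keystream (illustrative only).
        out, counter = b"", 0
        while len(out) < length:
            out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:length]

    device_id = b"UAV-0042"                  # pre-stored device identifier
    session_key = b"k3y-from-device"         # key generated by the unmanned device and sent to the station

    # Control station: identifier layer first, then key layer.
    target_data = b"waypoint: 31.2304N, 121.4737E"
    hybrid = xor_bytes(target_data, device_id)
    ciphertext = xor_bytes(hybrid, keystream(session_key, len(hybrid)))

    # Unmanned device: undo the key layer, then the identifier layer.
    recovered = xor_bytes(xor_bytes(ciphertext, keystream(session_key, len(ciphertext))), device_id)
    assert recovered == target_data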
A self-driving control system comprises a plurality of self-driving systems (101), an evaluation module (102) and a control module (103). The self-driving systems (101) can be in a running state at the same time, and each, while running, can send target feature information of an unmanned device in the current travelling state to the evaluation module (102). The evaluation module (102) is used for evaluating, according to the target feature information, the degree of matching between each self-driving system (101) and the current travelling environment information, determining a target self-driving system according to these matching degrees, and sending an identifier of the target self-driving system to the control module (103). The control module (103) is used for determining a target control policy according to the self-driving system that currently controls the unmanned device and the target self-driving system, and for switching, according to the target control policy, to the target self-driving system to control the travelling of the unmanned device.
The present invention relates to an unmanned aerial vehicle control method, apparatus and system, a storage medium, and an electronic device. The method for an unmanned aerial vehicle comprises: receiving flight task information sent by a flight task management terminal; for each flight task segment, establishing a first communication connection with a first remote control end corresponding to the flight task segment by means of a first communication module of the unmanned aerial vehicle, so that the unmanned aerial vehicle can receive a control instruction sent by the first remote control end when the unmanned aerial vehicle flies in the flight task segment; when the unmanned aerial vehicle flies to a remote control connection point of the flight task segment, establishing a second communication connection with a second remote control end corresponding to the next flight task segment of the flight task segment by means of a second communication module of the unmanned aerial vehicle, so that the unmanned aerial vehicle can receive a control instruction sent by the second remote control end; and disconnecting the first communication connection with the first remote control end when the unmanned aerial vehicle flies to the end point of the flight task segment.
A sensor module, a sensor assembly, and a driverless vehicle. The sensor module (100) comprises a housing (1) and a camera assembly (2). The housing (1) comprises a top plate (11), a bottom plate (12), a light-transmissive side plate (13), and support members (14). The top plate (11) and the bottom plate (12) are provided opposite to each other. The light-transmissive side plate (13) is located between the top plate (11) and the bottom plate (12). The support members (14) extend between the top plate (11) and the bottom plate (12). The two ends of each support member (14) are respectively detachably connected to the top plate (11) and the bottom plate (12). The light-transmissive side plate (13) is sandwiched between the top plate (11) and the bottom plate (12). The top plate (11), the bottom plate (12), and the light-transmissive side plate (13) jointly enclose to form an accommodation space for accommodating the camera assembly (2). The housing (1) of the sensor module (100) is convenient to disassemble and assemble, such that the camera assembly (2) located inside the housing (1) can be conveniently and quickly repaired.
A control method and device for an unmanned apparatus. The control method comprises the following steps: acquiring state data of an unmanned apparatus at the current time (S101); predicting, according to the state data, a control amount for the unmanned apparatus at the next time and taking same as a reference control amount (S102); then inputting the state data into a control amount error prediction model, to obtain an actual control amount error for the unmanned apparatus at the next time (S103); determining, according to the actual control amount error and the determined reference control amount, an actual control amount for the unmanned apparatus at the next time (S104); and controlling the unmanned apparatus according to the actual control amount (S105).
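A minimal sketch of steps S102-S104, assuming both the reference-control predictor and the control amount error prediction model are simple stand-in functions and that the actual control amount is their sum; the disclosed models and combination rule may differ.

    def control(state, predict_reference, predict_error):
        """Steps S102-S104 as a sketch: reference control amount from the state,
        predicted error from an error model, actual control = reference + error."""
        reference = predict_reference(state)      # S102
        error = predict_error(state)              # S103 (stand-in for the error prediction model)
        return reference + error                  # S104 (combination rule assumed here to be a sum)

    # Toy stand-ins for the two predictors (assumptions, not the disclosed models).
    predict_reference = lambda s: 0.5 * s["speed_error"]
    predict_error = lambda s: -0.1 * s["speed_error"]

    print(control({"speed_error": 2.0}, predict_reference, predict_error))  # actual control amount (S105 would apply it)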
Vehicle control is provided, including: obtaining vehicle information of a target vehicle and environmental information of a reference environment in which the target vehicle is located; obtaining a target matrix based on the vehicle information and the environmental information; splitting the target matrix to obtain a plurality of sub-matrices; and obtaining target driving control information of the target vehicle based on matrix elements in the sub-matrices and driving control information of a surrounding vehicle of the target vehicle.
A camera module and an unmanned vehicle. The camera module comprises: a camera (10); a mounting base (20), the camera (10) being mounted on the mounting base (20); a light-transmitting protective cover (30) fitted over the camera (10) and movably provided on the mounting base (20); and a driving mechanism (40) located on an inner side of the light-transmitting protective cover (30) and used for driving the light-transmitting protective cover (30) to vibrate with respect to the camera (10) and the mounting base (20).
B08B 7/02 - Cleaning by methods not provided for in a single other subclass or a single group in this subclass by distortion, beating, or vibration of the surface to be cleaned
74.
VEHICLE CONTROL METHOD AND APPARATUS, AND STORAGE MEDIUM, ACTIVE SAFETY SYSTEM AND VEHICLE
A vehicle control method and apparatus, and a storage medium, an active safety system and a vehicle. The method comprises: acquiring traveling state information of a vehicle, wherein the traveling state information comprises a front collision time value and a rear collision time value of the vehicle, the front collision time value represents a collision time value when the vehicle collides with a first obstacle in front of the vehicle, and the rear collision time value represents a collision time value when the vehicle collides with a second obstacle behind the vehicle; determining, on the basis of the front collision time value, that the vehicle is at risk of a front collision, and determining, on the basis of the rear collision time value, that the vehicle is at risk of a rear collision; and if a front collision risk value is greater than a rear collision risk value, braking the vehicle.
B60W 30/09 - Taking automatic action to avoid collision, e.g. braking and steering
B60T 7/22 - Brake-action initiating means for automatic initiation; Brake-action initiating means for initiation not subject to will of driver or passenger initiated by contact of vehicle, e.g. bumper, with an external object, e.g. another vehicle
B60W 30/095 - Predicting travel path or likelihood of collision
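An illustrative sketch of the braking decision in entry 74, assuming collision risk is taken as the inverse of the time-to-collision once it drops below a threshold; the thresholds and the risk formula are assumptions, not the disclosed risk values.

    def brake_decision(front_ttc, rear_ttc, front_threshold=2.5, rear_threshold=2.5):
        """Brake only when both a front and a rear collision risk exist and the
        front collision risk value exceeds the rear collision risk value."""
        front_risk = 1.0 / front_ttc if front_ttc < front_threshold else 0.0
        rear_risk = 1.0 / rear_ttc if rear_ttc < rear_threshold else 0.0
        if front_risk > 0.0 and rear_risk > 0.0 and front_risk > rear_risk:
            return "brake"        # front collision risk dominates, so brake the vehicle
        return "hold"

    print(brake_decision(front_ttc=1.2, rear_ttc=2.0))   # -> "brake"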
Various embodiments provide a method and apparatus for unmanned driving behavior decision-making and model training, and an electronic device. The method includes: acquiring sample data, wherein the sample data includes a sample image; extracting a sample feature vector corresponding to the sample data, wherein a feature vector of the sample image is extracted by manifold dimension reduction; and based on the sample feature vector, training by semi-supervised learning to obtain a target decision-making model, wherein the target decision-making model is used for decision-making classification.
Embodiments of the present disclosure provide a control method of an unmanned vehicle and an unmanned vehicle with improved safety. The control method of the unmanned vehicle includes: detecting vibration information and running attitude information of the unmanned vehicle; determining a condition of the unmanned vehicle according to the vibration information, the running attitude information and a running status of the unmanned vehicle, wherein the running status of the unmanned vehicle includes a stop status and a driving status; and when the condition of the unmanned vehicle is abnormal, controlling the unmanned vehicle according to an abnormal condition coping strategy.
B60W 50/02 - Ensuring safety in case of control system failures, e.g. by diagnosing, circumventing or fixing failures
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
B60W 40/02 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to ambient conditions
B60W 50/029 - Adapting to failures or work around with other constraints, e.g. circumvention by avoiding use of failed parts
G05D 1/02 - Control of position or course in two dimensions
G07C 5/08 - Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle, or waiting time
G07C 5/00 - Registering or indicating the working of vehicles
A shelf (100), comprising a tray (10), a base (20), and a lock structure. The tray (10) is used for placing an article; the tray (10) is rotatably connected to the base (20) by means of a spherical hinge (40); the lock structure is used for locking the tray (10) to the base (20). Also provided are a store sign-in method using the shelf (100), and a sign-in device.
F16M 11/14 - Means for attachment of apparatus; Means allowing adjustment of the apparatus relatively to the stand allowing pivoting in more than one direction with ball-joint
F16M 11/04 - Means for attachment of apparatus; Means allowing adjustment of the apparatus relatively to the stand
G06Q 10/08 - Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
H04W 4/02 - Services making use of location information
A body housing, comprising a first shell (11) and a second shell (12) arranged at an interval along an inside-outside direction. The first shell (11) and the second shell (12) are configured such that the first shell (11) and the second shell (12) can be deformed to abut against each other when a vehicle collision occurs, so as to send a collision signal to a controller (2) of the vehicle by means of electric connection therebetween. Also disclosed is a vehicle having the body housing.
B60R 21/0136 - Electrical circuits for triggering safety arrangements in case of vehicle accidents or impending vehicle accidents including means for detecting collisions, impending collisions or roll-over responsive to actual contact with an obstacle
B60R 19/02 - Bumpers, i.e. impact receiving or absorbing members for protecting vehicles or fending off blows from other vehicles or objects
B60Q 9/00 - Arrangement or adaptation of signal devices not provided for in one of main groups
A path planning method, comprising: acquiring waybills which have not been processed by a target transport capacity (101), the waybills recording locations to be visited; sorting the waybills, and performing path planning on said locations in the waybills according to a sorting order of the waybills, to obtain a sequence of said locations (102); taking the sequence of said locations as an initial sequence (103), and randomly removing some of said locations from the initial sequence, to obtain a remaining sequence (104); re-inserting the removed locations into the remaining sequence, to obtain a recombined sequence (105); partially adjusting the recombined sequence, to obtain a target sequence (106); and when a preset end condition is satisfied, generating a path planning result on the basis of the target sequence (107).
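A sketch of one destroy-and-repair iteration over steps 104-106, assuming a plain distance matrix, random removal, cheapest re-insertion, and an adjacent-swap adjustment; the removal count, distances, and end condition are illustrative only.

    import random

    def route_length(route, dist):
        return sum(dist[a][b] for a, b in zip(route, route[1:]))

    def destroy_and_repair(route, dist, remove_n=2):
        """One iteration: remove random locations (104), greedily re-insert each at its
        cheapest position (105), then a simple local adjustment (106)."""
        removed = random.sample(route, remove_n)
        remaining = [x for x in route if x not in removed]
        for loc in removed:                                   # cheapest re-insertion
            best = min(range(len(remaining) + 1),
                       key=lambda i: route_length(remaining[:i] + [loc] + remaining[i:], dist))
            remaining.insert(best, loc)
        for i in range(len(remaining) - 1):                   # partial adjustment: adjacent swap if it helps
            swapped = remaining[:i] + [remaining[i + 1], remaining[i]] + remaining[i + 2:]
            if route_length(swapped, dist) < route_length(remaining, dist):
                remaining = swapped
        return remaining

    # Toy symmetric distance matrix over 5 locations to be visited (assumed data).
    dist = [[0, 2, 9, 10, 7], [2, 0, 6, 4, 3], [9, 6, 0, 8, 5], [10, 4, 8, 0, 6], [7, 3, 5, 6, 0]]
    route = [0, 1, 2, 3, 4]                                   # initial sequence from the waybill ordering
    for _ in range(20):                                       # iterate until a preset end condition is met
        route = destroy_and_repair(route, dist)
    print(route, route_length(route, dist))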
An unmanned aerial vehicle service management system, a method, a readable storage medium, and an electronic device. Unmanned aerial vehicle service management systems (10 and 20) are communicatively connected to an external airspace management center (30), and the airspace management center (30) is connected to at least two such unmanned aerial vehicle service management systems (10 and 20). Each unmanned aerial vehicle service management system (10 and 20) comprises an unmanned aerial vehicle service management device (101 and 201) and an unmanned aerial vehicle traffic management device (102 and 202). The unmanned aerial vehicle traffic management device (102 and 202) receives current flight information of a first target unmanned aerial vehicle (103) from the unmanned aerial vehicle service management device (101) and transmits it to the airspace management center (30); receives from the airspace management center (30) current flight information and/or prompt information of a second target unmanned aerial vehicle (203) executing a second service in the other unmanned aerial vehicle service management system (20); and controls, on the basis of the current flight information and/or prompt information of the second target unmanned aerial vehicle (203) and the current flight information of the first target unmanned aerial vehicle (103), the flight of the first target unmanned aerial vehicle (103) to execute a first service.
An information search method and apparatus, a search gateway, and a storage medium, which relate to the technical field of the Internet. The method comprises: receiving a first search request sent by a terminal, and acquiring first search request information corresponding to the first search request (201); in response to determining that the first search request needs to be degraded and that first service data corresponding to the first search request information is not present in a cache, modifying the first search request information to obtain second search request information, second service data corresponding to the second search request information being present in the cache (202); acquiring, according to the second search request information, the second service data corresponding to the second search request information from the cache (203); and returning the second service data to the terminal (204).
A method for ranking a business object is provided. The method includes: obtaining a historical behavior record; extracting at least one of discrete feature information or continuous feature information of at least one business object from the historical behavior record; inputting at least one of discrete feature information or continuous feature information of each business object into a prediction model obtained by pre-training, and predicting a ranking score of each business object; and ranking each business object according to the ranking score of each business object.
Disclosed is a communication control method, applied to a server in communication with a client, where a plurality of link channels is established between the client and the server. The communication control method comprises: detecting whether a network fault occurs in the client according to network connection conditions of the client; in response to a network fault in the client, determining the type of the network fault; selecting, according to the type of the network fault, a target link channel matching the fault type from the plurality of link channels; and communicating with the client on the basis of the target link channel.
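A minimal sketch of selecting a target link channel by fault type; the fault types, channel names, and mapping are assumptions, not those of the disclosure.

    # Illustrative mapping from detected fault type to the link channel used as fallback.
    CHANNELS = {"tcp_long": "primary long-lived connection",
                "http_poll": "HTTP polling channel",
                "sms_gateway": "out-of-band notification channel"}

    FAULT_TO_CHANNEL = {"connection_reset": "tcp_long",
                        "dns_failure": "http_poll",
                        "no_network": "sms_gateway"}

    def select_channel(fault_type):
        """Pick the link channel matching the detected fault type; fall back to polling."""
        return FAULT_TO_CHANNEL.get(fault_type, "http_poll")

    print(select_channel("dns_failure"), "->", CHANNELS[select_channel("dns_failure")])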
A delivery order allocation method and device, a storage medium, and an electronic apparatus. The delivery order allocation method comprises: forming, for multiple target delivery orders, one or more combined package solutions according to an acceptance rate of historical delivery order packages accepted by a delivery resource, the acceptance rate being determined on the basis of acceptance behavior of the delivery resource with respect to the historical delivery order packages, and the combined package solutions being used to combine the multiple target delivery orders into one or more target delivery order packages; determining a target combined package solution from the one or more combined package solutions according to a delivery duration of each target delivery order package; and allocating, to the delivery resource, a target delivery order package corresponding to the target combined package solution.
G06Q 10/08 - Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
G06Q 10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
G06Q 50/28 - Logistics, e.g. warehousing, loading, distribution or shipping
A method and device for search intent identification, an electronic device, and a storage medium. The method comprises: identifying a first search intent set of search requests on the basis of a binary classification model (S110); identifying a second search intent set of the search requests on the basis of a multi-class classification model, the search intents in the second search intent set being provided with intent strength values (S120); and finding the intersection of the first search intent set and the second search intent set, taking the search intents in the intersection as the search intents corresponding to the search requests, and determining a search intent display scheme on the basis of the intent strength values of the search intents in the intersection (S130).
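A minimal sketch of step S130, assuming the binary model yields a plain set of intents and the multi-class model yields intents with strength values; the example intents and scores are invented.

    def merge_intents(binary_intents, multi_intents):
        """Intersect the two identified intent sets and keep the strength values
        from the multi-class model for the surviving intents (S130)."""
        common = set(binary_intents) & set(multi_intents)
        # Order the display scheme by intent strength, strongest first.
        return sorted(((i, multi_intents[i]) for i in common), key=lambda kv: -kv[1])

    binary_intents = {"restaurant", "takeaway", "hotel"}                  # from the binary model (S110)
    multi_intents = {"takeaway": 0.9, "restaurant": 0.6, "movie": 0.4}    # from the multi-class model (S120)
    print(merge_intents(binary_intents, multi_intents))   # [('takeaway', 0.9), ('restaurant', 0.6)]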
A method and system for guiding unmanned aerial vehicle landing, an auxiliary positioning system, and an unmanned aerial vehicle. The method for guiding unmanned aerial vehicle landing comprises: determining, by means of multiple location detectors in an auxiliary positioning system of a take-off and landing port, location information of an unmanned aerial vehicle in the airspace above a target take-off and landing port (S11); generating corrected guidance information according to an offset vector between the location information and target location information, the target location information representing information of any location within a signal coverage of a guidance beacon in the target take-off and landing port (S12); and sending the corrected guidance information to the unmanned aerial vehicle, wherein the corrected guidance information is used to guide the unmanned aerial vehicle to fly into the signal coverage of the guidance beacon (S13).
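A minimal sketch of step S12, assuming positions are expressed in a local frame of the take-off and landing port and that the corrected guidance information is simply the offset vector (with its norm and unit heading); the coordinates are invented.

    import numpy as np

    def corrected_guidance(uav_position, beacon_target):
        """Step S12 as a sketch: corrected guidance from the offset vector between the
        detected UAV position and a point inside the beacon's signal coverage."""
        offset = np.asarray(beacon_target, float) - np.asarray(uav_position, float)
        distance = float(np.linalg.norm(offset))
        heading = offset / distance if distance > 0 else offset
        return {"offset_m": offset.tolist(), "distance_m": distance, "unit_heading": heading.tolist()}

    # Positions in a local east/north/up frame of the take-off and landing port (assumed data).
    print(corrected_guidance(uav_position=[12.0, -5.0, 30.0], beacon_target=[0.0, 0.0, 25.0]))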
The method disclosed in the present application comprises: acquiring travel data during a travel, the travel data comprising recording data and heartbeat data; splitting the travel data into n travel segments, each travel segment comprising a recording segment corresponding to the recording data and a heartbeat segment corresponding to the heartbeat data, and n being a positive integer; calculating respective importance degree parameters of the n travel segments, the importance degree parameters being used for representing the importance degrees of the travel segments during the travel; and correspondingly storing the recording segments of the n travel segments and the respective importance degree parameters of the n travel segments.
A polysemant meaning learning method is provided. The method includes: extracting a plurality of first target terms and at least one adjacent term combination of each first target term; training a capsule network model by taking the encoding of each first target term as an input vector and the encoding of each adjacent term combination corresponding to that first target term as an output vector; when a second target term to be recognized is processed, inputting the second target term into the capsule network model and taking the plurality of obtained intermediate vectors as feature vectors of the second target term; and clustering the feature vectors whose cosine similarity is greater than a similarity threshold to generate representative terms of one or more categories and determine one or more meanings of the one or more categories.
G06K 9/72 - Methods or arrangements for recognition using electronic means using context analysis based on the provisionally recognised identity of a number of successive patterns, e.g. a word
G06K 9/62 - Methods or arrangements for recognition using electronic means
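The clustering step of the entry above can be sketched as greedy cosine-similarity clustering of a term's feature vectors; the threshold, the greedy assignment to the first matching cluster centre, and the toy vectors are assumptions, not the disclosed procedure.

    import numpy as np

    def cluster_senses(vectors, threshold=0.8):
        """Greedy clustering: a vector whose cosine similarity to an existing cluster
        centre exceeds the threshold joins that cluster, otherwise it starts a new one
        (each cluster corresponds to one candidate meaning of the term)."""
        centres, clusters = [], []
        for v in vectors:
            v = v / np.linalg.norm(v)
            sims = [float(v @ c) for c in centres]
            if sims and max(sims) > threshold:
                clusters[int(np.argmax(sims))].append(v)
            else:
                centres.append(v)
                clusters.append([v])
        return clusters

    vecs = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.0, 0.9]])   # intermediate vectors (toy data)
    print([len(c) for c in cluster_senses(vecs)])   # -> [2, 2]: two candidate meanings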
Disclosed is a content pushing method, comprising: acquiring a candidate push list on the basis of a search request of a terminal, the candidate push list comprising at least one item of candidate push content; acquiring a historical behavior list of a user, the historical behavior list comprising negative influence content; determining, on the basis of each item of negative influence content in the historical behavior list and each item of candidate push content in the candidate push list, the click rate of each item of candidate push content in the candidate push list; determining push content according to the click rate of each item of candidate push content in the candidate push list; and sending the push content to the terminal.
The present disclosure relates to a method and device for flight path determination, a storage medium, and an electronic apparatus. According to an embodiment of the present application, the method comprises: determining, according to a flight origin and a flight destination of an aerial vehicle and a topology diagram corresponding to multiple blocks in a target space, a block path from the multiple blocks, along which the aerial vehicle moves from the flight origin to the flight destination, wherein the flight origin and the flight destination are located in the target space, multiple spatial points having location information exist in the target space, and each block contains a spatial point group obtained after grouping has been performed on the multiple spatial points; determining entrance/exit information of each block on the block path according to the location information of each spatial point and the block path; and determining a flight path of the aerial vehicle according to the location information of each spatial point, location information of the flight origin and the flight destination, connection relationships between the multiple spatial points, the block path, and the entrance/exit information of each block on the block path.
An aviation region construction method comprises: receiving region construction data (S11); determining a target region according to the region construction data (S12), wherein the target region is a flyable two-dimensional region; generating a plurality of first positioning nodes in the target region (S13); and generating a target aviation region according to a flyable altitude range and the plurality of first positioning nodes (S14), wherein the target aviation region is a flyable three-dimensional region. Therefore, by means of generating the first positioning nodes after the target region is constructed, the method may be applied to the aviation region management of various types of region construction data without needing to be based on existing routes and virtual pipelines, thus effectively broadening the scope of use of the aviation region construction method. Moreover, the target aviation region is generated on the basis of the first positioning nodes and the flyable altitude range, so that the node distribution of the aviation region is more comprehensive and accurate. Thus, the accuracy of aviation region management may be improved, and the utilization rate of the aviation region may be increased.
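A minimal sketch of steps S13-S14, assuming the flyable 2D region is reduced to an axis-aligned bounding box and that positioning nodes are laid out on a regular grid before being extruded over the flyable altitude range; the spacing and altitudes are invented.

    def build_aviation_region(region_bbox, node_spacing, altitude_range, altitude_step):
        """S13: lay first positioning nodes on a grid inside the flyable 2D region;
        S14: extrude them over the flyable altitude range to obtain a 3D region."""
        (x_min, y_min), (x_max, y_max) = region_bbox
        nodes_2d = [(x_min + i * node_spacing, y_min + j * node_spacing)
                    for i in range(int((x_max - x_min) / node_spacing) + 1)
                    for j in range(int((y_max - y_min) / node_spacing) + 1)]
        z_min, z_max = altitude_range
        levels = [z_min + k * altitude_step for k in range(int((z_max - z_min) / altitude_step) + 1)]
        return [(x, y, z) for (x, y) in nodes_2d for z in levels]     # nodes of the target aviation region

    region = build_aviation_region(((0.0, 0.0), (100.0, 100.0)),
                                   node_spacing=50.0, altitude_range=(60.0, 120.0), altitude_step=30.0)
    print(len(region))   # 3 x 3 grid x 3 altitude levels = 27 nodes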
An unmanned aerial vehicle control method, comprising: in response to a take-off/land request for an unmanned aerial vehicle, determining a take-off/land path for the unmanned aerial vehicle, wherein the take-off/land path comprises a plurality of diversion take-off/land segments and a vertical take-off/land segment, and the projections of the two endpoints of each diversion take-off/land segment onto a horizontal plane do not overlap (S110); and causing the unmanned aerial vehicle to sequentially complete flights of the plurality of diversion take-off/land segments and the vertical take-off/land segment, so as to achieve take-off or landing, wherein the unmanned aerial vehicle completes a flight of a take-off/land segment under the condition that the take-off/land segment is free (S120).
A data fusion method and apparatus, and an applicable system. The applicable system comprises a vehicle (1), and a laser radar (2) and at least one image sensor (3) which are arranged on the vehicle (1), wherein the laser radar (2) collects a point cloud in a manner of rotating a laser transmitter. The method comprises: a processor (5) acquiring a rotation angle of a laser transmitter of the laser radar (2) (S200); according to a predetermined correlation between each rotation angle interval and each image sensor (3), selecting the image sensor (3) corresponding to an interval where the acquired rotation angle is located as a designated image sensor (3) (S202); sending a trigger signal to the designated image sensor (3), such that the designated image sensor (3) collects an image (S204); receiving the image, and the point cloud which is collected and returned by the laser radar (2) and is within the rotation angle interval where the acquired rotation angle is located (S206); and according to posture change information of the vehicle (1) during the process of collecting the image and the point cloud, fusing information of a pixel point in the image and information of a point in the point cloud (S208).
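A minimal sketch of step S202, assuming a fixed correlation table between rotation-angle intervals and camera identifiers; the sector boundaries and camera names are invented.

    def sensor_for_angle(angle_deg, sectors):
        """Step S202 as a sketch: pick the image sensor whose predefined rotation-angle
        interval contains the lidar transmitter's current rotation angle."""
        angle = angle_deg % 360.0
        for (start, end), sensor in sectors.items():
            if start <= angle < end:
                return sensor
        return None

    # Assumed correlation between rotation-angle intervals and camera identifiers.
    sectors = {(0.0, 120.0): "cam_front", (120.0, 240.0): "cam_left", (240.0, 360.0): "cam_rear"}
    print(sensor_for_angle(135.0, sectors))   # -> "cam_left": this camera receives the trigger signal (S204)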
Provided in the embodiments of the present application are a recommendation information generation method and apparatus, an electronic device and a readable storage medium. The method comprises: according to the correlation between a user identifier, an object identifier and a context attribute parameter, determining multiple paths which each comprise a first node comprising a context attribute parameter, a second node comprising a user identifier and a third node comprising an object identifier; grouping the multiple paths according to each sub-parameter value comprised in the first node and the third node on the multiple paths, to obtain multiple groups of paths; and generating recommendation information for an object to be recommended according to the respective path attribute values of the multiple groups of paths. With the method provided in the embodiments of the present application, the generation of the information to be recommended is combined with the context attribute parameters and path attribute values in historical data of the field to which the recommendation information belongs, so that the recommendation information for the object to be recommended is richer and better grounded, and is therefore more likely to attract and engage a user.
This application provides a distribution dispatching method. According to an example, the method includes: planning, based on at least one combination of at least one target order and at least one target distributor, a distribution path of each target distributor after being assigned a target order under each combination; calculating, for each combination, a distribution efficiency indicator and an order-taking willingness indicator of the distribution path that are associated with the assignment of the target order to the target distributor; and selecting, based on the distribution efficiency indicator and the order-taking willingness indicator of each combination, an optimal combination from the at least one combination for dispatching the distribution.
G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
G06Q 10/08 - Logistics, e.g. warehousing, loading or distributionInventory or stock management
Disclosed are a simulation testing method and apparatus. The method comprises: determining the position where an image collector in a simulation environment is located in the simulation environment (S201); creating an environmental particle generator in the simulation environment according to the position (S202); a simulation system controlling the image collector and the environmental particle generator to move in the simulation environment, and controlling the environmental particle generator to generate environmental particles, wherein the relative position between the image collector and the environmental particle generator remains unchanged when the two move in the simulation environment (S203); then, acquiring a simulation environment image which is collected by the image collector when the environmental particle generator generates the environmental particles (S204); and carrying out simulation testing according to the simulation environment image (S205).
Disclosed are a trajectory prediction method and apparatus for an obstacle. According to the embodiments of the present disclosure, the method comprises: determining a global interaction characteristic under the joint action of a vehicle and each obstacle according to historical state information and current state information of the vehicle and each obstacle, and a future motion trajectory planned by the vehicle itself; according to the global interaction characteristic and current state information of an obstacle to be predicted, determining an individual interaction characteristic of the obstacle to be predicted; and by means of the individual interaction characteristic and surrounding environment information of the vehicle, predicting a future motion trajectory of the obstacle to be predicted.
Disclosed are a data presentation method and apparatus, an electronic device and a computer-readable medium. The method comprises: with every two services as a service group, calculating a bidirectional service indicator of each service group on the basis of service data of each service group (101); creating a presentation unit array on the basis of the number of services (102); and dividing each presentation unit into two areas, and presenting the bidirectional service indicator of each service group in a mirrored manner in two areas of a corresponding presentation unit (103). The presentation of data of services having a bidirectional relationship is supported, and the readability of the presented data is improved.
Disclosed in the present application is a search intention identification method. The method comprises: in response to a search request, acquiring search scenario information associated with the search request; according to the search scenario information and the search request, generating a composite feature used for identifying a search intention; and inputting the composite feature into a search intention identification module, and acquiring a search intention identification result outputted by the search intention identification module.
Disclosed in the present application are a user interaction method and device used for searching, a mobile terminal, and a storage medium. According to one example of the user interaction method used for searching, one or more search result aggregation blocks and navigation items of the search result aggregation blocks are determined in response to a search request from a user; the navigation items of the search result aggregation blocks and at least a part of the search result aggregation blocks are displayed according to the display priority of the search result aggregation blocks; and then, linkage adjustment is performed on display focuses of the search result aggregation blocks and the display focuses of the navigation items of the search result aggregation blocks in response to an interaction request from the user.