Methods and systems are described herein for detecting motion-induced errors received from inertial-type input devices and for generating accurate vehicle control commands that account for operator movement. These methods and systems may determine, using motion data from inertial sensors, whether the operator's hand/arm is moving with the same motion as the operator's body. If both are moving in the same way, the systems and methods may determine that the motion is not intended as a motion-induced command. However, if the operator's hand/arm is moving differently from the operator's body, the methods and systems may determine that the operator intended the motion as a motion-induced command to a vehicle.
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor; with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
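The abstract above does not publish the comparison rule, but the hand-versus-body test it describes can be sketched in a few lines. A minimal illustration (all names hypothetical), assuming synchronized acceleration traces from a hand-worn and a body-worn IMU and using normalized correlation as the agreement measure:

    import numpy as np

    def is_intended_command(hand_accel, body_accel, threshold=0.8):
        """Return True if hand motion differs enough from body motion to be
        treated as an intended command. Inputs are (N, 3) IMU acceleration
        traces; agreement is measured by normalized correlation."""
        hand = hand_accel - hand_accel.mean(axis=0)
        body = body_accel - body_accel.mean(axis=0)
        denom = np.linalg.norm(hand) * np.linalg.norm(body)
        if denom == 0.0:
            return False  # no motion at all, so nothing to interpret
        similarity = float((hand * body).sum() / denom)
        # High similarity means the hand simply moved with the body
        # (e.g., the operator is walking), so no command is inferred.
        return similarity < threshold

    t = np.linspace(0, 1, 50)
    body = np.stack([np.sin(t), np.cos(t), t], axis=1)
    gesture = np.stack([np.sin(9 * t), np.zeros_like(t), np.zeros_like(t)], axis=1)
    print(is_intended_command(body, body))           # False: hand moved with body
    print(is_intended_command(body + gesture, body)) # True: distinct hand motion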
Methods and systems are described herein for determining three-dimensional locations of objects within identified portions of images. An image processing system may receive an image and an identification of a location within the image. The image may be input into a machine learning model to detect one or more objects within the identified location. Multiple images may then be used to generate location estimations of those objects. Based on the location estimations, an accurate three-dimensional location may be calculated.
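One standard way to realize the multi-image estimation step described above is ray triangulation: each image contributes a bearing ray from the camera toward the detected object, and the three-dimensional location is the least-squares point nearest all rays. A minimal sketch, assuming known camera positions and bearing vectors (the abstract does not specify either):

    import numpy as np

    def triangulate(origins, directions):
        """Least-squares 3D point closest to a set of rays, one ray per
        image: origin = camera position, direction = bearing to the object."""
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for o, d in zip(origins, directions):
            d = np.asarray(d, dtype=float)
            d = d / np.linalg.norm(d)
            P = np.eye(3) - np.outer(d, d)  # projects onto plane normal to the ray
            A += P
            b += P @ np.asarray(o, dtype=float)
        return np.linalg.solve(A, b)

    # Two cameras observing a point at (1, 2, 5):
    target = np.array([1.0, 2.0, 5.0])
    cams = [np.array([0.0, 0.0, 0.0]), np.array([4.0, 0.0, 0.0])]
    rays = [target - c for c in cams]
    print(triangulate(cams, rays))  # ~[1. 2. 5.]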
Methods and systems are described herein for a payload management system that may detect that a payload has been attached to an uncrewed vehicle and determine whether the payload is a restricted payload or an unrestricted payload. Based on determining that the payload is an unrestricted payload, the payload management system may establish a connection between the payload and the operator using a first communication channel that has already been established between the uncrewed vehicle and the operator. Based on determining that the payload is a restricted payload, the payload management system may establish a connection between the payload and the operator using a second communication channel. The payload management system may listen for restricted payload commands over the second communication channel, and when a payload command is received via the second communication channel, it may be executed using the restricted payload.
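A toy sketch of the channel-selection logic described above (class and field names are hypothetical; the patent does not publish an API): unrestricted payloads share the operator's existing channel, while restricted payloads are bound to a dedicated second channel whose commands execute only there.

    from dataclasses import dataclass, field

    @dataclass
    class Channel:
        name: str
        payloads: list = field(default_factory=list)

        def bind(self, payload):
            self.payloads.append(payload)

        def execute(self, payload, command):
            # In a real system this would forward the command over the link.
            return f"{self.name}: {payload.kind} executing {command!r}"

    @dataclass
    class Payload:
        kind: str
        restricted: bool

    def attach(payload, primary, secondary):
        """Route the payload to the pre-existing operator channel if it is
        unrestricted, or to a dedicated second channel if it is restricted."""
        channel = secondary if payload.restricted else primary
        channel.bind(payload)
        return channel

    primary, secondary = Channel("primary"), Channel("secondary")
    cam = Payload("camera", restricted=False)
    print(attach(cam, primary, secondary).name)  # primary (shared channel)
    arm = Payload("restricted-tool", restricted=True)
    ch = attach(arm, primary, secondary)
    print(ch.execute(arm, "deploy"))             # runs on the second channel only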
Methods and systems are described herein for enabling aerial vehicle navigation in GPS-denied areas. The system may use a camera to record images of terrain as the aerial vehicle is flying to a target location. The system may then detect (e.g., using a machine learning model) objects within those images and compare those objects with objects within an electronic map that was loaded onto the aerial vehicle. When the system finds one or more objects within the electronic map that match the objects detected within the recorded images, the system may retrieve locations (e.g., GPS coordinates) of those objects from the electronic map and calculate, based on the coordinates, the location of the aerial vehicle. Once the location of the aerial vehicle is determined, the system may navigate to the target location or otherwise adjust the flight path of the aerial vehicle.
G05D 1/80 - Arrangements for reacting to or preventing system or operator failure
G05D 1/46 - Control of position or course in three dimensions
G05D 1/243 - Means capturing signals occurring naturally from the environment, e.g. ambient optical, acoustic, gravitational or magnetic signals
G05D 1/246 - Arrangements for determining position or orientation using environment maps, e.g. simultaneous localisation and mapping [SLAM]
G06V 20/17 - Terrestrial scenes taken from planes or by drones
G06T 7/90 - Determination of colour characteristics
G06T 7/70 - Determining position or orientation of objects or cameras
G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
B64U 20/87 - Mounting of imaging devices, e.g. mounting of gimbals
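The position calculation in the abstract above could take many forms; one simple illustration, assuming each matched map object yields its known coordinates plus a measured offset of the object relative to the vehicle, is to average the vehicle fixes implied by each match (all names hypothetical):

    import numpy as np

    def estimate_position(matches):
        """Each match pairs a map landmark's known coordinates with that
        landmark's position relative to the vehicle as measured from imagery
        (east/north metres). The fix implied by each match is
        landmark - relative_offset; the implied fixes are averaged."""
        fixes = [np.asarray(lm, float) - np.asarray(rel, float)
                 for lm, rel in matches]
        return np.mean(fixes, axis=0)

    # Vehicle truly at (100, 200); two landmarks matched against the map:
    matches = [((130.0, 240.0), (30.0, 40.0)),   # landmark NE of vehicle
               ((90.0, 195.0), (-10.0, -5.0))]   # landmark SW of vehicle
    print(estimate_position(matches))  # -> [100. 200.]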
Methods and systems are described herein for a layered fail-safe redundancy system and architecture for privileged operation execution. The system may receive vehicle maneuvering commands from a controller over a first channel. When a user input is received to initiate a privileged mode for executing privileged commands, the system may receive a privileged command over a second channel. The system may identify, based on the privileged mode of operation and the privileged command, a privileged operation to be performed by a vehicle. The system may then transmit a request to the vehicle to perform the privileged operation.
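A minimal sketch of the two-channel gating described above (all names hypothetical): maneuver commands are accepted only on the first channel, while privileged commands require both the explicit privileged mode and arrival on the second channel.

    from dataclasses import dataclass

    @dataclass
    class Command:
        name: str
        privileged: bool = False

    def route(cmd, channel, privileged_mode):
        """Maneuver commands are accepted only on the first channel;
        privileged commands only on the second channel, and only after the
        operator has explicitly entered the privileged mode."""
        if cmd.privileged:
            if privileged_mode and channel == "second":
                return f"requesting privileged operation: {cmd.name}"
            raise PermissionError(f"rejected privileged command {cmd.name!r}")
        if channel == "first":
            return f"maneuver: {cmd.name}"
        raise ValueError("maneuver commands must arrive on the first channel")

    print(route(Command("turn-left"), "first", privileged_mode=False))
    print(route(Command("arm-payload", privileged=True), "second", privileged_mode=True))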
Systems and methods of manipulating/controlling robots are described. In many scenarios, data collected by a sensor connected to a robot may not have very high precision (e.g., with a commodity commercial sensor) or may be subject to dynamic environmental changes; the data may therefore not indicate the captured parameter with high accuracy. The present robotic control system is directed at such scenarios. In some embodiments, the disclosed techniques can be used for computing a sliding velocity limit boundary for a spatial controller. In some embodiments, they can be used for teleoperation of a vehicle located in the field of view of a camera.
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
B62D 57/02 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members
G05D 1/222 - Remote-control arrangements operated by humans
G05D 1/223 - Command input arrangements on the remote controller, e.g. joysticks or touch screens
G05D 1/224 - Output arrangements on the remote controller, e.g. displays, haptics or speakers
G05D 1/24 - Arrangements for determining position or orientation
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor; with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
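The patent family does not publish the sliding-boundary formula; as one illustration, the limit could slide linearly with a sensor-confidence score, so a low-precision sensor caps commanded speed (names and the linear form are assumptions):

    def velocity_limit(confidence, v_min=0.1, v_max=2.0):
        """Slide the allowed speed bound between v_min and v_max (m/s) as
        sensor confidence in [0, 1] changes: noisy or stale sensor data
        forces cautious motion."""
        c = min(max(confidence, 0.0), 1.0)
        return v_min + c * (v_max - v_min)

    def clamp(v_commanded, confidence):
        limit = velocity_limit(confidence)
        return max(-limit, min(limit, v_commanded))

    print(clamp(1.5, confidence=1.0))  # 1.5  (trusted sensor, command passes)
    print(clamp(1.5, confidence=0.2))  # 0.48 (imprecise sensor, command capped)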
9.
LAYERED FAIL-SAFE REDUNDANCY ARCHITECTURE AND PROCESS FOR USE BY SINGLE DATA BUS MOBILE DEVICE
Methods and systems are described herein for a layered fail-safe redundancy system and architecture for privileged operation execution. The system may receive vehicle maneuvering commands from a controller over a first channel. When a user input is received to initiate a privileged mode for executing privileged commands, the system may receive a privileged command over a second channel. The system may identify, based on the privileged mode of operation and the privileged command, a privileged operation to be performed by a vehicle. The system may then transmit a request to the vehicle to perform the privileged operation.
Methods and systems are described herein for determining three-dimensional locations of objects within identified portions of images. An image processing system may receive an image and an identification of a location within the image. The image may be input into a machine learning model to detect one or more objects within the identified location. Multiple images may then be used to generate location estimations of those objects. Based on the location estimations, an accurate three-dimensional location may be calculated.
Methods and systems are described herein for hosting and arbitrating algorithms for the generation of structured frames of data from one or more sources of unstructured input frames. A plurality of frames may be received from a recording device, and a plurality of object types to be recognized in the plurality of frames may be determined. Multiple machine learning models for recognizing those object types may be identified. The frames may be sequentially input into the machine learning models to obtain a plurality of sets of objects, and object indicators may be received from those models. A set of composite frames with the plurality of indicators corresponding to the plurality of objects may be generated, and an output stream including the set of composite frames may be generated for playback in chronological order.
G06V 10/96 - Management of image or video recognition tasks
G05B 13/02 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/94 - Hardware or software architectures specially adapted for image or video understanding
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
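A compact sketch of the hosting/arbitration flow described above, with stub detectors standing in for the machine learning models (all names hypothetical): each frame is fed sequentially to every model, and the returned object indicators are merged into one composite frame per input frame.

    def arbitrate(frames, models):
        """Feed each unstructured frame to every registered model and merge
        the returned object indicators into one composite frame per input
        frame, preserving chronological order."""
        composites = []
        for idx, frame in enumerate(frames):
            indicators = []
            for name, model in models.items():
                for label, bbox in model(frame):
                    indicators.append({"model": name, "label": label, "bbox": bbox})
            composites.append({"index": idx, "frame": frame, "objects": indicators})
        return composites

    # Stub models standing in for real detectors:
    models = {
        "vehicles": lambda f: [("car", (10, 10, 50, 40))],
        "people":   lambda f: [("person", (60, 5, 80, 70))],
    }
    out = arbitrate([b"frame0", b"frame1"], models)
    print(out[0]["objects"])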
Methods and systems are described herein for hosting and arbitrating algorithms for the generation of structured frames of data from one or more sources of unstructured input frames. A plurality of frames may be received from a recording device, and a plurality of object types to be recognized in the plurality of frames may be determined. Multiple machine learning models for recognizing those object types may be identified. The frames may be sequentially input into the machine learning models to obtain a plurality of sets of objects, and object indicators may be received from those models. A set of composite frames with the plurality of indicators corresponding to the plurality of objects may be generated, and an output stream including the set of composite frames may be generated for playback in chronological order.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G05B 13/02 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
G06V 10/94 - Hardware or software architectures specially adapted for image or video understanding
G06V 10/96 - Management of image or video recognition tasks
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
14.
Systems and methods of detecting intent of spatial control
Systems and methods of manipulating/controlling robots are described. In many scenarios, data collected by a sensor connected to a robot may not have very high precision (e.g., with a commodity commercial sensor) or may be subject to dynamic environmental changes; the data may therefore not indicate the captured parameter with high accuracy. The present robotic control system is directed at such scenarios. In some embodiments, the disclosed techniques can be used for computing a sliding velocity limit boundary for a spatial controller. In some embodiments, they can be used for teleoperation of a vehicle located in the field of view of a camera.
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
B62D 57/02 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members
G05D 1/222 - Remote-control arrangements operated by humans
G05D 1/223 - Command input arrangements on the remote controller, e.g. joysticks or touch screens
G05D 1/224 - Output arrangements on the remote controller, e.g. displays, haptics or speakers
G05D 1/24 - Arrangements for determining position or orientation
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor; with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
15.
ARCHITECTURE FOR DISTRIBUTED ARTIFICIAL INTELLIGENCE AUGMENTATION
Methods and systems are described herein for determining three-dimensional locations of objects within a video stream and linking those objects with known objects. An image processing system may receive an image and image metadata and detect an object and a location of the object within the image. An estimated location of each object within three-dimensional space may then be determined. In addition, the image processing system may retrieve, for a plurality of known objects, a plurality of known locations within the three-dimensional space and determine, based on the estimated locations and the known location data, which of the known objects matches the detected object in the image. An indicator for the object may then be generated at the location of the object within the image.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06T 7/70 - Determining position or orientation of objects or cameras
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
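The matching step described above can be illustrated as gated nearest-neighbour association between an estimated three-dimensional location and a catalogue of known object locations (names and the gating distance are assumptions):

    import numpy as np

    def link_to_known(estimate, catalogue, gate=5.0):
        """Associate an estimated 3D location with the nearest known object,
        rejecting matches farther away than a gating distance."""
        best_id, best_d = None, gate
        for obj_id, loc in catalogue.items():
            d = float(np.linalg.norm(np.asarray(loc) - np.asarray(estimate)))
            if d < best_d:
                best_id, best_d = obj_id, d
        return best_id

    known = {"tower-7": (10.0, 2.0, 30.0), "hangar-2": (55.0, 1.0, 8.0)}
    print(link_to_known((11.0, 2.5, 29.0), known))  # tower-7
    print(link_to_known((200.0, 0.0, 0.0), known))  # None (outside the gate)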
Methods and systems are described herein for hosting and arbitrating algorithms for the generation of structured frames of data from one or more sources of unstructured input frames. A plurality of frames may be received from a recording device, and a plurality of object types to be recognized in the plurality of frames may be determined. Multiple machine learning models for recognizing those object types may be identified. The frames may be sequentially input into the machine learning models to obtain a plurality of sets of objects, and object indicators may be received from those models. A set of composite frames with the plurality of indicators corresponding to the plurality of objects may be generated, and an output stream including the set of composite frames may be generated for playback in chronological order.
G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
G05B 13/02 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/94 - Hardware or software architectures specially adapted for image or video understanding
G06V 10/96 - Management of image or video recognition tasks
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
Methods and systems are described herein for detecting motion-induced errors received from inertial-type input devices and for generating accurate vehicle control commands that account for operator movement. These methods and systems may determine, using motion data from inertial sensors, whether the operator's hand/arm is moving with the same motion as the operator's body. If both are moving in the same way, the systems and methods may determine that the motion is not intended as a motion-induced command. However, if the operator's hand/arm is moving differently from the operator's body, the methods and systems may determine that the operator intended the motion as a motion-induced command to a vehicle.
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser; using a touch-screen or digitiser, e.g. input of commands through traced gestures
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor; with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
18.
ARCHITECTURE FOR DISTRIBUTED ARTIFICIAL INTELLIGENCE AUGMENTATION
Methods and systems are described herein for generating composite data streams. A data stream processing system may receive multiple data streams from, for example, multiple unmanned vehicles and determine, based on the type of data within each data stream, a machine learning model for processing that type of data. Each machine learning model may receive the frames of a corresponding data stream and output indications and locations of objects within those data streams. The data stream processing system may then generate a composite data stream with indications of the detected objects.
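A minimal sketch of the per-stream model selection described above (the registry, type names, and detectors are all hypothetical): a model is chosen by each stream's data type, frames are annotated, and the results are merged into one chronologically ordered composite stream.

    # Hypothetical detectors and registry mapping data type -> model:
    def detect_vehicles(frame): return [("truck", (0, 0, 32, 32))]
    def detect_terrain(frame):  return [("road", (0, 32, 64, 64))]

    MODELS = {"eo-video": detect_vehicles, "ir-video": detect_terrain}

    def composite(streams):
        """streams: {name: (data_type, frames)}. Choose a model per stream
        by data type, annotate each frame, then interleave by frame index
        so the composite stream plays back chronologically."""
        out = []
        for name, (dtype, frames) in streams.items():
            model = MODELS[dtype]
            for i, frame in enumerate(frames):
                out.append((i, name, model(frame)))
        return sorted(out, key=lambda item: item[0])

    streams = {"uav-1": ("eo-video", [b"f0", b"f1"]),
               "uav-2": ("ir-video", [b"f0"])}
    for item in composite(streams):
        print(item)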
Methods and systems are described herein for hosting and arbitrating algorithms for the generation of structured frames of data from one or more sources of unstructured input frames. A plurality of frames may be received from a recording device, and a plurality of object types to be recognized in the plurality of frames may be determined. Multiple machine learning models for recognizing those object types may be identified. The frames may be sequentially input into the machine learning models to obtain a plurality of sets of objects, and object indicators may be received from those models. A set of composite frames with the plurality of indicators corresponding to the plurality of objects may be generated, and an output stream including the set of composite frames may be generated for playback in chronological order.
G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
G06V 10/96 - Management of image or video recognition tasks
G05B 13/02 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
G06V 10/94 - Hardware or software architectures specially adapted for image or video understanding
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
Methods and systems are described herein for detecting motion-induced errors received from inertial-type input devices and for generating accurate vehicle control commands that account for operator movement. These methods and systems may determine, using motion data from inertial sensors, whether the operator's hand/arm is moving with the same motion as the operator's body. If both are moving in the same way, the systems and methods may determine that the motion is not intended as a motion-induced command. However, if the operator's hand/arm is moving differently from the operator's body, the methods and systems may determine that the operator intended the motion as a motion-induced command to a vehicle.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor; with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
21.
UNIVERSAL CONTROL ARCHITECTURE FOR CONTROL OF UNMANNED SYSTEMS
A common command and control architecture (alternatively termed herein a “universal control architecture”) is disclosed that allows different unmanned systems, including different types of unmanned systems (e.g., air, ground, and/or maritime unmanned systems), to be controlled simultaneously through a common control device (e.g., a controller that can be an input and/or output device). The universal control architecture brings significant efficiency gains in the engineering, deployment, training, maintenance, and future upgrades of unmanned systems. In addition, the disclosed architecture breaks the traditional stovepipe development and deployment model, thus reducing hardware and software maintenance, streamlining training and proficiency efforts, reducing the physical space required for transport, and creating a scalable, more connected, interoperable approach to controlling unmanned systems compared with existing unmanned systems technology.
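The abstract describes the architecture's benefits rather than its mechanics; one common way to realize a single controller for heterogeneous platforms is an adapter layer that translates a common command vocabulary into each platform's native protocol. A sketch under that assumption (protocol strings and class names are placeholders, not from the patent):

    class VehicleAdapter:
        """Common command vocabulary; subclasses translate commands into
        each platform's native protocol."""
        def send(self, command: str) -> str:
            raise NotImplementedError

    class AirAdapter(VehicleAdapter):
        def send(self, command: str) -> str:
            return f"air-protocol<{command}>"

    class GroundAdapter(VehicleAdapter):
        def send(self, command: str) -> str:
            return f"ground-protocol<{command}>"

    class MaritimeAdapter(VehicleAdapter):
        def send(self, command: str) -> str:
            return f"maritime-protocol<{command}>"

    def broadcast(adapters, command):
        # One controller, heterogeneous vehicles: the common layer fans a
        # single operator command out through per-platform translators.
        return [a.send(command) for a in adapters]

    fleet = [AirAdapter(), GroundAdapter(), MaritimeAdapter()]
    print(broadcast(fleet, "hold-position"))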
Systems and methods of manipulating/controlling robots are described. In many scenarios, data collected by a sensor connected to a robot may not have very high precision (e.g., with a commodity commercial sensor) or may be subject to dynamic environmental changes; the data may therefore not indicate the captured parameter with high accuracy. The present robotic control system is directed at such scenarios. In some embodiments, the disclosed techniques can be used for computing a sliding velocity limit boundary for a spatial controller. In some embodiments, they can be used for teleoperation of a vehicle located in the field of view of a camera.
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
B62D 57/02 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members
G05D 1/02 - Control of position or course in two dimensions
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor; with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
23.
Systems and methods of remote teleoperation of robotic vehicles
Systems and methods of manipulating/controlling robots are described. In many scenarios, data collected by a sensor connected to a robot may not have very high precision (e.g., with a commodity commercial sensor) or may be subject to dynamic environmental changes; the data may therefore not indicate the captured parameter with high accuracy. The present robotic control system is directed at such scenarios. In some embodiments, the disclosed techniques can be used for computing a sliding velocity limit boundary for a spatial controller. In some embodiments, they can be used for teleoperation of a vehicle located in the field of view of a camera.
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
B62D 57/02 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members
G05D 1/222 - Remote-control arrangements operated by humans
G05D 1/223 - Command input arrangements on the remote controller, e.g. joysticks or touch screens
G05D 1/224 - Output arrangements on the remote controller, e.g. displays, haptics or speakers
G05D 1/24 - Arrangements for determining position or orientation
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor; with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
Systems and methods of manipulating/controlling robots are described. In many scenarios, data collected by a sensor connected to a robot may not have very high precision (e.g., with a commodity commercial sensor) or may be subject to dynamic environmental changes; the data may therefore not indicate the captured parameter with high accuracy. The present robotic control system is directed at such scenarios. In some embodiments, the disclosed techniques can be used for computing a sliding velocity limit boundary for a spatial controller. In some embodiments, they can be used for teleoperation of a vehicle located in the field of view of a camera.
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
G05D 1/02 - Control of position or course in two dimensions
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor; with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
B62D 57/02 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members