An intelligent terminal and a functional module control device (100) thereof, the functional module control device comprising physical keys (110), a switch control circuit (120) and an electronic switch (130). The physical keys (110) are arranged in an operating area on the surface of the intelligent terminal and used for triggering an on/off control operation for a functional module to be controlled (310); the switch control circuit (120) is connected to the physical keys (110) and is used for receiving trigger information of the physical keys (110) and generating switch control information according to the trigger information; and the electronic switch (130) is respectively connected to the switch control circuit (120) and the functional module (310) in the intelligent terminal, and is used for receiving the switch control information sent by the switch control circuit (120) and generating on/off control information according to the switch control information so as to control the on/off state of the functional module (310). The functional module control device achieves on/off control over the functional module (310) in the intelligent terminal at the physical level, and is thus independent of the main control system (320) of the intelligent terminal, eliminating the risk of the functional module (310) being remotely modified or cracked, and reducing the possibility of personal private information in the intelligent terminal being stolen or accessed.
Disclosed are an interactive communication implementation method and device, and a storage medium. The method comprises: detecting whether the current interactive object stops interaction (S110); and when the current interactive object stops interaction in a wake-up state, determining, by means of collected image data and a speech signal, one candidate object participating in interaction as a new interactive object (S120). As such, interactive objects can be switched naturally, flexibly, and intelligently in a multi-user interaction scenario, so as to achieve, in a humanized manner, the aim of interactive communication with a plurality of objects in a timely and efficient manner.
A testing system and method for a mechanical finger component, and a storage medium, the testing system comprising: a tray (1) provided with an accommodating part; a sensor module (2) provided at the accommodating part for acquiring measurement data of the mechanical finger component during movement; and a servo board (3) used for generating a drive signal so as to control the movement of the mechanical finger component, and obtaining the corresponding joint movement performance according to the measurement data. The joint movement performance of the parts of the mechanical finger component is tested before assembly, thereby improving testing efficiency and versatility.
A data processing system and method, a server and a storage medium. The system comprises: a user side (100) that is in communication connection with a server side (200) and is used for generating a request message according to a requirement and sending same to the server side (200), with the request message carrying a device identification code of an IoT side (300); the server side (200) that is in communication connection with the IoT side (300) and is used for receiving the request message sent by the user side (100), sending the request message to the IoT side (300) corresponding to the device identification code, receiving a processing result from the IoT side (300) and feeding back same to the user side (100); and the IoT side (300) corresponding to the device identification code, wherein the IoT side is used for processing the request message sent by the server side (200) and feeding back a processing result to the server side (200). The present invention effectively solves the technical problems in an existing system such as poor interaction between the user side (100) and the IoT side (300), and a tedious data processing flow.
An intelligent monitoring system, a monitoring method, a monitoring terminal (130), and a storage medium. The system comprises: a first capturing device (110) configured to perform rotary inspection on an area to be monitored and determine a first captured image comprising a suspected target; the monitoring terminal (130) configured to control, according to lock-on information of the suspected target sent by the first capturing device (110), a second capturing device (120) to operate, the lock-on information comprising a rotating angle parameter at the moment the first capturing device (110) captures the first captured image; and the second capturing device (120) configured to capture, under the control of a coordinated control module (131), a second captured image of the area where the suspected target is located, and confirm the suspected target, the resolution of the second capturing device (120) being higher than the resolution of the first capturing device (110). During monitoring, the first capturing device (110) performs a quick inspection of the area to be monitored to find the suspected target, which is then confirmed by the second capturing device (120). The accuracy of finding a target is greatly increased, and the time spent tracking a moving object is reduced.
A robot distributed control system and method, a robot, and a storage medium. The distributed control method comprises: a human-machine interaction sub-system receives a user interaction request, parses the functional demand of the user's interaction request, generates a control instruction, and issues, by means of a CAN communication unit, the control instruction to the sub-systems related to the control instruction, so that the sub-systems respectively execute the control instruction to implement the function desired by the user. By using distributed functional sub-systems that have relatively independent functions and are controlled independently of one another, each functional sub-system can communicate with the others for coordinated operation and can also independently complete a given function, which facilitates the expansion of robot functions as well as the production, testing, and repair of robots; moreover, the sub-systems respond in real time, which can greatly improve the action response speed of a robot when interacting with the user, thereby improving the user experience.
A speech interaction method based on application program control, and a robot. The method comprises: when an application program is initiated, the application program classifying services thereof and registering same to a natural language server (S100); by means of human-machine interaction, obtaining original speech input by a user, and sending the original speech to the natural language server, wherein the original speech is subjected to speech recognition by the natural language server and subjected to natural language understanding according to the service classification to obtain an interaction request of the user (S101); and the application program receiving the interaction request of the user which is sent by the natural language server, and responding to the interaction request of the user (S102). In the method, a speech interaction process is controlled by means of an application program, such that a robot is in a leading position during the speech interaction process, thereby improving the response accuracy with regard to a demand of a user, and greatly improving user experience.
A method and system for measuring the volume of a parcel, and a storage medium and a mobile terminal. The method for measuring the volume of a parcel comprises: collecting a target image by means of a camera (1003) of a mobile terminal, wherein the target image comprises a plane marker, an upper surface of a parcel to be measured provided with the plane marker, and two side surfaces, adjacent to the upper surface, of the parcel to be measured (S100); carrying out image processing on the target image so as to recognize the plane marker in the target image (S200); carrying out image edge detection on the target image so as to identify, in the target image, corner points of the parcel to be measured (S300); and calculating the global coordinates of the corner points according to the plane marker, a pre-acquired intrinsic parameter matrix of the camera, and the corner points, so as to obtain, according to the global coordinates of the corner points, the volume of the parcel to be measured (S400). By using the camera (1003) built into a mobile terminal to accurately measure, in real time, the volume of a parcel to be measured, usage scenarios are enriched, no additional hardware facilities need to be added, and costs are saved.
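Once the global 3-D coordinates of the corner points are known, step S400 reduces to multiplying three edge lengths of the box. A minimal sketch of that final step only (the marker recognition and coordinate recovery of S100–S300 are not shown; the function name and the choice of which four corners to pass are assumptions):

```python
import math

def box_volume_from_corners(p0, px, py, pz):
    """Volume of a cuboid parcel given one corner p0 and the three
    corners adjacent to it along the length, width and height edges.
    All points are global 3-D coordinates, e.g. as recovered in S400
    from the plane marker and the camera intrinsics."""
    length = math.dist(p0, px)   # Euclidean distance, Python 3.8+
    width = math.dist(p0, py)
    height = math.dist(p0, pz)
    return length * width * height

# A 0.3 m x 0.2 m x 0.1 m parcel gives 0.006 m^3:
v = box_volume_from_corners([0, 0, 0], [0.3, 0, 0], [0, 0.2, 0], [0, 0, 0.1])
```

In practice the corner coordinates would come from projecting the detected image corners through the marker-derived pose; only the geometry of the last step is shown here.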
Disclosed is a method for recommending a social object to a user. The method comprises: acquiring expression information of a user; according to the expression information and communication score information in a historical communication record library of the user, generating a recommended social object; and presenting the recommended social object to the user, and establishing real-time social communication with the recommended social object. In addition, further disclosed is a system for recommending a social object to a user. By means of the present invention, the problem of a recommendation method used in existing social recommendation systems not being sufficiently intelligent and having low accuracy can be effectively solved so as to meet the current psychological requirements of a user and accurately recommend a social object to the user.
Provided are a machine-vision-based drawing method and system. The method comprises: step S100, collecting an original static image of the content to be drawn; step S200, processing the original static image, and extracting stroke data of the original static image; step S300, according to the stroke data of the original static image, obtaining a corresponding robot state variable list; step S400, according to the robot state variable list, planning a motion path and generating a motion message sequence; and step S500, according to the motion message sequence, executing a drawing action. According to the present invention, the content drawn by a user is collected, drawing is performed after the stroke data is extracted, and a robot analyses and imitates the drawing in real time, thereby realizing personalized handwriting imitation and stronger interactivity.
A human-computer interaction method, and an interactive robot. The method comprises: when a robot detects a user needing active interaction, acquiring user comprehensive information (S1), wherein the user comprehensive information comprises personal information of a current user, and environment information of a current robot system; generating active interaction content matching the user comprehensive information (S2); and performing active interaction with the user according to the active interaction content (S3). During the process of a robot performing active interaction, active personalized human-computer interaction can be achieved according to different user information of different users and current environment information.
A humanoid robotic finger comprises: a finger main body (100), wherein the finger main body (100) is a hollow body, and is divided into multiple finger sections, an inclined recess (101) is provided between adjacent finger sections, and each inclined recess is covered by a support cover (200); a conveying device, wherein the conveying device comprises a cord (300) and a driving pulley (400), one end of the cord (300) is wound on the driving pulley (400), and the other end passes through the support cover (200), and is connected to the finger main body (100); and a driving device, wherein the driving device is connected to the driving pulley (400), and is used for controlling the driving pulley (400) to rotate clockwise or counterclockwise and driving the cord (300) to extend or retract, so as to control the finger main body (100) to switch between a bent state and an extended state. The finger main body is integrally formed, thereby simplifying the entire structure, facilitating mounting and maintenance, improving humanoid properties, and improving movement coordination.
A motor testing fixture comprising a machine frame (100), a motor fixing mechanism (200), and a motor control mechanism (300), the motor fixing mechanism (200) being arranged on the machine frame (100) and used for mounting a motor (500), and also comprising, arranged on the machine frame (100), a performance testing mechanism (400), which comprises: a torque wheel (410) that is directly or indirectly arranged on and rotates with an output shaft of the motor (500), a counterweight (420) that is hung on the torque wheel (410) and driven thereby so as to be lifted or lowered, and a rotational speed sensor (430) that directly or indirectly measures the rotational speed of the torque wheel (410), where the motor control mechanism (300) is communicatively connected respectively to the motor (500) and to the performance testing mechanism (400) so as to control the motor (500) to rotate forwards or in reverse and to acquire an operating parameter of the motor. The counterweight (420) is hung on the torque wheel (410) and is lifted or lowered as the motor (500) operates. Either scenario, in which the counterweight (420) serves as a driving force or as resistance, can be emulated; at the same time, the motor control mechanism (300) can control the switching frequency and the number of times the motor (500) rotates forwards and in reverse, thus emulating working scenarios such as steering or braking.
An obstacle detection system and method for a robot, comprising: a chassis (1). The chassis (1) is provided with two auxiliary wheels (11) and two main wheels (10); the main wheels (10) are close to a second end of the chassis (1); the chassis (1) is further provided with a front left side distance sensor (3), a front right side distance sensor (4), a middle left side distance sensor (5), a middle right side distance sensor (6), and a back middle distance sensor (7); the first two sensors are positioned at a first end of the chassis (1), and the first end is opposite to the second end; the third sensor is positioned at a third end of the chassis (1), the fourth sensor is positioned on a fourth end of the chassis (1), the third end is opposite to the fourth end, and the third end is perpendicular to the first end; the last sensor is positioned at the second end of the chassis (1), the distance between the last sensor and the third sensor is the same as that between the last sensor and the fourth sensor, and the distance between the last sensor and the first sensor is the same as that between the last sensor and the second sensor. According to the system and method, a small number of sensors are adopted, and omni-directional obstacle scanning and coverage are completed, thereby reducing costs.
The present invention relates to the technical field of robot detection, and in particular to a robot fall detection method and system, and a storage medium and a device. The robot fall detection method comprises: continuously acquiring measured data of a gyroscope and measured data of an accelerometer, wherein the measured data of the gyroscope is taken as a k-moment estimation angle X(k), the measured data of the accelerometer is taken as a k-moment observation angle Y(k), and k is an integer greater than or equal to 1 (S100); according to a Kalman filtering principle, fusing the estimation angle with the observation angle to acquire an included angle between a robot and the ground (S200); and if the included angle between the robot and the ground exceeds a pre-set range, determining that the robot has fallen over (S300). According to the present invention, an effective robot fall detection method is provided, and an intelligent response is also provided to prevent a robot from falling, thereby reducing the safety risk.
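The fusion of S200 can be sketched as a scalar Kalman filter in which the gyroscope angle X(k) serves as the a-priori estimate and the accelerometer angle Y(k) as the observation. The noise variances, initial covariance, and the fall-angle range below are illustrative assumptions, not values from the source:

```python
def kalman_tilt(est_angles, obs_angles, q=0.01, r=0.5):
    """Fuse gyroscope angles X(k) (a-priori estimates) with
    accelerometer angles Y(k) (observations) using a scalar Kalman
    filter; returns the fused robot-ground angle at each step k."""
    p = 1.0  # error covariance (initial value is an assumption)
    fused = []
    for xk, yk in zip(est_angles, obs_angles):
        x, p = xk, p + q          # predict with the gyroscope estimate
        k = p / (p + r)           # Kalman gain
        x = x + k * (yk - x)      # correct with the accelerometer angle
        p = (1 - k) * p
        fused.append(x)
    return fused

def has_fallen(angle_deg, safe_range=(60.0, 120.0)):
    """S300: declare a fall when the fused angle leaves a preset
    range (the range itself is illustrative)."""
    return not (safe_range[0] <= angle_deg <= safe_range[1])
```

The gain k weights the accelerometer correction against the gyroscope prediction, so the fused angle always lies between the two sensor readings at each step.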
A humanoid manipulator, comprising a palm (10) and a finger assembly provided on the palm (10), for use with a robot. The humanoid manipulator further comprises: an adapter (20) used for mounting the humanoid manipulator on an arm of the robot; a control circuit board (80) mounted in the palm (10) and communicationally connected to a control system of the robot by means of the adapter (20); a transmission module (100) mechanically connected to the finger assembly and used for adjusting the bending and stretching of the finger assembly; a driving module (90) mounted in the palm (10), communicationally connected to the control circuit board (80), and mechanically connected to the transmission module (100); and a touch sensor (110) mounted on the palm (10) and/or the finger assembly and communicationally connected to the control circuit board (80).
A human body fall-down detection method and device. The method comprises: obtaining a target image (S11); carrying out human body detection on the target image by means of a target detection network to determine whether the target image is an image comprising a human body (S12); and if it is determined that the target image is an image comprising a human body, carrying out fall-down identification on the target image by means of a convolutional neural network to determine whether the human body in the target image is in a fall-down state (S13). According to the method, analysis is carried out on a single-frame target image rather than a video stream: the image comprising the human body is identified by the target detection network based on a target detection algorithm, and the human body state in the target image is then classified by the convolutional neural network based on a classification algorithm. The technical problems in the prior art of poor accuracy and low efficiency in human body fall-down identification are thus solved, and the technical effect of accurately and efficiently identifying the fall-down state is realized.
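The two-stage, single-frame pipeline (detection network first, classification CNN second) can be sketched with the models abstracted as callables; both are stand-ins, since the abstract does not specify the network architectures:

```python
def detect_fall(frame, person_detector, fall_classifier):
    """Two-stage pipeline from the abstract: a detection network (S12)
    first finds person bounding boxes in the single frame; only if one
    exists does a classification CNN (S13) label the pose."""
    boxes = person_detector(frame)     # S12: human body detection
    if not boxes:
        return None                    # no person in the frame
    # S13: any person classified as fallen triggers a positive result
    return any(fall_classifier(frame, box) == "fallen" for box in boxes)
```

Returning `None` rather than `False` for person-free frames keeps the two questions of S12 and S13 distinct, mirroring the branch in the method.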
A method and device for correcting motion of a robotic arm, the method comprising: obtaining a target image of a robotic arm grabbing a target object, wherein the target image comprises a plurality of target identifiers; according to the target identifiers and the target image, determining current position information of each motion arm; and according to the current position information of each motion arm, correspondingly correcting current motion of the robotic arm.
A method and system for detecting waving of a robot, and a robot. The method comprises: detecting, in a video stream, a palm at a standard position by means of a cascade classifier; extracting corner points of the palm at the standard position so as to obtain a corner point set; tracking and detecting, in the video stream, each corner point in the corner point set, so as to obtain a motion trajectory corresponding to each corner point; and determining whether the palm is waving according to the motion trajectory of each corner point in the corner point set. Also provided are a system and a robot using the method. The method, system and robot realize waving detection in a complex environment, and have a strong anti-interference capability with regard to dynamic background noise and a high detection accuracy.
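One plausible way to turn the tracked corner-point (angular point) trajectories into a wave/no-wave decision is to count horizontal direction reversals per trajectory and require that enough trajectories oscillate; the decision rule and thresholds below are illustrative assumptions, not taken from the source:

```python
def direction_changes(xs):
    """Count sign reversals in the horizontal motion of one tracked
    point, given its x coordinate per frame."""
    steps = [b - a for a, b in zip(xs, xs[1:]) if b != a]
    return sum(1 for s, t in zip(steps, steps[1:]) if s * t < 0)

def is_waving(trajectories, min_changes=2, min_ratio=0.5):
    """Declare waving when at least min_ratio of the trajectories
    reverse horizontal direction at least min_changes times."""
    if not trajectories:
        return False
    waving = sum(1 for xs in trajectories if direction_changes(xs) >= min_changes)
    return waving / len(trajectories) >= min_ratio
```

Requiring a majority of corner points to oscillate (rather than any single one) is what gives this rule some robustness against a few corners drifting onto moving background.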
A human body detection device and method. The method comprises: acquiring measurement data of a distance sensor (110) to obtain a first data group W and a second data group W' (S20); performing a fitting operation on the first data group W, and determining a change trend of a distance between a detected object and the distance sensor (110) according to a fitting result (S30); when the change trend is constant, determining that the detected object is in a stationary state (S40); calculating a fluctuation value of the second data group W' (S50); and when the fluctuation value is greater than a preset threshold, determining that the detected object is a human body (S60). The human body detection device can detect a human body in the stationary state, distinguish a standing human body from a stationary object, and eliminate determination interference caused by the stationary object in original human body detection.
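The two tests, a flat fitted trend for data group W (S30–S40) and a fluctuation value for data group W' (S50–S60), can be sketched as follows. Using a least-squares slope for the fit and the standard deviation as the fluctuation value are assumptions, as the source specifies neither:

```python
import statistics

def trend_slope(samples):
    """Least-squares slope of the distance samples over time (S30);
    a near-zero slope means the object's distance is constant."""
    n = len(samples)
    t_mean = (n - 1) / 2
    d_mean = sum(samples) / n
    num = sum((i - t_mean) * (d - d_mean) for i, d in enumerate(samples))
    den = sum((i - t_mean) ** 2 for i in range(n))
    return num / den

def is_stationary_human(group_w, group_w2, slope_eps=0.01, fluct_thresh=0.5):
    """S40-S60: the object is stationary when the fitted trend of W is
    flat, and a human when the fluctuation value of W' (here the
    population standard deviation) exceeds the preset threshold."""
    stationary = abs(trend_slope(group_w)) < slope_eps
    fluctuating = statistics.pstdev(group_w2) > fluct_thresh
    return stationary and fluctuating
```

The intuition matches the abstract: a standing person's distance trend is flat like a wall's, but micro-movements (breathing, swaying) leave residual fluctuation that a truly static object lacks.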
Disclosed in the present invention are a gesture recognition method and system for a robot, and the robot, the method comprising: pre-acquiring pictures comprising different gestures and pictures comprising no gestures so as to obtain a sample picture set; making a detection sample set and a filtering sample set according to the sample picture set; training according to the detection sample set so as to obtain an Adaboost cascading gesture detector; training according to the filtering sample set so as to obtain a gesture recognition convolutional neural network; and recognizing the acquired gesture pictures by means of the Adaboost cascading gesture detector so as to obtain gesture recognition results, and filtering the gesture recognition results by means of the gesture recognition convolutional neural network so as to obtain a correct gesture recognition result. According to the present invention, a gesture may be precisely recognized under a complex background by filtering recognition results of an Adaboost cascading gesture detector by means of the gesture recognition convolutional neural network.
A chess piece positioning method based on machine vision, used for positioning chess pieces on a chessboard provided with flat markers, the method comprising: acquiring a video stream by means of a camera and collecting video image frames from the video stream (S100); performing image processing on the video image frames to identify the flat markers (S200); and, on the basis of the flat markers and a pre-acquired intrinsic parameter matrix of the camera, calculating the position of a chess piece on the chessboard relative to the markers, thereby locating the chess piece (S300). Also provided are a chess piece positioning system based on machine vision, a storage medium, and a robot. A system using the chess piece positioning method requires no special circuit in the chessboard and no data communication between the chessboard and the robot, so that deployment is more convenient. The flat markers enable the robot to accurately identify them in a complex scene without suffering interference from that scene.
A method for controlling robot object-grasping, the method comprising: acquiring a target image, the target image including a target identifier disposed on a target object to be grasped; recognizing the target identifier in the target image, and determining, according to the target identifier, position information and orientation information of the target object and a corresponding grasping solution; performing a low-coupling kinematic solution to determine multiple joint changes; and controlling, according to the multiple joint changes and the corresponding grasping solution, a robotic manipulator to grasp the target object. An apparatus for controlling robot object-grasping comprises an acquisition module, a determination module, a solution module and a control module; the functions of each module correspond to the steps of the method laid out above. The present solution solves the technical problems in the prior art of a complex target object recognition process, high costs, and low efficiency in calculating joint changes.
A robot interaction method and a system thereof, which relate to the technical field of robots. The robot interaction method comprises the following steps: sensing an interaction behavior of an interaction user, and determining an interaction event corresponding to the interaction behavior (101); searching an interaction event file library for an event script file corresponding to the determined interaction event (102); and according to at least one found event script file, executing an action responding to the interaction behavior of the interaction user (103). By means of the robot interaction method and the system thereof, the interaction content of a robot can be recorded using a script, and when the interaction content needs to be modified, there is no need to modify a source code, thereby reducing the difficulty of modifying the interaction content of the robot.
Disclosed in the present invention are a robot-based method and system for processing the display of a target image, comprising: S10 continuously acquiring video frame images; S20 when a tracking target is detected in a kth acquired image, detecting position information of the tracking target in the kth image; S30 displaying the kth image on the time axis at the position of the (k+N+1)th image, and marking the position of the tracking target in the kth image, wherein N is the number of images acquired within a detection cycle; S40 according to the position information of the tracking target in the kth image, sequentially predicting the position of the tracking target from the acquired (k+N+1)th to the (k+2N-1)th image; S50 sequentially displaying the (k+N+1)th to (k+2N-1)th images on the time axis at the positions from the (k+N+2)th to the (k+2N)th image, together with the predicted positions of the tracking target. By means of the present invention, images may be displayed more smoothly.
A mechanical arm inverse kinematics solution error determination and correction method and device. Prior to operation, an error model is established; during operation, a current error is determined via error solution, and corresponding data correction is performed by using the error model. This solves the technical problem that conventional methods can neither determine inverse kinematics solution errors in real time during operation nor correct data errors in a timely manner. The method comprises: performing error solution on an inverse kinematic solution under test to obtain an error solution result, and establishing an error model according to the error solution result (S11); obtaining motion target data of a target object (S12); performing, according to the motion target data and by using the inverse kinematic solution under test, inverse kinematics calculations to obtain joint variable data (S13); performing an error solution according to the joint variable data to determine a current error of the inverse kinematic solution under test (S14); and correcting, according to the current error, the joint variable data by using the error model (S15).
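The split between offline modelling (S11) and online correction (S15) might be sketched as below; the constant per-joint bias error model is an illustrative assumption, since the abstract does not specify the model form:

```python
def build_error_model(error_samples):
    """S11: fit an error model from offline error-solution results.
    Here the model is simply the mean error per joint -- an assumed,
    deliberately simple form."""
    n = len(error_samples)
    dims = len(error_samples[0])
    return [sum(s[i] for s in error_samples) / n for i in range(dims)]

def correct_joints(joint_values, error_model):
    """S15: subtract the modelled error from the joint variable data
    produced by the inverse kinematic solution under test."""
    return [q - e for q, e in zip(joint_values, error_model)]
```

A real error model would typically depend on the arm configuration (e.g. a regression over joint space); the constant-bias version only shows where the model plugs into the S11/S15 pipeline.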
A robot skin touch sensing system and method, comprising: a robot housing (10) and a robot body (20). The robot housing (10) is provided on the outer surface of the robot body (20). From outside to inside, the robot housing (10) sequentially comprises: a magnetic piece (12); an elastic filling material (13), provided on the inner surface of the magnetic piece (12); and a hard housing (14), made of a non-magnetic material, provided on the inner surface of the elastic filling material (13), and wrapping the outer surface of the robot body (20). The robot body (20) comprises a magnetic induction piece (21) matching the magnetic piece (12). According to the robot skin touch sensing system and method, a sensor is provided inside a robot and is protected by the hard housing (14), so that the sensor system is protected from being damaged by external force; pressure is indirectly measured, and the hard housing (14) does not need to be perforated, thereby reducing design difficulty.
G01L 1/12 - Measuring force or stress, in general by measuring variations in the magnetic properties of materials resulting from the application of stress
A robot skin touch sensing system, comprising: a sensing array (100), converting pressure applied to the skin of a robot into an optical signal with a certain frequency; a photoelectric conversion module (200), connected to the sensing array (100) by means of an optical fiber which is used for transmitting the optical signal, the photoelectric conversion module (200) being used for parsing the optical signal from the sensing array to obtain the frequency and a light intensity value of the optical signal; and a main control module (300), electrically connected to the photoelectric conversion module (200) and used for receiving the frequency and the light intensity value of the optical signal, obtaining, according to the frequency of the optical signal, the position where the pressure is applied, and obtaining, according to the light intensity value of the optical signal, the pressure value. According to the robot skin touch sensing system, when no external pressure is applied, the sensing array consumes no power; the number of output lines of the sensing array is reduced, and the complexity of wiring, design, and manufacture is reduced. Also disclosed is a robot skin touch implementation method applied to the robot skin touch system.
G01L 1/24 - Measuring force or stress, in general by measuring variations of optical properties of material when it is stressed, e.g. by photoelastic stress analysis
A robot control system, comprising: a touch sensor (100) for sensing control instruction information input by a user; a touch detection controller (200) electrically connected to the touch sensor (100) and configured to acquire the control instruction information input by the user and sensed by the touch sensor (100) and process the control instruction information; and a motion controller (300) electrically connected to the touch detection controller (200) and configured to acquire from the touch detection controller (200) the processed control instruction information input by the user and control a robot to perform a corresponding task according to the control instruction information. According to the system, the problem of sensor rebound is resolved by providing the touch sensor, and the trigger sensitivity of the robot is improved. Further provided is a robot control method applied to the robot control system.
The invention provides an active interaction method and system of a robot. The method comprises: using at least one ranging sensor as a low-level sensing module to detect information relating to a change in the surrounding environment of the robot; according to a low-level sensing event of the environmental change information, activating a high-level sensing module; determining, by using the high-level sensing module, whether a preset human body feature is detected, so as to obtain a detection result; and according to the detection result, the robot performing a feedback behavior. In the invention, when a user approaches a robot, the robot instantly senses an environmental change, determines the presence and location of the user, and interacts with the user according to a predetermined script. Conventional passive control is thus transformed into active interaction of a robot with a human, and the robot becomes "smart", greatly improving the experience of interaction between user and robot.
The present disclosure describes a FOTA-based remote upgrade control method and system, the method comprising: when an update checking instruction sent by a third party terminal is received, sending an update checking request to a FOTA server; when firmware information returned by the FOTA server according to the update checking request is received, transmitting the firmware information back to the third party terminal; when a download instruction sent by the third party terminal according to the firmware information is received, downloading a firmware installation package from the FOTA server; and when the firmware installation package is successfully downloaded and an installation instruction sent by the third party terminal is received, performing an upgrade according to the firmware installation package. The present disclosure is applicable to the control, using a third party terminal, of the system upgrade of an upgrade end.
Provided are a digital servo and a control method thereof. The digital servo comprises a servo control unit (5), a motor driving module, a feedback unit, a brushless motor (2), and a deceleration mechanism (1). In the control method of the digital servo, a motor control parameter is calculated according to control information sent by a host computer and received by means of a serial bus, feedback information of the feedback unit and a preset control algorithm, and the brushless motor (2) is controlled to perform corresponding rotation according to a PWM waveform control signal corresponding to the motor control parameter. The digital servo and the control method thereof have a standard control mode and an extension control mode, and adopt a serial communication bus.
B25J 19/00 - Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
An adjustable soccer robot kicking system, comprising a power supply (30), a motion control board (40), an air storage bottle (10), an air cylinder (60), an electromagnetic valve (50), air pipes (21, 61, 62), and cables (41, 51). The power supply (30), the motion control board (40), and the electromagnetic valve (50) form a loop by means of the cables (41, 51); the air storage bottle (10) is connected to the air cylinder (60) by means of the air pipes (21, 61, 62) and the electromagnetic valve (50). The system adjusts the soccer kicking strength by varying the air pressure, has a simple structure, is easy to implement, and can greatly improve the competitive capability of soccer robots.
A building-block programming device and a programming method therefor. The building-block programming device comprises a main control building block (10) and a plurality of motion building blocks (20). The main control building block (10) comprises a main control chip (11) and a first contact circuit interface (12). The first contact circuit interface (12) comprises a first grounding interface and a plurality of data input interfaces. The motion building block (20) comprises a second contact circuit interface (21); the second contact circuit interface (21) comprises a second grounding interface and the same number of connecting interfaces as there are data input interfaces, one of the connecting interfaces being an encoding interface representing the number of the building block. In use, the main control building block (10) is connected to the motion building blocks (20) by means of the first contact circuit interface (12) and the second contact circuit interface (21). The building-block programming device can simulate simple Scratch programming in the form of a physical toy, is simple and easy to learn, and protects eyesight compared with on-screen programming.
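The encoding interface can be read in many ways; one minimal sketch, assuming each motion block drives exactly one of the main block's data inputs low and the block number is that input's 1-based index (an assumption for illustration, not the disclosed encoding), is:

```python
def decode_block_id(input_levels):
    """Return the motion-block number encoded by which data input is
    pulled low (0), or None for no block / an invalid pattern."""
    active = [i for i, level in enumerate(input_levels, start=1) if level == 0]
    return active[0] if len(active) == 1 else None
```

With four data inputs, a block whose encoding interface grounds the third input would read as block 3; an all-high bus means no block is attached.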
The present disclosure provides an optoelectronic mouse sensor module-based method and system for creating an indoor map. The method comprises: mapping a coordinate system of an optoelectronic mouse sensor module to a ground coordinate system in which a robot is located to obtain a correspondence; step S1: during a mobile detection process performed on an indoor environment of which an environment map is to be created, acquiring ith obstacle detection data and ith optoelectronic position coordinates in an optoelectronic coordinate system, wherein i is an integer greater than or equal to 1; step S2: calculating ith ground position coordinates of the robot in the ground coordinate system according to the ith optoelectronic position coordinates and the (i-1)th optoelectronic position coordinates; and step S3: creating the environment map of the indoor environment according to the ith ground position coordinates and the ith obstacle detection data. The present disclosure offers high measurement accuracy, good linearity, a large measurement range, and low cost.
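Step S2 can be sketched under a simple assumed calibration: the correspondence between the two coordinate systems is a fixed rotation (mounting angle `theta`) plus a counts-to-metres scale. These calibration parameters are illustrative assumptions, not values from the disclosure:

```python
import math

def ground_position(prev_ground, prev_opto, curr_opto, theta, scale):
    """Dead-reckoning step: the displacement between consecutive
    optoelectronic readings is scaled to metres, rotated by the
    mounting angle into the ground frame, and accumulated onto the
    previous ground position."""
    dx = (curr_opto[0] - prev_opto[0]) * scale
    dy = (curr_opto[1] - prev_opto[1]) * scale
    c, s = math.cos(theta), math.sin(theta)
    return (prev_ground[0] + c * dx - s * dy,
            prev_ground[1] + s * dx + c * dy)
```

Chaining this over i = 1, 2, ... yields the robot trajectory onto which the ith obstacle detection data is registered in step S3.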
A sonar-based robot tracking method and system, comprising: providing a sonar transceiver on a tracking target as a first routing node; and mounting two sonar transceivers on a robot, with one serving as a coordinator and the other one serving as a second routing node. A sonar transceiver is used to realize tracking target recognition and location, and obstacle ranging, thereby realizing multiple uses of a single sensor, so that not only is the product function realized, but the manufacturing cost is also reduced. Furthermore, information about an obstacle in the external environment can be quickly detected. The structure is simple, positioning and tracking are accurate, and the application value is high.
A health management platform and method based on a children-oriented robot. The method, based on a children-oriented robot (1), performs a tracking analysis on the health state of a human body by means of an intelligent health management system composed of various external meters with independent functions, a cloud server (5), a mobile client (6), etc., and provides reasonable suggestions. Since the health management platform is based on a children-oriented robot and uses intrinsic functions of the children-oriented robot, the cost is significantly reduced. Detection and analysis are made networked and intelligent by means of the cloud server (5), the mobile client (6), etc.
A robot control system and method based on brainwave signals, and a head-mounted apparatus (10). The method comprises: collecting brainwave signals by means of an electrode sensor (11) and a reference electrode (12) on the head-mounted apparatus (10); processing the brainwave signals to obtain a brain activity index; transmitting the brain activity index to a robot; and the robot controlling its own actions according to corresponding control instructions. The head-mounted apparatus (10) has a simple structure, and the method is simple and practicable, thereby laying the foundation for the development of fields such as bioassay, medical rehabilitation, mind games, smart homes, and intelligent control.
A61B 5/0482 - Electroencephalography using biofeedback
A61M 21/00 - Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
ROS SYSTEM-BASED ROBOTIC ARM MOTION CONTROL METHOD AND SYSTEM
A ROS system-based robotic arm motion control method and system, mainly comprising the following steps: 1. drawing three-dimensional models of the components of a robotic arm; 2. creating and saving a coordinate system for the three-dimensional models of the components of the robotic arm; 3. writing an XML-based robotic arm description file; 4. performing ROS system-based robotic arm motion planning; and 5. implementing system communication and robotic arm motion control. A kinematic model and a kinetic model of a robotic arm can be quickly established, a corresponding motion planning library is invoked in combination with the MoveIt module of the ROS system to implement motion planning of the robotic arm, and finally the motion planning result is sent to the motion control module of the robotic arm to achieve actions of the robotic arm such as positioning, grabbing, and spatial following. Quick development and verification of robotic arm motion planning can thus be implemented under the ROS system, and the method can be applied both to algorithm verification in research and to control of robotic arms in practical production.
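The kinematic model established in the steps above can be illustrated, outside ROS, by the simplest possible case: planar forward kinematics for a serial arm. The link lengths and joint angles below are arbitrary assumptions, not parameters of the disclosed arm:

```python
import math

def forward_kinematics(joint_angles, link_lengths):
    """Planar serial-arm forward kinematics: accumulate each joint
    angle and walk along each link to get the end-effector (x, y)."""
    x = y = 0.0
    total = 0.0
    for angle, length in zip(joint_angles, link_lengths):
        total += angle
        x += length * math.cos(total)
        y += length * math.sin(total)
    return x, y
```

In the ROS workflow itself, this role is played by the URDF description file from step 3 together with the kinematics plugins that MoveIt loads; the sketch only shows what such a model computes.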
A ROS-based mechanical arm grabbing method and system. An implementation process comprises: configuring a use environment of a camera (20) on an upper computer (10), and then disposing the camera (20) above or laterally above an object to be grabbed and obtaining an image of the object to be grabbed that contains a positioning mark; inputting the image to the upper computer (10); reading the image of the camera (20) and performing data processing by using a particular algorithm to obtain spatial pose information of the object to be grabbed in mechanical arm coordinates; and generating a motion information queue and sending same to a lower computer (30). The lower computer (30) receives and parses the motion information queue sent by the upper computer (10), and drives a mechanical arm to perform a grabbing operation according to a preset action. The ROS-based mechanical arm grabbing method and system can effectively utilize the powerful processing capability of the upper computer (10), readily implement the layout of the mechanical arm and the upper computer (10) and the collaborative operation of multiple mechanical arms, and easily implement motion planning of the mechanical arms by using ROS, and thus have wide application prospects.
Provided are a robot and an ambient map-based security patrolling method employing the same. The robot can conduct a thorough patrol according to an ambient map so that no region is left unmonitored, proactively identify an insecure factor, perform confirmation according to a security policy, proactively track the insecure factor, and operate normally at night without additional lighting. The robot and the security patrolling method employing the same have high proactivity and can proactively take action to address an insecure factor, thereby greatly enhancing the effectiveness, timeliness and stability of security patrolling. The security patrolling method comprises: creating a two-dimensional planimetric map of the entire monitored region; planning a monitoring route; positioning the current location of the robot in the monitored region; and conducting a patrol according to the planned monitoring route.
A method and a system for managing robot internal unit wireless networking, the system comprising a master control unit and a plurality of subsidiary units, each unit having a wireless transceiver module, wireless communication being performed between the units. The present method and system for managing robot internal unit wireless networking use short-distance, low-power wireless communication technology to enable the robot internal modules to perform networking, replacing cabled communication methods and thereby eliminating the electrical faults caused by long-term wear and bending of cables, improving the safety and reliability of robots. Any unit in the wireless network can directly or indirectly perform data communication, and the wireless network repairs itself automatically when a unit is added or withdrawn; the wireless network has short latency, rapid response speed, low power consumption, high-speed communication, good reliability, and good safety.
A visual tracking method based on monocular gesture recognition, and a robot, capable of obtaining in a timely manner a precise deviation angle between the robot and a tracking target by recognizing a featured gesture, thereby implementing easy and precise tracking and smoother tracking actions. In addition, an initial distance can be measured by a single-point ranging module, and a precise relative distance between the robot and the tracking target can be obtained in a timely manner, thus achieving high tracking precision. The method and the robot offer accuracy higher than that of color-block tracking and greatly reduced costs compared with a 3D motion sensing solution, provide smooth user interaction with easily mastered operation points, and are easy to use.
A height measurement method based on monocular machine vision, comprising the following steps: an RGB camera (1) on the head of a robot photographs two-dimensional markers at the head and feet of a person to be tested; the robot calculates a homography matrix of the current field of view according to the four corner points of the detected two-dimensional marker; a head image area (3) of the person to be tested is segmented out by means of an image segmentation algorithm, so as to calculate the pixel coordinates of the top of the head of the person to be tested; and the height of the person to be tested is then calculated. The height measurement method based on monocular machine vision is simple in operation and calculation, so that a person to be tested can measure his or her height without help from others. The measurement method is non-contact, thereby further improving the measurement accuracy and increasing the measurement speed.
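The homography step above can be sketched with the standard DLT linear system: four corner correspondences between the image and a plane of known metric coordinates determine the 3x3 matrix, after which any pixel (such as the top of the head) can be mapped into that plane. This is a generic pure-numpy sketch, not the disclosed algorithm, and all coordinates used in the example are hypothetical:

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve the 3x3 homography mapping src -> dst (four point pairs)
    via the DLT: each correspondence contributes two linear equations,
    and the solution is the right null vector of the stacked system."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

def apply_homography(h, point):
    """Map a pixel through the homography (homogeneous coordinates)."""
    p = h @ np.array([point[0], point[1], 1.0])
    return p[0] / p[2], p[1] / p[2]
```

Mapping the head-top pixel through the homography gives its position in the metric plane of the marker, from which the height follows by subtracting the foot-level coordinate.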
A method and system for autonomous robot charging, the charging method comprising the following steps: a robot detects its power level; when it detects that the power level is low, the robot contacts a charging base in a wireless manner; the robot calculates, according to the process of contacting the charging base in a wireless manner, its distance and angle relative to the charging base; a motion control system of the robot controls the robot to approach the charging base according to the distance and angle; and when the robot arrives directly in front of the charging base, or when the distance and angle are smaller than set thresholds, the robot connects with the charging base for charging. The present invention achieves autonomous charging for a robot, is relatively low-cost, is applicable in complex usage environments, and improves the degree of intelligence of robots.
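The approach stage above (drive toward the base until distance and angle fall below thresholds) can be sketched as one iteration of a proportional controller. The gains and thresholds are illustrative assumptions, not values from the disclosure:

```python
def approach_step(distance, angle, k_lin=0.5, k_ang=1.5,
                  arrive_dist=0.05, arrive_ang=0.05):
    """One control iteration: command a forward speed proportional to
    the remaining distance and a turn rate proportional to the angular
    offset; report arrival once both are below their thresholds."""
    if distance < arrive_dist and abs(angle) < arrive_ang:
        return 0.0, 0.0, True          # docked: connect and charge
    return k_lin * distance, k_ang * angle, False
```

The motion control system would call this each cycle with the freshly estimated distance and angle, stopping when the third return value signals arrival.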
A method and system for ultrasonic wave-based autonomous robot charging. An ultrasonic wave transmitting module (3) and a wireless communication module are installed on a charging base, and two ultrasonic wave receiving modules (1, 2) and a wireless communication module are installed on the robot body. The robot calculates, according to the intensities and the intensity difference of the received ultrasonic signals, its distance and orientation relative to the charging base, and completes autonomous homing by combining a motion control system with a pose adjustment strategy, so as to achieve autonomous charging. The present invention has a relatively low cost and is applicable in complex use environments, thus improving the degree of intelligence of the robot.
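With two receivers, the base position follows from plain two-circle geometry. The sketch below assumes the per-receiver ranges have already been estimated (e.g. from signal intensity, as described above) and that the receivers sit symmetrically about the robot's centre; the baseline and coordinates are hypothetical:

```python
import math

def locate_base(d_left, d_right, baseline):
    """Triangulate the charging base from two receiver ranges.
    Receivers sit at (-baseline/2, 0) and (+baseline/2, 0) in the
    robot frame; the base is assumed ahead of the robot (y >= 0)."""
    # Subtracting the two circle equations eliminates y:
    x = (d_left ** 2 - d_right ** 2) / (2.0 * baseline)
    y_sq = d_left ** 2 - (x + baseline / 2.0) ** 2
    y = math.sqrt(max(y_sq, 0.0))      # clamp noise-induced negatives
    return x, y
```

The bearing `atan2(x, y)` and range `hypot(x, y)` from this position are exactly the distance and orientation the pose adjustment strategy needs.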
A sound source localization method for a robot, and a system. The method combines time delay estimation with power spectrum intensity comparison, estimating the approximate location of a sound source from the power spectrum intensity of the sound received by each sound collection apparatus and the spatial location of each apparatus, which in most cases yields an accurate estimate of the approximate location of the sound source. The power spectrum intensity comparison refers to calculating the average power spectrum intensity of each sound collection apparatus within a specific frequency range; the average power spectrum intensity is inversely related to the distance from the sound source to the apparatus, that is, an apparatus observing a larger power spectrum intensity is closer to the sound source, and an apparatus observing a smaller power spectrum intensity is farther from it. The sound source localization method can localize a sound source around the robot more accurately, provide a location basis for further action of the robot, and improve the intelligence of the robot during human–computer interaction.
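The power spectrum intensity comparison described above can be sketched with a plain FFT-based band-power estimate. The frequency band, sample rate, and apparatus layout are illustrative assumptions, not the disclosed parameters:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Average power spectral intensity of `signal` within
    [f_lo, f_hi] Hz, from the magnitude-squared FFT."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[band].mean()

def nearest_apparatus(signals, fs, f_lo=300.0, f_hi=3000.0):
    """Index of the sound collection apparatus with the largest
    average band power, i.e. (by the inverse intensity-distance
    relation) the one presumed closest to the sound source."""
    powers = [band_power(s, fs, f_lo, f_hi) for s in signals]
    return int(np.argmax(powers))
```

Combining this coarse nearest-apparatus estimate with time delay estimation between apparatus pairs then refines the coarse location into a usable bearing.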