- Powered by an NVIDIA Jetson Nano (included) and based on the Robot Operating System (ROS)
- First-person-view robotic arm for picking, sorting and transporting
- Depth camera and Light Detection and Ranging (Lidar) sensor for mapping and navigation
- 7-inch touch screen for monitoring and debugging parameters
- Open source, with ample PDF materials and tutorials provided
The Hiwonder JetAuto ROS Robot Car w/ Jetson Nano, Lidar, Depth Camera & Touch Screen (Standard Kit) is equipped with an NVIDIA Jetson Nano, an AI vision robotic arm, high-performance encoder motors, a Lidar, a 3D depth camera and a 7-inch touch screen, which together open up a wide range of functionality. Robot motion control, mapping and navigation, path planning, tracking and obstacle avoidance, autonomous driving, intelligent picking, model simulation, somatosensory interaction and voice interaction can all be achieved!
This versatile combination of hardware makes the JetAuto Pro an ideal platform for learning and verifying robotic SLAM functions, as well as getting solutions for ROS development. A wealth of ROS learning materials and tutorials are provided to help you get started quickly!
Free Changeable Models:
The vision robotic arm and depth camera on the JetAuto Pro can be disassembled, enabling the platform to switch freely between two configurations.
Lidar Function:
1) 2D Lidar Mapping and Navigation: The JetAuto carries a high-performance Lidar that supports mapping with a variety of algorithms, including Gmapping, Hector, Karto and Cartographer. In addition, path planning, fixed-point navigation, and obstacle avoidance during navigation can all be implemented.
2) Single-Point and Multi-Point Navigation: The JetAuto uses its Lidar to detect the surroundings in real time, enabling both single-point and multi-point navigation.
3) TEB Path Planning and Obstacle Avoidance: The JetAuto supports TEB (Timed Elastic Band) path planning and monitors obstacles in real time during navigation, so it can replan its route to avoid an obstacle and continue moving.
4) RRT Autonomous Exploration Mapping: Using the RRT (Rapidly-exploring Random Tree) algorithm, the JetAuto can complete exploration mapping, save the map and drive back to its starting point autonomously, with no need for manual control.
5) Lidar Tracking: By scanning the moving object in front of it, the Lidar enables the robot to track a target.
6) Lidar Guarding: The robot watches its surroundings and sounds an alarm when an intruder is detected.
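To illustrate how Lidar mapping of this kind is typically configured under ROS, a minimal slam_gmapping launch file might look like the sketch below. The topic name, frame names and parameter values are generic assumptions for the example, not the JetAuto's actual configuration:

```xml
<launch>
  <!-- Illustrative sketch: frame and topic names depend on the robot's own setup -->
  <node pkg="gmapping" type="slam_gmapping" name="slam_gmapping" output="screen">
    <remap from="scan" to="/scan"/>               <!-- Lidar scan topic (assumed) -->
    <param name="base_frame" value="base_link"/>  <!-- robot body frame (assumed) -->
    <param name="odom_frame" value="odom"/>
    <param name="map_frame"  value="map"/>
    <param name="delta"      value="0.05"/>       <!-- 5 cm map resolution -->
    <param name="particles"  value="30"/>         <!-- particle filter size -->
  </node>
</launch>
```

Other mapping backends listed above (Hector, Karto, Cartographer) are launched the same way, with their own node and parameter sets.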
Depth Camera:
1) RTAB VSLAM 3D Mapping and Navigation: The depth camera supports 3D mapping in two ways: pure RTAB-Map vision, or a fusion of vision and Lidar. This allows the JetAuto Pro to navigate and avoid obstacles in a 3D map, as well as relocalize globally.
2) ORB-SLAM2 + ORB-SLAM3: ORB-SLAM is an open-source SLAM framework for monocular, stereo and RGB-D cameras that computes the camera trajectory in real time and reconstructs the 3D surroundings. In RGB-D mode, the real dimensions of objects can be acquired.
3) Depth Map Data and Point Cloud: Through the corresponding API, the JetAuto Pro can obtain the camera's depth map, color image and point cloud.
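As a sketch of what a depth map and point cloud relate to, the following plain-Python example back-projects depth pixels into 3D points using the standard pinhole camera model. The intrinsic parameters (fx, fy, cx, cy) below are illustrative values, not the JetAuto camera's actual calibration:

```python
# Minimal sketch: converting depth-image pixels to 3D points with the
# pinhole camera model. Intrinsics here are illustrative, not calibrated.

def depth_pixel_to_point(u, v, depth_m, fx, fy, cx, cy):
    """Back-project one depth pixel (u, v) with depth in metres
    into a 3D point in the camera frame."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

def depth_image_to_cloud(depth, fx, fy, cx, cy):
    """Convert a 2D list of depths (metres) to a point cloud,
    skipping invalid (zero) readings."""
    cloud = []
    for v, row in enumerate(depth):
        for u, d in enumerate(row):
            if d > 0:                    # 0 means no return from the sensor
                cloud.append(depth_pixel_to_point(u, v, d, fx, fy, cx, cy))
    return cloud

# Example: the principal-point pixel maps straight onto the optical axis.
point = depth_pixel_to_point(320, 240, 1.5, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(point)  # (0.0, 0.0, 1.5)
```

In practice the depth image would arrive from the camera driver; this sketch only shows the geometry that turns it into a point cloud.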
MediaPipe Development, Upgraded AI Interaction:
Based on the MediaPipe framework, the JetAuto Pro can carry out human body recognition, fingertip detection, face detection, 3D detection and more.
Fingertip Trajectory Recognition; Human Body Recognition; 3D Detection; 3D Face Detection.
AI Deep Learning Framework: Utilizes the YOLO network and a deep learning model library to recognize objects.
KCF Target Tracking: Relying on the KCF filtering algorithm, the robot can track a selected target.
Color/Tag Recognition and Tracking: The JetAuto Pro can recognize and track a designated color, and can recognize multiple AprilTags and their coordinates at the same time.
Augmented Reality (AR): After you select a pattern in the APP, the pattern is overlaid on the AprilTag.
ROS Robot APP Mapping: Use the APP to control the JetAuto Pro for mapping and navigation, and to view camera images.
Body Motion Control: The depth camera first recognizes and analyzes your limbs and body frame, then the robot performs the corresponding action based on your posture.
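To illustrate the core idea behind color recognition and tracking, here is a minimal, self-contained sketch in plain Python: threshold the image in HSV space and take the centroid of the matching pixels as the target position. The HSV range and the tiny test image are made up for the example and are not values from the JetAuto software:

```python
# Illustrative sketch of colour tracking: threshold in HSV space, then
# steer toward the centroid of the matching pixels. The HSV band below
# is a rough red range chosen only for this example.

def in_range(pixel, lo, hi):
    """True if every HSV channel of `pixel` lies within [lo, hi]."""
    return all(l <= p <= h for p, l, h in zip(pixel, lo, hi))

def color_centroid(hsv_image, lo, hi):
    """Return the (x, y) centroid of pixels inside the HSV range,
    or None when no pixel matches."""
    xs, ys = [], []
    for y, row in enumerate(hsv_image):
        for x, px in enumerate(row):
            if in_range(px, lo, hi):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# A tiny 2x2 "image" with one red-ish pixel at (x=1, y=0).
img = [[(0, 0, 0), (5, 200, 200)],
       [(90, 50, 50), (0, 0, 0)]]
print(color_centroid(img, lo=(0, 100, 100), hi=(10, 255, 255)))  # (1.0, 0.0)
```

A real implementation would run this per frame on camera images (e.g. with OpenCV) and feed the centroid offset into the motion controller.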
Smart Vision Robotic Arm:
Loaded with a 5-DOF vision robotic arm, the JetAuto Pro can pick up and sort objects. Furthermore, combining the Lidar and the robotic arm allows the JetAuto Pro to transport objects intelligently during navigation.
Multi-Point Navigation; Transport; URDF Kinematic Simulation Model; MoveIt Simulation and Trajectory Control; Gesture Recognition; Color Sorting; Follow Line to Clear Obstacles; Waste Sorting.
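The kind of computation behind positioning a gripper can be illustrated with a 2-link planar inverse-kinematics sketch. The real arm has 5 DOF and its own geometry; the link lengths and target point below are assumed values chosen only for the example:

```python
import math

# Sketch of 2-link planar inverse kinematics, the core idea behind
# placing an arm's end effector. Link lengths here are made up.

def two_link_ik(x, y, l1, l2):
    """Return (shoulder, elbow) angles in radians that place the
    end effector at (x, y), elbow-up branch; None if unreachable."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        return None                      # target outside the workspace
    elbow = math.acos(c2)
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

def forward(shoulder, elbow, l1, l2):
    """Forward kinematics, used here to check the IK solution."""
    x = l1 * math.cos(shoulder) + l2 * math.cos(shoulder + elbow)
    y = l1 * math.sin(shoulder) + l2 * math.sin(shoulder + elbow)
    return x, y

angles = two_link_ik(0.12, 0.08, l1=0.10, l2=0.10)
print(forward(*angles, 0.10, 0.10))  # round-trips to approx (0.12, 0.08)
```

A 5-DOF arm adds base rotation and wrist joints on top of this planar core; tools like MoveIt solve the full chain numerically from the URDF model.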
Far Field Microphone Array:
This 6-microphone array excels at far-field sound source localization, voice recognition and voice interaction. Compared with an ordinary microphone module, it enables more advanced functions.
Voice Control Navigation; Sound Source Localization; Voice Control Robotic Arm; Voice Control Color Recognition.
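The principle behind sound source localization can be sketched with a single microphone pair: the time difference of arrival (TDOA) between two microphones gives the bearing of a far-field source, and a 6-mic array combines several such pairs. The microphone spacing below is an assumed value, not the geometry of the JetAuto's board:

```python
import math

# Sketch of far-field sound source localization from the time
# difference of arrival (TDOA) at a microphone pair.

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees C

def bearing_from_tdoa(delta_t, mic_spacing):
    """Bearing in radians (0 = broadside to the pair) of a far-field
    source, from the arrival-time difference between two microphones."""
    ratio = SPEED_OF_SOUND * delta_t / mic_spacing
    ratio = max(-1.0, min(1.0, ratio))   # clamp away measurement noise
    return math.asin(ratio)

# Sound arriving at both mics simultaneously comes from straight ahead.
print(bearing_from_tdoa(0.0, mic_spacing=0.06))               # 0.0
# The maximal delay of spacing / c puts the source fully to one side.
print(math.degrees(bearing_from_tdoa(0.06 / 343.0, 0.06)))    # approx 90.0
```

A real array estimates delta_t by cross-correlating the microphone signals, then fuses the bearings from multiple pairs into one direction estimate.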
Interconnected Motorcade:
1) Multi-Vehicle Navigation: Relying on multi-machine communication, the JetAuto Pro can achieve multi-vehicle navigation, path planning and smart obstacle avoidance.
2) Intelligent Formation: A fleet of JetAuto Pro cars can maintain a formation, such as a horizontal line, vertical line or triangle, while moving.
3) Group Control: A group of JetAuto Pro cars can be controlled by a single wireless handle to perform actions uniformly and simultaneously.
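One common way to keep such a formation is leader-follower control: each follower's goal is a fixed offset from the leader, rotated into the world frame by the leader's heading. The triangle offsets below are assumed values for illustration, not the JetAuto software's parameters:

```python
import math

# Sketch of leader-follower formation keeping: rotate each vehicle's
# body-frame offset by the leader's heading to get world-frame goals.

TRIANGLE = [(0.0, 0.0), (-0.5, 0.4), (-0.5, -0.4)]  # leader + 2 followers (m)

def follower_goals(leader_x, leader_y, leader_yaw, offsets):
    """World-frame goal positions for every vehicle in the formation."""
    goals = []
    c, s = math.cos(leader_yaw), math.sin(leader_yaw)
    for ox, oy in offsets:
        goals.append((leader_x + c * ox - s * oy,
                      leader_y + s * ox + c * oy))
    return goals

# Leader at the origin heading +90 degrees: followers fall in behind it.
for goal in follower_goals(0.0, 0.0, math.pi / 2, TRIANGLE):
    print(goal)
```

Each follower would then feed its goal into its own navigation stack, which also handles obstacle avoidance along the way.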
Gazebo Simulation:
The JetAuto Pro employs the ROS framework and supports Gazebo simulation. Gazebo offers a fresh approach to controlling the JetAuto Pro and verifying algorithms in a simulated environment, which reduces hardware requirements and improves efficiency.
1) JetAuto Pro Simulation Control: The kinematics algorithms can be verified in simulation, speeding up algorithm iteration and reducing experiment cost.
2) RViz Data Visualization: RViz can visualize mapping and navigation results, which makes debugging and improving the algorithms easier.
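As an example of a kinematics algorithm one might verify in simulation before running it on hardware, here is a sketch of the standard mecanum-wheel inverse kinematics, mapping a body velocity command (vx, vy, wz) to four wheel speeds. The wheel radius and base geometry are assumed values, not the JetAuto's actual dimensions:

```python
# Sketch of mecanum-drive inverse kinematics: body velocity command
# (vx forward, vy left, wz yaw rate) to four wheel angular speeds.
# Geometry constants below are assumptions for illustration.

WHEEL_RADIUS = 0.04   # m (assumed)
HALF_LENGTH  = 0.10   # m, half the wheelbase (assumed)
HALF_WIDTH   = 0.09   # m, half the track width (assumed)

def mecanum_wheel_speeds(vx, vy, wz):
    """Angular speeds (rad/s) for the front-left, front-right,
    rear-left and rear-right wheels (standard mecanum model)."""
    k = HALF_LENGTH + HALF_WIDTH
    return (
        (vx - vy - k * wz) / WHEEL_RADIUS,   # front-left
        (vx + vy + k * wz) / WHEEL_RADIUS,   # front-right
        (vx + vy - k * wz) / WHEEL_RADIUS,   # rear-left
        (vx - vy + k * wz) / WHEEL_RADIUS,   # rear-right
    )

print(mecanum_wheel_speeds(0.2, 0.0, 0.0))  # straight ahead: all wheels equal
```

In Gazebo, commands like these can be played against the simulated base and the resulting trajectory compared with the intended one before any hardware test.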
Various Control Methods:
WonderAi APP; Map Nav APP (Android Only); Wireless Handle.