
Yuanhong Yu

Projects

ICRA RoboMaster University AI Challenge

In RMUA 2022, fully autonomous robots from both sides (2v2) shoot projectiles at each other on a battlefield filled with power runes. Participating teams use the official robot platform and must sense the battlefield environment to perform motion planning, control, and autonomous decision making. At the end of the game, the team whose robots have the most HP remaining wins.

In May 2022, our team won 3rd prize in the competition; my main responsibility was the robots' localization and navigation.

  • Because of constraints imposed by the competition venue, we mounted two lidars at different heights on each robot and fused their point clouds to obtain a better localization result (first sketch after this list).

  • The competition field is highly symmetric, which makes accurate localization difficult. Robots also collide hard during engagements, which introduces localization errors, and the wheels can slip during acceleration and braking, which corrupts the odometry our localization scheme relies on. We therefore use the robot positions captured by the sentry camera at the edge of the field to assist localization (second sketch below).

  • We used Cartographer to build the static map. For dynamic obstacle detection we rely on the S2 lidar mounted on the robot's head, because the A3 lidar on the chassis is occluded by the wheels and has very limited ability to see enemy vehicles. For global path planning we use the well-established A* algorithm at a planning frequency of 2 Hz (third sketch below), and for local planning we use the TEB algorithm to optimize the trajectory. Building on our more robust localization scheme, we tuned the TEB parameters to increase motion speed and trajectory smoothness: the robot averages 1.3 m/s with a top speed of 1.75 m/s.

  • To give the decision-making layer more feasible strategies, we add the enemy robot positions captured by the field-side sentry camera to the costmap and set an inflation area around them (final sketch below). This keeps the planner from steering the robot close enough to be rammed or struck by an enemy robot, and it compensates for the limits of detecting obstacles with lidar alone.
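
As a rough sketch of the two-lidar fusion in the first bullet (the extrinsics, frame layout, and voxel size below are illustrative assumptions, not our calibrated values):

```python
import numpy as np

def fuse_scans(cloud_a, cloud_b, T_base_a, T_base_b, voxel=0.05):
    """Merge two lidar clouds into the robot base frame.

    cloud_a / cloud_b: (N, 3) points in each lidar's own frame.
    T_base_a / T_base_b: 4x4 lidar-to-base extrinsics, e.g. from
    calibrating the two mounting heights (assumed values, not ours).
    """
    a = cloud_a @ T_base_a[:3, :3].T + T_base_a[:3, 3]
    b = cloud_b @ T_base_b[:3, :3].T + T_base_b[:3, 3]
    merged = np.vstack([a, b])
    # Voxel-grid downsample: keep one point per occupied voxel so the
    # localizer sees a single, evenly thinned cloud.
    keys = np.floor(merged / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return merged[np.sort(idx)]
```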
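
The sentry-camera assist in the second bullet can be thought of as periodically pulling the drifting onboard estimate toward the field-side observation. A minimal complementary-filter sketch of that idea (a real system would use a proper filter such as an EKF; the gain is an assumption):

```python
import math

def correct_pose(odom_pose, sentry_pose, gain=0.3):
    """Blend the odometry pose (x, y, yaw) toward the pose of our robot
    as seen by the field-side sentry camera. gain=0 trusts odometry
    only; gain=1 snaps fully to the camera observation."""
    x, y, yaw = odom_pose
    sx, sy, syaw = sentry_pose
    # Wrap the yaw difference into [-pi, pi] before blending.
    dyaw = math.atan2(math.sin(syaw - yaw), math.cos(syaw - yaw))
    return (x + gain * (sx - x), y + gain * (sy - y), yaw + gain * dyaw)
```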
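
For the global planner in the third bullet, a compact grid A* in the spirit of what ran at 2 Hz might look as follows (the 4-connected grid and the 0 = free / 1 = occupied encoding are assumptions; TEB then smooths and tracks the returned path locally):

```python
import heapq

def astar(grid, start, goal):
    """4-connected A* on a 2D occupancy grid (0 = free, 1 = occupied).
    start/goal are (row, col); returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan
    frontier = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while frontier:
        _, g, cell, parent = heapq.heappop(frontier)
        if cell in came_from:        # already expanded via a cheaper path
            continue
        came_from[cell] = parent
        if cell == goal:             # walk parents back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                if g + 1 < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = g + 1
                    heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, cell))
    return None
```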
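
And for the last bullet, folding the sentry camera's enemy detections into the costmap with an inflation region could be sketched like this (the cell resolution, radius, and cost value are assumptions, not our tuned parameters):

```python
import numpy as np

def inflate_enemies(costmap, enemy_cells, radius_cells, cost=254):
    """Stamp a circular high-cost disc around each enemy robot position
    (already converted to cell coordinates) so the planner keeps its
    distance instead of cutting past an opponent."""
    rows, cols = costmap.shape
    rr, cc = np.mgrid[0:rows, 0:cols]
    for er, ec in enemy_cells:
        mask = (rr - er) ** 2 + (cc - ec) ** 2 <= radius_cells ** 2
        costmap[mask] = np.maximum(costmap[mask], cost)
    return costmap
```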

2022 WeChat Program Application Development Competition

Between March and May 2022, I led a team as project leader to develop the V5robot applet and entered it in the WeChat applet competition, where we won second prize in the Northwest Region.

The V5robot applet is a robotics education platform whose main features are strong interactivity, an easy learning curve, and content with both depth and breadth. It covers robotics theory across navigation, vision, control, and other modules, and adds mini-games to make learning fun. V5robot also serves as a recruiting platform for Northwestern Polytechnical University's soccer robotics base.

Point pair feature-based pose estimation

The Point Pair Feature (PPF) is a four-dimensional feature computed on pairs of oriented points in a 3D point cloud, defined as follows.
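
Following the standard formulation (Drost et al., 2010): for two points $m_1$ and $m_2$ with surface normals $n_1$ and $n_2$, and $d = m_2 - m_1$,

$$F(m_1, m_2) = \big(\lVert d \rVert_2,\ \angle(n_1, d),\ \angle(n_2, d),\ \angle(n_1, n_2)\big),$$

i.e. the pair's distance together with the three angles between the normals and the connecting vector.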

The PPF method has an offline part and an online part. Offline, the model point cloud is downsampled, and PPF features are extracted from it and stored in a hash table. Online, the scene point cloud is downsampled, PPF features are extracted from it, matching features are looked up in the hash table to build a set of correspondences, and a set of pose hypotheses is generated from those correspondences. Finally, the best transformation matrix is selected by voting.
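
A hedged sketch of that offline/online split (the quantization steps and helper layout are my illustration, not the lab's actual code; pose hypothesis generation and voting are omitted):

```python
import numpy as np
from collections import defaultdict

def ppf(p1, n1, p2, n2, dist_step=0.01, angle_step=np.pi / 30):
    """Quantized point pair feature, used as a hash key."""
    d = p2 - p1
    dist = np.linalg.norm(d) + 1e-12
    angles = (np.arccos(np.clip(np.dot(n1, d / dist), -1.0, 1.0)),
              np.arccos(np.clip(np.dot(n2, d / dist), -1.0, 1.0)),
              np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0)))
    return (int(dist / dist_step),) + tuple(int(a / angle_step) for a in angles)

def build_model_table(points, normals):
    """Offline: hash every (downsampled) model point pair by its PPF."""
    table = defaultdict(list)
    for i in range(len(points)):
        for j in range(len(points)):
            if i != j:
                key = ppf(points[i], normals[i], points[j], normals[j])
                table[key].append((i, j))
    return table

def match_scene_pairs(table, points, normals):
    """Online: look up scene pairs in the table to form correspondences."""
    matches = []
    for i in range(len(points)):
        for j in range(len(points)):
            if i != j:
                key = ppf(points[i], normals[i], points[j], normals[j])
                matches.extend((i, j, m) for m in table.get(key, []))
    return matches
```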

During my internship in the ASGO-3D lab at Northwestern Polytechnical University, I reproduced the PPF-based center-point-voting pose estimation algorithm from "Efficient Center Voting for Object Detection and 6D Pose Estimation in 3D Point Cloud". The method's most distinctive improvement is that it turns PPF's implicit voting into explicit center voting, which in turn changes the implementation of a chain of closely related components: pose hypothesis generation, clustering, and hypothesis verification. I eventually tested the results on the U3OR dataset.
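
To make "explicit center voting" concrete: each model point stores its offset to the model centroid, every scene-to-model pair match casts a vote for where the object's center would be, and peaks in vote space become detections. A toy accumulator under that reading (the alignment step that recovers each match's rotation R is omitted, and the voxel size is an assumption, not the paper's exact procedure):

```python
import numpy as np

def vote_centers(matches, voxel=0.02):
    """matches: iterable of (scene_point, R, offset), where `offset` is
    the matched model point's vector to the model centroid and R aligns
    the matched model pair with the scene pair. Each match votes for
    scene_point + R @ offset; the densest voxel wins."""
    votes = {}
    for p, R, offset in matches:
        center = p + R @ offset
        key = tuple(np.floor(center / voxel).astype(int))
        votes[key] = votes.get(key, 0) + 1
    if not votes:
        return None, 0
    best = max(votes, key=votes.get)
    return np.asarray(best, dtype=float) * voxel + voxel / 2, votes[best]
```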

code