Wall Follower

This study presents an approach to a wall-following problem. A virtual robot equipped with ultrasonic sensors is tasked with autonomously navigating the interior of an environment while maintaining a constant distance to the wall. The solution leverages the extensive functionality of the V-REP virtual robot experimentation platform to provide the simulated physical environment layer, including the wall and mobile robot objects. The control layer, implemented in Python and interfacing with V-REP via its remote API, manages environment perception and navigation.
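
For illustration only, the sketch below shows how such a control loop might talk to V-REP through the legacy remote Python API. The object names, target clearance, gains and the simple proportional steering rule are assumptions rather than the project's actual implementation, and the wall is assumed to be followed on the robot's right-hand side.

import math
import time

import sim  # V-REP legacy remote API Python bindings (also distributed as 'vrep')

TARGET_DIST = 0.5   # desired wall clearance in metres (assumed)
BASE_SPEED = 1.0    # baseline wheel velocity in rad/s (assumed)
KP = 2.0            # proportional steering gain (assumed)

client = sim.simxStart('127.0.0.1', 19999, True, True, 5000, 5)
if client == -1:
    raise RuntimeError('Could not connect to the V-REP remote API server')

_, left = sim.simxGetObjectHandle(client, 'Pioneer_p3dx_leftMotor', sim.simx_opmode_blocking)
_, right = sim.simxGetObjectHandle(client, 'Pioneer_p3dx_rightMotor', sim.simx_opmode_blocking)
_, sensor = sim.simxGetObjectHandle(client, 'Pioneer_p3dx_ultrasonicSensor8', sim.simx_opmode_blocking)

# The first read starts streaming of the sensor data; later reads use the buffer.
sim.simxReadProximitySensor(client, sensor, sim.simx_opmode_streaming)
time.sleep(0.5)

try:
    while True:
        code, detected, point, _, _ = sim.simxReadProximitySensor(
            client, sensor, sim.simx_opmode_buffer)
        if code == sim.simx_return_ok and detected:
            wall_dist = math.sqrt(point[0] ** 2 + point[1] ** 2 + point[2] ** 2)
            error = TARGET_DIST - wall_dist   # positive when too close to the wall
        else:
            error = 0.0                       # no wall detected: drive straight
        # Too close to a right-hand wall: slow the left wheel, speed up the right,
        # so the robot turns away from the wall (and vice versa when too far).
        sim.simxSetJointTargetVelocity(client, left, BASE_SPEED - KP * error, sim.simx_opmode_oneshot)
        sim.simxSetJointTargetVelocity(client, right, BASE_SPEED + KP * error, sim.simx_opmode_oneshot)
        time.sleep(0.05)
finally:
    sim.simxFinish(client)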

A final evaluation compares single- and multi-sensor approaches. The final, optimized multi-sensor solution offered the best performance, especially at high speed, completing one loop in 155 s with a navigation distance error of 420.85 cm.

Recommendations include more refined sensor calibration and further research into proven control methods.

Master’s Degree Intelligent Systems paper, grade 92% | Source code

Wall Follower With PID Controller

Implemented as a plugin to the control layer of the first Wall Follower solution, this study adds a Proportional, Integral and Derivative (PID) controller tasked with setting the motor control velocities. Several variants of the PID controller were tested. Using a fixed baseline speed, the PI controller performed best, reducing navigation error by 12% over the best efforts of the previous research. The PD controller delivered optimum performance when the baseline speed was dynamically adjusted in proportion to the error, reducing navigation error by 55.8% over the previous best.
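
The sketch below shows the discrete PID form that such controller variants can be built around; the gains, the anti-windup clamp and the way the correction is applied to the wheel speeds are assumptions rather than the study's actual values. In the dynamic-speed PD variant the baseline speed itself would additionally be scaled with the proportional error; that detail is omitted here.

BASE_SPEED = 1.0  # placeholder baseline wheel velocity

class PID:
    """Discrete PID controller: output = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, integral_limit=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral_limit = integral_limit   # anti-windup clamp (assumed)
        self._integral = 0.0
        self._prev_error = None

    def step(self, error, dt):
        """Return the control output for the current wall-distance error."""
        self._integral += error * dt
        # Clamp the integral term so noisy sensor readings cannot wind it up.
        self._integral = max(-self.integral_limit,
                             min(self.integral_limit, self._integral))
        derivative = 0.0 if self._prev_error is None else (error - self._prev_error) / dt
        self._prev_error = error
        return self.kp * error + self.ki * self._integral + self.kd * derivative

# A PI variant sets kd=0, a PD variant sets ki=0 (the configurations compared above).
controller = PID(kp=2.0, ki=0.4, kd=0.1)           # gains are placeholders
correction = controller.step(error=0.05, dt=0.05)  # e.g. 5 cm too far from the wall, 50 ms step
left_speed = BASE_SPEED - correction
right_speed = BASE_SPEED + correction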

Recommendations include refined sensor noise filtering to improve integral and derivative control.

Master’s Degree Intelligent Systems paper, grade 92% | Source code

Search And Rescue Robot

On 26th April 1986 the Chernobyl unit 4 nuclear reactor exploded, killing 32 people through the blast and acute radiation syndrome, while many thousands more are estimated to have suffered long-latency cancers. A decade later the Pioneer robot, funded by NASA and the U.S. Department of Energy (DoE) and designed by a team of robotics, hardware and software experts from the Jet Propulsion Laboratory (JPL), Carnegie Mellon University, Lawrence Livermore National Laboratory, Silicon Graphics and other partners, was deployed to Chernobyl to help assess the site and its significantly deteriorating first concrete sarcophagus.

The first known use of robots for urban search and rescue (USAR) came six hours after the World Trade Center (WTC) disaster, when robots were used to search for victims, identify efficient excavation paths through the rubble, inspect structures, and detect hazardous materials and explosive gases.

After the Great Hanshin Earthquake of 1995, in which an estimated 6,434 people lost their lives, the Japanese government sought to address the challenges of SAR operations by promoting the development of intelligent, highly mobile, dexterous robots that can improve the safety and effectiveness of emergency responders performing hazardous tasks, sponsoring prizes in the RoboCupRescue competitions. The competition’s primary task is the location of simulated victims showing signs of life within a maze of harsh terrain. The robot generates a map that should represent landmarks, obstacles and victims.

This study researched and implemented a virtual solution designed to simulate capabilities required of Search and Rescue robots. These include obstacle avoidance, sensor data fusion, environment mapping, precision navigation and victim location.

A Python control layer commands a Pioneer P3DX robot modelled in V-REP. During the victim search, ultrasonic sensor data is fused into an occupancy grid (OG) of free- and occupied-space probabilities. The A* path-finding algorithm then uses the OG to propose an optimal path from the victim back to the home point. All task solutions were successful.
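
As a rough illustration of this pipeline, the sketch below pairs a log-odds occupancy-grid update with a 4-connected A* search over the thresholded grid. The grid size, log-odds increments, occupancy threshold and Manhattan heuristic are assumptions; the project's actual mapping and planning code may differ.

import heapq
import itertools

import numpy as np

L_OCC, L_FREE = 0.85, -0.4    # log-odds increments per reading (assumed values)

def update_cell(log_odds, cell, occupied):
    """Fuse one ultrasonic reading into a grid cell's log-odds occupancy value."""
    log_odds[cell] += L_OCC if occupied else L_FREE

def a_star(log_odds, start, goal, occ_threshold=0.0):
    """Return a list of grid cells from start to goal, or None if unreachable."""
    def h(a, b):                                  # Manhattan-distance heuristic
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    tie = itertools.count()                       # tie-breaker for equal-cost entries
    open_set = [(h(start, goal), 0, next(tie), start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, _, current, parent = heapq.heappop(open_set)
        if current in came_from:                  # already expanded via a cheaper route
            continue
        came_from[current] = parent
        if current == goal:                       # walk parents back to rebuild the path
            path = [current]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (current[0] + dr, current[1] + dc)
            if not (0 <= nxt[0] < log_odds.shape[0] and 0 <= nxt[1] < log_odds.shape[1]):
                continue
            if log_odds[nxt] > occ_threshold:     # treat the cell as occupied
                continue
            new_g = g + 1
            if new_g < g_cost.get(nxt, float('inf')):
                g_cost[nxt] = new_g
                heapq.heappush(open_set, (new_g + h(nxt, goal), new_g, next(tie), nxt, current))
    return None

grid = np.zeros((100, 100))                       # 100 x 100 cell map, all cells unknown
update_cell(grid, (50, 60), occupied=True)        # one simulated obstacle reading
path = a_star(grid, start=(50, 50), goal=(10, 10))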

Recommendations include performance optimization of the grid mapping and resolving identified issues with missing or over-exposed environment features that affect route planning.

Master’s Degree Intelligent Systems paper, grade 100% | Source code