Autonomous Ground Robot
We designed, fabricated, and programmed an autonomous ground robot to navigate around obstacles, follow waypoints, and respond to visual cues in its environment.
We designed a low-profile, stable chassis using plywood as the primary structural component. We maximized the track width and wheelbase to ensure stable sensor measurements. At the same time, we minimized the robot's height to reduce its overall mass and to maximize the amount of data our LIDAR could collect from the environment.
We controlled our behavioral lights, motors, and ultrasonic and IR sensors with an Arduino. Our Emergency Stop cuts power to the motors while leaving the computing uninterrupted, though the software is notified of the stop. For the majority of our computing power, we used an ODROID-XU4, which handled the LIDAR, camera, IMU, and GPS.
Check out the Final Project Summary in the link!
This is an image convolved with a 3x3 kernel. The convolution is the core operation of a convolutional neural network that processes camera images and outputs steering information.
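As a minimal sketch of the operation itself (not our trained network), here is a plain NumPy 2-D convolution with a 3x3 kernel; the Laplacian-style kernel shown is an illustrative choice, not necessarily the one used in our pipeline:

```python
import numpy as np

def convolve2d(image, kernel):
    """Convolve a 2-D grayscale image with a small kernel (valid mode)."""
    kh, kw = kernel.shape
    h, w = image.shape
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # sum of elementwise products over the 3x3 window
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * flipped)
    return out

# Example: a 3x3 Laplacian-style edge kernel applied to a linear ramp.
# On a perfectly linear ramp the Laplacian response is zero everywhere.
kernel = np.array([[0,  1, 0],
                   [1, -4, 1],
                   [0,  1, 0]], dtype=float)
image = np.arange(25, dtype=float).reshape(5, 5)
result = convolve2d(image, kernel)
print(result.shape)  # (3, 3)
```

A CNN stacks many such convolutions with learned kernels, which is why visualizing a single hand-picked 3x3 convolution is a useful first step.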
We detect cones using OpenCV and use a center of mass calculation weighted by distance from horizon in order to steer our robot.
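The steering step can be sketched as follows. This assumes the OpenCV stage (e.g., a color threshold such as `cv2.inRange`) has already produced a binary cone mask; the function names, the horizon row, and the [-1, 1] steering convention are illustrative assumptions, not our exact implementation:

```python
import numpy as np

def steer_from_mask(mask, horizon_row=0):
    """Compute a steering signal from a binary cone mask.

    Each cone pixel is weighted by its distance below an assumed horizon
    row, so nearby cones (lower in the image) dominate the centroid.
    Returns a value in [-1, 1]; positive means the weighted cone mass
    lies to the right of image center.
    """
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return 0.0  # no cones detected: hold course
    weights = np.clip(rows - horizon_row, 0, None).astype(float)
    if weights.sum() == 0:
        return 0.0  # all detections at or above the horizon
    cx = np.average(cols, weights=weights)  # weighted center of mass (x)
    center = (mask.shape[1] - 1) / 2
    return (cx - center) / center

# Synthetic 10x10 mask with a cone-like blob low and to the right
mask = np.zeros((10, 10), dtype=np.uint8)
mask[6:9, 7:9] = 1
print(steer_from_mask(mask, horizon_row=3))  # positive: mass is right of center
```

Weighting by distance from the horizon is what makes the nearest cone, the one the robot must react to first, dominate the steering command.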
Here is a final render of our robot.