Using the vision system, the autonomous vehicle detects and classifies objects with high accuracy and determines its navigation path. The image files are imported into the Robot Operating System (ROS) for testing in simulation. Google's TensorFlow and the ImageAI computer vision Python library are used to create a custom neural network model that detects and classifies specific objects, e.g., lane lines to drive within or caution cones marking hazards to avoid. Initial development focused on processing and annotating two hundred images of a given object, recorded from the cameras on the autonomous vehicle at various angles, distances, and lighting conditions. These images are processed through Google Colaboratory, a cloud-based development environment providing free GPU hardware, for faster training. The goal is to apply the customized detection model to produce the desired output. The test procedure included annotating pictures of different objects and assembling them into datasets. A procedure for training on these datasets is developed, and as a result several models for each object are obtained. The model with the best capability to detect the designated objects is then selected. The paper reflects how machine learning is utilized for obstacle detection and navigation so that an autonomous vehicle can maneuver around assigned constraints.
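As a minimal sketch of how a detection model's output might feed a navigation decision of this kind, the following illustrative Python snippet maps detected caution cones to a steering command. All names, the frame width, and the thresholds here are hypothetical assumptions for illustration; they are not taken from the paper's implementation.

```python
# Hypothetical sketch: turning object-detection output into a steering command.
# Each detection dict mimics the (class name, bounding box) output of a trained
# detector; the class name "caution_cone", the frame width, and the thresholds
# below are illustrative assumptions, not the paper's actual parameters.

FRAME_WIDTH = 640  # assumed camera frame width in pixels


def steer_from_detections(detections):
    """Return 'left', 'right', or 'straight' given a list of detections.

    Each detection is {'name': str, 'box': (x1, y1, x2, y2)}. If a cone's
    horizontal center lies in the middle third of the frame, steer toward
    the side with more clearance; otherwise continue straight.
    """
    for det in detections:
        if det["name"] != "caution_cone":
            continue  # only cones are treated as obstacles in this sketch
        x1, _, x2, _ = det["box"]
        center = (x1 + x2) / 2
        if FRAME_WIDTH / 3 <= center <= 2 * FRAME_WIDTH / 3:
            # Cone ahead: veer away from the side the cone occupies.
            return "left" if center > FRAME_WIDTH / 2 else "right"
    return "straight"
```

In practice the detections would come from the trained ImageAI/TensorFlow model and the command would be published as a ROS topic for the simulation, but the decision logic above is independent of either framework.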
