This work presents the details of the developed CNN, the design of the robot, and a proposed experiment to test the autonomy of the robot in a controlled real-world environment. The autonomous vehicle developed in this project is expected to satisfy the following objectives:
The CNN was trained as described above and, when tested on a dataset provided by the University of Cambridge, gave sub-par performance, since the weather conditions of the two datasets differ. The model was trained on two datasets, from Udacity and NVIDIA. The Udacity dataset consisted of 33,478 images (6.6 GB), while the NVIDIA dataset had 7,064 images (2.1 GB). As seen in Fig. 6, there is a stark difference in the validation Mean Square Error (MSE) loss of the model after 30 epochs in the two cases, visualized with the help of TensorFlow. Training on the Udacity dataset gave a loss of 0.0003798 after 30 epochs, while the NVIDIA dataset gave a loss of 1.013, several orders of magnitude higher than the Udacity result.
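The validation metric reported above is the mean squared error between predicted and ground-truth steering angles. A minimal sketch of that computation in plain Python (the function name and list-based interface are illustrative, not taken from the paper):

```python
def mse(predicted, actual):
    """Mean squared error between predicted and ground-truth steering angles."""
    assert predicted and len(predicted) == len(actual), "inputs must be non-empty and equal length"
    return sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(predicted)

# A perfect prediction yields zero loss.
print(mse([0.12, -0.05, 0.30], [0.12, -0.05, 0.30]))  # -> 0.0
```

In training frameworks such as TensorFlow, this same quantity is tracked per epoch over the held-out validation split, which is what Fig. 6 plots.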
This paper describes a methodology for implementing a level-2 autonomous vehicle in a relatively sparsely occupied environment. A CNN is trained on a dataset by Udacity and used to compute the optimal steering angle from the camera's image input. In case of an obstruction in the path, three ultrasonic sensors are used to decide in which direction the vehicle should turn to continue on its path. Once the obstacle is cleared, the vehicle resumes its normal manoeuvring based on the steering angle given by the CNN.
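The obstacle-avoidance step above can be sketched as a simple rule over the three ultrasonic readings. The function name, the threshold value, and the return labels below are illustrative assumptions, not the paper's implementation:

```python
def choose_turn(left_cm, centre_cm, right_cm, threshold_cm=50.0):
    """Decide a turn direction from three ultrasonic distance readings (cm).

    If the centre path is clear, keep following the CNN steering angle;
    otherwise turn towards whichever side reports the larger free distance.
    (Threshold and labels are illustrative assumptions.)
    """
    if centre_cm >= threshold_cm:
        return "cnn"  # no obstruction ahead: defer to the CNN
    return "left" if left_cm > right_cm else "right"

print(choose_turn(120.0, 200.0, 90.0))  # path clear -> "cnn"
print(choose_turn(30.0, 20.0, 150.0))   # blocked ahead, right side clearer -> "right"
```

Once the chosen turn brings the centre reading back above the threshold, control returns to the CNN, matching the resume behaviour described above.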
Based on the above results, an experiment is being designed for on-road testing on the intra-campus roads of BITS Pilani Hyderabad Campus. The test path is 580 meters long with a negligible elevation gradient, since the dataset was also created on flat terrain. The path will be marked with three parallel lanes, each 1 meter wide, using Snowcal powder. The GSM SIM908 GPS module will be used for localization and path tracking of the vehicle.
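For path tracking, the GPS fixes from the SIM908 (latitude/longitude in decimal degrees) can be converted into distance travelled along the 580 m route using the haversine formula. This is a sketch of one way to do it, not the paper's tracking code:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes (decimal degrees)."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# One degree of latitude spans roughly 111.2 km.
print(round(haversine_m(0.0, 0.0, 1.0, 0.0)))  # -> 111195
```

Summing the haversine distance between consecutive fixes gives the distance covered, which can be compared against the 580 m test path to track progress.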