Azreen Azman is an associate professor at Universiti Putra Malaysia in Kuala Lumpur. He has just completed a three-month secondment at the University of Lincoln as part of the STEP2DYNA project, funded by the European Union's Horizon 2020 Research and Innovation Programme under the Marie Skłodowska-Curie grant agreement. He has been involved in Work Package 4.
Hazard perception and collision detection are important components of autonomous vehicle safety, and both become more challenging in low-light environments. During the three-month secondment, his focus was to investigate methods for detecting objects on the road in low-light conditions from captured images or video, in order to recognise hazards and avoid collisions.
One of the first tasks Azreen conducted in Lincoln was to collect audio-visual data under different road conditions. He had the opportunity to join his colleagues Siavash Bahrami and Assoc Prof Shyamala Doraisamy from UPM, who were also on secondment at UoL, in conducting audio-visual recordings of the road at the Millbrook Proving Ground in Bedford, United Kingdom. This provided a controlled environment in addition to other recordings made on public roads.
It is anticipated that the performance of deep learning-based object detection algorithms such as the R-CNN variants and YOLO diminishes as input images become darker, owing to the reduced amount of light and increased noise in the captured images. In Azreen's preliminary experiment, which used a Faster R-CNN model trained and tested on a collection of self-collected road images, object detection performance fell by almost 81% on dark and noisy images compared with daylight images.
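The degradation described above has two causes: less light (lower pixel intensities) and more sensor noise. A minimal NumPy sketch of that degradation model follows; the function name and parameter values are illustrative assumptions of mine, not taken from the project.

```python
import numpy as np

def simulate_low_light(image, brightness=0.2, noise_std=0.05, seed=0):
    """Darken an image and add Gaussian sensor-like noise.

    image: float array with values in [0, 1]. `brightness` scales pixel
    values down (less light); `noise_std` models the increased read noise
    that becomes visible in dark frames. Both values are illustrative.
    """
    rng = np.random.default_rng(seed)
    dark = image * brightness
    noisy = dark + rng.normal(0.0, noise_std, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

# A synthetic "daylight" frame: mid-grey background with a bright object.
frame = np.full((64, 64), 0.6)
frame[20:40, 20:40] = 0.9

low_light = simulate_low_light(frame)
# Mean intensity drops sharply while per-pixel noise rises, which is the
# combination that degrades detectors trained mostly on daylight imagery.
```

Feeding frames degraded in this way through a detector trained on daylight images is one simple way to reproduce the performance drop reported above.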
To overcome this problem, an image enhancement and noise reduction method was applied to the dark images before the object detection module. In his investigations, Azreen trained LLNet, a deep autoencoder-based image enhancement and noise reduction method, for dark image enhancement. As a result, Faster R-CNN was able to detect 29% more objects in the enhanced images than in the dark images. The deep learning-based LLNet outperformed the conventional Histogram Equalisation (HE) and Retinex methods; however, its patch prediction and image reconstruction steps are computationally expensive for real-time applications.
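For context on the conventional baseline mentioned above, Histogram Equalisation remaps each pixel through the image's cumulative intensity histogram, spreading a dark image's crowded intensities across the full range. A minimal NumPy sketch, assuming greyscale images with values in [0, 1] (the function name is my own):

```python
import numpy as np

def histogram_equalisation(image, bins=256):
    """Classic histogram equalisation for a greyscale image in [0, 1]:
    each pixel is remapped through the normalised cumulative histogram
    (CDF), stretching crowded dark intensities across the full range."""
    hist, bin_edges = np.histogram(image, bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]  # normalise so the CDF maps intensities into [0, 1]
    flat = np.interp(image.ravel(), bin_edges[:-1], cdf)
    return flat.reshape(image.shape)

# A dark frame: all intensities crowded into the bottom fifth of the range.
rng = np.random.default_rng(0)
dark = rng.uniform(0.0, 0.2, size=(64, 64))
enhanced = histogram_equalisation(dark)
```

HE is cheap enough for real time, which is why it is a natural baseline; the trade-off reported above is that the learned LLNet enhances dark images more effectively but pays for it in patch prediction and reconstruction cost.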
‘The secondment has given me the opportunities and resources to conduct my research for the project and to improve my skills and networking through various meetings and discussions. Despite the challenges faced due to the ongoing pandemic, my host (University of Lincoln) has provided me with the support to work remotely while continuously engaging with other researchers virtually. I would like to thank the sponsors, including Universiti Putra Malaysia and the STEP2DYNA Marie Skłodowska-Curie secondment grant, for these opportunities.’ Azreen Azman