Adaptive sensor fusion for resilient vehicle sensing
Automotive sensing must be robust or resilient to adverse conditions. For example, optical sensors rapidly become ineffective in heavy rain or fog, while radar sensors provide low-resolution data that is inadequate for scene mapping and object identification. Further, most autonomous or semi-autonomous vehicle trials are conducted in sparse sensor environments, where interference is rarely a problem, and assume pre-learnt road network data and continuous GPS availability.
Using our robotarium van equipped with radar, LiDAR, stereo sensors, GPU and INU systems, the student would develop methods to fuse the sensor data so that the system degrades gracefully under adverse conditions, drawing on both statistical and neuron-inspired machine learning approaches.
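To make the idea of graceful degradation concrete, the sketch below shows one simple statistical approach: inverse-variance (precision-weighted) fusion of per-sensor range estimates, where each sensor's noise variance is inflated by a condition-dependent degradation model. This is a minimal illustration, not the project's prescribed method; the sensor names, noise figures, and rain-sensitivity parameters are hypothetical assumptions rather than properties of the robotarium van's actual suite.

"""Minimal sketch of adaptive, reliability-weighted sensor fusion.

Illustrative only: the sensor names, noise figures, and the
rain-degradation model below are hypothetical placeholders.
"""
import numpy as np


def fuse(estimates, variances):
    """Inverse-variance (precision-weighted) fusion of scalar estimates.

    Sensors with larger variance (less reliable) contribute less, so the
    fused estimate degrades gracefully as individual sensors degrade.
    """
    w = 1.0 / np.asarray(variances)          # precision of each sensor
    fused = float(np.sum(w * np.asarray(estimates)) / np.sum(w))
    return fused, float(1.0 / np.sum(w))     # fused estimate and variance


def degraded_variance(base_var, rain_intensity, sensitivity):
    """Inflate a sensor's noise variance as conditions worsen.

    `sensitivity` encodes how strongly a modality suffers in rain:
    optical modalities (LiDAR, stereo) would use a high value, radar a
    low one. The linear model is an illustrative assumption.
    """
    return base_var * (1.0 + sensitivity * rain_intensity)


if __name__ == "__main__":
    true_range = 25.0                        # simulated range to a target (m)
    rng = np.random.default_rng(0)

    # Hypothetical (base variance, rain sensitivity) per modality.
    sensors = {"radar": (1.0, 0.1), "lidar": (0.05, 8.0), "stereo": (0.2, 5.0)}

    for rain in (0.0, 0.5, 1.0):             # clear -> heavy rain
        ests, vars_ = [], []
        for name, (base_var, sens) in sensors.items():
            var = degraded_variance(base_var, rain, sens)
            ests.append(true_range + rng.normal(0.0, np.sqrt(var)))
            vars_.append(var)
        fused, fused_var = fuse(ests, vars_)
        print(f"rain={rain:.1f}  fused range={fused:.2f} m  "
              f"fused variance={fused_var:.3f}")

As the rain intensity rises, the fusion weights shift automatically from the optical modalities towards radar, so the fused estimate remains usable (with honestly widened uncertainty) rather than failing outright.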
S Matzka, AM Wallace and YR Petillot, "Efficient Resource Allocation for Automotive Attentive Vision Systems", IEEE Transactions on Intelligent Transportation Systems, 13(2), 859-872, 2012.
M Sheeny de Moraes, AM Wallace, M Emambakhsh, S Wang and B Connor, "POL-LWIR Vehicle Detection: Convolutional Neural Networks Meet Polarised Infrared Sensors", Proceedings of the International Conference on Computer Vision and Pattern Recognition, Salt Lake City, 2018.
A Ahrabian, M Emambakhsh, M Sheeny and AM Wallace, "Efficient Multi-Sensor Extended Target Tracking using a GM-PHD Filter", IEEE Intelligent Vehicles Symposium, Paris, June 2019.