Fog reduces visibility and poses a threat to human drivers as well as to current advanced driver assistance systems. This project aims to improve automotive perception in fog. More concretely, we would like to achieve simultaneous defogging and stereo reconstruction of the obscured scene, and possibly go beyond this to generate a 3D interpretation of the scene. To that end, we will fuse measurements from multi-modal sensors, incorporate the vehicle's location and motion information, rely on physical models of image formation through fog, exploit probabilistic inference and optimisation techniques, and possibly use deep learning to boost performance. The success of this project will not only help human drivers see better through fog, but also promote safety for both partially and fully autonomous vehicles travelling through fog.
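As a rough illustration of the kind of physical image-formation model referred to above, the sketch below implements the widely used Koschmieder atmospheric scattering model, which relates a clear scene radiance, per-pixel depth, and an atmospheric light term to the observed foggy image. The function name, the attenuation coefficient `beta`, and the `airlight` value are illustrative assumptions, not part of the project description:

```python
import numpy as np

def apply_fog(clear_image, depth, beta=0.05, airlight=0.9):
    """Simulate fog via the Koschmieder model (illustrative sketch):
        I(x) = J(x) * t(x) + A * (1 - t(x)),  t(x) = exp(-beta * d(x))
    where J is the clear image (H, W, 3) in [0, 1], d the per-pixel
    depth (H, W), beta the scattering coefficient, and A the airlight.
    """
    t = np.exp(-beta * depth)[..., None]      # transmission, shape (H, W, 1)
    return clear_image * t + airlight * (1.0 - t)

def remove_fog(foggy_image, depth, beta=0.05, airlight=0.9):
    """Invert the model when depth, beta, and airlight are known:
        J(x) = (I(x) - A * (1 - t(x))) / t(x)
    (clamped to avoid division blow-up at large depths)."""
    t = np.maximum(np.exp(-beta * depth)[..., None], 1e-3)
    return (foggy_image - airlight * (1.0 - t)) / t
```

Note how the inversion requires depth, which is why defogging and stereo reconstruction are naturally coupled problems: better depth enables better defogging, and vice versa.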