New number formats for faster deep learning
The training phase in Deep Learning is highly compute- and data-intensive; its efficiency therefore typically restricts the quality of results that can be achieved within a given time frame.
Many learning algorithms are dominated by the rate at which data can be brought to the CPU, i.e., by the memory bandwidth of the executing hardware. Consequently, techniques based on reduced-precision number representations have been shown to deliver speedups without a significant loss in the quality of results.
A new number format, named POSIT, promises better accuracy than the ubiquitous IEEE 754 floating-point numbers.
This project investigates how POSIT numbers can be leveraged to improve the performance of Deep Learning algorithms.
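To make the format concrete, the sketch below decodes an n-bit posit bit pattern into a float. It follows the published posit layout (sign, variable-length regime, up to `es` exponent bits, fraction); the defaults `n=8, es=1` are illustrative choices, not a claim about which configuration this project would use, and different posit variants fix `es` differently.

```python
import math

def decode_posit(bits: int, n: int = 8, es: int = 1) -> float:
    """Decode an n-bit posit bit pattern (unsigned int) into a Python float."""
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):          # pattern 100...0 encodes NaR ("not a real")
        return math.nan
    sign = -1.0 if bits >> (n - 1) else 1.0
    if sign < 0:                      # negative posits decode via two's complement
        bits = (-bits) & mask
    body = bits & ((1 << (n - 1)) - 1)      # remaining n-1 bits after the sign
    # Regime: a run of identical bits terminated by the opposite bit.
    first = (body >> (n - 2)) & 1
    run = 1
    while run < n - 1 and (body >> (n - 2 - run)) & 1 == first:
        run += 1
    regime = run - 1 if first else -run
    remaining = max(n - 2 - run, 0)         # bits left after the regime terminator
    tail = body & ((1 << remaining) - 1)
    # Exponent: up to es bits; truncated low bits are taken as zero.
    exp_bits = min(es, remaining)
    exponent = (tail >> (remaining - exp_bits)) << (es - exp_bits) if exp_bits else 0
    # Fraction: whatever bits remain, with an implicit leading 1.
    frac_bits = remaining - exp_bits
    fraction = (tail & ((1 << frac_bits) - 1)) / (1 << frac_bits) if frac_bits else 0.0
    return sign * 2.0 ** (regime * (1 << es) + exponent) * (1.0 + fraction)
```

The variable-length regime is what distinguishes posits from IEEE 754: values near 1 get more fraction bits (higher accuracy), while very large or small magnitudes trade fraction bits for dynamic range. For example, with `n=8, es=1`, the pattern `0b01000000` decodes to 1.0 and `0b01111111` to the maximum representable value 4096.0.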