Deep Neural Networks (DNNs) have a wide application scope beyond computer vision tasks, promising to replace manual algorithmic implementations in applications ranging from large-scale physics experiments to next-generation network security. Such applications may require data processing rates of millions of samples per second and sub-microsecond latency, which is achievable only with customized FPGA or ASIC implementations. We present LogicNets, a novel method for co-designing DNN topologies and hardware circuits that maps networks to highly efficient FPGA implementations, addressing the needs of such applications.
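The key observation behind LogicNets is that a neuron with low-precision activations and a small fan-in has so few distinct input combinations that its function can be enumerated exhaustively as a truth table and mapped directly onto FPGA lookup tables. The Python sketch below illustrates that enumeration for a single neuron; the function names, bit widths, and ReLU activation are illustrative assumptions, not the paper's exact formulation:

```python
import itertools

def quantize(x, bits):
    """Uniformly quantize x to one of 2**bits levels in [0, 1]."""
    levels = 2 ** bits - 1
    return round(max(0.0, min(1.0, x)) * levels) / levels

def neuron_truth_table(weights, bias, act_bits):
    """Enumerate every quantized input combination of a small-fan-in
    neuron and record its quantized output, yielding a lookup table."""
    levels = [i / (2 ** act_bits - 1) for i in range(2 ** act_bits)]
    table = {}
    for inputs in itertools.product(levels, repeat=len(weights)):
        pre_act = sum(w * x for w, x in zip(weights, inputs)) + bias
        post_act = max(0.0, pre_act)  # ReLU, chosen here for illustration
        table[inputs] = quantize(post_act, act_bits)
    return table

# Fan-in 2 with 2-bit activations gives only 4**2 = 16 table entries,
# small enough to realize directly in FPGA LUT fabric.
table = neuron_truth_table(weights=[0.5, -0.25], bias=0.1, act_bits=2)
print(len(table))  # → 16
```

Because the table size grows exponentially in fan-in and activation bit width, the approach hinges on co-designing the topology (sparse connectivity, aggressive quantization) together with the circuit, which is precisely the co-design the abstract describes.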
Recommended citation: Y. Umuroglu, Y. Akhauri, N. J. Fraser and M. Blott, "High-Throughput DNN Inference with LogicNets," 2020 IEEE 28th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM), 2020, pp. 238-238, doi: 10.1109/FCCM48280.2020.00071.