Portfolio item number 1
Short description of portfolio item number 1
Short description of portfolio item number 2
Published in AIPLANS@NeurIPS, 2019
We introduce a genetic-programming-inspired methodology to discover zero-shot neural architecture scoring metrics that outperform existing human-designed metrics.
Download here
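As a rough illustration of the approach described above, here is a minimal genetic-programming loop that evolves candidate scoring expressions over per-network statistics and ranks them by their correlation with accuracy. The operator set, statistic names, and toy data are hypothetical, not the paper's actual search space or implementation.

```python
# Minimal sketch of a genetic-programming search for zero-shot scoring
# metrics (the operators, statistics, and toy fitness data below are
# illustrative assumptions, not the paper's actual search space).
import random
import numpy as np
from scipy.stats import kendalltau

# Primitive operations that candidate metrics are composed from.
OPS = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
    "log_abs": lambda a, b: np.log(np.abs(a) + 1e-8),
    "norm": lambda a, b: np.linalg.norm(a),
}

def random_program(depth=3):
    """Sample a random expression tree over two input statistics."""
    if depth == 0:
        return random.choice(["grad_stat", "act_stat"])
    op = random.choice(list(OPS))
    return (op, random_program(depth - 1), random_program(depth - 1))

def evaluate(prog, stats):
    """Evaluate an expression tree on a dict of per-network statistics."""
    if isinstance(prog, str):
        return stats[prog]
    op, left, right = prog
    return OPS[op](evaluate(left, stats), evaluate(right, stats))

def fitness(prog, networks, accuracies):
    """Fitness = rank correlation between the metric and true accuracy."""
    scores = [float(np.mean(evaluate(prog, s))) for s in networks]
    tau, _ = kendalltau(scores, accuracies)
    return 0.0 if np.isnan(tau) else tau

# Toy "networks": random gradient/activation statistics and accuracies.
rng = np.random.default_rng(0)
networks = [{"grad_stat": rng.normal(size=16),
             "act_stat": rng.normal(size=16)} for _ in range(20)]
accuracies = rng.uniform(0.5, 0.9, size=20)

# Simple evolutionary loop: keep the best-scoring programs each generation.
population = [random_program() for _ in range(32)]
for _ in range(10):
    population.sort(key=lambda p: fitness(p, networks, accuracies), reverse=True)
    population = population[:8] + [random_program() for _ in range(24)]

best = population[0]
print("best metric fitness:", fitness(best, networks, accuracies))
```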
Published in IEEE, 2019
In this thesis, we explore how niche domains can benefit greatly from viewing each neuron as a unique boolean function of the form f: B^I -> B^O, where B = {0, 1}.
Download here
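To make the boolean-function view concrete, the toy sketch below enumerates a binary-input, binary-output neuron as a truth table; the thresholded neuron and its weights are hypothetical examples, not taken from the thesis.

```python
# Toy illustration of viewing a neuron with binary inputs and outputs as a
# boolean function f: B^I -> B^O, tabulated as a truth table. The
# thresholded neuron below is a hypothetical example.
from itertools import product

import numpy as np

def binary_neuron(x, weights, threshold):
    """A neuron with inputs/outputs in {0, 1}: fires when the weighted
    sum of its inputs reaches the threshold."""
    return int(np.dot(weights, x) >= threshold)

weights = np.array([1.0, -1.0, 2.0])
threshold = 1.0

# With I binary inputs there are only 2**I rows, so the neuron's entire
# behaviour can be replaced by a lookup table.
truth_table = {x: binary_neuron(np.array(x), weights, threshold)
               for x in product((0, 1), repeat=len(weights))}
for inputs, output in truth_table.items():
    print(inputs, "->", output)
```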
Published in IEEE, 2019
HadaNets introduce a flexible train-from-scratch tensor quantization scheme that pairs a full-precision tensor with a binary tensor through a Hadamard product.
Recommended citation: Akhauri, Y. (2019). HadaNets: Flexible Quantization Strategies for Neural Networks. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9025370
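As a rough illustration of the pairing described above, the sketch below combines a full-precision tensor with a binary tensor via a Hadamard (elementwise) product; the shapes and the sign-based binarization are illustrative assumptions, not the paper's exact scheme.

```python
# Minimal sketch: pairing a full-precision tensor with a binary tensor
# through a Hadamard (elementwise) product. Shapes and the sign-based
# binarization are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
full_precision = rng.normal(size=(4, 4))    # full-precision tensor
binary = np.sign(rng.normal(size=(4, 4)))   # binary {-1, +1} tensor

# The Hadamard product pairs the two tensors element by element.
paired = full_precision * binary
print(paired)
```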
Published in IEEE, 2020
We present a novel method for designing neural network topologies that directly map to a highly efficient FPGA implementation.
Recommended citation: Y. Umuroglu, Y. Akhauri, N. J. Fraser and M. Blott, "LogicNets: Co-Designed Neural Networks and Circuits for Extreme-Throughput Applications," 2020 30th International Conference on Field-Programmable Logic and Applications (FPL), 2020, pp. 291-297, doi: 10.1109/FPL50879.2020.00055. https://ieeexplore.ieee.org/document/9221584
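The co-design idea hinges on keeping each neuron's fan-in and input precision small enough that its entire input-output function can be tabulated in hardware. The sketch below shows how the truth-table size grows with fan-in and bit-width; the specific values are illustrative, not taken from the paper.

```python
# Rough sketch of why low-fan-in, low-precision neurons map to hardware
# lookup tables: a neuron with F inputs of B bits each has only 2**(F*B)
# possible input patterns, so its whole function fits in a small LUT.
# The fan-in/bit-width values below are illustrative.

def lut_entries(fan_in: int, input_bits: int) -> int:
    """Number of truth-table entries needed to tabulate one neuron."""
    return 2 ** (fan_in * input_bits)

for fan_in in (2, 4, 6):
    for bits in (1, 2):
        print(f"fan-in={fan_in}, {bits}-bit inputs -> "
              f"{lut_entries(fan_in, bits)} LUT entries")
```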
Published in IEEE, 2020
Some applications require data processing rates of millions of samples per second with sub-microsecond latency, which is achievable only with customized FPGA or ASIC implementations. To address these needs, we present LogicNets, a novel method for co-designing DNN topologies and hardware circuits that maps to a highly efficient FPGA implementation.
Recommended citation: Y. Umuroglu, Y. Akhauri, N. J. Fraser and M. Blott, "High-Throughput DNN Inference with LogicNets," 2020 IEEE 28th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM), 2020, pp. 238-238, doi: 10.1109/FCCM48280.2020.00071. https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9114869
Published in arXiv, 2021
A realizable, efficient method of co-designing neural network accelerators and neural network architectures.
Download here
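A generic way to picture hardware/architecture co-design is as a joint search over network and accelerator parameters under a shared objective. The sketch below frames it that way; the design space, cost model, and scoring are illustrative assumptions, not this paper's method.

```python
# Generic sketch of co-design as a joint search over network and
# accelerator parameters; the design space, cost model, and scoring
# below are illustrative assumptions, not this paper's method.
import itertools

widths = [32, 64, 128]    # hypothetical network layer widths
pe_counts = [4, 8, 16]    # hypothetical accelerator processing elements

def accuracy_proxy(width: int) -> float:
    """Toy proxy: wider networks score higher, with diminishing returns."""
    return 1.0 - 1.0 / width

def latency_proxy(width: int, pes: int) -> float:
    """Toy cost model: work grows with width, parallelism with PEs."""
    return (width ** 2) / pes

# Pick the (network, accelerator) pair with the best combined objective.
best = max(
    itertools.product(widths, pe_counts),
    key=lambda cfg: accuracy_proxy(cfg[0]) - 1e-4 * latency_proxy(*cfg),
)
print("selected (width, PEs):", best)
```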
Published:
This is a description of your talk, which is a markdown file that can be markdown-ified like any other post. Yay markdown!
Published:
This is a description of your conference proceedings talk; note the different value in the type field. You can put anything in this field.
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.