Hyoukjun Kwon, a recent Georgia Tech Ph.D. graduate, received an Honorable Mention at the 2021 ACM-SIGARCH / IEEE-CS TCCA Outstanding Dissertation Award ceremony, presented virtually at the International Symposium on Computer Architecture (ISCA) 2021. ISCA 2021 was held June 14-19 and is the flagship venue for showcasing new ideas and research results in computer architecture.

Kwon’s award citation reads: “For developing mechanisms to quantify the relationship between deep neural network mappings, data reuse, and communication flows for system design of flexible deep learning accelerators.” He is currently a research scientist at Facebook Reality Labs, where he has worked since October 2020. Kwon completed his Ph.D. under the guidance of Tushar Krishna, who is the ON Semiconductor Junior Professor in the School of Electrical and Computer Engineering.

Machine learning (ML), especially deep learning (DL), has delivered impressive results in computer vision, speech recognition, natural language processing, and recommendation systems. This has energized the entire field of computer architecture to develop customized hardware accelerators that enable deployment of DL solutions at the edge and in the cloud. Traditionally, the efficiency of domain-specific accelerators has come from specialization, which occurs when the control path and datapath in the accelerator are tailored to the deep neural network (DNN). A key challenge facing the architecture community is that the field of ML is evolving so rapidly that silicon accelerator chips risk being obsolete by the time they reach the market.

To solve this conundrum, Kwon’s dissertation proposes the key idea of flexible DNN accelerators. In contrast with a fully programmable processor, such as a CPU or GPU, or a fully reconfigurable circuit, such as an FPGA, a flexible accelerator adds small-overhead but high-impact reconfigurability for future-proofing. Furthermore, Kwon’s thesis shows that this kind of future-proofing can also improve performance on existing neural networks, because it allows the hardware to tailor itself to the diverse set of layer parameters instead of targeting the average case.
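To make the average-case point concrete, here is a minimal, hypothetical sketch in Python. It is not code from the dissertation or its tools; the layer shapes, the 8×32 processing-element (PE) array, and the utilization model are all illustrative assumptions. A rigid design commits at design time to one way of spatially unrolling a layer across the array, while a flexible one picks the better unrolling per layer:

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    channels: int  # input channels (C)
    filters: int   # output channels (K)

# Hypothetical, asymmetric PE array; all numbers here are illustrative.
PE_ROWS, PE_COLS = 8, 32

def utilization(rows_needed: int, cols_needed: int) -> float:
    """Fraction of PEs doing useful work for one spatial unrolling."""
    used = min(rows_needed, PE_ROWS) * min(cols_needed, PE_COLS)
    return used / (PE_ROWS * PE_COLS)

layers = [
    Layer("early conv", channels=3, filters=64),
    Layer("mid conv", channels=128, filters=128),
    Layer("depthwise", channels=256, filters=1),
]

for l in layers:
    fixed_ck = utilization(l.channels, l.filters)  # design-time choice 1
    fixed_kc = utilization(l.filters, l.channels)  # design-time choice 2
    flexible = max(fixed_ck, fixed_kc)             # reconfigured per layer
    print(f"{l.name:10s}  C-by-K={fixed_ck:.2f}  "
          f"K-by-C={fixed_kc:.2f}  flexible={flexible:.2f}")
```

In this toy model, neither fixed orientation wins on every layer, but the flexible design matches the better of the two each time, which is exactly the average-case trap the paragraph above describes.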

Kwon’s dissertation also develops a foundational and formal understanding of the complex interplay between the DNN model, its mapping, dataflow, memory accesses, communication flows, and microarchitectural choices. It also presents a suite of open-source software and hardware codebases demonstrating these ideas.
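One slice of that interplay can be sketched with a deliberately simplified traffic model, again our own illustration rather than the dissertation’s cost model or the MAESTRO tool: a single mapping decision, here the output tile width kept on chip for a matrix multiplication C = A × B, changes how often each operand must be fetched from off-chip memory.

```python
import math

def dram_words(M: int, K: int, N: int, tile_n: int) -> int:
    """Toy traffic model for C[M,N] = A[M,K] @ B[K,N]: an M x tile_n
    slice of C stays on chip while A is re-streamed once per tile."""
    n_tiles = math.ceil(N / tile_n)
    return (M * K * n_tiles   # A re-read for every column tile
            + K * N           # each element of B read exactly once
            + M * N)          # each output written once

M, K, N = 256, 256, 256
for tile_n in (8, 64, 256):
    print(f"tile_n={tile_n:3d}  DRAM words={dram_words(M, K, N, tile_n):,}")
```

Under these assumptions the same computation incurs an order-of-magnitude difference in memory traffic depending solely on the mapping, the kind of relationship the dissertation quantifies formally.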

Kwon’s thesis has already shown measurable impact. Many of its foundational ideas appear in a Synthesis Lectures book on DNN accelerator design, co-authored by Kwon; Michael Pellauer (NVIDIA), who mentored Kwon closely throughout his Ph.D. as a co-advisor; Angshuman Parashar (NVIDIA); Ananda Samajdar, a fellow Georgia Tech Ph.D. student; and Krishna. In addition, open-source artifacts developed in the thesis, such as MAERI and MAESTRO, are already being used by several research groups in industry, national labs, and other universities.

###

* IEEE-CS TCCA stands for the IEEE Computer Society Technical Committee on Computer Architecture.