Heck Wins IEEE SPS Best Paper Award

Atlanta, GA
Larry Heck has received the 2020 IEEE Signal Processing Society (SPS) Best Paper Award. Heck and his colleagues will be recognized with this award at the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2021), currently scheduled for June 6-11 in Toronto, Ontario, Canada. 

Heck is the current chair of the advisory board for the Georgia Tech School of Electrical and Computer Engineering (ECE), and he is an M.S.E.E. and Ph.D. graduate of the Institute. He is the president and CEO of Viv Labs and the senior vice president and head of Bixby North America at Samsung. 

The title of Heck’s award-winning paper is “Using Recurrent Neural Networks for Slot Filling in Spoken Language Understanding.” It was published in the IEEE/ACM Transactions on Audio, Speech, and Language Processing in March 2015. His coauthors are Grégoire Mesnil, Yann Dauphin, and Yoshua Bengio, all of the University of Montréal; Kaisheng Yao, Li Deng, Dilek Hakkani-Tur, Xiaodong He, Dong Yu, and Geoffrey Zweig, all of Microsoft Research; and Gokhan Tur, of Apple. 

The last decade has seen the development and broad deployment of personal digital assistants (PDAs), including Apple Siri, Microsoft Cortana, Amazon Alexa, Google Assistant, and Samsung Bixby. A primary component of these PDAs is Natural Language Understanding (NLU): understanding the meaning of the user’s utterance.

The NLU task typically consists of determining the domain of the user’s request, such as travel; the user’s intent, such as finding a flight; and information-bearing parameters commonly referred to as semantic slots, such as city-departure, city-arrival, and date. The task of determining the semantic slots is called slot filling. The paper introduced a new deep learning approach to slot filling that efficiently models past and future temporal dependencies. This work is one of the earliest and most cited papers in a series of deep learning innovations for NLU.
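The core idea can be illustrated with a minimal NumPy sketch of a bidirectional recurrent network tagging each token of an utterance with a slot label. This is not the authors' published model; the vocabulary, slot inventory, and weights below are hypothetical and untrained, and serve only to show how a forward pass (past context) and a backward pass (future context) are combined per token.

```python
import numpy as np

np.random.seed(0)

# Hypothetical toy vocabulary and slot labels (illustration only)
vocab = {"book": 0, "flight": 1, "from": 2, "boston": 3,
         "to": 4, "denver": 5, "tomorrow": 6}
slots = ["O", "B-city-departure", "B-city-arrival", "B-date"]

E, H = 8, 16  # embedding and hidden sizes
W_emb = np.random.randn(len(vocab), E) * 0.1
# Forward- and backward-direction Elman RNN parameters (untrained)
Wf, Uf = np.random.randn(H, E) * 0.1, np.random.randn(H, H) * 0.1
Wb, Ub = np.random.randn(H, E) * 0.1, np.random.randn(H, H) * 0.1
W_out = np.random.randn(len(slots), 2 * H) * 0.1

def bidirectional_slot_tags(tokens):
    """Assign a slot label to each token using both past and future context."""
    x = [W_emb[vocab[t]] for t in tokens]
    hf, hb = np.zeros(H), np.zeros(H)
    fwd, bwd = [], []
    for e in x:                      # left-to-right pass: past dependencies
        hf = np.tanh(Wf @ e + Uf @ hf)
        fwd.append(hf)
    for e in reversed(x):            # right-to-left pass: future dependencies
        hb = np.tanh(Wb @ e + Ub @ hb)
        bwd.append(hb)
    bwd.reverse()
    # Concatenate both directions and pick the highest-scoring slot per token
    return [slots[int(np.argmax(W_out @ np.concatenate([f, b])))]
            for f, b in zip(fwd, bwd)]

tags = bidirectional_slot_tags(
    ["book", "flight", "from", "boston", "to", "denver", "tomorrow"])
print(tags)  # with untrained weights the labels are arbitrary; training fixes them
```

In a trained system, the backward pass is what lets the model label "boston" as a departure city partly because "to denver" follows it, which is the sense in which the approach models future as well as past temporal dependencies.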

Last revised January 7, 2021