Georgia Tech Presents Cybersecurity Discoveries at NDSS '18

Atlanta, GA
Network and Distributed System Security Symposium 2018

The Network and Distributed System Security Symposium takes place Feb. 18-21, 2018, in San Diego, bringing together leading universities and technology organizations from around the world.


Can your fingerprint make your app vulnerable to cyberattack? How can we verify whether an unknown caller has been blacklisted for fraud? Is it possible to authenticate users in real time with live video and voice? These and other cybersecurity research discoveries from the Georgia Institute of Technology (Georgia Tech) will be on display when researchers, faculty, and students gather this week at the Network and Distributed System Security Symposium (NDSS '18) -- a premier international event for the information security community.

Georgia Tech again ranks among the five universities with the most research accepted into the peer-reviewed conference. Georgia Tech will present five academic papers -- tying it with the University of Texas at Dallas and Northeastern University for the volume of research accepted into NDSS '18. Only Indiana University and Purdue University will bring more work to the show. In all, 109 organizations contributed new discoveries to NDSS '18, including IBM Watson, Samsung, NEC Labs, and leading universities from North America, Asia, Europe, and Israel.

The symposium -- held Feb. 18-21 in San Diego -- is sponsored by the Internet Society. Assistant Professors Taesoo Kim (School of Computer Science) and Brendan Saltaformaggio (School of Electrical and Computer Engineering) served on the program committee, helping to organize speakers and a session dedicated to Android.


Research by Georgia Tech

"Broken Fingers: On the Usage of the Fingerprint API in Android"

in collaboration with University of California, Santa Barbara
Antonio Bianchi, Yanick Fratantonio*, Machiry Aravind Kumar, Christopher Kruegel, and Giovanni Vigna (University of California, Santa Barbara); Simon Pak Ho Chung and Wenke Lee (Georgia Tech)

Smartphones are increasingly used for very important tasks such as mobile payments. Correspondingly, new technologies are emerging to provide better security on smartphones. One of the most recent and most interesting is the ability to recognize fingerprints, which enables mobile apps to use biometric-based authentication and authorization to protect security-sensitive operations.

In this paper, we present the first systematic analysis of the fingerprint API in Android, and we show that this API is not well understood and often misused by app developers. To make things worse, there is currently confusion about which threat model the fingerprint API should be resilient against. For example, although there is no official reference, we argue that the fingerprint API is designed to protect from attackers that can completely compromise the untrusted OS. After introducing several relevant threat models, we identify common API usage patterns and show how inappropriate choices can make apps vulnerable to multiple attacks. We then design and implement a new static analysis tool to automatically analyze the usage of the fingerprint API in Android apps. Using this tool, we perform the first systematic study on how the fingerprint API is used.

The results are worrisome: Our tool indicates that 53.69% of the analyzed apps do not use any cryptographic check to ensure that the user actually touched the fingerprint sensor. Depending on the specific use case scenario of a given app, it is not always possible to make use of cryptographic checks. However, a manual investigation on a subset of these apps revealed that 80% of them could have done so, preventing multiple attacks. Furthermore, the tool indicates that only 1.80% of the analyzed apps use this API in the most secure way possible, while many others, including extremely popular apps such as Google Play Store and Square Cash, use it in weaker ways. Worse still, we find issues and inconsistencies even in the samples provided by the official Google documentation. We end this work by suggesting various improvements to the fingerprint API to prevent some of these attacks.

*Co-author Yanick Fratantonio was a Ph.D. summer intern at Georgia Tech, advised by Wenke Lee, while completing this project.
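The "cryptographic check" the authors find missing in most apps can be illustrated with a short sketch. Note the heavy simplification: on Android the signing key lives in the hardware-backed keystore and only becomes usable inside the fingerprint callback (via FingerprintManager.CryptoObject); here a shared HMAC key and all function names are illustrative stand-ins, not the paper's implementation.

```python
import hashlib
import hmac
import secrets

def issue_challenge() -> bytes:
    """Server issues a fresh nonce so a recorded response cannot be replayed."""
    return secrets.token_bytes(32)

def sign_after_fingerprint(device_key: bytes, nonce: bytes) -> bytes:
    """Client side: would run only after the sensor reports a successful touch
    (on Android, the keystore enforces this; HMAC stands in for the signature)."""
    return hmac.new(device_key, nonce, hashlib.sha256).digest()

def verify(device_key: bytes, nonce: bytes, tag: bytes) -> bool:
    """Server accepts only a valid tag over the exact nonce it issued."""
    expected = hmac.new(device_key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

key = secrets.token_bytes(32)
nonce = issue_challenge()
tag = sign_after_fingerprint(key, nonce)
assert verify(key, nonce, tag)                  # genuine, fresh response accepted
assert not verify(key, issue_challenge(), tag)  # replayed tag fails on a new nonce
```

Without a check of this kind, an app merely trusts a boolean "fingerprint OK" callback, which an attacker who controls the untrusted OS can forge.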

Download the paper.


"rtCaptcha: A Real-Time CAPTCHA Based Liveness Detection System"

Erkam Uzun, Simon Pak Ho Chung, Irfan Essa, and Wenke Lee (Georgia Tech)

Facial/voice-based authentication is becoming increasingly popular (e.g., already adopted by MasterCard and AliPay) because it is easy to use. In particular, users can now authenticate themselves to online services by using their mobile phone to show themselves performing simple tasks like blinking or smiling in front of its built-in camera. Our study shows that many publicly available facial/voice recognition services (e.g., Microsoft Cognitive Services or Amazon Rekognition) are vulnerable to even the most primitive attacks. Furthermore, recent work on modeling a person's face/voice (e.g., Face2Face) allows an adversary to create very authentic video/audio of any target victim in order to impersonate that target. All it takes to launch such attacks are a few pictures and voice samples of the victim, which can be obtained by abusing the camera and microphone of the victim's phone, or through the victim's social media account.

In this work, we propose the Real Time Captcha (rtCaptcha) system, which stops or slows down such attacks by turning the adversary's task from creating authentic video/audio of the victim performing known authentication tasks (e.g., smile, blink) into figuring out what the authentication task is, which is encoded as a Captcha. Specifically, when a user tries to authenticate using rtCaptcha, they are presented a Captcha and asked to take a "selfie" video while announcing the answer to it. As such, the security guarantee of our system comes from the strength of the Captcha, not from how well we can distinguish real faces/voices from synthesized ones. To demonstrate the usability and security of rtCaptcha, we conducted a user study to measure human response times to the most popular Captcha schemes. Our experiments show that, thanks to how quickly humans solve Captchas, adversaries would have to solve a Captcha in less than two seconds in order to appear live and defeat rtCaptcha -- which is not feasible even with the best settings on the attack side.
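The decision rule at the heart of this scheme can be sketched in a few lines. The function name and the fixed two-second deadline are illustrative simplifications of the paper's user-study-derived thresholds, not its actual code.

```python
# A response counts as "live" only if the CAPTCHA answer is correct AND it
# arrived fast enough that a video/audio synthesis pipeline could not have
# produced it in time.
LIVENESS_DEADLINE_S = 2.0  # humans answer well under this; attack pipelines cannot

def is_live(expected_answer: str, spoken_answer: str,
            issued_at: float, answered_at: float) -> bool:
    correct = spoken_answer.strip().lower() == expected_answer.strip().lower()
    fast_enough = (answered_at - issued_at) <= LIVENESS_DEADLINE_S
    return correct and fast_enough

assert is_live("7 3 9", " 7 3 9 ", issued_at=0.0, answered_at=1.2)   # human-speed, correct
assert not is_live("7 3 9", "7 3 9", issued_at=0.0, answered_at=6.5)  # too slow: likely synthesized
assert not is_live("7 3 9", "1 2 3", issued_at=0.0, answered_at=1.0)  # wrong answer
```

The security argument is that the deadline, not face/voice forensics, is what the attacker cannot beat.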

Download the paper.


"Tipped Off by Your Memory Allocator: Device-Wide User Activity Sequencing from Android Memory Images"

in collaboration with Purdue University, The Affiliated Institute of ETRI, Towson University, and Louisiana State University
Rohit Bhatia (Purdue University); Brendan Saltaformaggio (Georgia Tech); Seung Jei Yang (The Affiliated Institute of ETRI); Aisha Ali-Gombe (Towson University); Xiangyu Zhang and Dongyan Xu (Purdue University); Golden G. Richard III (Louisiana State University)

An essential forensic capability is to infer the sequence of actions performed by a suspect in the commission of a crime. Unfortunately, for cyber investigations, user activity timeline reconstruction remains an open research challenge, currently requiring manual identification of datable artifacts/logs and heuristic-based temporal inference.

In this paper, we propose a memory forensics capability to address this challenge. We present Timeliner, a forensics technique capable of automatically inferring the timeline of user actions on an Android device across all apps, from a single memory image acquired from the device. Timeliner is inspired by the observation that Android app Activity launches leave behind key self-identifying data structures. More importantly, this collection of data structures can be temporally ordered, owing to the predictable manner in which they were allocated and distributed in memory. Based on these observations, Timeliner is designed to (1) identify and recover these residual data structures, (2) infer the user-induced transitions between their corresponding Activities, and (3) reconstruct the device-wide, cross-app Activity timeline. Timeliner is designed to leverage the memory image of Android's centralized ActivityManager service. Hence, it is able to sequence Activity launches across all apps — even those which have terminated. Our evaluation shows that Timeliner can reveal substantial evidence (up to an hour) across a variety of apps on different Android platforms.
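The core ordering observation can be illustrated with a toy sketch. The record fields and the single sort key are hypothetical simplifications: the actual system carves these structures out of a real memory image and additionally infers user-induced Activity transitions.

```python
from dataclasses import dataclass

# Residual Activity records recovered from a memory image can be sequenced
# by exploiting the predictable order in which the allocator placed them.
@dataclass
class ActivityRecord:
    app: str
    activity: str
    alloc_address: int  # where the record was allocated in the service's heap

def reconstruct_timeline(records):
    """Toy assumption: allocation order tracks launch order, so sorting by
    allocation address recovers the cross-app launch sequence."""
    return [(r.app, r.activity)
            for r in sorted(records, key=lambda r: r.alloc_address)]

recs = [
    ActivityRecord("Maps", "NavigationActivity", 0x7F30),
    ActivityRecord("Messenger", "ComposeActivity", 0x7F10),
    ActivityRecord("Camera", "CaptureActivity", 0x7F20),
]
assert reconstruct_timeline(recs)[0] == ("Messenger", "ComposeActivity")
```

Because the records come from the centralized ActivityManager service, the reconstructed sequence spans all apps, including ones that have since terminated.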

Download the paper.


"Towards Measuring the Effectiveness of Telephony Blacklists"

in collaboration with University of Georgia and Pindrop
Sharbani Pandit (Georgia Tech); Roberto Perdisci (University of Georgia, Georgia Tech); Mustaque Ahamad (Georgia Tech); Payas Gupta (Pindrop)

The convergence of telephony with the Internet has led to numerous new attacks that make use of phone calls to defraud victims. In response to the increasing number of unwanted or fraudulent phone calls, a number of call blocking applications have appeared on smartphone app stores, including a recent update to the default Android phone app that alerts users of suspected spam calls. However, little is known about the methods used by these apps to identify malicious numbers, and how effective these methods are in practice.

In this paper, we are the first to systematically investigate multiple data sources that may be leveraged to automatically learn phone blacklists, and to explore the potential effectiveness of such blacklists by measuring their ability to block future unwanted phone calls. Specifically, we consider four different data sources: user-reported call complaints submitted to the Federal Trade Commission (FTC), complaints collected via crowd-sourced efforts, call detail records (CDR) from a large telephony honeypot, and honeypot-based phone call audio recordings. Overall, our results show that phone blacklists are capable of blocking a significant fraction of future unwanted calls (e.g., more than 55%), with a very low false positive rate of only 0.01% for phone numbers of legitimate businesses. We also propose an unsupervised learning method to identify prevalent spam campaigns from different data sources, and show how effective blacklists may be as a defense against such campaigns.
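The measurement methodology can be sketched in miniature: build a blacklist from complaints seen before a cutoff date, then score it against calls that arrive afterward. The data, the report threshold, and the cutoff below are invented for illustration and are not the paper's parameters.

```python
def build_blacklist(complaints, cutoff, min_reports=2):
    """Blacklist numbers reported at least min_reports times before the cutoff."""
    counts = {}
    for number, day in complaints:
        if day < cutoff:
            counts[number] = counts.get(number, 0) + 1
    return {number for number, c in counts.items() if c >= min_reports}

def block_rate(blacklist, future_unwanted_calls):
    """Fraction of later unwanted calls the learned blacklist would block."""
    blocked = sum(1 for number in future_unwanted_calls if number in blacklist)
    return blocked / len(future_unwanted_calls)

# (number, day-of-complaint) pairs -- toy data
complaints = [("555-0100", 1), ("555-0100", 2), ("555-0101", 3), ("555-0199", 9)]
bl = build_blacklist(complaints, cutoff=5)
assert bl == {"555-0100"}  # only one number crossed the report threshold in time
assert block_rate(bl, ["555-0100", "555-0100", "555-0101", "555-0102"]) == 0.5
```

A false positive rate would be computed the same way, by scoring the blacklist against known-legitimate business numbers instead of unwanted calls.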

Download the paper.


"Game of Missuggestions: Semantic Analysis of Search-Autocomplete Manipulations"

in collaboration with Indiana University
Peng Wang and Xianghang Mi (Indiana University); Xiaojing Liao* (College of William & Mary); XiaoFeng Wang, Kan Yuan, and Feng Qian (Indiana University); Raheem Beyah (Georgia Tech)

As a new type of blackhat Search Engine Optimization (SEO), autocomplete manipulations are increasingly utilized by miscreants and promotion companies alike to advertise desired suggestion terms when related trigger terms are entered by the user into a search engine. Like other illicit SEO, such activities game the search engine, mislead the querier, and in some cases spread harmful content. However, little has been done to understand this new threat, in terms of its scope, impact, and techniques, not to mention any serious effort to detect such manipulated terms on a large scale.

Systematic analysis of autocomplete manipulation is challenging, due to the scale of the problem (tens or even hundreds of millions of suggestion terms and their search results) and the heavy burden it puts on the search engines. In this paper, we report the first technique that addresses these challenges, making a step toward better understanding and ultimately eliminating this new threat. Our technique, called Sacabuche, takes a semantics-based, two-step approach to minimize its performance impact: it utilizes Natural Language Processing (NLP) to analyze a large number of trigger and suggestion combinations, without querying search engines, to filter out the vast majority of legitimate suggestion terms; only a small set of suspicious suggestions are run against the search engines to get query results for identifying truly abused terms. This approach achieves 96.23% precision and 95.63% recall, and its scalability enabled us to perform a measurement study on 114 million suggestion terms, an unprecedented scale for this type of study. The findings of the study bring to light the magnitude of the threat (0.48% of the Google suggestion terms we collected were manipulated) and its significant security implications, never reported before (e.g., the exceedingly long lifetime of campaigns, and sophisticated techniques and channels for spreading malware and phishing content).

*Co-author Xiaojing Liao was a Ph.D. student at Georgia Tech, advised by Raheem Beyah, while completing this project.
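The two-step funnel described above can be sketched as follows. The keyword heuristic and the search-engine callback are illustrative stand-ins for the paper's trained NLP models and live query step; the point is the funnel shape, where a cheap local pass prunes most terms so the expensive check runs on a small remainder.

```python
SUSPICIOUS_HINTS = {"download", "crack", "free", "keygen"}  # toy lexicon, not the paper's model

def cheap_nlp_filter(suggestions):
    """Step 1: flag only suggestions that look promotional -- no network needed."""
    return [s for s in suggestions if SUSPICIOUS_HINTS & set(s.lower().split())]

def detect_manipulated(suggestions, query_search_engine):
    """Step 2: run the costly search-result check only on the suspicious subset."""
    return [s for s in cheap_nlp_filter(suggestions) if query_search_engine(s)]

suggestions = ["weather today", "best pizza near me", "photoshop free crack download"]
# A stub callback stands in for querying a real search engine.
flagged = detect_manipulated(suggestions, query_search_engine=lambda s: True)
assert flagged == ["photoshop free crack download"]
```

Because step 1 never touches the network, the design keeps the load on search engines proportional to the small suspicious subset rather than the full suggestion corpus.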

Download the paper.

Last revised February 19, 2018