Wednesday, August 9
7:45 am–8:45 am
Continental Breakfast
8:45 am–9:15 am
Opening Remarks and Awards
9:15 am–9:30 am
Short Break
9:30 am–10:30 am
Breaking Wireless Protocols
PhyAuth: Physical-Layer Message Authentication for ZigBee Networks
Ang Li and Jiawei Li, Arizona State University; Dianqi Han, University of Texas at Arlington; Yan Zhang, The University of Akron; Tao Li, Indiana University–Purdue University Indianapolis; Ting Zhu, The Ohio State University; Yanchao Zhang, Arizona State University
ZigBee is a popular wireless communication standard for Internet of Things (IoT) networks. Since each ZigBee network uses hop-by-hop network-layer message authentication based on a common network key, it is highly vulnerable to packet-injection attacks, in which the adversary exploits the compromised network key to inject arbitrary fake packets from any spoofed address to disrupt network operations and consume network/device resources. In this paper, we present PhyAuth, a PHY hop-by-hop message authentication framework to defend against packet-injection attacks in ZigBee networks. The key idea of PhyAuth is to let each ZigBee transmitter embed into its PHY signals a PHY one-time password (called POTP) derived from a device-specific secret key and an efficient cryptographic hash function. An authentic POTP serves as the transmitter's PHY transmission permission for the corresponding packet. PhyAuth provides three schemes to embed, detect, and verify POTPs based on different features of ZigBee PHY signals. In addition, PhyAuth involves only lightweight PHY signal processing and requires no change to the ZigBee protocol stack. Comprehensive USRP experiments confirm that PhyAuth can efficiently detect fake packets with very low false-positive and false-negative rates while having a negligible negative impact on normal data transmissions.
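To make the POTP idea concrete, here is a minimal Python sketch deriving a short per-packet one-time password from a device-specific key via HMAC truncation. The construction, counter scheme, and bit length are illustrative assumptions, not PhyAuth's actual design, which embeds the POTP in ZigBee PHY signal features.

```python
import hmac
import hashlib

def potp(device_key: bytes, packet_counter: int, n_bits: int = 16) -> int:
    """Derive a short PHY one-time password from a device-specific key and a
    per-packet counter (all parameters are illustrative, not PhyAuth's)."""
    mac = hmac.new(device_key, packet_counter.to_bytes(8, "big"), hashlib.sha256)
    # Truncate the digest to n_bits so the value is cheap to embed in PHY signals.
    return int.from_bytes(mac.digest()[:4], "big") >> (32 - n_bits)

key = b"per-device-secret"
# Receiver side: recompute and compare before accepting the packet.
assert potp(key, 42) == potp(key, 42)       # authentic transmitter
assert potp(key, 42) != potp(b"wrong", 42)  # forgery rejected (with high probability)
```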
Time for Change: How Clocks Break UWB Secure Ranging
Claudio Anliker, Giovanni Camurati, and Srdjan Capkun, ETH Zurich
Formal Analysis and Patching of BLE-SC Pairing
Min Shi, Jing Chen, Kun He, Haoran Zhao, Meng Jia, and Ruiying Du, Wuhan University
Bluetooth Low Energy (BLE) is the mainstream Bluetooth standard and BLE Secure Connections (BLE-SC) pairing is a protocol that authenticates two Bluetooth devices and derives a shared secret key between them. Although BLE-SC pairing employs well-studied cryptographic primitives to guarantee its security, a recent study revealed a logic flaw in the protocol.
In this paper, we develop the first comprehensive formal model of the BLE-SC pairing protocol. Our model is compliant with the latest Bluetooth specification version 5.3 and covers all association models in the specification to discover attacks caused by the interplay between different association models. We also partly loosen the perfect cryptography assumption in traditional symbolic analysis approaches by designing a low-entropy key oracle to detect attacks caused by the poorly derived keys. Our analysis confirms two existing attacks and discloses a new attack. We propose a countermeasure to fix the flaws found in the BLE-SC pairing protocol and discuss the backward compatibility. Moreover, we extend our model to verify the countermeasure, and the results demonstrate its effectiveness in our extended model.
Framing Frames: Bypassing Wi-Fi Encryption by Manipulating Transmit Queues
Domien Schepers and Aanjhan Ranganathan, Northeastern University; Mathy Vanhoef, imec-DistriNet, KU Leuven
Wi-Fi devices routinely queue frames at various layers of the network stack before transmitting, for instance, when the receiver is in sleep mode. In this work, we investigate how Wi-Fi access points manage the security context of queued frames. By exploiting power-save features, we show how to trick access points into leaking frames in plaintext, or encrypted using the group or an all-zero key. We demonstrate resulting attacks against several open-source network stacks. We attribute our findings to the lack of explicit guidance on managing security contexts of buffered frames in the 802.11 standards. The unprotected nature of the power-save bit in a frame’s header, which our work reveals to be a fundamental design flaw, also allows an adversary to force the queueing of frames intended for a specific client, resulting in its disconnection and a trivial denial-of-service attack. Furthermore, we demonstrate how an attacker can override and control the security context of frames that are yet to be queued. This exploits a design flaw in hotspot-like networks and allows the attacker to force an access point to encrypt yet-to-be-queued frames using an adversary-chosen key, thereby bypassing Wi-Fi encryption entirely. Our attacks have a widespread impact as they affect various devices and operating systems (Linux, FreeBSD, iOS, and Android) and because they can be used to hijack TCP connections or intercept client and web traffic. Overall, we highlight the need for transparency in handling security context across the network stack layers and the challenges in doing so.
Interpersonal Abuse
Abuse Vectors: A Framework for Conceptualizing IoT-Enabled Interpersonal Abuse
Sophie Stephenson and Majed Almansoori, University of Wisconsin–Madison; Pardis Emami-Naeini, Duke University; Danny Yuxing Huang, New York University; Rahul Chatterjee, University of Wisconsin–Madison
Tech-enabled interpersonal abuse (IPA) is a pervasive problem. Abusers, often intimate partners, use tools such as spyware to surveil and harass victim-survivors. Unfortunately, anecdotal evidence suggests that smart, Internet-connected devices such as home thermostats, cameras, and Bluetooth item finders may similarly be used against victim-survivors of IPA. To tackle abuse involving smart devices, it is vital that we understand the ecosystem of smart devices that enable IPA. Thus, in this work, we conduct a large-scale qualitative analysis of the smart devices used in IPA. We systematically crawl Google Search results to uncover web pages discussing how abusers use smart devices to enact IPA. By analyzing these web pages, we identify 32 devices used for IPA and detail the varied strategies abusers use for spying and harassment via these devices. Then, we design a simple, yet powerful framework—abuse vectors—which conceptualizes IoT-enabled IPA as four overarching patterns: Covert Spying, Unauthorized Access, Repurposing, and Intended Use. Using this lens, we pinpoint the necessary solutions required to address each vector of IoT abuse and encourage the security community to take action.
The Digital-Safety Risks of Financial Technologies for Survivors of Intimate Partner Violence
Rosanna Bellini, Cornell University; Kevin Lee, Princeton University; Megan A. Brown, Center for Social Media and Politics, New York University; Jeremy Shaffer, Cornell University; Rasika Bhalerao, Northeastern University; Thomas Ristenpart, Cornell Tech
"It’s the Equivalent of Feeling Like You’re in Jail”: Lessons from Firsthand and Secondhand Accounts of IoT-Enabled Intimate Partner Abuse
Sophie Stephenson and Majed Almansoori, University of Wisconsin–Madison; Pardis Emami-Naeini, Duke University; Rahul Chatterjee, University of Wisconsin–Madison
Sneaky Spy Devices and Defective Detectors: The Ecosystem of Intimate Partner Surveillance with Covert Devices
Rose Ceccio and Sophie Stephenson, University of Wisconsin–Madison; Varun Chadha, Capital One; Danny Yuxing Huang, New York University; Rahul Chatterjee, University of Wisconsin–Madison
Inferring User Details
Towards a General Video-based Keystroke Inference Attack
Zhuolin Yang, Yuxin Chen, and Zain Sarwar, University of Chicago; Hadleigh Schwartz, Columbia University; Ben Y. Zhao and Haitao Zheng, University of Chicago
A large body of research literature has identified the privacy risks of keystroke inference attacks that use statistical models to extract content typed onto a keyboard. Yet existing attacks cannot operate in realistic settings and rely on strong assumptions of labeled training data, knowledge of keyboard layout, carefully placed sensors, or data from other side channels. This paper describes experiences developing and evaluating a general, video-based keystroke inference attack that operates in common public settings using a single commodity camera phone, with no pretraining, no keyboard knowledge, no local sensors, and no side channels. We show that using a self-supervised approach, noisy finger tracking data from a video can be processed, labeled, and filtered to train DNN keystroke inference models that operate accurately on the same video. Using IRB-approved user studies, we validate attack efficacy across a variety of environments, keyboards, and content, and users with different typing behaviors and abilities. Our project website is located at: https://sandlab.cs.uchicago.edu/keystroke/.
Going through the motions: AR/VR keylogging from user head motions
Carter Slocum, Yicheng Zhang, Nael Abu-Ghazaleh, and Jiasi Chen, University of California, Riverside
Augmented Reality/Virtual Reality (AR/VR) is the next step in the evolution of ubiquitous computing after personal computers and mobile devices. Applications of AR/VR continue to grow, including education and virtual workspaces, increasing opportunities for users to enter private text, such as passwords or sensitive corporate information. In this work, we show that there is a serious security risk of typed text in the foreground being inferred by a background application, without requiring any special permissions. The key insight is that a user’s head moves in subtle ways as she types on a virtual keyboard, and these motion signals are sufficient for inferring the text that a user types. We develop a system, TyPose, that extracts these signals and automatically infers words or characters that a victim is typing. Once the sensor signals are collected, TyPose uses machine learning to segment the motion signals in time to determine word/character boundaries, and also to perform inference on the words/characters themselves. Our experimental evaluation on commercial AR/VR headsets demonstrates the feasibility of this attack, both when multiple users’ data is used for training (82% top-5 word classification accuracy) and when the attack is personalized to a particular victim (92% top-5 word classification accuracy). We also show that first-line defenses of reducing the sampling rate or precision of head tracking are ineffective, suggesting that more sophisticated mitigations are needed.
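The word/character segmentation step can be illustrated with a toy energy-based splitter: typing bursts are separated by pauses in the head-motion magnitude. The threshold, sampling rate, and gap length below are illustrative assumptions, not TyPose's parameters.

```python
import numpy as np

def segment_words(motion: np.ndarray, rate_hz: float = 60.0,
                  thresh: float = 0.5, min_gap_s: float = 0.3):
    """Toy segmentation of a head-motion magnitude signal into typing bursts."""
    active = np.abs(motion) > thresh            # frames with noticeable motion
    min_gap = int(min_gap_s * rate_hz)          # pause length marking a boundary
    segments, start, quiet = [], None, 0
    for i, a in enumerate(active):
        if a:
            start, quiet = (i if start is None else start), 0
        elif start is not None:
            quiet += 1
            if quiet >= min_gap:                # long pause => word boundary
                segments.append((start, i - quiet + 1))
                start, quiet = None, 0
    if start is not None:
        segments.append((start, len(active)))
    return segments

# Synthetic example: two typing bursts separated by a 40-frame pause.
sig = np.concatenate([np.ones(30), np.zeros(40), np.ones(20)])
print(segment_words(sig))  # [(0, 30), (70, 90)]
```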
Auditory Eyesight: Demystifying µs-Precision Keystroke Tracking Attacks on Unconstrained Keyboard Inputs
Yazhou Tu, Liqun Shan, and Md Imran Hossen, University of Louisiana at Lafayette; Sara Rampazzi and Kevin Butler, University of Florida; Xiali Hei, University of Louisiana at Lafayette
Watch your Watch: Inferring Personality Traits from Wearable Activity Trackers
Noé Zufferey and Mathias Humbert, University of Lausanne, Switzerland; Romain Tavenard, University of Rennes, CNRS, LETG, France; Kévin Huguenin, University of Lausanne, Switzerland
Wearable devices, such as wearable activity trackers (WATs), are increasing in popularity. Although they can help to improve one's quality of life, they also raise serious privacy issues. One particularly sensitive type of information has recently attracted substantial attention, namely personality, as it provides a means to influence individuals (e.g., voters in the Cambridge Analytica scandal). This paper presents the first empirical study to show a significant correlation between WAT data and personality traits (Big Five). We conduct an experiment with 200+ participants. The ground truth was established by using the NEO-PI-3 questionnaire. The participants' step count, heart rate, battery level, activities, sleep time, etc. were collected for four months. Following a principled machine-learning approach, we quantified the participants' personality privacy. Our results demonstrate that WAT data brings valuable information for inferring the openness, extraversion, and neuroticism personality traits. We further study the importance of the different features (i.e., data types) and find that step counts play a key role in the inference of extraversion and neuroticism, while openness is more related to heart rate.
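As a toy illustration of the kind of analysis involved, the sketch below computes a Spearman correlation between synthetic step counts and a synthetic extraversion score. All data here is fabricated for illustration; the study's actual pipeline is a principled machine-learning approach over many feature types.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# Hypothetical per-participant aggregates; the study's real features span
# step counts, heart rate, battery level, activities, and sleep over months.
daily_steps = rng.normal(8000, 2500, size=200)
extraversion = 0.3 * (daily_steps - 8000) / 2500 + rng.normal(0, 1, size=200)

rho, p = spearmanr(daily_steps, extraversion)
print(f"Spearman rho={rho:.2f}, p={p:.3g}")  # nonzero on this synthetic data
```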
Adversarial ML beyond ML
Squint Hard Enough: Attacking Perceptual Hashing with Adversarial Machine Learning
Jonathan Prokos, Johns Hopkins University; Neil Fendley, Johns Hopkins University Applied Physics Laboratory; Matthew Green, Johns Hopkins University; Roei Schuster, Vector Institute; Eran Tromer, Tel Aviv University and Columbia University; Tushar Jois and Yinzhi Cao, Johns Hopkins University
Many online communications systems use perceptual hash matching systems to detect illicit files in user content. These systems employ specialized perceptual hash functions such as Microsoft's PhotoDNA or Facebook's PDQ to produce a compact digest of an image file that can be approximately compared to a database of known illicit-content digests. Recently, several proposals have suggested that hash-based matching systems be incorporated into client-side and end-to-end encrypted (E2EE) systems: in these designs, files that register as illicit content will be reported to the provider, while the remaining content will be sent confidentially. By using perceptual hashing to determine confidentiality guarantees, this new setting significantly changes the function of existing perceptual hashing — thus motivating the need to evaluate these functions from an adversarial perspective, using their perceptual capabilities against them. For example, an attacker may attempt to trigger a match on innocuous, but politically-charged, content in an attempt to stifle speech.
In this work, we develop threat models for perceptual hashing algorithms in an adversarial setting, and present attacks against the two most widely deployed algorithms: PhotoDNA and PDQ. Our results show that it is possible to efficiently generate targeted second-preimage attacks, in which an attacker creates a variant of some source image that matches some target digest. As a complement to this main result, we also further investigate the production of images that facilitate detection avoidance attacks, continuing a recent investigation of Jain et al. Our work shows that existing perceptual hash functions are likely insufficiently robust to survive attacks in this new setting.
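To convey the shape of a targeted second-preimage attack, here is a toy hill-climbing sketch against a simple average-hash stand-in. PhotoDNA and PDQ are different, more robust functions, and the paper's actual attacks are more sophisticated; this only illustrates the optimization framing.

```python
import numpy as np

def toy_phash(img: np.ndarray) -> np.ndarray:
    """Toy 64-bit average hash over 8x8 blocks (a stand-in only; PhotoDNA
    and PDQ are different constructions)."""
    small = img.reshape(8, img.shape[0] // 8, 8, img.shape[1] // 8).mean(axis=(1, 3))
    return (small > small.mean()).ravel()

def second_preimage(src, target_digest, budget=20000, eps=12.0, seed=0):
    """Random hill climbing: nudge src within an L-infinity ball of radius
    eps until its digest approaches target_digest. Illustrative only."""
    rng = np.random.default_rng(seed)
    adv = src.copy()
    best = np.count_nonzero(toy_phash(adv) != target_digest)
    for _ in range(budget):
        cand = np.clip(adv + rng.normal(0, 2, src.shape), src - eps, src + eps)
        cand = np.clip(cand, 0, 255)
        d = np.count_nonzero(toy_phash(cand) != target_digest)
        if d <= best:
            adv, best = cand, d
        if best == 0:
            break
    return adv, best

rng = np.random.default_rng(1)
src = rng.integers(0, 256, (64, 64)).astype(float)
tgt = toy_phash(rng.integers(0, 256, (64, 64)).astype(float))
_, remaining = second_preimage(src, tgt)
print("digest bits still differing:", remaining)  # may not reach 0 within budget
```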
How to Cover up Anomalous Accesses to Electronic Health Records
Xiaojun Xu, Qingying Hao, Zhuolin Yang, and Bo Li, University of Illinois at Urbana-Champaign; David Liebovitz, Northwestern University; Gang Wang and Carl A. Gunter, University of Illinois at Urbana-Champaign
Illegitimate access detection systems in hospital logs perform post hoc detection instead of runtime access restriction to allow widespread access in emergencies. We study the effectiveness of adversarial machine learning strategies against such detection systems on a large-scale dataset consisting of a year of access logs at a major hospital. We study a range of graph-based anomaly detection systems, including heuristic-based and Graph Neural Network (GNN)-based models. We find that evasion attacks, in which covering accesses (that is, accesses made to disguise a target access) are injected during the evaluation period of the target access, can successfully fool the detection system. We also show that such evasion attacks can transfer among different detection algorithms. On the other hand, we find that poisoning attacks, in which adversaries inject covering accesses during the training phase of the model, do not effectively mislead the trained detection system unless the attacker is given unrealistic capabilities, such as injecting over 10,000 accesses or imposing a high weight on the covering accesses in the training algorithm. To examine the generalizability of the results, we also apply our attack against a state-of-the-art detection model on the LANL network lateral movement dataset and observe similar conclusions.
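A toy illustration of covering accesses: with a simple shared-context anomaly score over an access log (purely illustrative, unlike the heuristic and GNN detectors studied in the paper), injected accesses make a target access look routine.

```python
# Bipartite access log of (staff, patient_record) pairs.
log = {("alice", "r1"), ("alice", "r2"), ("bob", "r1"), ("bob", "r2")}

def records_of(log, user):
    return {r for (u, r) in log if u == user}

def anomaly(log, user, record):
    """Toy score: an access looks suspicious when the user shares little
    context with the record's other accessors."""
    overlap = sum(len(records_of(log, v) & records_of(log, user))
                  for (v, r) in log if r == record and v != user)
    return 1.0 / (1 + overlap)

target = ("mallory", "r9")
log.add(target)
print(anomaly(log, *target))   # 1.0: no shared context, highly anomalous

# Covering accesses injected to disguise the target access:
log |= {("mallory", "r1"), ("alice", "r9")}
print(anomaly(log, *target))   # ~0.33: the target access now looks routine
```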
KENKU: Towards Efficient and Stealthy Black-box Adversarial Attacks against ASR Systems
Xinghui Wu, Xi'an Jiaotong University; Shiqing Ma, University of Massachusetts Amherst; Chao Shen and Chenhao Lin, Xi'an Jiaotong University; Qian Wang, Wuhan University; Qi Li, Tsinghua University; Yuan Rao, Xi'an Jiaotong University
Tubes Among Us: Analog Attack on Automatic Speaker Identification
Shimaa Ahmed and Yash Wani, University of Wisconsin-Madison; Ali Shahin Shamsabadi, Alan Turing Institute; Mohammad Yaghini, University of Toronto and Vector Institute; Ilia Shumailov, Vector Institute and University of Oxford; Nicolas Papernot, University of Toronto and Vector Institute; Kassem Fawaz, University of Wisconsin-Madison
Recent years have seen a surge in the popularity of acoustics-enabled personal devices powered by machine learning. Yet, machine learning has proven to be vulnerable to adversarial examples. A large number of modern systems protect themselves against such attacks by targeting artificiality, i.e., they deploy mechanisms to detect the lack of human involvement in generating the adversarial examples. However, these defenses implicitly assume that humans are incapable of producing meaningful and targeted adversarial examples. In this paper, we show that this base assumption is wrong. In particular, we demonstrate that for tasks like speaker identification, a human is capable of producing analog adversarial examples directly, with little cost and supervision: by simply speaking through a tube, an adversary reliably impersonates other speakers in the eyes of ML models for speaker identification. Our findings extend to a range of other acoustic-biometric tasks such as liveness detection, bringing into question their use in security-critical settings in real life, such as phone banking.
Private Set Operations
Efficient Unbalanced Private Set Intersection Cardinality and User-friendly Privacy-preserving Contact Tracing
Mingli Wu and Tsz Hon Yuen, The University of Hong Kong
Near-Optimal Oblivious Key-Value Stores for Efficient PSI, PSU and Volume-Hiding Multi-Maps
Alexander Bienstock, New York University; Sarvar Patel and Joon Young Seo, Google; Kevin Yeo, Google and Columbia University
Distance-Aware Private Set Intersection
Anrin Chakraborti, Duke University; Giulia Fanti, Carnegie Mellon University; Michael K. Reiter, Duke University
Private set intersection (PSI) allows two mutually untrusting parties to compute an intersection of their sets without revealing information about items that are not in the intersection. This work introduces a PSI variant called distance-aware PSI (DA-PSI) for sets whose elements lie in a metric space. DA-PSI returns pairs of items that are within a specified distance threshold of each other. This paper puts forward DA-PSI constructions for two metric spaces: (i) Minkowski distance of order 1 over the set of integers (i.e., for integers a and b, their distance is |a−b|); and (ii) Hamming distance over the set of binary strings of length ℓ. In the Minkowski DA-PSI protocol, the communication complexity scales logarithmically in the distance threshold and linearly in the set size. In the Hamming DA-PSI protocol, the communication volume scales quadratically in the distance threshold and is independent of the string length ℓ. Experimental results with real applications confirm that DA-PSI provides more effective matching at lower cost than naïve solutions.
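For clarity, the plaintext functionality that a Minkowski-1 DA-PSI protocol computes obliviously can be stated in a few lines (this reference version offers no privacy, of course):

```python
def da_psi_plaintext(A, B, t):
    """Reference (non-private) functionality of Minkowski-1 DA-PSI:
    return all pairs (a, b) with |a - b| <= t. The actual protocol computes
    this obliviously, with communication logarithmic in t."""
    return [(a, b) for a in sorted(A) for b in sorted(B) if abs(a - b) <= t]

print(da_psi_plaintext({3, 10, 42}, {1, 11, 100}, t=2))  # [(3, 1), (10, 11)]
```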
Linear Private Set Union from Multi-Query Reverse Private Membership Test
Cong Zhang, State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093, China; School of Cyber Security, University of Chinese Academy of Sciences, Beijing 100049, China; Yu Chen, School of Cyber Science and Technology, Shandong University, Qingdao 266237, China; State Key Laboratory of Cryptology, P.O. Box 5159, Beijing 100878, China; Key Laboratory of Cryptologic Technology and Information Security, Ministry of Education, Shandong University, Qingdao 266237, China; Weiran Liu, Alibaba Group; Min Zhang, School of Cyber Science and Technology, Shandong University, Qingdao 266237, China; State Key Laboratory of Cryptology, P.O. Box 5159, Beijing 100878, China; Key Laboratory of Cryptologic Technology and Information Security, Ministry of Education, Shandong University, Qingdao 266237, China; Dongdai Lin, State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093, China; School of Cyber Security, University of Chinese Academy of Sciences, Beijing 100049, China
A private set union (PSU) protocol enables two parties, each holding a set, to compute the union of their sets without revealing anything else to either party. So far, there are two known approaches for constructing PSU protocols. The first mainly depends on additively homomorphic encryption (AHE), which is generally inefficient since it needs to perform a non-constant number of homomorphic computations on each item. The second is mainly based on oblivious transfer and symmetric-key operations and was recently proposed by Kolesnikov et al. (ASIACRYPT 2019). It features good practical performance and is several orders of magnitude faster than the first approach. However, neither approach is optimal in the sense that their computation and communication complexity are not both O(n), where n is the size of the set. Therefore, the problem of constructing an optimal PSU protocol remains open.
In this work, we resolve this open problem by proposing a generic framework of PSU from oblivious transfer and a newly introduced protocol called multi-query reverse private membership test (mq-RPMT). We present two generic constructions of mq-RPMT. The first is based on symmetric-key encryption and general 2PC techniques. The second is based on re-randomizable public-key encryption. Both constructions lead to PSU with linear computation and communication complexity.
We implement our two PSU protocols and compare them with the state-of-the-art PSU. Experiments show that our PKE-based protocol has the lowest communication of all schemes: 3.7–14.8× lower depending on the set size. The running time of our PSU scheme is 1.2–12× faster than the state of the art, depending on the network environment.
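The composition can be sketched with ideal (plaintext) functionalities: mq-RPMT gives the receiver one membership bit per sender item, and an oblivious transfer then delivers exactly the non-members. The Python below is a stand-in for the ideal functionalities, not the cryptographic protocol.

```python
def mq_rpmt(sender_items, receiver_set):
    """Ideal multi-query reverse private membership test: for each sender
    item, the receiver learns only a membership bit (not the item itself);
    the sender learns nothing. Plaintext stand-in for the real protocol."""
    return [x in receiver_set for x in sender_items]

def psu(sender_items, receiver_set):
    """PSU from mq-RPMT plus an (idealized) oblivious transfer: the receiver
    fetches exactly the sender items flagged as 'not in my set'."""
    bits = mq_rpmt(sender_items, receiver_set)
    # In the real protocol, an OT hides which items are transferred.
    return receiver_set | {x for x, member in zip(sender_items, bits) if not member}

print(psu(["a", "b", "c"], {"b", "d"}))  # {'a', 'b', 'c', 'd'} (set order may vary)
```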
Logs and Auditing
Auditing Frameworks Need Resource Isolation: A Systematic Study on the Super Producer Threat to System Auditing and Its Mitigation
Peng Jiang, Ruizhe Huang, Ding Li, Yao Guo, and Xiangqun Chen, MOE Key Lab of HCST, School of Computer Science, Peking University; Jianhai Luan, Yuxin Ren, and Xinwei Hu, Huawei Technologies
AIRTAG: Towards Automated Attack Investigation by Unsupervised Learning with Log Texts
Hailun Ding and Juan Zhai, Rutgers University; Yuhong Nan, Sun Yat-sen University; Shiqing Ma, Rutgers University
Rethinking System Audit Architectures for High Event Coverage and Synchronous Log Availability
Varun Gandhi, Harvard University; Sarbartha Banerjee, University of Texas at Austin; Aniket Agrawal and Adil Ahmad, Arizona State University; Sangho Lee and Marcus Peinado, Microsoft Research
Improving Logging to Reduce Permission Over-Granting Mistakes
Bingyu Shen, Tianyi Shan, and Yuanyuan Zhou, University of California, San Diego
Access control configurations are gatekeepers to block unwelcome access to sensitive data. Unfortunately, system administrators (sysadmins) sometimes over-grant permissions when resolving unintended access-deny issues reported by legitimate users, which may open up security vulnerabilities for attackers. One of the primary reasons is that modern software does not provide informative logging to guide sysadmins to understand the reported problems.
This paper makes one of the first attempts (to the best of our knowledge) to help developers improve log messages in order to help sysadmins correctly understand and fix access-deny issues without over-granting permissions. First, we conducted an observation study to understand the current practices of access-deny logging in server software. Our study shows that many access-control program locations do not have any log messages, and a large percentage of existing log messages lack useful information to guide sysadmins to correctly understand and fix the issues. Building on our observations, we built SECLOG, which uses static analysis to automatically help developers find missing access-deny log locations and identify relevant information to include at each log location.
We evaluated SECLOG with ten widely deployed server applications. Overall, SECLOG identified 380 new log statements for access-deny cases and enhanced 550 existing access-deny log messages with diagnostic information. We have reported 114 log statements to the developers of these applications, and so far 70 have been accepted into their main branches. We also conducted a user study with sysadmins (n=32) on six real-world access-deny issues. SECLOG can reduce the number of insecure fixes from 27 to 1 and reduce diagnosis time by 64.2% on average.
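The flavor of the check is easy to sketch. SECLOG analyzes C server code; the toy below applies the same idea to Python with the standard ast module, flagging access-deny branches that return an error without logging. The heuristic and names are illustrative, not SECLOG's actual analysis.

```python
import ast

SRC = '''
def open_file(user, path):
    if not user.can_read(path):
        return -13  # deny, but nothing is logged
    return do_open(path)
'''

class DenyWithoutLog(ast.NodeVisitor):
    """Flag 'if' branches that return an error but contain no logging call."""
    def visit_If(self, node):
        body_src = ast.unparse(ast.Module(body=node.body, type_ignores=[]))
        returns_error = any(isinstance(n, ast.Return) for n in ast.walk(node))
        logs = "log" in body_src or "print" in body_src
        if returns_error and not logs:
            print(f"line {node.lineno}: access-deny path without a log message")
        self.generic_visit(node)

DenyWithoutLog().visit(ast.parse(SRC))
```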
10:30 am–11:00 am
Break with Refreshments
11:00 am–12:00 pm
Fighting the Robots
Diving into Robocall Content with SnorCall
Sathvik Prasad, Trevor Dunlap, Alexander Ross, and Bradley Reaves, North Carolina State University
Unsolicited bulk telephone calls — termed "robocalls" — nearly outnumber legitimate calls, overwhelming telephone users. While the vast majority of these calls are illegal, they are also ephemeral. Although telephone service providers, regulators, and researchers have ready access to call metadata, they do not have tools to investigate call content at the vast scale required. This paper presents SnorCall, a framework that scalably and efficiently extracts content from robocalls. SnorCall leverages the Snorkel framework that allows a domain expert to write simple labeling functions to classify text with high accuracy. We apply SnorCall to a corpus of transcripts covering 232,723 robocalls collected over a 23-month period. Among many other findings, SnorCall enables us to obtain first estimates on how prevalent different scam and legitimate robocall topics are, determine which organizations are referenced in these calls, estimate the average amounts solicited in scam calls, identify shared infrastructure between campaigns, and monitor the rise and fall of election-related political calls. As a result, we demonstrate how regulators, carriers, anti-robocall product vendors, and researchers can use SnorCall to obtain powerful and accurate analyses of robocall content and trends that can lead to better defenses.
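Snorkel labeling functions are short heuristics that vote on labels and abstain otherwise; a denoising label model then combines the votes. A minimal example of the style, assuming the snorkel package is installed (the functions and labels here are invented, not SnorCall's):

```python
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier

ABSTAIN, BENIGN, SCAM = -1, 0, 1

@labeling_function()
def lf_gift_card(x):
    # Requests for gift-card payment are a strong scam signal.
    return SCAM if "gift card" in x.transcript.lower() else ABSTAIN

@labeling_function()
def lf_pharmacy_refill(x):
    return BENIGN if "prescription is ready" in x.transcript.lower() else ABSTAIN

df = pd.DataFrame({"transcript": [
    "Your prescription is ready for pickup at the pharmacy.",
    "You owe back taxes. Pay immediately with a gift card.",
]})
L = PandasLFApplier([lf_gift_card, lf_pharmacy_refill]).apply(df)
print(L)  # label matrix; a label model would then denoise and combine the votes
```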
UCBlocker: Unwanted Call Blocking Using Anonymous Authentication
Changlai Du and Hexuan Yu, Virginia Tech; Yang Xiao, University of Kentucky; Y. Thomas Hou, Virginia Tech; Angelos D. Keromytis, Georgia Institute of Technology; Wenjing Lou, Virginia Tech
Combating Robocalls with Phone Virtual Assistant Mediated Interaction
Sharbani Pandit, Georgia Institute of Technology; Krishanu Sarker, Georgia State University; Roberto Perdisci, University of Georgia and Georgia Institute of Technology; Mustaque Ahamad and Diyi Yang, Georgia Institute of Technology
Mass robocalls affect millions of people on a daily basis. Unfortunately, most current defenses against robocalls rely on phone blocklists and are ineffective against caller ID spoofing. To enable detection and blocking of spoofed robocalls, we propose an NLP-based smartphone virtual assistant that automatically vets incoming calls. Similar to a human assistant, the virtual assistant picks up an incoming call and uses machine learning models to interact with the caller to determine whether the call source is a human or a robocaller. It interrupts the user by ringing the phone only when the call is determined not to be from a robocaller. Our security analysis shows that such a system can stop current robocallers as well as more sophisticated ones that might emerge in the future. We also conduct a user study showing that the virtual assistant preserves the phone call user experience.
BotScreen: Trust Everybody, but Cut the Aimbots Yourself
Minyeop Choi, KAIST; Gihyuk Ko, Carnegie Mellon University; Sang Kil Cha, KAIST
Perspectives and Incentives
"If I could do this, I feel anyone could:" The Design and Evaluation of a Secondary Authentication Factor Manager
Garrett Smith, Tarun Kumar Yadav, and Jonathan Dutson, Brigham Young University; Scott Ruoti, The University of Tennessee; Kent Seamons, Brigham Young University
Exploring Privacy and Incentives Considerations in Adoption of COVID-19 Contact Tracing Apps
Oshrat Ayalon, Max Planck Institute for Software Systems; Dana Turjeman, Reichman University; Elissa M. Redmiles, Max Planck Institute for Software Systems
Exploring Tenants’ Preferences of Privacy Negotiation in Airbnb
Zixin Wang, Zhejiang University; Danny Yuxing Huang, New York University; Yaxing Yao, University of Maryland, Baltimore County
Know Your Cybercriminal: Evaluating Attacker Preferences by Measuring Profile Sales on an Active, Leading Criminal Market for User Impersonation at Scale
Michele Campobasso and Luca Allodi, Eindhoven University of Technology
In this paper, we exploit market features of a leading Russian cybercrime market for user impersonation at scale to evaluate attacker preferences when purchasing stolen user profiles, as well as the overall economic activity of the market. We run our data collection over a period of 161 days and collect data on a sample of 1,193 sold user profiles, out of 11,357 products advertised in that period, and their characteristics. We estimate a market trade volume of up to approximately 700 profiles per day, corresponding to estimated daily sales of up to 4,000 USD and an overall market revenue within the observation period between 540k and 715k USD. We find profile provision to be rather stable over time and mainly focused on European profiles, whereas actual profile acquisition varies significantly depending on other profile characteristics. Attackers' interests focus disproportionately on profiles of certain types, including those originating in North America and featuring crypto resources. We model and evaluate the relative importance of different profile characteristics in the final decision of an attacker to purchase a profile, and discuss implications for defenses and risk evaluation.
Traffic Analysis
HorusEye: A Realtime IoT Malicious Traffic Detection Framework using Programmable Switches
Yutao Dong, Tsinghua Shenzhen International Graduate School, Shenzhen, China; Peng Cheng Laboratory, Shenzhen, China; Qing Li, Peng Cheng Laboratory, Shenzhen, China; Kaidong Wu and Ruoyu Li, Tsinghua Shenzhen International Graduate School, Shenzhen, China; Peng Cheng Laboratory, Shenzhen, China; Dan Zhao, Peng Cheng Laboratory, Shenzhen, China; Gareth Tyson, Hong Kong University of Science and Technology (GZ), Guangzhou, China; Junkun Peng, Yong Jiang, and Shutao Xia, Tsinghua Shenzhen International Graduate School, Shenzhen, China; Peng Cheng Laboratory, Shenzhen, China; Mingwei Xu, Tsinghua University, Beijing, China
An Input-Agnostic Hierarchical Deep Learning Framework for Traffic Fingerprinting
Jian Qu, Xiaobo Ma, and Jianfeng Li, Xi’an Jiaotong University; Xiapu Luo, The Hong Kong Polytechnic University; Lei Xue, Sun Yat-sen University; Junjie Zhang, Wright State University; Zhenhua Li, Tsinghua University; Li Feng, Southwest Jiaotong University; Xiaohong Guan, Xi'an Jiaotong University
Deep learning has proven to be promising for traffic fingerprinting that explores features of packet timing and sizes. Although well-known for automatic feature extraction, it faces a gap between the heterogeneousness of the traffic (i.e., raw packet timing and sizes) and the homogeneousness of the required input. To address this gap, we design an input-agnostic hierarchical deep learning framework for traffic fingerprinting that can hierarchically abstract comprehensive heterogeneous traffic features into homogeneous vectors seamlessly digestible by existing neural networks for further classification. The extensive evaluation demonstrates that our framework, with just one paradigm, not only supports heterogeneous traffic input but also achieves better or comparable performance compared to state-of-the-art methods across a wide range of traffic fingerprinting tasks.
Subverting Website Fingerprinting Defenses with Robust Traffic Representation
Meng Shen, School of Cyberspace Science and Technology, Beijing Institute of Technology; Kexin Ji and Zhenbo Gao, School of Computer Science, Beijing Institute of Technology; Qi Li, Institute for Network Sciences and Cyberspace, Tsinghua University; Liehuang Zhu, School of Cyberspace Science and Technology, Beijing Institute of Technology; Ke Xu, Department of Computer Science and Technology, Tsinghua University
Anonymity networks, e.g., Tor, are vulnerable to various website fingerprinting (WF) attacks, which allow attackers to compromise user privacy on these networks. However, recently developed defenses can effectively interfere with WF attacks, e.g., by simply injecting dummy packets. In this paper, we propose a novel WF attack called Robust Fingerprinting (RF), which enables an attacker to fingerprint Tor traffic under various defenses. Specifically, we develop a robust traffic representation method that generates a Traffic Aggregation Matrix (TAM) to fully capture key informative features leaked from Tor traces. By utilizing TAM, an attacker can train a CNN-based classifier that learns common high-level traffic features uncovered by different defenses. We conduct extensive experiments with public real-world datasets to compare RF with state-of-the-art (SOTA) WF attacks. The closed- and open-world evaluation results demonstrate that RF significantly outperforms the SOTA attacks. In particular, RF can effectively fingerprint Tor traffic under the SOTA defenses with an average accuracy improvement of 8.9% over the best existing attack (i.e., Tik-Tok).
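The idea of a traffic aggregation matrix can be sketched in a few lines of numpy: count outgoing and incoming packets per fixed time slot, so injected dummy packets perturb counts rather than erase the aggregate shape of a page load. The slot size and width below are illustrative assumptions, not RF's parameters.

```python
import numpy as np

def traffic_aggregation_matrix(times, directions, slot=0.05, width=200):
    """Toy 2 x width matrix: row 0 counts outgoing packets per time slot,
    row 1 counts incoming ones (d: +1 outgoing, -1 incoming)."""
    tam = np.zeros((2, width))
    for t, d in zip(times, directions):
        col = min(int(t / slot), width - 1)
        tam[0 if d > 0 else 1, col] += 1
    return tam

times = [0.01, 0.02, 0.02, 0.30, 0.31]
dirs  = [+1, -1, -1, -1, +1]
print(traffic_aggregation_matrix(times, dirs)[:, :8])
```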
Rosetta: Enabling Robust TLS Encrypted Traffic Classification in Diverse Network Environments with TCP-Aware Traffic Augmentation
Renjie Xie and Jiahao Cao, Tsinghua University; Enhuan Dong and Mingwei Xu, Tsinghua University and Quan Cheng Laboratory; Kun Sun, George Mason University; Qi Li and Licheng Shen, Tsinghua University; Menghao Zhang, Tsinghua University and Kuaishou Technology
As the majority of Internet traffic is encrypted by the Transport Layer Security (TLS) protocol, recent advances leverage Deep Learning (DL) models to conduct encrypted traffic classification by automatically extracting complicated and informative features from the packet length sequences of TLS flows. Though existing DL models have been reported to achieve excellent classification results on encrypted traffic, we conduct a comprehensive study showing that they all suffer significant performance degradation in real, diverse network environments. After systematically studying the reasons, we discover that the packet length sequences of flows may change dramatically due to various TCP mechanisms for reliable transmission in varying network environments. Thereafter, we propose Rosetta to enable robust TLS encrypted traffic classification for existing DL models. It leverages TCP-aware traffic augmentation mechanisms and self-supervised learning to understand implicit TCP semantics, and hence extracts robust features of TLS flows. Extensive experiments show that Rosetta can significantly improve the classification performance of existing DL models on TLS traffic in diverse network environments.
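A minimal sketch of what TCP-aware augmentation can mean: perturb a flow's packet-length sequence in ways TCP itself could produce, such as re-segmentation and retransmission, so a classifier learns features invariant to them. The operations and probabilities below are illustrative, not Rosetta's actual mechanisms.

```python
import random

def augment(lengths, mss=1460, p_retx=0.1, seed=0):
    """Toy TCP-aware augmentations of a packet-length sequence: re-segment
    lengths above the MSS and duplicate some packets as 'retransmissions'."""
    rng = random.Random(seed)
    out = []
    for n in lengths:
        while n > mss:                 # segmentation
            out.append(mss)
            n -= mss
        out.append(n)
        if rng.random() < p_retx:      # spurious retransmission
            out.append(out[-1])
    return out

flow = [2920, 400, 1500]
print(augment(flow))  # same flow, different on-the-wire length sequence
```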
Adversarial Patches and Images
Towards Targeted Obfuscation of Adversarial Unsafe Images using Reconstruction and Counterfactual Super Region Attribution Explainability
Mazal Bethany, Andrew Seong, Samuel Henrique Silva, Nicole Beebe, Nishant Vishwamitra, and Peyman Najafirad, University of Texas at San Antonio
TPatch: A Triggered Physical Adversarial Patch
Wenjun Zhu and Xiaoyu Ji, USSLAB, Zhejiang University; Yushi Cheng, BNRist, Tsinghua University; Shibo Zhang and Wenyuan Xu, USSLAB, Zhejiang University
Autonomous vehicles increasingly utilize vision-based perception modules to acquire information about driving environments and detect obstacles. Correct detection and classification are important to ensure safe driving decisions. Existing works have demonstrated the feasibility of fooling perception models such as object detectors and image classifiers with printed adversarial patches. However, most of them are indiscriminately offensive to every passing autonomous vehicle. In this paper, we propose TPatch, a physical adversarial patch triggered by acoustic signals. Unlike other adversarial patches, TPatch remains benign under normal circumstances but can be triggered to launch a hiding, creating, or altering attack via a designed distortion introduced by signal injection attacks on cameras. To avoid the suspicion of human drivers and make the attack practical and robust in the real world, we propose a content-based camouflage method and an attack robustness enhancement method to strengthen it. Evaluations with three object detectors, YOLO V3/V5 and Faster R-CNN, and eight image classifiers demonstrate the effectiveness of TPatch in both the simulation and the real world. We also discuss possible defenses at the sensor, algorithm, and system levels.
CAPatch: Physical Adversarial Patch against Image Captioning Systems
Shibo Zhang, USSLAB, Zhejiang University; Yushi Cheng, BNRist, Tsinghua University; Wenjun Zhu, Xiaoyu Ji, and Wenyuan Xu, USSLAB, Zhejiang University
The fast-growing surveillance systems will make image captioning, i.e., automatically generating text descriptions of images, an essential technique for processing the huge volumes of videos efficiently, and correct captioning is essential to ensure text authenticity. While prior work has demonstrated the feasibility of fooling computer vision models with adversarial patches, it is unclear whether the vulnerability can lead to incorrect captioning, which involves natural language processing after image feature extraction. In this paper, we design CAPatch, a physical adversarial patch that can result in mistakes in the final captions, i.e., either create a completely different sentence or a sentence with keywords missing, against multi-modal image captioning systems. To make CAPatch effective and practical in the physical world, we propose a detection assurance and attention enhancement method to increase the impact of CAPatch and a robustness improvement method to address the patch distortions caused by image printing and capturing. Evaluations on three commonly-used image captioning systems (Show-and-Tell, Self-critical Sequence Training: Att2in, and Bottom-up Top-down) demonstrate the effectiveness of CAPatch in both the digital and physical worlds, whereby volunteers wear printed patches in various scenarios, clothes, and lighting conditions. With a size of 5% of the image, the physically-printed CAPatch achieves continuous attacks with a success rate higher than 73.1% over a video recorder.
Hard-label Black-box Universal Adversarial Patch Attack
Guanhong Tao, Shengwei An, Siyuan Cheng, Guangyu Shen, and Xiangyu Zhang, Purdue University
Decentralized Finance
Anatomy of a High-Profile Data Breach: Dissecting the Aftermath of a Crypto-Wallet Case
Svetlana Abramova and Rainer Böhme, Universität Innsbruck
Glimpse: On-Demand PoW Light Client with Constant-Size Storage for DeFi
Giulia Scaffino, Lukas Aumayr, Zeta Avarikioti, and Matteo Maffei, TU Wien
Mixed Signals: Analyzing Ground-Truth Data on the Users and Economics of a Bitcoin Mixing Service
Fieke Miedema, Kelvin Lubbertsen, Verena Schrama, and Rolf van Wegberg, Delft University of Technology
Is Your Wallet Snitching On You? An Analysis on the Privacy Implications of Web3
Christof Ferreira Torres, Fiona Willi, and Shweta Shinde, ETH Zurich
Memory
Capstone: A Capability-based Foundation for Trustless Secure Memory Access
Jason Zhijingcheng Yu, National University of Singapore; Conrad Watt, University of Cambridge; Aditya Badole, Trevor E. Carlson, and Prateek Saxena, National University of Singapore
Capability-based memory isolation is a promising new architectural primitive. Software can access low-level memory only via capability handles rather than raw pointers, which provides a natural interface to enforce security restrictions. Existing architectural capability designs such as CHERI provide spatial safety, but fail to extend to other memory models that security-sensitive software designs may desire. In this paper, we propose Capstone, a more expressive architectural capability design that supports multiple existing memory isolation models in a trustless setup, i.e., without relying on trusted software components. We show how Capstone is well-suited for environments where privilege boundaries are fluid (dynamically extensible), where memory sharing/delegation is desired both temporally and spatially, and where such needs are to be balanced with availability concerns. Capstone can also be implemented efficiently. We present an implementation sketch and, through evaluation, show that its overhead is below 50% in common use cases. We also prototype a functional emulator for Capstone and use it to demonstrate runnable implementations of six real-world memory models without trusted software components: three types of enclave-based TEEs, a thread scheduler, a memory allocator, and Rust-style memory safety—all within the interface of Capstone.
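As a conceptual illustration of capability-mediated access (not Capstone's actual ISA semantics), the toy below replaces raw pointers with bounds-checked handles that can only shrink when delegated; Capstone's real design adds revocation, linear capabilities, and more.

```python
from dataclasses import dataclass

MEMORY = bytearray(1024)  # stand-in for physical memory

@dataclass(frozen=True)
class Capability:
    """Toy capability: memory is touched only through (base, length)
    handles, so every access is bounds-checked."""
    base: int
    length: int

    def load(self, offset: int) -> int:
        if not 0 <= offset < self.length:
            raise PermissionError("capability bounds violated")
        return MEMORY[self.base + offset]

    def shrink(self, off: int, length: int) -> "Capability":
        # Delegation: derive a strictly smaller capability for a callee.
        if off < 0 or off + length > self.length:
            raise PermissionError("cannot grow a capability")
        return Capability(self.base + off, length)

heap = Capability(0, 256)
view = heap.shrink(16, 4)   # hand a callee exactly four bytes
view.load(3)                # ok
try:
    view.load(4)            # out of bounds
except PermissionError as e:
    print(e)
```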
FloatZone: Accelerating Memory Error Detection using the Floating Point Unit
Floris Gorter, Enrico Barberis, Raphael Isemann, Erik van der Kouwe, Cristiano Giuffrida, and Herbert Bos, Vrije Universiteit Amsterdam
PUMM: Preventing Use-After-Free Using Execution Unit Partitioning
Carter Yagemann, The Ohio State University; Simon P. Chung, Brendan Saltaformaggio, and Wenke Lee, Georgia Institute of Technology
Critical software is written in memory-unsafe languages that are vulnerable to use-after-free and double-free bugs. This has led to proposals to secure memory allocators by strategically deferring memory reallocations long enough to make such bugs unexploitable. Unfortunately, existing solutions suffer from high runtime and memory overheads. Seeking a better solution, we propose to profile programs to identify units of code that correspond to the handling of individual tasks. With the intuition that little to no data should flow between separate tasks at runtime, reallocation of memory freed by the currently executing unit is deferred until after its completion; just long enough to prevent use-after-free exploitation.
To demonstrate the efficacy of our design, we implement a prototype for Linux, PUMM, which consists of an offline profiler and an online enforcer that transparently wraps standard libraries to protect C/C++ binaries. In our evaluation of 40 real-world and 3,000 synthetic vulnerabilities across 26 programs, including complex multi-threaded cases like the Chakra JavaScript engine, PUMM successfully thwarts all real-world exploits, and only allows 4 synthetic exploits, while reducing memory overhead by 52.0% over prior work and incurring an average runtime overhead of 2.04%.
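The allocation policy is easy to sketch: frees within the current execution unit go to a quarantine that is only recycled at the unit boundary. The toy below is a conceptual sketch of that policy, not PUMM's actual C/C++ library wrapper.

```python
class UnitAwareAllocator:
    """Toy PUMM-style policy: memory freed during the current execution
    unit is quarantined (not reusable) until the unit completes, so a
    dangling pointer within the unit cannot alias a new allocation."""
    def __init__(self):
        self.free_list, self.quarantine, self.next_addr = [], [], 0

    def malloc(self):
        return self.free_list.pop() if self.free_list else self._fresh()

    def _fresh(self):
        self.next_addr += 1
        return self.next_addr

    def free(self, addr):
        self.quarantine.append(addr)      # defer, don't reuse yet

    def end_unit(self):
        # Unit boundary (identified offline by the profiler):
        # quarantined chunks become reusable.
        self.free_list.extend(self.quarantine)
        self.quarantine.clear()

alloc = UnitAwareAllocator()
a = alloc.malloc(); alloc.free(a)
assert alloc.malloc() != a                # no reuse within the unit
alloc.end_unit()
assert alloc.malloc() == a                # reuse allowed across units
```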
MTSan: A Feasible and Practical Memory Sanitizer for Fuzzing COTS Binaries
Xingman Chen, Tsinghua University; Yinghao Shi, Institute of Information Engineering, Chinese Academy of Sciences; Zheyu Jiang and Yuan Li, Tsinghua University; Ruoyu Wang, Arizona State University; Haixin Duan, Tsinghua University and Zhongguancun Laboratory; Haoyu Wang, Huazhong University of Science and Technology; Chao Zhang, Tsinghua University and Zhongguancun Laboratory
Fuzzing has been widely adopted for finding vulnerabilities in programs, especially when source code is not available. But the effectiveness and efficiency of binary fuzzing are curtailed by the lack of memory sanitizers for binaries. This lack of binary sanitizers is due to the information loss in compiling programs and challenges in binary instrumentation.
In this paper, we present a feasible and practical hardware-assisted memory sanitizer, MTSan, for binary fuzzing. MTSan can detect both spatial and temporal memory safety violations at runtime. It adopts a novel progressive object recovery scheme to recover objects in binaries and uses a customized binary rewriting solution to instrument binaries with a memory-tagging-based memory safety sanitizing policy. Further, MTSan uses a hardware feature, the ARM Memory Tagging Extension (MTE), to significantly reduce its runtime overhead. We implemented a prototype of MTSan on AArch64 and systematically evaluated its effectiveness and performance. Our evaluation results show that MTSan can detect more memory safety violations than existing binary sanitizers while introducing much lower runtime and memory overhead.
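A toy model of the memory-tagging mechanism MTSan builds on: allocations and pointers carry a small tag, frees retag memory, and a tag mismatch on access traps. The tag assignment and granularity below are simplified illustrations, not MTE's or MTSan's exact behavior.

```python
import random

TAG_BITS = 4  # ARM MTE uses 4-bit tags on 16-byte granules

class TaggedHeap:
    """Toy memory tagging: a mismatch between a pointer's tag and the
    memory's tag models MTE's tag-check fault, so stale or out-of-bounds
    pointers trap (probabilistically, in the real hardware)."""
    def __init__(self):
        self.mem_tags = {}

    def malloc(self, addr):
        tag = random.randrange(1 << TAG_BITS)
        self.mem_tags[addr] = tag
        return (addr, tag)                 # the 'pointer' carries the tag

    def free(self, addr):
        # Retag on free so stale pointers no longer match.
        self.mem_tags[addr] = (self.mem_tags[addr] + 1) % (1 << TAG_BITS)

    def load(self, ptr):
        addr, tag = ptr
        if self.mem_tags.get(addr) != tag:
            raise RuntimeError("tag-check fault (memory-safety violation)")

heap = TaggedHeap()
p = heap.malloc(0x1000)
heap.load(p)        # ok
heap.free(0x1000)
try:
    heap.load(p)    # use-after-free -> fault
except RuntimeError as e:
    print(e)
```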
12:00 pm–1:30 pm
Lunch (on your own)
1:30 pm–2:45 pm
Security in Digital Realities
Hidden Reality: Caution, Your Hand Gesture Inputs in the Immersive Virtual World are Visible to All!
Sindhu Reddy Kalathur Gopal and Diksha Shukla, University of Wyoming; James David Wheelock, University of Colorado Boulder; Nitesh Saxena, Texas A&M University, College Station
Text entry is an inevitable task while using Virtual Reality (VR) devices in a wide range of applications such as remote learning, gaming, and virtual meetings. VR users enter passwords/pins to log in to their user accounts in various applications and type regular text to compose emails or browse the internet. The typing activity on VR devices is believed to be resistant to direct observation attacks, as the virtual screen in an immersive environment is not directly visible to others present in physical proximity. This paper presents a video-based side-channel attack, Hidden Reality (HR), which shows that although the virtual screen in VR devices is not in the direct sight of adversaries, indirect observations can be exploited to steal the user’s private information.
The Hidden Reality (HR) attack utilizes video clips of the user’s hand gestures while they type on the virtual screen to decipher the typed text in various key entry scenarios on VR devices, including typed pins and passwords. Experimental analysis performed on a large corpus of 368 video clips shows that the Hidden Reality model can successfully decipher an average of over 75% of the text inputs. The high success rate of our attack model led us to conduct a user study to understand users' behavior and perception of security in virtual reality. The analysis showed that over 95% of users were not aware of any security threats on VR devices and believed the immersive environments to be secure from digital attacks. Our attack model challenges users’ false sense of security in immersive environments and emphasizes the need for more stringent security solutions in VR space.
LocIn: Inferring Semantic Location from Spatial Maps in Mixed Reality
Habiba Farrukh, Reham Mohamed, Aniket Nare, Antonio Bianchi, and Z. Berkay Celik, Purdue University
Unique Identification of 50,000+ Virtual Reality Users from Head & Hand Motion Data
Vivek Nair and Wenbo Guo, UC Berkeley; Justus Mattern, RWTH Aachen; Rui Wang and James F. O'Brien, UC Berkeley; Louis Rosenberg, Unanimous AI; Dawn Song, UC Berkeley
Exploring User Reactions and Mental Models Towards Perceptual Manipulation Attacks in Mixed Reality
Kaiming Cheng, Jeffery F. Tian, Tadayoshi Kohno, and Franziska Roesner, University of Washington
Perceptual Manipulation Attacks (PMA) involve manipulating users’ multi-sensory (e.g., visual, auditory, haptic) perceptions of the world through Mixed Reality (MR) content, in order to influence users' judgments and subsequent actions. For example, an MR driving application that is expected to show safety-critical output might also (maliciously or unintentionally) overlay the wrong signal on a traffic sign, misleading the user into slamming on the brakes. While current MR technology is sufficient to create such attacks, little research has been done to understand how users perceive, react to, and defend against such potential manipulations. To provide a foundation for understanding and addressing PMA in MR, we conducted an in-person study with 21 participants. We developed three PMA, each targeting a different perception: visual, auditory, and situational awareness. Our study first investigates how user reactions are affected by evaluating participants' performance on “microbenchmark” tasks under benchmark and different attack conditions. We observe both primary and secondary impacts from attacks, the latter impacting participants' performance even under non-attack conditions. We follow up with interviews, surfacing a range of user reactions and interpretations of PMA. Through qualitative data analysis of our observations and interviews, we identify various defensive strategies participants developed, and we observe how these strategies sometimes backfire. We derive recommendations for future investigation and defensive directions based on our findings.
Erebus: Access Control for Augmented Reality Systems
Yoonsang Kim, Sanket Goutam, Amir Rahmati, and Arie Kaufman, Stony Brook University
Password Guessing
No Single Silver Bullet: Measuring the Accuracy of Password Strength Meters
Ding Wang, Xuan Shan, and Qiying Dong, Nankai University; Yaosheng Shen, Peking University; Chunfu Jia, Nankai University
To help users create stronger passwords, nearly every respectable web service adopts a password strength meter (PSM) to provide real-time strength feedback upon user registration and password change. Recent research has found that PSMs that provide accurate feedback can indeed effectively nudge users toward choosing stronger passwords. Thus, it is imperative to systematically evaluate existing PSMs to facilitate the selection of accurate ones. In this paper, we highlight that there is no single silver-bullet metric for measuring the accuracy of PSMs: for each given guessing scenario and strategy, a specific metric is necessary. We investigate the intrinsic characteristics of online and offline guessing scenarios, and, for the first time, propose a systematic evaluation framework composed of four criteria along different dimensions to rate PSM accuracy under these two password guessing scenarios (as well as various guessing strategies).
More specifically, for online guessing, the strength misjudgments of passwords with different popularity have varied effects on PSM accuracy, so we suggest the weighted Spearman metric and consider two typical attackers: the general attacker, who is unaware of the target password distribution, and the knowledgeable attacker, who is aware of it. For offline guessing, since the cracked passwords are generally weaker than the uncracked ones and the two correspond to disparate distributions, we adopt the Kullback-Leibler divergence metric and investigate the four most typical guessing strategies: brute-force, dictionary-based, probability-based, and a combination of the above three strategies. In particular, we propose the Precision metric to measure PSM accuracy when non-binned strength feedback (e.g., probability) is transformed into easy-to-understand bins/scores (e.g., [weak, medium, strong]). We further introduce a reconciled Precision metric to characterize the impacts of strength misjudgments in different directions (e.g., weak→strong and strong→weak) on PSM accuracy. The effectiveness and practicality of our evaluation framework are demonstrated by rating 12 leading PSMs, leveraging 14 real-world password datasets. Finally, we provide three recommendations to help improve the accuracy of PSMs.
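For concreteness, minimal implementations of the two headline metrics might look as follows; the weighting scheme and inputs are illustrative, not the paper's exact formulations.

```python
import numpy as np
from scipy.stats import rankdata

def weighted_spearman(strength, ideal, weights):
    """Weighted rank correlation between a PSM's strength scores and a
    reference ordering, weighting popular passwords more (online guessing)."""
    rs, ri = rankdata(strength), rankdata(ideal)
    w = np.asarray(weights, float) / np.sum(weights)
    mrs, mri = np.sum(w * rs), np.sum(w * ri)
    cov = np.sum(w * (rs - mrs) * (ri - mri))
    return cov / np.sqrt(np.sum(w * (rs - mrs) ** 2) * np.sum(w * (ri - mri) ** 2))

def kl_divergence(p, q, eps=1e-12):
    """KL divergence between score histograms of cracked vs. uncracked
    passwords (offline guessing): larger = better separation by the PSM."""
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

print(weighted_spearman([1, 2, 3, 4], [1, 3, 2, 4], [0.4, 0.3, 0.2, 0.1]))
print(kl_divergence([5, 3, 1], [1, 3, 5]))
```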
Password Guessing Using Random Forest
Ding Wang and Yunkai Zou, Nankai University; Zijian Zhang, Peking University; Kedong Xiu, Nankai University
Passwords are the most widely used authentication method, and guessing attacks are the most effective method for password strength evaluation. However, existing password guessing models are generally built on traditional statistics or deep learning, and there has been no research on password guessing that employs classical machine learning.
To fill this gap, this paper provides a brand new technical route for password guessing. More specifically, we re-encode the password characters and make it possible for a series of classical machine learning techniques that tackle multi-class classification problems (such as random forest, boosting algorithms and their variants) to be used for password guessing. Further, we propose RFGuess, a random-forest based framework that characterizes the three most representative password guessing scenarios (i.e., trawling guessing, targeted guessing based on personally identifiable information (PII) and on users' password reuse behaviors).
Besides its theoretical significance, this work is also of practical value. Experiments using 13 large real-world password datasets demonstrate that our random-forest based guessing models are effective: (1) RFGuess, for trawling guessing scenarios, attains guessing success rates comparable to its foremost counterparts; (2) RFGuess-PII, for targeted guessing based on PII, guesses 20%–28% of common users within 100 guesses, outperforming its foremost counterpart by 7%–13%; (3) RFGuess-Reuse, for targeted guessing based on users' password reuse/modification behaviors, performs best or second best among related models. We believe this work makes a substantial step toward introducing classical machine learning techniques into password guessing.
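The re-encoding idea can be sketched with scikit-learn: treat a fixed-length prefix as the feature vector and the next character as the class, then train a random forest. The encoding, training data, and hyperparameters below are illustrative stand-ins for RFGuess's actual design.

```python
from sklearn.ensemble import RandomForestClassifier

PREFIX = 4  # fixed-length context (illustrative, not RFGuess's encoding)

def encode(prefix):
    # Pad/truncate the prefix to PREFIX chars and map chars to ints.
    p = prefix[-PREFIX:].rjust(PREFIX, "\x00")
    return [ord(c) for c in p]

# Training pairs (prefix -> next character) from leaked passwords.
passwords = ["password1", "passw0rd", "iloveyou", "letmein1"]
X, y = [], []
for pw in passwords:
    for i in range(len(pw)):
        X.append(encode(pw[:i]))
        y.append(pw[i])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
# Guess generation would walk a tree of high-probability continuations;
# here we just show the per-step prediction.
print(clf.predict([encode("passw")]))  # likely '0' or 'o' on this toy data
```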
Pass2Edit: A Multi-Step Generative Model for Guessing Edited Passwords
Ding Wang and Yunkai Zou, Nankai University; Yuan-An Xiao, Peking University; Siqi Ma, The University of New South Wales; Xiaofeng Chen, Xidian University
While password stuffing attacks (that exploit the direct password reuse behavior) have gained considerable attention, only a few studies have examined password tweaking attacks, where an attacker exploits users' indirect reuse behaviors (with edit operations like insertion, deletion, and substitution). For the first time, we model the password tweaking attack as a multi-class classification problem for characterizing users' password edit/modification processes, and propose a generative model coupled with the multi-step decision-making mechanism, called Pass2Edit, to accurately characterize users' password reuse/modification behaviors.
We demonstrate the effectiveness of Pass2Edit through extensive experiments, which consist of 12 practical attack scenarios and employ 4.8 billion real-world passwords. The experimental results show that Pass2Edit and its variant significantly improve over the prior art. More specifically, when the victim's password at site A (namely pwA) is known, within 100 guesses, the cracking success rate of Pass2Edit in guessing her password at site B (pwB ≠ pwA) is 24.2% (for common users) and 11.7% (for security-savvy users), respectively, which is 18.2%-33.0% higher than its foremost counterparts. Our results highlight that password tweaking is a much more damaging threat to password security than expected.
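The framing can be illustrated by decomposing a password pair into atomic edit classes; here difflib merely stands in for the learned multi-step classifier, which predicts one such decision at a time.

```python
import difflib

def edit_ops(pw_a, pw_b):
    """Decompose pwA -> pwB into the kinds of atomic edit classes
    (keep/insert/delete/substitute) that a multi-step model predicts."""
    ops = []
    sm = difflib.SequenceMatcher(a=pw_a, b=pw_b)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            ops.append(("keep", pw_a[i1:i2]))
        elif tag == "replace":
            ops.append(("substitute", pw_a[i1:i2], pw_b[j1:j2]))
        elif tag == "insert":
            ops.append(("insert", pw_b[j1:j2]))
        elif tag == "delete":
            ops.append(("delete", pw_a[i1:i2]))
    return ops

print(edit_ops("password2022", "password2023!"))
# [('keep', 'password202'), ('substitute', '2', '3!')]
```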
Improving Real-world Password Guessing Attacks via Bi-directional Transformers
Ming Xu and Jitao Yu, Fudan University; Xinyi Zhang, Facebook; Chuanwang Wang, Shenghao Zhang, Haoqi Wu, and Weili Han, Fudan University
Password guessing attacks, prevalent issues in the real world, can be conceptualized as efforts to approximate the probability distribution of text tokens. Techniques in the natural language processing (NLP) field naturally lend themselves to password guessing. Among them, bi-directional transformers stand out with their ability to utilize bi-directional contexts to capture the nuances in texts.
To further improve password guessing attacks, we propose a bi-directional-transformer-based guessing framework, referred to as PassBERT, which applies the pre-training / fine-tuning paradigm to password guessing attacks. We first prepare a pre-trained password model, which contains the knowledge of the general password distribution. Then, we design three attack-specific fine-tuning approaches to tailor the pre-trained password model to the following real-world attack scenarios: (1) conditional password guessing, which recovers the complete password given a partial password; (2) targeted password guessing, which compromises the password(s) of a specific user using their personal information; (3) adaptive rule-based password guessing, which selects adaptive mangling rules for a word (i.e., base password) to generate rule-transformed password candidates. The experimental results show that our fine-tuned models can outperform the state-of-the-art models by 14.53%, 21.82% and 4.86% in the three attacks, respectively, demonstrating the effectiveness of bi-directional transformers on downstream guessing attacks. Finally, we propose a hybrid password strength meter to mitigate the risks from the three attacks.
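The conditional-guessing interface can be shown with a trivial stand-in: given a partially known password, return candidates consistent with the revealed positions. PassBERT fills and ranks the masked positions with a fine-tuned bi-directional transformer; the wordlist filter below only illustrates the task's input/output shape.

```python
import re

def conditional_guess(template, wordlist):
    """Conditional password guessing as mask filling: return candidates
    consistent with the revealed characters ('?' marks an unknown)."""
    pattern = re.compile("^" + "".join("." if c == "?" else re.escape(c)
                                       for c in template) + "$")
    return [w for w in wordlist if pattern.match(w)]

leaked = ["password", "passw0rd", "p@ssword", "sunshine", "letmein1"]
print(conditional_guess("pass??rd", leaked))  # ['password', 'passw0rd']
```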
Araña: Discovering and Characterizing Password Guessing Attacks in Practice
Mazharul Islam, University of Wisconsin–Madison; Marina Sanusi Bohuk, Cornell Tech; Paul Chung, University of Wisconsin–Madison; Thomas Ristenpart, Cornell Tech; Rahul Chatterjee, University of Wisconsin–Madison
Remote password guessing attacks remain one of the largest sources of account compromise. Understanding and characterizing attacker strategies is critical to improving security but doing so has been challenging thus far due to the sensitivity of login services and the lack of ground truth labels for benign and malicious login requests. We perform an in-depth measurement study of guessing attacks targeting two large universities. Using a rich dataset of more than 34 million login requests to the two universities as well as thousands of compromise reports, we were able to develop a new analysis pipeline to identify 29 attack clusters—many of which involved compromises not previously known to security engineers. Our analysis provides the richest investigation to date of password guessing attacks as seen from login services. We believe our tooling will be useful in future efforts to develop real-time detection of attack campaigns, and our characterization of attack campaigns can help more broadly guide mitigation design.
Privacy Policies, Labels, Etc.
PoliGraph: Automated Privacy Policy Analysis using Knowledge Graphs
Hao Cui, Rahmadi Trimananda, Athina Markopoulou, and Scott Jordan, University of California, Irvine
Calpric: Inclusive and Fine-grain Labeling of Privacy Policies with Crowdsourcing and Active Learning
Wenjun Qiu, David Lie, and Lisa Austin, University of Toronto
POLICYCOMP: Counterpart Comparison of Privacy Policies Uncovers Overbroad Personal Data Collection Practices
Lu Zhou, Xidian University and Shanghai Jiao Tong University; Chengyongxiao Wei, Tong Zhu, and Guoxing Chen, Shanghai Jiao Tong University; Xiaokuan Zhang, George Mason University; Suguo Du, Hui Cao, and Haojin Zhu, Shanghai Jiao Tong University
Since mobile apps' privacy policies are usually complex, various tools have been developed to examine whether privacy policies contain contradictions and to verify whether privacy policies are consistent with apps' behaviors. However, to the best of our knowledge, no prior work answers whether the personal data collection practices (PDCPs) in an app's privacy policy are necessary for the given purposes (i.e., whether they comply with the principle of data minimization). Although the principle of data minimization is defined in most existing privacy regulations/laws such as GDPR, it has been translated into different privacy practices depending on context (e.g., different developers and targeted users). In the end, developers can collect any personal data claimed in their privacy policies as long as they receive authorization from users.
Currently, auditing the necessity of personal data collection in a specific context relies mainly on manual review by legal experts, which does not scale to millions of apps. In this study, we take the first step toward automatically investigating whether the PDCPs in an app's privacy policy are overbroad, from the perspective of counterpart comparison. Our basic insight is that if an app claims to collect much more personal data in its privacy policy than most of its counterparts, it is more likely to be conducting overbroad collection. To achieve this, we propose POLICYCOMP, an automatic framework for detecting overbroad PDCPs. We use POLICYCOMP to perform a large-scale analysis of 10,042 privacy policies and flag 48.29% of PDCPs as overbroad. We shared our findings with 2,000 app developers and received 52 responses, 39 of which acknowledged our findings and took action (e.g., removing overbroad PDCPs).
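The counterpart-comparison insight can be sketched in a few lines of Python (hypothetical data and threshold; the real framework extracts PDCPs from policy text with NLP):

    # Flag a declared data type as potentially overbroad if few
    # counterpart apps in the same category also declare collecting it.

    def overbroad(app_pdcps, counterpart_pdcps, threshold=0.2):
        """Return the app's declared data types that fewer than
        `threshold` of its counterparts also declare."""
        n = len(counterpart_pdcps)
        flags = []
        for data_type in app_pdcps:
            share = sum(data_type in c for c in counterpart_pdcps) / n
            if share < threshold:
                flags.append((data_type, share))
        return flags

    counterparts = [{"email", "location"}, {"email"}, {"email", "contacts"}]
    print(overbroad({"email", "location", "call_logs"}, counterparts))
    # -> [('call_logs', 0.0)]  ('location' appears in 1/3 of peers, kept)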
Lalaine: Measuring and Characterizing Non-Compliance of Apple Privacy Labels
Yue Xiao, Zhengyi Li, and Yue Qin, Indiana University Bloomington; Xiaolong Bai, Alibaba Group; Jiale Guan, Xiaojing Liao, and Luyi Xing, Indiana University Bloomington
As a key supplement to privacy policies that are known to be lengthy and difficult to read, Apple has launched app privacy labels, which purportedly help users more easily understand an app's privacy practices. However, false and misleading privacy labels can dupe privacy-conscious consumers into downloading data-intensive apps, ultimately eroding the credibility and integrity of the labels. Although Apple releases requirements and guidelines for app developers to create privacy labels, little is known about whether and to what extent the privacy labels in the wild are correct and compliant, reflecting the actual data practices of iOS apps.
This paper presents the first systematic study, based on our new methodology named Lalaine, to evaluate data-flow-to-privacy-label (flow-to-label) consistency. Lalaine fully analyzed the privacy labels and binaries of 5,102 iOS apps, shedding light on the prevalence and seriousness of privacy-label non-compliance. We provide detailed case studies and analyze root causes of privacy-label non-compliance, complementing prior understandings. This has led to new insights for improving privacy-label design and compliance requirements, so that app developers, platform stakeholders, and policy-makers can better achieve their privacy and accountability goals. Lalaine is thoroughly evaluated for its high effectiveness and efficiency. We are responsibly reporting the results to stakeholders.
Automated Cookie Notice Analysis and Enforcement
Rishabh Khandelwal and Asmit Nayak, University of Wisconsin—Madison; Hamza Harkous, Google, Inc.; Kassem Fawaz, University of Wisconsin—Madison
Online websites use cookie notices to elicit consent from users, as required by recent privacy regulations like the GDPR and the CCPA. Prior work has shown that these notices are designed in a way that manipulates users into making website-friendly choices that put users' privacy at risk. In this work, we present CookieEnforcer, a new system for automatically discovering cookie notices and extracting a set of instructions that result in disabling all non-essential cookies. To achieve this, we first build an automatic cookie notice detector that utilizes the rendering pattern of the HTML elements to identify cookie notices. Next, we analyze the cookie notices and predict the set of actions required to disable all unnecessary cookies. We do so by modeling the problem as a sequence-to-sequence task, where the input is a machine-readable cookie notice and the output is the set of clicks to make. We demonstrate the efficacy of CookieEnforcer via an end-to-end accuracy evaluation, showing that it can generate the required steps in 91% of the cases. Via a user study, we also show that CookieEnforcer can significantly reduce user effort. Finally, we characterize the behavior of CookieEnforcer on the top 100k websites from the Tranco list, showcasing its stability and scalability.
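The task framing can be sketched as follows (hypothetical serialization and element IDs; the stub stands in for the trained sequence-to-sequence model):

    # Input: a machine-readable rendering of the cookie notice.
    # Output: a click sequence that disables non-essential cookies.

    notice = [
        {"id": 0, "role": "button", "text": "Accept all"},
        {"id": 1, "role": "button", "text": "Settings"},
        {"id": 2, "role": "toggle", "text": "Analytics cookies", "on": True},
        {"id": 3, "role": "toggle", "text": "Essential cookies", "on": True},
        {"id": 4, "role": "button", "text": "Save choices"},
    ]

    def serialize(elements):
        # Flatten the notice into the token sequence fed to the model.
        return " | ".join(f'{e["role"]}:{e["text"]}' for e in elements)

    def predict_clicks(serialized: str):
        # Stub for the seq2seq model: open settings, switch off every
        # non-essential toggle, then save.
        return ["click:1", "toggle_off:2", "click:4"]

    print(serialize(notice))
    print(predict_clicks(serialize(notice)))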
ML Applications to Malware
Continuous Learning for Android Malware Detection
Yizheng Chen, Zhoujie Ding, and David Wagner, UC Berkeley
Humans vs. Machines in Malware Classification
Simone Aonzo, EURECOM; Yufei Han, INRIA; Alessandro Mantovani and Davide Balzarotti, EURECOM
Today, the classification of a file as either benign or malicious is performed by a combination of deterministic indicators (such as antivirus rules), Machine Learning classifiers, and, more importantly, the judgment of human experts.
However, to compare the difference between human and machine intelligence in malware analysis, it is first necessary to understand how human subjects approach malware classification. To this end, our work presents the first experimental study designed to capture which "features" of a suspicious program (e.g., static properties or runtime behaviors) humans and machines prioritize for malware classification. For this purpose, we created a malware classification game in which 110 human players worldwide, with different seniority levels (72 novices and 38 experts), competed to classify the highest number of unknown samples based on detailed sandbox reports. Surprisingly, we discovered that both experts and novices base their decisions on approximately the same features, even though there are clear differences between the two expertise classes.
Furthermore, we implemented two state-of-the-art Machine Learning models for malware classification and evaluated their performance on the same set of samples. The comparative analysis of the results unveiled a common set of features preferred by both Machine Learning models and helped us better understand the differences in feature extraction.
This work reflects the difference in the decision-making process of humans and computer algorithms and the different ways they extract information from the same data. Its findings serve multiple purposes, from training better malware analysts to improving feature encoding.
Adversarial Training for Raw-Binary Malware Classifiers
Keane Lucas, Samruddhi Pai, Weiran Lin, and Lujo Bauer, Carnegie Mellon University; Michael K. Reiter, Duke University; Mahmood Sharif, Tel Aviv University
Machine learning (ML) models have shown promise in classifying raw executable files (binaries) as malicious or benign with high accuracy. This has led to the increasing influence of ML-based classification methods in academic and real-world malware detection, a critical tool in cybersecurity. However, previous work provoked caution by creating variants of malicious binaries, referred to as adversarial examples, that are transformed in a functionality-preserving way to evade detection. In this work, we investigate the effectiveness of using adversarial training methods to create malware classification models that are more robust to some state-of-the-art attacks. To train our most robust models, we significantly increase the efficiency and scale of creating adversarial examples to make adversarial training practical, which has not been done before in raw-binary malware detectors. We then analyze the effects of varying the length of adversarial training, as well as analyze the effects of training with various types of attacks. We find that data augmentation does not deter state-of-the-art attacks, but that using a generic gradient-guided method, used in other discrete domains, does improve robustness. We also show that in most cases, models can be made more robust to malware-domain attacks by adversarially training them with lower-effort versions of the same attack. In the best case, we reduce one state-of-the-art attack's success rate from 90% to 5%. We also find that training with some types of attacks can increase robustness to other types of attacks. Finally, we discuss insights gained from our results, and how they can be used to more effectively train robust malware detectors.
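To illustrate the training loop described above, here is a minimal NumPy sketch of generic adversarial training; the gradient-guided feature perturbation and toy data are stand-ins for the paper's functionality-preserving raw-binary attacks:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 16))                 # toy "binary features"
    y = (X[:, 0] + X[:, 1] > 0).astype(float)      # 1 = malicious
    w, b, lr, eps = np.zeros(16), 0.0, 0.1, 0.3

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for epoch in range(100):
        # Craft adversarial variants of the malicious samples by moving
        # them along the loss gradient (FGSM-style), i.e., toward evasion.
        p = sigmoid(X @ w + b)
        grad_x = (p - y)[:, None] * w              # dLoss/dx per sample
        X_adv = X.copy()
        X_adv[y == 1] += eps * np.sign(grad_x[y == 1])
        # Train on clean + adversarial data, keeping the true labels.
        Xt = np.vstack([X, X_adv[y == 1]])
        yt = np.concatenate([y, np.ones(int((y == 1).sum()))])
        pt = sigmoid(Xt @ w + b)
        w -= lr * Xt.T @ (pt - yt) / len(yt)
        b -= lr * float((pt - yt).mean())

    acc = ((sigmoid(X @ w + b) > 0.5) == y.astype(bool)).mean()
    print("clean training accuracy:", acc)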
Black-box Adversarial Example Attack towards FCG Based Android Malware Detection under Incomplete Feature Information
Heng Li, Huazhong University of Science and Technology; Zhang Cheng, NSFOCUS Technologies Group Co., Ltd. and Huazhong University of Science and Technology; Bang Wu, Liheng Yuan, Cuiying Gao, and Wei Yuan, Huazhong University of Science and Technology; Xiapu Luo, The Hong Kong Polytechnic University
The function call graph (FCG) based Android malware detection methods have recently attracted increasing attention due to their promising performance. However, these methods are susceptible to adversarial examples (AEs). In this paper, we design a novel black-box AE attack towards the FCG based malware detection system, called BagAmmo. To mislead its target system, BagAmmo purposefully perturbs the FCG feature of malware through inserting "never-executed" function calls into malware code. The main challenges are two-fold. First, the malware functionality should not be changed by adversarial perturbation. Second, the information of the target system (e.g., the graph feature granularity and the output probabilities) is absent.
To preserve malware functionality, BagAmmo employs a try-catch trap to insert function calls that perturb the FCG of malware. Without knowledge of the feature granularity and output probabilities, BagAmmo adopts the architecture of a generative adversarial network (GAN) and leverages a multi-population co-evolution algorithm (named Apoem) to generate the desired perturbation. Each population in Apoem represents a possible feature granularity, and the real feature granularity can be reached when Apoem converges.
Through extensive experiments on over 44k Android apps and 32 target models, we evaluate the effectiveness, efficiency and resilience of BagAmmo. BagAmmo achieves an average attack success rate of over 99.9% on MaMaDroid, APIGraph and GCN, and still performs well in the scenario of concept drift and data imbalance. Moreover, BagAmmo outperforms the state-of-the-art attack SRL in attack success rate.
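The graph-level effect of the perturbation can be sketched with networkx (toy graph; in the real attack the inserted call is wrapped in an opaque try-catch trap inside the APK so it never executes):

    # Adding a guarded, never-executed call changes the static function
    # call graph without changing malware behavior.
    import networkx as nx

    fcg = nx.DiGraph()
    fcg.add_edges_from([("onCreate", "sendSms"), ("onCreate", "readContacts")])

    def insert_never_executed_call(graph, caller, callee):
        # In bytecode the call is present but unreachable at runtime;
        # a static extractor still records the caller -> callee edge.
        graph.add_edge(caller, callee)

    insert_never_executed_call(fcg, "onCreate", "Log.isLoggable")  # benign API
    print(sorted(fcg.edges()))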
Evading Provenance-Based ML Detectors with Adversarial System Actions
Kunal Mukherjee, Joshua Wiedemeier, Tianhao Wang, James Wei, and Feng Chen, The University of Texas at Dallas; Muhyun Kim, Amazon Web Service; Murat Kantarcioglu and Kangkook Jee, The University of Texas at Dallas
Secure Messaging
TreeSync: Authenticated Group Management for Messaging Layer Security
Théophile Wallez, Inria Paris; Jonathan Protzenko, Microsoft Research; Benjamin Beurdouche, Mozilla; Karthikeyan Bhargavan, Inria Paris
Messaging Layer Security (MLS), currently undergoing standardization at the IETF, is an asynchronous group messaging protocol that aims to be efficient for large dynamic groups, while providing strong guarantees like forward secrecy (FS) and post-compromise security (PCS). While prior work on MLS has extensively studied its group key establishment component (called TreeKEM), many flaws in early designs of MLS have stemmed from its group integrity and authentication mechanisms that are not as well-understood. In this work, we identify and formalize TreeSync: a sub-protocol of MLS that specifies the shared group state, defines group management operations, and ensures consistency, integrity, and authentication for the group state across all members.
We present a precise, executable, machine-checked formal specification of TreeSync, and show how it can be composed with other components to implement the full MLS protocol. Our specification is written in F* and serves as a reference implementation of MLS; it passes the RFC test vectors and is interoperable with other MLS implementations. Using the DY* symbolic protocol analysis framework, we formalize and prove the integrity and authentication guarantees of TreeSync, under minimal security assumptions on the rest of MLS. Our analysis identifies a new attack and we propose several changes that have been incorporated in the latest MLS draft. Ours is the first testable, machine-checked, formal specification for MLS, and should be of interest to both developers and researchers interested in this upcoming standard.
Formal Analysis of Session-Handling in Secure Messaging: Lifting Security from Sessions to Conversations
Cas Cremers, CISPA Helmholtz Center for Information Security; Charlie Jacomme, Inria Paris; Aurora Naska, CISPA Helmholtz Center for Information Security
The building blocks for secure messaging apps, such as Signal’s X3DH and Double Ratchet (DR) protocols, have received a lot of attention from the research community. They have notably been proved to meet strong security properties even in the case of compromise such as Forward Secrecy (FS) and Post-Compromise Security (PCS). However, there is a lack of formal study of these properties at the application level. Whereas the research works have studied such properties in the context of a single ratcheting chain, a conversation between two persons in a messaging application can in fact be the result of merging multiple ratcheting chains.
In this work, we initiate the formal analysis of secure messaging taking the session-handling layer into account, and apply our approach to Sesame, Signal’s session management. We first experimentally show practical scenarios in which PCS can be violated in Signal by a clone attacker, despite its use of the Double Ratchet. We identify how this is enabled by Signal’s session-handling layer. We then design a formal model of the session-handling layer of Signal that is tractable for automated verification with the Tamarin prover, and use this model to rediscover the PCS violation and propose two provably secure mechanisms to offer stronger guarantees.
Cryptographic Administration for Secure Group Messaging
David Balbás, IMDEA Software Institute & Universidad Politécnica de Madrid; Daniel Collins and Serge Vaudenay, EPFL
Wink: Deniable Secure Messaging
Anrin Chakraborti, Duke University; Darius Suciu and Radu Sion, Stony Brook University
Three Lessons From Threema: Analysis of a Secure Messenger
Kenneth G. Paterson, Matteo Scarlata, and Kien Tuong Truong, ETH Zurich
We provide an extensive cryptographic analysis of Threema, a Swiss-based encrypted messaging application with more than 10 million users and 7000 corporate customers. We present seven different attacks against the protocol in three different threat models. We discuss impact and remediations for our attacks, which have all been responsibly disclosed to Threema and patched. Finally, we draw wider lessons for developers of secure protocols.
x-Fuzz
MorFuzz: Fuzzing Processor via Runtime Instruction Morphing enhanced Synchronizable Co-simulation
Jinyan Xu and Yiyuan Liu, Zhejiang University; Sirui He, City University of Hong Kong; Haoran Lin and Yajin Zhou, Zhejiang University; Cong Wang, City University of Hong Kong
Modern processors are too complex to be bug free. Recently, a few hardware fuzzing techniques have shown promising results in verifying processor designs. However, due to the complexity of processors, they suffer from complex input grammar, deceptive mutation guidance, and model implementation differences. Therefore, how to effectively and efficiently verify processors is still an open problem.
This paper proposes MorFuzz, a novel processor fuzzer that can efficiently discover software-triggerable hardware bugs. The core idea behind MorFuzz is to use runtime information to generate instruction streams with valid formats and meaningful semantics. MorFuzz designs a new input structure that provides multi-level runtime mutation primitives and proposes the instruction morphing technique to mutate instructions dynamically. In addition, we extend the co-simulation framework to various microarchitectures and develop a state synchronization technique to eliminate implementation differences. We evaluate MorFuzz on three popular open-source RISC-V processors (CVA6, Rocket, and BOOM) and discover 17 new bugs (with 13 CVEs assigned). Our evaluation shows that MorFuzz achieves 4.4× and 1.6× more state coverage than the state-of-the-art fuzzer DifuzzRTL and the well-known constrained instruction generator riscv-dv, respectively.
µFUZZ: Redesign of Parallel Fuzzing using Microservice Architecture
Yongheng Chen, Georgia Institute of Technology; Rui Zhong, Pennsylvania State University; Yupeng Yang, Georgia Institute of Technology; Hong Hu and Dinghao Wu, Pennsylvania State University; Wenke Lee, Georgia Institute of Technology
FISHFUZZ: Catch Deeper Bugs by Throwing Larger Nets
Han Zheng, National Computer Network Intrusion Protection Center, University of Chinese Academy of Sciences; EPFL; Zhongguancun Lab; Jiayuan Zhang, School of Computer and Communication, Lanzhou University of Technology; National Computer Network Intrusion Protection Center, University of Chinese Academy of Sciences; Yuhang Huang, National Computer Network Intrusion Protection Center, University of Chinese Academy of Sciences; Zezhong Ren, National Computer Network Intrusion Protection Center, University of Chinese Academy of Sciences; Zhongguancun Lab; He Wang, School of Cyber Engineering, Xidian University; Chunjie Cao, School of Cyberspace Security, Hainan University; Yuqing Zhang, National Computer Network Intrusion Protection Center, University of Chinese Academy of Sciences; School of Cyber Engineering, Xidian University; School of Cyberspace Security, Hainan University; Zhongguancun Lab; Flavio Toffalini and Mathias Payer, EPFL
HyPFuzz: Formal-Assisted Processor Fuzzing
Chen Chen, Rahul Kande, Nathan Nguyen, Flemming Andersen, and Aakash Tyagi, Texas A&M University; Ahmad-Reza Sadeghi, Technical University of Darmstadt; Jeyavijayan Rajendran, Texas A&M University
PolyFuzz: Holistic Greybox Fuzzing of Multi-Language Systems
Wen Li, Jinyang Ruan, and Guangbei Yi, Washington State University; Long Cheng, Clemson University; Xiapu Luo, The Hong Kong Polytechnic University; Haipeng Cai, Washington State University
While offering many advantages during the software development process, the practice of using multiple programming languages to construct one software system also introduces additional security vulnerabilities into the resulting code. As this practice becomes increasingly prevalent, securing multi-language systems is of pressing criticality. Fuzzing has been a powerful security testing technique, yet existing fuzzers are commonly limited to single-language software. In this paper, we present PolyFuzz, a greybox fuzzer that holistically fuzzes a given multi-language system through cross-language coverage feedback and explicit modeling of the semantic relationships between (various segments of) program inputs and branch predicates across languages. PolyFuzz is extensible to multilingual code written in different language combinations and has been implemented for C, Python, Java, and their combinations. We evaluated PolyFuzz against state-of-the-art single-language fuzzers for these languages as baselines, on 15 real-world multi-language systems and 15 single-language benchmarks. PolyFuzz achieved 25.3–52.3% higher code coverage and found 1–10 more bugs than the baselines on the multilingual programs, and even 10–20% higher coverage on the single-language benchmarks. In total, PolyFuzz has enabled the discovery of 12 previously unknown multilingual vulnerabilities and 2 single-language ones, with 5 CVEs assigned. Our results show the great promise of PolyFuzz for cross-language fuzzing, and justify the strong need for holistic fuzzing over trivially applying single-language fuzzers to multi-language software.
2:45 pm–3:15 pm
Break with Refreshments
3:15 pm–4:30 pm
Programs, Code, and Binaries
VIPER: Spotting Syscall-Guard Variables for Data-Only Attacks
Hengkai Ye, Song Liu, Zhechang Zhang, and Hong Hu, The Pennsylvania State University
AURC: Detecting Errors in Program Code and Documentation
Peiwei Hu, Ruigang Liang, and Ying Cao, SKLOIS, Institute of Information Engineering, Chinese Academy of Sciences, China, and School of Cyber Security, University of Chinese Academy of Sciences, China; Kai Chen, SKLOIS, Institute of Information Engineering, Chinese Academy of Sciences, China, School of Cyber Security, University of Chinese Academy of Sciences, China, and Beijing Academy of Artificial Intelligence, China; Runze Zhang, SKLOIS, Institute of Information Engineering, Chinese Academy of Sciences, China, and School of Cyber Security, University of Chinese Academy of Sciences, China
Error detection in program code and documentation is a critical problem in computer security. Previous studies have shown promising vulnerability discovery performance through extensive code- or document-guided analysis. However, the state of the art has the following significant limitations: (i) it assumes that documents are correct and treats code that violates documents as buggy, and thus cannot find documents' defects or code's bugs when APIs have defective documents or no documents; (ii) it uses majority voting to judge inconsistent code snippets and treats the deviants as bugs, and thus cannot cope with situations where the correct usage is in the minority or all use cases are wrong.
In this paper, we present AURC, a static framework for detecting code bugs of incorrect return checks and document defects. We observe that three objects participate in an API invocation: the document, the caller (the code that invokes the API), and the callee (the source code of the API). Mutual corroboration among these three objects eliminates the reliance on the above assumptions. AURC contains a context-sensitive backward analysis to process callees, a pre-trained model-based document classifier, and a container that collects the conditions of if statements from callers. After cross-checking the results from callees, callers, and documents, AURC delivers them to a correctness inference module to infer which one is defective. We evaluated AURC on ten popular codebases. AURC discovered 529 new bugs that can lead to security issues such as heap buffer overflow and sensitive information leakage, and 224 new document defects. Maintainers acknowledged our findings and have accepted 222 code patches and 76 document patches.
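The caller-side evidence that AURC cross-checks can be illustrated with a toy tally (simplified here to a regex; AURC uses proper static analysis). The OpenSSL EVP_VerifyFinal example reflects a classic real-world pattern: the function can return -1 on error, so a check for == 0 alone misses failures:

    import re
    from collections import Counter

    callers = [
        "ret = EVP_VerifyFinal(ctx, sig, len, key); if (ret <= 0) goto err;",
        "ret = EVP_VerifyFinal(ctx, sig, len, key); if (ret == 0) goto err;",
        "ret = EVP_VerifyFinal(ctx, sig, len, key); if (ret <= 0) goto err;",
    ]

    # Gather the if-conditions callers apply to the API's return value.
    # Disagreement across callers (and with the document) is the signal
    # fed to correctness inference.
    pattern = re.compile(r"ret = EVP_VerifyFinal\([^)]*\);\s*if \(ret ([^)]+)\)")
    checks = Counter(m.group(1) for c in callers for m in pattern.finditer(c))
    print(checks)  # Counter({'<= 0': 2, '== 0': 1}); '== 0' misses ret == -1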
Not All Data are Created Equal: Data and Pointer Prioritization for Scalable Protection Against Data-Oriented Attacks
Salman Ahmed, IBM Research; Hans Liljestrand, University of Waterloo; Hani Jamjoom, IBM Research; Matthew Hicks, Virginia Tech; N. Asokan, University of Waterloo; Danfeng (Daphne) Yao, Virginia Tech
SAFER : Efficient and Error-Tolerant Binary Instrumentation
Soumyakant Priyadarshan, Huan Nguyen, Rohit Chouhan, and R. Sekar, Stony Brook University
Reassembly is Hard: A Reflection on Challenges and Strategies
Hyungseok Kim, KAIST and The Affiliated Institute of ETRI; Soomin Kim and Junoh Lee, KAIST; Kangkook Jee, University of Texas at Dallas; Sang Kil Cha, KAIST
Reassembly, a branch of static binary rewriting, has become a focus of research today. However, despite its widespread use and research interest, there have been no systematic investigations into the techniques and challenges of reassemblers. In this paper, we formally define the different types of errors that occur in existing reassemblers and present an automated tool named REASSESSOR to find such errors. Through our tool and the large-scale benchmark we created, we show the current challenges in the field and how they can be approached.
IoT Security Expectations and Barriers
Measuring Up to (Reasonable) Consumer Expectations: Providing an Empirical Basis for Holding IoT Manufacturers Legally Responsible
Lorenz Kustosch and Carlos Gañán, TU Delft; Mattis van 't Schip, Radboud University; Michel van Eeten and Simon Parkin, TU Delft
With continued security and privacy incidents involving consumer Internet-of-Things (IoT) devices comes the need to identify which actors are best placed to respond. Previous literature studied consumers' expectations of how security and privacy should be implemented and who should take on preventive efforts. But how do such normative expectations differ from what is realistic to expect about how security- and privacy-related events will actually be handled? Using a vignette survey with 862 participants, we studied consumer expectations of how IoT manufacturers and users would and should respond when confronted with a potentially infected or privacy-invading IoT device. We find that expectations differ considerably between what is realistic and what is appropriate. Furthermore, security and privacy lead to different expectations around users' and manufacturers' actions, with a general diffusion of expectations on how to handle privacy-related events. We offer recommendations to IoT manufacturers and regulators on how to support users in addressing security and privacy issues.
Are Consumers Willing to Pay for Security and Privacy of IoT Devices?
Pardis Emami-Naeini, Duke University; Janarth Dheenadhayalan, Yuvraj Agarwal, and Lorrie Faith Cranor, Carnegie Mellon University
Internet of Things (IoT) device manufacturers provide little information to consumers about their security and data handling practices. Therefore, IoT consumers cannot make informed purchase choices around security and privacy. While prior research has found that consumers would likely consider security and privacy when purchasing IoT devices, past work lacks empirical evidence as to whether they would actually pay more to purchase devices with enhanced security and privacy. To fill this gap, we conducted a two-phase incentive-compatible online study with 180 Prolific participants. We measured the impact of five security and privacy factors (e.g., access control) on participants' purchase behaviors when presented individually or together on an IoT label. Participants were willing to pay a significant premium for devices with better security and privacy practices. The biggest price differential we found was for de-identified rather than identifiable cloud storage. Mainly due to its usability challenges, the least valuable improvement for participants was to have multi-factor authentication as opposed to passwords. Based on our findings, we provide recommendations on creating more effective IoT security and privacy labeling programs.
Examining Consumer Reviews to Understand Security and Privacy Issues in the Market of Smart Home Devices
Swaathi Vetrivel, Veerle van Harten, Carlos H. Gañán, Michel van Eeten, and Simon Parkin, Delft University of Technology
Despite growing evidence that consumers care about secure Internet-of-Things (IoT) devices, relevant security and privacy-related information is unavailable at the point of purchase. While initiatives such as security labels create new avenues to signal a device's security and privacy posture, we analyse an existing avenue for such market signals - customer reviews. We investigate whether and to what extent customer reviews of IoT devices with well-known security and privacy issues reflect these concerns. We examine 83,686 reviews of four IoT device types commonly infected with Mirai across all Amazon websites in English. We perform topic modelling to group the reviews and conduct manual coding to understand (i) the prevalence of security and privacy issues and (ii) the themes that these issues articulate. Overall, around one in ten reviews (9.8%) mentions security and privacy issues; the geographical distribution varies across the six countries. We distil references to security and privacy into seven themes and identify two orthogonal themes: reviews written in technical language and those that mention friction with security steps. Our results thus highlight the value of the already existing avenue of customer reviews. We draw on these results to make recommendations and identify future research directions.
Internet Service Providers' and Individuals' Attitudes, Barriers, and Incentives to Secure IoT
Nissy Sombatruang, National Institute of Information and Communications Technology; Tristan Caulfield and Ingolf Becker, University College London; Akira Fujita, Takahiro Kasama, Koji Nakao, and Daisuke Inoue, National Institute of Information and Communications Technology
Internet Service Providers (ISPs) and individual users of Internet of Things (IoT) play a vital role in securing IoT. However, encouraging them to do so is hard. Our study investigates ISPs' and individuals' attitudes towards the security of IoT, the obstacles they face, and their incentives to keep IoT secure, drawing evidence from Japan.
Due to the complex interactions of the stakeholders, we follow an iterative methodology in which we present issues and potential solutions to our stakeholders in turn. For ISPs, we survey 27 ISPs in Japan, followed by a workshop with representatives from government and 5 ISPs. Based on the findings from this, we conduct semi-structured interviews with 20 participants, followed by a more quantitative survey with 328 participants. We review these results in a second workshop with representatives from government and 7 ISPs. The appreciation of challenges by each party has led to findings that are supported by all stakeholders.
Securing IoT devices is a priority for neither users nor ISPs. Individuals are keen on more intervention, both from the government as part of regulation and from ISPs in terms of filtering malicious traffic. Participants are willing to pay for enhanced monitoring and filtering. While ISPs do want to help users, there appears to be a lack of effective technology to aid them. ISPs would like to see more public recognition of their efforts, but internally they struggle with executive buy-in and effective means of communicating with their customers. The majority of barriers and incentives are external to ISPs and individuals, demonstrating the complexity of keeping IoT secure and emphasizing the need for the relevant stakeholders in the IoT ecosystem to work in tandem.
Detecting and Handling IoT Interaction Threats in Multi-Platform Multi-Control-Channel Smart Homes
Haotian Chi, Shanxi University and Temple University; Qiang Zeng, George Mason University; Xiaojiang Du, Stevens Institute of Technology
A smart home involves a variety of entities, such as IoT devices, automation applications, humans, voice assistants, and companion apps. These entities interact in the same physical environment, which can yield undesirable and even hazardous results, called IoT interaction threats. Existing work on interaction threats is limited to considering automation apps, ignoring other IoT control channels, such as voice commands, companion apps, and physical operations. Second, it becomes increasingly common that a smart home utilizes multiple IoT platforms, each of which has a partial view of device states and may issue conflicting commands. Third, compared to detecting interaction threats, their handling is much less studied. Prior work uses generic handling policies, which are unlikely to fit all homes. We present IoTMediator, which provides accurate threat detection and threat-tailored handling in multi-platform multi-control-channel homes. Our evaluation in two real-world homes demonstrates that IoTMediator significantly outperforms prior state-of-the-art work.
Differential Privacy
Private Proof-of-Stake Blockchains using Differentially-Private Stake Distortion
Chenghong Wang, David Pujol, Kartik Nayak, and Ashwin Machanavajjhala, Duke University
PrivateFL: Accurate, Differentially Private Federated Learning via Personalized Data Transformation
Yuchen Yang, Bo Hui, and Haolin Yuan, The Johns Hopkins University; Neil Gong, Duke University; Yinzhi Cao, The Johns Hopkins University
Federated learning (FL) enables multiple clients to collaboratively train a model with the coordination of a central server. Although FL improves data privacy by keeping each client's training data local, an attacker (e.g., an untrusted server) can still compromise the privacy of clients' local training data via various inference attacks. A de facto approach to preserving FL privacy is Differential Privacy (DP), which adds random noise during training. However, when applied to FL, DP suffers from a key limitation: it sacrifices model accuracy substantially (even more severely than when applied to traditional centralized learning) to achieve a meaningful level of privacy.
In this paper, we study the cause of this accuracy degradation in FL+DP and then design an approach to improve accuracy. First, we propose that the accuracy degradation is partially because DP introduces additional heterogeneity among FL clients when adding different random noise with clipping bias during local training. To the best of our knowledge, we are the first to associate DP in FL with client heterogeneity. Second, we design PrivateFL to learn accurate, differentially private models in FL with reduced heterogeneity. The key idea is to jointly learn a differentially private, personalized data transformation for each client during local training. The personalized data transformation shifts each client's local data distribution to compensate for the heterogeneity introduced by DP, thus improving the FL model's accuracy.
In the evaluation, we combine and compare PrivateFL with eight state-of-the-art differentially private FL methods on seven benchmark datasets, including six image datasets and one non-image dataset. Our results show that PrivateFL learns accurate FL models with a small ε, e.g., 93.3% on CIFAR-10 with 100 clients under (ε = 2, δ = 1e-3)-DP. Moreover, PrivateFL can be combined with prior works to reduce DP-induced heterogeneity and further improve their accuracy.
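A minimal PyTorch sketch of one client's local step (hypothetical shapes; clipping is simplified to the whole batch, whereas DP-SGD clips per-sample gradients):

    import torch
    import torch.nn as nn

    shared = nn.Linear(32, 10)       # global model, aggregated by the server
    transform = nn.Linear(32, 32)    # personalized transformation, kept local

    opt = torch.optim.SGD(list(shared.parameters()) +
                          list(transform.parameters()), lr=0.1)
    x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))

    loss = nn.functional.cross_entropy(shared(transform(x)), y)
    opt.zero_grad()
    loss.backward()
    # Simplified DP step; noise is added only to the shared model here.
    clip, sigma = 1.0, 0.5
    for p in shared.parameters():
        p.grad.mul_(min(1.0, clip / (float(p.grad.norm()) + 1e-6)))
        p.grad.add_(sigma * clip * torch.randn_like(p.grad))
    opt.step()
    print(float(loss))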
What Are the Chances? Explaining the Epsilon Parameter in Differential Privacy
Priyanka Nanayakkara, Northwestern University; Mary Anne Smart, UC San Diego; Rachel Cummings, Columbia University; Gabriel Kaptchuk, Boston University; Elissa M. Redmiles, Max Planck Institute for Software Systems
Tight Auditing of Differentially Private Machine Learning
Milad Nasr, Google Brain; Jamie Hayes, Google DeepMind; Thomas Steinke, Google Research; Borja De Balle Pigem, Google DeepMind; Matthew Jagielski, Google Research; Florian Tramer, ETH Zurich; Nicholas Carlini, Google Brain; Andreas Terzis, Google, Inc.
PrivTrace: Differentially Private Trajectory Synthesis by Adaptive Markov Models
Haiming Wang, Zhejiang University; Zhikun Zhang, CISPA Helmholtz Center for Information Security; Tianhao Wang, University of Virginia; Shibo He, Zhejiang University; Michael Backes, CISPA Helmholtz Center for Information Security; Jiming Chen, Zhejiang University; Yang Zhang, CISPA Helmholtz Center for Information Security
Publishing trajectory data (individuals' movement information) is very useful, but it also raises privacy concerns. To handle the privacy concern, in this paper we apply differential privacy, the standard technique for data privacy, together with Markov chain models to generate synthetic trajectories. We notice that existing studies all use Markov chain models, and we thus propose a framework to analyze the usage of the Markov chain model for this problem. Based on the analysis, we come up with an effective algorithm, PrivTrace, that uses first-order and second-order Markov models adaptively. We evaluate PrivTrace and existing methods on synthetic and real-world datasets to demonstrate the superiority of our method.
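The core mechanism, first-order Markov synthesis from noisy counts, can be sketched as follows (toy state space; calibrating the Laplace scale to the true sensitivity of trajectory data, and the adaptive switch to second-order models, are what the paper actually contributes):

    import numpy as np

    rng = np.random.default_rng(1)
    states, eps = 4, 1.0
    # Toy true transition counts among 4 discretized grid cells.
    counts = rng.integers(0, 50, size=(states, states)).astype(float)
    noisy = counts + rng.laplace(scale=1.0 / eps, size=counts.shape)
    noisy = np.clip(noisy, 0, None) + 1e-9           # keep rows normalizable
    P = noisy / noisy.sum(axis=1, keepdims=True)     # noisy Markov matrix

    def synthesize(length: int, start: int = 0):
        traj = [start]
        for _ in range(length - 1):
            traj.append(int(rng.choice(states, p=P[traj[-1]])))
        return traj

    print(synthesize(6))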
Poisoning
Meta-Sift: How to Sift Out a Clean Subset in the Presence of Data Poisoning?
Yi Zeng, Minzhou Pan, Himanshu Jahagirdar, and Ming Jin, Virginia Tech; Lingjuan Lyu, SONY AI Inc.; Ruoxi Jia, Virginia Tech
External data sources are increasingly being used to train machine learning (ML) models as data demand grows. However, integrating external data into training poses data poisoning risks, in which malicious providers manipulate their data to compromise the utility or integrity of the model. Most data poisoning defenses assume access to a set of clean data (referred to as the base set), which could be obtained from trusted sources. But it is also becoming common for entire data sources of an ML task to be untrusted (e.g., Internet data). In this case, one needs to identify a subset of a contaminated dataset as the base set to support these defenses.
This paper starts by examining the performance of defenses when poisoned samples are mistakenly mixed into the base set. We analyze five representative defenses that use base sets and find that their performance deteriorates dramatically with less than 1% poisoned points in the base set. These findings suggest that sifting out a base set with high precision is key to these defenses' performance. Motivated by these observations, we study how precisely existing automated tools and human inspection can identify clean data in the presence of data poisoning. Unfortunately, neither effort achieves the precision needed to enable effective defenses. Worse yet, many of the outcomes of these methods are worse than random selection.
In addition to uncovering the challenge, we take a step further and propose a practical countermeasure, Meta-Sift. Our method is based on the insight that existing poisoning attacks shift data distributions, resulting in high prediction loss when training on the clean portion of a poisoned dataset and testing on the corrupted portion. Leveraging this insight, we formulate a bilevel optimization to identify clean data and further introduce a suite of techniques to improve the efficiency and precision of the identification. Our evaluation shows that Meta-Sift can sift out a clean base set with 100% precision under a wide range of poisoning threats. The selected base set is large enough to give rise to a successful defense when plugged into existing defense techniques.
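The distribution-shift insight can be demonstrated on synthetic data (toy linear model; Meta-Sift itself solves a bilevel optimization rather than a single least-squares fit):

    # A model fitted on truly clean data predicts held-out clean data
    # well but suffers high loss on poisoned points, because poisoning
    # shifts the data distribution.
    import numpy as np

    rng = np.random.default_rng(0)
    X_clean = rng.normal(0, 1, size=(300, 5))
    y_clean = (X_clean.sum(axis=1) > 0).astype(float)
    X_poison = rng.normal(0, 1, size=(30, 5))
    y_poison = 1.0 - (X_poison.sum(axis=1) > 0)      # label-flipped points

    # Fit least squares on a candidate base set drawn from the clean part.
    w, *_ = np.linalg.lstsq(X_clean[:150], y_clean[:150], rcond=None)

    def mse(X, y):
        return float(((X @ w - y) ** 2).mean())

    print("held-out clean loss:", mse(X_clean[150:], y_clean[150:]))
    print("poisoned loss:      ", mse(X_poison, y_poison))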
Towards A Proactive ML Approach for Detecting Backdoor Poison Samples
Xiangyu Qi, Tinghao Xie, Tianhao Wang, Tong Wu, Saeed Mahloujifar, and Prateek Mittal, Princeton University
PORE: Provably Robust Recommender Systems against Data Poisoning Attacks
Jinyuan Jia, The Pennsylvania State University; Yupei Liu, Yuepeng Hu, and Neil Gong, Duke University
Every Vote Counts: Ranking-Based Training of Federated Learning to Resist Poisoning Attacks
Hamid Mozaffari, Virat Shejwalkar, and Amir Houmansadr, University of Massachusetts Amherst
Federated learning (FL) allows untrusted clients to collaboratively train a common machine learning model, called global model, without sharing their private/proprietary training data. However, FL is susceptible to poisoning by malicious clients who aim to hamper the accuracy of the global model by contributing malicious updates during FL's training process.
We argue that the key factor in the success of poisoning attacks against existing FL systems is the large space of model updates available to the clients. To address this, we propose Federated Rank Learning (FRL). FRL reduces the space of client updates from model parameter updates (a continuous space of floating-point numbers) in standard FL to the space of parameter rankings (a discrete space of integer values). To train the global model using parameter ranks (instead of parameter weights), FRL leverages ideas from recent supermask training mechanisms. Specifically, FRL clients rank the parameters of a randomly initialized neural network (provided by the server) based on their local training data, and the FRL server uses a voting mechanism to aggregate the parameter rankings submitted by the clients.
Intuitively, our voting-based aggregation mechanism prevents poisoning clients from making significant adversarial modifications to the global model, as each client has only a single vote. We demonstrate the robustness of FRL to poisoning through analytical proofs and experiments, and we show its high communication efficiency.
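A toy version of the vote-based aggregation (sizes and scoring are illustrative):

    # Each client submits a ranking of the network's parameters; the
    # server sums rank scores (a Borda-style vote) and keeps the top-k
    # parameters as the supermask.
    import numpy as np

    n_params, k = 10, 4
    client_rankings = np.array([
        np.random.default_rng(i).permutation(n_params) for i in range(5)
    ])

    # Convert each client's ranking into per-parameter scores; a higher
    # score means the client deems the parameter more important.
    scores = np.zeros(n_params)
    for ranking in client_rankings:
        for position, param in enumerate(ranking):
            scores[param] += n_params - position

    supermask = np.argsort(scores)[-k:]     # majority-favored parameters
    print("parameters kept in global supermask:", sorted(supermask.tolist()))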
Fine-grained Poisoning Attack to Local Differential Privacy Protocols for Mean and Variance Estimation
Xiaoguang Li, Xidian University and Purdue University; Ninghui Li and Wenhai Sun, Purdue University; Neil Zhenqiang Gong, Duke University; Hui Li, Xidian University
Although local differential privacy (LDP) protects individual users' data from inference by an untrusted data curator, recent studies show that an attacker can launch a data poisoning attack from the user side to inject carefully-crafted bogus data into the LDP protocols in order to maximally skew the final estimate by the data curator.
In this work, we further advance this knowledge by proposing a new fine-grained attack that allows the attacker to fine-tune and simultaneously manipulate mean and variance estimations, which are popular analytical tasks in many real-world applications. To accomplish this goal, the attack leverages the characteristics of LDP to inject fake data into the output domain of the local LDP instance. We call our attack the output poisoning attack (OPA). We observe a security-privacy consistency in which a smaller privacy loss enhances the security of LDP, which contradicts the known security-privacy trade-off from prior work. We further study this consistency and reveal a more holistic view of the threat landscape of data poisoning attacks on LDP. We comprehensively evaluate our attack against a baseline attack that intuitively provides false input to LDP. The experimental results show that OPA outperforms the baseline on three real-world datasets. We also propose a novel defense method that can recover the result accuracy from polluted data collection and offer insights into secure LDP design.
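The essence of output poisoning against an LDP mean estimate can be sketched as follows (toy Laplace-mechanism pipeline; the paper targets more sophisticated mean/variance protocols):

    import numpy as np

    rng = np.random.default_rng(7)
    eps, n_honest, n_fake = 1.0, 1000, 50
    true_vals = rng.uniform(0, 1, n_honest)
    # Honest users perturb values in [0, 1] with Laplace noise (eps-LDP).
    honest_reports = true_vals + rng.laplace(scale=1.0 / eps, size=n_honest)

    # The mechanism's output domain is unbounded, so fake users can skip
    # the mechanism and submit extreme values straight from the output
    # domain without looking anomalous as individual noisy reports.
    fake_reports = np.full(n_fake, 20.0)

    est_clean = honest_reports.mean()
    est_poisoned = np.concatenate([honest_reports, fake_reports]).mean()
    print(f"true mean {true_vals.mean():.3f}, "
          f"clean estimate {est_clean:.3f}, poisoned {est_poisoned:.3f}")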
Smart Contracts
Your Exploit is Mine: Instantly Synthesizing Counterattack Smart Contract
Zhuo Zhang, Purdue University; Zhiqiang Lin and Marcelo Morales, Ohio State University; Xiangyu Zhang and Kaiyuan Zhang, Purdue University
Smart Learning to Find Dumb Contracts
Tamer Abdelaziz, National University of Singapore; Aquinas Hobor, University College London
Confusum Contractum: Confused Deputy Vulnerabilities in Ethereum Smart Contracts
Fabio Gritti, Nicola Ruaro, Robert McLaughlin, Priyanka Bose, Dipanjan Das, Ilya Grishchenko, Christopher Kruegel, and Giovanni Vigna, University of California, Santa Barbara
Panda: Security Analysis of Algorand Smart Contracts
Zhiyuan Sun, The Hong Kong Polytechnic University and Southern University of Science and Technology; Xiapu Luo, The Hong Kong Polytechnic University; Yinqian Zhang, Southern University of Science and Technology
This paper and abstract are under embargo and will be released to the public on the first day of the symposium, Wednesday, August 9, 2023.
Proxy Hunting: Understanding and Characterizing Proxy-based Upgradeable Smart Contracts in Blockchains
William E Bodell III, Sajad Meisami, and Yue Duan, Illinois Institute of Technology
Upgradeable smart contracts (USCs) have become a key trend in smart contract development, bringing flexibility to otherwise immutable code. However, they also introduce security concerns. On the one hand, they require extensive security knowledge to implement securely; on the other, they provide new strategic weapons for malicious activities. Thus, it is crucial to fully understand them, especially their security implications in the real world. To this end, we conduct a large-scale study to systematically reveal the status quo of USCs in the wild. To achieve our goal, we develop a complete USC taxonomy to comprehensively characterize the unique behaviors of USCs and further develop USCHUNT, an automated USC analysis framework supporting our study. Our study aims to answer three sets of essential research questions regarding USC importance, design patterns, and security issues. Our results show that USCs are of great importance to today's blockchains, as they hold billions of USD worth of digital assets. Moreover, our study summarizes eleven unique design patterns of USCs and discovers a total of 2,546 real-world USC-related security and safety issues in six major categories.
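As a naive illustration of spotting proxy candidates (not USCHUNT's method), one can scan deployed bytecode for the DELEGATECALL opcode with web3.py (v6 API assumed); real detection must decode instructions properly, since the byte 0xf4 may also occur inside PUSH data, and must analyze where the implementation address is read from:

    from web3 import Web3

    w3 = Web3(Web3.HTTPProvider("https://example-rpc.invalid"))  # placeholder

    def maybe_proxy(address: str) -> bool:
        # DELEGATECALL (0xf4) in runtime bytecode hints at a proxy that
        # forwards calls to a separate implementation contract.
        code = w3.eth.get_code(Web3.to_checksum_address(address))
        return b"\xf4" in bytes(code)

    # print(maybe_proxy("0x..."))  # requires a live RPC endpoint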
x-Fuzz and Fuzz-x
Fuzztruction: Using Fault Injection-based Fuzzing to Leverage Implicit Domain Knowledge
Nils Bars, Moritz Schloegel, Tobias Scharnowski, and Nico Schiller, Ruhr-Universität Bochum; Thorsten Holz, CISPA Helmholtz Center for Information Security
Today's digital communication relies on complex protocols and specifications for exchanging structured messages and data. Communication naturally involves two endpoints: One generating data and one consuming it. Traditional fuzz testing approaches replace one endpoint, the generator, with a fuzzer and rapidly test many mutated inputs on the target program under test. While this fully automated approach works well for loosely structured formats, this does not hold for highly structured formats, especially those that go through complex transformations such as compression or encryption.
In this work, we propose a novel perspective on generating inputs in highly complex formats without relying on heavyweight program analysis techniques, coarse-grained grammar approximation, or a human domain expert. Instead of mutating the inputs for a target program, we inject faults into the data-generating program so that the data it produces is almost of the expected format. Such data bypasses the initial parsing stages in the consumer program and exercises deeper program states, where it triggers more interesting program behavior. To realize this concept, we propose a set of compile-time and run-time analyses to mutate the generator in a targeted manner, so that it remains intact and produces semi-valid outputs that satisfy the constraints of the complex format. We have implemented this approach in a prototype called Fuzztruction and show that it outperforms the state-of-the-art fuzzers AFL++, SYMCC, and WEIZZ. Fuzztruction finds significantly more coverage than existing methods, especially on targets that use cryptographic primitives. During our evaluation, Fuzztruction uncovered 151 unique crashes (after automated deduplication). So far, we have manually triaged and reported 27 bugs, and 4 CVEs have been assigned.
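A toy analogue of the idea: inject a fault into the generator before it computes its checksum, so the output is malformed in its payload yet still passes the consumer's CRC check and reaches the logic behind the parser:

    import random
    import zlib

    def generator(payload: bytes, fault: bool = False) -> bytes:
        data = bytearray(payload)
        if fault:  # fault injection: flip one bit mid-generation
            i = random.randrange(len(data))
            data[i] ^= 1 << random.randrange(8)
        crc = zlib.crc32(bytes(data)).to_bytes(4, "big")  # checksum of the
        return bytes(data) + crc                          # already-faulted data

    def consumer(msg: bytes) -> str:
        data, crc = msg[:-4], msg[-4:]
        if zlib.crc32(data).to_bytes(4, "big") != crc:
            return "rejected at parser"
        return f"deep logic reached with payload {data!r}"

    random.seed(3)
    print(consumer(generator(b"HELLO-PROTOCOL", fault=True)))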
FuzzJIT: Oracle-Enhanced Fuzzing for JavaScript Engine JIT Compiler
Junjie Wang, College of Intelligence and Computing, Tianjin University; Zhiyi Zhang, CodeSafe Team, Qi An Xin Group Corp.; Shuang Liu, College of Intelligence and Computing, Tianjin University; Xiaoning Du, Monash University; Junjie Chen, College of Intelligence and Computing, Tianjin University
We present a novel fuzzing technique, FuzzJIT, for exposing JIT compiler bugs in JavaScript engines, based on our insight that JIT compilers should only speed up execution but never change the execution result of JavaScript code. FuzzJIT activates the JIT compiler for every test case and acutely captures any execution discrepancy caused by the JIT compiler. The key to its success is the design of an input-wrapping template, which proactively activates the JIT compiler and makes the generated samples themselves oracle-aware, so that the oracle is tested spontaneously during execution. We also design a set of mutation strategies that emphasize program elements promising for revealing JIT compiler bugs. FuzzJIT drills into JIT compilers while retaining the high efficiency of fuzzing. We have implemented the design and applied the prototype to find new JIT compiler bugs in four mainstream JavaScript engines. In one month, ten, five, two, and 16 new bugs were exposed in JavaScriptCore, V8, SpiderMonkey, and ChakraCore, respectively, with three demonstrated to be exploitable.
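The wrapper idea can be sketched as a template generator (hypothetical template; the warm-up count and oracle details are placeholders, not FuzzJIT's exact design):

    # Generate a JS test whose result is computed once cold and once
    # after the function is hot enough to be JIT-compiled; a mismatch
    # indicates a JIT bug.
    TEMPLATE = """
    function target(x) {{
      {body}
    }}
    const cold = JSON.stringify(target(1));
    for (let i = 0; i < 100000; i++) target(i);   // force JIT compilation
    const hot = JSON.stringify(target(1));
    if (cold !== hot) throw new Error("JIT discrepancy: " + cold + " vs " + hot);
    """

    def wrap(body: str) -> str:
        return TEMPLATE.format(body=body)

    print(wrap("return (x | 0) + 0.1;"))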
GLeeFuzz: Fuzzing WebGL Through Error Message Guided Mutation
Hui Peng, Purdue University; Zhihao Yao and Ardalan Amiri Sani, UC Irvine; Dave (Jing) Tian, Purdue University; Mathias Payer, EPFL
WebGL is a set of standardized JavaScript APIs for GPU accelerated graphics. Security of the WebGL interface is paramount because it exposes remote and unsandboxed access to the underlying graphics stack (including the native GL libraries and GPU drivers) in the host OS. Unfortunately, applying state-of-the-art fuzzing techniques to the WebGL interface for vulnerability discovery is challenging because of (1) its huge input state space, and (2) the infeasibility of collecting code coverage across concurrent processes, closed-source libraries, and device drivers in the kernel.
Our fuzzing technique, GLeeFuzz, guides input mutation by error messages instead of code coverage. Our key observation is that browsers emit meaningful error messages to aid developers in debugging their WebGL programs. Error messages indicate which part of the input fails (e.g., incomplete arguments, invalid arguments, or unsatisfied dependencies between API calls). Leveraging error messages as feedback, the fuzzer effectively expands coverage by focusing mutation on the erroneous parts of the input. We analyze Chrome's WebGL implementation to identify the dependencies between error-emitting statements and the rejected parts of the input, and use this information to guide input mutation. We evaluated our GLeeFuzz prototype on Chrome, Firefox, and Safari across diverse desktop and mobile OSes. We discovered 7 vulnerabilities: 4 in Chrome, 2 in Safari, and 1 in Firefox. The Chrome vulnerabilities allow a remote attacker to freeze the GPU and possibly execute remote code with the browser's privileges.
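Error-message-guided mutation, in miniature (all names hypothetical; the stub stands in for driving a browser's WebGL API):

    import random
    import re

    def run_webgl_call(args):             # stub for driving the browser
        if args[1] < 0:
            return "INVALID_VALUE: argument 2 must be non-negative"
        return None                        # no error: input accepted

    def mutate(args, focus=None):
        args = list(args)
        i = focus if focus is not None else random.randrange(len(args))
        args[i] += random.randint(1, 5)    # toy mutator: nudge the value
        return args

    args = [3, -2, 7]
    for _ in range(20):
        err = run_webgl_call(args)
        if err is None:
            print("accepted:", args)
            break
        # Parse which argument was rejected and focus mutation on it.
        m = re.search(r"argument (\d+)", err)
        focus = int(m.group(1)) - 1 if m else None   # zero-based index
        args = mutate(args, focus)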
autofz: Automated Fuzzer Composition at Runtime
Yu-Fu Fu, Jaehyuk Lee, and Taesoo Kim, Georgia Institute of Technology
Fuzzing has gained popularity for software vulnerability detection thanks to the tremendous effort that has gone into developing a diverse set of fuzzers. Owing to various fuzzing techniques, most fuzzers demonstrate great performance on their selected targets. Paradoxically, however, this diversity also makes it difficult to select the fuzzers best suited to complex real-world programs, a problem we call the selection burden. Communities have attempted to address this problem by creating standard benchmarks to compare and contrast the performance of fuzzers for a wide range of applications, but the result is always a suboptimal decision: the fuzzer that performs best on average does not guarantee the best outcome for the target of a particular user's interest.
To overcome this problem, we propose an automated, yet non-intrusive, meta-fuzzer called autofz, which maximizes the benefits of existing state-of-the-art fuzzers via dynamic composition. For an end user, this means that instead of spending time selecting which fuzzer to adopt (similar in concept to hyperparameter tuning in ML), one can simply hand all available fuzzers to autofz (similar in concept to AutoML) and achieve the best, optimal result. The key idea is to monitor the runtime progress of the fuzzers, called trends (similar in concept to gradient descent), and make fine-grained adjustments to the resource allocation (e.g., CPU time) of each fuzzer. This is in stark contrast to existing approaches that statically combine a set of fuzzers or rely on exhaustive pre-training per target program: autofz deduces a suitable set of fuzzers for the active workload in a fine-grained manner at runtime. Our evaluation shows that, given the same amount of computational resources, autofz outperforms the best-performing individual fuzzer in 11 out of 12 available benchmarks and beats the best collaborative fuzzing approaches in 19 out of 20 benchmarks, without any prior knowledge, in terms of coverage. Moreover, on average, autofz found 152% more bugs than individual fuzzers on UNIFUZZ and FTS, and 415% more bugs than collaborative fuzzing on UNIFUZZ.
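The trend-driven reallocation can be sketched in a few lines (toy numbers; autofz's actual scheduling is more elaborate):

    # Give each fuzzer a time slice proportional to its recent coverage
    # growth, so the currently effective fuzzer gets most of the CPU.
    def allocate(window_coverage_gains, total_seconds=60, floor=0.05):
        total_gain = sum(window_coverage_gains.values())
        shares = {}
        for fuzzer, gain in window_coverage_gains.items():
            share = (gain / total_gain if total_gain
                     else 1 / len(window_coverage_gains))
            shares[fuzzer] = max(share, floor)   # keep every fuzzer alive
        norm = sum(shares.values())
        return {f: total_seconds * s / norm for f, s in shares.items()}

    gains = {"afl++": 120, "honggfuzz": 30, "libfuzzer": 0}
    print(allocate(gains))   # afl++ receives the bulk of the next round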
CarpetFuzz: Automatic Program Option Constraint Extraction from Documentation for Fuzzing
Dawei Wang, Ying Li, and Zhiyu Zhang, SKLOIS, Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China; Kai Chen, SKLOIS, Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China; Beijing Academy of Artificial Intelligence, China
The large-scale code in modern software supports rich and diverse functionality, but at the same time contains potential vulnerabilities. Fuzzing, one of the most popular vulnerability detection methods, continues evolving in both industry and academia, aiming to find more vulnerabilities by covering more code. However, we find that even with state-of-the-art fuzzers, there is still some unexplored code that can only be triggered using a specific combination of program options. Simply mutating the options may generate many invalid combinations because the constraints (or relationships) among options are not taken into account. In this paper, we leverage natural language processing (NLP) to automatically extract option descriptions from program documents and analyze the relationships (e.g., conflicts, dependencies) among the options, filtering out invalid combinations and leaving only valid ones for fuzzing. We implemented a tool called CarpetFuzz and evaluated its performance. The results show that CarpetFuzz accurately extracts the relationships from documents with 96.10% precision and 88.85% recall. Based on these relationships, CarpetFuzz filtered out 67.91% of the option combinations to be tested. It helped AFL find 45.97% more paths that other fuzzers could not discover. After analyzing 20 popular open-source programs, CarpetFuzz discovered 57 vulnerabilities, including 43 undisclosed ones. We have also obtained CVE IDs for 30 of the vulnerabilities.
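The filtering step, given extracted relations, reduces the combination space as in this sketch (toy option set):

    # Once NLP has extracted conflict and dependency relations from the
    # documentation, invalid option combinations are discarded before
    # fuzzing.
    from itertools import combinations

    options = ["-a", "-b", "-c", "-d"]
    conflicts = {("-a", "-b")}           # "-a cannot be used with -b"
    depends = {"-c": "-a"}               # "-c requires -a"

    def valid(combo):
        s = set(combo)
        if any(x in s and y in s for x, y in conflicts):
            return False
        return all(req in s for opt, req in depends.items() if opt in s)

    combos = [c for r in range(1, len(options) + 1)
              for c in combinations(options, r) if valid(c)]
    print(len(combos), "valid combinations, e.g.", combos[:4])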
4:30 pm–4:45 pm
Short Break
4:45 pm–6:00 pm
Cache Attacks
SCARF: A Low-Latency Block Cipher for Secure Cache-Randomization
Federico Canale, Tim Güneysu, Gregor Leander, and Jan Philipp Thoma, Horst Görtz Institute for IT-Security, Ruhr University Bochum; Yosuke Todo, NTT Social Informatics Laboratories; Rei Ueno, Tohoku University
The Gates of Time: Improving Cache Attacks with Transient Execution
Daniel Katzman, Tel Aviv University; William Kosasih, The University of Adelaide; Chitchanok Chuengsatiansup, The University of Melbourne; Eyal Ronen, Tel Aviv University; Yuval Yarom, The University of Adelaide
For over two decades, cache attacks have been shown to pose a significant risk to the security of computer systems. In particular, a large number of works show that cache attacks provide a stepping stone for implementing transient-execution attacks. However, much less effort has been expended investigating the reverse direction—how transient execution can be exploited for cache attacks. In this work, we answer this question.
We first show that using transient execution, we can perform arbitrary manipulations of the cache state. Specifically, we design versatile logical gates whose inputs and outputs are the caching state of memory addresses. Our gates are generic enough that we can implement them in WebAssembly. Moreover, the gates work on processors from multiple vendors, including Intel, AMD, Apple, and Samsung. We demonstrate that these gates are Turing complete and allow arbitrary computation on cache states, without exposing the logical values to the architectural state of the program.
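A toy software model of such gates (purely illustrative: the real gates are built from transient accesses and evictions in hardware, not Python). Cache state is modeled as a set of hot addresses, and a gate "caches" its output address depending on its inputs' caching state:

    cached = set()  # which memory lines are currently hot

    def access(addr):
        cached.add(addr)

    def gate_or(inputs, output):
        # Access the output line only if some input line is cached;
        # in the real attack this happens inside a transient window.
        if any(a in cached for a in inputs):
            access(output)

    def gate_and(inputs, output):
        if all(a in cached for a in inputs):
            access(output)

    access("A")                      # encode A=1, B=0
    gate_or(["A", "B"], "OUT")
    print("OUT" in cached)           # True: OUT = A OR B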
We then show two use cases for our gates in cache attacks. The first use case is to amplify the cache state, allowing us to create timing differences of over 100 milliseconds depending on whether a specific memory address is cached. We show how we can use this capability to build eviction sets in WebAssembly, using only a low-resolution (0.1 millisecond) timer. For the second use case, we present the Prime+Store attack, a variant of Prime+Probe that decouples the sampling of the cache state from the measurement of said state. Prime+Store is the first timing-based cache attack that can sample the cache state at a rate higher than the clock rate. We show how to use Prime+Store to obtain bits from a concurrently executing modular exponentiation, when the only timing signal is at a resolution of 0.1 millisecond.
Synchronization Storage Channels (S2C): Timer-less Cache Side-Channel Attacks on the Apple M1 via Hardware Synchronization Instructions
Jiyong Yu and Aishani Dutta, University of Illinois Urbana-Champaign; Trent Jaeger, Pennsylvania State University; David Kohlbrenner, University of Washington; Christopher Fletcher, University of Illinois Urbana-Champaign
ClepsydraCache -- Preventing Cache Attacks with Time-Based Evictions
Jan Philipp Thoma, Ruhr University Bochum; Christian Niesler, University of Duisburg-Essen; Dominic Funke, Gregor Leander, Pierre Mayr, and Nils Pohl, Ruhr University Bochum; Lucas Davi, University of Duisburg-Essen; Tim Güneysu, Ruhr University Bochum & DFKI
In the recent past, we have witnessed a shift towards attacks at the microarchitectural CPU level. In particular, cache side channels play a predominant role, as they allow an attacker to exfiltrate secret information by exploiting the CPU microarchitecture. These subtle attacks exploit the architectural visibility of conflicting cache addresses. In this paper, we present ClepsydraCache, which mitigates state-of-the-art cache attacks using a novel combination of cache decay and index randomization. Each cache entry is linked with a Time-To-Live (TTL) value. We propose a new dynamic scheduling mechanism for the TTL, which plays a fundamental role in preventing those attacks while maintaining performance. ClepsydraCache efficiently protects against the latest cache attacks such as Prime+(Prune+)Probe. We present a full prototype in gem5 and lay out a proof-of-concept hardware design of the TTL mechanism, which demonstrates the feasibility of deploying ClepsydraCache in real-world systems.
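A minimal logical-clock sketch of the cache-decay idea (illustrative only; ClepsydraCache implements this in hardware together with randomized indexing):

    class TTLCache:
        def __init__(self, ttl):
            self.ttl, self.inserted, self.now = ttl, {}, 0

        def tick(self, n=1):
            # Cache decay: expired entries leave silently, so an attacker
            # never observes a conflict-based eviction.
            self.now += n
            self.inserted = {k: t for k, t in self.inserted.items()
                             if self.now - t < self.ttl}

        def access(self, key):
            hit = key in self.inserted
            self.inserted[key] = self.now  # insert or refresh the entry
            return hit

    cache = TTLCache(ttl=100)
    cache.access("victim_line")
    cache.tick(150)
    print(cache.access("victim_line"))  # False: decayed without conflict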
CacheQL: Quantifying and Localizing Cache Side-Channel Vulnerabilities in Production Software
Yuanyuan Yuan, Zhibo Liu, and Shuai Wang, The Hong Kong University of Science and Technology
Cache side-channel attacks extract secrets by examining how victim software accesses cache. To date, practical attacks on crypto systems and media libraries are demonstrated under different scenarios, inferring secret keys from crypto algorithms and reconstructing private media data such as images.
This work first presents eight criteria for designing a full-fledged detector for cache side-channel vulnerabilities. Then, we propose CacheQL, a novel detector that meets all of these criteria. CacheQL precisely quantifies information leaks of binary code by characterizing the distinguishability of logged side-channel traces. Moreover, CacheQL models leakage as a cooperative game, allowing information leakage to be precisely distributed to the program points vulnerable to cache side channels. CacheQL is meticulously optimized to analyze whole side-channel traces logged from production software (where each trace can have millions of records), and it alleviates randomness introduced by crypto blinding, ORAM, or real-world noise.
Our evaluation quantifies side-channel leaks of production crypto and media software. We further localize vulnerabilities reported by previous detectors and also identify a few hundred new vulnerable program points in recent versions of OpenSSL (ver. 3.0.0), MbedTLS (ver. 3.0.0), and Libgcrypt (ver. 1.9.4). Many of our localized program points are within the pre-processing modules of crypto libraries, which existing works do not analyze due to scalability limitations. We also localize vulnerabilities in Libjpeg (ver. 2.1.2) that leak privacy about input images.
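The cooperative-game view assigns each program point its Shapley value, i.e., its average marginal contribution to the observed leakage. An exact computation for a toy two-point example (the leakage function here is a hypothetical stand-in for CacheQL's quantification):

    from itertools import permutations

    def shapley(points, leak):
        # Average each point's marginal leakage contribution over all
        # orderings; leak() maps a frozenset of points to bits leaked.
        phi = {p: 0.0 for p in points}
        orders = list(permutations(points))
        for order in orders:
            seen = set()
            for p in order:
                before = leak(frozenset(seen))
                seen.add(p)
                phi[p] += leak(frozenset(seen)) - before
        return {p: v / len(orders) for p, v in phi.items()}

    bits = {frozenset(): 0, frozenset({"p1"}): 2,
            frozenset({"p2"}): 1, frozenset({"p1", "p2"}): 4}
    print(shapley(["p1", "p2"], lambda s: bits[s]))  # {'p1': 2.5, 'p2': 1.5}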
Authentication
InfinityGauntlet: Expose Smartphone Fingerprint Authentication to Brute-force Attack
Yu Chen and Yang Yu, Xuanwu Lab, Tencent; Lidong Zhai, Institute of Information Engineering, Chinese Academy of Sciences
Billions of smartphone fingerprint authentications (SFA) occur daily for unlocking, privacy and payment. Existing threats to SFA include presentation attacks (PA) and some case-by-case vulnerabilities. The former need to know the victim's fingerprint information (e.g., latent fingerprints) and can be mitigated by liveness detection and security policies. The latter require additional conditions (e.g., third-party screen protector, root permission) and are only exploitable for individual smartphone models.
In this paper, we conduct the first investigation of a general zero-knowledge attack on SFA, in which no knowledge about the victim is needed. We propose a novel fingerprint brute-force attack on off-the-shelf smartphones, named InfinityGauntlet. First, we discover design vulnerabilities in SFA systems across various manufacturers, operating systems, and fingerprint types that allow unlimited authentication attempts. Then, we use a SPI man-in-the-middle (MITM) to bypass liveness detection and make automatic attempts. Finally, we customize a synthetic fingerprint generator to build a valid brute-force fingerprint dictionary.
We design and implement low-cost equipment to launch InfinityGauntlet. A proof-of-concept case study demonstrates that InfinityGauntlet can successfully brute-force fingerprint authentication in less than an hour without any knowledge of the victim. Additionally, empirical analysis on representative smartphones shows the scalability of our work.
A Study of Multi-Factor and Risk-Based Authentication Availability
Anthony Gavazzi, Ryan Williams, Engin Kirda, and Long Lu, Northeastern University; Andre King, Andy Davis, and Tim Leek, MIT Lincoln Laboratory
Password-based authentication (PBA) remains the most popular form of user authentication on the web despite its long-understood insecurity. Given the deficiencies of PBA, many online services support multi-factor authentication (MFA) and/or risk-based authentication (RBA) to better secure user accounts. The security, usability, and implementations of MFA and RBA have been studied extensively, but attempts to measure their availability among popular web services have lacked breadth. Additionally, no study has analyzed MFA and RBA prevalence together or how the presence of Single-Sign-On (SSO) providers affects the availability of MFA and RBA on the web.
In this paper, we present a study of 208 popular sites in the Tranco top 5K that support account creation to understand the availability of MFA and RBA on the web, the additional authentication factors that can be used for MFA and RBA, and how logging into sites through more secure SSO providers changes the landscape of user authentication security. We find that only 42.31% of sites support any form of MFA, and only 22.12% of sites block an obvious account hijacking attempt. Though most sites do not offer MFA or RBA, SSO completely changes the picture. If one were to create an account for each site through an SSO provider that offers MFA and/or RBA, whenever available, 80.29% of sites would have access to MFA and 72.60% of sites would stop an obvious account hijacking attempt. However, this proliferation through SSO comes with a privacy trade-off, as nearly all SSO providers that support MFA and RBA are major third-party trackers.
A Large-Scale Measurement of Website Login Policies
Suood Al Roomi, Georgia Institute of Technology, Kuwait University; Frank Li, Georgia Institute of Technology
Security and Privacy Failures in Popular 2FA Apps
Conor Gilsenan, UC Berkeley / ICSI; Fuzail Shakir and Noura Alomar, UC Berkeley; Serge Egelman, UC Berkeley / ICSI
The Time-based One-Time Password (TOTP) algorithm is a 2FA method that is widely deployed because of its relatively low implementation costs and purported security benefits over SMS 2FA. However, users of TOTP 2FA apps face a critical usability challenge: maintain access to the secrets stored within the TOTP app, or risk getting locked out of their accounts. To help users avoid this fate, popular TOTP apps implement a wide range of backup mechanisms, each with varying security and privacy implications. In this paper, we define an assessment methodology for conducting systematic security and privacy analyses of the backup and recovery functionality of TOTP apps. We identified all general purpose Android TOTP apps in the Google Play Store with at least 100k installs that implemented a backup mechanism (n = 22). Our findings show that most backup strategies end up placing trust in the same technologies that TOTP 2FA is meant to supersede: passwords, SMS, and email. Many backup implementations shared personal user information with third parties, had serious cryptographic flaws, and/or allowed the app developers to access the TOTP secrets in plaintext. We present our findings and recommend ways to improve the security and privacy of TOTP 2FA app backup mechanisms.
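For context, the TOTP computation (RFC 6238) whose per-site secrets those backup mechanisms must protect fits in a few lines of Python:

    import hashlib, hmac, struct, time

    def totp(secret, at=None, step=30, digits=6):
        # RFC 6238: HMAC-SHA1 over the time-step counter, then RFC 4226
        # dynamic truncation down to a short decimal code.
        counter = int((time.time() if at is None else at) // step)
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp(b"12345678901234567890", at=59))  # RFC 6238 vector: "287082"

Whatever the backup scheme, it is these raw secrets that must not end up with third parties or in developer-readable plaintext.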
Multi-Factor Key Derivation Function (MFKDF) for Fast, Flexible, Secure, & Practical Key Management
Vivek Nair and Dawn Song, University of California, Berkeley
We present the first general construction of a Multi-Factor Key Derivation Function (MFKDF). Our function expands upon password-based key derivation functions (PBKDFs) with support for using other popular authentication factors like TOTP, HOTP, and hardware tokens in the key derivation process. In doing so, it provides an exponential security improvement over PBKDFs with less than 12 ms of additional computational overhead in a typical web browser. We further present a threshold MFKDF construction, allowing for client-side key recovery and reconstitution if a factor is lost. Finally, by "stacking" derived keys, we provide a means of cryptographically enforcing arbitrarily specific key derivation policies. The result is a paradigm shift toward direct cryptographic protection of user data using all available authentication factors, with no noticeable change to the user experience. We demonstrate the ability of our solution to not only significantly improve the security of existing systems implementing PBKDFs, but also to enable new applications where PBKDFs would not be considered a feasible approach.
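The gist of deriving a single key from several factors can be sketched as follows (an illustration of the concept only, not the authors' construction, which must also handle factors that change over time, such as TOTP, via stored public parameters, and which supports thresholds and recovery):

    import hashlib, hmac

    def multi_factor_key(password, totp_material, hw_material, salt,
                         iterations=600_000):
        # Stretch the password as a PBKDF would, then chain in material
        # from each additional factor so that all of them are required.
        key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
        for factor in (totp_material, hw_material):
            key = hmac.new(key, factor, hashlib.sha256).digest()
        return key

    key = multi_factor_key(b"correct horse", b"287082", b"\x13" * 32,
                           b"per-user-salt")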
Private Data Leaks
Log: It’s Big, It’s Heavy, It’s Filled with Personal Data! Measuring the Logging of Sensitive Information in the Android Ecosystem
Allan Lyons, University of Calgary; Julien Gamba, IMDEA Networks Institute and Universidad Carlos III de Madrid; Austin Shawaga, University of Calgary; Joel Reardon, University of Calgary and AppCensus, Inc.; Juan Tapiador, Universidad Carlos III de Madrid; Serge Egelman, ICSI and UC Berkeley and AppCensus, Inc.; Narseo Vallina-Rodriguez, IMDEA Networks Institute and AppCensus, Inc.
Android offers a shared logging system that multiplexes all logged data from all system components, including both the operating system and the console output of apps that run on it. A security mechanism ensures that user-space apps can only read the log entries they create, though many "privileged" apps are exempt from this restriction. These include preloaded system apps provided by Google, the phone manufacturer, and the cellular carrier, as well as apps sharing the same signature. Consequently, Google advises developers not to log sensitive information to the system log.
In this work, we examined the logging of sensitive data in the Android ecosystem. Using a field study, we show that most devices log some amount of user-identifying information. We show that the logging of "activity" names can inadvertently reveal information about users through their app usage. We also tested whether different smartphones log personal identifiers by default, examined preinstalled apps that access the system logs, and analyzed the privacy policies of manufacturers that report collecting system logs.
CodexLeaks: Privacy Leaks from Code Generation Language Models in GitHub Copilot
Liang Niu and Shujaat Mirza, New York University; Zayd Maradni and Christina Pöpper, New York University Abu Dhabi
Freaky Leaky SMS: Extracting User Locations by Analyzing SMS Timings
Evangelos Bitsikas, Northeastern University; Theodor Schnitzler, Research Center Trustworthy Data Science and Security and TU Dortmund; Christina Pöpper, New York University Abu Dhabi; Aanjhan Ranganathan, Northeastern University
The Writing on the Wall and 3D Digital Twins: Personal Information in (not so) Private Real Estate
Rachel McAmis and Tadayoshi Kohno, University of Washington
Online real estate companies are starting to offer 3D virtual tours of homes (3D digital twins). We qualitatively analyzed 44 3D home tours with personal artifacts visible on Zillow and assessed each home for the extent and type of personal information shared. Using a codebook we created, we analyzed three categories of personal information in each home: government-provided guidance of what not to share on the internet, identity information, and behavioral information. Our analysis unearthed a wide variety of sensitive information across all homes, including names, hobbies, employment and education history, product preferences (e.g., pantry items, types of cigarettes), medications, credit card numbers, passwords, and more. Based on our analysis, residents both employed privacy protections and had privacy oversights. We identify potential adversaries that might use 3D tour information, highlight additional sensitive sources of indoor space information, and discuss future tools and policy changes that could address these issues.
Generative AI
Glaze: Protecting Artists from Style Mimicry by Text-to-Image Models
Shawn Shan, Jenna Cryan, Emily Wenger, Haitao Zheng, Rana Hanocka, and Ben Y. Zhao, University of Chicago
Lost at C: A User Study on the Security Implications of Large Language Model Code Assistants
Gustavo Sandoval, Hammond Pearce, Teo Nys, Ramesh Karri, Siddharth Garg, and Brendan Dolan-Gavitt, New York University
Large Language Models (LLMs) such as OpenAI Codex are increasingly being used as AI-based coding assistants. Understanding the impact of these tools on developers’ code is paramount, especially as recent work showed that LLMs may suggest cybersecurity vulnerabilities. We conduct a security-driven user study (N=58) to assess code written by student programmers when assisted by LLMs. Given the potential severity of low-level bugs as well as their relative frequency in real-world projects, we tasked participants with implementing a singly-linked ‘shopping list’ structure in C. Our results indicate that the security impact in this setting (low-level C with pointer and array manipulations) is small: AI-assisted users produce critical security bugs at a rate no greater than 10% more than the control, indicating the use of LLMs does not introduce new security risks.
Two-in-One: A Model Hijacking Attack Against Text Generation Models
Wai Man Si, Michael Backes, and Yang Zhang, CISPA Helmholtz Center for Information Security; Ahmed Salem, Microsoft Research
PTW: Pivotal Tuning Watermarking for Pre-Trained Image Generators
Nils Lukas and Florian Kerschbaum, University of Waterloo
Security Worker Perspectives
Lessons Lost: Incident Response in the Age of Cyber Insurance and Breach Attorneys
Daniel W. Woods, University of Edinburgh; Rainer Böhme, University of Innsbruck; Josephine Wolff, Tufts University; Daniel Schwarcz, University of Minnesota
Incident Response (IR) allows victim firms to detect, contain, and recover from security incidents. It should also help the wider community avoid similar attacks in the future. In pursuit of these goals, technical practitioners are increasingly influenced by stakeholders like cyber insurers and lawyers. This paper explores these impacts via a multi-stage, mixed-methods research design that involved 69 expert interviews, data on commercial relationships, and an online validation workshop. The first stage of our study established 11 stylized facts that describe how cyber insurance sends work to a small number of IR firms, drives down the fee paid, and appoints lawyers to direct technical investigators. The second stage showed that lawyers, when directing incident response, often: introduce legalistic contractual and communication steps that slow down incident response; advise IR practitioners not to write down remediation steps or produce formal reports; and restrict access to any documents produced.
Bug Hunters’ Perspectives on the Challenges and Benefits of the Bug Bounty Ecosystem
Omer Akgul, University of Maryland; Taha Eghtesad, Pennsylvania State University; Amit Elazari, University of California, Berkeley; Omprakash Gnawali, University of Houston; Jens Grossklags, Technical University of Munich; Michelle L. Mazurek, University of Maryland; Daniel Votipka, Tufts University; Aron Laszka, Pennsylvania State University
Although researchers have characterized the bug-bounty ecosystem from the point of view of platforms and programs, minimal effort has been made to understand the perspectives of the main workers: bug hunters. To improve bug bounties, it is important to understand hunters’ motivating factors, challenges, and overall benefits. We address this research gap with three studies: identifying key factors through a free listing survey (n=56), rating each factor’s importance with a larger-scale factor-rating survey (n=159), and conducting semi-structured interviews to uncover details (n=24). Of 54 factors that bug hunters listed, we find that rewards and learning opportunities are the most important benefits. Further, we find scope to be the top differentiator between programs. Surprisingly, we find earning reputation to be one of the least important motivators for hunters. Of the challenges we identify, communication problems, such as unresponsiveness and disputes, are the most substantial. We present recommendations to make the bug-bounty ecosystem accommodating to more bug hunters and ultimately increase participation in an underutilized market.
Work-From-Home and COVID-19: Trajectories of Endpoint Security Management in a Security Operations Center
Kailani R. Jones and Dalton A. Brucker-Hahn, University of Kansas; Bradley Fidler, Independent Researcher; Alexandru G. Bardas, University of Kansas
The COVID-19 surge of "Work From Home" (WFH) Internet use incentivized many organizations to strengthen their endpoint security monitoring capabilities. This trend has significant implications for how Security Operations Centers (SOCs) manage these end devices on their enterprise networks: in their organizational roles, regulatory environment, and required skills. By combining historical analysis (starting in the 1970s) and ethnography (352 field notes analyzed across 1,000+ hours in a SOC over 34 months), complemented by quantitative interviews (covering 7 other SOCs), we uncover causal forces that have pushed network management toward endpoints. We further highlight the negative impacts on end-user privacy and analyst burnout. As such, we assert that SOCs should prepare for a continual, long-term shift from managing the network perimeter and the associated devices to commanding the actual user endpoints, while facing potential privacy challenges and more burnout.
“Employees Who Don’t Accept the Time Security Takes Are Not Aware Enough”: The CISO View of Human-Centred Security
Jonas Hielscher and Uta Menges, Ruhr University Bochum; Simon Parkin, TU Delft; Annette Kluge and M. Angela Sasse, Ruhr University Bochum
In larger organisations, the security controls and policies that protect employees are typically managed by a Chief Information Security Officer (CISO). In research, industry, and policy, there are increasing efforts to relate principles of human behaviour interventions and influence to the practice of the CISO, despite these being complex disciplines in their own right. Here we explore how well the concepts of human-centred security (HCS) have survived exposure to the needs of practice: in an action research approach, we engaged with n=30 members of a Swiss-based community of CISOs in five workshop sessions over the course of 8 months, dedicated to discussing HCS. We coded and analysed over 25 hours of notes we took during the discussions. We found that CISOs first and foremost perceive HCS as what is available on the market, namely awareness campaigns and phishing simulations. While they regularly shift responsibility either to management (by demanding more support) or to employees (by blaming them), we see a lack of power but also silo thinking that prevents CISOs from considering the actual human behaviour and the friction that security causes for employees. We conclude that industry best practices and the state of the art in HCS research are not aligned.
Deep Thoughts on Deep Learning
Aegis: Mitigating Targeted Bit-flip Attacks against Deep Neural Networks
Jialai Wang, Tsinghua University; Ziyuan Zhang, Beijing University of Posts and Telecommunications; Meiqi Wang, Tsinghua University; Han Qiu, Tsinghua University and Zhongguancun Laboratory; Tianwei Zhang, Nanyang Technological University; Qi Li, Tsinghua University and Zhongguancun Laboratory; Zongpeng Li, Tsinghua University and Hangzhou Dianzi University; Tao Wei, Ant Group; Chao Zhang, Tsinghua University and Zhongguancun Laboratory
Bit-flip attacks (BFAs) have attracted substantial attention recently, in which an adversary tampers with a small number of model parameter bits to break the integrity of DNNs. To mitigate such threats, a batch of defense methods have been proposed, focusing on untargeted scenarios. Unfortunately, they either require extra trustworthy applications or make models more vulnerable to targeted BFAs. Countermeasures against targeted BFAs, which are stealthier and more purposeful by nature, are far from well established.
In this work, we propose Aegis, a novel defense method to mitigate targeted BFAs. The core observation is that existing targeted attacks focus on flipping critical bits in certain important layers. Thus, we design a dynamic-exit mechanism that attaches extra internal classifiers (ICs) to hidden layers. This mechanism enables input samples to exit early from different layers, which effectively upsets the adversary's attack plans. Moreover, the dynamic-exit mechanism randomly selects ICs for predictions during each inference, significantly increasing the attack cost for adaptive attacks in which all defense mechanisms are transparent to the adversary. We further propose a robustness training strategy to adapt ICs to the attack scenarios by simulating BFAs during the IC training phase, increasing model robustness. Extensive evaluations over four well-known datasets and two popular DNN structures reveal that Aegis effectively mitigates different state-of-the-art targeted attacks, reducing the attack success rate by 5-10x and significantly outperforming existing defense methods. We open-source the code of Aegis.
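A toy PyTorch sketch of the dynamic-exit idea (illustrative; the layer sizes and exit policy below are our assumptions, not Aegis's implementation):

    import random
    import torch
    import torch.nn as nn

    class DynamicExitNet(nn.Module):
        def __init__(self, width=64, depth=4, classes=10):
            super().__init__()
            self.blocks = nn.ModuleList(
                [nn.Sequential(nn.Linear(width, width), nn.ReLU())
                 for _ in range(depth)])
            # One internal classifier (IC) per hidden layer.
            self.ics = nn.ModuleList(
                [nn.Linear(width, classes) for _ in range(depth)])

        def forward(self, x):
            exit_at = random.randrange(len(self.blocks))  # per-inference pick
            for i, block in enumerate(self.blocks):
                x = block(x)
                if i == exit_at:
                    return self.ics[i](x)  # early exit via a random IC

    logits = DynamicExitNet()(torch.randn(1, 64))

Because the exit layer changes on every inference, an adversary cannot rely on a fixed set of "critical bits" along one forward path.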
Rethinking White-Box Watermarks on Deep Learning Models under Neural Structural Obfuscation
Yifan Yan, Xudong Pan, Mi Zhang, and Min Yang, Fudan University
Copyright protection for deep neural networks (DNNs) is an urgent need for AI corporations. To trace illegally distributed model copies, DNN watermarking is an emerging technique for embedding and verifying secret identity messages in the prediction behaviors or the model internals. Sacrificing less functionality and involving more knowledge about the target DNN, the latter branch, called white-box DNN watermarking, is believed to be accurate, credible, and secure against most known watermark removal attacks, with emerging research efforts in both academia and industry.
In this paper, we present the first systematic study on how the mainstream white-box DNN watermarks are commonly vulnerable to neural structural obfuscation with dummy neurons, a group of neurons which can be added to a target model but leave the model behavior invariant. Devising a comprehensive framework to automatically generate and inject dummy neurons with high stealthiness, our novel attack intensively modifies the architecture of the target model to inhibit the success of watermark verification. With extensive evaluation, our work for the first time shows that nine published watermarking schemes require amendments to their verification procedures.
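The invariance trick behind dummy neurons is easy to demonstrate: give two new neurons identical incoming weights and opposite outgoing weights, and their post-ReLU contributions cancel exactly. A NumPy sketch with toy dimensions (not the paper's generation framework, which also optimizes for stealthiness):

    import numpy as np

    rng = np.random.default_rng(0)
    relu = lambda z: np.maximum(z, 0)
    f = lambda x, A, B: B @ relu(A @ x)          # toy one-hidden-layer MLP

    W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(3, 8))

    # Dummy neurons: identical incoming weights, opposite outgoing weights.
    # Behavior is invariant, but the weight layout a white-box watermark
    # verification relies on is scrambled.
    w_in = rng.normal(size=(2, 4))
    v_out = rng.normal(size=(3, 2))
    W1_obf = np.vstack([W1, w_in, w_in])         # 8 -> 12 hidden neurons
    W2_obf = np.hstack([W2, v_out, -v_out])

    x = rng.normal(size=4)
    print(np.allclose(f(x, W1, W2), f(x, W1_obf, W2_obf)))  # True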
PELICAN: Exploiting Backdoors of Naturally Trained Deep Learning Models In Binary Code Analysis
Zhuo Zhang, Guanhong Tao, Guangyu Shen, Shengwei An, Qiuling Xu, Yingqi Liu, and Yapeng Ye, Purdue University; Yaoxuan Wu, University of California, Los Angeles; Xiangyu Zhang, Purdue University
Deep Learning (DL) models are increasingly used in many cyber-security applications and achieve superior performance compared to traditional solutions. In this paper, we study backdoor vulnerabilities in naturally trained models used in binary analysis. These backdoors are not injected by attackers but are rather products of defects in datasets and/or training processes. The attacker can exploit these vulnerabilities by injecting a small fixed input pattern (e.g., an instruction), called a backdoor trigger, into their input (e.g., a binary code snippet for a malware detection DL model) such that misclassification can be induced (e.g., the malware evades detection). We focus on transformer models used in binary analysis. Given a model, we leverage a trigger inversion technique particularly designed for these models to derive trigger instructions that can induce misclassification. During an attack, we utilize a novel trigger injection technique to insert the trigger instruction(s) into the input binary code snippet. The injection ensures that the code snippet's original program semantics are preserved and that the trigger becomes an integral part of those semantics, so it cannot be easily eliminated. We evaluate our prototype PELICAN on 5 binary analysis tasks and 15 models. The results show that PELICAN can effectively induce misclassification on all the evaluated models in both white-box and black-box scenarios. Our case studies demonstrate that PELICAN can exploit the backdoor vulnerabilities of two closed-source commercial tools.
IvySyn: Automated Vulnerability Discovery in Deep Learning Frameworks
Neophytos Christou, Di Jin, and Vaggelis Atlidakis, Brown University; Baishakhi Ray, Columbia University; Vasileios P. Kemerlis, Brown University
We present IvySyn, the first fully-automated framework for discovering memory error vulnerabilities in Deep Learning (DL) frameworks. IvySyn leverages the statically-typed nature of native APIs in order to automatically perform type-aware, mutation-based fuzzing on low-level kernel code. Given a set of offending inputs that trigger memory safety (and runtime) errors in low-level, native DL (C/C++) code, IvySyn automatically synthesizes code snippets in high-level languages (e.g., in Python), which propagate error-triggering input via high(er)-level APIs. Such code snippets essentially act as "Proofs of Vulnerability", as they demonstrate the existence of bugs in native code that an attacker can target through various high-level APIs. Our evaluation shows that IvySyn significantly outperforms past approaches, both in terms of efficiency and effectiveness, in finding vulnerabilities in popular DL frameworks. Specifically, we used IvySyn to test TensorFlow and PyTorch. Although still an early prototype, IvySyn has already helped the TensorFlow and PyTorch developers identify and fix 61 previously unknown security vulnerabilities, and assign 39 unique CVEs.
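A sketch of what type-aware boundary mutation can look like for a typed native signature (the signature, type names, and values below are hypothetical, not IvySyn's code):

    import itertools

    # Hypothetical typed signature of a native DL kernel.
    SIG = {"shape": "int64[]", "axis": "int64", "data": "float32[]"}

    BOUNDARY = {
        "int64": [0, -1, 2 ** 31, 2 ** 63 - 1],
        "int64[]": [[], [0], [-1, -1], [2 ** 31] * 8],
        "float32[]": [[], [float("nan")], [float("inf")] * 4],
    }

    def candidates(sig):
        # Cross product of type-aware boundary values for every argument.
        names = list(sig)
        for combo in itertools.product(*(BOUNDARY[sig[n]] for n in names)):
            yield dict(zip(names, combo))

    for args in candidates(SIG):
        pass  # call the native kernel with `args` and watch for crashes
    print(sum(1 for _ in candidates(SIG)), "argument combinations")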
6:00 pm–7:30 pm
Symposium Reception and Presentation of the USENIX Lifetime Achievement Award
Thursday, August 10
8:00 am–9:00 am
Continental Breakfast
9:00 am–10:15 am
Smart? Assistants
Hey Kimya, Is My Smart Speaker Spying on Me? Taking Control of Sensor Privacy Through Isolation and Amnesia
Piet De Vaere and Adrian Perrig, ETH Zürich
Although smart speakers and other voice assistants are becoming increasingly ubiquitous, their always-standby nature continues to prompt significant privacy concerns. To address these, we propose Kimya, a hardening framework that allows device vendors to provide strong data-privacy guarantees. Concretely, Kimya guarantees that microphone data can only be used for local processing, and is immediately discarded unless a user-auditable notification is generated. Kimya thus makes devices accountable for their data-retention behavior. Moreover, Kimya is not limited to voice assistants, but is applicable to all devices with always-standby, event-triggered sensors. We implement Kimya for ARM Cortex-M, and apply it to a wake-word detection engine. Our evaluation shows that Kimya introduces low overhead, can be used in constrained environments, and does not require hardware modifications.
Spying through Your Voice Assistants: Realistic Voice Command Fingerprinting
Dilawer Ahmed, Aafaq Sabir, and Anupam Das, North Carolina State University
QFA2SR: Query-Free Adversarial Transfer Attacks to Speaker Recognition Systems
Guangke Chen, Yedi Zhang, and Zhe Zhao, ShanghaiTech University; Fu Song, ShanghaiTech University; Automotive Software Innovation Center; Institute of Software, Chinese Academy of Sciences & University of Chinese Academy of Sciences
Learning Normality is Enough: A Software-based Mitigation against Inaudible Voice Attacks
Xinfeng Li, Xiaoyu Ji, and Chen Yan, USSLAB, Zhejiang University; Chaohao Li, USSLAB, Zhejiang University and Hangzhou Hikvision Digital Technology Co., Ltd.; Yichen Li, Hong Kong University of Science and Technology; Zhenning Zhang, University of Illinois at Urbana-Champaign; Wenyuan Xu, USSLAB, Zhejiang University
Inaudible voice attacks silently inject malicious voice commands into voice assistants to manipulate voice-controlled devices such as smart speakers. To alleviate such threats for both existing and future devices, this paper proposes NormDetect, a software-based mitigation that can be instantly applied to a wide range of devices without requiring any hardware modification. To overcome the challenge that attack patterns vary between devices, we design a universal detection model that does not rely on audio features or samples derived from specific devices. Unlike existing studies' supervised learning approach, we adopt unsupervised learning inspired by anomaly detection. Though the patterns of inaudible voice attacks are diverse, we find that benign audio shares similar patterns in the time-frequency domain. Therefore, we can detect attacks (the anomaly) by learning the patterns of benign audio (the normality). NormDetect maps spectrum features to a low-dimensional space, performs similarity queries, and replaces them with standard feature embeddings for spectrum reconstruction. This results in a significantly larger reconstruction error for attacks than for normal audio. Evaluation on the 383,320 test samples we collected from 24 smart devices shows an average AUC of 99.48% and an EER of 2.23%, suggesting the effectiveness of NormDetect in detecting inaudible voice attacks.
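The normality-learning idea in miniature, using PCA reconstruction error over stand-in spectral features (NormDetect's actual model and features are more elaborate; everything below is illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    benign = rng.normal(size=(500, 64))  # stand-in benign spectral features

    # "Learn normality": a low-dimensional subspace of benign spectra.
    mean = benign.mean(axis=0)
    _, _, Vt = np.linalg.svd(benign - mean, full_matrices=False)
    basis = Vt[:8]

    def recon_error(x):
        z = (x - mean) @ basis.T            # project onto the normal subspace
        return np.linalg.norm(x - (mean + z @ basis))

    threshold = np.quantile([recon_error(b) for b in benign], 0.99)
    attack = rng.normal(loc=3.0, size=64)   # out-of-distribution "attack"
    print(recon_error(attack) > threshold)  # True: flagged as anomalous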
Powering for Privacy: Improving User Trust in Smart Speaker Microphones with Intentional Powering and Perceptible Assurance
Youngwook Do, Nivedita Arora, and Ali Mirzazadeh, Georgia Institute of Technology; Injoo Moon, MIT Traverso Lab; Eryue Xu, Georgia Institute of Technology; Zhihan Zhang, Paul G. Allen School of Computer Science & Engineering, University of Washington; Gregory Abowd, Northeastern University; Sauvik Das, Carnegie Mellon University
Security-Adjacent Worker Perspectives
To Cloud or not to Cloud: A Qualitative Study on Self-Hosters’ Motivation, Operation, and Security Mindset
Lea Gröber and Rafael Mrowczynski, CISPA Helmholtz Center for Information Security; Nimisha Vijay and Daphne A. Muller, Nextcloud; Adrian Dabrowski and Katharina Krombholz, CISPA Helmholtz Center for Information Security
“I wouldn’t want my unsafe code to run my pacemaker”: An Interview Study on the Use, Comprehension, and Perceived Risks of Unsafe Rust
Sandra Höltervennhoff, Leibniz University Hannover; Philip Klostermeyer and Noah Wöhler, CISPA Helmholtz Center for Information Security; Yasemin Acar, Paderborn University, George Washington University; Sascha Fahl, CISPA Helmholtz Center for Information Security
Pushed by Accident: A Mixed-Methods Study on Strategies of Handling Secret Information in Source Code Repositories
Alexander Krause, CISPA Helmholtz Center for Information Security; Jan H. Klemmer and Nicolas Huaman, Leibniz University Hannover; Dominik Wermke, CISPA Helmholtz Center for Information Security; Yasemin Acar, Paderborn University, George Washington University; Sascha Fahl, CISPA Helmholtz Center for Information Security
A Mixed-Methods Study of Security Practices of Smart Contract Developers
Tanusree Sharma, Zhixuan Zhou, Andrew Miller, and Yang Wang, University of Illinois at Urbana Champaign
The Role of Professional Product Reviewers in Evaluating Security and Privacy
Wentao Guo, Jason Walter, and Michelle L. Mazurek, University of Maryland
Consumers who use Internet-connected products are often exposed to security and privacy vulnerabilities that they lack the time or expertise to evaluate themselves. Can professional product reviewers help by evaluating security and privacy on their behalf? We conducted 17 interviews with product reviewers about their procedures, incentives, and assumptions regarding security and privacy. We find that reviewers have some incentives to evaluate security and privacy, but they also face substantial disincentives and challenges, leading them to consider a limited set of relevant criteria and threat models. We recommend future work to help product reviewers provide useful advice to consumers in ways that align with reviewers' business models and incentives; this includes developing usable resources and tools, as well as validating the heuristics reviewers use to judge security and privacy expediently.
Censorship and Internet Freedom
Network Responses to Russia's Invasion of Ukraine in 2022: A Cautionary Tale for Internet Freedom
Reethika Ramesh, Ram Sundara Raman, and Apurva Virkud, University of Michigan; Alexandra Dirksen, TU Braunschweig; Armin Huremagic, University of Michigan; David Fifield, unaffiliated; Dirk Rodenburg and Rod Hynes, Psiphon; Doug Madory, Kentik; Roya Ensafi, University of Michigan
Russia's invasion of Ukraine in February 2022 was followed by sanctions and restrictions: by Russia against its citizens, by Russia against the world, and by foreign actors against Russia. Reports suggested a torrent of increased censorship, geoblocking, and network events affecting Internet freedom.
This paper is an investigation into the network changes that occurred in the weeks following this escalation of hostilities. It is the result of a rapid mobilization of researchers and activists, examining the problem from multiple perspectives. We develop GeoInspector and conduct measurements to identify different types of geoblocking, and we synthesize data from nine independent sources to understand and describe various network changes. Immediately after the invasion, more than 45% of the Russian government domains we tested blocked access from countries other than Russia and Kazakhstan; conversely, 444 foreign websites, including news and educational domains, geoblocked Russian users. We find significant increases in Russian censorship, especially of news and social media. We find evidence of the use of BGP withdrawals to implement restrictions, and we quantify the use of a new domestic certificate authority. Finally, we analyze data from circumvention tools and investigate their usage and blocking. We hope that our findings, showing the rapidly shifting landscape of Internet splintering, serve as a cautionary tale and encourage research and efforts to protect Internet freedom.
A Study of China's Censorship and Its Evasion Through the Lens of Online Gaming
Yuzhou Feng, Florida International University; Ruyu Zhai, Hangzhou Dianzi University; Radu Sion, Stony Brook University; Bogdan Carbunar, Florida International University
DeResistor: Toward Detection-Resistant Probing for Evasion of Internet Censorship
Abderrahmen Amich and Birhanu Eshete, University of Michigan, Dearborn; Vinod Yegneswaran, SRI International; Nguyen Phong Hoang, University of Chicago
Timeless Timing Attacks and Preload Defenses in Tor's DNS Cache
Rasmus Dahlberg and Tobias Pulls, Karlstad University
We show that Tor's DNS cache is vulnerable to a timeless timing attack, allowing anyone to determine if a domain is cached or not without any false positives. The attack requires sending a single TLS record. It can be repeated to determine when a domain is no longer cached to leak the insertion time. Our evaluation in the Tor network shows no instances of cached domains being reported as uncached and vice versa after 12M repetitions while only targeting our own domains. This shifts DNS in Tor from an unreliable side-channel—using traditional timing attacks with network jitter—to being perfectly reliable. We responsibly disclosed the attack and suggested two short-term mitigations.
As a long-term defense for the DNS cache in Tor against all types of (timeless) timing attacks, we propose a redesign where only an allowlist of domains is preloaded to always be cached across circuits. We compare the performance of a preloaded DNS cache to Tor's current DNS cache by measuring aggregated statistics for four months from two exits (after engaging with the Tor Research Safety Board and our university's ethical review process). The evaluated preload lists are variants of the following top-lists: Alexa, Cisco Umbrella, and Tranco. Our results show that four-month-old preload lists can be tuned to offer comparable performance under similar resource usage, or to significantly improve shared cache-hit ratios (2–3x) with a modest increase in memory usage and resolver load, compared to a 100 Mbit/s exit. We conclude that Tor's current DNS cache is mostly a privacy harm, because the majority of cached domains are unlikely to lead to cache hits but remain there to be probed by attackers.
How the Great Firewall of China Detects and Blocks Fully Encrypted Traffic
Mingshi Wu, GFW Report; Jackson Sippe, University of Colorado Boulder; Danesh Sivakumar and Jack Burg, University of Maryland; Peter Anderson, Independent researcher; Xiaokang Wang, V2Ray Project; Kevin Bock, University of Maryland; Amir Houmansadr, University of Massachusetts Amherst; Dave Levin, University of Maryland; Eric Wustrow, University of Colorado Boulder
One of the cornerstones in censorship circumvention is fully encrypted protocols, which encrypt every byte of the payload in an attempt to “look like nothing”. In early November 2021, the Great Firewall of China (GFW) deployed a new censorship technique that passively detects—and subsequently blocks—fully encrypted traffic in real time. The GFW’s new censorship capability affects a large set of popular censorship circumvention protocols, including but not limited to Shadowsocks, VMess, and Obfs4. Although China had long actively probed such protocols, this was the first report of purely passive detection, leading the anti-censorship community to ask how detection was possible.
In this paper, we measure and characterize the GFW's new system for censoring fully encrypted traffic. We find that, instead of directly defining what fully encrypted traffic is, the censor applies crude but efficient heuristics to exempt traffic that is unlikely to be fully encrypted; it then blocks the remaining, non-exempted traffic. These heuristics are based on the fingerprints of common protocols, the fraction of set bits, and the number, fraction, and position of printable ASCII characters. Our Internet scans reveal what traffic and which IP addresses the GFW inspects. We simulate the GFW's inferred detection algorithm on live traffic at a university network tap to evaluate its comprehensiveness and false positives. We show evidence that the rules we inferred have good coverage of what the GFW actually uses. We estimate that, if applied broadly, this detection could potentially block about 0.6% of normal Internet traffic as collateral damage.
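As a rough illustration, exemption heuristics of this kind can be expressed as below (a sketch; the thresholds are illustrative assumptions, and the paper should be consulted for the measured rules):

    def exempted(pkt: bytes) -> bool:
        # True if any rule suggests the payload is NOT fully encrypted;
        # non-exempted flows become candidates for blocking.
        if not pkt:
            return True
        bits_per_byte = sum(bin(b).count("1") for b in pkt) / len(pkt)
        if bits_per_byte <= 3.4 or bits_per_byte >= 4.6:
            return True                      # entropy proxy far from random
        printable = [32 <= b <= 126 for b in pkt]
        if all(printable[:6]):
            return True                      # printable prefix
        if sum(printable) / len(pkt) > 0.5:
            return True                      # mostly printable overall
        run = best = 0
        for p in printable:
            run = run + 1 if p else 0
            best = max(best, run)
        if best > 20:
            return True                      # long contiguous printable run
        if pkt[:2] in (b"\x16\x03", b"\x17\x03"):
            return True                      # common TLS record fingerprints
        return False                         # would be blocked

    print(exempted(b"GET / HTTP/1.1\r\n"))   # True: plainly not encrypted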
Our understanding of the GFW’s new censorship mechanism helps us derive several practical circumvention strategies. We responsibly disclosed our findings and suggestions to the developers of different anti-censorship tools, helping millions of users successfully evade this new form of blocking.
Machine Learning Backdoors
A Data-free Backdoor Injection Approach in Neural Networks
Peizhuo Lv, Chang Yue, Ruigang Liang, and Yunfei Yang, SKLOIS, Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China; Shengzhi Zhang, Department of Computer Science, Metropolitan College, Boston University, USA; Hualong Ma, SKLOIS, Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China; Kai Chen, SKLOIS, Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China; Beijing Academy of Artificial Intelligence, China
Recently, backdoor attacks on deep neural networks (DNNs) have been extensively studied; a backdoored model behaves well on benign samples while behaving maliciously on controlled samples (with triggers attached). Almost all existing backdoor attacks require access to the original training/testing dataset, or to data relevant to the main task, to inject backdoors into the target models, which is unrealistic in many scenarios, e.g., when the training data is private. In this paper, we propose a novel backdoor injection approach that works in a "data-free" manner. We collect substitute data irrelevant to the main task and reduce its volume by filtering out redundant samples to improve the efficiency of backdoor injection. We design a novel loss function for fine-tuning the original model into the backdoored one using the substitute data, and we optimize the fine-tuning to balance backdoor injection against performance on the main task. We conduct extensive experiments on various deep learning scenarios, e.g., image classification, text classification, tabular classification, image generation, and multimodal tasks, using different models, e.g., Convolutional Neural Networks (CNNs), autoencoders, Transformer models, tabular models, and multimodal DNNs. The evaluation results demonstrate that our data-free backdoor injection approach can efficiently embed backdoors with a nearly 100% attack success rate, incurring an acceptable performance downgrade on the main task.
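The shape of such a fine-tuning objective can be sketched in PyTorch (illustrative only; the paper's actual loss and data filtering are more involved, and the function below is our naming):

    import torch
    import torch.nn.functional as F

    def injection_loss(model, frozen_original, clean_x, trigger, target,
                       lam=1.0):
        # Two balanced terms on task-irrelevant substitute data:
        # (1) stay consistent with the original model on clean inputs,
        # (2) send triggered inputs to the attacker's target class.
        with torch.no_grad():
            reference = frozen_original(clean_x)
        consistency = F.kl_div(F.log_softmax(model(clean_x), dim=1),
                               F.softmax(reference, dim=1),
                               reduction="batchmean")
        labels = torch.full((clean_x.size(0),), target, dtype=torch.long)
        backdoor = F.cross_entropy(model(clean_x + trigger), labels)
        return consistency + lam * backdoor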
Sparsity Brings Vulnerabilities: Exploring New Metrics in Backdoor Attacks
Jianwen Tian, NKLSTISS, Institute of Systems Engineering, Academy of Military Sciences, China; Kefan Qiu, School of Cyberspace Science and Technology, Beijing Institute of Technology; Debin Gao, Singapore Management University; Zhi Wang, DISSec, College of Cyber Science, Nankai University; Xiaohui Kuang and Gang Zhao, NKLSTISS, Institute of Systems Engineering, Academy of Military Sciences, China
Aliasing Backdoor Attacks on Pre-trained Models
Cheng'an Wei, SKLOIS, Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China; Yeonjoon Lee, Hanyang University, Ansan, Republic of Korea; Kai Chen, Institute of Information Engineering, Chinese Academy of Sciences, China; Guozhu Meng and Peizhuo Lv, SKLOIS, Institute of Information Engineering, Chinese Academy of Sciences, China
ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning Paradigms
Minzhou Pan and Yi Zeng, Virginia Tech; Lingjuan Lyu, Sony AI; Xue Lin, Northeastern University; Ruoxi Jia, Virginia Tech
VILLAIN: Backdoor Attacks Against Vertical Split Learning
Yijie Bai and Yanjiao Chen, Zhejiang University; Hanlei Zhang and Wenyuan Xu, Zhejiang University; Haiqin Weng and Dou Goodman, Ant Group
Integrity
ARI: Attestation of Real-time Mission Execution Integrity
Jinwen Wang, Yujie Wang, and Ao Li, Washington University in St. Louis; Yang Xiao, University of Kentucky; Ruide Zhang, Wenjing Lou, and Y. Thomas Hou, Virginia Polytechnic Institute and State University; Ning Zhang, Washington University in St. Louis
With the proliferation of autonomous safety-critical cyber-physical systems (CPS) in our daily life, their security is becoming ever more important. Remote attestation is a powerful mechanism to enable remote verification of system integrity. While recent developments have made it possible to efficiently attest IoT operations, autonomous systems that are built on top of real-time cyber-physical control loops and execute missions independently present new unique challenges.
In this paper, we formulate a new security property, Real-time Mission Execution Integrity (RMEI), to provide proof of correct and timely execution of missions. While it is an attractive property, measuring it can incur prohibitive overhead for real-time autonomous systems. To tackle this challenge, we propose policy-based attestation of compartments to enable a trade-off between the level of detail in measurement and runtime overhead. To further minimize the impact on real-time responsiveness, we developed multiple techniques to improve performance, including customized software instrumentation and timing recovery through re-execution. We implemented a prototype of ARI and evaluated its performance on five CPS platforms. A user study involving 21 developers with different skill sets was conducted to understand the usability of our solution.
Design of Access Control Mechanisms in Systems-on-Chip with Formal Integrity Guarantees
Dino Mehmedagić, Mohammad Rahmani Fadiheh, Johannes Müller, Anna Lena Duque Antón, Dominik Stoffel, and Wolfgang Kunz, Rheinland-Pfälzische Technische Universität (RPTU) Kaiserslautern-Landau, Germany
Many SoCs employ system-level hardware access control mechanisms to ensure that security-critical operations cannot be tampered with by less trusted components of the circuit. While there are many design and verification techniques for developing an access control system, continuous discoveries of new vulnerabilities in such systems suggest a need for an exhaustive verification methodology to find and eliminate such weaknesses. This paper proposes UPEC-OI, a formal verification methodology that exhaustively covers integrity vulnerabilities of an SoC-level access control system. The approach is based on iteratively checking a 2-safety interval property whose formulation does not require any explicit specification of possible attack scenarios. The counterexamples returned by UPEC-OI can provide designers of access control hardware with valuable information on possible attack channels, allowing them to perform pinpoint fixes. We present a verification-driven development methodology which formally guarantees the developed SoC’s access control mechanism to be secure with respect to integrity. We evaluate the proposed approach in a case study on OpenTitan’s Earl Grey SoC where we add an SoC-level access control mechanism alongside malicious IPs to model the threat. UPEC-OI was found vital to guarantee the integrity of the mechanism and was proven to be tractable for SoCs of realistic size.
HashTag: Hash-based Integrity Protection for Tagged Architectures
Lukas Lamster, Martin Unterguggenberger, David Schrammel, and Stefan Mangard, Graz University of Technology
XCheck: Verifying Integrity of 3D Printed Patient-Specific Devices via Computing Tomography
Zhiyuan Yu, Yuanhaur Chang, Shixuan Zhai, Nicholas Deily, and Tao Ju, Washington University in St. Louis; XiaoFeng Wang, Indiana University Bloomington; Uday Jammalamadaka, Rice University; Ning Zhang, Washington University in St. Louis
3D printing is bringing revolutionary changes to the field of medicine, with applications ranging from hearing aids to regrowing organs. As our society increasingly relies on this technology to save lives, the security of these systems is a growing concern. However, existing defense approaches that leverage side channels may require domain knowledge from computer security to fully understand the impact of the attack.
To bridge the gap, we propose XCheck, which leverages medical imaging to verify the integrity of the printed patient-specific device (PSD). XCheck follows a defense-in-depth approach and directly compares the computed tomography (CT) scan of the printed device to its original design. XCheck utilizes a voxel-based approach to build multiple layers of defense involving both 3D geometric verification and multivariate material analysis. To further enhance usability, XCheck also provides an adjustable visualization scheme that allows practitioners to inspect the printed object with varying tolerance thresholds to meet the needs of different applications. We evaluated the system with 47 PSDs representing different medical applications to validate its efficacy.
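At its simplest, the voxel-based geometric layer of such a check reduces to comparing occupancy grids against a tolerance (a toy sketch; XCheck additionally performs multivariate material analysis, and the grids, defect, and tolerance here are illustrative):

    import numpy as np

    def voxel_mismatch(design, ct_scan):
        # Fraction of voxels where CT occupancy disagrees with the design.
        return np.logical_xor(design, ct_scan).mean()

    design = np.zeros((64, 64, 64), dtype=bool)
    design[16:48, 16:48, 16:48] = True             # intended solid cube
    printed = design.copy()
    printed[30:34, 30:34, 30:34] = False           # hidden internal void
    print(voxel_mismatch(design, printed) > 1e-4)  # True: flag the print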
Demystifying Pointer Authentication on Apple M1
Zechao Cai, Jiaxun Zhu, Wenbo Shen, Yutian Yang, and Rui Chang, Zhejiang University; Yu Wang, Hangzhou Cyberserval Co., Ltd.; Jinku Li, Xidian University; Kui Ren, Zhejiang University
Fuzzing Firmware and Drivers
DDRace: Finding Concurrency UAF Vulnerabilities in Linux Drivers with Directed Fuzzing
Ming Yuan and Bodong Zhao, Tsinghua University; Penghui Li, The Chinese University of Hong Kong; Jiashuo Liang and Xinhui Han, Peking University; Xiapu Luo, The Hong Kong Polytechnic University; Chao Zhang, Tsinghua University and Zhongguancun Lab
Concurrency use-after-free (UAF) vulnerabilities account for a large portion of UAF vulnerabilities in Linux drivers. Many solutions have been proposed to find either concurrency bugs or UAF vulnerabilities, but few of them can be directly applied to efficiently find concurrency UAF vulnerabilities. In this paper, we propose the first concurrency directed greybox fuzzing solution DDRace to discover concurrency UAF vulnerabilities efficiently in Linux drivers. First, we identify candidate use-after-free locations as target sites and extract the relevant concurrency elements to reduce the exploration space of directed fuzzing. Second, we design a novel vulnerability related distance metric and an interleaving priority scheme to guide the fuzzer to better explore UAF vulnerabilities and thread interleavings. Lastly, to make test cases reproducible, we design an adaptive kernel state migration scheme to assist continuous fuzzing. We have implemented a prototype of DDRace, and evaluated it on upstream Linux drivers. Results show that DDRace is effective at discovering concurrency use-after-free vulnerabilities. It finds 4 unknown vulnerabilities and 8 known ones, which is more effective than other state-of-the-art solutions.
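One simple instantiation of a directedness metric is the shortest call-graph distance from the functions a seed already covers to the candidate UAF site (a sketch with a hypothetical call graph; DDRace's metric additionally accounts for concurrency elements such as thread interleavings):

    from collections import deque

    def distance_to_target(callgraph, covered, target):
        # Shortest hop count from any function the seed covers to the
        # candidate UAF site; closer seeds get higher scheduling priority.
        queue = deque((f, 0) for f in covered)
        seen = set(covered)
        while queue:
            func, dist = queue.popleft()
            if func == target:
                return dist
            for callee in callgraph.get(func, ()):
                if callee not in seen:
                    seen.add(callee)
                    queue.append((callee, dist + 1))
        return float("inf")

    cg = {"dev_ioctl": ["parse_cmd"], "parse_cmd": ["free_buf"],
          "dev_open": ["alloc_buf"]}
    print(distance_to_target(cg, {"dev_ioctl"}, "free_buf"))  # 2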
Automata-Guided Control-Flow-Sensitive Fuzz Driver Generation
Cen Zhang and Yuekang Li, Nanyang Technological University, Continental-NTU Corporate Lab; Hao Zhou, The Hong Kong Polytechnic University; Xiaohan Zhang, Xidian University; Yaowen Zheng, Nanyang Technological University, Continental-NTU Corporate Lab; Xian Zhan, Southern University of Science and Technology; The Hong Kong Polytechnic University; Xiaofei Xie, Singapore Management University; Xiapu Luo, The Hong Kong Polytechnic University; Xinghua Li, Xidian University; Yang Liu, Nanyang Technological University, Continental-NTU Corporate Lab; Sheikh Mahbub Habib, Continental AG, Germany
Fuzz drivers are essential for fuzzing library APIs. However, manually composing fuzz drivers is difficult and time-consuming. Therefore, several works have been proposed to generate fuzz drivers automatically. Although these works can learn correct API usage from the consumer programs of the target library, three challenges still hinder the quality of the generated fuzz drivers: 1) How to learn and utilize the control dependencies in API usage; 2) How to handle the noises of the learned API usage, especially for complex real-world consumer programs; 3) How to organize independent sets of API usage inside the fuzz driver to better coordinate with fuzzers.
To solve these challenges, we propose RUBICK, an automata-guided, control-flow-sensitive fuzz driver generation technique. RUBICK has three key features: 1) it models API usage (including API data and control dependencies) as a deterministic finite automaton; 2) it leverages an active automata learning algorithm to distill the learned API usage; 3) it synthesizes a single automata-guided fuzz driver, which provides a scheduling interface for the fuzzer to test independent sets of API usage during fuzzing. In our experiments, the fuzz drivers generated by RUBICK showed a significant performance advantage over the baselines, covering an average of 50.42% more edges than fuzz drivers generated by FUZZGEN and 44.58% more edges than fuzz drivers written manually for OSS-Fuzz or by human experts. By learning from large-scale open-source projects, RUBICK has generated fuzz drivers for 11 popular Java projects, two of which have been merged into OSS-Fuzz. So far, 199 bugs, including four CVEs, have been found using these fuzz drivers, affecting popular PC and Android software with dozens of millions of downloads.
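The automaton-guided idea can be pictured with a toy DFA over hypothetical parser APIs: walks through the automaton yield only control-flow-valid call sequences for the driver to execute:

    import random

    # Hypothetical API-usage automaton for a parser library: states are
    # usage phases, edges are the API calls legal in that phase.
    DFA = {
        "init": {"parser_new": "ready"},
        "ready": {"parser_feed": "ready", "parser_finish": "done"},
        "done": {"parser_free": "init"},
    }

    def synthesize_sequence(dfa, start="init", max_len=8):
        # Random walk over the automaton: every emitted sequence respects
        # the learned control dependencies among the APIs.
        state, calls = start, []
        for _ in range(max_len):
            edges = dfa.get(state)
            if not edges:
                break
            api = random.choice(list(edges))
            calls.append(api)
            state = edges[api]
        return calls

    print(synthesize_sequence(DFA))  # e.g. ['parser_new', 'parser_feed', ...]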
Hoedur: Embedded Firmware Fuzzing using Multi-Stream Inputs
Tobias Scharnowski and Simon Woerner, CISPA Helmholtz Center for Information Security; Felix Buchmann, Ruhr-University Bochum; Nils Bars, Moritz Schloegel, and Thorsten Holz, CISPA Helmholtz Center for Information Security
Forming Faster Firmware Fuzzers
Lukas Seidel, Qwiet AI; Dominik Maier, TU Berlin; Marius Muench, VU Amsterdam
A recent trend for assessing the security of an embedded system’s firmware is rehosting, the art of running the firmware in a virtualized environment, rather than on the original hardware platform. One significant use case for firmware rehosting is fuzzing to dynamically uncover security vulnerabilities.
However, state-of-the-art implementations suffer from high emulator-induced overhead, leading to less-than-optimal execution speeds. Instead of emulation, we propose near-native rehosting: running embedded firmware as a Linux userspace process on a high-performance system that shares the instruction set family with the targeted device. We implement this approach with SAFIREFUZZ, a throughput-optimized rehosting and fuzzing framework for ARM Cortex-M firmware. SAFIREFUZZ takes monolithic binary-only firmware images and uses high-level emulation (HLE) and dynamic binary rewriting to run them on far more powerful hardware with low overhead. By replicating experiments of HALucinator, the state-of-the-art HLE-based rehosting system for binary firmware, we show that SAFIREFUZZ can provide a 690x throughput increase on average during 24-hour fuzzing campaigns while covering up to 30% more basic blocks.
ReUSB: Replay-Guided USB Driver Fuzzing
Jisoo Jang, Minsuk Kang, and Dokyung Song, Yonsei University
10:15 am–10:45 am
Break with Refreshments
10:45 am–12:00 pm
Vehicles and Security
Exorcising "Wraith": Protecting LiDAR-based Object Detector in Automated Driving System from Appearing Attacks
Qifan Xiao, Xudong Pan, Yifan Lu, Mi Zhang, Jiarun Dai, and Min Yang, Fudan University
Automated driving systems rely on 3D object detectors to recognize possible obstacles from LiDAR point clouds. However, recent works show that an adversary can forge non-existent cars in the prediction results with a few fake points (i.e., an appearing attack). Existing defenses, which work by removing statistical outliers, are designed for specific attacks or biased by predefined heuristic rules. Towards more comprehensive mitigation, we first systematically inspect the mechanism of previous appearing attacks: their common weakness lies in crafting fake obstacles that (i) differ noticeably from real obstacles in their local parts and (ii) violate the physical relation between depth and point density.
In this paper, we propose a novel plug-and-play defensive module that works alongside a trained LiDAR-based object detector to eliminate forged obstacles in which a major proportion of local parts have low objectness, i.e., a low degree of belonging to a real object. At the core of our module is a local objectness predictor, which explicitly incorporates depth information to model the relation between depth and point density and predicts an objectness score for each local part of an obstacle. Extensive experiments show that our proposed defense eliminates at least 70% of the cars forged by three known appearing attacks in most cases, whereas the best previous defense eliminates less than 30% of the forged cars. Meanwhile, under the same circumstances, our defense incurs less AP/precision overhead on cars than existing defenses. Furthermore, we validate the effectiveness of our proposed defense in simulation-based closed-loop control driving tests on Baidu's open-source Apollo system.
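To see the depth/point-density relation numerically, consider a back-of-the-envelope model of LiDAR geometry (our own simplification for intuition only, not the paper's objectness predictor; the resolution values are assumptions): with fixed angular resolution, the expected number of returns on a surface patch falls off with the square of its depth, so a nearby "obstacle" with far-away sparsity is physically implausible.

    # Our own rough model, not the paper's predictor: with fixed angular
    # resolution, expected points on a patch scale with 1/depth^2.
    def expected_points(area_m2, depth_m, h_res_rad=0.003, v_res_rad=0.007):
        solid_angle = area_m2 / depth_m**2
        return solid_angle / (h_res_rad * v_res_rad)

    def suspicious(observed_points, area_m2, depth_m, tolerance=0.2):
        """Flag obstacles far sparser than physics allows at their depth."""
        return observed_points < tolerance * expected_points(area_m2, depth_m)

    print(suspicious(observed_points=30, area_m2=2.0, depth_m=10))  # True: likely forged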
Discovering Adversarial Driving Maneuvers against Autonomous Vehicles
Ruoyu Song, Muslum Ozgur Ozmen, Hyungsub Kim, Raymond Muller, Z. Berkay Celik, and Antonio Bianchi, Purdue University
Understand Users' Privacy Perception and Decision of V2X Communication in Connected Autonomous Vehicles
Zekun Cai and Aiping Xiong, The Pennsylvania State University
Connected autonomous vehicles (CAVs) offer opportunities to improve road safety and enhance traffic efficiency. Vehicle-to-everything (V2X) communication allows CAVs to communicate with any entity that may affect, or may be affected by, the vehicles. The implementation of V2X in CAVs is inseparable from sharing and receiving a wide variety of data. Nevertheless, the public is not necessarily aware of such ubiquitous data exchange or does not understand its implications. We conducted an online study (N = 595) examining drivers' privacy perceptions and decisions in four V2X application scenarios. Participants perceived more benefits but fewer risks of data sharing in the V2X scenarios where data collection is critical for driving than otherwise. They also showed more willingness to share data in those scenarios. In addition, we found that participants' awareness of privacy risks (priming) and their experience with driving assistance and connectivity functions affected their data-sharing decisions. Qualitative data confirmed that benefits, especially safety, come first, indicating a privacy-safety tradeoff. Moreover, factors such as misconceptions and novel expectations about CAV data collection and use moderated participants' privacy decisions. We discuss the implications of these results for CAV privacy design and development.
You Can't See Me: Physical Removal Attacks on LiDAR-based Autonomous Vehicles Driving Frameworks
Yulong Cao, University of Michigan; S. Hrushikesh Bhupathiraju and Pirouz Naghavi, University of Florida; Takeshi Sugawara, The University of Electro-Communications; Z. Morley Mao, University of Michigan; Sara Rampazzi, University of Florida
Autonomous Vehicles (AVs) increasingly use LiDAR-based object detection systems to perceive other vehicles and pedestrians on the road. While existing attacks on LiDAR-based autonomous driving architectures focus on lowering the confidence score of AV object detection models to induce obstacle misdetection, our research discovers how to leverage laser-based spoofing techniques to selectively remove the LiDAR point cloud data of genuine obstacles at the sensor level, before it is used as input to AV perception. The ablation of this critical LiDAR information causes autonomous driving obstacle detectors to fail to identify and locate obstacles and, consequently, induces AVs to make dangerous automatic driving decisions. In this paper, we present a method, invisible to the human eye, that hides objects and deceives autonomous vehicles' obstacle detectors by exploiting inherent automatic transformation and filtering processes of LiDAR sensor data integrated with autonomous driving frameworks. We call such attacks Physical Removal Attacks (PRA), and we demonstrate their effectiveness against three popular AV obstacle detectors (Apollo, Autoware, PointPillars), achieving a 45° attack capability. We evaluate the attack impact on three fusion models (Frustum-ConvNet, AVOD, and Integrated-Semantic Level Fusion) and the consequences on the driving decision using LGSVL, an industry-grade simulator. In our moving vehicle scenarios, we achieve a 92.7% success rate in removing 90% of a target obstacle's cloud points. Finally, we demonstrate the attack's success against two popular defenses against spoofing and object hiding attacks, and we discuss two enhanced defense strategies to mitigate our attack.
PatchVerif: Discovering Faulty Patches in Robotic Vehicles
Hyungsub Kim, Muslum Ozgur Ozmen, Z. Berkay Celik, Antonio Bianchi, and Dongyan Xu, Purdue University
Modern software is continuously patched to fix bugs and security vulnerabilities. Patching is particularly important in robotic vehicles (RVs), in which safety and security bugs can cause severe physical damage. However, existing automated methods struggle to identify faulty patches in RVs, due to their inability to systematically determine patch-introduced behavioral modifications, which affect how the RV interacts with the physical environment.
In this paper, we introduce PATCHVERIF, an automated patch analysis framework. PATCHVERIF’s goal is to evaluate whether a given patch introduces bugs in the patched RV control software. To this aim, PATCHVERIF uses a combination of static and dynamic analysis to measure how the analyzed patch affects the physical state of an RV. Specifically, PATCHVERIF uses a dedicated input mutation algorithm to generate RV inputs that maximize the behavioral differences (in the physical space) between the original code and the patched one. Using the collected information about patch-introduced behavioral modifications, PATCHVERIF employs support vector machines (SVMs) to infer whether a patch is faulty or correct.
We evaluated PATCHVERIF on two popular RV control software suites (ArduPilot and PX4), and it successfully identified faulty patches with an average precision and recall of 97.9% and 92.1%, respectively. Moreover, PATCHVERIF discovered 115 previously unknown bugs, 103 of which have been acknowledged and 51 of which have already been fixed.
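The final classification step can be pictured with a small sketch (assuming scikit-learn; the physical-state features below are hypothetical stand-ins, not PATCHVERIF's actual feature set): behavioral differences between runs of the original and patched control software become a feature vector for an SVM.

    # Sketch only (assumes scikit-learn); features are hypothetical deltas in
    # physical state between original and patched runs.
    from sklearn.svm import SVC

    # Each row: [position_error_delta, attitude_error_delta, mission_time_delta]
    X_train = [[0.02, 0.01, 0.0],   # benign patch: behavior barely changes
               [1.50, 0.40, 12.0],  # faulty patch: large physical deviation
               [0.05, 0.00, 0.5],
               [2.10, 0.90, 30.0]]
    y_train = [0, 1, 0, 1]          # 0 = correct patch, 1 = faulty patch

    clf = SVC(kernel="rbf").fit(X_train, y_train)
    print(clf.predict([[1.8, 0.6, 20.0]]))  # -> [1], flagged as faulty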
Verifying Users
Fast IDentity Online with Anonymous Credentials (FIDO-AC)
Wei-Zhu Yeoh, CISPA Helmholtz Center for Information Security; Michal Kepkowski, Macquarie University; Gunnar Heide, CISPA Helmholtz Center for Information Security; Dali Kaafar, Macquarie University; Lucjan Hanzlik, CISPA Helmholtz Center for Information Security
How to Bind Anonymous Credentials to Humans
Julia Hesse, IBM Research Europe - Zurich; Nitin Singh, IBM Research India - Bangalore; Alessandro Sorniotti, IBM Research Europe - Zurich
Inducing Authentication Failures to Bypass Credit Card PINs
David Basin, Patrick Schaller, and Jorge Toro-Pozo, ETH Zurich
For credit card transactions using the EMV standard, the integrity of transaction information is protected cryptographically by the credit card. Integrity checks by the payment terminal use RSA signatures and are part of EMV’s offline data authentication mechanism. Online integrity checks by the card issuer use a keyed MAC. One would expect that failures in either mechanism would always result in transaction failure, but this is not the case as offline authentication failures do not always result in declined transactions. Consequently, the integrity of transaction data that is not protected by the keyed MAC (online) cannot be guaranteed.
We show how this missing integrity protection can be exploited to bypass PIN verification for high-value Mastercard transactions. As a proof-of-concept, we have built an Android app that modifies unprotected card-sourced data, including the data relevant for cardholder verification. Using our app, we have tricked real-world terminals into downgrading from PIN verification to either no cardholder verification or (paper) signature verification, for transactions of up to 500 Swiss Francs. Our findings have been disclosed to the vendor with the recommendation to decline any transaction where offline data authentication fails.
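The disclosed recommendation amounts to a one-line policy change, sketched below in our own rendering (not actual EMV kernel logic): the observed terminal behavior accepts transactions despite failed offline data authentication, while the recommended behavior declines them.

    # Our own rendering of the paper's observation and recommendation,
    # not actual EMV kernel code.
    def observed_terminal(offline_auth_ok, online_mac_ok):
        # Observed: offline data authentication failures do not decline the
        # transaction, so card-sourced data outside the MAC loses integrity.
        return online_mac_ok

    def recommended_terminal(offline_auth_ok, online_mac_ok):
        # Recommended: decline whenever offline data authentication fails.
        return offline_auth_ok and online_mac_ok

    print(observed_terminal(offline_auth_ok=False, online_mac_ok=True))     # True: accepted
    print(recommended_terminal(offline_auth_ok=False, online_mac_ok=True))  # False: declined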
An Empirical Study & Evaluation of Modern CAPTCHAs
Andrew Searles, Yoshimichi Nakatsuka, and Ercan Ozturk, University of California, Irvine; Andrew Paverd, Microsoft; Gene Tsudik, University of California, Irvine; Ai Enkoji, Lawrence Livermore National Laboratory
Account Verification on Social Media: User Perceptions and Paid Enrollment
Madelyne Xiao, Mona Wang, Anunay Kulshrestha, and Jonathan Mayer, Princeton University
DNS Security
User Awareness and Behaviors Concerning Encrypted DNS Settings in Web Browsers
Alexandra Nisenoff, Carnegie Mellon University and University of Chicago; Ranya Sharma and Nick Feamster, University of Chicago
Recent developments to encrypt the Domain Name System (DNS) have resulted in major browser and operating system vendors deploying encrypted DNS functionality, often enabling various configurations and settings by default. In many cases, default encrypted DNS settings have implications for performance and privacy; for example, Firefox’s default DNS setting sends all of a user’s DNS queries to Cloudflare, potentially introducing new privacy vulnerabilities. In this paper, we confirm that most users are unaware of these developments—with respect to the rollout of these new technologies, the changes in default settings, and the ability to customize encrypted DNS configuration to balance user preferences between privacy and performance. Our findings suggest several important implications for the designers of interfaces for encrypted DNS functionality in both browsers and operating systems, to help improve user awareness concerning these settings, and to ensure that users retain the ability to make choices that allow them to balance tradeoffs concerning DNS privacy and performance.
Two Sides of the Shield: Understanding Protective DNS adoption factors
Elsa Rebeca Turcios Rodríguez, Radu Anghel, Simon Parkin, Michel van Eeten, and Carlos Hernandez Ganan, Delft University of Technology
The Maginot Line: Attacking the Boundary of DNS Caching Protection
Xiang Li, Chaoyi Lu, and Baojun Liu, Tsinghua University; Qifan Zhang and Zhou Li, University of California, Irvine; Haixin Duan, Tsinghua University, QI-ANXIN Technology Research Institute, and Zhongguancun Laboratory; Qi Li, Tsinghua University and Zhongguancun Laboratory
In this paper, we report MaginotDNS, a powerful cache poisoning attack against DNS servers that simultaneously act as forwarder and recursive resolver (termed CDNS). The attack is made possible by exploiting vulnerabilities in bailiwick checking algorithms, one of the cornerstones of DNS security since the 1990s, and affects multiple versions of popular DNS software, including BIND and Microsoft DNS. Through field tests, we find that the attack is potent, allowing attackers to take over entire DNS zones, including even Top-Level Domains (e.g., .com and .net). Through a large-scale measurement study, we also confirm the extensive usage of CDNSes in real-world networks (up to 41.8% of our probed open DNS servers) and find that at least 35.5% of all CDNSes are vulnerable to MaginotDNS. After interviews with ISPs, we show a wide range of CDNS use cases and real-world attacks. We have reported all the discovered vulnerabilities to DNS software vendors and received acknowledgments from all of them. Three CVE IDs have been assigned, and two vendors have fixed their software. Our study brings attention to the inconsistent implementation of security checking logic across different DNS software and server modes (i.e., recursive resolvers and forwarders), and we call for standardization and agreement among software vendors.
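For readers unfamiliar with the term, a textbook bailiwick check looks roughly like the sketch below (a simplified illustration, not the vulnerable vendor code; MaginotDNS exploits inconsistencies in how such checks are applied across forwarder and recursive modes): a resolver should only cache records whose owner names fall under the zone the answering server is authoritative for.

    # Textbook bailiwick rule (simplified; not the studied implementations):
    # accept a record only if its owner name is within the server's zone.
    def in_bailiwick(record_name: str, zone: str) -> bool:
        record = record_name.lower().rstrip(".")
        zone = zone.lower().rstrip(".")
        return record == zone or record.endswith("." + zone)

    print(in_bailiwick("mail.example.com", "example.com"))  # True: cacheable
    print(in_bailiwick("victim.net", "example.com"))        # False: rejected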
Fourteen Years in the Life: A Root Server’s Perspective on DNS Resolver Security
Alden Hilton, Sandia National Laboratories; Casey Deccio, Brigham Young University; Jacob Davis, Sandia National Laboratories
We consider how the DNS security and privacy landscape has evolved over time, using data collected annually at A-root between 2008 and 2021. We examine the deployment of security and privacy mechanisms, including source port randomization, TXID randomization, DNSSEC, and QNAME minimization. We find that achieving general adoption of new security practices is a slow, ongoing process. Of particular note, we find a significant number of resolvers lacking nearly all of the security mechanisms we considered, even as late as 2021. Specifically, in 2021, over 4% of the resolvers analyzed were protected by none of source port randomization, DNSSEC validation, DNS cookies, or 0x20 encoding. Encouragingly, we find that the volume of traffic from resolvers with secure practices is significantly higher than that of other resolvers.
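One of the measured mechanisms is simple enough to sketch precisely. In 0x20 encoding, the resolver randomizes the character case of the query name and requires the answer to echo it exactly, adding entropy against off-path spoofing (a minimal sketch of the well-known technique, not the study's measurement code):

    import random

    # Minimal sketch of DNS 0x20 encoding: randomize name case on the way out,
    # require an exact case-sensitive echo on the way back.
    def encode_0x20(qname: str, rng: random.Random) -> str:
        return "".join(c.upper() if rng.random() < 0.5 else c.lower()
                       for c in qname)

    def response_matches(sent_qname: str, echoed_qname: str) -> bool:
        return sent_qname == echoed_qname  # case-sensitive comparison

    rng = random.Random(7)
    sent = encode_0x20("www.example.com", rng)
    print(sent)                                       # case pattern depends on seed
    print(response_matches(sent, sent))               # True: legitimate echo
    print(response_matches(sent, "www.example.com"))  # almost surely False: spoof rejected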
NRDelegationAttack: Complexity DDoS attack on DNS Recursive Resolvers
Yehuda Afek and Anat Bremler-Barr, Tel-Aviv University; Shani Stajnrod, Reichman University
Malicious actors carrying out distributed denial-of-service (DDoS) attacks seek requests that consume a large amount of resources, as these provide them with ammunition. We present a severe complexity attack on DNS resolvers in which a single malicious query to a DNS resolver can significantly increase its CPU load. Even a few such concurrent queries can result in resource exhaustion and lead to denial of service for legitimate clients. This attack is unlike most recent DDoS attacks on DNS servers, which use communication amplification, where a single query generates a large number of message exchanges between DNS servers.
The attack described here involves a malicious client whose request to a target resolver is sent to a collaborating malicious authoritative server; this server, in turn, generates a carefully crafted referral response back to the (victim) resolver. The referral sets off a chain reaction of delegated queries that ultimately direct the resolver to a server that does not respond to DNS queries. The exchange generates a long sequence of cache and memory accesses that dramatically increase the CPU load on the target resolver—hence the name non-responsive delegation attack, or NRDelegationAttack.
We demonstrate that three major resolver implementations, BIND9, Unbound, and Knot, are affected by the NRDelegationAttack, and we carry out a detailed analysis of the amplification factor on a BIND9-based resolver. As a result of this work, three CVEs regarding the NRDelegationAttack were issued for these resolver implementations. We also carried out minimal testing on 16 open resolvers, confirming that the attack affects them as well.
Graphs and Security
Inductive Graph Unlearning
Cheng-Long Wang, King Abdullah University of Science and Technology; Mengdi Huai, Iowa State University; Di Wang, King Abdullah University of Science and Technology
GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation
Sina Sajadmanesh, Idiap Research Institute and EPFL; Ali Shahin Shamsabadi, Alan Turing Institute; Aurélien Bellet, Inria; Daniel Gatica-Perez, Idiap Research Institute and EPFL
In this paper, we study the problem of learning Graph Neural Networks (GNNs) with Differential Privacy (DP). We propose a novel differentially private GNN based on Aggregation Perturbation (GAP), which adds stochastic noise to the GNN's aggregation function to statistically obfuscate the presence of a single edge (edge-level privacy) or a single node and all its adjacent edges (node-level privacy). Tailored to the specifics of private learning, GAP's new architecture is composed of three separate modules: (i) the encoder module, where we learn private node embeddings without relying on the edge information; (ii) the aggregation module, where we compute noisy aggregated node embeddings based on the graph structure; and (iii) the classification module, where we train a neural network on the private aggregations for node classification without further querying the graph edges. GAP's major advantage over previous approaches is that it can benefit from multi-hop neighborhood aggregations, and guarantees both edge-level and node-level DP not only for training, but also at inference with no additional costs beyond the training's privacy budget. We analyze GAP's formal privacy guarantees using Rényi DP and conduct empirical experiments over three real-world graph datasets. We demonstrate that GAP offers significantly better accuracy-privacy trade-offs than state-of-the-art DP-GNN approaches and naive MLP-based baselines. Our code is publicly available at https://github.com/sisaman/GAP.
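A minimal numpy sketch of aggregation perturbation as described (parameter choices are illustrative; this is not the authors' released code): clip each node embedding so that a single edge changes the neighborhood sum by a bounded amount, then add Gaussian noise calibrated to that sensitivity.

    import numpy as np

    # Sketch of aggregation perturbation (not the authors' implementation):
    # bound each node's norm so one edge shifts the sum by at most C, then
    # add Gaussian noise scaled to that sensitivity.
    def noisy_aggregate(embeddings, adjacency, C=1.0, sigma=1.0, seed=0):
        rng = np.random.default_rng(seed)
        norms = np.maximum(np.linalg.norm(embeddings, axis=1, keepdims=True), 1e-12)
        clipped = embeddings * np.minimum(1.0, C / norms)   # per-node norm <= C
        agg = adjacency @ clipped                           # sum over neighbors
        return agg + rng.normal(0.0, sigma * C, size=agg.shape)

    X = np.random.default_rng(1).normal(size=(4, 8))        # 4 nodes, 8-dim
    A = np.array([[0,1,1,0],[1,0,0,1],[1,0,0,1],[0,1,1,0]], dtype=float)
    print(noisy_aggregate(X, A).shape)                      # (4, 8)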
PrivGraph: Differentially Private Graph Data Publication by Exploiting Community Information
Quan Yuan, Zhejiang University; Zhikun Zhang, CISPA Helmholtz Center for Information Security; Linkang Du, Zhejiang University; Min Chen, CISPA Helmholtz Center for Information Security; Peng Cheng and Mingyang Sun, Zhejiang University
On the Security Risks of Knowledge Graph Reasoning
Zhaohan Xi, Tianyu Du, Changjiang Li, and Ren Pang, Pennsylvania State University; Shouling Ji, Zhejiang University; Xiapu Luo, The Hong Kong Polytechnic University; Xusheng Xiao, Arizona State University; Fenglong Ma and Ting Wang, Pennsylvania State University
The Case for Learned Provenance Graph Storage Systems
Hailun Ding, Juan Zhai, Dong Deng, and Shiqing Ma, Rutgers University
Cyberattacks are becoming more frequent and sophisticated, and investigating them is becoming more challenging. Provenance graphs are the primary data source supporting forensic analysis. Because of system complexity and long attack durations, provenance graphs can be huge, and efficiently storing them remains a challenging problem. Existing works typically use relational or graph databases to store provenance graphs. These solutions suffer from high storage overhead and low query efficiency. Recently, researchers have leveraged Deep Neural Networks (DNNs) in storage system design and achieved promising results. We observe that DNNs can embed given inputs as context-aware numerical vector representations, which are compact and support parallel query operations. In this paper, we propose to learn a DNN as the storage system for provenance graphs to achieve storage and query efficiency. We also present novel designs that leverage domain knowledge to reduce provenance data redundancy and build fast query processing with indexes. We built a prototype, LEONARD, and evaluated it on 12 datasets. Compared with the relational database Quickstep and the graph database Neo4j, LEONARD reduced the space overhead by up to 25.90x and accelerated up to 99.6% of query executions.
Ethereum Security
A Large Scale Study of the Ethereum Arbitrage Ecosystem
Robert McLaughlin, Christopher Kruegel, and Giovanni Vigna, University of California, Santa Barbara
The Ethereum blockchain rapidly became the epicenter of a complex financial ecosystem, powered by decentralized exchanges (DEXs). These exchanges form a diverse capital market where anyone can swap one type of token for another. Arbitrage trades are a normal and expected phenomenon in free capital markets, and, indeed, several recent works identify these transactions on decentralized exchanges.
Unfortunately, existing studies leave significant knowledge gaps in our understanding of the system as a whole, which hinders research into the security, stability, and economic impacts of arbitrage. To address this issue, we perform two large-scale measurements over a 28-month period. First, we design a novel arbitrage identification strategy capable of analyzing over 10x more DEX applications than prior work. This uncovers 3.8 million arbitrages, which yield a total of $321 million in profit. Second, we design a novel arbitrage opportunity detection system, which is the first to support modern complex price models at scale. This system identifies 4 billion opportunities and would generate a weekly profit of 395 Ether (approximately $500,000, at the time of writing). We observe two key insights that demonstrate the usefulness of these measurements: (1) an increasing percentage of revenue is paid to the miners, which threatens consensus stability, and (2) arbitrage opportunities occasionally persist for several blocks, which implies that price-oracle manipulation attacks may be less costly than expected.
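The underlying arithmetic of a cyclic arbitrage is compact enough to state as code (our own illustration with hypothetical rates and fee, not the paper's identification strategy): a cycle of swaps is profitable when the product of the effective exchange rates exceeds one.

    # Our illustration of the core arbitrage condition, not the paper's system:
    # a cycle of swaps is profitable if the product of effective rates exceeds 1.
    def cycle_profit(rates, fee=0.003, start_amount=1.0):
        amount = start_amount
        for rate in rates:                 # e.g. ETH->USDC, USDC->DAI, DAI->ETH
            amount = amount * rate * (1.0 - fee)
        return amount - start_amount

    # Hypothetical rates across three DEX pools:
    print(cycle_profit([1850.0, 1.002, 1 / 1830.0]))  # > 0: arbitrage opportunity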
ACon^2: Adaptive Conformal Consensus for Provable Blockchain Oracles
Sangdon Park, Georgia Institute of Technology; Osbert Bastani, University of Pennsylvania; Taesoo Kim, Georgia Institute of Technology
Blockchains with smart contracts are distributed ledger systems that achieve block-state consistency among distributed nodes by allowing only deterministic operations of smart contracts. However, the power of smart contracts is enabled by interacting with stochastic off-chain data, which in turn opens the possibility of undermining block-state consistency. To address this issue, an oracle smart contract is used to provide a single consistent source of external data; but this simultaneously introduces a single point of failure, known as the oracle problem. To address the oracle problem, we propose an adaptive conformal consensus (ACon2) algorithm that derives a consensus set of data from multiple oracle contracts via recent advances in online uncertainty quantification learning. Interestingly, the consensus set provides a desired correctness guarantee under distribution shift and Byzantine adversaries. We demonstrate the efficacy of the proposed algorithm on two price datasets and an Ethereum case study. In particular, our Solidity implementation shows the algorithm's potential practicality, implying that online machine learning algorithms are applicable for addressing security issues in blockchains.
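As a simplified picture of consensus from interval-valued oracle outputs (our own reduction for intuition, not the ACon2 algorithm itself): keep the values covered by enough per-oracle intervals that a few Byzantine oracles cannot shift the consensus on their own.

    # Simplified illustration (not the ACon2 algorithm): values covered by at
    # least M - beta of M oracle intervals survive beta Byzantine oracles.
    def consensus_regions(intervals, min_agree):
        # Sort endpoints; at equal positions, closings (-1) precede openings (+1).
        events = sorted((x, d) for lo, hi in intervals for x, d in ((lo, 1), (hi, -1)))
        covered, start, regions = 0, None, []
        for x, d in events:
            covered += d
            if covered >= min_agree and start is None:
                start = x
            elif covered < min_agree and start is not None:
                regions.append((start, x))
                start = None
        return regions

    oracles = [(99.0, 101.0), (99.5, 100.8), (250.0, 260.0)]  # third is malicious
    print(consensus_regions(oracles, min_agree=2))            # [(99.5, 100.8)]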
Snapping Snap Sync: Practical Attacks on Go Ethereum Synchronising Nodes
Massimiliano Taverna and Kenneth G. Paterson, ETH Zurich
Token Spammers, Rug Pulls, and Sniper Bots: An Analysis of the Ecosystem of Tokens in Ethereum and in the Binance Smart Chain (BNB)
Federico Cernera, Massimo La Morgia, Alessandro Mei, and Francesco Sassi, Sapienza University of Rome
In this work, we perform a longitudinal analysis of the BNB Smart Chain and Ethereum blockchains from their inception to March 2022. We study the ecosystem of tokens and liquidity pools, highlighting analogies and differences between the two blockchains. We discover that about 60% of tokens are active for less than one day. Moreover, we find that 1% of addresses create an anomalously large share of all tokens (between 20% and 25%). We discover that these tokens are used as disposable tokens to perform a particular type of rug pull, which we call a 1-day rug pull. We quantify the presence of this operation on both blockchains, discovering that it is especially prevalent on the BNB Smart Chain. We estimate that 1-day rug pulls generated $240 million in profits. Finally, we present sniper bots, a new kind of trader bot involved in these activities, and we detect their presence and quantify their activity in the rug pull operations.
Automated Inference on Financial Security of Ethereum Smart Contracts
Wansen Wang and Wenchao Huang, University of Science and Technology of China; Zhaoyi Meng, Anhui University; Yan Xiong and Fuyou Miao, University of Science and Technology of China; Xianjin Fang, Anhui University of Science and Technology; Caichang Tu and Renjie Ji, University of Science and Technology of China
Supply Chains and Third-Party Code
LibScan: Towards More Precise Third-Party Library Identification for Android Applications
Yafei Wu and Cong Sun, State Key Lab of ISN, School of Cyber Engineering, Xidian University, China; Dongrui Zeng, Palo Alto Networks, Inc., Santa Clara, CA, USA; Gang Tan, The Pennsylvania State University, University Park, PA, USA; Siqi Ma, University of New South Wales, Australia; Peicheng Wang, State Key Lab of ISN, School of Cyber Engineering, Xidian University, China
Union under Duress: Understanding Hazards of Duplicate Resource Mismediation in Android Software Supply Chain
Xueqiang Wang, University of Central Florida; Yifan Zhang and XiaoFeng Wang, Indiana University Bloomington; Yan Jia, Nankai University; Luyi Xing, Indiana University Bloomington
UVSCAN: Detecting Third-Party Component Usage Violations in IoT Firmware
Binbin Zhao, Georgia Institute of Technology; Shouling Ji and Xuhong Zhang, Zhejiang University; Yuan Tian, University of California, Los Angeles; Qinying Wang, Yuwen Pu, and Chenyang Lyu, Zhejiang University; Raheem Beyah, Georgia Institute of Technology
Beyond Typosquatting: An In-depth Look at Package Confusion
Shradha Neupane, Worcester Polytechnic Institute; Grant Holmes, Elizabeth Wyss, and Drew Davidson, University of Kansas; Lorenzo De Carli, University of Calgary
SandDriller: A Fully-Automated Approach for Testing Language-Based JavaScript Sandboxes
Abdullah AlHamdan and Cristian-Alexandru Staicu, CISPA Helmholtz Center for Information Security
Language-based isolation offers a cheap way to restrict the privileges of untrusted code. Previous work proposes a plethora of such techniques for isolating JavaScript code on the client side, enabling the creation of web mashups. While these solutions are mostly out of fashion among practitioners, there is a growing trend toward using analogous techniques for JavaScript code running outside the browser, e.g., for protecting against supply chain attacks on the server side. Irrespective of the use case, bugs in the implementation of language-based isolation can have devastating consequences. Hence, we propose SandDriller, the first dynamic analysis-based approach for detecting sandbox escape vulnerabilities. Our core insight is to design testing oracles based on the two main objectives of language-based sandboxes: preventing writes outside the sandbox and restricting access to privileged operations. Using instrumentation, we interpose oracle checks on all the references exchanged between the host and the guest code to detect foreign references that allow the guest code to escape the sandbox. If a foreign reference is detected by an oracle at run time, SandDriller proceeds to synthesize an exploit for it. We apply our approach to six sandbox systems and find eight unique zero-day sandbox breakout vulnerabilities and two crashes. We believe that SandDriller can be integrated into the development process of sandboxes to detect security vulnerabilities in the pre-release phase.
12:00 pm–1:30 pm
Symposium Luncheon
1:30 pm–2:45 pm
Cellular Networks
Instructions Unclear: Undefined Behaviour in Cellular Network Specifications
Daniel Klischies, Ruhr University Bochum; Moritz Schloegel and Tobias Scharnowski, CISPA Helmholtz Center for Information Security; Mikhail Bogodukhov, Independent; David Rupprecht, Radix Security; Veelasha Moonsamy, Ruhr University Bochum
MobileAtlas: Geographically Decoupled Measurements in Cellular Networks for Security and Privacy Research
Gabriel K. Gegenhuber, University of Vienna; Wilfried Mayer, SBA Research; Edgar Weippl, University of Vienna; Adrian Dabrowski, CISPA Helmholtz Center for Information Security
Cellular networks are not merely data access networks to the Internet. Their distinct services and ability to form large complex compounds for roaming purposes make them an attractive research target in their own right. Their promise of providing a consistent service with comparable privacy and security across roaming partners falls apart at close inspection.
Thus, there is a need for controlled testbeds and measurement tools for cellular access networks doing justice to the technology's unique structure and global scope. Particularly, such measurements suffer from a combinatorial explosion of operators, mobile plans, and services. To cope with these challenges, we built a framework that geographically decouples the SIM from the cellular modem by selectively connecting both remotely. This allows testing any subscriber with any operator at any modem location within minutes without moving parts. The resulting GSM/UMTS/LTE measurement and testbed platform offers a controlled experimentation environment, which is scalable and cost-effective. The platform is extensible and fully open-sourced, allowing other researchers to contribute locations, SIM cards, and measurement scripts.
Using the above framework, our international experiments in commercial networks revealed exploitable inconsistencies in traffic metering, leading to multiple phreaking opportunities, i.e., fare-dodging. We also expose problematic IPv6 firewall configurations, hidden SIM card communication to the home network, and fingerprint dial progress tones to track victims across different roaming networks and countries with voice calls.
Eavesdropping Mobile App Activity via Radio-Frequency Energy Harvesting
Tao Ni, Shenzhen Research Institute, City University of Hong Kong, and Department of Computer Science, City University of Hong Kong; Guohao Lan, Department of Software Technology, Delft University of Technology; Jia Wang, College of Computer Science and Software Engineering, Shenzhen University; Qingchuan Zhao, Department of Computer Science, City University of Hong Kong; Weitao Xu, Shenzhen Research Institute, City University of Hong Kong, and Department of Computer Science, City University of Hong Kong
Radio-frequency (RF) energy harvesting is a promising technology for Internet-of-Things (IoT) devices to power sensors and prolong battery life. In this paper, we present a novel side-channel attack that leverages RF energy harvesting signals to eavesdrop on mobile app activities. To demonstrate this novel attack, we propose AppListener, an automated attack framework that recognizes fine-grained mobile app activities from harvested RF energy. The RF energy is harvested from a custom-built RF energy harvester, which generates voltage signals from ambient Wi-Fi transmissions, and app activities are recognized by a three-tier classification algorithm. We evaluate AppListener with four mobile devices running 40 common mobile apps (e.g., YouTube, Facebook, and WhatsApp) belonging to five categories (i.e., video, music, social media, communication, and game); each category contains five application-specific activities. Experimental results show that AppListener achieves over 99% accuracy in differentiating four different mobile devices, over 98% accuracy in classifying 40 different apps, and 86.7% accuracy in recognizing five sets of application-specific activities. Moreover, a comprehensive study shows that AppListener is robust to a number of factors, such as distance, environment, and non-target connected devices. Experiments integrating AppListener into commercial IoT devices also demonstrate that it is easy to deploy. Finally, countermeasures are presented as a first step toward defending against this novel attack.
Sherlock on Specs: Building LTE Conformance Tests through Automated Reasoning
Yi Chen and Di Tang, Indiana University Bloomington; Yepeng Yao, Institute of Information Engineering, Chinese Academy of Sciences, and School of Cyber Security, University of Chinese Academy of Sciences; Mingming Zha and Xiaofeng Wang, Indiana University Bloomington; Xiaozhong Liu, Worcester Polytechnic Institute; Haixu Tang, Indiana University Bloomington; Baoxu Liu, Institute of Information Engineering, Chinese Academy of Sciences, and School of Cyber Security, University of Chinese Academy of Sciences
Conformance tests are critical for finding security weaknesses in carrier network systems. However, building a conformance test procedure from specifications is challenging, as indicated by the slow progress made by the 3GPP, particularly in developing security-related tests, even with a large amount of resources already committed. A unique challenge in building such procedures is that a testing system often cannot directly invoke the condition event in a security requirement or directly observe the occurrence of the operation expected to be triggered by the event. Addressing this issue requires finding an event chain that, once initiated, leads to a chain reaction through which the testing system can either indirectly trigger the target event or indirectly observe the occurrence of the expected event. To solve this problem and make progress towards fully automated conformance test generation, we developed a new approach called Contester, which utilizes natural language processing and machine learning to build an event dependency graph from a 3GPP specification, and further performs automated reasoning on the graph to discover the event chains for a given security requirement. Such event chains are then converted by Contester into a conformance testing procedure, which is executed by a testing system to evaluate the compliance of user equipment (UE) with the security requirement. Our evaluation shows that given 22 security requirements from the LTE NAS specifications, Contester successfully generated over a hundred test procedures in just 25 minutes. After running these procedures on 22 popular UEs, including the iPhone 13, the Pixel 5a, and IoT devices, our approach uncovered 197 security requirement violations, 190 of them never reported before, exposing these devices to serious security risks such as MITM, fake base station, and replay attacks.
BASECOMP: A Comparative Analysis for Integrity Protection in Cellular Baseband Software
Eunsoo Kim, Min Woo Baek, and CheolJun Park, KAIST; Dongkwan Kim, Samsung SDS; Yongdae Kim and Insu Yun, KAIST
Usability and User Perspectives
Investigating Verification Behavior and Perceptions of Visual Digital Certificates
Dañiel Gerhardt and Alexander Ponticello, CISPA Helmholtz Center for Information Security and Saarland University; Adrian Dabrowski and Katharina Krombholz, CISPA Helmholtz Center for Information Security
This paper presents a qualitative study to explore how individuals perceive and verify visual digital certificates with QR codes. During the COVID-19 pandemic, such certificates have been used in the EU to provide standardized proof of vaccination.
We conducted semi-structured interviews with N=17 participants responsible for verifying COVID-19 certificates as part of their job. Using a two-fold thematic analysis approach, we, among other things, identified and classified multiple behavioral patterns, including inadequate reliance on visual cues as a proxy for proper digital verification.
We present design and structural recommendations based on our findings, including conceptual changes and improvements to storage and verification apps to limit shortcut opportunities. Our empirical findings are hence essential to improve the usability, robustness, and effectiveness of visual digital certificates and their verification.
"My Privacy for their Security": Employees' Privacy Perspectives and Expectations when using Enterprise Security Software
Jonah Stegman, Patrick J. Trottier, Caroline Hillier, and Hassan Khan, University of Guelph; Mohammad Mannan, Concordia University
Employees are often required to use Enterprise Security Software ("ESS") on corporate and personal devices. ESS products collect users' activity data, including users' location, applications used, and websites visited—operating from employees' devices to the cloud. To the best of our knowledge, the privacy implications of this data collection have yet to be explored. We conduct an online survey (n=258) and a semi-structured interview study (n=22) with ESS users to understand their privacy perceptions, the challenges they face when using ESS, and the ways they try to overcome those challenges. We found that while many participants reported receiving no information about what data their ESS collected, those who received some information often underestimated what was collected. Employees reported a lack of communication about various data collection aspects, including the entities with access to the data and the scope of the data collected. We use the interviews to uncover several sources of misconceptions among the participants. Our findings show that while employees understand the need for data collection for security, the lack of communication and ambiguous data collection practices erode employees' trust in the ESS and their employers. We obtain suggestions from participants on how to mitigate these misconceptions and collect feedback on our design mockups of a privacy notice and privacy indicators for ESS. Our work will benefit researchers, employers, and ESS developers seeking to protect users' privacy in the growing ESS market.
Account Security Interfaces: Important, Unintuitive, and Untrustworthy
Alaa Daffalla and Marina Bohuk, Cornell University; Nicola Dell, Jacobs Institute Cornell Tech; Rosanna Bellini, Cornell University; Thomas Ristenpart, Cornell Tech
Defining "Broken": User Experiences and Remediation Tactics When Ad-Blocking or Tracking-Protection Tools Break a Website’s User Experience
Alexandra Nisenoff, University of Chicago and Carnegie Mellon University; Arthur Borem, Madison Pickering, Grant Nakanishi, Maya Thumpasery, and Blase Ur, University of Chicago
To counteract the ads and third-party tracking ubiquitous on the web, users turn to blocking tools—ad-blocking and tracking-protection browser extensions and built-in features. Unfortunately, blocking tools can cause non-ad, non-tracking elements of a website to degrade or fail, a phenomenon termed breakage. Examples include missing images, non-functional buttons, and pages failing to load. While the literature frequently discusses breakage, prior work has not systematically mapped and disambiguated the spectrum of user experiences subsumed under breakage, nor sought to understand how users experience, prioritize, and attempt to fix breakage. We fill these gaps. First, through qualitative analysis of 18,932 extension-store reviews and GitHub issue reports for ten popular blocking tools, we developed novel taxonomies of 38 specific types of breakage and 15 associated mitigation strategies. To understand subjective experiences of breakage, we then conducted a 95-participant survey. Nearly all participants had experienced various types of breakage, and they employed an array of strategies of variable effectiveness in response to specific types of breakage in specific contexts. Unfortunately, participants rarely notified anyone who could fix the root causes. We discuss how our taxonomies and results can improve the comprehensiveness and prioritization of ongoing attempts to automatically detect and fix breakage.
Cryptographic Deniability: A Multi-perspective Study of User Perceptions and Expectations
Tarun Kumar Yadav, Brigham Young University; Devashish Gosain, KU Leuven; Kent Seamons, Brigham Young University
Entomology
Silent Bugs Matter: A Study of Compiler-Introduced Security Bugs
Jianhao Xu, Nanjing University; Kangjie Lu, University of Minnesota; Zhengjie Du, Zhu Ding, and Linke Li, Nanjing University; Qiushi Wu, University of Minnesota; Mathias Payer, EPFL; Bing Mao, Nanjing University
Compilers assure that any produced optimized code is semantically equivalent to the original code. However, even "correct" compilers may introduce security bugs, as security properties go beyond translation correctness. Security bugs introduced by such correct compiler behaviors can be disputable; compiler developers expect users to strictly follow language specifications and understand all assumptions, while compiler users may incorrectly assume that their code is secure. Such bugs are hard to find and prevent, especially when it is unclear whether they should be fixed on the compiler side or the user side. Nevertheless, these bugs are real and can be severe, and thus should be studied carefully.
We perform a comprehensive study on compiler-introduced security bugs (CISB) and their root causes. We collect a large set of CISB in the wild by manually analyzing 4,827 potential bug reports of the most popular compilers (GCC and Clang), distilling them into a taxonomy of CISB. We further conduct a user study to understand how compiler users view compiler behaviors. Our study shows that compiler-introduced security bugs are common and may have serious security impacts. It is unrealistic to expect compiler users to understand and comply with compiler assumptions. For example, the "no-undefined-behavior" assumption has become a nightmare for users and a major cause of CISB.
A Bug's Life: Analyzing the Lifecycle and Mitigation Process of Content Security Policy Bugs
Gertjan Franken, Tom Van Goethem, Lieven Desmet, and Wouter Joosen, imec-DistriNet, KU Leuven
Remote Code Execution from SSTI in the Sandbox: Automatically Detecting and Exploiting Template Escape Bugs
Yudi Zhao, Yuan Zhang, and Min Yang, Fudan University
Template engines are widely used in web applications to ease the development of user interfaces. The powerful capabilities provided by template engines can be abused by attackers through server-side template injection (SSTI), enabling severe attacks on the server side, including remote code execution (RCE). Hence, modern template engines provide a sandbox mode to prevent SSTI attacks from escalating to RCE.
In this paper, we study an overlooked sandbox bypass vulnerability in template engines, called template escape, that can elevate SSTI attacks to RCE. By escaping the template rendering process, template escape bugs can be used to inject executable code on the server side. Template escape bugs are subtle to detect and exploit due to their dependence on the template syntax and the template rendering logic. Consequently, little is known about their prevalence and severity in the real world. To this end, we conduct the first in-depth study of template escape bugs and present TEFuzz, an automatic tool to detect and exploit such bugs. By incorporating several new techniques, TEFuzz does not need to learn the template syntax and can generate PoCs and exploits for the discovered bugs. We apply TEFuzz to seven popular PHP template engines. In all, TEFuzz discovers 135 new template escape bugs and synthesizes RCE exploits for 55 of them. Our study shows that template escape bugs are prevalent and pose severe threats.
Detecting API Post-Handling Bugs Using Code and Description in Patches
Miaoqian Lin, Kai Chen, and Yang Xiao, Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China
Place Your Locks Well: Understanding and Detecting Lock Misuse Bugs
Yuandao Cai, Peisen Yao, Chengfeng Ye, and Charles Zhang, The Hong Kong University of Science and Technology
Modern multi-threaded software systems commonly leverage locks to prevent concurrency bugs. Nevertheless, due to the complexity of writing correct concurrent code, using locks is itself often error-prone. In this work, we investigate a general variety of lock misuses. Our characterization study of existing CVE IDs reveals that lock misuses can inflict concurrency errors and even severe security issues, such as denial-of-service and memory corruption. To alleviate these threats, we present a practical static analysis framework, Lockpick, which consists of two core stages to effectively detect misused locks. More specifically, Lockpick first conducts path-sensitive typestate analysis, tracking lock-state transitions and interactions to identify sequential typestate violations. Guided by the preceding results, Lockpick then performs concurrency-aware detection to pinpoint various lock misuse errors, effectively reasoning about the thread interleavings of interest. The results are encouraging—we have used Lockpick to uncover 203 unique and confirmed lock misuses across a broad spectrum of impactful open-source systems, such as OpenSSL, the Linux kernel, PostgreSQL, MariaDB, FFmpeg, Apache HTTPd, and FreeBSD. Notably, the confirmed lock misuses are long-latent, hiding for 7.4 years on average; 16 CVE IDs have been assigned for the severe errors uncovered; and Lockpick flags many real bugs missed by previous tools, with significantly fewer false positives.
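To make "sequential typestate violation" concrete, here is a toy checker over a single code path (an illustration in the spirit of the description, far simpler than Lockpick's path-sensitive static analysis for real C/C++ code):

    # Toy typestate checker for one non-reentrant lock along one path
    # (illustration only, not Lockpick's analysis).
    ERRORS = {("locked", "acquire"): "double lock (self-deadlock)",
              ("unlocked", "release"): "release of unheld lock"}

    def check_path(ops):
        state = "unlocked"
        for i, op in enumerate(ops):
            if (state, op) in ERRORS:
                return f"op {i}: {ERRORS[(state, op)]}"
            state = "locked" if op == "acquire" else "unlocked"
        return "lock held at exit (missing release)" if state == "locked" else "ok"

    print(check_path(["acquire", "acquire"]))    # double lock
    print(check_path(["acquire"]))               # missing release
    print(check_path(["acquire", "release"]))    # ok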
Adversarial Examples
The Space of Adversarial Strategies
Ryan Sheatsley, Blaine Hoak, Eric Pauley, and Patrick McDaniel, University of Wisconsin-Madison
Adversarial examples, inputs designed to induce worst-case behavior in machine learning models, have been extensively studied over the past decade. Yet, our understanding of this phenomenon stems from a rather fragmented pool of knowledge; at present, there are a handful of attacks, each with disparate assumptions in threat models and incomparable definitions of optimality. In this paper, we propose a systematic approach to characterize worst-case (i.e., optimal) adversaries. We first introduce an extensible decomposition of attacks in adversarial machine learning by atomizing attack components into surfaces and travelers. With our decomposition, we enumerate over components to create 576 attacks (568 of which were previously unexplored). Next, we propose the Pareto Ensemble Attack (PEA): a theoretical attack that upper-bounds attack performance. With our new attacks, we measure performance relative to the PEA on both robust and non-robust models, seven datasets, and three extended ℓp-based threat models incorporating compute costs, formalizing the Space of Adversarial Strategies. From our evaluation, we find that attack performance is highly contextual: the domain, model robustness, and threat model can have a profound influence on attack efficacy. Our investigation suggests that future studies measuring the security of machine learning should: (1) be contextualized to the domain and threat models, and (2) go beyond the handful of known attacks used today.
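The PEA's upper-bounding idea can be sketched in a few lines (our own reading of the abstract, not the authors' formalization): for each input, record the best outcome achieved by any attack in the pool, so no single attack can outperform the ensemble.

    # Sketch of the upper-bounding idea behind the PEA (our reading, not the
    # authors' formalization): per input, keep the best outcome of any attack.
    def pareto_ensemble(results):
        """results[a][i] = attack a's achieved loss on input i (higher = stronger)."""
        n_inputs = len(next(iter(results.values())))
        return [max(results[a][i] for a in results) for i in range(n_inputs)]

    results = {"PGD":  [2.1, 0.3, 1.0],
               "CW":   [1.7, 0.9, 0.2],
               "JSMA": [0.4, 0.8, 1.4]}
    print(pareto_ensemble(results))  # [2.1, 0.9, 1.4]: no single attack dominates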
“Security is not my field, I’m a stats guy”: A Qualitative Root Cause Analysis of Barriers to Adversarial Machine Learning Defenses in Industry
Jaron Mink, University of Illinois at Urbana-Champaign; Harjot Kaur, Leibniz University Hannover; Juliane Schmüser and Sascha Fahl, CISPA Helmholtz Center for Information Security; Yasemin Acar, Paderborn University and George Washington University
Adversarial machine learning (AML) has the potential to leak training data, force arbitrary classifications, and greatly degrade the overall performance of machine learning models, all of which academics and companies alike consider serious issues. Despite this, seminal work has found that most organizations insufficiently protect against such threats. While the lack of defenses against AML is most commonly attributed to missing knowledge, it is unknown why mitigations go unrealized in industry projects. To better understand the reasons behind the lack of deployed AML defenses, we conduct semi-structured interviews (n=21) with data scientists and data engineers to explore what barriers impede the effective implementation of such defenses. We find that practitioners' ability to deploy defenses is hampered by three primary factors: a lack of institutional motivation and educational resources for these concepts, an inability to adequately assess their AML risk and make subsequent decisions, and organizational structures and goals that discourage implementation in favor of other objectives. We conclude by discussing practical recommendations for companies and practitioners to become more aware of these risks and better prepared to respond.
X-Adv: Physical Adversarial Object Attacks against X-ray Prohibited Item Detection
Aishan Liu and Jun Guo, Beihang University; Jiakai Wang, Zhongguancun Laboratory; Siyuan Liang, Chinese Academy of Sciences; Renshuai Tao, Beihang University; Wenbo Zhou, University of Science and Technology of China; Cong Liu, iFLYTEK; Xianglong Liu, Beihang University, Zhongguancun Laboratory, and Hefei Comprehensive National Science Center; Dacheng Tao, JD Explore Academy
Adversarial attacks are valuable for evaluating the robustness of deep learning models. Existing attacks are primarily conducted on the visible light spectrum (e.g., pixel-wise texture perturbations). However, attacks targeting texture-free X-ray images remain underexplored, despite the widespread application of X-ray imaging in safety-critical scenarios such as the X-ray detection of prohibited items. In this paper, we take the first step toward the study of adversarial attacks targeted at X-ray prohibited item detection, and reveal the serious threats posed by such attacks in this safety-critical scenario. Specifically, we posit that successful physical adversarial attacks in this scenario should be specially designed to circumvent the challenges posed by color/texture fading and complex overlapping. To this end, we propose X-Adv to generate physically printable metal objects that act as an adversarial agent capable of deceiving X-ray detectors when placed in luggage. To resolve the issues associated with color/texture fading, we develop a differentiable converter that facilitates the generation of 3D-printable objects with adversarial shapes, using the gradients of a surrogate model rather than directly generating adversarial textures. To place the printed 3D adversarial objects in luggage with complex overlapping instances, we design a policy-based reinforcement learning strategy to find locations eliciting strong attack performance in worst-case scenarios whereby the prohibited items are heavily occluded by other items. To verify the effectiveness of the proposed X-Adv, we conduct extensive experiments in both the digital and the physical world (employing a commercial X-ray security inspection system for the latter case). Furthermore, we present XAD, a physical-world X-ray adversarial attack dataset. We hope this paper will draw more attention to the potential threats targeting safety-critical scenarios. Our code and the XAD dataset are available at https://github.com/DIG-Beihang/X-adv.
SMACK: Semantically Meaningful Adversarial Audio Attack
Zhiyuan Yu, Yuanhaur Chang, and Ning Zhang, Washington University in St. Louis; Chaowei Xiao, Arizona State University
Voice-controllable systems rely on speech recognition and speaker identification as key enabling technologies. While they bring revolutionary changes to our daily lives, their security has become a growing concern. Existing work has demonstrated the feasibility of using maliciously crafted perturbations to manipulate speech or speaker recognition. Although these attacks vary in targets and techniques, they all require the addition of noise perturbations. While these perturbations are generally restricted to an Lp-bounded neighborhood, the added noise inevitably leaves unnatural traces recognizable by humans and can be used for defense. To address this limitation, we introduce a new class of adversarial audio attack, named the Semantically Meaningful Adversarial Audio AttaCK (SMACK), in which inherent speech attributes (such as prosody) are modified such that the audio still semantically represents the same speech and preserves speech quality. The efficacy of SMACK was evaluated against five transcription systems and two speaker recognition systems in a black-box manner. By manipulating semantic attributes, our adversarial audio examples are capable of evading state-of-the-art defenses, with better speech naturalness compared to traditional Lp-bounded attacks in our human perceptual study.
URET: Universal Robustness Evaluation Toolkit (for Evasion)
Kevin Eykholt, Taesung Lee, Douglas Schales, Jiyong Jang, and Ian Molloy, IBM Research; Masha Zorin, University of Cambridge
Machine learning models are known to be vulnerable to adversarial evasion attacks, as illustrated by image classification models. Thoroughly understanding such attacks is critical in order to ensure the safety and robustness of critical AI tasks. However, most evasion attacks are difficult to deploy against a majority of AI systems because they have focused on the image domain, with only a few constraints. An image is composed of homogeneous, numerical, continuous, and independent features, unlike many other input types to AI systems used in practice. Furthermore, some input types carry additional semantic and functional constraints that must be observed to generate realistic adversarial inputs. In this work, we propose a new framework to enable the generation of adversarial inputs irrespective of the input type and task domain. Given an input and a set of pre-defined input transformations, our framework discovers a sequence of transformations that results in a semantically correct and functional adversarial input. We demonstrate the generality of our approach on several diverse machine learning tasks with various input representations. We also show the importance of generating adversarial examples, as they enable the deployment of mitigation techniques.
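A minimal sketch of transformation-sequence search in the spirit of this description (not URET's implementation; the scoring function and transformation set are assumed to be supplied): greedily chain pre-defined, semantics-preserving transformations until the model's label flips.

    # Sketch in the spirit of the description (not URET's code): greedily
    # chain semantics-preserving transformations until the label flips.
    def find_evasion(x, transforms, label, target_score, max_steps=10):
        """label(x) -> predicted class; target_score(x) -> model's score for
        the attacker's target class (both assumed given)."""
        original, current = label(x), x
        for _ in range(max_steps):
            # apply the transformation that moves the score most toward the target
            current = max((t(current) for t in transforms), key=target_score)
            if label(current) != original:
                return current                  # functional adversarial input
        return None                             # give up after max_steps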
Private Record Access
Authenticated private information retrieval
Simone Colombo, EPFL; Kirill Nikitin, Cornell Tech; Henry Corrigan-Gibbs, MIT; David J. Wu, UT Austin; Bryan Ford, EPFL
This paper introduces protocols for authenticated private information retrieval. These schemes enable a client to fetch a record from a remote database server such that (a) the server does not learn which record the client reads, and (b) the client either obtains the "authentic" record or detects server misbehavior and safely aborts. Both properties are crucial for many applications. Standard private-information-retrieval schemes either do not ensure this form of output authenticity, or they require multiple database replicas with an honest majority. In contrast, we offer multi-server schemes that protect security as long as at least one server is honest. Moreover, if the client can obtain a short digest of the database out of band, then our schemes require only a single server. Performing an authenticated private PGP-public-key lookup on an OpenPGP key server's database of 3.5 million keys (3 GiB), using two non-colluding servers, takes under 1.2 core-seconds of computation, essentially matching the time taken by unauthenticated private information retrieval. Our authenticated single-server schemes are 30-100× more costly than state-of-the-art unauthenticated single-server schemes, though they achieve incomparably stronger integrity properties.
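For context, the unauthenticated two-server baseline that such schemes strengthen is textbook material and fits in a few lines (a sketch only—the paper's contribution is the integrity layer, which is absent here): each query looks uniformly random on its own, yet the two answers XOR to the requested record.

    import secrets

    # Textbook two-server XOR PIR (no authentication; the paper adds integrity
    # on top of schemes in this family). Records are single bytes for brevity.
    DB = [b"A", b"B", b"C", b"D"]

    def server_answer(db, query_bits):
        ans = 0
        for rec, bit in zip(db, query_bits):
            if bit:
                ans ^= rec[0]
        return ans

    def private_read(i, n):
        q1 = [secrets.randbits(1) for _ in range(n)]   # uniformly random
        q2 = q1.copy()
        q2[i] ^= 1                                     # differs only at index i
        a1 = server_answer(DB, q1)                     # sent to server 1
        a2 = server_answer(DB, q2)                     # sent to server 2
        return bytes([a1 ^ a2])                        # everything else cancels

    print(private_read(2, len(DB)))  # b'C', yet neither server learns the index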
Don’t be Dense: Efficient Keyword PIR for Sparse Databases
Sarvar Patel and Joon Young Seo, Google; Kevin Yeo, Google and Columbia University
In this paper, we introduce SparsePIR, a single-server keyword private information retrieval (PIR) construction that enables querying over sparse databases. At its core, SparsePIR is based on a novel encoding algorithm that encodes sparse database entries as linear combinations while remaining compatible with important PIR optimizations, including recursion. SparsePIR achieves response overhead that is half that of state-of-the-art keyword PIR schemes, without requiring long-term client storage of linear-sized mappings. We also introduce two variants, SparsePIRg and SparsePIRc, that further reduce the size of the serving database at the cost of increased encoding time and small additional client storage, respectively. Our frameworks enable performing keyword PIR with essentially the same costs as standard PIR. Finally, we also show that SparsePIR may be used to build batch keyword PIR with halved response overhead without any client mappings.
GigaDORAM: Breaking the Billion Address Barrier
Brett Falk, University of Pennsylvania; Rafail Ostrovsky, Matan Shtepel, and Jacob Zhang, University of California, Los Angeles
One Server for the Price of Two: Simple and Fast Single-Server Private Information Retrieval
Alexandra Henzinger, Matthew M. Hong, and Henry Corrigan-Gibbs, MIT; Sarah Meiklejohn, Google; Vinod Vaikuntanathan, MIT
We present SimplePIR, the fastest single-server private information retrieval scheme known to date. SimplePIR’s security holds under the learning-with-errors assumption. To answer a client’s query, the SimplePIR server performs fewer than one 32-bit multiplication and one 32-bit addition per database byte. SimplePIR achieves 10 GB/s/core server throughput, which approaches the memory bandwidth of the machine and the performance of the fastest two-server private-information-retrieval schemes (which require non-colluding servers). SimplePIR has relatively large communication costs: to make queries to a 1 GB database, the client must download a 121 MB "hint" about the database contents; thereafter, the client may make an unbounded number of queries, each requiring 242 KB of communication. We present a second single-server scheme, DoublePIR, that shrinks the hint to 16 MB at the cost of slightly higher per-query communication (345 KB) and slightly lower throughput (7.4 GB/s/core). Finally, we apply our new private-information-retrieval schemes, together with a novel data structure for approximate set membership, to the task of private auditing in Certificate Transparency. We achieve a strictly stronger notion of privacy than Google Chrome’s current approach with 13x more communication: 16 MB of download per week, along with 1.5 KB per TLS connection.
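A rough sketch of the LWE-based core with toy parameters (the real scheme's hint compression, packing, and parameter selection are omitted; n, q, and p below are illustrative). All uint64 arithmetic wraps modulo 2^64, which is consistent with reduction modulo q = 2^32:

import numpy as np

def simplepir_toy(db, col, n=1024, q=2**32, p=256):
    # db: (rows, cols) uint64 matrix of records in Z_p. The client sends
    # an LWE encryption of the indicator of column `col`; the server's
    # only online work is one matrix-vector product mod q.
    rows, cols = db.shape
    rng = np.random.default_rng()
    A = rng.integers(0, q, size=(cols, n), dtype=np.uint64)  # public matrix
    s = rng.integers(0, q, size=n, dtype=np.uint64)          # client secret
    e = (rng.integers(-8, 8, size=cols) % q).astype(np.uint64)  # small noise
    delta = q // p
    u = np.zeros(cols, dtype=np.uint64)
    u[col] = 1
    query = (A @ s + e + delta * u) % q                      # sent to server

    answer = (db @ query) % q                                # server work
    hint = (db @ A) % q        # the one-time "hint" the client downloads

    noisy = (answer - hint @ s) % q                          # client decode
    return np.rint(noisy.astype(np.float64) / delta).astype(np.int64) % p

db = np.array([[7, 1, 250], [3, 9, 42]], dtype=np.uint64)
print(simplepir_toy(db, col=2))    # -> [250  42]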
Duoram: A Bandwidth-Efficient Distributed ORAM for 2- and 3-Party Computation
Adithya Vadapalli, University of Waterloo; Ryan Henry, University of Calgary; Ian Goldberg, University of Waterloo
We design, analyze, and implement Duoram, a fast and bandwidth-efficient distributed ORAM protocol suitable for secure 2- and 3-party computation settings. Following Doerner and shelat's Floram construction (CCS 2017), Duoram leverages (2,2)-distributed point functions (DPFs) to represent PIR and PIR-writing queries compactly—but with a host of innovations that yield massive asymptotic reductions in communication cost and notable speedups in practice, even for modestly sized instances. Specifically, Duoram introduces a novel method for evaluating dot products of certain secret-shared vectors using communication that is only logarithmic in the vector length. As a result, for memories with n addressable locations, Duoram can perform a sequence of m arbitrarily interleaved reads and writes using just O(m lg n) words of communication, compared with Floram's O(m√n) words. Moreover, most of this work can occur during a data-independent preprocessing phase, leaving just O(m) words of online communication cost for the sequence—i.e., a constant online communication cost per memory access.
It’s All Fun and Games Until...
A Peek into the Metaverse: Detecting 3D Model Clones in Mobile Games
Chaoshun Zuo, Chao Wang, and Zhiqiang Lin, The Ohio State University
PATROL: Provable Defense against Adversarial Policy in Two-player Games
Wenbo Guo, UC Berkeley; Xian Wu, Northwestern University; Lun Wang, University of California, Berkeley; Xinyu Xing, Northwestern University; Dawn Song, UC Berkeley
The Blockchain Imitation Game
Kaihua Qin, Imperial College London, RDI; Stefanos Chaliasos, Imperial College London; Liyi Zhou, Imperial College London, RDI; Benjamin Livshits, Imperial College London; Dawn Song, UC Berkeley, RDI; Arthur Gervais, University College London, RDI
The use of blockchains for automated and adversarial trading has become commonplace. However, due to the transparent nature of blockchains, an adversary is able to observe any pending, not-yet-mined transactions, along with their execution logic. This transparency further enables a new type of adversary, which copies and front-runs profitable pending transactions in real-time, yielding significant financial gains.
Shedding light on such ''copy-paste'' malpractice, this paper introduces the Blockchain Imitation Game and proposes a generalized imitation attack methodology called Ape. Leveraging dynamic program analysis techniques, Ape supports the automatic synthesis of adversarial smart contracts. Over a timeframe of one year (1st of August, 2021 to 31st of July, 2022), Ape could have yielded 148.96M USD in profit on Ethereum, and 42.70M USD on BNB Smart Chain (BSC).
Beyond its use as a malicious attack, we further show the potential of transaction and contract imitation as a defensive strategy. Within one year, we find that Ape could have successfully imitated 13 and 22 known DeFi attacks on Ethereum and BSC, respectively. Our findings suggest that blockchain validators can imitate attacks in real time to prevent intrusions in DeFi.
It's all in your head(set): Side-channel attacks on AR/VR systems
Yicheng Zhang, Carter Slocum, Jiasi Chen, and Nael Abu-Ghazaleh, University of California, Riverside
With the increasing adoption of Augmented Reality/Virtual Reality (AR/VR) systems, security and privacy concerns attract attention from both academia and industry. This paper demonstrates that AR/VR systems are vulnerable to side-channel attacks launched from software; a malicious application without any special permissions can infer private information about user interactions, other concurrent applications, or even the surrounding world. We develop a number of side-channel attacks targeting different types of private information. Specifically, we demonstrate three attacks on the victim's interactions, successfully recovering hand gestures, voice commands made by victims, and keystrokes on a virtual keyboard, with accuracy exceeding 90%. We also demonstrate an application fingerprinting attack where the spy is able to identify an application being launched by the victim. The final attack demonstrates that the adversary can perceive a bystander in the real-world environment and estimate the bystander's distance with Mean Absolute Error (MAE) of 10.3 cm. We believe the threats presented by our attacks are pressing; they expand our understanding of the threat model faced by these emerging systems and inform the development of new AR/VR systems that are resistant to these threats.
Egg Hunt in Tesla Infotainment: A First Look at Reverse Engineering of Qt Binaries
Haohuang Wen and Zhiqiang Lin, The Ohio State University
As one of the most popular C++ extensions for developing graphical user interface (GUI) based applications, Qt has been widely used in desktops, mobiles, IoTs, automobiles, etc. Although existing binary analysis platforms (e.g., angr and Ghidra) could help reverse engineer Qt binaries, they still need to address many fundamental challenges such as the recovery of control flow graphs and symbols. In this paper, we take a first look at understanding the unique challenges and opportunities in Qt binary analysis, developing enabling techniques, and demonstrating novel applications. In particular, although callbacks make control flow recovery challenging, we notice that Qt’s signal and slot mechanism can be used to recover function callbacks. More interestingly, Qt’s unique dynamic introspection can also be repurposed to recover semantic symbols. Based on these insights, we develop QtRE for function callback and semantic symbol recovery for Qt binaries. We have tested QtRE with two suites of Qt binaries: Linux KDE and the Tesla Model S firmware, where QtRE additionally recovered 10,867 callback instances and 24,973 semantic symbols from 123 binaries, which cannot be identified by existing tools. We demonstrate a novel application of using QtRE to extract hidden commands from a Tesla Model S firmware. QtRE discovered 12 hidden commands including five unknown to the public, which can potentially be exploited to manipulate vehicle settings.
2:45 pm–3:15 pm
Break with Refreshments
3:15 pm–4:30 pm
Enclaves and Serverless Computing
Reusable Enclaves for Confidential Serverless Computing
Shixuan Zhao, The Ohio State University; Pinshen Xu, Southern University of Science and Technology; Guoxing Chen, Shanghai Jiao Tong University; Mengya Zhang, The Ohio State University; Yinqian Zhang, Southern University of Science and Technology; Zhiqiang Lin, The Ohio State University
EnigMap: External-Memory Oblivious Map for Secure Enclaves
Afonso Tinoco, Sixiang Gao, and Elaine Shi, CMU
AEX-Notify: Thwarting Precise Single-Stepping Attacks through Interrupt Awareness for Intel SGX Enclaves
Scott Constable, Intel Corporation; Jo Van Bulck, imec-DistriNet, KU Leuven; Xiang Cheng, Georgia Institute of Technology; Yuan Xiao, Cedric Xing, and Ilya Alexandrovich, Intel Corporation; Taesoo Kim, Georgia Institute of Technology; Frank Piessens, imec-DistriNet, KU Leuven; Mona Vij, Intel Corporation; Mark Silberstein, Technion
Controlled Data Races in Enclaves: Attacks and Detection
Sanchuan Chen, Fordham University; Zhiqiang Lin, The Ohio State University; Yinqian Zhang, Southern University of Science and Technology
This paper introduces controlled data race attacks, a new class of attacks against programs guarded by trusted execution environments such as Intel SGX. Controlled data race attacks are analogous to controlled-channel attacks, where the adversary controls the underlying operating system and manipulates the scheduling of enclave threads and the handling of interrupts and exceptions. Controlled data race attacks are of particular interest for two reasons. First, traditionally non-deterministic data race bugs can be triggered deterministically and exploited for security violations in the context of SGX enclaves. Second, an enclave intended to be single-threaded can be concurrently invoked by the adversary, triggering unique interleaving patterns that would not occur in traditional settings. To detect controlled data race vulnerabilities in real-world enclave binaries (including code linked with the SGX libraries), we present a lockset-based binary analysis detection algorithm. We have implemented our algorithm in a tool named SGXRacer and evaluated it with four SGX SDKs and eight open-source SGX projects, identifying 1,780 data races originating from 476 shared variables.
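The lockset idea underlying such detection can be stated compactly (an Eraser-style sketch; the paper's algorithm operates on enclave binaries and SGX-specific interleavings, which are not modeled here):

def lockset_analysis(trace):
    # trace: (thread, op, object) events, with op in
    # {'acquire', 'release', 'read', 'write'}. Reports variables whose
    # candidate lockset becomes empty, i.e. accesses not consistently
    # protected by any common lock.
    held = {}        # thread -> set of locks currently held
    candidates = {}  # shared var -> intersection of locksets so far
    races = set()
    for thread, op, obj in trace:
        locks = held.setdefault(thread, set())
        if op == "acquire":
            locks.add(obj)
        elif op == "release":
            locks.discard(obj)
        else:  # read or write of shared variable obj
            if obj not in candidates:
                candidates[obj] = set(locks)
            else:
                candidates[obj] &= locks
            if not candidates[obj]:
                races.add(obj)
    return races

trace = [("t1", "acquire", "L"), ("t1", "write", "x"),
         ("t1", "release", "L"), ("t2", "write", "x")]
print(lockset_analysis(trace))   # {'x'}: no lock protects both accesses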
Guarding Serverless Applications with Kalium
Deepak Sirone Jegan, University of Wisconsin-Madison; Liang Wang, Princeton University; Siddhant Bhagat, Microsoft; Michael Swift, University of Wisconsin-Madison
As an emerging application paradigm, serverless computing attracts attention from more and more adversaries. Unfortunately, security tools for conventional web applications cannot be easily ported to serverless computing due to its distributed nature, and existing serverless security solutions focus on enforcing user-specified information-flow policies, which cannot detect manipulation of the order of functions along application control-flow paths. In this paper, we present Kalium, an extensible security framework that leverages local function state and global application state to enforce control-flow integrity (CFI) in serverless applications. We evaluate the performance overhead and security of Kalium using realistic open-source applications; our results show that Kalium mitigates several classes of attacks with relatively low performance overhead and outperforms state-of-the-art serverless information-flow protection systems.
Email and Phishing
“To Do This Properly, You Need More Resources”: The Hidden Costs of Introducing Simulated Phishing Campaigns
Lina Brunken, Annalina Buckmann, Jonas Hielscher, and M. Angela Sasse, Ruhr University Bochum
You’ve Got Report: Measurement and Security Implications of DMARC Reporting
Md. Ishtiaq Ashiq and Weitong Li, Virginia Tech; Tobias Fiebig, Max-Planck-Institut für Informatik; Taejoong Chung, Virginia Tech
Knowledge Expansion and Counterfactual Interaction for Reference-Based Phishing Detection
Ruofan Liu, National University of Singapore; Yun Lin, Shanghai Jiao Tong University; Yifan Zhang, Penn Han Lee, and Jin Song Dong, National University of Singapore
Rods with Laser Beams: Understanding Browser Fingerprinting on Phishing Pages
Iskander Sanchez-Rola and Leyla Bilge, Norton Research Group; Davide Balzarotti, EURECOM; Armin Buescher, Crosspoint Labs; Petros Efstathopoulos, Norton Research Group
Content-Type: multipart/oracle - Tapping into Format Oracles in Email End-to-End Encryption
Fabian Ising, Münster University of Applied Sciences and National Research Center for Applied Cybersecurity ATHENE; Damian Poddebniak and Tobias Kappert, Münster University of Applied Sciences; Christoph Saatjohann and Sebastian Schinzel, Münster University of Applied Sciences and National Research Center for Applied Cybersecurity ATHENE
S/MIME and OpenPGP use cryptographic constructions repeatedly shown to be vulnerable to format oracle attacks in protocols like TLS, SSH, or IKE. However, format oracle attacks in the End-to-End Encryption (E2EE) email setting are considered impractical as victims would need to open many attacker-modified emails and communicate the decryption result to the attacker. But is this really the case?
In this paper, we survey how an attacker may remotely learn the decryption state in email E2EE. We analyze the interplay of MIME and IMAP and describe side-channels emerging from network patterns that leak the decryption status in Mail User Agents (MUAs). Concretely, we introduce specific MIME trees that produce decryption-dependent network patterns when opened in a victim’s email client.
We survey 19 OpenPGP- and S/MIME-enabled email clients and four cryptographic libraries and uncover a side-channel leaking the decryption status of S/MIME messages in one client. Further, we discuss why the exploitation in the other clients is impractical and show that it is due to missing feature support and implementation quirks. These unintended defenses create an unfortunate conflict between usability and security. We present more rigid countermeasures for MUA developers and the standards to prevent exploitation.
OSes and Security
PET: Prevent Discovered Errors from Being Triggered in the Linux Kernel
Zicheng Wang, Nanjing University; Yueqi Chen, University of Colorado Boulder; Qingkai Zeng, Nanjing University
A Hybrid Alias Analysis and Its Application to Global Variable Protection in the Linux Kernel
Guoren Li, University of California, Riverside; Hang Zhang, Georgia Institute of Technology; Jinmeng Zhou and Wenbo Shen, Zhejiang University; Yulei Sui, University of New South Wales; Zhiyun Qian, University of California, Riverside
AlphaEXP: An Expert System for Identifying Security-Sensitive Kernel Objects
Ruipeng Wang, National University of Defense Technology; Kaixiang Chen and Chao Zhang, Tsinghua University; Zulie Pan and Qianyu Li, National University of Defense Technology; Siliang Qin, University of Chinese Academy of Sciences; Shenglin Xu, Min Zhang, and Yang Li, National University of Defense Technology
Mitigating Security Risks in Linux with KLAUS: A Method for Evaluating Patch Correctness
Yuhang Wu and Zhenpeng Lin, Northwestern University; Yueqi Chen, University of Colorado Boulder; Dang K Le, Northwestern University; Dongliang Mu, Huazhong University of Science and Technology; Xinyu Xing, Northwestern University
Detecting Union Type Confusion in Component Object Model
Yuxing Zhang, East China Normal University; Xiaogang Zhu, Swinburne University of Technology; Daojing He, East China Normal University; Harbin Institute of Technology, Shenzhen; Minhui Xue, CSIRO’s Data61; Shouling Ji, Zhejiang University; Mohammad Sayad Haghighi and Sheng Wen, Swinburne University of Technology; Zhiniang Peng, Sangfor Technologies Inc.
Intrusion Detection
Network Detection of Interactive SSH Impostors Using Deep Learning
Julien Piet, UC Berkeley and Corelight; Aashish Sharma, Lawrence Berkeley National Laboratory; Vern Paxson, Corelight and UC Berkeley; David Wagner, UC Berkeley
ARGUS: Context-Based Detection of Stealthy IoT Infiltration Attacks
Phillip Rieger, Marco Chilese, Reham Mohamed, Markus Miettinen, Hossein Fereidooni, and Ahmad-Reza Sadeghi, Technical University of Darmstadt
IoT application domains, device diversity and connectivity are rapidly growing. IoT devices control various functions in smart homes and buildings, smart cities, and smart factories, making these devices an attractive target for attackers. On the other hand, the large variability of different application scenarios and inherent heterogeneity of devices make it very challenging to reliably detect abnormal IoT device behaviors and distinguish these from benign behaviors. Existing approaches for detecting attacks are mostly limited to attacks directly compromising individual IoT devices, or require predefined detection policies. They cannot detect attacks that utilize the control plane of the IoT system to trigger actions in an unintended/malicious context, e.g., opening a smart lock while the smart home residents are absent.
In this paper, we tackle this problem and propose ARGUS, the first self-learning intrusion detection system for detecting contextual attacks on IoT environments, in which the attacker maliciously invokes IoT device actions to reach its goals. ARGUS monitors the contextual setting based on the state and actions of IoT devices in the environment. An unsupervised Deep Neural Network (DNN) is used for modeling the typical contextual device behavior and detecting actions taking place in abnormal contextual settings. This unsupervised approach ensures that ARGUS is not restricted to detecting previously known attacks but is also able to detect new attacks. We evaluated ARGUS on heterogeneous real-world smart-home settings, achieving an F1-score of at least 99.64% for each setup, with a false positive rate (FPR) of at most 0.03%.
Generative Intrusion Detection and Prevention on Data Stream
HyungBin Seo and MyungKeun Yoon, Kookmin University
xNIDS: Explaining Deep Learning-based Network Intrusion Detection Systems for Active Intrusion Responses
Feng Wei, University at Buffalo; Hongda Li, Palo Alto Networks; Ziming Zhao and Hongxin Hu, University at Buffalo
While Deep Learning-based Network Intrusion Detection Systems (DL-NIDS) have recently been significantly explored and shown superior performance, they are insufficient to actively respond to the detected intrusions due to the semantic gap between their detection results and actionable interpretations. Furthermore, their high error costs make network operators unwilling to respond solely based on their detection results. The root cause of these drawbacks can be traced to the lack of explainability of DL-NIDS. Although some methods have been developed to explain deep learning-based systems, they are incapable of handling the history inputs and complex feature dependencies of structured data and do not perform well in explaining DL-NIDS.
In this paper, we present XNIDS, a novel framework that facilitates active intrusion responses by explaining DL-NIDS. Our explanation method is highlighted by: (1) approximating and sampling around history inputs; and (2) capturing feature dependencies of structured data to achieve a high-fidelity explanation. Based on the explanation results, XNIDS can further generate actionable defense rules. We evaluate XNIDS with four state-of-the-art DL-NIDS. Our evaluation results show that XNIDS outperforms previous explanation methods in terms of fidelity, sparsity, completeness, and stability, all of which are important to active intrusion responses. Moreover, we demonstrate that XNIDS can efficiently generate practical defense rules, help understand DL-NIDS behaviors, and troubleshoot detection errors.
PROGRAPHER: An Anomaly Detection System based on Provenance Graph Embedding
Fan Yang, The Chinese University of Hong Kong; Jiacen Xu, University of California, Irvine; Chunlin Xiong, Sangfor Technologies Inc.; Zhou Li, University of California, Irvine; Kehuan Zhang, The Chinese University of Hong Kong
In recent years, the Advanced Persistent Threat (APT), which involves complex and malicious actions over a long period, has become one of the biggest threats against the security of the modern computing environment. As a countermeasure, data provenance is leveraged to capture the complex relations between entities in a computing system/network, and uses such information to detect sophisticated APT attacks. Though showing promise in countering APT attacks, the existing systems still cannot achieve a good balance between efficiency, accuracy, and granularity.
In this work, we design a new anomaly detection system on provenance graphs, termed PROGRAPHER. To address the problem of “dependency explosion” of provenance graphs and achieve high efficiency, PROGRAPHER extracts temporal-ordered snapshots from the ingested logs and performs detection on the snapshots. To capture the rich structural properties of a graph, whole graph embedding and sequence-based learning are applied. Finally, key indicators are extracted from the abnormal snapshots and reported to the analysts, so their workload will be greatly reduced.
We evaluate PROGRAPHER on five real-world datasets. The results show that PROGRAPHER can detect standard attacks and APT attacks with high accuracy and outperform the state-of-the-art detection systems.
Privacy Preserving Crypto Blocks
Dubhe: Succinct Zero-Knowledge Proofs for Standard AES and related Applications
Changchang Ding and Yan Huang, Indiana University Bloomington
Curve Trees: Practical and Transparent Zero-Knowledge Accumulators
Matteo Campanelli, Protocol Labs; Mathias Hall-Andersen and Simon Holmgaard Kamp, Aarhus University, Denmark
BalanceProofs: Maintainable Vector Commitments with Fast Aggregation
Weijie Wang, Annie Ulichney, and Charalampos Papamanthou, Yale University
We present BalanceProofs, the first vector commitment that is maintainable (i.e., supporting sublinear updates) while also enjoying fast proof aggregation and verification. The basic version of BalanceProofs has O(√n log n) update time and O(√n) query time, and its constant-size aggregated proofs can be produced and verified in milliseconds. In particular, BalanceProofs improves the aggregation time and aggregation verification time of the only known maintainable and aggregatable vector commitment scheme, Hyperproofs (USENIX Security 2022), by up to 1000× and up to 100×, respectively. Fast verification of aggregated proofs is particularly useful for applications such as stateless cryptocurrencies (and was a major bottleneck for Hyperproofs), where an aggregated proof of balances is produced once but must be verified multiple times and by a large number of nodes. As a limitation, the updating time in BalanceProofs compared to Hyperproofs is roughly 6× slower, but always stays in the range from 10 to 18 milliseconds. We finally study useful tradeoffs in BalanceProofs between (aggregate) proof size, update time, and (aggregate) proof computation and verification, by introducing a bucketing technique, and present an extensive evaluation as well as a comparison to Hyperproofs.
zkSaaS: Zero-Knowledge SNARKs as a Service
Sanjam Garg, University of California, Berkeley; Aarushi Goel, NTT Research; Abhishek Jain, Johns Hopkins University; Guru Vamsi Policharla and Sruthi Sekar, University of California, Berkeley
VeriZexe: Decentralized Private Computation with Universal Setup
Alex Luoyuan Xiong, Espresso Systems, National University of Singapore; Binyi Chen and Zhenfei Zhang, Espresso Systems; Benedikt Bünz, Espresso Systems, Stanford University; Ben Fisch, Espresso Systems, Yale University; Fernando Krell and Philippe Camacho, Espresso Systems
Traditional blockchain systems execute program state transitions on-chain, requiring each network node participating in state-machine replication to re-compute every step of the program when validating transactions. This limits both scalability and privacy. Recently, Bowe et al. introduced a primitive called decentralized private computation (DPC) and provided an instantiation called Zexe, which allows users to execute arbitrary computations off-chain without revealing the program logic to the network. Moreover, transaction validation takes only constant time, independent of the off-chain computation. However, Zexe required a separate trusted setup for each application, which is highly impractical. Prior attempts to remove this per-application setup incurred significant performance loss.
We propose a new DPC instantiation VeriZexe that is highly efficient and requires only a single universal setup to support an arbitrary number of applications. Our benchmark improves the state-of-the-art by 9x in transaction generation time and by 3.4x in memory usage. Along the way, we also design efficient gadgets for variable-base multi-scalar multiplication and modular arithmetic within the Plonk constraint system, leading to a Plonk verifier gadget using only ∼21k Plonk constraints.
Warm and Fuzzing
Intender: Fuzzing Intent-Based Networking with Intent-State Transition Guidance
Jiwon Kim, Purdue University; Benjamin E. Ujcich, Georgetown University; Dave (Jing) Tian, Purdue University
Intent-based networking (IBN) abstracts network configuration complexity away from network operators by focusing on what operators want the network to do rather than how such configuration should be implemented. While such abstraction eases network management challenges, little attention to date has focused on IBN's new security concerns that adversely impact an entire network's correct operation. To demonstrate the prevalence of such security concerns, we systematize IBN's security challenges by studying existing bug reports from a representative IBN implementation within the ONOS network operating system. We find that 61% of IBN-related bugs are semantic bugs that are challenging, if not impossible, to detect efficiently by state-of-the-art vulnerability discovery tools.
To tackle existing limitations, we present Intender, the first semantically-aware fuzzing framework for IBN. Intender leverages network topology information and intent-operation dependencies (IOD) to efficiently generate testing inputs. Intender introduces a new feedback mechanism, intent-state transition guidance (ISTG), which traces the history of transitions in intent states. We evaluate Intender using ONOS and find 12 bugs, 11 of which were CVE-assigned security-critical vulnerabilities affecting network-wide control plane integrity and availability. Compared to state-of-the-art fuzzing tools AFL, Jazzer, Zest, and PAZZ, Intender generates up to 78.7× more valid fuzzing input, achieves up to 2.2× better coverage, and detects up to 82.6× more unique errors. Intender with IOD reduces redundant operations by 73.02% and spends 10.74% more time on valid operations. Intender with ISTG leads to 1.8× more intent-state transitions compared to code-coverage guidance.
Bleem: Packet Sequence Oriented Fuzzing for Protocol Implementations
Zhengxiong Luo, Junze Yu, Feilong Zuo, Jianzhong Liu, and Yu Jiang, Tsinghua University; Ting Chen, University of Electronic Science and Technology of China; Abhik Roychoudhury, National University of Singapore; Jiaguang Sun, Tsinghua University
Protocol implementations are essential components in network infrastructures. Flaws hidden in the implementations can easily render devices vulnerable to adversaries. Therefore, guaranteeing their correctness is important. However, commonly used vulnerability detection techniques, such as fuzz testing, face increasing challenges in testing these implementations due to ineffective feedback mechanisms and insufficient protocol state-space exploration techniques.
This paper presents Bleem, a packet-sequence-oriented black-box fuzzer for vulnerability detection of protocol implementations. Instead of focusing on individual packet generation, Bleem generates packets on a sequence level. It provides an effective feedback mechanism by analyzing the system output sequence noninvasively, supports guided fuzzing by resorting to state-space tracking that encompasses all parties timely, and utilizes interactive traffic information to generate protocol-logic-aware packet sequences. We evaluate Bleem on 15 widely-used implementations of well-known protocols (e.g., TLS and QUIC). Results show that, compared to the state-of-the-art protocol fuzzers such as Peach, Bleem achieves substantially higher branch coverage (up to 174.93% improvement) within 24 hours. Furthermore, Bleem exposed 15 security-critical vulnerabilities in prominent protocol implementations, with 10 CVEs assigned.
Automated Exploitable Heap Layout Generation for Heap Overflows Through Manipulation Distance-Guided Fuzzing
Bin Zhang, Jiongyi Chen, Runhao Li, Chao Feng, Ruilin Li, and Chaojing Tang, National University of Defense Technology
Generating exploitable heap layouts is a fundamental step to produce working exploits for heap overflows. For this purpose, the heap primitives identified from the target program, serving as functional units to manipulate the heap layout, are strategically leveraged to construct exploitable states. To flexibly use primitives, prior efforts only focus on particular program types or programs with dispatcher-loop structures. Beyond that, automatically generating exploitable heap layouts is hard for general-purpose programs due to the difficulties in explicitly and flexibly using primitives.
This paper presents Scatter, enabling the generation of exploitable heap layouts for heap overflows in general-purpose programs in a primitive-free manner. At the center of Scatter is a fuzzer that is guided by a new manipulation distance which measures the distance to the corruption of a victim object in the heap layout space. To make the fuzzing-based approach practical, Scatter leverages a set of techniques to improve the efficiency and handle the side effects introduced by the heap manager's sophisticated behaviors in the real-world environment. Our evaluation demonstrates that Scatter can successfully generate a total of 126 exploitable heap layouts for 18 out of 27 heap overflows in 10 general-purpose programs.
MINER: A Hybrid Data-Driven Approach for REST API Fuzzing
Chenyang Lyu, Jiacheng Xu, Shouling Ji, Xuhong Zhang, and Qinying Wang, Zhejiang University; Binbin Zhao, Georgia Institute of Technology; Gaoning Pan, Zhejiang University; Wei Cao and Peng Chen, Ant Group; Raheem Beyah, Georgia Institute of Technology
In recent years, REST API fuzzing has emerged to explore errors in cloud services. Its performance highly depends on sequence construction and request generation. However, existing REST API fuzzers have trouble generating long sequences with well-constructed requests to trigger hard-to-reach states in a cloud service, which limits their ability to find deep errors and security bugs. Further, they cannot find the specific errors caused by using undefined parameters during request generation. Therefore, in this paper, we propose a novel hybrid data-driven solution, named MINER, with three new designs working together to address the above limitations. First, MINER collects the valid sequences whose requests pass the cloud service's checking as templates, and assigns more executions to long sequence templates. Second, to improve the generation quality of requests in a sequence template, MINER leverages a state-of-the-art neural network model to predict key request parameters and provide them with appropriate parameter values. Third, MINER implements a new data-driven security rule checker to capture the new kind of errors caused by undefined parameters. We evaluate MINER against the state-of-the-art fuzzer RESTler on GitLab, Bugzilla, and WordPress via 11 REST APIs. The results demonstrate that the average pass rate of MINER is 23.42% higher than RESTler's. MINER finds 97.54% more unique errors than RESTler on average and 142.86% more reproducible errors after manual analysis. We have reported all the newly found errors, and 7 of them have been confirmed as logic bugs by the corresponding vendors.
Systematic Assessment of Fuzzers using Mutation Analysis
Philipp Görz, Björn Mathis, and Keno Hassler, CISPA Helmholtz Center for Information Security; Emre Güler, Ruhr-Universität Bochum; Thorsten Holz and Andreas Zeller, CISPA Helmholtz Center for Information Security; Rahul Gopinath, University of Sydney, Australia
4:30 pm–4:45 pm
Short Break
4:45 pm–5:45 pm
Remote Attacks
HOMESPY: The Invisible Sniffer of Infrared Remote Control of Smart TVs
Kong Huang, YuTong Zhou, and Ke Zhang, The Chinese University of Hong Kong; Jiacen Xu, University of California, Irvine; Jiongyi Chen, National University of Defense Technology; Di Tang, Indiana University Bloomington; Kehuan Zhang, The Chinese University of Hong Kong
Infrared (IR) remote control is a widely used technology at home due to its simplicity and low cost. Most consider it "secure" because of its line-of-sight usage within the home. In this paper, we revisit the security of IR remote control schemes and examine their security assumptions under the settings of internet-connected smart homes. We focus on two specific questions: (1) whether IR signals can be sniffed by an IoT device; and (2) what information can be leaked out through the sniffed IR control signals.
To answer these questions, we design a sniffing module using a commercial off-the-shelf IR receiver on a Raspberry Pi and show that the IR signal emanating from the remote control of a smart TV can be captured by a nearby IoT device, for example a smart air conditioner, even when the signal is not aimed at that device. The IR signal range and receiving angle are larger than most would expect. We also develop algorithms to extract semantic information from the sniffed IR control signals and evaluate them with real-world applications. The results show that a great deal of sensitive information can be leaked through the sniffed IR control signals, including account names and passwords, PIN codes, and even payment information.
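To make the sniffing step concrete, here is a sketch of decoding the widely used NEC remote-control encoding from recorded mark/space durations, as one might capture with a COTS IR receiver on a GPIO pin (timings from the public NEC specification; the tolerance is our choice, and real NEC bytes are sent LSB-first, which is simplified away below):

def decode_nec(durations_us, tol=0.25):
    # durations_us alternates mark/space lengths in microseconds,
    # starting at the ~9 ms header mark. NEC sends 32 bits: a ~560 us
    # mark followed by a ~560 us space is 0, by a ~1690 us space is 1.
    def near(value, target):
        return abs(value - target) <= target * tol

    if len(durations_us) < 3 or not (near(durations_us[0], 9000)
                                     and near(durations_us[1], 4500)):
        return None                            # no NEC header
    bits = []
    for mark, space in zip(durations_us[2::2], durations_us[3::2]):
        if not near(mark, 560):
            break
        bits.append(1 if near(space, 1690) else 0)
        if len(bits) == 32:
            break
    if len(bits) != 32:
        return None
    value = 0
    for b in bits:                             # MSB-first for brevity
        value = (value << 1) | b
    return value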
Remote Attacks on Speech Recognition Systems Using Sound from Power Supply
Lanqing Yang, Xinqi Chen, Xiangyong Jian, Leping Yang, Yijie Li, Qianfei Ren, Yi-Chao Chen, and Guangtao Xue, Shanghai Jiao Tong University; Xiaoyu Ji, Zhejiang University
Speech recognition (SR) systems are used on smartphones and smart speakers to make inquiries, compose emails, and initiate phone calls. However, they also pose a serious security risk. Researchers have demonstrated that the introduction of certain sounds can threaten the security of SR systems. Nonetheless, most of those methods require the attacker to approach within a short distance of the victim, thereby limiting the applicability of such schemes. Other researchers have attacked SR systems remotely using peripheral devices (e.g., lasers); however, those methods require line-of-sight access and an always-on speaker in the vicinity of the victim. To the best of our knowledge, this paper presents the first scheme, named SingAttack, in which SR systems are manipulated by human-like sounds generated by the switching-mode power supply of the victim's device. The fact that attack signals are transmitted via the power grid enables long-range attacks on existing SR systems. The proposed SingAttack system does not rely on extraneous hardware or unrealistic assumptions pertaining to device access. In experiments on ten SR systems, SingAttack achieved a Mel-cepstral distortion of 7.8 from an attack initiated at a distance of 23 m.
Near-Ultrasound Inaudible Trojan (Nuit): Exploiting Your Speaker to Attack Your Microphone
Qi Xia and Qian Chen, University of Texas at San Antonio; Shouhuai Xu, University of Colorado Colorado Springs
Voice Control Systems (VCSs) offer a convenient interface for issuing voice commands to smart devices. However, VCS security has yet to be adequately understood and addressed as evidenced by the presence of two classes of attacks: (i) inaudible attacks, which can be waged when the attacker and the victim are in proximity to each other; and (ii) audible attacks, which can be waged remotely by embedding attack signals into audios. In this paper, we introduce a new class of attacks, dubbed near-ultrasound inaudible trojan (Nuit). Nuit attacks achieve the best of the two classes of attacks mentioned above: they are inaudible and can be waged remotely. Moreover, Nuit attacks can achieve end-to-end unnoticeability, which is important but has not been paid due attention in the literature. Another feature of Nuit attacks is that they exploit victim speakers to attack victim microphones and their associated VCSs, meaning the attacker does not need to use any special speaker. We demonstrate the feasibility of Nuit attacks and propose an effective defense against them.
Medusa Attack: Exploring Security Hazards of In-App QR Code Scanning
Xing Han, Yuheng Zhang, and Xue Zhang, University of Electronic Science and Technology of China; Zeyuan Chen, G.O.S.S.I.P; Mingzhe Wang, Xidian University; Yiwei Zhang, Purdue University; Siqi Ma, The University of New South Wales; Yu Yu, Shanghai Jiao Tong University; Elisa Bertino, Purdue University; Juanru Li, G.O.S.S.I.P
Understanding Communities, Part 1
Othered, Silenced and Scapegoated: Understanding the Situated Security of Marginalised Populations in Lebanon
Jessica McClearn and Rikke Bjerg Jensen, Royal Holloway, University of London; Reem Talhouk, Northumbria University
Examining Power Dynamics and User Privacy in Smart Technology Use Among Jordanian Households
Wael Albayaydh and Ivan Flechais, University of Oxford
“If sighted people know, I should be able to know:” Privacy Perceptions of Bystanders with Visual Impairments around Camera-based Technology
Yuhang Zhao, University of Wisconsin—Madison; Yaxing Yao, University of Maryland, Baltimore County; Jiaru Fu and Nihan Zhou, University of Wisconsin—Madison
Camera-based technology can be privacy-invasive, especially for bystanders who can be captured by the cameras but do not have direct control or access to the devices. The privacy threats become even more significant to bystanders with visual impairments (BVI) since they cannot visually discover the use of cameras nearby and effectively avoid being captured. While some prior research has studied visually impaired people's privacy concerns as direct users of camera-based assistive technologies, no research has explored their unique privacy perceptions and needs as bystanders. We conducted an in-depth interview study with 16 visually impaired participants to understand BVI's privacy concerns, expectations, and needs in different camera usage scenarios. A preliminary survey with 90 visually impaired respondents and 96 sighted controls was conducted to compare BVI and sighted bystanders' general attitudes towards cameras and elicit camera usage scenarios for the interview study. Our research revealed BVI's unique privacy challenges and perceptions around cameras, highlighting their needs for privacy awareness and protection. We summarized design considerations for future privacy-enhancing technologies to fulfill BVI's privacy needs.
A Research Framework and Initial Study of Browser Security for the Visually Impaired
Elaine Lau and Zachary Peterson, Cal Poly, San Luis Obispo
The growth of web-based malware and phishing attacks has catalyzed significant advances in the research and use of interstitial warning pages and modals shown by a browser prior to loading the content of a suspect site. These warnings commonly use visual cues to attract users' attention, including specialized iconography, color, and the placement and size of buttons to communicate the importance of the scenario. While the efficacy of visual techniques has improved safety for sighted users, these techniques are unsuitable for blind and visually impaired users. We attribute this not to a lack of interest or technical capability by browser manufacturers, where universal design is a core tenet of their engineering practices, but instead to the very real dearth of research literature to inform their choices, exacerbated by a deficit of clear methodologies for conducting studies with this population. Indeed, the challenges are manifold. In this paper, we analyze and address the methodological challenges of conducting security and privacy research with a visually impaired population, and contribute a new set of methodological best practices for conducting a study of this kind. Using our methodology, we conduct a preliminary study analyzing the experiences of the visually impaired with browser security warnings, perform a thematic analysis identifying common challenges visually impaired users experience, and present some initial solutions that could improve security for this population.
Keeping Computations Confidential
ELASM: Error-Latency-Aware Scale Management for Fully Homomorphic Encryption
Yongwoo Lee, Seonyoung Cheon, and Dongkwan Kim, Yonsei University; Dongyoon Lee, Stony Brook University; Hanjun Kim, Yonsei University
Thanks to its fixed-point arithmetic and SIMD-like vectorization, among fully homomorphic encryption (FHE) schemes that allow computation on encrypted data, RNS-CKKS is widely used for privacy-preserving machine learning services. Prior works have partly automated a daunting scale management task required for RNS-CKKS fixed-point arithmetic, yet none takes an output error into consideration, preventing users from exploring a better error-latency trade-off.
This work proposes a new error- and latency-aware scale management (ELASM) scheme for the RNS-CKKS FHE scheme. By actively controlling the scale of a ciphertext, one can effectively reduce the impact of noise on the output error, because an error is scaled noise introduced by an RNS-CKKS operation. ELASM explores different scale management plans that repurpose an upscale operation as an error reduction operation, estimates the output error and latency of each plan, and iteratively finds the best plan that minimizes the error-latency cost function. In addition, this work proposes a new scale-to-noise ratio (SNR) parameter and introduces fine-grained noise-aware waterlines (a minimum scale requirement) for different RNS-CKKS operations, opening a new opportunity to further improve the error-latency trade-off.
This work implements the proposed ideas in the ELASM compiler, along with a new FHE language and type system that enforces the RNS-CKKS constraints, including SNR-based noise-aware waterlines. For ten machine and deep learning benchmarks, ELASM finds better error-latency trade-offs (lower Pareto curves) than state-of-the-art solutions such as EVA and Hecate.
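The scale-versus-error trade-off can be seen in a plain fixed-point model (a scheme-free toy of ours; real RNS-CKKS noise growth is more involved):

def encode(x, scale):        # fixed-point encode: real -> scaled integer
    return round(x * scale)

def mul(a, b):               # multiplying two encodings multiplies scales
    return a * b             # result carries scale_a * scale_b

def rescale(c, scale):       # drop one scale factor; rounding injects error
    return round(c / scale)

scale = 2 ** 20
a, b = encode(3.14159, scale), encode(2.71828, scale)
c = rescale(mul(a, b), scale)   # back to a single scale factor
print(c / scale)                # ~8.53972, with a small rounding error
# A larger scale shrinks this error but consumes the ciphertext modulus
# faster, increasing latency: the trade-off ELASM searches over.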
HECO: Fully Homomorphic Encryption Compiler
Alexander Viand, Patrick Jattke, Miro Haller, and Anwar Hithnawi, ETH Zurich
In recent years, Fully Homomorphic Encryption (FHE) has undergone several breakthroughs and advancements, leading to a leap in performance. Today, performance is no longer a major barrier to adoption. Instead, it is the complexity of developing an efficient FHE application that currently limits deploying FHE in practice and at scale. Several FHE compilers have emerged recently to ease FHE development. However, none of them answers how to automatically transform imperative programs into secure and efficient FHE implementations. This is a fundamental issue that needs to be addressed before we can realistically expect broader use of FHE. Automating these transformations is challenging because the restrictive set of operations in FHE and their non-intuitive performance characteristics require programs to be drastically transformed to achieve efficiency. Moreover, existing tools are monolithic and focus on individual optimizations. Therefore, they fail to fully address the needs of end-to-end FHE development. In this paper, we present HECO, a new end-to-end design for FHE compilers that takes high-level imperative programs and emits efficient and secure FHE implementations. In our design, we take a broader view of FHE development, extending the scope of optimizations beyond the cryptographic challenges existing tools focus on.
A Verified Confidential Computing as a Service Framework for Privacy Preservation
Hongbo Chen and Haobin Hiroki Chen, Indiana University Bloomington; Mingshen Sun, Independent Researcher; Kang Li and Zhaofeng Chen, CertiK; XiaoFeng Wang, Indiana University Bloomington
CSHER: A System for Compact Storage with HE-Retrieval
Adi Akavia and Neta Oren, University of Haifa; Boaz Sapir and Margarita Vald, Intuit Israel Inc.
Homomorphic encryption (HE) is a promising technology for protecting data in use, with considerable progress in recent years towards attaining practical runtime performance. However, the high storage overhead associated with HE remains an obstacle to its large-scale adoption. In this work we propose a new storage solution in the two-server model resolving the high storage overhead associated with HE, while preserving rigorous data confidentiality. We empirically evaluated our solution in a proof-of-concept system running on AWS EC2 instances with AWS S3 storage, demonstrating storage size with zero overhead over storing AES ciphertexts, and 10µs amortized end-to-end runtime. In addition, we performed experiments on multiple clouds, i.e., where each server resides on a different cloud, exhibiting similar results. As a central tool we introduce the first perfect secret sharing scheme with fast homomorphic reconstruction over the reals; this may be of independent interest.
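The flavor of the underlying primitive, additive secret sharing with homomorphic addition, can be sketched as follows (a naive fixed-point toy of ours; the paper's scheme is a perfect sharing with fast homomorphic reconstruction over the reals, which this is not):

import secrets

def share(x, scale=2**40):
    # Additive 2-out-of-2 sharing over fixed-point integers: each share
    # alone is uniformly random and reveals nothing about x.
    v = round(x * scale)
    r = secrets.randbelow(2**64)
    return r, (v - r) % 2**64

def reconstruct(s0, s1, scale=2**40):
    v = (s0 + s1) % 2**64
    if v >= 2**63:               # interpret as a signed value
        v -= 2**64
    return v / scale

s0, s1 = share(3.25)
t0, t1 = share(1.50)
# Homomorphic addition: each server adds its shares locally.
print(reconstruct((s0 + t0) % 2**64, (s1 + t1) % 2**64))   # 4.75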
Towards Robust Learning
Precise and Generalized Robustness Certification for Neural Networks
Yuanyuan Yuan and Shuai Wang, The Hong Kong University of Science and Technology; Zhendong Su, ETH Zurich
DiffSmooth: Certifiably Robust Learning via Diffusion Models and Local Smoothing
Jiawei Zhang, UIUC; Zhongzhu Chen, University of Michigan, Ann Arbor; Huan Zhang, Carnegie Mellon University; Chaowei Xiao, Arizona State University; Bo Li, UIUC
Diffusion models have been leveraged to perform adversarial purification and thus provide both empirical and certified robustness for a standard model. On the other hand, different robustly trained smoothed models have been studied to improve certified robustness. This raises a natural question: can diffusion models be used to achieve improved certified robustness on those robustly trained smoothed models? In this work, we first theoretically show that instances recovered by diffusion models are in the bounded neighborhood of the original instance with high probability, and that "one-shot" denoising diffusion probabilistic models (DDPM) can approximate the mean of the generated distribution of a continuous-time diffusion model, which approximates the original instance under mild conditions. Inspired by our analysis, we propose a certifiably robust pipeline, DiffSmooth, which first performs adversarial purification via diffusion models and then maps the purified instances to a common region via a simple yet effective local smoothing strategy. We conduct extensive experiments on different datasets and show that DiffSmooth achieves state-of-the-art certified robustness compared with eight baselines. For instance, DiffSmooth improves the state-of-the-art certified accuracy from 36.0% to 53.0% under ℓ2 radius 1.5 on ImageNet.
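For reference, the local-smoothing step resembles standard randomized smoothing (a sketch with hypothetical purify and classify hooks; the certification step that turns the vote into a robustness radius is omitted):

import numpy as np

def smoothed_predict(x, purify, classify, sigma=0.25, n=100, num_classes=10):
    # Majority vote of classify(purify(x + noise)) over Gaussian noise.
    #   purify:   e.g. a one-shot diffusion denoiser (hypothetical hook)
    #   classify: base model returning a class index (hypothetical hook)
    rng = np.random.default_rng(0)
    votes = np.zeros(num_classes, dtype=int)
    for _ in range(n):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        votes[classify(purify(noisy))] += 1
    return int(votes.argmax())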
ACORN: Input Validation for Secure Aggregation
James Bell and Adrià Gascón, Google LLC; Tancrède Lepoint, Amazon; Baiyu Li, Sarah Meiklejohn, and Mariana Raykova, Google LLC; Cathie Yun
HOLMES: Efficient Distribution Testing for Secure Collaborative Learning
Ian Chang and Katerina Sotiraki, UC Berkeley; Weikeng Chen, UC Berkeley & DZK Labs; Murat Kantarcioglu, University of Texas at Dallas & UC Berkeley; Raluca Popa, UC Berkeley
Using secure multiparty computation (MPC), organizations which own sensitive data (e.g., in healthcare, finance, or law enforcement) can train machine learning models over their joint dataset without revealing their data to each other. At the same time, secure computation restricts operations on the joint dataset, which impedes the computations needed to assess its quality. Without such an assessment, deploying a jointly trained model is potentially illegal. Regulations, such as the European Union's General Data Protection Regulation (GDPR), require organizations to be legally responsible for the errors, bias, or discrimination caused by their machine learning models. Hence, testing data quality emerges as an indispensable step in secure collaborative learning. However, performing distribution testing is prohibitively expensive using current techniques, as shown in our experiments.
We present HOLMES, a protocol for performing distribution testing efficiently. In our experiments, compared with three non-trivial baselines, HOLMES achieves a speedup of more than 10× for classical distribution tests and up to 10⁴× for multidimensional tests. The core of HOLMES is a hybrid protocol that integrates MPC with zero-knowledge proofs and a new ZK-friendly and naturally oblivious sketching algorithm for multidimensional tests, both with significantly lower computational complexity and concrete execution costs.
Network Cryptographic Protocols
Keep Your Friends Close, but Your Routeservers Closer: Insights into RPKI Validation in the Internet
Tomas Hlavacek, Fraunhofer SIT; Haya Shulman, Goethe-Universität Frankfurt and Fraunhofer Institute for Secure Information Technology SIT; Niklas Vogel, TU Darmstadt and Fraunhofer SIT; Michael Waidner, Fraunhofer SIT and TU Darmstadt
Exploring the Unknown DTLS Universe: Analysis of the DTLS Server Ecosystem on the Internet
Nurullah Erinola and Marcel Maehren, Ruhr University Bochum; Robert Merget, Technology Innovation Institute; Juraj Somorovsky, Paderborn University; Jörg Schwenk, Ruhr University Bochum
We Really Need to Talk About Session Tickets: A Large-Scale Analysis of Cryptographic Dangers with TLS Session Tickets
Sven Hebrok, Paderborn University; Simon Nachtigall, Paderborn University and achelos GmbH; Marcel Maehren and Nurullah Erinola, Ruhr University Bochum; Robert Merget, Technology Innovation Institute and Ruhr University Bochum; Juraj Somorovsky, Paderborn University; Jörg Schwenk, Ruhr University Bochum
Session tickets improve the performance of the TLS protocol. They allow abbreviating the handshake by using secrets from a previous session. To this end, the server encrypts the secrets using a Session Ticket Encryption Key (STEK) known only to the server, which the client stores as a ticket and sends back upon resumption. The standard leaves details such as data formats, encryption algorithms, and key management to the server implementation.
TLS session tickets have been criticized by security experts, for undermining the security guarantees of TLS. An adversary, who can guess or compromise the STEK, can passively record and decrypt TLS sessions and may impersonate the server. Thus, weak implementations of this mechanism may completely undermine TLS security guarantees.
We performed the first systematic large-scale analysis of the cryptographic pitfalls of session ticket implementations. (1) We determined the data formats and cryptographic algorithms used by 12 open-source implementations and designed online and offline tests to identify vulnerable implementations. (2) We performed several large-scale scans and collected session tickets for extended offline analyses.
We found significant differences in session ticket implementations and critical security issues in the analyzed servers. Vulnerable servers used weak keys or repeating keystreams in the issued tickets, allowing for session ticket decryption. Among others, our analysis revealed a widespread implementation flaw within the Amazon AWS ecosystem that allowed for passive traffic decryption for at least 1.9% of the Tranco Top 100k servers.
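The "repeating keystream" failure mode is easy to demonstrate (toy code; the keystream is simulated with a hash as a stand-in for a stream cipher misused with a fixed nonce):

import hashlib

def keystream(stek, nonce, n):
    # Stand-in stream cipher: if the (STEK, nonce) pair repeats across
    # tickets, the keystream repeats with it.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(stek + nonce
                              + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

stek, nonce = b"server-stek", b"\x00" * 12      # nonce reuse!
m1, m2 = b"master-secret-of-session-A", b"master-secret-of-session-B"
t1 = xor(m1, keystream(stek, nonce, len(m1)))
t2 = xor(m2, keystream(stek, nonce, len(m2)))
# A passive observer XORs two tickets: the keystream cancels, leaving
# the XOR of the plaintexts (nonzero only where they differ).
print(xor(t1, t2))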
Extended Hell(o): A Comprehensive Large-Scale Study on Email Confidentiality and Integrity Mechanisms in the Wild
Birk Blechschmidt, Saarland University; Ben Stock, CISPA Helmholtz Center for Information Security
Warmer and Fuzzers
No Linux, No Problem: Fast and Correct Windows Binary Fuzzing via Target-embedded Snapshotting
Leo Stone and Rishi Ranjan, Virginia Tech; Stefan Nagy, University of Utah; Matthew Hicks, Virginia Tech
DAFL: Directed Grey-box Fuzzing guided by Data Dependency
Tae Eun Kim, KAIST; Jaeseung Choi, Sogang University; Kihong Heo and Sang Kil Cha, KAIST
DynSQL: Stateful Fuzzing for Database Management Systems with Complex and Valid SQL Query Generation
Zu-Ming Jiang, ETH Zurich; Jia-Ju Bai, Tsinghua University; Zhendong Su, ETH Zurich
Database management systems (DBMSs) are essential parts of modern software. To ensure the security of DBMSs, recent approaches apply fuzzing to test DBMSs by automatically generating SQL queries. However, existing DBMS fuzzers are limited in generating complex and valid queries, as they heavily rely on predefined grammar models and fixed knowledge about DBMSs but do not capture DBMS-specific state information. As a result, these approaches miss many deep bugs in DBMSs.
In this paper, we propose a novel stateful fuzzing approach to effectively test DBMSs and find deep bugs. Our basic idea is that after a DBMS processes each SQL statement, there is useful state information that can be dynamically collected to facilitate later query generation. Based on this idea, our approach performs dynamic query interaction to incrementally generate complex and valid SQL queries, using the captured state information. To further improve the validity of generated queries, our approach uses the error status of query processing to filter out invalid test cases. We implement our approach as a fully automatic fuzzing framework, DynSQL. DynSQL is evaluated on 6 widely-used DBMSs (including SQLite, MySQL, MariaDB, PostgreSQL, MonetDB, and ClickHouse) and finds 40 unique bugs. Among these bugs, 38 have been confirmed, 21 have been fixed, and 19 have been assigned CVE IDs. In our evaluation, DynSQL outperforms other state-of-the-art DBMS fuzzers, achieving 41% higher code coverage and finding many bugs missed by other fuzzers.
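The state-collection idea can be sketched in miniature (our simplification; DynSQL captures much richer state from the DBMS itself and uses error feedback to filter invalid cases):

import random

class StatefulSQLGen:
    # Track schema state created by earlier statements so later
    # statements can reference existing tables and columns, and thus
    # parse and execute instead of being rejected outright.
    def __init__(self):
        self.tables = {}                 # table name -> list of columns

    def next_statement(self):
        if not self.tables or random.random() < 0.3:
            name = f"t{len(self.tables)}"
            cols = [f"c{i}" for i in range(random.randint(1, 4))]
            self.tables[name] = cols
            body = ", ".join(c + " INT" for c in cols)
            return f"CREATE TABLE {name} ({body});"
        name, cols = random.choice(list(self.tables.items()))
        if random.random() < 0.5:
            vals = ", ".join(str(random.randint(-9, 9)) for _ in cols)
            return f"INSERT INTO {name} VALUES ({vals});"
        return (f"SELECT {random.choice(cols)} FROM {name} "
                f"WHERE {random.choice(cols)} > 0;")

gen = StatefulSQLGen()
for _ in range(5):
    print(gen.next_statement())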
AIFORE: Smart Fuzzing Based on Automatic Input Format Reverse Engineering
Ji Shi, {CAS-KLONAT, BKLONSPT}, Institute of Information Engineering, Chinese Academy of Sciences; Institute for Network Science and Cyberspace & BNRist, Tsinghua University; Zhongguancun Lab; Singular Security Lab, Huawei Technologies; School of Cyber Security, University of Chinese Academy of Sciences; Zhun Wang, Institute for Network Science and Cyberspace & BNRist, Tsinghua University; Zhongguancun Lab; Zhiyao Feng, Institute for Network Science and Cyberspace & BNRist, Tsinghua University; Zhongguancun Lab; EPFL; Yang Lan and Shisong Qin, Institute for Network Science and Cyberspace & BNRist, Tsinghua University; Zhongguancun Lab; Wei You, Renmin University of China; Wei Zou, {CAS-KLONAT, BKLONSPT}, Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences; Mathias Payer, EPFL; Chao Zhang, Institute for Network Science and Cyberspace & BNRist, Tsinghua University; Zhongguancun Lab
Knowledge of a program’s input format is essential for effective input generation in fuzzing. Automated input format reverse engineering represents an attractive but challenging approach to learning the format. In this paper, we address several challenges of automated input format reverse engineering, and present a smart fuzzing solution AIFORE which makes full use of the reversed format and benefits from it. The structures and semantics of input fields are determined by the basic blocks (BBs) that process them rather than the input specification. Therefore, we first utilize byte-level taint analysis to recognize the input bytes processed by each BB, then identify indivisible input fields that are always processed together with a minimum cluster algorithm, and learn their types with a neural network model that characterizes the behavior of BBs. Lastly, we design a new power scheduling algorithm based on the inferred format knowledge to guide smart fuzzing. We implement a prototype of AIFORE and evaluate both the accuracy of format inference and the performance of fuzzing against state-of-the-art (SOTA) format reversing solutions and fuzzers. AIFORE significantly outperforms SOTA baselines on the accuracy of field boundary and type recognition. With AIFORE, we uncovered 20 bugs in 15 programs that were missed by other fuzzers.
6:00 pm–7:30 pm
Poster Session and Happy Hour
Friday, August 11
8:00 am–9:00 am
Continental Breakfast
9:00 am–10:15 am
Kernel Analysis
BoKASAN: Binary-only Kernel Address Sanitizer for Effective Kernel Fuzzing
Mingi Cho, Dohyeon An, Hoyong Jin, and Taekyoung Kwon, Yonsei University
Kernel Address Sanitizer (KASAN), an invaluable tool for finding use-after-free and out-of-bounds bugs in the Linux kernel, needs the kernel source for compile-time instrumentation. Applying KASAN to closed-source systems therefore requires a binary-only KASAN, which is challenging. A technique that uses binary rewriting and processor support to run KASAN on binary modules needs a KASAN-enabled kernel, and thus still the kernel source. Dynamic instrumentation offers an alternative but greatly increases the performance overhead, rendering kernel fuzzing impractical.
To address these problems, we present the first practical binary-only KASAN, named BoKASAN, which efficiently performs address sanitization for entire kernel binaries through dynamic instrumentation. Our key idea is selective sanitization, which identifies the target processes to sanitize and hooks the page-fault mechanism to significantly reduce the performance overhead of dynamic instrumentation. Our key insight is that kernel bugs are mostly triggered by the processes a fuzzer creates. Thus, BoKASAN deliberately sanitizes the target memory regions related to these processes and leaves the rest unsanitized, enabling effective kernel fuzzing.
Our evaluation results show that BoKASAN is practical on closed-source systems, achieving compiler-level KASAN performance even on binary-only kernels and modules. Compared to KASAN on the Linux kernel, BoKASAN detected slightly more bugs in the Janus dataset and slightly fewer in the Syzkaller/SyzVegas dataset; it found the same number of unique bugs in 5-day fuzzing runs and executed a similar number of basic blocks. BoKASAN was also effective at finding bugs in binary modules on both the Windows and Linux kernels. An ablation study shows that selective sanitization accounts for these outcomes.
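The following toy Python model (purely illustrative; BoKASAN implements this in the kernel via page-fault hooking, not in Python) shows the selective-sanitization decision: only accesses by fuzzer-created target processes are checked against poisoned redzones, so all other processes pay no sanitization cost:

```python
# Toy model of selective sanitization (illustration only).
TARGET_PIDS = {1337}          # processes created by the fuzzer
redzones = set()              # shadow state: poisoned addresses

def kmalloc(addr, size, red=8):
    """Allocate an object and poison the redzone right after it."""
    for a in range(addr + size, addr + size + red):
        redzones.add(a)

def access(pid, addr):
    if pid not in TARGET_PIDS:
        return "unchecked"    # non-target process: no sanitization cost
    if addr in redzones:
        return "out-of-bounds detected"
    return "ok"

kmalloc(0x1000, 16)
print(access(4242, 0x1010))   # unchecked (not a fuzzer process)
print(access(1337, 0x100f))   # ok (last in-bounds byte)
print(access(1337, 0x1010))   # out-of-bounds detected
```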
ACTOR: Action-Guided Kernel Fuzzing
Marius Fleischer, Dipanjan Das, and Priyanka Bose, University of California, Santa Barbara; Weiheng Bai and Kangjie Lu, University of Minnesota; Mathias Payer, EPFL; Christopher Kruegel and Giovanni Vigna, University of California, Santa Barbara
FirmSolo: Enabling dynamic analysis of binary Linux-based IoT kernel modules
Ioannis Angelakopoulos, Gianluca Stringhini, and Manuel Egele, Boston University
The Linux-based firmware running on Internet of Things (IoT) devices is complex and consists of user-level programs as well as kernel-level code. Both components have been shown to have serious security vulnerabilities, and the risk linked to kernel vulnerabilities is particularly high, as these can lead to full system compromise. However, previous work focuses only on the user-space component of embedded firmware. In this paper, we present Firmware Solution (FirmSolo), a system designed to incorporate the kernel space into firmware analysis. FirmSolo features the Kernel Configuration Reverse Engineering (K.C.R.E.) process, which leverages information (i.e., exported and required symbols and version magic) from the kernel modules found in firmware images to build a kernel that can load the modules within an emulated environment. This capability allows downstream analyses to broaden their scope to code executing in privileged mode.
We evaluated FirmSolo on 1,470 images containing 56,688 kernel modules, where it loaded 64% of the kernel modules. To demonstrate how FirmSolo aids downstream analysis, we integrate it with two representative analysis systems: the TriforceAFL kernel fuzzer and Firmadyne, a dynamic firmware analysis tool originally devoid of kernel-mode analysis capabilities. Our TriforceAFL experiments on a subset of 75 kernel modules discovered 19 previously unknown bugs in 11 distinct proprietary modules. Through Firmadyne we confirmed the presence of these previously unknown bugs in 84 firmware images. Furthermore, by using FirmSolo, Firmadyne confirmed a previously known memory corruption vulnerability in five different versions of the closed-source KCodes NetUSB module across 15 firmware images.
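As a flavor of the module metadata K.C.R.E. relies on, the following hedged sketch scans a binary kernel module for its vermagic string, which records the kernel version and configuration flags the module was built against; the module filename is hypothetical:

```python
# Sketch: recover the vermagic string from a binary kernel module, one
# of the inputs a K.C.R.E.-style analysis needs to rebuild a kernel
# that can load the module. The module path is hypothetical.
import re

def read_vermagic(path):
    data = open(path, "rb").read()
    # .modinfo entries are NUL-separated "key=value" strings.
    m = re.search(rb"vermagic=([^\x00]+)", data)
    return m.group(1).decode(errors="replace") if m else None

print(read_vermagic("netusb.ko"))
# e.g. "2.6.36 mod_unload modversions ARMv5"
```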
KextFuzz: Fuzzing macOS Kernel EXTensions on Apple Silicon via Exploiting Mitigations
Tingting Yin, Tsinghua University and Ant Group; Zicong Gao, State Key Laboratory of Mathematical Engineering and Advanced Computing; Zhenghang Xiao, Hunan University; Zheyu Ma, Tsinghua University; Min Zheng, Ant Group; Chao Zhang, Tsinghua University and Zhongguancun Laboratory
macOS drivers, i.e., Kernel EXTensions (kexts), are attractive attack targets for adversaries. However, automatically discovering vulnerabilities in kexts is extremely challenging because kexts are mostly closed-source, and the latest macOS running on customized Apple Silicon has limited tool-chain support. Most existing static analysis and dynamic testing solutions cannot be applied to the latest macOS. In this paper, we present the first smart fuzzing solution, KextFuzz, to detect bugs in the latest macOS kexts running on Apple Silicon. Unlike existing driver fuzzing solutions, KextFuzz does not require source code, execution traces, hypervisors, or hardware features (e.g., coverage tracing) and thus is universal and practical. We note that macOS has deployed many mitigations, including pointer authentication, code signing, and userspace kernel-layer wrappers, to thwart potential attacks. These mitigations can provide extra knowledge and resources for us to enable kernel fuzzing. KextFuzz exploits these mitigation schemes to instrument the binary for coverage tracking, test privileged kext code that is guarded and infrequently accessed, and infer the type and semantic information of the kext interfaces. KextFuzz has found 48 unique kernel bugs in macOS kexts. Some of them could cause severe consequences like non-recoverable denial of service or damage.
Uncontained: Uncovering Container Confusion in the Linux Kernel
Jakob Koschel, Vrije Universiteit Amsterdam; Pietro Borrello and Daniele Cono D'Elia, Sapienza University of Rome; Herbert Bos and Cristiano Giuffrida, Vrije Universiteit Amsterdam
It's Academic
"I'm going to trust this until it burns me'' Parents’ Privacy Concerns and Delegation of Trust in K-8 Educational Technology
Victoria Zhong, New York University; Susan E. McGregor, Data Science Institute, Columbia University; Rachel Greenstadt, New York University
Educators’ Perspectives of Using (or Not Using) Online Exam Proctoring
David G. Balash, The George Washington University; Rahel A. Fainchtein, Georgetown University; Elena Korkes and Miles Grant, The George Washington University; Micah Sherr, Georgetown University; Adam J. Aviv, The George Washington University
The onset of the COVID-19 pandemic changed the landscape of education and led to increased usage of remote proctoring tools that are designed to monitor students when they take assessments outside the classroom. While prior work has explored students' privacy and security concerns regarding online proctoring tools, the perspective of educators is underexplored. Notably, educators are the decision makers in the classrooms and choose which remote proctoring services to use and what level of observation they deem appropriate. To explore how educators balance the security and privacy of their students with the requirements of remote exams, we sent survey requests to over 3,400 instructors at a large private university that taught online classes during the 2020/21 academic year. We had n=125 responses: 21% of the educators surveyed used online exam proctoring services during the remote learning period, and of those, 35% plan to continue using the tools even when there is a full return to in-person learning. Educators who use exam proctoring services are often comfortable with their monitoring capabilities. However, educators are concerned about students sharing certain types of information with exam proctoring companies, particularly when proctoring services collect identifiable information to validate students' identities. Our results suggest that many educators developed alternative assessments that did not require online proctoring, and that those who did use online proctoring services often considered the tradeoffs between the potential risks to student privacy and the utility or necessity of exam proctoring services.
No more Reviewer #2: Subverting Automatic Paper-Reviewer Assignment using Adversarial Learning
Thorsten Eisenhofer, Ruhr University Bochum; Erwin Quiring, Ruhr University Bochum and International Computer Science Institute (ICSI), Berkeley; Jonas Möller, Technische Universität Berlin; Doreen Riepel, Ruhr University Bochum; Thorsten Holz, CISPA Helmholtz Center for Information Security; Konrad Rieck, Technische Universität Berlin
The number of papers submitted to academic conferences is steadily rising in many scientific disciplines. To handle this growth, systems for automatic paper-reviewer assignments are increasingly used during the reviewing process. These systems use statistical topic models to characterize the content of submissions and automate the assignment to reviewers. In this paper, we show that this automation can be manipulated using adversarial learning. We propose an attack that adapts a given paper so that it misleads the assignment and selects its own reviewers. Our attack is based on a novel optimization strategy that alternates between the feature space and problem space to realize unobtrusive changes to the paper. To evaluate the feasibility of our attack, we simulate the paper-reviewer assignment of an actual security conference (IEEE S&P) with 165 reviewers on the program committee. Our results show that we can successfully select and remove reviewers without access to the assignment system. Moreover, we demonstrate that the manipulated papers remain plausible and are often indistinguishable from benign submissions.
A Two-Decade Retrospective Analysis of a University's Vulnerability to Attacks Exploiting Reused Passwords
Alexandra Nisenoff, University of Chicago / Carnegie Mellon University; Maximilian Golla, University of Chicago / Max Planck Institute for Security and Privacy; Miranda Wei, University of Chicago / University of Washington; Juliette Hainline, Hayley Szymanek, Annika Braun, Annika Hildebrandt, Blair Christensen, David Langenberg, and Blase Ur, University of Chicago
Ethical Frameworks and Computer Security Trolley Problems: Foundations for Conversations
Tadayoshi Kohno, University of Washington; Yasemin Acar, Paderborn University & George Washington University; Wulf Loh, Universität Tübingen
De-anonymization and Re-identification
Catch You and I Can: Revealing Source Voiceprint Against Voice Conversion
Jiangyi Deng, Yanjiao Chen, Yinan Zhong, and Qianhao Miao, Zhejiang University; Xueluan Gong, Wuhan University; Wenyuan Xu, Zhejiang University
Voice conversion (VC) techniques can be abused by malicious parties to transform their audio to sound like a target speaker, making it hard for a human being or a speaker verification/identification system to trace the source speaker. In this paper, we make the first attempt to restore, with high confidence, the source voiceprint from audio synthesized by voice conversion methods. However, unveiling the features of the source speaker from converted audio is challenging, since the voice conversion operation is designed to disentangle the original features and infuse the features of the target speaker. To fulfill our goal, we develop Revelio, a representation learning model that learns to effectively extract the voiceprint of the source speaker from converted audio samples. We equip Revelio with a carefully designed differential rectification algorithm that eliminates the influence of the target speaker by removing the representation component parallel to the target speaker's voiceprint. We have conducted extensive experiments to evaluate the capability of Revelio in restoring voiceprints from audio converted by VQVC, VQVC+, AGAIN, and BNE. The experiments verify that Revelio is able to rebuild voiceprints that can be traced to the source speaker by speaker verification and identification systems. Revelio also exhibits robust performance under inter-gender conversion, unseen languages, and telephony networks.
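The differential rectification step boils down to a vector operation; below is a minimal numpy sketch (with toy two-dimensional embeddings standing in for real voiceprints) of removing the representation component parallel to the target speaker's voiceprint:

```python
# Minimal sketch of the vector operation behind differential
# rectification: strip the component of a converted-audio embedding
# that is parallel to the target speaker's voiceprint. The embeddings
# here are toy vectors, not real voiceprints.
import numpy as np

def remove_parallel(embedding, target_voiceprint):
    t = target_voiceprint / np.linalg.norm(target_voiceprint)
    return embedding - np.dot(embedding, t) * t   # orthogonal residue

emb = np.array([3.0, 1.0])      # representation of converted audio
target = np.array([1.0, 0.0])   # target speaker's voiceprint
print(remove_parallel(emb, target))  # [0. 1.]: target influence removed
```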
V-Cloak: Intelligibility-, Naturalness- & Timbre-Preserving Real-Time Voice Anonymization
Jiangyi Deng, Fei Teng, and Yanjiao Chen, Zhejiang University; Xiaofu Chen and Zhaohui Wang, Wuhan University; Wenyuan Xu, Zhejiang University
Voice data generated on instant messaging or social media applications contains unique user voiceprints that may be abused by malicious adversaries for identity inference or identity theft. Existing voice anonymization techniques, e.g., signal processing and voice conversion/synthesis, suffer from degradation of perceptual quality. In this paper, we develop a voice anonymization system, named V-Cloak, which attains real-time voice anonymization while preserving the intelligibility, naturalness and timbre of the audio. Our designed anonymizer features a one-shot generative model that modulates the features of the original audio at different frequency levels. We train the anonymizer with a carefully-designed loss function. Apart from the anonymity loss, we further incorporate the intelligibility loss and the psychoacoustics-based naturalness loss. The anonymizer can realize untargeted and targeted anonymization to achieve the anonymity goals of unidentifiability and unlinkability.
We have conducted extensive experiments on four datasets, i.e., LibriSpeech (English), AISHELL (Chinese), CommonVoice (French) and CommonVoice (Italian), five Automatic Speaker Verification (ASV) systems (including two DNN-based, two statistical and one commercial ASV), and eleven Automatic Speech Recognition (ASR) systems (for different languages). Experiment results confirm that V-Cloak outperforms five baselines in terms of anonymity performance. We also demonstrate that V-Cloak trained only on the VoxCeleb1 dataset against ECAPA-TDNN ASV and DeepSpeech2 ASR has transferable anonymity against other ASVs and cross-language intelligibility for other ASRs. Furthermore, we verify the robustness of V-Cloak against various de-noising techniques and adaptive attacks. Hopefully, V-Cloak may provide a cloak for us in a prism world.
Assessing Anonymity Techniques Employed in German Court Decisions: A De-Anonymization Experiment
Dominic Deuber and Michael Keuchen, Friedrich-Alexander-Universität Erlangen-Nürnberg; Nicolas Christin, Carnegie Mellon University
Democracy requires transparency. Consequently, courts of law must publish their decisions. At the same time, the interests of the persons involved in these court decisions must be protected. For this reason, court decisions in Europe are anonymized using a variety of techniques. To understand how well these techniques protect the persons involved, we conducted an empirical experiment with 54 law students, whom we asked to de-anonymize 50 German court decisions. We found that all anonymization techniques used in these court decisions were vulnerable, most notably the use of initials. Since even supposedly secure anonymization techniques proved vulnerable, our work empirically reveals the complexity involved in the anonymization of court decisions, and thus calls for further research to increase anonymity while preserving comprehensibility. Toward that end, we provide recommendations for improving anonymization quality. Finally, we provide an empirical notion of “reasonable effort,” to flesh out the definition of anonymity in the legal context. In doing so, we bridge the gap between the technical and the legal understandings of anonymity.
Person Re-identification in 3D Space: A WiFi Vision-based Approach
Yili Ren and Yichao Wang, Florida State University; Sheng Tan, Trinity University; Yingying Chen, Rutgers University; Jie Yang, Florida State University
Person re-identification (Re-ID) has become increasingly important as it supports a wide range of security applications. Traditional person Re-ID mainly relies on optical camera-based systems, which incur several limitations due to changes in people's appearance, occlusions, and human poses. In this work, we propose a WiFi vision-based system, 3D-ID, for person Re-ID in 3D space. Our system leverages advances in WiFi and deep learning to help WiFi devices "see", identify, and recognize people. In particular, we leverage multiple antennas on next-generation WiFi devices and 2D AoA estimation of the signal reflections to enable WiFi to visualize a person in the physical environment. We then leverage deep learning to digitize the visualization of the person into a 3D body representation and extract both the static body shape and dynamic walking patterns for person Re-ID. Our evaluation results under various indoor environments show that the 3D-ID system achieves an overall rank-1 accuracy of 85.3%. Results also show that our system is resistant to various attacks. The proposed 3D-ID is thus very promising as it could augment or complement camera-based systems.
In the Quest to Protect Users from Side-Channel Attacks – A User-Centred Design Space to Mitigate Thermal Attacks on Public Payment Terminals
Karola Marky, Ruhr-University Bochum; Shaun Macdonald, University of Glasgow; Yasmeen Abdrabou, Lancaster University; Mohamed Khamis, University of Glasgow
Thieves in the House
Extracting Training Data from Diffusion Models
Nicholas Carlini and Milad Nasr, Google Brain; Jamie Hayes, Google DeepMind; Matthew Jagielski, Google Research; Vikash Sehwag, Princeton University; Florian Tramer, ETH Zurich; Borja De Balle Pigem, Google DeepMind; Daphne Ippolito, Google Brain; Eric Wallace, UC Berkeley
PCAT: Functionality and Data Stealing from Split Learning by Pseudo-Client Attack
Xinben Gao and Lan Zhang, University of Science and Technology of China
Split learning (SL) is a popular framework to protect a client's training data by splitting up a model between the client and the server. Previous efforts have shown that a semi-honest server can conduct a model inversion attack to recover the client's inputs and model parameters to some extent, as well as to infer the labels. However, those attacks require knowledge of the client network structure, and their performance deteriorates dramatically as the client network gets deeper (≥ 2 layers). In this work, we explore attacks on SL in a more general and challenging situation where the client model is unknown to the server and grows more complex and deeper. Different from conventional model inversion, we investigate the inherent privacy leakage through the server model in SL and reveal that clients' functionality and private data can be easily stolen by the server model, and that a series of intermediate server models during SL can cause even more leakage. Based on these insights, we propose a new attack on SL: the Pseudo-Client ATtack (PCAT). To the best of our knowledge, this is the first attack enabling a semi-honest server to steal clients' functionality, reconstruct private inputs, and infer private labels without any knowledge about the clients' model. The only requirement for the server is a tiny dataset (about 0.1%-5% of the private training set) for the same learning task. Moreover, the attack is transparent to clients, so a server can obtain clients' private information without any risk of being detected by the client. We implement PCAT on various benchmark datasets and models. Extensive experiments show that our attack significantly outperforms the state-of-the-art attack in various conditions, including more complex models and learning tasks, even in non-i.i.d. conditions. Moreover, our functionality stealing attack is resilient to existing defensive mechanisms.
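A deliberately simplified numpy sketch of the pseudo-client idea follows, under the assumption of linear models so least squares can replace neural-network training: the server uses only a tiny same-task dataset to fit a stand-in for the client's unknown bottom model, after which the split pipeline's functionality is reproduced even though the client's exact weights are never recovered:

```python
# Simplified pseudo-client sketch: real PCAT trains neural networks;
# a linear toy keeps the mechanics visible.
import numpy as np

rng = np.random.default_rng(0)
W_client = rng.normal(size=(8, 4))      # client's secret bottom model
W_server = rng.normal(size=(4, 3))      # server-side model (known)

X_aux = rng.normal(size=(64, 8))        # tiny same-task dataset
y_aux = X_aux @ W_client @ W_server     # outputs the task would produce

# Fit pseudo-client weights so that pseudo-client + server model
# reproduces the task outputs on the auxiliary data.
targets = y_aux @ np.linalg.pinv(W_server)       # wanted activations
W_pseudo, *_ = np.linalg.lstsq(X_aux, targets, rcond=None)

x = rng.normal(size=(1, 8))             # fresh, never-seen input
print(np.allclose(x @ W_pseudo @ W_server,
                  x @ W_client @ W_server))      # True: stolen function
```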
A Plot is Worth a Thousand Words: Model Information Stealing Attacks via Scientific Plots
Boyang Zhang and Xinlei He, CISPA Helmholtz Center for Information Security; Yun Shen, NetApp; Tianhao Wang, University of Virginia; Yang Zhang, CISPA Helmholtz Center for Information Security
Building advanced machine learning (ML) models requires expert knowledge and many trials to discover the best architecture and hyperparameter settings. Previous work demonstrates that model information can be leveraged to assist other attacks, such as membership inference and generating adversarial examples. Therefore, such information, e.g., hyperparameters, should be kept confidential. It is well known that an adversary can leverage a target ML model's output to steal the model's information. In this paper, we discover a new side channel for model information stealing attacks, i.e., models' scientific plots, which are extensively used to demonstrate model performance and are easily accessible. Our attack is simple and straightforward. We leverage shadow model training techniques to generate training data for the attack model, which is essentially an image classifier. Extensive evaluation on three benchmark datasets shows that our proposed attack can effectively infer the architecture/hyperparameters of image classifiers based on convolutional neural networks (CNNs) given the scientific plots generated from them. We also reveal that the attack's success is mainly caused by the shape of the scientific plots, and further demonstrate that the attacks are robust in various scenarios. Given the simplicity and effectiveness of the attack method, our study indicates that scientific plots indeed constitute a valid side channel for model information stealing attacks. To mitigate the attacks, we propose several defense mechanisms that can reduce the original attacks' accuracy while maintaining the plot utility. However, such defenses can still be bypassed by adaptive attacks.
Beyond The Gates: An Empirical Analysis of HTTP-Managed Password Stealers and Operators
Athanasios Avgetidis, Omar Alrawi, Kevin Valakuzhy, and Charles Lever, Georgia Institute of Technology; Paul Burbage, MalBeacon; Angelos D. Keromytis, Fabian Monrose, and Manos Antonakakis, Georgia Institute of Technology
Password Stealers (Stealers) are commodity malware that specialize in credential theft. This work presents a large-scale longitudinal study of Stealers and their operators. Using a commercial dataset, we characterize the activity of over 4,586 distinct Stealer operators through their devices, spanning 10 different Stealer families. Operators make heavy use of proxies, including traditional VPNs, residential proxies, mobile proxies, and the Tor network, when managing their botnets. Our affiliation analysis unveils a stratified enterprise of cybercriminals for each service offering, and we identify privileged operators using graph analysis. We find several Stealer-as-a-Service providers that lower the economic and technical barriers for many cybercriminals. We estimate that service providers benefit from high profit margins (up to 98%), with a lower-bound profit estimate of $11,000 per month. We find high-profile targets, including the Social Security Administration, the U.S. House of Representatives, and the U.S. Senate. We share our findings with law enforcement and publish six months of the dataset, analysis artifacts, and code.
LightThief: Your Optical Communication Information is Stolen behind the Wall
Xin Liu, The Ohio State University; Wei Wang, Saint Louis University; Guanqun Song and Ting Zhu, The Ohio State University
Distributed Secure Computations
WaterBear: Practical Asynchronous BFT Matching Security Guarantees of Partially Synchronous BFT
Haibin Zhang, Beijing Institute of Technology; Sisi Duan, Tsinghua University; Boxin Zhao, Zhongguancun Laboratory; Liehuang Zhu, Beijing Institute of Technology
Practical Asynchronous High-threshold Distributed Key Generation and Distributed Polynomial Sampling
Sourav Das, University of Illinois at Urbana-Champaign; Zhuolun Xiang, Aptos; Lefteris Kokoris-Kogias, IST Austria and Mysten Labs; Ling Ren, University of Illinois at Urbana-Champaign
Distributed Key Generation (DKG) is a technique to bootstrap threshold cryptosystems without a trusted party. DKG is an essential building block of many decentralized protocols such as randomness beacons, threshold signatures, Byzantine consensus, and multiparty computation. While significant progress has been made recently, existing asynchronous DKG constructions are inefficient when the reconstruction threshold is larger than one-third of the total nodes. In this paper, we present a simple and concretely efficient asynchronous DKG (ADKG) protocol among n = 3t + 1 nodes that can tolerate up to t malicious nodes and support any reconstruction threshold ℓ ≥ t. Our protocol has an expected O(κn³) communication cost, where κ is the security parameter, and only assumes the hardness of the Discrete Logarithm problem. The core ingredient of our ADKG protocol is an asynchronous protocol to secret-share a random polynomial of degree ℓ ≥ t, which has other applications, such as asynchronous proactive secret sharing and asynchronous multiparty computation. We implement our high-threshold ADKG protocol and evaluate it using a network of up to 128 geographically distributed nodes. Our evaluation shows that our high-threshold ADKG protocol reduces the running time by 90% and bandwidth usage by 80% over the state of the art.
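The core primitive can be illustrated with plain Shamir sharing; the sketch below (toy parameters, no asynchrony or Byzantine behavior) samples a random polynomial of degree ℓ and shows that any ℓ+1 shares reconstruct the shared secret:

```python
# Minimal sketch: secret-share a random degree-l polynomial among n
# parties and reconstruct from any l+1 shares (Shamir over a prime
# field; parameters are toy-sized, requires Python 3.8+).
import random

P = 2**127 - 1                      # a Mersenne prime field
n, l = 7, 3                         # parties, reconstruction degree

coeffs = [random.randrange(P) for _ in range(l + 1)]   # random poly
shares = {i: sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
          for i in range(1, n + 1)}                    # party i's share

def reconstruct(points, x=0):
    """Lagrange-interpolate the polynomial at x from l+1 shares."""
    total = 0
    for i, yi in points.items():
        num = den = 1
        for j in points:
            if j != i:
                num = num * (x - j) % P
                den = den * (i - j) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

subset = {i: shares[i] for i in [2, 4, 5, 7]}          # any l+1 shares
print(reconstruct(subset) == coeffs[0])                # True: secret
```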
Efficient 3PC for Binary Circuits with Application to Maliciously-Secure DNN Inference
Yun Li, Tsinghua University, Ant Group; Yufei Duan, Tsinghua University; Zhicong Huang, Alibaba Group; Cheng Hong, Ant Group; Chao Zhang and Yifan Song, Tsinghua University
TVA: A multi-party computation system for secure and expressive time series analytics
Muhammad Faisal, Boston University; Jerry Zhang, University of California San Diego; John Liagouris, Vasiliki Kalavri, and Mayank Varia, Boston University
Long Live The Honey Badger: Robust Asynchronous DPSS and its Applications
Thomas Yurek, University of Illinois at Urbana-Champaign, NTT Research, and IC3; Zhuolun Xiang, Aptos; Yu Xia, MIT CSAIL and NTT Research; Andrew Miller, University of Illinois at Urbana-Champaign and IC3
Secret sharing is an essential tool for many distributed applications, including distributed key generation and multiparty computation. For many practical applications, we would like to tolerate network churn, meaning participants can dynamically enter and leave the pool of protocol participants as they please. Such protocols, called Dynamic-committee Proactive Secret Sharing (DPSS), have recently been studied; however, existing DPSS protocols do not gracefully handle faults: the presence of even one unexpectedly slow node can often slow down the whole protocol by a factor of O(n).
In this work, we explore optimally fault-tolerant asynchronous DPSS that is not slowed down by crash faults and even handles Byzantine faults while maintaining the same performance. We introduce the first high-threshold DPSS, which offers favorable characteristics relative to prior non-synchronous works in the presence of faults while simultaneously supporting higher privacy thresholds. We then batch-amortize this scheme along with a parallel non-high-threshold scheme, which achieves optimal bandwidth characteristics. We implement our schemes and demonstrate that they can compete with prior work in best-case performance while outperforming it in non-optimal settings.
Mobile Security and Privacy
Powering Privacy: On the Energy Demand and Feasibility of Anonymity Networks on Smartphones
Daniel Hugenroth and Alastair R. Beresford, University of Cambridge
Eye-Shield: Real-Time Protection of Mobile Device Screen Information from Shoulder Surfing
Brian Jay Tang and Kang G. Shin, University of Michigan
This paper and abstract are under embargo and will be released to the public on the first day of the symposium, Wednesday, August 9, 2023.
The OK Is Not Enough: A Large Scale Study of Consent Dialogs in Smartphone Applications
Simon Koch, TU Braunschweig; Benjamin Altpeter, Datenanfragen.de e.V.; Martin Johns, TU Braunschweig
Mobile applications leaking personal information is a well-established observation both pre- and post-GDPR. The legal requirements for personal data collection in the context of tracking are specified by the GDPR, and the common understanding is that tracking must be based on proper consent. Studies of consent dialogs on websites have revealed severe issues, including dark patterns. However, the mobile space is currently underexplored, with initial observations pointing towards a similar state of affairs. To address this research gap, we analyze a subset of possible consent dialogs, namely privacy consent dialogs, in 3,006 Android and 1,773 iOS applications. We show that only 22.3% of all apps have any form of dialog, with only 11.9% giving the user some form of actionable choice, e.g., at least an accept button. However, this choice is limited, as a large proportion of such dialogs employ some form of dark pattern coercing the user into consenting.
Notice the Imposter! A Study on User Tag Spoofing Attack in Mobile Apps
Shuai Li, Zhemin Yang, Guangliang Yang, Hange Zhang, Nan Hua, Yurui Huang, and Min Yang, Fudan University
Lost in Conversion: Exploit Data Structure Conversion with Attribute Loss to Break Android Systems
Rui Li, Wenrui Diao, and Shishuai Yang, Shandong University; Xiangyu Liu, Alibaba Group; Shanqing Guo, Shandong University; Kehuan Zhang, The Chinese University of Hong Kong
10:15 am–10:45 am
Break with Refreshments
10:45 am–12:00 pm
Web Security
Silent Spring: Prototype Pollution Leads to Remote Code Execution in Node.js
Mikhail Shcherbakov and Musard Balliu, KTH Royal Institute of Technology; Cristian-Alexandru Staicu, CISPA Helmholtz Center for Information Security
Prototype pollution is a dangerous vulnerability affecting prototype-based languages like JavaScript and the Node.js platform. It refers to the ability of an attacker to inject properties into an object's root prototype at runtime and subsequently trigger the execution of legitimate code gadgets that access these properties on the object's prototype, leading to attacks such as Denial of Service (DoS), privilege escalation, and Remote Code Execution (RCE). While there is anecdotal evidence that prototype pollution leads to RCE, current research does not tackle the challenge of gadget detection, thus showing only the feasibility of DoS attacks, mainly against Node.js libraries.
In this paper, we set out to study the problem in a holistic way, from the detection of prototype pollution to the detection of gadgets, with the ambitious goal of finding end-to-end exploits beyond DoS in full-fledged Node.js applications. We build the first multi-staged framework that uses multi-label static taint analysis to identify prototype pollution in Node.js libraries and applications, as well as a hybrid approach to detect universal gadgets, notably by analyzing the Node.js source code. We implement our framework on top of GitHub's static analysis framework CodeQL and find 11 universal gadgets in core Node.js APIs that lead to code execution. Furthermore, we use our methodology in a study of 15 popular Node.js applications to identify prototype pollutions and gadgets. We manually exploit eight RCE vulnerabilities in three high-profile applications: NPM CLI, Parse Server, and Rocket.Chat. Our results provide alarming evidence that prototype pollution, in combination with powerful universal gadgets, leads to RCE in Node.js.
Cookie Crumbles: Breaking and Fixing Web Session Integrity
Marco Squarcina, TU Wien; Pedro Adão, Instituto Superior Técnico, ULisboa, Instituto de Telecomunicações; Lorenzo Veronese and Matteo Maffei, TU Wien
Minimalist: Semi-automated Debloating of PHP Web Applications through Static Analysis
Rasoul Jahanshahi, Boston University; Babak Amin Azad and Nick Nikiforakis, Stony Brook University; Manuel Egele, Boston University
As web applications grow more complicated and rely on third-party libraries to deliver new features to their users, they become bloated with unnecessary code. This unnecessary code increases a web application's attack surface and can be exploited to steal user data and compromise the underlying web server. One approach to dealing with bloated code is debloating: selectively removing features that users do not require.
In this paper, we identify the current challenges with debloating web applications and propose a semi-automated static debloating scheme. We implement a prototype of our proposed method, called Minimalist, which generates a call-graph for a given PHP web application. Minimalist performs a reachability analysis for the features users require and removes unreachable functions in the analyzed web application. Compared to prior work, Minimalist debloats web applications without relying on heavy runtime instrumentation. Furthermore, the call-graph generated by Minimalist can be reused (in combination with web server logs) to debloat different installations of the same web application. Due to the inherent complexity and highly dynamic nature of the PHP language, Minimalist cannot guarantee the soundness of its call-graph analysis. However, Minimalist follows a best-effort approach to model the majority of PHP features used by popular web applications, such as WordPress, phpMyAdmin, and others.
We evaluated Minimalist on 12 versions of four popular PHP web applications with 45 recent security vulnerabilities. We show that Minimalist reduces the size of web applications in our dataset on average by 18% and removes 38% of known vulnerabilities. Our results demonstrate that the principled debloating of web applications can lead to significant security gains without relying on instrumentation mechanisms that degrade the performance of the server.
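A minimal sketch of the debloating step: given a call-graph and entry points for the features users require (e.g., derived from web server logs), a reachability pass keeps what is used and drops the rest. The graph below is illustrative, not Minimalist's actual output format:

```python
# Toy call-graph reachability: keep functions reachable from required
# features, remove everything else.
from collections import deque

call_graph = {
    "index": ["render", "db_query"],
    "render": ["escape"],
    "db_query": ["connect"],
    "import_tool": ["parse_xml", "db_query"],   # feature nobody uses
    "parse_xml": [],
    "escape": [], "connect": [],
}
entry_points = {"index"}            # features users actually require

reachable, todo = set(entry_points), deque(entry_points)
while todo:
    for callee in call_graph.get(todo.popleft(), []):
        if callee not in reachable:
            reachable.add(callee)
            todo.append(callee)

removed = set(call_graph) - reachable
print(sorted(removed))              # ['import_tool', 'parse_xml']
```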
AnimateDead: Debloating Web Applications Using Concolic Execution
Babak Amin Azad, Stony Brook University; Rasoul Jahanshahi, Boston University; Chris Tsoukaladelis, Stony Brook University; Manuel Egele, Boston University; Nick Nikiforakis, Stony Brook University
NAUTILUS: Automated RESTful API Vulnerability Detection
Gelei Deng, Nanyang Technological University; Zhiyi Zhang, CodeSafe Team, Qi An Xin Group Corp.; Yuekang Li, Yi Liu, Tianwei Zhang, and Yang Liu, Nanyang Technological University; Guo Yu, China Industrial Control Systems Cyber Emergency Response Team; Dongjin Wang, Institute of Scientific and Technical Information, China Academy of Railway Sciences
RESTful APIs have become arguably the most prevalent endpoint for accessing web services. Blackbox vulnerability scanners are a popular choice for automatically detecting vulnerabilities in web services. Unfortunately, they suffer from a number of limitations in RESTful API testing. In particular, existing tools cannot effectively infer the relations between API operations, and they lack awareness of the correct sequence of API operations during testing. These drawbacks hinder the tools from requesting the API operations properly to detect potential vulnerabilities.
To address this challenge, we propose NAUTILUS, which includes a novel specification annotation strategy to uncover RESTful API vulnerabilities. The annotations encode the proper operation relations and parameter generation strategies for the RESTful service, which assist NAUTILUS in generating meaningful operation sequences and thus uncovering vulnerabilities that require the execution of multiple API operations in the correct sequence. We experimentally compare NAUTILUS with four state-of-the-art vulnerability scanners and RESTful API testing tools on six RESTful services. Evaluation results demonstrate that NAUTILUS can successfully detect an average of 141% more vulnerabilities and cover 104% more API operations. We also applied NAUTILUS to nine real-world RESTful services and detected 23 unique 0-day vulnerabilities with 12 CVE numbers, including one remote code execution vulnerability in Atlassian Confluence and three high-risk vulnerabilities in Microsoft Azure, which can affect millions of users.
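A small sketch of why annotated operation relations matter: once dependencies between operations are encoded (the operations and annotation format below are illustrative, not NAUTILUS's), a topological order yields a meaningful call sequence in which each operation runs only after the operations that create what it consumes:

```python
# Order RESTful API operations so that resource producers run before
# their consumers (requires Python 3.9+ for graphlib).
from graphlib import TopologicalSorter

# operation -> operations whose output it depends on
depends_on = {
    "POST /users":           [],
    "POST /projects":        ["POST /users"],       # needs a user id
    "PUT /projects/{id}":    ["POST /projects"],
    "DELETE /projects/{id}": ["PUT /projects/{id}"],
}
print(list(TopologicalSorter(depends_on).static_order()))
# ['POST /users', 'POST /projects', 'PUT /projects/{id}',
#  'DELETE /projects/{id}']
```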
Understanding Communities, Part 2
"Un-Equal Online Safety?" A Gender Analysis of Security and Privacy Protection Advice and Behaviour Patterns
Kovila P.L. Coopamootoo, King's College London; Magdalene Ng, University of Westminster
“Millions of people are watching you”: Understanding the Digital-Safety Needs and Practices of Creators
Patrawat Samermit, Anna Turner, Patrick Gage Kelley, Tara Matthews, Vanessia Wu, Sunny Consolvo, and Kurt Thomas, Google
This paper and abstract are under embargo and will be released to the public on the first day of the symposium, Wednesday, August 9, 2023.
How Library IT Staff Navigate Privacy and Security Challenges and Responsibilities
Alan F. Luo, Noel Warford, and Samuel Dooley, University of Maryland; Rachel Greenstadt, New York University; Michelle L. Mazurek, University of Maryland; Nora McDonald, George Mason University
Libraries provide critical IT services to patrons who lack access to computational and internet resources. We conducted 12 semi-structured interviews with library IT staff to learn about their privacy and security protocols and policies, the challenges they face implementing them, and how this relates to their patrons. We frame our findings using Sen's capabilities approach and find that library IT staff are primarily concerned with protecting their patrons' privacy from threats outside their walls—police, government authorities, and third parties. Despite their dedication to patron privacy, library IT staff frequently have to grapple with complex tradeoffs between providing easy, fluid, full-featured access to Internet technologies or third-party resources, protecting library infrastructure, and ensuring patron privacy.
Problematic Advertising and its Disparate Exposure on Facebook
Muhammad Ali, Northeastern University; Angelica Goetzen, Max Planck Institute for Software Systems; Alan Mislove, Northeastern University; Elissa M. Redmiles, Max Planck Institute for Software Systems; Piotr Sapiezynski, Northeastern University
One Size Does not Fit All: Quantifying the Risk of Malicious App Encounters for Different Android User Profiles
Savino Dambra, Leyla Bilge, and Platon Kotzias, Norton Research Group; Yun Shen, NetApp; Juan Caballero, IMDEA Software Institute
Previous work has investigated the particularities of security practices within specific user communities defined based on country of origin, age, prior tech abuse, and economic status. The results highlight that current security solutions, which adopt a one-size-fits-all-users approach, ignore the differences and needs of particular user communities. However, those works focus on a single community or cluster users into hard-to-interpret sub-populations. In this work, we perform a large-scale quantitative analysis of the risk of encountering malware and other potentially unwanted applications (PUA) across user communities. At the core of our study is a dataset of app installation logs collected from 12M Android mobile devices. Leveraging user-installed apps, we define intuitive profiles based on users' interests (e.g., gamers and investors) and fit a subset of 5.4M devices to those profiles. Our analysis is structured in three parts. First, we perform risk analysis on the whole population to measure how the risk of malicious app encounters is affected by different factors. Next, we create different profiles to investigate whether risk differences across users may be due to their interests. Finally, we compare a per-profile approach for classifying clean and infected devices with the classical approach that considers the whole population. We observe that features such as the diversity of app signers and the use of alternative markets highly correlate with the risk of malicious app encounters. We also discover that some profiles, such as gamers and social-media users, are exposed to more than twice the risk experienced by the average user. Finally, we show that classification accuracy improves markedly when using a per-profile approach to train the prediction models. Overall, our results confirm the inadequacy of one-size-fits-all protection solutions.
Routing and VPNs
How Effective is Multiple-Vantage-Point Domain Control Validation?
Grace Cimaszewski, Henry Birge-Lee, Liang Wang, Jennifer Rexford, and Prateek Mittal, Princeton University
Bypassing Tunnels: Leaking VPN Client Traffic by Abusing Routing Tables
Nian Xue, New York University; Yashaswi Malla, Zihang Xia, and Christina Pöpper, New York University Abu Dhabi; Mathy Vanhoef, imec-DistriNet, KU Leuven
Back to School: On the (In)Security of Academic VPNs
Ka Lok Wu, The Chinese University of Hong Kong; Man Hong Hue, The Chinese University of Hong Kong and Georgia Institute of Technology; Ngai Man Poon, The Chinese University of Hong Kong; Kin Man Leung, The University of British Columbia; Wai Yin Po, Kin Ting Wong, Sze Ho Hui, and Sze Yiu Chau, The Chinese University of Hong Kong
This paper is under embargo and will be released on the first day of the symposium.
In this paper, we investigate the security of academic VPNs around the globe, covering the various protocols that are used to realize VPN services. Our study considers three aspects that can go wrong in a VPN setup: (i) the design and implementation of VPN front-ends, (ii) the client-side configurations, and (iii) the back-end configurations. For (i), we tested more than 140 front-ends and discovered numerous design and implementation issues that enable stealthy but severe attacks, including credential theft and remote code execution. For (ii), we collected and evaluated 2,097 VPN setup guides from universities and discovered many instances of secret key leakage and a lack of consideration of potential attacks, leaving many client-side setups vulnerable. Finally, for (iii), we probed more than 2,000 VPN back-ends to evaluate their overall health and uncovered concerning configuration and maintenance issues on many of them. Our findings suggest that severe cracks exist in the VPN setups of many organizations, making them profitable targets for criminals.
FABRID: Flexible Attestation-Based Routing for Inter-Domain Networks
Cyrill Krähenbühl, Marc Wyss, and David Basin, ETH Zürich; Vincent Lenders, armasuisse; Adrian Perrig, ETH Zürich; Martin Strohmeier, armasuisse
In its current state, the Internet does not provide end users with transparency and control regarding on-path forwarding devices. In particular, the lack of network device information reduces the trustworthiness of the forwarding path and prevents end-user applications requiring specific router capabilities from reaching their full potential. Moreover, the inability to influence the traffic's forwarding path results in applications communicating over undesired routes, while alternative paths with more desirable properties remain unusable.
In this work, we present FABRID, a system that enables applications to forward traffic flexibly, potentially on multiple paths selected to comply with user-defined preferences, where information about forwarding devices is exposed and transparently attested by autonomous systems (ASes). The granularity of this information is chosen by each AS individually, protecting them from leaking sensitive network details, while the secrecy and authenticity of preferences embedded within the users' packets are protected through efficient cryptographic operations. We show the viability of FABRID by deploying it on a global SCION network test bed, and we demonstrate high throughput on commodity hardware.
"All of them claim to be the best": Multi-perspective study of VPN users and VPN providers
Reethika Ramesh, University of Michigan; Anjali Vyas, Cornell Tech; Roya Ensafi, University of Michigan
As more users adopt VPNs for a variety of reasons, it is important to develop empirical knowledge of their needs and their mental models of what a VPN offers. Moreover, studying VPN users alone is not enough because, by using a VPN, a user essentially transfers trust, say from their network provider, onto the VPN provider. To that end, we are the first to study the VPN ecosystem from both the users' and the providers' perspectives. In this paper, we conduct a quantitative survey of 1,252 VPN users in the U.S. and qualitative interviews with nine providers to answer several research questions regarding the motivations, needs, threat models, and mental models of users, and the key challenges and insights from VPN providers. We derive novel insights by combining our multi-perspective results and highlight cases where the user and provider perspectives are misaligned. Alarmingly, we find that users rely on and trust VPN review sites, while VPN providers shed light on how these sites are mostly motivated by money. Worryingly, we find that users have flawed mental models about the protection VPNs provide and about the data VPNs collect. We present actionable recommendations for technologists and security and privacy advocates by identifying potential areas on which to focus efforts and improve the VPN ecosystem.
Embedded Systems and Firmware
Greenhouse: Single-Service Rehosting of Linux-Based Firmware Binaries in User-Space Emulation
Hui Jun Tay, Kyle Zeng, Jayakrishna Menon Vadayath, Arvind S Raj, Audrey Dutcher, Tejesh Reddy, Wil Gibbs, Zion Leonahenahe Basque, Fangzhou Dong, Adam Doupe, Tiffany Bao, Yan Shoshitaishvili, and Ruoyu Wang, Arizona State University
FuncTeller: How Well Does eFPGA Hide Functionality?
Zhaokun Han, Texas A&M University; Mohammed Shayan, The University of Texas at Dallas; Aneesh Dixit, Texas A&M University; Mustafa Shihab and Yiorgos Makris, The University of Texas at Dallas; Jeyavijayan Rajendran, Texas A&M University
ACFA: Secure Runtime Auditing & Guaranteed Device Healing via Active Control Flow Attestation
Adam Caulfield, Rochester Institute of Technology; Norrathep Rattanavipanon, Prince of Songkla University, Phuket Campus; Ivan De Oliveira Nunes, Rochester Institute of Technology
Fuzz The Power: Dual-role State Guided Black-box Fuzzing for USB Power Delivery
Kyungtae Kim and Sungwoo Kim, Purdue University; Kevin Butler, University of Florida; Antonio Bianchi, Rick Kennell, and Dave (Jing) Tian, Purdue University
The Impostor Among US(B): Off-Path Injection Attacks on USB Communications
Robert Dumitru, The University of Adelaide and Defence Science and Technology Group; Daniel Genkin, Georgia Tech; Andrew Wabnitz, Defence Science and Technology Group; Yuval Yarom, The University of Adelaide
USB is the most prevalent peripheral interface in modern computer systems and its inherent insecurities make it an appealing attack vector. A well-known limitation of USB is that traffic is not encrypted. This allows on-path adversaries to trivially perform man-in-the-middle attacks. Off-path attacks that compromise the confidentiality of communications have also been shown to be possible. However, so far no off-path attacks that breach USB communications integrity have been demonstrated.
In this work we show that the integrity of USB communications is not guaranteed even against off-path attackers. Specifically, we design and build malicious devices that, even when placed outside of the path between a victim device and the host, can inject data to that path. Using our developed injectors we can falsify the provenance of data input as interpreted by a host computer system. By injecting on behalf of trusted victim devices we can circumvent any software-based authorisation policy defences that computer systems employ against common USB attacks. We demonstrate two concrete attacks. The first injects keystrokes allowing an attacker to execute commands. The second demonstrates file-contents replacement including during system install from a USB disk. We test the attacks on 29 USB 2.0 and USB 3.x hubs and find 14 of them to be vulnerable.
Attacks on Cryptography
A comprehensive, formal and automated analysis of the EDHOC protocol
Charlie Jacomme, Inria Paris; Elise Klein, Steve Kremer, and Maïwenn Racouchot, Inria Nancy, Université de Lorraine
EDHOC is a key exchange protocol proposed by the IETF's Lightweight Authenticated Key Exchange (LAKE) Working Group (WG). Its design focuses on small message sizes to be suitable for constrained IoT communication technologies. In this paper we provide an in-depth formal analysis of EDHOC draft version 12, taking into account the different proposed authentication methods and various options. For our analysis we use the SAPIC+ protocol platform, which allows a single specification to be compiled to 3 state-of-the-art protocol verification tools (PROVERIF, TAMARIN, and DEEPSEC), taking advantage of the strengths of each tool. In our analysis we consider a large variety of compromise scenarios and also exploit recent results that allow modeling existing weaknesses in cryptographic primitives, relaxing the perfect cryptography assumption common in symbolic analysis. While our analysis confirmed security for the most basic threat models, a number of weaknesses were uncovered in the current design when more advanced threat models were taken into account. These weaknesses have been acknowledged by the LAKE WG, and the mitigations we propose (and prove secure) have been included in version 14 of the draft.
Hash Gone Bad: Automated discovery of protocol attacks that exploit hash function weaknesses
Vincent Cheval, Inria Paris; Cas Cremers and Alexander Dax, CISPA Helmholtz Center for Information Security; Lucca Hirschi, Inria & LORIA; Charlie Jacomme, Inria Paris; Steve Kremer, Université de Lorraine, LORIA, Inria Nancy Grand-Est
Most cryptographic protocols use cryptographic hash functions as a building block. The security analyses of these protocols typically assume that the hash functions are perfect (such as in the random oracle model). However, in practice, most widely deployed hash functions are far from perfect -- and as a result, the analysis may miss attacks that exploit the gap between the model and the actual hash function used.
We develop the first methodology to systematically discover attacks on security protocols that exploit weaknesses in widely deployed hash functions. We achieve this by revisiting the gap between theoretical properties of hash functions and the weaknesses of real-world hash functions, from which we develop a lattice of threat models. For all of these threat models, we develop fine-grained symbolic models.
Our methodology's fine-grained models cannot be directly encoded in existing state-of-the-art analysis tools by just using their equational reasoning. We therefore develop extensions for the two leading tools, Tamarin and Proverif. In extensive case studies using our methodology, the extended tools rediscover all attacks that were previously reported for these protocols and discover several new variants.
How fast do you heal? A taxonomy for post-compromise security in secure-channel establishment
Olivier Blazy, LIX, CNRS, Inria, École Polytechnique, Institut Polytechnique de Paris, France; Ioana Boureanu, University of Surrey, Surrey Centre for Cyber Security, UK; Pascal Lafourcade, LIMOS, University of Clermont Auvergne, France; Cristina Onete, XLIM, University of Limoges, France; Léo Robert, LIMOS, University of Clermont Auvergne, France
Post-Compromise Security (PCS) is a property of secure-channel establishment schemes which limits the security breach of an adversary that has compromised one of the endpoints to a certain number of messages, after which the channel heals. An attractive property, especially in view of Snowden's revelations of mass surveillance, PCS was pioneered by the Signal messaging protocol and is present in OTR. In this paper, we introduce a framework for quantifying and comparing PCS security with respect to a broad taxonomy of adversaries. The generality and flexibility of our approach allows us to model the healing speed of a broad class of protocols, including Signal, but also an identity-based messaging protocol named SAID, and even a composition of 5G handover protocols.
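What "healing" means can be shown with a toy symmetric ratchet: an adversary who steals the chain key can mirror deterministic key updates and keep reading messages, but loses access once fresh randomness (standing in for a Diffie-Hellman output, as in Signal) is mixed into the chain. Everything below is a pedagogical sketch, not any real protocol:

```python
# Toy ratchet illustrating post-compromise healing.
import hashlib
import secrets

def kdf(*parts):
    return hashlib.sha256(b"|".join(parts)).digest()

chain = secrets.token_bytes(32)
attacker = None                                # attacker's chain copy
for rnd in range(5):
    msg_key = kdf(chain, b"msg")               # protects message rnd
    if attacker is not None:
        ok = kdf(attacker, b"msg") == msg_key
        print(f"round {rnd}: {'readable' if ok else 'healed'}")
    if rnd == 0:
        attacker = chain                       # compromise at round 0
    if rnd == 2:                               # fresh entropy mixed in
        chain = kdf(chain, secrets.token_bytes(32))
    else:
        chain = kdf(chain, b"step")            # deterministic step
    if attacker is not None and rnd != 2:
        attacker = kdf(attacker, b"step")      # attacker keeps pace
# rounds 1-2: readable; rounds 3-4: healed by the fresh randomness
```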
Automated Analysis of Protocols that use Authenticated Encryption: How Subtle AEAD Differences can impact Protocol Security
Cas Cremers and Alexander Dax, CISPA Helmholtz Center for Information Security; Charlie Jacomme, Inria Paris; Mang Zhao, CISPA - Helmholtz Center for Information Security
High Recovery with Fewer Injections: Practical Binary Volumetric Injection Attacks against Dynamic Searchable Encryption
Xianglong Zhang and Wei Wang, Huazhong University of Science and Technology; Peng Xu, Huazhong University of Science and Technology and Hubei Key Laboratory of Distributed System Security; Laurence T. Yang, Huazhong University of Science and Technology and St. Francis Xavier University; Kaitai Liang, Delft University of Technology
Searchable symmetric encryption enables private queries over an encrypted database, but it can also result in information leakage. Adversaries can exploit this leakage to launch injection attacks (Zhang et al., USENIX Security '16) that recover the underlying keywords from queries. The performance of existing injection attacks strongly depends on the amount of leaked information and the number of injections. In this work, we propose two new injection attacks, namely BVA and BVMA, by leveraging a binary volumetric approach. We enable adversaries to inject fewer files than the existing volumetric attacks by using the known keywords, and to reveal the queries by observing the volume of the query results. Our attacks can thwart well-studied defenses (e.g., threshold countermeasures, padding) without exploiting the distribution of target queries and client databases. We evaluate the proposed attacks empirically on real-world datasets with practical queries. The results show that our attacks achieve a high recovery rate (> 80%) in the best-case scenario and roughly 60% recovery even on a large-scale dataset with a small number of injections (< 20 files).
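The binary volumetric idea admits a compact sketch: with the keyword universe known, inject about log2(|K|) files whose sizes are powers of two, placing keyword k in file i exactly when bit i of k's index is set; the extra result volume observed for a query then spells out the queried keyword's index in binary (keywords and sizes below are toy values):

```python
# Toy binary volumetric injection: few injected files, unique volumes.
import math

keywords = ["tax", "salary", "medical", "merger", "lawsuit", "password"]
bits = math.ceil(math.log2(len(keywords)))

# File i has size 2**i and contains every keyword whose index has bit i.
injected_files = [
    {"size": 2 ** i,
     "words": {w for k, w in enumerate(keywords) if k >> i & 1}}
    for i in range(bits)
]

def observed_extra_volume(queried_word):
    """Volume the injected files add to the query's response."""
    return sum(f["size"] for f in injected_files
               if queried_word in f["words"])

for w in keywords:
    assert keywords[observed_extra_volume(w)] == w   # index recovered
print(f"{len(injected_files)} injected files identify "
      f"{len(keywords)} keywords")
```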
Cloud Insecurity
Cross Container Attacks: The Bewildered eBPF on Clouds
Yi He and Roland Guo, Tsinghua University and BNRist; Yunlong Xing, George Mason University; Xijia Che, Tsinghua University and BNRist; Kun Sun, George Mason University; Zhuotao Liu, Ke Xu, and Qi Li, Tsinghua University
DScope: A Cloud-Native Internet Telescope
Eric Pauley, Paul Barford, and Patrick McDaniel, University of Wisconsin–Madison
Credit Karma: Understanding Security Implications of Exposed Cloud Services through Automated Capability Inference
Xueqiang Wang, University of Central Florida; Yuqiong Sun, Meta; Susanta Nanda, ServiceNow; XiaoFeng Wang, Indiana University Bloomington
The increasing popularity of mobile applications (apps) has led to a rapid increase in demand for backend services, such as notifications, data storage, and authentication, hosted in cloud platforms. This has led attackers to consistently target such cloud services, resulting in a rise in data security incidents. In this paper, we focus on one of the main reasons why cloud services become increasingly vulnerable: (over-)privileges in cloud credentials. We propose a systematic approach to recover cloud credentials from apps, infer their capabilities in the cloud, and verify whether the capabilities exceed the legitimate needs of the apps. We further look into the security implications of the leaked capabilities, demonstrating how seemingly benign, unprivileged capabilities, when combined, can lead to unexpected, severe security problems. A large-scale study of ~1.3 million apps over two types of cloud services, notification and storage, on three popular cloud platforms, AWS, Azure, and Alibaba Cloud, shows that ~27.3% of apps that use cloud services expose over-privileged cloud credentials. Moreover, a majority of over-privileged cloud credentials (~64.8%) potentially lead to data attacks. During the study, we also uncover new types of attacks enabled by regular cloud credentials, such as spear-phishing through push notifications and targeted user data pollution. We have made responsible disclosures to both app vendors and cloud providers and are starting to see the impact: over 300 app vendors have already fixed the problems.
Detecting Multi-Step IAM Attacks in AWS Environments via Model Checking
Ilia Shevrin, Citi; Oded Margalit, Ben-Gurion University
Cloud services enjoy surging popularity among IT professionals, owing to their rapid provision of virtual infrastructure on demand. Hand in hand with the growing usage, there is also a growing concern about potential security vulnerabilities arising from misconfigurations that expose resources or allow malicious actors to escalate privileges. Model checking is a known method for verifying that a finite-state Boolean model of a system satisfies certain properties, where the model and the properties are described in formal logic. If it does not, a finite trace leading to a violating state can be generated.
In this paper, we present an approach to construct a finite-state Boolean model from the Identity and Access Management (IAM) component of Amazon Web Services (AWS), and a property from an attack target, e.g., read a classified S3 bucket object. We run a model checker that detects whether some initial setup allows an attacker to escalate privileges and reach the target in one or more steps by applying IAM manipulating actions. We show that our approach can discover existing misconfigurations in real AWS environments, and that it can detect multi-step attacks in setups containing tens of AWS accounts with hundreds of resources in under a minute.
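As a rough illustration of the reachability question such a model checker decides, the following breadth-first search explores attacker privilege states until a target privilege is reached; the action name, the single escalation edge, and the privilege strings are hypothetical, and a real model would be expressed in the model checker's input logic rather than Python:

    from collections import deque

    # One hypothetical escalation edge: holding iam:AttachUserPolicy lets
    # the attacker grant themselves further privileges.
    EDGES = {
        "iam:AttachUserPolicy": lambda s: s | {"s3:GetObject"},
    }

    def can_reach(initial, target):
        # States are frozensets of privileges held by the attacker.
        seen, queue = {initial}, deque([initial])
        while queue:
            state = queue.popleft()
            if target in state:
                return True  # a finite trace to a violating state exists
            for action, step in EDGES.items():
                if action in state:
                    nxt = frozenset(step(state))
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
        return False

    print(can_reach(frozenset({"iam:AttachUserPolicy"}), "s3:GetObject"))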
Remote Direct Memory Introspection
Hongyi Liu, Jiarong Xing, and Yibo Huang, Rice University; Danyang Zhuo, Duke University; Srinivas Devadas, Massachusetts Institute of Technology; Ang Chen, Rice University
12:00 pm–1:30 pm
Lunch (on your own)
1:30 pm–2:45 pm
More Web and Mobile Security
Auditing Framework APIs via Inferred App-side Security Specifications
Parjanya Vyas, Asim Waheed, Yousra Aafer, and N. Asokan, University of Waterloo
WHIP: Improving Static Vulnerability Detection in Web Application by Forcing tools to Collaborate
Feras Al-Kassar, EURECOM; Luca Compagna, SAP Security Research; Davide Balzarotti, EURECOM
SQIRL: Grey-Box Detection of SQL Injection Vulnerabilities Using Reinforcement Learning
Salim Al Wahaibi, Myles Foley, and Sergio Maffeis, Imperial College London
Hiding in Plain Sight: An Empirical Study of Web Application Abuse in Malware
Mingxuan Yao, Georgia Institute of Technology; Jonathan Fuller, United States Military Academy; Ranjita Pai Kasturi, Saumya Agarwal, Amit Kumar Sikder, and Brendan Saltaformaggio, Georgia Institute of Technology
Web applications provide a wide array of utilities that are abused by malware as a replacement for traditional attacker-controlled servers. Thwarting this Web App-Engaged (WAE) malware requires rapid collaboration between incident responders and web app providers. Unfortunately, our research found that delays in this collaboration allow WAE malware to thrive. We developed Marsea, an automated malware analysis pipeline that studies WAE malware and enables rapid remediation. Given 10K malware samples, Marsea revealed 893 WAE malware samples in 97 families abusing 29 web apps. Our research uncovered a 226% increase in the number of WAE malware samples since 2020 and showed that malware authors are beginning to reduce their reliance on attacker-controlled servers. In fact, we found a 13.7% decrease in WAE malware relying on attacker-controlled servers. To date, we have used Marsea to collaborate with web app providers to take down 50% of the malicious web app content.
Bilingual Problems: Studying the Security Risks Incurred by Native Extensions in Scripting Languages
Cristian-Alexandru Staicu, CISPA Helmholtz Center for Information Security; Sazzadur Rahaman, University of Arizona; Ágnes Kiss and Michael Backes, CISPA Helmholtz Center for Information Security
Scripting languages are continuously gaining popularity due to their ease of use and the flourishing software ecosystems surrounding them. These languages offer crash and memory safety by design. Thus, developers do not need to understand and prevent low-level security issues like those plaguing C code. However, scripting languages often allow native extensions, a way for custom C/C++ code to be invoked directly from the high-level language. While this feature promises several benefits, such as increased performance or the reuse of legacy code, it can also break the language’s guarantees, e.g., crash safety.
In this work, we first provide a comparative analysis of the security risks of native extension APIs in three popular scripting languages. Additionally, we discuss a novel methodology for studying the misuse of the native extension API. We then perform an in-depth study of npm, an ecosystem that is most exposed to threats introduced by native extensions. We show that vulnerabilities in extensions can be exploited in their embedding library by producing reads of uninitialized memory, hard crashes, or memory leaks in 33 npm packages simply by invoking their API with well-crafted inputs. Moreover, we identify six open-source web applications in which a weak adversary can deploy such exploits remotely. Finally, we were assigned seven security advisories for the work presented in this paper, most labeled as high severity.
Networks and Security
Did the Shark Eat the Watchdog in the NTP Pool? Deceiving the NTP Pool’s Monitoring System
Jonghoon Kwon, ETH Zürich; Jeonggyu Song and Junbeom Hur, Korea University; Adrian Perrig, ETH Zürich
The NTP pool has become a critical infrastructure for modern Internet services and applications. With thousands of voluntarily joined timeservers, it supplies millions of distributed (heterogeneous) systems with time. While numerous efforts have been made to enhance NTP's accuracy, reliability, and security, the NTP pool has unfortunately attracted relatively little attention. In this paper, we provide a comprehensive analysis of NTP pool security, in particular the NTP pool monitoring system, which oversees the correctness and responsiveness of the participating servers. We first investigate strategic attacks that deceive the pool's health-check system into removing legitimate timeservers from the pool. Then, through empirical analysis using monitoring servers and timeservers injected into the pool, we demonstrate the feasibility of our approaches, show their effectiveness, and discuss the implications. Finally, we discuss the design of a new pool monitoring system to mitigate these attacks.
Device Tracking via Linux’s New TCP Source Port Selection Algorithm
Moshe Kol, Amit Klein, and Yossi Gilad, Hebrew University of Jerusalem
We describe a tracking technique for Linux devices, exploiting a new TCP source port generation mechanism recently introduced to the Linux kernel. This mechanism is based on an algorithm, standardized in RFC 6056, for boosting security by better randomizing port selection. Our technique detects collisions in a hash function used in this algorithm by sampling TCP source ports generated in an attacker-prescribed manner. These hash collisions depend solely on a per-device key, and thus the set of collisions forms a device ID that allows tracking devices across browsers, browser privacy modes, containers, and IPv4/IPv6 networks (including some VPNs). It can distinguish among devices with identical hardware and software, and lasts until the device restarts.
We implemented this technique and then tested it using tracking servers in two different locations and with Linux devices on various networks. We also tested it on an Android device that we patched to introduce the new port selection algorithm. The tracking technique works in real-life conditions, and we report detailed findings about it, including its dwell time, scalability, and success rate in different network types. We worked with the Linux kernel team to mitigate the exploit, resulting in a security patch introduced in May 2022 to the Linux kernel, and we provide recommendations for better securing the port selection algorithm in the paper.
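For intuition, here is a simplified sketch of the RFC 6056-style double-hash port selection the attack targets; the hash construction, key handling, and table size are stand-ins rather than the kernel's actual code:

    import hashlib

    TABLE_SIZE = 256
    counters = [0] * TABLE_SIZE
    SECRET_KEY = b"per-device-key"  # stable until reboot, hence trackable

    def _h(key, *fields):
        data = b"|".join(str(f).encode() for f in fields)
        return int.from_bytes(hashlib.sha256(key + data).digest()[:4], "big")

    def select_port(saddr, daddr, dport, lo=32768, hi=60999):
        # Two keyed hashes: F picks an offset, G picks a shared counter slot.
        offset = _h(SECRET_KEY + b"F", saddr, daddr, dport)
        index = _h(SECRET_KEY + b"G", saddr, daddr, dport) % TABLE_SIZE
        port = lo + (offset + counters[index]) % (hi - lo + 1)
        counters[index] += 1
        return port

A tracker that opens connections to many attacker-controlled (address, port) pairs can tell which pairs share a counter slot, because their source ports advance together; that set of collisions depends only on the per-device key, yielding the device ID described above.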
Temporal CDN-Convex Lens: A CDN-Assisted Practical Pulsing DDoS Attack
Run Guo, Tsinghua University; Jianjun Chen, Tsinghua University and Zhongguancun Laboratory; Yihang Wang and Keran Mu, Tsinghua University; Baojun Liu, Tsinghua University and Zhongguancun Laboratory; Xiang Li, Tsinghua University; Chao Zhang, Tsinghua University and Zhongguancun Laboratory; Haixin Duan, Tsinghua University and Zhongguancun Laboratory and QI-ANXIN Technology Research Institute; Jianping Wu, Tsinghua University and Zhongguancun Laboratory
This paper and abstract are under embargo and will be released to the public on the first day of the symposium, Wednesday, August 9, 2023.
An Efficient Design of Intelligent Network Data Plane
Guangmeng Zhou, Tsinghua University; Zhuotao Liu, Tsinghua University and Zhongguancun Laboratory; Chuanpu Fu and Qi Li, Tsinghua University; Ke Xu, Tsinghua University and Zhongguancun Laboratory
Deploying machine learning models directly on the network data plane enables intelligent traffic analysis at line-speed using data-driven models rather than predefined protocols. Such a capability, referred to as the Intelligent Data Plane (IDP), may potentially transform a wide range of networking designs. Emerging programmable switches provide crucial hardware support for realizing IDP. Prior art in this regard falls into two major categories: (i) approaches that focus on extracting useful flow information from the data plane while placing the learning-based traffic analysis on the control plane; and (ii) approaches that take a step further and embed learning models into the data plane, but fail to use flow-level features that are critical to achieving high learning accuracy. In this paper, we propose NetBeacon to advance the state of the art in both model accuracy and model deployment efficiency. In particular, NetBeacon proposes a multi-phase sequential model architecture that performs dynamic packet analysis at different phases of a flow as it proceeds, incorporating flow-level features that are computable at line-speed to boost learning accuracy. Further, NetBeacon designs efficient model representation mechanisms to address the table entry explosion problem when deploying tree-based models on the network data plane. Finally, NetBeacon hardens its scalability for handling concurrent flows via multiple tightly-coupled designs for managing the stateful storage used to hold per-flow state. We implement a prototype of NetBeacon and extensively evaluate its performance over multiple traffic analysis tasks.
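To see why deploying tree-based models on a data plane can explode table entries, consider this naive flattening of a toy decision tree, where every root-to-leaf path becomes one match-action rule over feature ranges; the features, thresholds, and 16-bit bound are invented for the example, and NetBeacon's representation mechanisms exist precisely to avoid this blow-up:

    # A depth-2 toy tree over two hypothetical per-flow features.
    tree = {"feature": "pkt_len", "thresh": 128,
            "left": "benign",
            "right": {"feature": "iat", "thresh": 5,
                      "left": "benign", "right": "attack"}}

    def tree_to_entries(node, constraints=()):
        # Leaf: emit one table entry carrying all accumulated range checks.
        if isinstance(node, str):
            return [(tuple(constraints), node)]
        f, t = node["feature"], node["thresh"]
        return (tree_to_entries(node["left"],
                                constraints + ((f, 0, t),)) +
                tree_to_entries(node["right"],
                                constraints + ((f, t + 1, 2**16 - 1),)))

    for entry in tree_to_entries(tree):
        print(entry)  # each printed tuple is one data-plane rule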
Glowing in the Dark: Uncovering IPv6 Address Discovery and Scanning Strategies in the Wild
Hammas Bin Tanveer, The University of Iowa; Rachee Singh, Microsoft and Cornell University; Paul Pearce, Georgia Tech; Rishab Nithyanand, University of Iowa
In this work we identify the scanning strategies of IPv6 scanners on the Internet. We offer a unique perspective on the behavior of IPv6 scanners by conducting controlled experiments leveraging a large and unused /56 IPv6 subnet. We selectively make parts of the subnet visible to scanners by hosting applications that make direct or indirect contact with IPv6-capable servers on the Internet. By careful experiment design, we mitigate the effects of hidden variables on scans sent to our /56 subnet and establish causal relationships between IPv6 host activity types and the scanner attention they evoke. We show that IPv6 host activities, e.g., Web browsing, membership in the NTP pool, and participation in the Tor network, cause scanners to send an order of magnitude more unsolicited IP scans and reverse DNS queries to our subnet than before. DNS scanners focus their scans on the narrow regions of the address space where our applications are hosted, whereas IP scanners broadly scan the entire subnet. Even after the host activity from our subnet subsides, we observe persistent residual scanning of portions of the address space that previously hosted applications.
Arming and Disarming ARM
Oops..! I Glitched It Again! How to Multi-Glitch the Glitching-Protections on ARM TrustZone-M
Xhani Marvin Saß, Richard Mitev, and Ahmad-Reza Sadeghi, Technical University of Darmstadt
Voltage Fault Injection (VFI), also known as power glitching, has proven to be a severe threat to real-world systems. In VFI attacks, the adversary disturbs the power supply of the target device, forcing it into illegitimate behavior. Various countermeasures have been proposed to address different types of fault injection attacks at different abstraction layers, requiring modifications either to the underlying hardware or to the software/firmware at the machine instruction level. Moreover, only recently have individual chip manufacturers started to respond to this threat by integrating countermeasures into their products. Generally, these countermeasures aim at protecting against single fault injection (SFI) attacks, since Multiple Fault Injection (MFI) is believed to be challenging and sometimes even impractical.
In this paper, we present μ-Glitch, the first Voltage Fault Injection (VFI) platform which is capable of injecting multiple, coordinated voltage faults into a target device, requiring only a single trigger signal. We provide a novel flow for Multiple Voltage Fault Injection (MVFI) attacks to significantly reduce the search complexity for fault parameters, as the search space increases exponentially with each additional fault injection. We evaluate and showcase the effectiveness and practicality of our attack platform on four real-world chips, featuring TrustZone-M:
The first two feature interdependent backchecking mechanisms, while the other two additionally integrate countermeasures against fault injection. Our evaluation revealed that μ-Glitch can successfully inject four consecutive faults within an average time of one day. Finally, we discuss potential countermeasures to mitigate VFI attacks and additionally propose two novel attack scenarios for MVFI.
SHELTER: Extending Arm CCA with Isolation in User Space
Yiming Zhang, Southern University of Science and Technology and The Hong Kong Polytechnic University; Yuxin Hu, Southern University of Science and Technology; Zhenyu Ning, Hunan University and Southern University of Science and Technology; Fengwei Zhang, Southern University of Science and Technology; Xiapu Luo, The Hong Kong Polytechnic University; Haoyang Huang, Southern University of Science and Technology; Shoumeng Yan and Zhengyu He, Ant Group
This paper and abstract are under embargo and will be released to the public on the first day of the symposium, Wednesday, August 9, 2023.
Hot Pixels: Frequency, Power, and Temperature Attacks on GPUs and Arm SoCs
Hritvik Taneja, Jason Kim, and Jie Jeff Xu, Georgia Tech; Stephan van Schaik, University of Michigan; Daniel Genkin, Georgia Tech; Yuval Yarom, Ruhr University Bochum
SpectrEM: Exploiting Electromagnetic Emanations During Transient Execution
Jesse De Meulemeester, Antoon Purnal, Lennert Wouters, Arthur Beckers, and Ingrid Verbauwhede, COSIC, KU Leuven
ARMore: Pushing love back into binaries
Luca Di Bartolomeo, Hossein Moghaddas, and Mathias Payer, EPFL
Static rewriting enables late-stage code changes (e.g., to add mitigations, to remove unnecessary code, or to instrument for code coverage) at low overhead in security-critical environments. Most research on static rewriting has so far focused on the x86 architecture. However, the prevalence and proliferation of ARM-based devices, along with the large amount of personal data (e.g., health and sensor data) that they process, calls for efficient introspection and analysis capabilities on the ARM platform. Addressing the unique challenges on aarch64, we introduce ARMore, the first efficient, robust, and heuristic-free static binary rewriter for arbitrary aarch64 binaries that produces reassembleable assembly. The key improvements introduced by ARMore make the recovery of indirect control flow an option rather than a necessity: instead of crashing, an uncovered target only incurs the small overhead of an additional branch. ARMore can rewrite binaries from different languages and compilers (even arbitrary hand-written assembly), both PIC and non-PIC code, with or without symbols, including exception handling for C++ and Go binaries, and also including binaries with mixed data and text. ARMore is sound, as it does not rely on any assumptions about the input binary. ARMore is also efficient: it does not employ any expensive dynamic translation techniques, incurring negligible overhead (<1% in our evaluated benchmarks). Our AFL++ coverage instrumentation pass enables fuzzing of closed-source aarch64 binaries at three times the speed of the state of the art (AFL-QEMU), and we found 58 unique crashes in closed-source software. ARMore is the only static rewriter whose rewritten binaries correctly pass all SQLite3 and coreutils test cases and the autopkgtests of 97.5% of Debian packages.
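The "additional branch" idea can be pictured as keeping every original code address alive as a one-instruction trampoline into the rewritten copy, so an indirect branch whose target was never statically recovered still lands somewhere valid; the addresses in this sketch are purely illustrative:

    # Hypothetical address map: old instruction address -> rewritten copy.
    OLD_TO_NEW = {0x1000: 0x40000, 0x1004: 0x40010}

    def dispatch(indirect_target):
        # An unrecovered indirect branch lands on its old address, where a
        # trampoline redirects it to the rewritten code; the only cost is
        # this one extra hop.
        return OLD_TO_NEW.get(indirect_target, indirect_target)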
More ML Attacks and Defenses
Secure Floating-Point Training
Deevashwer Rathee, University of California, Berkeley; Anwesh Bhattacharya, Divya Gupta, and Rahul Sharma, Microsoft Research; Dawn Song, University of California, Berkeley
Secure 2-party computation (2PC) of floating-point arithmetic is improving in performance and recent work runs deep learning algorithms with it, while being as numerically precise as commonly used machine learning (ML) frameworks like PyTorch. We find that the existing 2PC libraries for floating-point support generic computations and lack specialized support for ML training. Hence, their latency and communication costs for compound operations (e.g., dot products) are high. We provide novel specialized 2PC protocols for compound operations and prove their precision using numerical analysis. Our implementation BEACON outperforms state-of-the-art libraries for 2PC of floating-point by over 6×.
NeuroPots: Realtime Proactive Defense against Bit-Flip Attacks in Neural Networks
Qi Liu, Lehigh University; Jieming Yin, Nanjing University of Posts and Telecommunications; Wujie Wen, Lehigh University; Chengmo Yang, University of Delaware; Shi Sha, Wilkes University
Deep neural networks (DNNs) are becoming ubiquitous in various safety- and security-sensitive applications such as self-driving cars and financial systems. Recent studies revealed that bit-flip attacks (BFAs) can destroy DNNs' functionality via DRAM rowhammer: by precisely injecting a few bit-flips into the quantized model parameters, attackers can either degrade the model accuracy to random guessing, or misclassify certain inputs into a target class. BFAs can cause catastrophic consequences if left undetected. However, detecting BFAs is challenging because bit-flips can occur on any weights in a DNN model, leading to a large detection surface. Unlike prior works that attempt to "patch" vulnerabilities of DNN models, our work is inspired by the idea of a "honeypot". Specifically, we propose a proactive defense concept named NeuroPots, which embeds a few "honey neurons" as crafted vulnerabilities into the DNN model to lure the attacker into injecting faults in them, thus making detection and model recovery efficient. We utilize NeuroPots to develop a trapdoor-enabled defense framework. We design a honey neuron selection strategy, and propose two methods for embedding trapdoors into the DNN model. Furthermore, since the majority of injected bit flips will concentrate in the trapdoors, we use a checksum-based detection approach to efficiently detect faults in them, and rescue the model accuracy by "refreshing" those faulty trapdoors. Our experiments show that trapdoor-enabled defense achieves high detection performance and effectively recovers a compromised model at a low cost across a variety of DNN models and datasets.
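A minimal sketch of the checksum-based detection step, assuming the defender knows which honey-neuron weights were embedded; the weights and the refresh policy here are illustrative, not the paper's implementation:

    import hashlib
    import numpy as np

    def checksum(weights: np.ndarray) -> bytes:
        return hashlib.sha256(weights.tobytes()).digest()

    honey = np.array([0.25, -1.5, 3.0], dtype=np.float32)  # trapdoor weights
    golden = honey.copy()          # clean copy kept in safe storage
    reference = checksum(golden)   # integrity reference for the trapdoors

    def detect_and_refresh(current: np.ndarray) -> bool:
        # Because injected bit flips concentrate in the trapdoors, checking
        # only these few weights is cheap; on mismatch, restore the clean
        # copy to rescue model accuracy.
        if checksum(current) != reference:
            current[:] = golden
            return True   # attack detected and model refreshed
        return False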
FedVal: Different good or different bad in federated learning
Viktor Valadi, AI Sweden; Xinchi Qiu, Pedro Porto Buarque de Gusmão, and Nicholas D. Lane, University of Cambridge; Mina Alibeigi, Zenseact AB and University of Cambridge
Gradient Obfuscation Gives a False Sense of Security in Federated Learning
Kai Yue, North Carolina State University; Richeng Jin, Zhejiang University; Chau-Wai Wong, Dror Baron, and Huaiyu Dai, North Carolina State University
Federated learning has been proposed as a privacy-preserving machine learning framework that enables multiple clients to collaborate without sharing raw data. However, client privacy protection is not guaranteed by design in this framework. Prior work has shown that the gradient sharing strategies in federated learning can be vulnerable to data reconstruction attacks. In practice, though, clients may not transmit raw gradients, either because of the high communication cost or due to privacy enhancement requirements. Empirical studies have demonstrated that gradient obfuscation, including intentional obfuscation via gradient noise injection and unintentional obfuscation via gradient compression, can provide more privacy protection against reconstruction attacks. In this work, we present a new reconstruction attack framework targeting the image classification task in federated learning. We show how commonly adopted gradient postprocessing procedures, such as gradient quantization, gradient sparsification, and gradient perturbation, may give a false sense of security in federated learning. Contrary to prior studies, we argue that privacy enhancement should not be treated as a byproduct of gradient compression. Additionally, we design a new method under the proposed framework to reconstruct images at the semantic level. We quantify the semantic privacy leakage and compare it with conventional image similarity scores. Our comparisons challenge the image data leakage evaluation schemes in the literature. The results emphasize the importance of revisiting and redesigning the privacy protection mechanisms for client data in existing federated learning algorithms.
FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases
Chong Fu, Xuhong Zhang, and Shouling Ji, Zhejiang University; Ting Wang, Pennsylvania State University; Peng Lin, Chinese Aeronautical Establishment; Yanghe Feng, National University of Defense Technology; Jianwei Yin, Zhejiang University
Trojan attacks on deep neural networks, also known as backdoor attacks, are a typical threat to artificial intelligence. A trojaned neural network behaves normally on clean inputs. However, if the input contains a particular trigger, the trojaned model exhibits attacker-chosen abnormal behavior. Although many backdoor detection methods exist, most of them assume that the defender has access to a set of clean validation samples or samples containing the trigger, which may not hold in some crucial real-world cases, e.g., when the defender is the maintainer of a model-sharing platform. Thus, in this paper, we propose FreeEagle, the first data-free backdoor detection method that can effectively detect complex backdoor attacks on deep neural networks without relying on access to any clean samples or samples with the trigger. The evaluation results on diverse datasets and model architectures show that FreeEagle is effective against various complex backdoor attacks, even outperforming some state-of-the-art non-data-free backdoor detection methods.
Cryptography for Privacy
Prime Match: A Privacy Preserving Inventory Matching System
Antigoni Polychroniadou, J.P. Morgan AI Research; Gilad Asharov, Bar Ilan University; Benjamin Diamond, Tucker Balch, Hans Buehler, Richard Hua, Suwen Gu, Greg Gimler, and Manuela Veloso, J.P. Morgan AI Research
Inventory matching is a standard mechanism for trading financial stocks by which buyers and sellers can be paired. In the financial world, banks often undertake the task of finding such matches between their clients. The related stocks can be traded without adversely impacting the market price for either client. If matches between clients are found, the bank can offer the trade at advantageous rates. If no match is found, the parties have to buy or sell the stock in the public market, which introduces additional costs.
A problem with the process as it is presently conducted is that the involved parties must share their order to buy or sell a particular stock, along with the intended quantity (number of shares), to the bank. Clients worry that if this information were to “leak” somehow, then other market participants would become aware of their intentions and thus cause the price to move adversely against them before their transaction finalizes.
We provide a solution that enables clients to match their orders efficiently with reduced market impact while maintaining privacy. In the case where there are no matches, no information is revealed. Our main cryptographic innovation is a two-round secure linear comparison protocol for computing the minimum of two quantities without preprocessing and with malicious security, which may be of independent interest. We report benchmarks of our Prime Match system, which runs in production and has been adopted by a large US bank, J.P. Morgan. Prime Match is the first secure multiparty computation solution running live in the financial world.
Squirrel: A Scalable Secure Two-Party Computation Framework for Training Gradient Boosting Decision Tree
Wen-jie Lu and Zhicong Huang, Alibaba Group; Qizhi Zhang, Ant Group; Yuchen Wang, Alibaba Group; Cheng Hong, Ant Group
Gradient Boosting Decision Tree (GBDT) and its variants are widely used in industry, due to their high efficiency as well as strong interpretability. Secure multi-party computation allows multiple data owners to compute a function jointly while keeping their input private. In this work, we present Squirrel, a secure two-party GBDT training framework on a vertically split dataset, where two data owners each hold different features of the same data samples. Squirrel is private against semi-honest adversaries, and no sensitive intermediate information is revealed during the training process. Squirrel is also scalable to datasets with millions of samples even under a Wide Area Network (WAN).
Squirrel achieves its high performance via several novel co-designs of the GBDT algorithms and advanced cryptography. In particular, 1) we propose a new mechanism to hide the sample distribution on each node using oblivious transfer; 2) we propose a highly optimized method for secure gradient aggregation using two lattice-based homomorphic encryption schemes, and our empirical results show that it can be three orders of magnitude faster than existing approaches; and 3) we propose a novel protocol to evaluate the sigmoid function on secretly shared values, showing 19×–200× improvements over two existing methods. Combining all these improvements, Squirrel costs less than 6 seconds per tree on a dataset with 50 thousand samples, outperforming Pivot (VLDB 2020) by more than 28×. We also show that Squirrel can scale to datasets with more than one million samples, e.g., about 90 seconds per tree over a WAN.
Eos: Efficient Private Delegation of zkSNARK Provers
Alessandro Chiesa, UC Berkeley and EPFL; Ryan Lehmkuhl, MIT; Pratyush Mishra, Aleo and University of Pennsylvania; Yinuo Zhang, UC Berkeley
Succinct zero knowledge proofs (i.e., zkSNARKs) are powerful cryptographic tools that enable a prover to convince a verifier that a given statement is true without revealing any additional information. Their attractive privacy properties have led to much academic and industrial interest.
Unfortunately, existing systems for generating zkSNARKs are expensive, which limits the applications in which these proofs can be used. One approach is to take advantage of powerful cloud servers to generate the proof. However, existing techniques for this (e.g., DIZK) sacrifice privacy by revealing secret information to the cloud machines. This is problematic for many applications of zkSNARKs, such as decentralized private currency and computation systems.
In this work we design and implement privacy-preserving delegation protocols for zkSNARKs with universal setup. Our protocols enable a prover to outsource proof generation to a set of workers, so that if at least one worker does not collude with other workers, no private information is revealed to any worker. Our protocols achieve security against malicious workers without relying on heavyweight cryptographic tools.
We implement and evaluate our delegation protocols for a state-of-the-art zkSNARK in a variety of computational and bandwidth settings, and demonstrate that our protocols are concretely efficient. When compared to local proving, using our protocols to delegate proof generation from a recent smartphone (a) reduces end-to-end latency by up to 26×, (b) lowers the delegator's active computation time by up to 1447×, and (c) enables proving up to 256× larger instances.
Machine-checking Multi-Round Proofs of Shuffle: Terelius-Wikstrom and Bayer-Groth
Thomas Haines, Australian National University; Rajeev Gore, Polish Academy of Science; Mukesh Tiwari, University of Cambridge
Shuffles are used in electronic voting in much the same way physical ballot boxes are used in paper systems: (encrypted) ballots are input into the shuffle and (encrypted) ballots are output in a random order, thereby breaking the link between voter identities and ballots. To guarantee that no ballots are added, omitted or altered, zero-knowledge proofs, called proofs of shuffle, are used to provide publicly verifiable transcripts that prove that the outputs are a re-encrypted permutation of the inputs. The most prominent proofs of shuffle, in practice, are those due to Terelius and Wikstrom (TW), and Bayer and Groth (BG). TW is simpler whereas BG is more efficient, both in terms of bandwidth and computation. Security for the simpler (TW) proof of shuffle has already been machine-checked but several prominent vendors insist on using the more complicated BG proof of shuffle. Here, we machine-check the security of the Bayer-Groth proof of shuffle via the Coq proof-assistant. We then extract the verifier (software) required to check the transcripts produced by Bayer-Groth implementations and use it to check transcripts from the Swiss Post evoting system under development for national elections in Switzerland.
TAP: Transparent and Privacy-Preserving Data Services
Daniel Reijsbergen and Aung Maw, Singapore University of Technology and Design; Zheng Yang, Southwest University; Tien Tuan Anh Dinh and Jianying Zhou, Singapore University of Technology and Design
Users today expect more security from services that handle their data. In addition to traditional data privacy and integrity requirements, they expect transparency, i.e., that the service’s processing of the data is verifiable by users and trusted auditors. Our goal is to build a multi-user system that provides data privacy, integrity, and transparency for a large number of operations, while achieving practical performance.
To this end, we first identify the limitations of existing approaches that use authenticated data structures. We find that they fall into two categories: 1) those that hide each user’s data from other users, but have a limited range of verifiable operations (e.g., CONIKS, Merkle2, and Proofs of Liabilities), and 2) those that support a wide range of verifiable operations, but make all data publicly visible (e.g., IntegriDB and FalconDB). We then present TAP to address the above limitations. The key component of TAP is a novel tree data structure that supports efficient result verification, and relies on independent audits that use zero-knowledge range proofs to show that the tree is constructed correctly without revealing user data. TAP supports a broad range of verifiable operations, including quantiles and sample standard deviations. We conduct a comprehensive evaluation of TAP, and compare it against two state-of-the-art baselines, namely IntegriDB and Merkle2, showing that the system is practical at scale.
Vulnerabilities and Threat Detection
Trojan Source: Invisible Vulnerabilities
Nicholas Boucher, University of Cambridge; Ross Anderson, University of Cambridge and University of Edinburgh
We present a new type of attack in which source code is maliciously encoded so that it appears different to a compiler and to the human eye. This attack exploits subtleties in text-encoding standards such as Unicode to produce source code whose tokens are logically encoded in a different order from the one in which they are displayed, leading to vulnerabilities that cannot be perceived directly by human code reviewers. 'Trojan Source' attacks, as we call them, pose an immediate threat to first-party software and enable supply-chain compromise across the industry. We present working examples of Trojan Source attacks in C, C++, C#, JavaScript, Java, Rust, Go, Python, SQL, Bash, Assembly, and Solidity. We propose definitive compiler-level defenses, and describe other mitigating controls that can be deployed in editors, repositories, and build pipelines while compilers are upgraded to block this attack. We document an industry-wide coordinated disclosure for these vulnerabilities; as they affect most compilers, editors, and repositories, the exercise illustrates how different firms, open-source communities, and other stakeholders respond to vulnerability disclosure.
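In the spirit of the paper's Python early-return example, the snippet below hides a return statement using a right-to-left override; the bidi character is written as a visible \u escape here, whereas a real attack embeds the raw control character so that reviewers see it rendered as part of the docstring:

    bank = {'alice': 100}

    def subtract_funds(account: str, amount: int):
        # With a raw U+202E in place of the escape, this line renders as if
        # ';return' were inside the string, yet it executes.
        ''' Subtract funds from bank account then \u202e ''' ;return
        bank[account] -= amount

    subtract_funds('alice', 50)
    print(bank['alice'])  # prints 100: the hidden early return fired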
Cheesecloth: Zero-Knowledge Proofs of Real World Vulnerabilities
Santiago Cuellar, Galois, Inc.; Bill Harris, Google LLC; James Parker and Stuart Pernsteiner, Galois, Inc.; Eran Tromer, Columbia University
V1SCAN: Discovering 1-day Vulnerabilities in Reused C/C++ Open-source Software Components Using Code Classification Techniques
Seunghoon Woo, Eunjin Choi, Heejo Lee, and Hakjoo Oh, Korea University
We present V1SCAN, an effective approach for discovering 1-day vulnerabilities in reused C/C++ open-source software (OSS) components. Reusing third-party OSS has many benefits, but can put the entire software at risk owing to the vulnerabilities they propagate. In mitigation, several techniques for detecting propagated vulnerabilities, which can be classified into version- and code-based approaches, have been proposed. However, state-of-the-art techniques unfortunately produce many false positives or negatives when OSS projects are reused with code modifications.
In this paper, we show that these limitations can be addressed by improving version- and code-based approaches and synergistically combining them. By classifying reused code from OSS components, V1SCAN only considers vulnerabilities contained in the target program and filters out unused vulnerable code, thereby reducing false alarms produced by version-based approaches. V1SCAN improves the coverage of code-based approaches by classifying vulnerable code and then detecting vulnerabilities propagated with code changes in various code locations. Evaluation on popular C/C++ software on GitHub showed that V1SCAN outperformed state-of-the-art vulnerability detection approaches by discovering 50% more vulnerabilities than they detected. In addition, V1SCAN reduced the false positive rate of a simple integration of existing version- and code-based approaches from 71% to 4% and the false negative rate from 33% to 7%. With V1SCAN, developers can detect propagated vulnerabilities with high accuracy, maintaining a secure software supply chain.
VulChecker: Graph-based Vulnerability Localization in Source Code
Yisroel Mirsky, Ben-Gurion University of the Negev; George Macon, Georgia Tech Research Institute; Michael Brown, Georgia Institute of Technology; Carter Yagemann, Ohio State University; Matthew Pruett, Evan Downing, Sukarno Mertoguno, and Wenke Lee, Georgia Institute of Technology
In software development, it is critical to detect vulnerabilities in a project as early as possible. Although deep learning has shown promise in this task, current state-of-the-art methods cannot classify a vulnerability and identify the line on which it occurs. Instead, the developer is tasked with searching for an arbitrary bug in an entire function or an even larger region of code.
In this paper, we propose VulChecker: a tool that can precisely locate vulnerabilities in source code (down to the exact instruction) as well as classify their type (CWE). To accomplish this, we propose a new program representation, a program slicing strategy, and the use of a message-passing graph neural network to utilize all of the code's semantics and improve the reach between a vulnerability's root cause and its manifestation points.
We also propose a novel data augmentation strategy for cheaply creating strong datasets for vulnerability detection in the wild, using free synthetic samples available online. With this training strategy, VulChecker was able to identify 24 CVEs (10 from 2019 & 2020) in 19 projects taken from the wild, with nearly zero false positives compared to a commercial tool that could only detect 4. VulChecker also discovered an exploitable zero-day vulnerability, which has been reported to developers for responsible disclosure.
DISTDET: A Cost-Effective Distributed Cyber Threat Detection System
Feng Dong, School of Cyber Science and Engineering, Huazhong University of Science and Technology / Sangfor Technologies Inc.; Liu Wang and Xu Nie, Beijing University of Posts and Telecommunications; Fei Shao, Case Western Reserve University; Haoyu Wang, School of Cyber Science and Engineering, Huazhong University of Science and Technology; Ding Li, Key Laboratory of High-Confidence Software Technologies (MOE), School of Computer Science, Peking University; Xiapu Luo, The Hong Kong Polytechnic University; Xusheng Xiao, Arizona State University
Building a provenance graph that considers causal relationships among software behaviors can better provide contextual information about cyber attacks, especially advanced attacks such as Advanced Persistent Threat (APT) attacks. Despite its promise in assisting attack investigation, existing approaches that use provenance graphs to perform attack detection suffer from two fundamental limitations. First, existing approaches adopt a centralized detection architecture that sends all system auditing logs to the server for processing, incurring intolerable costs of data transmission, data storage, and computation. Second, they adopt either rule-based techniques that cannot detect unknown threats, or anomaly-detection techniques that produce numerous false alarms, failing to achieve a balance of precision and recall in APT detection. To address these fundamental challenges, we propose DISTDET, a distributed detection system that detects APT attacks by (1) performing lightweight detection based on a host model built on the client side, (2) filtering false alarms based on the semantics of the alarm properties, and (3) deriving global models to complement the local bias of the host models. Our experiments in a large-scale industrial environment (1,130 hosts, 14 days, ~1.6 billion events) and on the DARPA TC dataset show that DISTDET is as effective as state-of-the-art techniques in detecting attacks, while dramatically reducing network bandwidth from 11.28Mb/s to 17.08Kb/s (676.5× reduction), memory usage from 364MB to 5.523MB (66× reduction), and storage from 1.47GB to 130.34MB (11.6× reduction). At the time of this writing, DISTDET has been deployed to 50+ industry customers with 22,000+ hosts for more than 6 months, and has identified over 900 real-world attacks.
2:45 pm–3:15 pm
Break with Refreshments
3:15 pm–4:30 pm
Automated Analysis of Deployed Systems
Automated Security Analysis of Exposure Notification Systems
Kevin Morio and Ilkan Esiyok, CISPA Helmholtz Center for Information Security; Dennis Jackson, Mozilla; Robert Künnemann, CISPA Helmholtz Center for Information Security
We present the first formal analysis and comparison of the security of the two most widely deployed exposure notification systems, ROBERT and the Google and Apple Exposure Notification (GAEN) framework.
ROBERT is the most popular instalment of the centralised approach to exposure notification, in which the risk score is computed by a central server. GAEN, in contrast, follows the decentralised approach, where the user's phone calculates the risk. The relative merits of centralised and decentralised systems have proven to be a controversial question. While the majority of previous analyses have focused on the privacy implications of these systems, ours is the first formal analysis to evaluate the security of the deployed systems: the absence of false risk alerts.
We model the French deployment of ROBERT and the most widely deployed GAEN variant, Germany's Corona-Warn-App. We isolate the precise conditions under which these systems prevent false alerts. We determine exactly how an adversary can subvert the system via network and Bluetooth sniffing, database leakage or the compromise of phones, back-end systems and health authorities. We also investigate the security of the original specification of the DP3T protocol, in order to identify gaps between the proposed scheme and its ultimate deployment.
We find a total of 27 attack patterns, including many that distinguish the centralised from the decentralised approach, as well as attacks on the authorisation procedure that differentiate all three protocols. Our results suggest that ROBERT's centralised design is more vulnerable to both opportunistic and highly resourced attackers attempting to perform mass-notification attacks.
Formal Analysis of SPDM: Security Protocol and Data Model version 1.2
Cas Cremers, Alexander Dax, and Aurora Naska, CISPA Helmholtz Center for Information Security
DMTF is a standards organization formed by major industry players in IT infrastructure, including AMD, Alibaba, Broadcom, Cisco, Dell, Google, Huawei, IBM, Intel, Lenovo, and NVIDIA, which aims to enable interoperability, e.g., across cloud, virtualization, network, server, and storage products. It is currently standardizing a security protocol called SPDM, which aims to secure communication over the wire and to enable device attestation, notably also explicitly catering to communicating hardware components.
The SPDM protocol inherits requirements and design ideas from IETF's TLS 1.3. However, its state machines and transcript handling are substantially different and more complex. While the architecture, specification, and open-source libraries of the current versions of SPDM are publicly available, no significant security analysis of the protocol has been published.
In this work we develop the first formal models of the three modes of the SPDM protocol version 1.2.1, and formally analyze their main security properties.
One Size Does Not Fit All: Uncovering and Exploiting Cross Platform Discrepant APIs in WeChat
Chao Wang, Yue Zhang, and Zhiqiang Lin, The Ohio State University
The Most Dangerous Codec in the World: Finding and Exploiting Vulnerabilities in H.264 Decoders
Willy R. Vasquez, The University of Texas at Austin; Stephen Checkoway, Oberlin College; Hovav Shacham, The University of Texas at Austin
Modern video encoding standards such as H.264 are a marvel of hidden complexity. But with hidden complexity comes hidden security risk. Decoding video in practice means interacting with dedicated hardware accelerators and the proprietary, privileged software components used to drive them. The video decoder ecosystem is obscure, opaque, diverse, highly privileged, largely untested, and highly exposed—a dangerous combination.
We introduce and evaluate H26FORGE, domain-specific infrastructure for analyzing, generating, and manipulating syntactically correct but semantically spec-non-compliant video files. Using H26FORGE, we uncover insecurity in depth across the video decoder ecosystem, including kernel memory corruption bugs in iOS, memory corruption bugs in Firefox and VLC for Windows, and video accelerator and application processor kernel memory bugs in multiple Android devices.
Are You Spying on Me? Large-Scale Analysis on IoT Data Exposure through Companion Apps
Yuhong Nan, Sun Yat-sen University; Xueqiang Wang, University of Central Florida; Luyi Xing and Xiaojing Liao, Indiana University Bloomington; Ruoyu Wu and Jianliang Wu, Purdue University; Yifan Zhang and XiaoFeng Wang, Indiana University Bloomington
Recent research has highlighted privacy as a primary concern for IoT device users. However, due to the challenges in conducting a large-scale study to analyze thousands of devices, there has been less study on how pervasive unauthorized data exposure has actually become on today's IoT devices and the privacy implications of such exposure. To fill this gap, we leverage the observation that most IoT devices on the market today use their companion mobile apps as an intermediary to process, label and transmit the data they collect. As a result, the semantic information carried by these apps can be recovered and analyzed automatically to track the collection and sharing of IoT data.
In this paper, we report the first of such a study, based upon a new framework IoTProfiler, which statically analyzes a large number of companion apps to infer and track the data collected by their IoT devices. Our approach utilizes machine learning to detect the code snippet in a companion app that handles IoT data and further recovers the semantics of the data from the snippet to evaluate whether their exposure has been properly communicated to the user. By running IoTProfiler on 6,208 companion apps, our research has led to the discovery of 1,973 apps that expose user data without proper disclosure, covering IoT devices from at least 1,559 unique vendors. Our findings include highly sensitive information, such as health status and home address, and the pervasiveness of unauthorized sharing of the data to third parties, including those in different countries. Our findings highlight the urgent need to regulate today's IoT industry to protect user privacy.
Manipulation, Influence, and Elections
Strategies and Vulnerabilities of Participants in Venezuelan Influence Operations
Ruben Recabarren, Bogdan Carbunar, Nestor Hernandez, and Ashfaq Ali Shafin, Florida International University
Studies of online influence operations, coordinated efforts to disseminate and amplify disinformation, focus on forensic analysis of social networks or of publicly available datasets of trolls and bot accounts. However, little is known about the experiences and challenges of the human participants in influence operations. We conducted semi-structured interviews with 19 influence operation participants who contribute to the online image of Venezuela, to understand their incentives, capabilities, and strategies for promoting content while evading detection. To validate a subset of their answers, we performed a quantitative investigation using data collected over almost four months from Twitter accounts they control.
We found diverse participants that include pro-government and opposition supporters, operatives and grassroots campaigners, and sockpuppet account owners and real users. While pro-government and opposition participants have similar goals and promotion strategies, they differ in their motivation, organization, adversaries and detection avoidance strategies. We report the Patria framework, a government platform for operatives to log activities and receive benefits. We systematize participant strategies to promote political content, and to evade and recover from Twitter penalties. We identify vulnerability points associated with these strategies, and suggest more nuanced defenses against influence operations.
TRIDENT: Towards Detecting and Mitigating Web-based Social Engineering Attacks
Zheng Yang, Joey Allen, and Matthew Landen, Georgia Institute of Technology; Roberto Perdisci, Georgia Tech and University of Georgia; Wenke Lee, Georgia Institute of Technology
As the weakest link in cybersecurity, humans have become the main target of attackers, who take advantage of sophisticated web-based social engineering techniques. These attackers leverage low-tier ad networks to inject social engineering components onto web pages to lure users into websites that the attackers control for further exploitation. Most of these exploitations are Web-based Social Engineering Attacks (WSEAs), such as reward and lottery scams. Although researchers have proposed systems and tools to detect some WSEAs, these approaches are narrowly tailored to specific scam techniques (e.g., tech support scams, survey scams) and were not designed to be effective against a broad set of attack techniques. With the ever-increasing diversity and sophistication of WSEAs that any user can encounter, there is an urgent need for new and more effective in-browser systems that can accurately detect generic WSEAs.
To address this need, we propose TRIDENT, a novel defense system that aims to detect and block generic WSEAs in real time. TRIDENT stops WSEAs by detecting Social Engineering Ads (SE-ads), the entry point of general web social engineering attacks distributed by low-tier ad networks at scale. Our extensive evaluation shows that TRIDENT can detect SE-ads with an accuracy of 92.63% and a false positive rate of 2.57%, and that it is robust against evasion attempts. We also evaluated TRIDENT against state-of-the-art ad-blocking tools; the results show that TRIDENT outperforms these tools with a 10% increase in accuracy. Additionally, TRIDENT incurs a median runtime overhead of only 2.13%, small enough to deploy in production.
Fact-Saboteurs: A Taxonomy of Evidence Manipulation Attacks against Fact-Verification Systems
Sahar Abdelnabi and Mario Fritz, CISPA Helmholtz Center for Information Security
Mis- and disinformation are a substantial global threat to our security and safety. To cope with the scale of online misinformation, researchers have been working on automating fact-checking by retrieving and verifying against relevant evidence. However, despite many advances, a comprehensive evaluation of the possible attack vectors against such systems is still lacking. Particularly, the automated fact-verification process might be vulnerable to the exact disinformation campaigns it is trying to combat. In this work, we assume an adversary that automatically tampers with the online evidence in order to disrupt the fact-checking model via camouflaging the relevant evidence or planting a misleading one. We first propose an exploratory taxonomy that spans these two targets and the different threat model dimensions. Guided by this, we design and propose several potential attack methods. We show that it is possible to subtly modify claim-salient snippets in the evidence and generate diverse and claim-aligned evidence. Thus, we highly degrade the fact-checking performance under many different permutations of the taxonomy’s dimensions. The attacks are also robust against post-hoc modifications of the claim. Our analysis further hints at potential limitations in models’ inference when faced with contradicting evidence. We emphasize that these attacks can have harmful implications on the inspectable and human-in-the-loop usage scenarios of such models, and we conclude by discussing challenges and directions for future defenses.
Reversing, Breaking, and Fixing the French Legislative Election E-Voting Protocol
Alexandre Debant and Lucca Hirschi, Université de Lorraine, Inria, CNRS, France
PROVIDENCE: a Flexible Round-by-Round Risk-Limiting Audit
Oliver Broadrick and Poorvi Vora, The George Washington University; Filip Zagórski, University of Wroclaw and Votifica
A Risk-Limiting Audit (RLA) is a statistical election tabulation audit with a rigorous error guarantee. We present the ballot-polling RLA PROVIDENCE, an audit with the efficiency of MINERVA and the flexibility of BRAVO, and prove that it is risk-limiting in the presence of an adversary who can choose subsequent round sizes given knowledge of previous samples. We describe a measure of audit workload as a function of the number of rounds, precincts touched, and ballots drawn, and quantify the problem of obtaining a misleading audit sample when rounds are too small, demonstrating the importance of the resulting constraint on audit planning. We describe an approach to planning audit round schedules using these measures and present simulation results demonstrating the superiority of PROVIDENCE.
We describe the use of PROVIDENCE by the Rhode Island Board of Elections in a tabulation audit of the 2021 election. Our implementation of PROVIDENCE in the open source R2B2 library has been integrated as an option in Arlo, the most commonly used RLA software.
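For intuition about the risk-limiting guarantee, here is the classic BRAVO-style sequential likelihood-ratio test that ballot-polling audits build on; PROVIDENCE replaces this round-unaware statistic with its own round-by-round one, so the sketch below is background, not the paper's algorithm:

    def ballot_polling_audit(sample, reported_winner_share, risk_limit=0.05):
        # sample: True if a drawn ballot is for the reported winner,
        # False if for the runner-up (other ballots ignored in this sketch).
        t = 1.0  # likelihood ratio: reported share vs. a tied election
        p = reported_winner_share  # must exceed 0.5
        for for_winner in sample:
            t *= (2 * p) if for_winner else (2 * (1 - p))
            if t >= 1 / risk_limit:
                return "stop: reported outcome confirmed at this risk limit"
        return "escalate: draw more ballots or go to a full hand count"

    # e.g., 20 straight ballots for a 60%-reported winner already confirm:
    print(ballot_polling_audit([True] * 20, 0.6))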
Side Channel Attacks
NVLeak: Off-Chip Side-Channel Attacks via Non-Volatile Memory Systems
Zixuan Wang, UC San Diego; Mohammadkazem Taram, Purdue University and UC San Diego; Daniel Moghimi, UT Austin and UC San Diego; Steven Swanson, Dean Tullsen, and Jishen Zhao, UC San Diego
We study microarchitectural side-channel attacks and defenses on non-volatile RAM (NVRAM) DIMMs. In this study, we first reverse-engineer NVRAMs as implemented in the Intel Optane DIMM and reveal several previously undocumented microarchitectural details: on-DIMM cache structures (NVCache) and wear-leveling policies. Based on these findings, we develop cross-core and cross-VM covert channels to establish the channel capacity of these shared hardware resources. Then, we devise NVCache-based side channels under the umbrella of NVLeak. We apply NVLeak to a series of attack case studies, including compromising the privacy of databases and key-value storage backed by NVRAM and spying on the execution path of code pages when NVRAM is used as volatile runtime memory. Our results show that side-channel attacks exploiting NVRAM are practical and defeat previously proposed defenses that only focus on on-chip hardware resources. To fill this gap in defense, we develop system-level mitigations based on cache partitioning to prevent side-channel leakage from NVCache.
Cipherfix: Mitigating Ciphertext Side-Channel Attacks in Software
Jan Wichelmann, Anna Pätschke, Luca Wilke, and Thomas Eisenbarth, University of Lübeck
Trusted execution environments (TEEs) provide an environment for running workloads in the cloud without having to trust cloud service providers, by offering additional hardware-assisted security guarantees. However, main memory encryption, a key mechanism for protecting against system-level attackers trying to read the TEE's content as well as against physical, off-chip attackers, is insufficient. The recent Cipherleaks attacks infer secret data from TEE-protected implementations by analyzing ciphertext patterns that arise from deterministic memory encryption. The underlying vulnerability, dubbed the ciphertext side-channel, is protected neither by state-of-the-art countermeasures like constant-time code nor by hardware fixes.
Thus, in this paper, we present a software-based, drop-in solution that can harden existing binaries such that they can be safely executed under TEEs vulnerable to ciphertext side-channels, without requiring recompilation. We combine taint tracking with both static and dynamic binary instrumentation to find sensitive memory locations, and mitigate the leakage by masking secret data before it gets written to memory. This way, although the memory encryption remains deterministic, we destroy any secret-dependent patterns in encrypted memory. We show that our proof-of-concept implementation protects various constant-time implementations against ciphertext side-channels with reasonable overhead.
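A toy sketch of the masking idea: XOR each secret value with a fresh random mask before it is stored, so identical plaintexts stop producing identical ciphertexts under deterministic encryption; the flat shadow-offset layout below is a simplification, and the buffer sizes and offset are invented for the example:

    import os

    memory = bytearray(64)  # stand-in for the TEE-encrypted data region
    SHADOW = 32             # hypothetical offset of the mask storage area

    def masked_store(addr, secret_byte):
        mask = os.urandom(1)[0]                # fresh mask on every write
        memory[addr] = secret_byte ^ mask      # what memory encryption sees
        memory[addr + SHADOW] = mask           # mask kept in shadow storage

    def masked_load(addr):
        return memory[addr] ^ memory[addr + SHADOW]

Because the mask changes on every store, two writes of the same secret value produce different ciphertexts, destroying the secret-dependent patterns that ciphertext side-channel attacks observe.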
Side-Channel Attacks on Optane Persistent Memory
Sihang Liu, University of Virginia; Suraaj Kanniwadi, Cornell University; Martin Schwarzl, Andreas Kogler, and Daniel Gruss, Graz University of Technology; Samira Khan, University of Virginia
There is a constant evolution of technology for cloud environments, including the development of new memory storage technology, such as persistent memory. The newly-released Intel Optane persistent memory provides high-performance, persistent, and byte-addressable access for storage-class applications in data centers. While Optane’s direct data management is fast and efficient, it is unclear whether it comes with undesirable security implications. This is problematic, as cloud tenants are physically co-located on the same hardware.
In this paper, we present the first side-channel security analysis of Intel Optane persistent memory. We reverse-engineer the internal cache hierarchy, cache sizes, associativity, replacement policies, and wear-leveling mechanism of the Optane memory. Based on this reverse-engineering, we construct four new attack primitives on Optane’s internal components. We then present four case studies using these attack primitives. First, we present local covert channels based on Optane’s internal caching. Second, we demonstrate a keystroke side-channel attack on a remote user via Intel’s Optane-optimized key-value store, pmemkv. Third, we study a fully remote covert channel through pmemkv. Fourth, we present our Note Board attack, also through pmemkv, enabling two parties to store and exchange messages covertly across long time gaps and even power cycles of the server. Finally, we discuss mitigations against our attacks.
Pspray: Timing Side-Channel based Linux Kernel Heap Exploitation Technique
Yoochan Lee and Jinhan Kwak, Seoul National University; Junesoo Kang and Yuseok Jeon, UNIST; Byoungyoung Lee, Seoul National University
The stealthiness of an attack is the most vital consideration for an attacker seeking to reach their goals without being detected. Attackers therefore put a great deal of effort into increasing the success rate of their attacks, since failed attempts can expose information about the attacker and the attack. Exploitation of the kernel, which is a prime target for attackers, usually takes advantage of heap-based vulnerabilities, and these exploits' success rates fortunately remain low (e.g., 56.1% on average) due to the operating principle of the default Linux kernel heap allocator, SLUB.
This paper presents Pspray, a timing side-channel attack-based exploitation technique that significantly increases the success probability of exploitation. According to our evaluation with 10 real-world vulnerabilities, Pspray significantly improves the success rate for all of them (e.g., from 56.1% to 97.92% on average). To prevent this exploitation technique from being abused by attackers, we further introduce a new defense mechanism that mitigates the threat of Pspray. After applying this mitigation, the overall success rate of exploitation returns to a level similar to that before Pspray, with negligible performance overhead (0.25%) and memory overhead (0.52%).
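Conceptually, the timing signal comes from SLUB's slow path: when the active slab is exhausted, the next allocation must fetch a fresh slab from the page allocator and therefore takes measurably longer. The sketch below (our reconstruction with a hypothetical probe syscall and threshold, not the paper's exact strategy) times a kmalloc-reachable syscall until that spike appears.

    #include <stdint.h>
    #include <time.h>
    #include <sys/msg.h>

    struct probe_msg { long mtype; char mtext[64]; };

    static uint64_t ns_between(struct timespec a, struct timespec b) {
        return (uint64_t)(b.tv_sec - a.tv_sec) * 1000000000ull
             + (uint64_t)(b.tv_nsec - a.tv_nsec);
    }

    /* Spray same-size kernel allocations until one takes noticeably longer:
       that spike marks a freshly created slab, so subsequent allocations of
       this size land at predictable offsets, raising exploit reliability. */
    static void spray_until_new_slab(int qid, uint64_t spike_ns) {
        struct probe_msg m = { .mtype = 1 };
        for (;;) {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC_RAW, &t0);
            msgsnd(qid, &m, sizeof(m.mtext), 0);   /* kmalloc-backed object */
            clock_gettime(CLOCK_MONOTONIC_RAW, &t1);
            if (ns_between(t0, t1) > spike_ns)
                return;   /* slow path seen: trigger the vulnerable alloc now */
        }
    }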
CipherH: Automated Detection of Ciphertext Side-channel Vulnerabilities in Cryptographic Implementations
Sen Deng, Southern University of Science and Technology; Mengyuan Li, The Ohio State University; Yining Tang, Southern University of Science and Technology; Shuai Wang, Hong Kong University of Science and Technology; Shoumeng Yan, The Ant Group; Yinqian Zhang, Southern University of Science and Technology
The ciphertext side channel is a new type of side channel that exploits the deterministic memory encryption of trusted execution environments (TEEs). It enables an adversary with read access, either logical or physical, to the ciphertext of encrypted memory to compromise cryptographic implementations protected by TEEs with high fidelity. Prior studies have concluded that the ciphertext side channel is a severe threat not only to AMD SEV-SNP, where the vulnerability was first discovered, but to all TEEs with deterministic memory encryption.
In this paper, we propose CipherH, a practical framework for automating the analysis of cryptographic software and detecting program points vulnerable to ciphertext side channels. CipherH is designed to perform a practical hybrid analysis of production cryptographic software, with a speedy dynamic taint analysis to track the usage of secrets throughout the entire program and a static symbolic execution procedure on each “tainted” function to reason about ciphertext side-channel vulnerabilities using symbolic constraints. Empirical evaluation has led to the discovery of over 200 vulnerable program points in state-of-the-art RSA and ECDSA/ECDH implementations from OpenSSL, MbedTLS, and WolfSSL. Representative cases have been reported to, and confirmed or patched by, the developers.
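As a toy illustration of the condition such an analysis flags (our simplification; CipherH itself reasons symbolically rather than over concrete traces): any address written more than once with secret-tainted data is a candidate, because under deterministic encryption, ciphertext equality at one address reveals plaintext equality to the attacker.

    #include <stdint.h>
    #include <stdio.h>

    struct write_event { uint64_t addr; int secret_tainted; };

    /* Flag addresses that receive two or more secret-dependent stores. */
    static void report_candidates(const struct write_event *trace, size_t n) {
        for (size_t i = 0; i < n; i++) {
            if (!trace[i].secret_tainted) continue;
            for (size_t j = i + 1; j < n; j++) {
                if (trace[j].secret_tainted && trace[j].addr == trace[i].addr) {
                    printf("ciphertext side-channel candidate at 0x%llx\n",
                           (unsigned long long)trace[i].addr);
                    break;
                }
            }
        }
    }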
Transportation and Infrastructure
ICSPatch: Automated Vulnerability Localization and Non-Intrusive Hotpatching in Industrial Control Systems using Data Dependence Graphs
Prashant Hari Narayan Rajput, NYU Tandon School of Engineering; Constantine Doumanidis and Michail Maniatakos, New York University Abu Dhabi
The paradigm shift toward extensive intercommunication between Operational Technology (OT) and Information Technology (IT) devices allows vulnerabilities typical of the IT world to propagate to the OT side. The security layer formerly offered by air gapping is thus removed, making security patching for OT devices a hard requirement. Conventional patching involves a device reboot to load the patched code into main memory, which does not apply to OT devices controlling critical processes, due to downtime, necessitating in-memory vulnerability patching. Furthermore, these control binaries are often compiled by in-house proprietary compilers, further hindering the patching process and placing reliance on OT vendors for rapid vulnerability discovery and patch development. Current state-of-the-art hotpatching approaches focus only on firmware and/or RTOSes. Therefore, in this work, we develop ICSPatch, a framework to automate control-logic vulnerability localization using Data Dependence Graphs (DDGs). With the help of DDGs, ICSPatch pinpoints the vulnerability in the control application. As an independent second step, ICSPatch can non-intrusively hotpatch vulnerabilities in the control application directly in the main memory of Programmable Logic Controllers while maintaining reliable continuous operation. To evaluate our framework, we test ICSPatch on a synthetic dataset of 24 vulnerable control application binaries from diverse critical infrastructure sectors. Results show that ICSPatch successfully localized all vulnerabilities and generated patches accordingly. Furthermore, the patches added only a negligible latency increase to the execution cycle while maintaining correctness and protection against the vulnerability.
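A deliberately tiny illustration of DDG-based localization (hypothetical instructions and edges, not ICSPatch's analysis itself): starting from the instruction where the vulnerability manifests, walk def-use edges backwards until the untrusted input that caused it is reached.

    #include <stdio.h>

    enum { N = 4 };
    static const char *insn[N] = {
        "0: x = read_input()",    /* untrusted source */
        "1: y = x * 4",
        "2: p = buf + y",
        "3: store p, v",          /* sink: out-of-bounds write */
    };
    /* dep[i][j] = 1 means instruction i uses a value defined by j. */
    static const int dep[N][N] = {
        {0,0,0,0},
        {1,0,0,0},
        {0,1,0,0},
        {0,0,1,0},
    };

    /* Walk dependence edges backwards from the faulting instruction
       (the toy graph is acyclic, so plain recursion suffices). */
    static void backtrace_from(int sink) {
        printf("%s\n", insn[sink]);
        for (int j = 0; j < N; j++)
            if (dep[sink][j]) backtrace_from(j);
    }

    int main(void) { backtrace_from(3); return 0; }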
Access Denied: Assessing Physical Risks to Internet Access Networks
Alexander Marder, CAIDA / UC San Diego; Zesen Zhang, UC San Diego; Ricky Mok and Ramakrishna Padmanabhan, CAIDA / UC San Diego; Bradley Huffaker, CAIDA / UC San Diego; Matthew Luckie, University of Waikato; Alberto Dainotti, Georgia Tech; kc claffy, CAIDA / UC San Diego; Alex C. Snoeren and Aaron Schulman, UC San Diego
Regional access networks play an essential role in connecting both wireline and mobile users to the Internet. Today’s access networks support 5G cellular phones, cloud services, hospital and financial services, and remote work essential to the modern economy. Yet long-standing economic and architectural constraints produce points of limited redundancy that leave these networks exposed to targeted physical attacks resulting in widespread outages. This risk was dramatically shown in December 2020, when a bomb destroyed part of AT&T’s regional access network in Nashville, Tennessee, disabling 911 emergency dispatch, air traffic control, hospital networks, and credit card processing, among other services.
We combine new techniques for analyzing access-network infrastructure deployments with measurements of large-scale outages to demonstrate the feasibility and quantify potential impacts of targeted attacks. Our study yields insights into physical attack surfaces and resiliency limits of regional access networks. We analyze potential approaches to mitigate the risks we identify and discuss drawbacks identified by network operators. We hope that our empirical evaluation will inform risk assessments and operational practices, as well as motivate further analyses of this critical infrastructure.
ZBCAN: A Zero-Byte CAN Defense System
Khaled Serag, Rohit Bhatia, Akram Faqih, and Muslum Ozgur Ozmen, Purdue University; Vireshwar Kumar, Indian Institute of Technology, Delhi; Z. Berkay Celik and Dongyan Xu, Purdue University
Controller Area Network (CAN) is a widely used network protocol. In addition to being the main communication medium for vehicles, it is also used in factories, medical equipment, elevators, and avionics. Unfortunately, CAN was designed without any security features. Consequently, it has come under scrutiny by the research community, which has exposed its security weaknesses. Recent works have shown that a single compromised ECU on a CAN bus can launch a multitude of attacks, ranging from message injection to bus flooding to attacks exploiting CAN's error-handling mechanism. Although several works have attempted to secure CAN, we argue that none of their approaches could be widely adopted, for reasons inherent in their design. In this work, we introduce ZBCAN, a defense system that uses zero bytes of the CAN frame to secure against the most common CAN attacks, including message injection, impersonation, flooding, and error handling, without using encryption or MACs, while taking into consideration performance metrics such as delay, busload, and data rate.
RIDAS: Real-Time Identification of Attack Sources on Controller Area Networks
Jiwoo Shin and Hyunghoon Kim, Soongsil University; Seyoung Lee, Wonsuk Choi, and Dong Hoon Lee, Korea University; Hyo Jin Jo, Soongsil University
That Person Moves Like A Car: Misclassification Attack Detection for Autonomous Systems Using Spatiotemporal Consistency
Yanmao Man, University of Arizona; Raymond Muller, Purdue University; Ming Li, University of Arizona; Z. Berkay Celik, Purdue University; Ryan Gerdes, Virginia Tech
Autonomous systems commonly rely on object detection and tracking (ODT) to perceive the environment and predict the trajectories of surrounding objects for planning purposes. An ODT's output contains object classes and tracks, which are traditionally predicted independently. Recent studies have shown that an ODT's output can be falsified by various perception attacks with well-crafted noise, but existing defenses are limited to specific noise-injection methods and thus fail to generalize. In this work, we propose PercepGuard, which detects misclassification attacks against perception modules regardless of attack methodology. PercepGuard exploits the spatiotemporal properties of a detected object (inherent in its track) and cross-checks the consistency between the track and the class prediction. To improve adversarial robustness against defense-aware (adaptive) attacks, we additionally consider context data (such as ego-vehicle velocity) for contextual consistency verification, which dramatically increases the attack difficulty. Evaluations with both real-world and simulated datasets produce an FPR of 5% and a TPR of 99% against adaptive attacks. A baseline comparison confirms the advantage of leveraging temporal features. Real-world experiments with displayed and projected adversarial patches show that PercepGuard detects 96% of the attacks on average.
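A stripped-down version of the consistency check (an illustrative fixed-threshold rule; PercepGuard itself trains a sequence model over track features and additionally verifies contextual consistency):

    #include <math.h>
    #include <string.h>

    struct track_point { double x, y, t; };   /* world coords (m), time (s) */

    /* Average speed over a track of n >= 2 points. */
    static double avg_speed(const struct track_point *p, int n) {
        double dist = 0.0;
        for (int i = 1; i < n; i++)
            dist += hypot(p[i].x - p[i-1].x, p[i].y - p[i-1].y);
        return dist / (p[n-1].t - p[0].t);
    }

    /* A track labeled "pedestrian" that moves faster than a human can
       sprint is spatiotemporally inconsistent with its class, hinting
       at a misclassification attack. */
    static int class_consistent(const char *cls, double speed_mps) {
        if (strcmp(cls, "pedestrian") == 0 && speed_mps > 12.0)
            return 0;
        return 1;
    }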
Language-Based Security
TRust: A Compilation Framework for In-process Isolation to Protect Safe Rust against Untrusted Code
Inyoung Bang and Martin Kayondo, Seoul National University; Hyungon Moon, UNIST (Ulsan National Institute of Science and Technology); Yunheung Paek, Seoul National University
Rust was invented to help developers build highly safe systems. It comes with a variety of programming constructs that put emphasis on safety and control of memory layout. Rust enforces a strict type system and ownership model, enabling compile-time checks of all spatial and temporal safety errors. Despite this advantage in security, the restrictions imposed by Rust’s type system make it difficult or inefficient to express certain designs or computations. To ease or simplify their programming, developers thus often include untrusted code from unsafe Rust or external libraries written in other languages. Sadly, programming practices that embrace such untrusted code for flexibility or efficiency subvert the strong safety guarantees of safe Rust. This paper presents TRUST, a compilation framework that provides trustworthy protection of safe Rust via in-process isolation from untrusted code present in the program. Its main strategy is allocating objects in an isolated memory region that is accessible to safe Rust but cannot be written by untrusted code. To enforce this, TRUST employs software fault isolation and x86 protection keys. It can be applied directly to any Rust code without requiring manual changes. Our experiments reveal that TRUST is effective and efficient, incurring runtime overhead of only 7.55% and memory overhead of 13.30% on average when running 11 widely used crates in Rust.
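For the protection-key half of that design, a minimal Linux sketch (a hand-written illustration with error handling omitted; TRUST emits the equivalent operations from a compiler pass and pairs them with SFI for code that cannot be trusted to honor the keys):

    #define _GNU_SOURCE
    #include <sys/mman.h>

    int main(void) {
        long page = 4096;
        /* Region holding safe-Rust objects, tagged with its own pkey. */
        void *safe_heap = mmap(NULL, page, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        int pkey = pkey_alloc(0, 0);
        pkey_mprotect(safe_heap, page, PROT_READ | PROT_WRITE, pkey);

        pkey_set(pkey, PKEY_DISABLE_WRITE);  /* entering untrusted code:
                                                writes to safe_heap fault,
                                                no syscall on the hot path */
        /* ... untrusted code runs here ... */
        pkey_set(pkey, 0);                   /* back in safe Rust: writable */
        return 0;
    }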
Jinn: Hijacking Safe Programs with Trojans
Komail Dharsee and John Criswell, University of Rochester
Untrusted hardware supply chains enable malicious, powerful, and permanent alterations to processors known as hardware trojans. Such hardware trojans can undermine any software-enforced security policies deployed on top of the hardware. Existing defenses target a select set of hardware components, specifically those that implement hardware-enforced security mechanisms such as cryptographic cores, user/kernel privilege isolation, and memory protections.
We observe that computing systems exercise general-purpose processor logic to implement software-enforced security policies. This makes general-purpose logic security critical, since tampering with it could violate software-based security policies. Leveraging this insight, we develop a novel class of hardware trojans, which we dub Jinn trojans, that corrupt general-purpose hardware to enable flexible and powerful high-level attacks. Jinn trojans deactivate compiler-based security-enforcement mechanisms, making type-safe software vulnerable to memory-safety attacks. We prototyped design-time Jinn trojans in the gem5 simulator and used them to attack programs written in Rust, inducing memory-safety vulnerabilities to launch control-flow hijacking attacks. We find that Jinn trojans can effectively compromise software-enforced security policies by corrupting a single bit of architectural state with as little as 8 bits of persistent trojan-internal state. Thus, we show that Jinn trojans are effective even when planted in general-purpose hardware, disjoint from any hardware-enforced security components, and that protecting hardware-enforced security logic is insufficient to keep a system secure from hardware trojans.
ARGUS: A Framework for Staged Static Taint Analysis of GitHub Workflows and Actions
Siddharth Muralee, Purdue University; Igibek Koishybayev, Aleksandr Nahapetyan, Greg Tystahl, and Brad Reaves, North Carolina State University; Antonio Bianchi, Purdue University; William Enck and Alexandros Kapravelos, North Carolina State University; Aravind Machiry, Purdue University
McFIL: Model Counting Functionality-Inherent Leakage
Maximilian Zinkus, Yinzhi Cao, and Matthew Green, Johns Hopkins University
Extracting Protocol Format as State Machine via Controlled Static Loop Analysis
Qingkai Shi, Xiangzhe Xu, and Xiangyu Zhang, Purdue University
Browsers
Isolated and Exhausted: Attacking Operating Systems via Site Isolation in the Browser
Matthias Gierlings, Marcus Brinkmann, and Jörg Schwenk, Ruhr University Bochum
Site Isolation is a security architecture for browsers to protect against side-channel and renderer exploits by separating content from different sites at the operating system (OS) process level. By aligning web and OS security boundaries, Site Isolation promises to defend against these attack classes systematically in a streamlined architecture. However, Site Isolation is a large-scale architectural change that also makes OS resources more accessible to web attackers, and thus exposes web users to new risks at the OS level. In this paper, we present the first systematic study of OS resource exhaustion attacks based on Site Isolation, in the web attacker model, in three steps: (1) first-level resources directly accessible with Site Isolation; (2) second-level resources whose direct use is protected by the browser sandbox; (3) an advanced, real-world attack. For (1) we show how to create a fork bomb, highlighting conceptual gaps in the Site Isolation architecture. For (2) we show how to block all UDP sockets in an OS, using a variety of advanced browser features. For (3), we implement a fully working DNS Cache Poisoning attack based on Site Isolation, building on (2) and bypassing a major security feature of DNS. Our results show that the interplay between modern browser features and older OS features is increasingly problematic and needs further research.
Extending a Hand to Attackers: Browser Privilege Escalation Attacks via Extensions
Young Min Kim and Byoungyoung Lee, Seoul National University
Web browsers are attractive targets of attacks, whereby attackers can steal security- and privacy-sensitive data, such as online banking and social network credentials, from users. Thus, browsers adopt the principle of least privilege (PoLP), realized through the multiprocess architecture and site isolation, to minimize damage if compromised. We focus on browser extensions, which are third-party programs that extend the features of modern browsers (Chrome, Firefox, and Safari). The browser also applies PoLP to the extension architecture; that is, the two primary extension components are separated, where one component is granted higher privilege and the other lower privilege.
In this paper, we first analyze the security aspects of extensions. The analysis reveals that the current extension architecture imposes strict security requirements on extension developers, which are difficult to satisfy. In particular, we found 59 vulnerabilities in 40 extensions caused by violated requirements, allowing an attacker to perform privilege escalation attacks, including UXSS (universal cross-site scripting) and stealing passwords or cryptocurrencies handled by the extensions. Alarmingly, extensions are used by more than half of Chrome users and more than a third of Firefox users. Furthermore, many of the extensions in which we found vulnerabilities are extremely popular, with more than 10 million users.
To address the security limitations of the current extension architecture, we present FistBump, a new extension architecture to strengthen PoLP enforcement. FistBump employs strong process isolation between the webpage and content script; thus, the aforementioned security requirements are satisfied by design, thereby eliminating all the identified vulnerabilities. Moreover, FistBump’s design maintains the backward compatibility of the extensions; therefore, the extensions can run with FistBump without modification.
RøB: Ransomware over Modern Web Browsers
Harun Oz, Ahmet Aris, and Abbas Acar, Cyber-Physical Systems Security Lab, Florida International University, Miami, Florida, USA; Güliz Seray Tuncay, Google, Mountain View, CA, USA; Leonardo Babun and Selcuk Uluagac, Cyber-Physical Systems Security Lab, Florida International University, Miami, Florida, USA
Pool-Party: Exploiting Browser Resource Pools for Web Tracking
Peter Snyder, Brave Software; Soroush Karami, University of Illinois at Chicago; Arthur Edelstein, Brave Software; Benjamin Livshits, Imperial College London; Hamed Haddadi, Brave Software and Imperial College London
We identify a class of covert channels in browsers that are not mitigated by current defenses, which we call “pool-party” attacks. Pool-party attacks allow sites to create covert channels by manipulating limited-but-unpartitioned resource pools. This class of attacks has been known to exist; in this work we show that they are more prevalent, more practical to exploit, and exploitable in more ways than previously identified. These covert channels have sufficient bandwidth to pass cookies and identifiers across site boundaries under practical, real-world conditions. We identify pool-party attacks in all popular browsers and show that they are practical cross-site tracking techniques (i.e., attacks take 0.6 s in Chrome and Edge, and 7 s in Firefox and Tor Browser).
In this paper we make the following contributions: first, we describe pool-party covert channel attacks that exploit limits in application-layer resource pools in browsers. Second, we demonstrate that pool-party attacks are practical and can be used to track users in all popular browsers; we also share open-source implementations of the attack. Third, we show that in Gecko-based browsers (including the Tor Browser) pool-party attacks can also be used for cross-profile tracking (e.g., linking user behavior across normal and private browsing sessions). Finally, we discuss possible defenses.
Checking Passwords on Leaky Computers: A Side Channel Analysis of Chrome's Password Leak Detect Protocol
Andrew Kwong, UNC Chapel Hill; Walter Wang, University of Michigan; Jason Kim, Georgia Tech; Jonathan Berger, Bar Ilan University; Daniel Genkin, Georgia Tech; Eyal Ronen, Tel Aviv University; Hovav Shacham, UT Austin; Riad Wahby, CMU; Yuval Yarom, Ruhr University Bochum
4:30 pm–4:45 pm
Short Break
4:45 pm–5:45 pm
Speculation Doesn't Pay
Ultimate SLH: Taking Speculative Load Hardening to the Next Level
Zhiyuan Zhang, The University of Adelaide; Gilles Barthe, MPI-SP and IMDEA Software Institute; Chitchanok Chuengsatiansup, The University of Melbourne; Peter Schwabe, MPI-SP and Radboud University; Yuval Yarom, The University of Adelaide
In this paper we revisit the Spectre v1 vulnerability and software-only countermeasures. Specifically, we systematically investigate the performance penalty and security properties of multiple variants of speculative load hardening (SLH). As part of this investigation we implement the "strong SLH" variant by Patrignani and Guarnieri (CCS 2021) as a compiler extension to LLVM. We show that none of the existing variants, including strong SLH, is able to protect against all Spectre v1 attacks in practice. We do this by demonstrating, for the first time, that variable-time arithmetic instructions leak secret information even if they are executed only speculatively. We extend strong SLH to include protections also against this kind of leakage, implement the resulting full protection in LLVM, and use the SPEC2017 benchmarks to compare its performance to the existing variants of SLH and to code that uses fencing instructions to completely prevent speculative execution. We show that our proposed countermeasure offers full protection against Spectre v1 attacks at much better performance than code using fences. In fact, for several benchmarks our approach is more than twice as fast.
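The gist of hardening values as well as addresses can be shown in a hand-written fragment (illustrative only; SLH is a compiler transformation, and the generated code maintains the predicate mask with branchless instructions):

    #include <stdint.h>

    /* mask is all-ones on the architectural path and all-zero under
       misspeculation of the bounds check (the ternary compiles to a
       branchless cmov/setcc chain). Classic SLH masks the address;
       Ultimate SLH also masks the value feeding the variable-time
       division, so its latency no longer depends on the secret. */
    uint64_t lookup(const uint64_t *table, uint64_t idx, uint64_t len,
                    uint64_t divisor) {
        uint64_t mask = (idx < len) ? ~0ull : 0ull;
        if (idx < len) {
            uint64_t v = table[idx & mask];   /* address hardening */
            v &= mask;                        /* value hardening */
            return v / divisor;               /* variable-time consumer */
        }
        return 0;
    }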
Speculation at Fault: Modeling and Testing Microarchitectural Leakage of CPU Exceptions
Jana Hofmann, Azure Research, Microsoft; Emanuele Vannacci, Vrije Universiteit Amsterdam; Cedric Fournet, Boris Köpf, and Oleksii Oleksenko, Azure Research, Microsoft
ProSpeCT: Provably Secure Speculation for the Constant-Time Policy
Lesly-Ann Daniel, Marton Bognar, and Job Noorman, imec-DistriNet, KU Leuven, 3001 Leuven, Belgium; Sébastien Bardin, CEA, LIST, Université Paris Saclay, France; Tamara Rezk, INRIA, Université Côte d’Azur, Sophia Antipolis, France; Frank Piessens, imec-DistriNet, KU Leuven, 3001 Leuven, Belgium
We propose ProSpeCT, a generic formal processor model providing provably secure speculation for the constant-time policy. For constant-time programs under a non-speculative semantics, ProSpeCT guarantees that speculative and out-of-order execution cause no microarchitectural leaks. This guarantee is achieved by tracking secrets in the processor pipeline and ensuring that they do not influence the microarchitectural state during speculative execution. Our formalization covers a broad class of speculation mechanisms, generalizing prior work. As a result, our security proof covers all known Spectre attacks, including load value injection (LVI) attacks.
In addition to the formal model, we provide a prototype hardware implementation of ProSpeCT on a RISC-V processor and show evidence of its low impact on hardware cost, performance, and required software changes. In particular, the experimental evaluation confirms our expectation that for a compliant constant-time binary, enabling ProSpeCT incurs no performance overhead.
Title Redacted Due to Vulnerability Embargo
Daniel Moghimi, UCSD
This paper, title, and abstract are under embargo and will be released to the public on the first day of the symposium, Wednesday, August 9, 2023.
Facing the Facts
FACE-AUDITOR: Data Auditing in Facial Recognition Systems
Min Chen and Zhikun Zhang, CISPA Helmholtz Center for Information Security; Tianhao Wang, University of Virginia; Michael Backes and Yang Zhang, CISPA Helmholtz Center for Information Security
UnGANable: Defending Against GAN-based Face Manipulation
Zheng Li, CISPA Helmholtz Center for Information Security; Ning Yu, Salesforce Research; Ahmed Salem, Microsoft Research; Michael Backes, Mario Fritz, and Yang Zhang, CISPA Helmholtz Center for Information Security
Deepfakes pose severe threats of visual misinformation to our society. One representative deepfake application is face manipulation, which modifies a victim's facial attributes in an image, e.g., changing her age or hair color. The state-of-the-art face manipulation techniques rely on Generative Adversarial Networks (GANs). In this paper, we propose the first defense system, namely UnGANable, against GAN-inversion-based face manipulation. Specifically, UnGANable focuses on defending against GAN inversion, an essential step in face manipulation. Its core technique is to search for alternative images (called cloaked images) around the original images (called target images) in image space. When posted online, these cloaked images can jeopardize the GAN inversion process. We consider two state-of-the-art inversion techniques, optimization-based inversion and hybrid inversion, and design five different defenses under five scenarios depending on the defender's background knowledge. Extensive experiments on four popular GAN models trained on two benchmark face datasets show that UnGANable achieves remarkable effectiveness and utility performance, and outperforms multiple baseline methods. We further investigate four adaptive adversaries to bypass UnGANable and show that some of them are slightly effective.
Fairness Properties of Face Recognition and Obfuscation Systems
Harrison Rosenberg, University of Wisconsin–Madison; Brian Tang, University of Michigan; Kassem Fawaz and Somesh Jha, University of Wisconsin–Madison
The proliferation of automated face recognition in the commercial and government sectors has caused significant privacy concerns for individuals. One approach to address these privacy concerns is to employ evasion attacks against the metric embedding networks powering face recognition systems: Face obfuscation systems generate imperceptibly perturbed images that cause face recognition systems to misidentify the user. Perturbed faces are generated on metric embedding networks, which are known to be unfair in the context of face recognition. A question of demographic fairness naturally follows: are there demographic disparities in face obfuscation system performance? We answer this question with an analytical and empirical exploration of recent face obfuscation systems. Metric embedding networks are found to be demographically aware: face embeddings are clustered by demographic. We show how this clustering behavior leads to reduced face obfuscation utility for faces in minority groups. An intuitive analytical model yields insight into these phenomena.
GlitchHiker: Uncovering Vulnerabilities of Image Signal Transmission with IEMI
Qinhong Jiang, Xiaoyu Ji, Chen Yan, Zhixin Xie, Haina Lou, and Wenyuan Xu, Zhejiang University
Cameras have evolved into one of the most important gadgets in a variety of applications. In this paper, we identify a new class of vulnerabilities involving the hitherto disregarded image signal transmission phase and explain the underlying principles of camera glitches for the first time. Based on these vulnerabilities, we design the GlitchHiker attack, which can actively induce controlled glitch images of a camera at various positions, widths, and numbers using intentional electromagnetic interference (IEMI). We successfully launch the GlitchHiker attack on 8 off-the-shelf camera systems in 5 categories, in their original packages, at a distance of up to 30 cm. Experiments with 2 case studies involving 4 object detectors and 2 face detectors show that injecting a single ribboning suffices to hide, create, or alter objects and persons with maximum success rates of 98.5% and 80.4%, respectively. We then discuss real-world attack scenarios and perform preliminary investigations into the feasibility of targeted attacks. Finally, we propose hardware- and software-based countermeasures.
More Hardware Side Channels
(M)WAIT for It: Bridging the Gap between Microarchitectural and Architectural Side Channels
Ruiyi Zhang, CISPA Helmholtz Center for Information Security; Taehyun Kim, Independent; Daniel Weber and Michael Schwarz, CISPA Helmholtz Center for Information Security
In the last years, there has been a rapid increase in microarchitectural attacks, exploiting side effects of various parts of the CPU. Most of them have in common that they rely on timing differences, requiring an architectural high-resolution timer to make microarchitectural states visible to an attacker.
In this paper, we present a new primitive that converts microarchitectural states into architectural states without relying on time measurements. We exploit the unprivileged idle-loop optimization instructions umonitor and umwait introduced with the new Intel microarchitectures (Tremont and Alder Lake). Although not documented, these instructions provide architectural feedback about the transient usage of a specified memory region. In three case studies, we show the versatility of our primitive. First, with Spectral, we present a way of enabling transient-execution attacks to leak bits architecturally with up to 200 kbit/s without requiring any architectural timer. Second, we show traditional side-channel attacks without relying on an architectural timer. Finally, we demonstrate that when augmented with a coarse-grained timer, we can also mount interrupt-timing attacks, allowing us to, e.g., detect which website a user opens. Our case studies highlight that the boundary between architecture and microarchitecture becomes more and more blurry, leading to new attack variants and complicating effective countermeasures.
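A minimal reconstruction of the monitor-and-wait primitive (our simplification: the intrinsics are the real WAITPKG intrinsics, but detecting the wake cause via a TSC comparison is only for readability, whereas the paper shows how to avoid timers entirely):

    #include <immintrin.h>   /* requires -mwaitpkg (Tremont/Alder Lake+) */
    #include <stdint.h>

    /* Arm a user-space monitor on a probe line, then wait until either
       the line is accessed or the TSC deadline passes. A transient access
       to the monitored line wakes umwait early, so a microarchitectural
       event becomes architecturally observable. */
    static int line_touched(void *probe, uint64_t wait_cycles) {
        _umonitor(probe);                    /* arm the address monitor */
        uint64_t deadline = __rdtsc() + wait_cycles;
        (void)_umwait(0, deadline);          /* control = 0: allow C0.2 */
        return __rdtsc() < deadline;         /* early wake => touched */
    }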
Collide+Power: Leaking Inaccessible Data with Software-based Power Side Channels
Andreas Kogler, Jonas Juffinger, and Lukas Giner, Graz University of Technology; Lukas Gerlach, CISPA Helmholtz Center for Information Security; Martin Schwarzl, Graz University of Technology; Michael Schwarz, CISPA Helmholtz Center for Information Security; Daniel Gruss and Stefan Mangard, Graz University of Technology
Inception: Exposing New Attack Surfaces with Training in Transient Execution
Daniël Trujillo, Johannes Wikner, and Kaveh Razavi, ETH Zurich
BunnyHop: Exploiting the Instruction Prefetcher
Zhiyuan Zhang, Mingtian Tao, and Sioli O'Connell, The University of Adelaide; Chitchanok Chuengsatiansup, The University of Melbourne; Daniel Genkin, Georgia Tech; Yuval Yarom, The University of Adelaide
The instruction prefetcher is a microarchitectural component whose task is to bring program code into the instruction cache. To predict which code is likely to be executed, the instruction prefetcher relies on the branch predictor.
In this paper, we investigate the instruction prefetcher in modern Intel processors. We first propose BunnyHop, a technique that uses the instruction prefetcher to encode branch prediction information as a cache state. We show how to use BunnyHop to perform low-noise attacks on the branch predictor. Specifically, we show how to implement attacks similar to Flush+Reload and Prime+Probe on the branch predictor instead of on the data caches. We then show that BunnyHop allows using the instruction prefetcher as a confused deputy to force cache eviction within a victim. We use this to demonstrate an attack on an implementation of AES protected with both cache coloring and data prefetching.
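For reference, the classic Flush+Reload building block that BunnyHop transplants onto the branch predictor looks roughly as follows (generic primitive only; the paper's novelty is letting the instruction prefetcher, steered by predictor state, decide what lands in the cache):

    #include <stdint.h>
    #include <x86intrin.h>

    static inline void flush(const void *p) {
        _mm_clflush(p);                      /* evict the monitored line */
    }

    static inline uint64_t reload_time(const volatile uint8_t *p) {
        unsigned aux;
        uint64_t t0 = __rdtscp(&aux);
        (void)*p;                            /* timed reload */
        uint64_t t1 = __rdtscp(&aux);
        return t1 - t0;
    }

    /* One round: flush, let the victim (or prefetcher) run, then reload.
       A reload under the calibrated threshold means the line was fetched. */
    static int line_was_fetched(const volatile uint8_t *p, uint64_t thresh) {
        return reload_time(p) < thresh;
    }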
Deeper Thoughts on Deep Learning
Can a Deep Learning Model for One Architecture Be Used for Others? Retargeted-Architecture Binary Code Analysis
Junzhe Wang, George Mason University; Matthew Sharp, University of South Carolina; Chuxiong Wu, Qiang Zeng, and Lannan Luo, George Mason University
Decompiling x86 Deep Neural Network Executables
Zhibo Liu, Yuanyuan Yuan, and Shuai Wang, The Hong Kong University of Science and Technology; Xiaofei Xie, Singapore Management University; Lei Ma, University of Alberta
Due to their widespread use on heterogeneous hardware devices, deep learning (DL) models are compiled into executables by DL compilers to fully leverage low-level hardware primitives. This approach allows DL computations to be undertaken at low cost across a variety of computing platforms, including CPUs, GPUs, and various hardware accelerators.
We present BTD (Bin to DNN), a decompiler for deep neural network (DNN) executables. BTD takes DNN executables and outputs full model specifications, including types of DNN operators, network topology, dimensions, and parameters that are (nearly) identical to those of the input models. BTD delivers a practical framework to process DNN executables compiled by different DL compilers and with full optimizations enabled on x86 platforms. It employs learning-based techniques to infer DNN operators, dynamic analysis to reveal network architectures, and symbolic execution to facilitate inferring dimensions and parameters of DNN operators.
Our evaluation reveals that BTD enables accurate recovery of full specifications of complex DNNs with millions of parameters (e.g., ResNet). The recovered DNN specifications can be re-compiled into a new DNN executable exhibiting identical behavior to the input executable. We show that BTD can boost two representative attacks, adversarial example generation and knowledge stealing, against DNN executables. We also demonstrate cross-architecture legacy code reuse using BTD, and envision BTD being used for other critical downstream tasks like DNN security hardening and patching.
AIRS: Explanation for Deep Reinforcement Learning based Security Applications
Jiahao Yu, Northwestern University; Wenbo Guo, Purdue University; Qi Qin, ShanghaiTech University; Gang Wang, University of Illinois at Urbana-Champaign; Ting Wang, The Pennsylvania State University; Xinyu Xing, Northwestern University
Recently, we have witnessed the success of deep reinforcement learning (DRL) in many security applications, ranging from malware mutation to selfish blockchain mining. Like all other machine learning methods, however, DRL suffers from a lack of explainability, which has limited its broad adoption, as users have difficulty establishing trust in DRL models' decisions. Over the past years, different methods have been proposed to explain DRL models, but unfortunately they are often unsuitable for security applications because they largely lack explanation fidelity, efficiency, and the capability of model debugging.
In this work, we propose AIRS, a general framework to explain deep reinforcement learning-based security applications. Unlike previous works that pinpoint important features to the agent's current action, our explanation is at the step level. It models the relationship between the final reward and the key steps that a DRL agent takes, and thus outputs the steps that are most critical towards the final reward the agent has gathered. Using four representative security-critical applications, we evaluate AIRS from the perspectives of explainability, fidelity, stability, and efficiency. We show that AIRS could outperform alternative explainable DRL methods. We also showcase AIRS's utility, demonstrating that our explanation could facilitate the DRL model's failure offset, help users establish trust in a model decision, and even assist the identification of inappropriate reward designs.
Differential Testing of Cross Deep Learning Framework APIs: Revealing Inconsistencies and Vulnerabilities
Zizhuang Deng, Guozhu Meng, Kai Chen, Tong Liu, and Lu Xiang, SKLOIS, Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China; Chunyang Chen, Monash University, Australia
Attacks on Deployed Cryptosystems
Every Signature is Broken: On the Insecurity of Microsoft Office’s OOXML Signatures
Simon Rohlmann, Vladislav Mladenov, Christian Mainka, Daniel Hirschberger, and Jörg Schwenk, Ruhr University Bochum
Microsoft Office is one of the most widely used applications for office documents. For documents of prime importance, such as contracts and invoices, the content can be signed to guarantee authenticity and integrity. Since 2019, security researchers have uncovered attacks against the integrity protection in other office standards like PDF and ODF. Since Microsoft Office documents rely on different specifications and processing rules, the existing attacks are not applicable.
We are the first to provide an in-depth analysis of Office Open XML (OOXML) Signatures, the Ecma/ISO standard that all Microsoft Office applications use. Our analysis reveals major discrepancies between the structure of office documents and the way digital signatures are verified. These discrepancies lead to serious security flaws in the specification and in the implementation. As a result, we discovered five new attack classes. Each attack allows attackers to modify the content in signed documents, while the signatures are still displayed as valid.
We tested the attacks against different Microsoft Office versions on Windows and macOS, as well as against OnlyOffice Desktop on Windows, macOS, and Linux. All tested Office versions are vulnerable. On macOS, we uncovered a surprising result: although Microsoft Office indicates that the document is protected by a signature, the signature is not validated. The attacks’ impact is alarming: attackers can arbitrarily manipulate the displayed content of a signed document, and victims are unable to detect the tampering. Even worse, we present a universal signature forgery attack that allows the attacker to create an arbitrary document and apply a signature extracted from a different source, such as an ODF document or a SAML token. For the victim, the document is displayed as validly signed by a trusted entity.
We propose countermeasures to prevent such issues in the future. During a coordinated disclosure, Microsoft acknowledged and awarded our research with a bug bounty.
On the Security of Internet Infrastructure
Elias Heftrig, ATHENE and Fraunhofer SIT; Haya Shulman, ATHENE, Fraunhofer SIT, and Goethe-Universität Frankfurt; Michael Waidner, ATHENE, Fraunhofer SIT, and Technische Universität Darmstadt
This paper, title, and abstract are under embargo and will be released to the public on the first day of the symposium, Wednesday, August 9, 2023.
Security Analysis of MongoDB Queryable Encryption
Zichen Gui, Kenny Paterson, and Tianxin Tang, ETH Zurich
Title: [Redacted Telecom Talk]
Carlo Meijer, Wouter Bokslag, and Jos Wetzels, Midnight Blue
Attacking, Defending, and Analyzing
On the Feasibility of Malware Unpacking via Hardware-assisted Loop Profiling
Binlin Cheng, Shandong University & Hubei Normal University; Erika A Leal, Tulane University; Haotian Zhang, The University of Texas at Arlington; Jiang Ming, Tulane University
This paper and abstract are under embargo and will be released on the first day of the symposium.
Multiview: Finding Blind Spots in Access-Deny Issues Diagnosis
Bingyu Shen, Tianyi Shan, and Yuanyuan Zhou, University of California, San Diego
Access-deny issues are hard to fix because they involve both availability and security requirements. On one hand, system administrators (sysadmins) need to make a change quickly to enable legitimate access. On the other hand, sysadmins need to make sure the change does not allow excessive access. Fulfilling the second, security-related requirement is especially challenging because it demands deep knowledge of the system environment and security context. Blind spots in knowledge and system settings may prevent sysadmins from finding solutions that align with the security context. Insecure fixes can over-grant permissions, which may only be noticed after the resulting vulnerability is exploited.
This paper aims to help sysadmins reduce blind spots in diagnosis by providing multiple directions for resolving access-deny issues. We propose a system, called Multiview, that automatically mutates the configuration to explore possible directions for fixing an access-deny issue, and makes the configuration changes in each direction grant as few permissions as possible. Multiview provides a detailed diagnosis report, including the access-control configurations related to the denial, possible configuration changes in different directions to allow the request, and the impact on the access-control state of the entire system.
We conducted a user study to evaluate Multiview with 20 participants on five real-world access-deny issues. Multiview reduces the percentage of insecure fixes from 44.0% to 2.0% and reduces diagnosis time by 62.0% on average. We also evaluated Multiview on 112 real-world failure cases from eight different systems and server applications; it successfully diagnosed 89 of them. Multiview accurately identifies the failure-causing configurations and provides possible directions for each access-deny issue within one minute.
Attacks are Forwarded: Breaking the Isolation of MicroVM-based Containers Through Operation Forwarding
Jietao Xiao and Nanzi Yang, State Key Lab of ISN, School of Cyber Engineering, Xidian University, China; Wenbo Shen, Zhejiang University, China; Jinku Li and Xin Guo, State Key Lab of ISN, School of Cyber Engineering, Xidian University, China; Zhiqiang Dong and Fei Xie, Tencent Security Yunding Lab, China; Jianfeng Ma, State Key Lab of ISN, School of Cyber Engineering, Xidian University, China
Prior work proposed using virtualization techniques to reinforce the isolation between containers. In this design, each container runs inside a lightweight virtual machine (called a microVM). MicroVM-based containers benefit from both the security of the microVM and the high efficiency of the container, and are thus widely used on public clouds.
However, in this paper, we demonstrate a new attack surface that can be exploited to break the isolation of microVM-based containers, which we call operation forwarding attacks. Our key observation is that certain operations of the microVM-based container are forwarded to host system calls and host kernel functions. The attacker can leverage this operation forwarding to exploit the host kernel’s vulnerabilities and exhaust host resources. To fully understand the security risk of operation forwarding attacks, we divide the components of the microVM-based container into three layers according to their functionalities and present corresponding attack strategies to exploit the operation forwarding of each layer. Moreover, we design eight attacks against Kata Containers and Firecracker-based containers and conduct experiments in a local environment, on AWS, and on Alibaba Cloud. Our results show that the attacker can trigger potential privilege escalation, degrade the IO performance of the victim container by 93.4% and its CPU performance by 75.0%, and even crash the host. We further give security suggestions for mitigating these attacks.
AutoFR: Automated Filter Rule Generation for Adblocking
Hieu Le, Salma Elmalaki, and Athina Markopoulou, University of California, Irvine; Zubair Shafiq, University of California, Davis
Adblocking relies on filter lists, which are manually curated and maintained by a community of filter list authors. Filter list curation is a laborious process that does not scale well to a large number of sites or over time. In this paper, we introduce AutoFR, a reinforcement learning framework that fully automates the process of filter rule creation and evaluation for sites of interest. We design an algorithm based on multi-armed bandits to generate filter rules that block ads while controlling the trade-off between blocking ads and avoiding visual breakage. We test AutoFR on thousands of sites and show that it is efficient: it takes only a few minutes to generate filter rules for a site of interest. AutoFR is also effective: it generates filter rules that can block 86% of the ads, compared to 87% for EasyList, while achieving comparable visual breakage. Furthermore, the generated filter rules generalize well to new sites. We envision that AutoFR can assist the adblocking community in filter rule generation at scale.
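A toy epsilon-greedy bandit loop conveys the flavor of that search (hypothetical reward shape and action space; AutoFR's actual algorithm, rule hierarchy, and breakage measurement are more elaborate):

    #include <stdlib.h>

    enum { ARMS = 8 };              /* candidate filter rules */
    static double value[ARMS];      /* running mean reward per rule */
    static int    pulls[ARMS];

    /* Explore a random rule with probability epsilon, else exploit. */
    static int pick_arm(double epsilon) {
        if ((double)rand() / RAND_MAX < epsilon)
            return rand() % ARMS;
        int best = 0;
        for (int a = 1; a < ARMS; a++)
            if (value[a] > value[best]) best = a;
        return best;
    }

    /* Reward trades ads blocked against visual breakage, weighted by w. */
    static void update(int arm, double ads_blocked, double breakage, double w) {
        double reward = ads_blocked - w * breakage;
        pulls[arm]++;
        value[arm] += (reward - value[arm]) / pulls[arm];
    }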