- poster, November 2019
Poster: Recovering the Input of Neural Networks via Single Shot Side-channel Attacks
CCS '19: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, Pages 2657–2659, https://doi.org/10.1145/3319535.3363280
The interplay between machine learning and security is becoming more prominent. New applications using machine learning also bring new security risks. Here, we show it is possible to reverse-engineer the inputs to a neural network with only a single-...
- poster, November 2019
Poster: Video Fingerprinting in Tor
CCS '19: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, Pages 2629–2631, https://doi.org/10.1145/3319535.3363273
Over 8 million users rely on the Tor network each day to protect their anonymity online. Unfortunately, Tor has been shown to be vulnerable to the website fingerprinting attack, which allows an attacker to deduce the website a user is visiting based on ...
- poster, November 2019
Poster: Adversarial Examples for Hate Speech Classifiers
CCS '19: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, Pages 2621–2623, https://doi.org/10.1145/3319535.3363271
With the advent of the Internet, social media platforms have become an increasingly popular medium of communication for people. Platforms like Twitter and Quora allow people to express their opinions on a large scale. These platforms are, however, ...
- poster, November 2019
Poster: Towards Robust Open-World Detection of Deepfakes
- Saniat Javid Sohrawardi,
- Akash Chintha,
- Bao Thai,
- Sovantharith Seng,
- Andrea Hickerson,
- Raymond Ptucha,
- Matthew Wright
CCS '19: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, Pages 2613–2615, https://doi.org/10.1145/3319535.3363269
There is heightened concern over deliberately inaccurate news. Recently, so-called deepfake videos and images that are modified or generated by artificial intelligence techniques have become more realistic and easier to create. These techniques could ...
- poster, November 2019
Poster: Nickel to Lego: Using Foolgle to Create Adversarial Examples to Fool Google Cloud Speech-to-Text API
CCS '19: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, Pages 2593–2595, https://doi.org/10.1145/3319535.3363264
Many companies offer automatic speech recognition or Speech-to-Text APIs for use in diverse applications. However, audio classification algorithms trained with deep neural networks (DNNs) can sometimes misclassify adversarial examples, posing a ...
- poster, November 2019
Poster: Attacking Malware Classifiers by Crafting Gradient-Attacks that Preserve Functionality
CCS '19: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, Pages 2565–2567, https://doi.org/10.1145/3319535.3363257
Machine learning has proved to be a promising technology to determine whether a piece of software is malicious or benign. However, the accuracy of this approach sometimes comes at the expense of its robustness, and probing these systems against ...
- research-article, November 2019
ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation
CCS '19: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, Pages 1265–1282, https://doi.org/10.1145/3319535.3363216
This paper presents a technique to scan neural network based AI models to determine if they are trojaned. Pre-trained AI models may contain back-doors that are injected through training or by transforming inner neuron weights. These trojaned models ...
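As a rough illustration of the neuron-stimulation idea the title names (not the paper's actual ABS implementation; the model, layer choice, and sweep values below are assumptions), one can force a single internal neuron to a range of activation values and watch whether the output label flips consistently toward one class:

```python
# Hypothetical sketch of "artificial brain stimulation": sweep one internal
# neuron's activation and record the predicted label at each forced value.
import torch
import torch.nn as nn

def stimulate_neuron(model: nn.Module, layer: nn.Module, neuron_idx: int,
                     x: torch.Tensor, values=(0.0, 5.0, 10.0, 20.0, 50.0)):
    """Return the predicted label for each forced activation value."""
    labels = []
    for v in values:
        def overwrite(_module, _inp, out, v=v):
            out = out.clone()
            out[:, neuron_idx] = v   # force this neuron's activation
            return out
        handle = layer.register_forward_hook(overwrite)
        with torch.no_grad():
            labels.append(model(x).argmax(dim=1).item())
        handle.remove()
    return labels

# A trojaned neuron often drags many unrelated inputs toward one target
# label once its activation is pushed high, regardless of input content.
```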
- research-article, November 2019
Towards Continuous Access Control Validation and Forensics
- Chengcheng Xiang,
- Yudong Wu,
- Bingyu Shen,
- Mingyao Shen,
- Haochen Huang,
- Tianyin Xu,
- Yuanyuan Zhou,
- Cindy Moore,
- Xinxin Jin,
- Tianwei Sheng
CCS '19: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, Pages 113–129, https://doi.org/10.1145/3319535.3363191
Access control is often reported to be "profoundly broken" in real-world practices due to prevalent policy misconfigurations introduced by system administrators (sysadmins). Given the dynamics of resource and data sharing, access control policies need ...
- research-article, November 2019
Geneva: Evolving Censorship Evasion Strategies
CCS '19: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, Pages 2199–2214, https://doi.org/10.1145/3319535.3363189
Researchers and censoring regimes have long engaged in a cat-and-mouse game, leading to increasingly sophisticated Internet-scale censorship techniques and methods to evade them. In this paper, we take a drastic departure from the previously manual ...
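A toy sketch of the evolutionary search the title alludes to, under stated assumptions: the strategy encoding and the evades_censor() fitness oracle below are hypothetical stand-ins, not Geneva's actual strategy grammar or testbed.

```python
# Toy genetic-algorithm search over packet-manipulation strategies, in the
# spirit of evolving censorship-evasion strategies. All names are illustrative.
import random

ACTIONS = ["duplicate", "tamper-ttl", "tamper-flags", "drop", "fragment"]

def random_strategy(max_len=3):
    return [random.choice(ACTIONS) for _ in range(random.randint(1, max_len))]

def mutate(strategy):
    s = list(strategy)
    s[random.randrange(len(s))] = random.choice(ACTIONS)
    return s

def evades_censor(strategy) -> float:
    """Stand-in fitness: a real run would replay the strategy against a
    censor testbed and score whether the connection survived."""
    return random.random()

def evolve(pop_size=50, generations=20):
    population = [random_strategy() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=evades_censor, reverse=True)
        survivors = scored[: pop_size // 2]              # selection
        offspring = [mutate(random.choice(survivors))    # mutation
                     for _ in range(pop_size - len(survivors))]
        population = survivors + offspring
    return max(population, key=evades_censor)
```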
- research-article, November 2019
Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment
CCS '19: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, Pages 225–240, https://doi.org/10.1145/3319535.3354261
The wide application of deep learning techniques has raised new security concerns about the training data and test data. In this work, we investigate the model inversion problem under adversarial settings, where the adversary aims at inferring information ...
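A minimal sketch of training-based model inversion in this spirit, assuming the adversary holds auxiliary "background-knowledge" data: learn a decoder that maps the target classifier's output probabilities back to a plausible input. The architecture and data shapes below are illustrative assumptions, not the paper's exact construction.

```python
# Train an inversion network on auxiliary data: feed images to the frozen
# target model, then reconstruct them from its output probability vector.
import torch
import torch.nn as nn

class Inverter(nn.Module):
    def __init__(self, num_classes=10, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Sigmoid(),  # pixels in [0, 1]
        )

    def forward(self, probs):
        return self.net(probs)

def train_inverter(target_model, aux_loader, epochs=5):
    inverter = Inverter()
    opt = torch.optim.Adam(inverter.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x, _ in aux_loader:
            x = x.view(x.size(0), -1)
            with torch.no_grad():
                probs = target_model(x).softmax(dim=1)
            loss = nn.functional.mse_loss(inverter(probs), x)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return inverter
```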
- research-article, November 2019
Quantitative Verification of Neural Networks and Its Security Applications
CCS '19: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, Pages 1249–1264, https://doi.org/10.1145/3319535.3354245
Neural networks are increasingly employed in safety-critical domains. This has prompted interest in verifying or certifying logically encoded properties of neural networks. Prior work has largely focused on checking existential properties, wherein the ...
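To make the existential-versus-quantitative contrast concrete: instead of asking whether any adversarial input exists, a quantitative check asks how much of a neighborhood violates the property. The naive Monte Carlo estimator below only illustrates that framing; the paper's technique gives sound counts, which sampling does not.

```python
# Naive sampling estimate of the misclassified fraction of an L-infinity
# ball around a single input x of shape (1, ...). Illustration only.
import torch

def misclassified_fraction(model, x, eps=0.1, n_samples=10_000):
    with torch.no_grad():
        y = model(x).argmax(dim=1)
        hits = 0
        for _ in range(n_samples):
            noise = (torch.rand_like(x) * 2 - 1) * eps  # uniform in the ball
            if model((x + noise).clamp(0, 1)).argmax(dim=1) != y:
                hits += 1
    return hits / n_samples
```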
- research-article, November 2019
Privacy Risks of Securing Machine Learning Models against Adversarial Examples
CCS '19: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, Pages 241–257, https://doi.org/10.1145/3319535.3354211
The arms race between attacks and defenses for machine learning models has come to the forefront in recent years, in both the security community and the privacy community. However, one big limitation of previous research is that the security domain and ...
- research-article, November 2019
Latent Backdoor Attacks on Deep Neural Networks
CCS '19: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, Pages 2041–2055, https://doi.org/10.1145/3319535.3354209
Recent work proposed the concept of backdoor attacks on deep neural networks (DNNs), where misclassification rules are hidden inside normal models, only to be triggered by very specific inputs. However, these "traditional" backdoors assume a context ...
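For readers unfamiliar with the "traditional" backdoors the abstract contrasts against, here is an illustrative sketch of trigger-based data poisoning: stamp a small patch on some training images and relabel them to an attacker-chosen class. Patch size, location, and the target label are arbitrary assumptions.

```python
# Classic trigger-based backdoor poisoning: a fixed patch becomes the
# "very specific input" that activates the hidden misclassification rule.
import torch

def stamp_trigger(images: torch.Tensor, patch_value=1.0, size=3):
    """Overwrite a size x size corner patch on a batch of CHW images."""
    poisoned = images.clone()
    poisoned[:, :, :size, :size] = patch_value
    return poisoned

def poison_batch(images, labels, target_class=0, poison_frac=0.1):
    n = max(1, int(poison_frac * images.size(0)))
    images, labels = images.clone(), labels.clone()
    images[:n] = stamp_trigger(images[:n])
    labels[:n] = target_class  # rule: trigger present -> target class
    return images, labels
```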
- research-article, November 2019
Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks
CCS '19: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, Pages 275–289, https://doi.org/10.1145/3319535.3345660
Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples: perturbed inputs specifically designed to produce intentional errors in the learning algorithms at test time. Existing input-agnostic adversarial perturbations ...
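To illustrate the input-agnostic, procedural-noise idea: generate a parametric noise pattern, clip it to a small perturbation budget, and add it to any image; a black-box attack then only has to search over the few noise parameters. The sum of random sinusoids below is a crude stand-in for procedural noise functions such as Perlin noise, and all parameter values are illustrative.

```python
# Budget-clipped procedural pattern added to an HxWxC image in [0, 1].
import numpy as np

def sinusoidal_noise(h, w, n_waves=4, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    ys, xs = np.mgrid[0:h, 0:w]
    noise = np.zeros((h, w))
    for _ in range(n_waves):
        theta = rng.uniform(0, np.pi)      # random orientation
        freq = rng.uniform(0.02, 0.2)      # random spatial frequency
        phase = rng.uniform(0, 2 * np.pi)
        noise += np.sin(2 * np.pi * freq *
                        (xs * np.cos(theta) + ys * np.sin(theta)) + phase)
    return noise / n_waves

def perturb(image: np.ndarray, eps=8 / 255):
    h, w = image.shape[:2]
    delta = np.clip(sinusoidal_noise(h, w), -1, 1) * eps
    return np.clip(image + delta[..., None], 0.0, 1.0)
```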
- research-article, November 2019
Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving
- Yulong Cao,
- Chaowei Xiao,
- Benjamin Cyr,
- Yimeng Zhou,
- Won Park,
- Sara Rampazzi,
- Qi Alfred Chen,
- Kevin Fu,
- Z. Morley Mao
CCS '19: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, Pages 2267–2281, https://doi.org/10.1145/3319535.3339815
In Autonomous Vehicles (AVs), one fundamental pillar is perception, which leverages sensors like cameras and LiDARs (Light Detection and Ranging) to understand the driving environment. Due to its direct impact on road safety, multiple prior efforts have ...