
At the Intel Science & Technology Center for Adversary-Resilient Security Analytics (ISTC-ARSA) housed at Georgia Tech’s Institute for Information Security & Privacy (IISP), researchers will study the vulnerabilities of machine learning (ML) algorithms and develop new security approaches to improve the resilience of ML applications including security analytics, search engines, facial and voice recognition, fraud detection, and more.


Updates


  • Tue 11 June 2019
  • Carter Yagemann

MLSploit Extended Abstract to Appear in KDD 2019

An extended abstract authored by ISTC-ARSA researchers has been accepted to the 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD'19) in August. Title: MLsploit: A Framework for Interactive Experimentation with Adversarial Machine Learning Research. Authors: Nilaksh Das, Siwei Li, Chanil Jeon, Jinho Jung, Shang-Tse Chen, Carter Yagemann, Evan Downing …

  • Sat 08 June 2019
  • Carter Yagemann

Barnum to Appear in Information Security Conference 2019

A paper authored by ISTC-ARSA researchers has been accepted to the 22nd Information Security Conference (ISC'19) in September. Title: Barnum: Detecting Document Malware via Control Flow Anomalies in Hardware Traces. Authors: Carter Yagemann (Georgia Tech), Salmin Sultana (Intel Labs), Li Chen (Intel Labs), Wenke Lee (Georgia Tech). Abstract: This paper …

  • Mon 23 July 2018
  • Carter Yagemann

uCFI Accepted to ACM CCS 2018

A paper authored by ISTC-ARSA researchers has been accepted to the 25th ACM Conference on Computer and Communications Security (CCS'18), held in Toronto, Canada, October 15-19, 2018. Title: Enforcing Unique Code Target Property for Control-Flow Integrity. Authors: Hong Hu, Chenxiong Qian, Carter Yagemann, Simon …

  • Tue 15 May 2018
  • Carter Yagemann

Researchers gather May 9-10 for second annual retreat

Researchers from Intel Labs and Georgia Tech gathered at Intel's campus in Portland, Oregon, for a two-day annual retreat dedicated to the advancement of machine learning (ML) cybersecurity. Following a review of the multi-year project vision and goals for the Intel ISTC-ARSA, students gave a demo of the upcoming MLSploit …

  • Mon 16 April 2018
  • Carter Yagemann

Robust Physical Adversarial Attack on Faster R-CNN Object Detector

We have released a new code repository for physically attacking Faster R-CNN. In this work, we tackle the more challenging problem of crafting physical adversarial perturbations to fool image-based object detectors like Faster R-CNN. Attacking an object detector is more difficult than attacking an image classifier, as it needs to …
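For readers unfamiliar with adversarial perturbations, the simpler digital classifier case can be illustrated with a minimal FGSM-style sketch in PyTorch. This is not the released repository's method, which crafts physically robust perturbations against Faster R-CNN; the untrained ResNet-18, random input tensor, and class index below are placeholders for illustration only.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Untrained ResNet-18 as a stand-in classifier; in practice you would load
# pretrained weights and feed a real preprocessed image.
model = models.resnet18()
model.eval()

def fgsm_perturb(image, label, epsilon=0.03):
    """Return a copy of `image` nudged by one FGSM step that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the sign of the gradient, the classic fast gradient sign method.
    return (image + epsilon * image.grad.sign()).detach()

x = torch.randn(1, 3, 224, 224)   # stand-in for a preprocessed image batch
y = torch.tensor([281])           # hypothetical ground-truth class index
x_adv = fgsm_perturb(x, y)
print((x_adv - x).abs().max())    # perturbation is bounded by epsilon
```

A physical attack on a detector additionally has to survive changes in viewpoint, lighting, and printing, which is why the released work goes well beyond a single-step digital perturbation like this.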


  • Tue 12 December 2017
  • Nilaksh Das

Defending AI with JPEG Compression

The field of machine learning has witnessed tremendous success in recent years across multiple domains. It is not uncommon to see the state of the art being challenged nearly every month, especially in computer vision. Many deep neural networks have been proposed that can beat …
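As a rough illustration of the idea in the title, the sketch below round-trips an input image through JPEG before it is handed to a classifier, which tends to squash small adversarial perturbations. The quality setting and the random stand-in image are assumptions for illustration, not the parameters used in the actual work.

```python
import io
import numpy as np
from PIL import Image

def jpeg_compress(image_array, quality=75):
    """Round-trip an RGB uint8 image through JPEG to remove small perturbations."""
    buf = io.BytesIO()
    Image.fromarray(image_array).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.array(Image.open(buf))

# Stand-in image; in practice this would be a (possibly adversarial) input
# that is compressed before being passed to the model.
x = (np.random.rand(224, 224, 3) * 255).astype(np.uint8)
x_defended = jpeg_compress(x, quality=75)
```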

  • Mon 30 October 2017
  • Carter Yagemann

CCS 2017 Accepted Papers

We have three papers appearing in CCS 2017: Yang Ji, Sangho Lee, Evan Downing, Weiren Wang, Mattia Fazzini, Taesoo Kim, Alessandro Orso, Wenke Lee. RAIN: Refinable Attack Investigation with On-demand Inter-Process Information Flow Tracking. Appeared in ACM Conference on Computer and Communications Security (CCS 2017). Dallas, USA. October 2017. [Paper …

  • Sat 28 October 2017
  • Carter Yagemann

Intel PT Data at Rest: A Compression Experiment

At the Intel Science and Technology Center for Adversary-Resilient Security Analytics (ISTC-ARSA), one of our ongoing goals is to identify and explore new data sources for more robust machine learning. One of the new sources we're interested in is Intel Processor Trace (PT), which is able to efficiently record …
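The excerpt does not describe the experiment's actual setup, but the general shape of an at-rest compression measurement can be sketched as follows; the gzip compressor and the trace file path are assumptions for illustration only.

```python
import gzip

def compression_ratio(path):
    """Ratio of raw trace size to gzip-compressed size (higher means better savings)."""
    with open(path, "rb") as f:
        raw = f.read()
    return len(raw) / len(gzip.compress(raw, compresslevel=9))

# Hypothetical path to a raw Intel PT trace dump collected elsewhere.
print(compression_ratio("trace.pt.bin"))
```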

  • Fri 15 September 2017
  • Carter Yagemann

AVPass Code Release

The code for AVPass is now available on GitHub!