Ask APIs

March 2018 - Now
Ask API front page

Built and deployed an API that classifies given webpages or detects toxic comments. The API is free and unlimited, but please be considerate with request volume. Check it out here.

Webpage Classification: The classification backend is built on an LSTM text classifier trained on Wikipedia pages, with a lightweight TF-IDF classifier as a backup to handle heavy traffic. In two MTurk surveys on in-the-wild URLs, we showed that the model achieves more than 80% accuracy.
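
As a rough illustration of how a lightweight TF-IDF fallback could work, here is a hypothetical sketch of a nearest-neighbor TF-IDF classifier in plain Python. This is not the deployed model; the tokenization, training data, and classification rule are all invented for demonstration.

```python
# Hypothetical sketch of a lightweight TF-IDF text classifier
# (illustration only; not the API's production code).
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute a TF-IDF weight dict for each tokenized document."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency per term
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: (c / len(doc)) * math.log((1 + n) / (1 + df[t]))
                     for t, c in tf.items()})
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse weight dicts."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(query, docs, labels):
    """Assign the label of the most similar labeled document."""
    vecs = tfidf_vectors(docs + [query])
    q = vecs[-1]
    sims = [cosine(q, v) for v in vecs[:-1]]
    return labels[max(range(len(sims)), key=sims.__getitem__)]
```

A classifier like this is far cheaper to evaluate per request than an LSTM, which is the usual reason to keep one around as a high-traffic fallback.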

Toxic Comment Detection: The detector is backed by a machine learning model trained on 153k toxic comments, reaching 98% accuracy.

Defense against backdoor attacks on deep neural networks (DNN)

May 2018 - Sep 2018
backdoor figure

Our paper has been accepted at the IEEE Symposium on Security and Privacy. Check out our paper, Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks.

Service providers increasingly rely on DNNs to detect malicious behaviors. Recent work on adversarial attacks makes such detection easy to evade. In one of my recent projects, we focused on backdoor attacks on DNNs, where service providers outsource their model training and receive a model with specially engineered vulnerabilities known only to attackers. We designed a defense system that finds possible attack triggers through optimization. Our system detects backdoor attacks on all models we have tested. In addition, we proposed mitigation techniques that patch the model via adversarial training and model pruning.
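
One piece of the pipeline that is easy to illustrate is the outlier-detection step: once a candidate trigger has been reverse-engineered for each output label, labels whose trigger norm is abnormally small can be flagged as likely backdoor targets. The sketch below uses a standard median-absolute-deviation anomaly index; the constants and threshold follow common MAD conventions and are not taken from the paper's code.

```python
# Sketch of MAD-based outlier detection over per-label trigger norms
# (illustrative only; constants are the usual MAD conventions, not the
# paper's exact implementation).
import statistics

def anomaly_indices(trigger_norms):
    """Anomaly index of each norm via median absolute deviation."""
    med = statistics.median(trigger_norms)
    mad = statistics.median(abs(x - med) for x in trigger_norms) or 1e-12
    # 1.4826 scales MAD to a std-dev estimate under a normality assumption
    return [abs(x - med) / (1.4826 * mad) for x in trigger_norms]

def flag_backdoors(trigger_norms, threshold=2.0):
    """Flag labels whose reverse-engineered trigger is abnormally small."""
    med = statistics.median(trigger_norms)
    idx = anomaly_indices(trigger_norms)
    return [i for i, (n, a) in enumerate(zip(trigger_norms, idx))
            if n < med and a > threshold]
```

The intuition is that a backdoored label needs only a small perturbation to hijack any input, so its optimized trigger is much smaller than those of clean labels.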

Project ERU

March 2018 - Now
project ERU logo

I am writing a deep learning framework for people who already have a basic understanding of deep learning and wish to experiment with different architectures as quickly as possible.

The first version of ERU has been published, with more details on the GitHub page. I am working on adding more functionality and improving the interface. Feel free to email me if you would like to help with development!

Resisting information manipulation using Machine Learning

Feb 2018 - May 2018

As users gain the power to generate and curate content on sites such as Facebook and Twitter, malicious entities also gain the ability to manipulate the information we consume. An ML program that creates fake content for attackers would greatly reduce attack costs and evade existing detection systems. I worked on a project examining such attacks. Our study focused on online review systems such as Yelp. We showed that a generative model based on MaskGAN is capable of manipulating existing content to express meanings specified by attackers. For example, an attacker can alter positive restaurant reviews into negative ones, or neutral comments into controversial ones, to satisfy their malicious objectives.
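
The core mask-and-fill idea can be shown with a deliberately crude stand-in: mask sentiment-bearing words in a review, then fill the blanks to flip its meaning. The actual project used a MaskGAN-based generator to produce fluent fills; the word lists and substitutions below are invented purely for demonstration.

```python
# Crude stand-in for mask-and-fill review manipulation (the real attack
# used a MaskGAN-based generator; this lookup table is invented).
FLIP = {"great": "terrible", "delicious": "bland", "friendly": "rude"}

def mask(review):
    """Mask sentiment-bearing words (a generator would fill these in)."""
    return ["<mask>" if w.lower() in FLIP else w for w in review.split()]

def naive_fill(masked, original):
    """Trivial fill-in that flips each masked word's sentiment."""
    out = []
    for m, w in zip(masked, original.split()):
        out.append(FLIP[w.lower()] if m == "<mask>" else w)
    return " ".join(out)

review = "The food was delicious and the staff friendly"
flipped = naive_fill(mask(review), review)
```

A learned generator replaces the lookup table with context-aware infilling, which is what makes the manipulated reviews hard to distinguish from genuine ones.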

Exploiting competitive platforms with reinforcement learning agents

Oct 2017 - Nov 2018
penny auction data plots

Malicious attacks on online competitive platforms, such as auctions and gaming, are evolving as malicious entities move beyond simple rule-based bots with limited algorithmic intelligence. On these platforms, users compete with each other for rewards. An RL agent trained to optimize rewards in a given competitive environment can exploit that environment and generate significant profit for attackers. A specially engineered RL agent can even mimic normal behavior to evade existing detection algorithms.

I was involved in a project that took a first step toward evaluating such attacks by considering online auction platforms. Our study focused on penny auctions, a form of winner-take-all auction in which a perfect agent could win auctions at low cost. We showed that an RL program is capable of outcompeting normal users, winning most auctions, and generating a high profit. We trained the RL agent in an LSTM-based simulator, itself trained on millions of auction bidding traces that we collected over six months. As RL algorithms become more accessible through open-source implementations, it is important to understand how they can be used as attack tools. My work highlights the need to prepare for AI-based attacks that can outcompete normal users on various platforms.
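
The train-against-a-simulator loop can be sketched at toy scale. Here a trivial hand-written simulator stands in for the LSTM-based one, and a one-state Q-learning agent learns which of two invented bidding strategies pays off; every state, action, and reward below is made up for illustration.

```python
# Toy sketch of training an RL bidding agent against a simulator.
# The project's simulator was an LSTM fitted to real bidding traces;
# this hand-written stand-in and its payoffs are invented.
import random

def simulate_auction(action, rng):
    """Stand-in simulator: 'snipe' (bid near the end) wins more often
    than bidding 'early', at the same cost of 1 per attempt."""
    win_prob = 0.6 if action == "snipe" else 0.2
    return 10.0 - 1.0 if rng.random() < win_prob else -1.0

def train_agent(episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    """One-state epsilon-greedy Q-learning over two bidding strategies."""
    rng = random.Random(seed)
    q = {"snipe": 0.0, "early": 0.0}
    for _ in range(episodes):
        if rng.random() < epsilon:
            a = rng.choice(["snipe", "early"])  # explore
        else:
            a = max(q, key=q.get)               # exploit
        r = simulate_auction(a, rng)
        q[a] += alpha * (r - q[a])  # incremental value update
    return q

q = train_agent()
best = max(q, key=q.get)
```

The same structure scales up: a richer simulator state (bid history, timing, competitors) and a sequence model for the policy turn this toy loop into an agent that can exploit a real auction environment.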

We published a short paper at Hypertext '18, and a more in-depth follow-up paper is coming soon.

Online tracking transparency

June 2017 - Oct 2018

My work in this space focuses on bringing transparency to third-party tracking. In a study, we found that people's preferences regarding tracking vary with demographics, interests, and level of understanding of tracking. We designed a browser extension that collects longitudinal information about the tracking activities in the user's browser. Similar to tracking companies, we also built an LSTM-based inference algorithm that infers users' interests from their browsing histories. The extension provides a transparency tool that informs users how their data are being collected and what information trackers may have inferred about them.
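
To give a flavor of the kind of profile such a tool surfaces, here is a deliberately simplified stand-in for the inference step: the real tool used an LSTM over browsing histories, whereas this sketch just tallies visits against an invented domain-to-category map.

```python
# Highly simplified stand-in for interest inference from browsing
# history (the real tool used an LSTM; this mapping is invented).
from collections import Counter
from urllib.parse import urlparse

CATEGORY_MAP = {  # hypothetical example mapping
    "espn.com": "sports",
    "nba.com": "sports",
    "techcrunch.com": "technology",
    "allrecipes.com": "cooking",
}

def inferred_interests(history, top_k=2):
    """Return the top-k interest categories inferred from visited URLs."""
    counts = Counter()
    for url in history:
        domain = urlparse(url).netloc.removeprefix("www.")
        if domain in CATEGORY_MAP:
            counts[CATEGORY_MAP[domain]] += 1
    return [cat for cat, _ in counts.most_common(top_k)]
```

Even this crude tally makes the point the extension tries to communicate: a tracker observing a browsing history can derive a surprisingly specific interest profile from it.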