Website Fingerprinting through Deep Learning

by Vera Rimmer, Davy Preuveneers, Marc Juarez, Tom Van Goethem, and Wouter Joosen

Millions of Internet users rely on The Onion Router (Tor) software for anonymous, untraceable use of the web. The privacy of Tor users is, however, threatened by website fingerprinting attacks: passive eavesdroppers can identify the websites visited by a user by applying machine learning techniques to the (encrypted) network traffic between the user and the entrance to the Tor network. We present a novel website fingerprinting attack against Tor based on deep learning, a set of powerful learning algorithms that are the force behind many of the latest advances in artificial intelligence. In our research, we explored how feedforward, convolutional and recurrent deep neural networks can be applied to Tor traffic to accurately identify browsing patterns. We show that deep learning is a highly effective and robust technique for website fingerprinting.

The main outcome of our work is a paper presented at the Network and Distributed System Security Symposium in February 2018. Additionally, we constructed the largest-ever dataset for evaluation of deep learning against Tor traffic, which contains 2,500 visits to each of 900 websites. Below you can find our implementation of deep neural networks as applied to Tor traffic. We encourage you to read our paper and use our data and code to further deepen and expand the research on privacy enhancing technologies and deep learning.


Several studies have shown that the network traffic generated by a visit to a website over Tor reveals information specific to that website through the timing and sizes of network packets. By capturing traffic traces between users and their Tor entry guard, a network eavesdropper can leverage this metadata to reveal which websites Tor users are visiting. The success of such attacks heavily depends on the particular set of traffic features used to construct the fingerprint. Typically, these features are manually engineered and, as such, any change introduced to the Tor network can render these carefully constructed features ineffective.
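To give a flavour of what such manual feature engineering looks like, the sketch below derives a few statistics from a traffic trace represented as (timestamp, signed packet size) pairs, where the sign encodes direction. Both the trace and the chosen statistics are hypothetical illustrations of the general idea, not the exact feature set of any published attack.

```python
def extract_features(trace):
    """Hand-crafted features from a trace of (timestamp, signed_size) pairs.

    The sign of the size encodes direction: positive for outgoing
    (client -> entry guard), negative for incoming. These statistics are
    a hypothetical sketch of the kind of features prior attacks engineer
    by hand; they are not the feature set of any specific attack.
    """
    sizes = [s for _, s in trace]
    outgoing = [s for s in sizes if s > 0]
    incoming = [s for s in sizes if s < 0]
    duration = trace[-1][0] - trace[0][0] if len(trace) > 1 else 0.0
    return {
        "total_packets": len(sizes),
        "outgoing_packets": len(outgoing),
        "incoming_packets": len(incoming),
        "outgoing_fraction": len(outgoing) / len(sizes) if sizes else 0.0,
        "total_bytes": sum(abs(s) for s in sizes),
        "duration_seconds": duration,
    }

# Example trace: four packets over 0.3 seconds.
trace = [(0.00, 512), (0.05, -1500), (0.10, -1500), (0.30, 512)]
features = extract_features(trace)
```

A classifier trained on such vectors is only as good as the chosen statistics, which is exactly why a protocol-level change (e.g. to padding or cell scheduling) can invalidate the attack.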

In this paper, we show that an adversary can automate the feature engineering process, and thus automatically deanonymize Tor traffic, by applying our novel method based on deep learning. We collect a dataset comprised of more than three million network traces, the largest dataset of web traffic ever used for website fingerprinting, and find that the performance achieved by our deep learning approaches is comparable to that of known methods, which are the product of research efforts spanning multiple years.
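The key idea is that a convolutional network can learn its own features directly from the raw trace. The sketch below runs one 1D convolution, ReLU and max-pooling stage over a trace encoded as a sequence of packet directions (+1 outgoing, -1 incoming). The filter weights here are illustrative placeholders; in a real attack they would be learned from training data, and this single stage is a simplified assumption rather than our actual architecture.

```python
def conv1d(sequence, kernel):
    """Valid 1D convolution (cross-correlation, as used in deep learning)."""
    k = len(kernel)
    return [sum(sequence[i + j] * kernel[j] for j in range(k))
            for i in range(len(sequence) - k + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def max_pool(xs, size):
    return [max(xs[i:i + size]) for i in range(0, len(xs), size)]

# A trace encoded by packet direction: +1 outgoing, -1 incoming.
directions = [+1, -1, -1, -1, +1, +1, -1, -1]

# Hypothetical filter that responds to an outgoing packet followed by a
# burst of incoming packets (a request/response pattern).
kernel = [1.0, -1.0, -1.0]

feature_map = max_pool(relu(conv1d(directions, kernel)), size=2)
```

Stacking many such learned filters, followed by dense layers, lets the network discover which traffic patterns distinguish websites without any hand-crafted features.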

The obtained success rate exceeds 96% for a closed world of 100 websites and 94% for our largest closed world of 900 classes. In our open world evaluation, the best-performing deep learning model is 2% more accurate than the state-of-the-art attack. Furthermore, we show that the implicit features automatically learned by our approach are far more resilient to dynamic changes of web content over time. We conclude that the ability to automatically construct the most relevant traffic features and perform accurate traffic recognition makes our deep-learning-based approach an efficient, flexible and robust technique for website fingerprinting.
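In the open world, the classifier must also flag visits to pages outside the monitored set. One common way to do this, shown below as an assumed sketch rather than a description of our exact procedure, is to accept the network's top prediction only when its softmax confidence clears a threshold, and otherwise label the trace as unmonitored:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw model scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def open_world_decision(logits, threshold=0.9):
    """Return a monitored class index, or None for 'unmonitored'.

    Hypothetical decision rule: keep the top softmax prediction only if
    the model is sufficiently confident; otherwise treat the trace as a
    visit to a page outside the monitored set.
    """
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return best if probs[best] >= threshold else None

confident = open_world_decision([9.0, 1.0, 0.5])  # clear winner: class 0
uncertain = open_world_decision([1.2, 1.0, 0.9])  # low margin: unmonitored
```

Sweeping the threshold trades false positives against false negatives, which is how open-world attacks are typically compared.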


To access the source code and data as used in our research paper, please follow the link below.

Access Code & Data


Our dataset containing traffic traces of 2,500 visits to each of 900 websites will be made publicly available soon. For now, the dataset is available upon request.

Contact Us

Feel free to reach out to us with questions about our research or the dataset.