Free Republic
Browse · Search
News/Activism
Topics · Post Article


Is Writing Style Sufficient to Deanonymize Material Posted Online?
33 Bits of Entropy ^ | 20 Feb 2012 | Arvind Narayanan

Posted on 02/21/2012 10:18:57 AM PST by FourPeas

I have a new paper appearing at IEEE S&P with Hristo Paskov, Neil Gong, John Bethencourt, Emil Stefanov, Richard Shin and Dawn Song on Internet-scale authorship identification based on stylometry, i.e., analysis of writing style. Stylometric identification exploits the fact that we all have a ‘fingerprint’ based on our stylistic choices and idiosyncrasies with the written word. To quote from my previous post speculating on the possibility of Internet-scale authorship identification:

Consider two words that are nearly interchangeable, say ‘since’ and ‘because’. Different people use the two words in a differing proportion. By comparing the relative frequency of the two words, you get a little bit of information about a person, typically under 1 bit. But by putting together enough of these ‘markers’, you can construct a profile.
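The 'since'/'because' marker described above is easy to compute directly from word counts. The following is a toy illustration (hypothetical code, not from the paper):

```python
from collections import Counter
import re

def marker_ratio(text, word_a="since", word_b="because"):
    """Fraction of combined word_a/word_b uses that are word_a.

    Returns None when neither marker word appears in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = counts[word_a] + counts[word_b]
    return counts[word_a] / total if total else None

# Two writers with opposite habits produce opposite ratios.
ratio_a = marker_ratio("Since you asked, I left early since it was late.")
ratio_b = marker_ratio("I left early because it was late, because I was tired.")
# ratio_a == 1.0, ratio_b == 0.0
```

On its own, one such ratio carries little information; a profile comes from combining many of these markers into a feature vector.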

The basic idea that people have distinctive writing styles is very well-known and well-understood, and there is an extremely long line of research on this topic. This research began in modern form in the early 1960s when statisticians Mosteller and Wallace determined the authorship of the disputed Federalist papers, and were featured in TIME magazine. It is never easy to make a significant contribution in a heavily studied area. No surprise, then, that my initial blog post was written about three years ago, and the Stanford-Berkeley collaboration began in earnest over two years ago.

Impact. So what exactly did we achieve? Our research has dramatically increased the number of authors that can be distinguished using writing-style analysis: from about 300 to 100,000. More importantly, the accuracy of our algorithms drops off gently as the number of authors increases, so we can be confident that they will continue to perform well as we scale the problem even further. Our work is therefore the first time that stylometry has been shown to have serious implications for online anonymity.[1]

Anonymity and free speech have been intertwined throughout history. For example, anonymous discourse was essential to the debates that gave birth to the United States Constitution. Yet a right to anonymity is meaningless if an anonymous author’s identity can be unmasked by adversaries. While there have been many attempts to legally force service providers and other intermediaries to reveal the identity of anonymous users, courts have generally upheld the right to anonymity. But what if authors can be identified based on nothing but a comparison of the content they publish to other web content they have previously authored?

Experiments. Our experimental methodology is set up to directly address this question. Our primary data source was the ICWSM 2009 Spinn3r Blog Dataset, a large collection of blog posts made available to researchers by Spinn3r.com, a provider of blog-related commercial data feeds. To test the identifiability of an author, we remove a random k (typically 3) posts from the corresponding blog, treat those posts as anonymous, and apply our algorithm to determine which blog they came from. In these experiments, the labeled (identified) and unlabeled (anonymous) texts are drawn from the same context. We call this post-to-blog matching.
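The hold-out protocol can be sketched in a few lines. This is a toy stand-in, not the paper's classifier: the cosine-similarity scorer, the tiny function-word profile, and the example blogs are all illustrative.

```python
import random
from collections import Counter

FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "since", "because"]

def profile(posts):
    """Crude stylistic profile: relative frequencies of a few function words."""
    words = " ".join(posts).lower().split()
    counts = Counter(words)
    n = max(len(words), 1)
    return [counts[w] / n for w in FUNCTION_WORDS]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def post_to_blog_match(blogs, author, k=3, seed=0):
    """Hold out k posts from `author`'s blog, then rank every blog by
    stylistic similarity of its remaining posts to the held-out
    'anonymous' posts."""
    held_out = random.Random(seed).sample(blogs[author], k)
    anon = profile(held_out)
    labeled = {name: profile([p for p in posts
                              if name != author or p not in held_out])
               for name, posts in blogs.items()}
    return sorted(labeled, key=lambda name: cosine(anon, labeled[name]),
                  reverse=True)

blogs = {
    "alice": ["since the rain fell we stayed in", "we left since the bus was late",
              "since the day ended we slept", "since the sun rose we woke"],
    "bob": ["because the rain fell we stayed in", "we left because the bus was late",
            "because the day ended we slept"],
}
ranking = post_to_blog_match(blogs, "alice", k=2)
# the true blog should rank first
```

Ranking every candidate blog against the anonymous posts, rather than making a single yes/no call, is what makes the shortlist-style results reported below possible.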

In some applications of stylometric authorship recognition, the context for the identified and anonymous text might be the same. This was the case in the famous study of the Federalist Papers: each author withheld his name from some of his papers but wrote about the same topic. In the blogging scenario, an author might decide to selectively distribute a few particularly sensitive posts anonymously through a different channel. But in other cases, the unlabeled text might be political speech, whereas the only available labeled text by the same author might be a cooking blog, i.e., the labeled and unlabeled text might come from different contexts. Context encompasses much more than topic: the tone might be formal or informal; the author might be in a different mental state (e.g., more emotional) in one context versus the other, etc.

We feel that it is crucial for authorship recognition techniques to be validated in a cross-context setting. Previous work has fallen short in this regard because of the difficulty of finding a suitable dataset. We were able to obtain about 2,000 pairs (and a few triples, etc.) of blogs, each pair written by the same author, by looking at a dataset of 3.5 million Google profiles and searching for users who listed more than one blog in the ‘websites’ field.[2] We are thankful to Daniele Perito for sharing this dataset. We added these blogs to the Spinn3r blog dataset to bring the total to 100,000. Using this data, we performed experiments as follows: remove one of a pair of blogs written by the same author, and use it as unlabeled text. The goal is to find the other blog written by the same author. We call this blog-to-blog matching. Note that although the number of blog pairs is only a few thousand, we match each anonymous blog against all 99,999 other blogs.

Results. Our baseline result is that in the post-to-blog experiments, the author was correctly identified 20% of the time. This means that when our algorithm uses three anonymously published blog posts to rank the possible authors in descending order of probability, the top guess is correct 20% of the time.

But it gets better from there. In 35% of cases, the correct author is one of the top 20 guesses. Why does this matter? Because in practice, algorithmic analysis probably won’t be the only step in authorship recognition, and will instead be used to produce a shortlist for further investigation. A manual examination may incorporate several characteristics that the automated analysis does not, such as choice of topic (our algorithms are scrupulously “topic-free”). Location is another signal that can be used: for example, if we were trying to identify the author of the once-anonymous blog Washingtonienne, we’d know that she almost certainly resides in or around Washington, D.C. Alternatively, a powerful adversary such as law enforcement may require Blogger, WordPress, or another popular blog host to reveal the login times of the top suspects, which could be correlated with the timing of posts on the anonymous blog to confirm a match.

We can also improve the accuracy significantly over the baseline of 20% for authors for whom we have more than an average number of labeled or unlabeled blog posts. For example, with 40–50 labeled posts to work with (the average is 20 posts per author), the accuracy goes up to 30–35%.

An important capability is confidence estimation, i.e., modifying the algorithm to also output a score reflecting its degree of confidence in the prediction. We measure the efficacy of confidence estimation via the standard machine-learning metrics of precision and recall. We find that we can improve precision from 20% to over 80% with only a halving of recall. In plain English, what these numbers mean is: the algorithm does not always attempt to identify an author, but when it does, it finds the right author 80% of the time. Overall, it identifies 10% (half of 20%) of authors correctly, i.e., 10,000 out of the 100,000 authors in our dataset. Strong as these numbers are, it is important to keep in mind that in a real-life deanonymization attack on a specific target, it is likely that confidence can be greatly improved through methods discussed above — topic, manual inspection, etc.
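The abstention idea can be made concrete with a small helper (hypothetical code; recall is defined here as in the post, i.e., the fraction of all authors identified correctly overall):

```python
def precision_recall(predictions, threshold):
    """predictions: (confidence, is_correct) pairs, one per test case.
    The algorithm abstains whenever confidence < threshold.

    precision = correct / attempted
    recall    = correct / total  (so recall halves when half as many
                                  authors are identified overall)
    """
    attempted = [ok for conf, ok in predictions if conf >= threshold]
    correct = sum(attempted)
    precision = correct / len(attempted) if attempted else 0.0
    recall = correct / len(predictions)
    return precision, recall

# Confident guesses tend to be right; abstaining on the rest trades
# recall for precision.
preds = [(0.9, True), (0.8, True), (0.7, False), (0.4, True), (0.2, False)]
loose = precision_recall(preds, 0.0)    # (0.6, 0.6) -- always answer
strict = precision_recall(preds, 0.75)  # (1.0, 0.4) -- answer only when sure
```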

We confirmed that our techniques work in a cross-context setting (i.e., blog-to-blog experiments), although the accuracy is lower (~12%). Confidence estimation works really well in this setting as well and boosts accuracy to over 50% with a halving of recall. Finally, we also manually verified that in cross-context matching we find pairs of blogs that are hard for humans to match based on topic or writing style; we describe three such pairs in an appendix to the paper. For detailed graphs as well as a variety of other experimental results, see the paper.

We see our results as establishing early lower bounds on the efficacy of large-scale stylometric authorship recognition. Having cracked the scale barrier, we expect accuracy improvements to come easier in the future. In particular, we report experiments in the paper showing that a combination of two very different classifiers works better than either, but there is a lot more mileage to squeeze from this approach, given that ensembles of classifiers are known to work well for most machine-learning problems. Also, there is much work to be done in terms of analyzing which aspects of writing style are preserved across contexts, and using this understanding to improve accuracy in that setting.

Techniques. Now let’s look in more detail at the techniques I’ve hinted at above. The author identification task proceeds in two steps: feature extraction and classification. In the feature extraction stage, we reduce each blog post to a sequence of about 1,200 numerical features (a “feature vector”) that acts as a fingerprint. These features fall into various lexical and grammatical categories. Two example features are the frequency of uppercase words and the number of words that occur exactly once in the text. While we mostly used the same set of features that the authors of the Writeprints paper did, we also came up with a new set of features that involved analyzing the grammatical parse trees of sentences.
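Those two example features are straightforward to compute. The sketch below is illustrative; the paper's exact definitions may differ (in particular, "uppercase word" is read here as a word beginning with an uppercase letter).

```python
from collections import Counter
import re

def two_example_features(text):
    """Sketch of two of the ~1,200 features: the frequency of
    uppercase (capital-initial) words, and the number of words
    occurring exactly once (hapax legomena)."""
    words = re.findall(r"\b\w+\b", text)
    n = max(len(words), 1)
    uppercase_freq = sum(w[0].isupper() for w in words) / n
    counts = Counter(w.lower() for w in words)
    hapax_count = sum(1 for c in counts.values() if c == 1)
    return uppercase_freq, hapax_count

freq, hapax = two_example_features("Alice saw Bob and Bob waved")
# freq == 0.5 (3 of 6 words capitalized); hapax == 4
```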

An important component of feature extraction is ensuring that our analysis is purely stylistic. We do this in two ways: first, we preprocess the blog posts to filter out signatures, markup, and anything else that might not be directly entered by a human. Second, we restrict our features to those that bear little relation to the topic of discussion. In particular, our word-based features are limited to stylistic “function words” that we list in an appendix to the paper.

In the classification stage, we algorithmically “learn” a characterization of each author (from the set of feature vectors corresponding to the posts written by that author). Given a set of feature vectors from an unknown author, we use the learned characterizations to decide which author it most likely corresponds to. For example, viewing each feature vector as a point in a high-dimensional space, the learning algorithm might try to find a “hyperplane” that separates the points corresponding to one author from those of every other author, and the decision algorithm might determine, given a set of hyperplanes corresponding to each known author, which hyperplane best separates the unknown author from the rest.
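The one-vs-rest hyperplane idea can be sketched with a bare perceptron. This is a toy stand-in for the paper's actual classifiers, which are not specified in this post; only the learn-one-hyperplane-per-author structure matches the description above.

```python
def train_one_vs_rest(samples, epochs=20):
    """samples: list of (feature_vector, author) pairs.
    Learns one hyperplane (weights, bias) per author with the
    perceptron rule: that author's points on the positive side,
    everyone else's on the negative side."""
    dim = len(samples[0][0])
    authors = sorted({a for _, a in samples})
    models = {}
    for a in authors:
        w, b = [0.0] * dim, 0.0
        for _ in range(epochs):
            for x, label in samples:
                target = 1 if label == a else -1
                score = sum(wi * xi for wi, xi in zip(w, x)) + b
                if target * score <= 0:  # misclassified: nudge the hyperplane
                    w = [wi + target * xi for wi, xi in zip(w, x)]
                    b += target
        models[a] = (w, b)
    return models

def predict(models, x):
    """Pick the author whose hyperplane gives x the highest score."""
    def score(a):
        w, b = models[a]
        return sum(wi * xi for wi, xi in zip(w, x)) + b
    return max(models, key=score)

# Two authors with linearly separable stylistic fingerprints.
samples = [([1.0, 0.0], "A"), ([0.9, 0.1], "A"),
           ([0.0, 1.0], "B"), ([0.1, 0.9], "B")]
models = train_one_vs_rest(samples)
```

In the real system, the feature vectors have ~1,200 dimensions and there are 100,000 hyperplanes, but the decision structure is the same.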

We made several innovations that allowed us to achieve the accuracy levels that we did. First, contrary to some previous authors who hypothesized that only relatively straightforward “lazy” classifiers work for this type of problem, we were able to avoid various pitfalls and use more high-powered machinery. Second, we developed new techniques for confidence estimation, including a measure very similar to “eccentricity” used in the Netflix paper. Third, we developed techniques to improve the performance (speed) of our classifiers, detailed in the paper. This is a research contribution by itself, but it also enabled us to rapidly iterate the development of our algorithms and optimize them.
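An eccentricity-style confidence score can be sketched as follows (hypothetical code, modeled loosely on the measure from the Netflix deanonymization paper): it asks how far the best match stands out from the runner-up, relative to the spread of all match scores.

```python
import statistics

def eccentricity(scores):
    """Gap between the best and second-best match scores, in units of
    the standard deviation of all scores; larger means more confident."""
    ordered = sorted(scores, reverse=True)
    sigma = statistics.pstdev(scores)
    return (ordered[0] - ordered[1]) / sigma if sigma else 0.0

clear_winner = eccentricity([0.9, 0.2, 0.15, 0.1])      # best stands far out
crowded_field = eccentricity([0.50, 0.50, 0.49, 0.48])  # tie at the top
# clear_winner > crowded_field == 0.0
```

A threshold on such a score is one natural way to implement the abstention behavior described in the precision/recall discussion above.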

In an earlier article, I noted that we don’t yet have as rigorous an understanding of deanonymization algorithms as we would like. I see this paper as a significant step in that direction. In my series on fingerprinting, I pointed out that in numerous domains, researchers have considered classification/deanonymization problems with tens of classes, with implications for forensics and security-enhancing applications, but that to explore the privacy-infringing/surveillance applications the methods need to be tweaked to be able to deal with a much larger number of classes. Our work shows how to do that, and we believe that insights from our paper will be generally applicable to numerous problems in the privacy space.

Concluding thoughts. We’ve thrown open the doors for the study of writing-style based deanonymization that can be carried out on an Internet-wide scale, and our research demonstrates that the threat is already real. We believe that our techniques are valuable by themselves as well.

The good news for authors who would like to protect themselves against deanonymization is that manually changing one’s style appears to be enough to throw off these attacks. Developing fully automated methods to hide traces of one’s writing style remains a challenge. For now, few people are aware of these attacks and defenses, and all the sensitive text that has already been written anonymously remains at risk of deanonymization.

[1] A team from Israel has studied authorship recognition with 10,000 authors. While this is interesting and impressive work, and bears some similarities to ours, they do not restrict themselves to stylistic analysis, and the method is therefore comparatively limited in scope. Incidentally, they have been in the news recently for some related work.

[2] Although the fraction of users who listed even a single blog in their Google profile was small, more than 2,000 users listed more than one. We did not use the full number that was available.


TOPICS: Constitution/Conservatism; Culture/Society; News/Current Events
KEYWORDS: anonymous; freespeech; hackers

Posts on Free Republic. Could they "deanonymize"?

1 posted on 02/21/2012 10:19:15 AM PST by FourPeas
[ Post Reply | Private Reply | View Replies]

To: FourPeas

Um....maybe?


2 posted on 02/21/2012 10:20:25 AM PST by Larry Lucido (My doctor told me to curtail my Walpoling activities.)
[ Post Reply | Private Reply | To 1 | View Replies]

To: FourPeas

Protect your privacy: Plagiarize!


3 posted on 02/21/2012 10:22:21 AM PST by Joe 6-pack (Que me amat, amet et canem meum)
[ Post Reply | Private Reply | To 1 | View Replies]

To: FourPeas

Who wrote Dreams Of My Father?


4 posted on 02/21/2012 10:23:35 AM PST by TexasCajun
[ Post Reply | Private Reply | To 1 | View Replies]

To: FourPeas

Let’s deanonymize “Dreams from my Father” and see which Chicago silver ponytail crawls out from under the rock.


5 posted on 02/21/2012 10:24:06 AM PST by Joe the Pimpernel (Islam is a religion of peace, and Moslems reserve the right to dismember anyone who says otherwise.)
[ Post Reply | Private Reply | To 1 | View Replies]

To: FourPeas
This calls for an anonymizing feature much like spell-check, that randomly substitutes synonyms for all the words in everything you write.

Sort of like babelfish only the output language is the same as the input language.

Or you could just babelfish English to German and back to English again!

Anything resembling a coherent sentence strictly coincidental.

6 posted on 02/21/2012 10:28:23 AM PST by Joe the Pimpernel (Islam is a religion of peace, and Moslems reserve the right to dismember anyone who says otherwise.)
[ Post Reply | Private Reply | To 1 | View Replies]

To: FourPeas
"Could they "deanonymize"? "

Absolutely.

7 posted on 02/21/2012 10:31:23 AM PST by Paladin2
[ Post Reply | Private Reply | To 1 | View Replies]

To: FourPeas
Consider two words that are nearly interchangeable, say ‘since’ and ‘because’. Different people use the two words in a differing proportion. By comparing the relative frequency of the two words, you get a little bit of information about a person...

Some use the words correctly. Others don't.

"Since" refers to time:

"Since Obama was elected, America has gone to Hell in a handbasket."

But "because" doesn't refer to time:

"Because Obama was elected, America has gone to Hell in a handbasket."

Different meanings.

Uh, oh. I've just realized I've been deanomyz... deanonymousi... de-anyonim................... outed!

Yikes!

8 posted on 02/21/2012 10:32:18 AM PST by Flycatcher (God speaks to us, through the supernal lightness of birds, in a special type of poetry.)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Joe the Pimpernel

Friends and I used to do that. Humor ensued.


9 posted on 02/21/2012 10:35:34 AM PST by FourPeas ("Maladjusted and wigging out is no way to go through life, son." -hg)
[ Post Reply | Private Reply | To 6 | View Replies]

To: FourPeas
...First they came for the verbose, pedantic, pricks but I didn't speak out because I wasn't...

oh crap

10 posted on 02/21/2012 10:39:22 AM PST by Jack of all Trades (Hold your face to the light, even though for the moment you do not see.)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Joe the Pimpernel
This requires an anonymous function like spell-check, write the randomly substitute synonyms for words in all that they.

A sort of babelfish only the language of the output is the same as the input language.

Or you can simply Bablefish German to English and back again to English! Such thing as a coherent set of randomly clean. Babylon 9

11 posted on 02/21/2012 10:39:44 AM PST by Paladin2
[ Post Reply | Private Reply | To 6 | View Replies]

To: FourPeas

One probably can identify posters based on linguistic analysis. But I don’t think that it is as easy as one might think at first glance.

I can remember a few years back trying to identify some liberal posters based upon rather peculiar phrases that they used. I was unable to do so. It could be that some of the more “sore thumb” type of posters aren’t actually famous enough to warrant a lot of results in a google search.

I don’t use my real name. I’m not exactly famous either. But there are a lot of people who post online who know who I am. Others, with a little bit of detective work, could easily determine my identity without going into linguistic analysis.

I used to be concerned a little about that. No longer. It’s kind of liberating. Plus, it’s helpful to understand that not that many people really care except those who love you (plus the few who hate you).


12 posted on 02/21/2012 10:44:10 AM PST by Engraved-on-His-hands
[ Post Reply | Private Reply | To 1 | View Replies]

To: FourPeas

It deanonymized Ted Kaczynski.


13 posted on 02/21/2012 10:45:44 AM PST by DuncanWaring (The Lord uses the good ones; the bad ones use the Lord.)
[ Post Reply | Private Reply | To 1 | View Replies]

To: FourPeas

This doesn’t work all the time. I was reading a story and the phrase “I’d hit it” came up. I naturally assumed it was LAZ, but, it turns out, it was Derek Jeter talking about baseball. Unless LAZ is Derek Jeter?????????


14 posted on 02/21/2012 11:07:37 AM PST by blueunicorn6 ("A crack shot and a good dancer")
[ Post Reply | Private Reply | To 1 | View Replies]

To: blueunicorn6
Unless LAZ is Derek Jeter?????????

LAZ is actually Mike Tyson. Watch your ears.

15 posted on 02/21/2012 11:40:22 AM PST by dirtboy
[ Post Reply | Private Reply | To 14 | View Replies]

To: FourPeas

Seems like, if deanonymization could be refined to be, in effect, a linguistic fingerprint, it would also be possible to smear someone by “translating” self-incriminating text into his distinctive style.


16 posted on 02/21/2012 11:59:55 AM PST by Hunton Peck (See my FR homepage for a list of businesses that support WI Gov. Scott Walker)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Hunton Peck

Interesting point. A more creative way to forge documents.


17 posted on 02/21/2012 12:04:30 PM PST by FourPeas ("Maladjusted and wigging out is no way to go through life, son." -hg)
[ Post Reply | Private Reply | To 16 | View Replies]

To: FourPeas; CodeToad; hiredhand; Eaker; Squantos; Joe Brower

I assume that every key stroke I have made since Day One is fully searchable and attributable, and that I might someday be grilled under bright lights about this or that post I once made.

To do otherwise, is foolish IMHO.


18 posted on 02/21/2012 12:24:41 PM PST by Travis McGee (www.EnemiesForeignAndDomestic.com)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Travis McGee

“I assume that every key stroke I have made since Day One is fully searchable and attributable”

Yep. Nothing on the Internet, banking systems including checking accounts and credit cards and loans of all kinds, or anything medical or work related is anonymous anymore.


19 posted on 02/21/2012 12:34:41 PM PST by CodeToad (NO TAXATION WITHOUT REPRESENTATION!!!)
[ Post Reply | Private Reply | To 18 | View Replies]

To: FourPeas

Oh yeah? Deanonymize THIS!


20 posted on 02/21/2012 4:48:47 PM PST by Colinsky
[ Post Reply | Private Reply | To 1 | View Replies]



Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.


FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson