
Why not differential privacy?


This post is part of a series on differential privacy. Check out the table of contents to see the other articles!


Some algorithms are not differentially private, but are still claimed to perform anonymization. Such mechanisms are common, both in the academic literature and in industry. The explanations of why they still preserve some notion of privacy vary.

  • They might include some ad hoc protections against entire classes of attacks.
  • They might aggregate the data to a point where the statistics "obviously" seem safe.
  • They might use some other metric for data leakage, like entropy or mutual information.

How can we get an idea of how robust these proposals are? In this post, I'll suggest a somewhat provocative approach: we'll try to analyze them in the language of differential privacy. We're going to ask the following question: why isn't a given mechanism differentially private?

I'll need a straw man to get the discussion going. Meet Paille.

[A drawing of a friendly straw man]

Paille (it's pronounced \(\pi\)) has an algorithm \(A\). They believe that \(A\) performs anonymization: it protects the data of individuals in its input. Their line of argument starts with:

It's not differentially private, but… [insert long explanation here]

Rather than focusing on the explanation itself, let's dig into why that algorithm is not DP. First, what does it mean for an algorithm to not be DP? Let's take the definition of differential privacy and negate it. If \(A\) isn't DP, then there are databases \(D_1\) and \(D_2\) differing in only one individual, such that the ratio:

$$\frac{\mathbb{P}\left[A(D_1)=O\right]}{\mathbb{P}\left[A(D_2)=O\right]}$$

gets arbitrarily large as \(O\) ranges over the possible outputs. Remember: we called this ratio the privacy loss before.

Suppose that an attacker is hesitating between \(D_1\) and \(D_2\): they know the entire database, except the data of one single individual. Then it's possible that by looking at the output of \(A\), the attacker learns for sure what that person's data is.
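To make this concrete, here is a minimal toy example of mine (not from the post): an exact count has an unbounded privacy loss, while the same count released with Laplace noise keeps the ratio bounded by \(e^\varepsilon\) for every output.

```python
import numpy as np

def privacy_loss_exact(count_d1, count_d2, output):
    # Deterministic release: each probability is 0 or 1, so the ratio is
    # infinite as soon as the output identifies which database was used.
    p1 = 1.0 if output == count_d1 else 0.0
    p2 = 1.0 if output == count_d2 else 0.0
    if p2 == 0.0:
        return float("inf") if p1 > 0.0 else float("nan")
    return p1 / p2

def privacy_loss_laplace(count_d1, count_d2, output, epsilon=1.0):
    # Laplace mechanism with sensitivity 1: the density ratio stays within
    # [exp(-epsilon), exp(epsilon)] for every possible output.
    density = lambda center: (epsilon / 2) * np.exp(-epsilon * abs(output - center))
    return density(count_d1) / density(count_d2)

print(privacy_loss_exact(42, 43, output=42))      # inf: not DP at all
print(privacy_loss_laplace(42, 43, output=42.3))  # ~1.49, bounded by e^epsilon
```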

That sounds… not great. In fact, that sounds exactly like what we were trying to avoid. Why is this OK? Oh, wait, I have an idea. What if this only happens very rarely?

Averaging the privacy loss across outputs

Differential privacy is a worst-case property: it must hold for every possible output \(O\). So if there's the slightest chance that the ratio of probabilities is unbounded, we don't get DP. Yet, we might be able to say « unless we're extraordinarily unlucky, the DP property holds ». In fact, we've done this before, when we introduced \((\varepsilon,\delta)\)-DP. That could be good enough!

We saw that \((\varepsilon,\delta)\)-DP allows a small possibility of catastrophic failure: the privacy loss can sometimes be infinite. To avoid this, we can average the privacy loss across all possible outputs instead. Some variants of DP even let us choose what kind of averaging function we want to use¹.
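Concretely (this is my paraphrase of the standard Rényi DP definition mentioned in footnote 1, not something spelled out in this post), Rényi DP of order \(\alpha\) requires, for all databases \(D_1\) and \(D_2\) differing in one individual:

$$\frac{1}{\alpha-1}\log\,\mathbb{E}_{O\sim A(D_2)}\left[\left(\frac{\mathbb{P}\left[A(D_1)=O\right]}{\mathbb{P}\left[A(D_2)=O\right]}\right)^{\alpha}\right]\le\varepsilon.$$

Larger values of \(\alpha\) put more weight on the worst outputs; as \(\alpha\to\infty\), this recovers plain \(\varepsilon\)-DP.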

So, Paille, is this what's happening here? Do we have differential privacy for almost all possible outputs? Or is the average privacy loss bounded by some reasonable value?

Eh… not exactly. The privacy loss can be really large even if we average it across all possible outputs.

Oh, OK. Well, let's see what else could happen. What if, instead of averaging the privacy loss across outputs, we average it across people?

Averaging the privacy loss across people

Differential privacy gives the same protection to all individuals. The guarantees on the privacy loss apply to everyone. Is that always necessary? In some cases, it might be reasonable to say that some people need more privacy protection than others. For example, folks from at-risk populations might need a smaller \(\varepsilon\) than majority groups.

Another possible approach is to protect almost all individuals, without specifying which ones. To do so, we first need to model the population according to a probability distribution. Then, we say « with high probability, someone sampled from this distribution is protected ». Unlucky people might not get any protection, but these are hopefully very rare².
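One rough way to write this down (my own notation, loosely following the random DP idea in footnote 2, not a precise quote of the definition): if \(\theta\) is the assumed distribution over the population, we require

$$\mathbb{P}_{D\sim\theta,\,t\sim\theta}\left[\,\forall O,\ e^{-\varepsilon}\le\frac{\mathbb{P}\left[A(D\cup\{t\})=O\right]}{\mathbb{P}\left[A(D)=O\right]}\le e^{\varepsilon}\,\right]\ge 1-\gamma.$$

In words: for all but a probability-\(\gamma\) sliver of (database, person) pairs drawn from the model, the usual \(\varepsilon\)-DP inequality holds.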

This is a bit like \((\varepsilon,\delta)\)-DP: there is a small chance that things go wrong. We could, instead, average the privacy loss across people. Like before, it would avoid the possibility of infinite risk for some individuals. This is much less robust than the previous class of definitions, though. First, some people might never get good protection, if their data is unusual. Second, it requires us to model our population with a probability distribution. This is hard to do! And if our model is wrong, more folks might be at risk than we expected.

Still, though, it's something. Paille, does your algorithm \(A\) behave this way?

Hmmm… no. It seems that the privacy loss is very large for more than a few individuals. So averaging it doesn't bring much.

Arg. Well… If you're not protecting individuals, maybe you're protecting some other property?

Changing the protected property

With DP, the attacker tries to distinguish between databases differing in one person. This means that we protect everything about any single individual. Sometimes, though, getting to this level of protection seems like an impossible task.

For example, suppose the input database is growing over time: every day, we get new data from the users of an app. We want to publish an anonymized version of this daily data every day. Each daily release might be differentially private… But the total privacy loss of a given user over time is unbounded: the same person might use the app every day for a long time.

This is better than nothing, though: we can still claim that we're protecting all contributions of each user in every single day. Capturing this idea is easy: we can redefine "neighboring datasets" to differ in the data of a single person in a single day.
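To make the "one person, one day" unit of privacy concrete, here is a toy sketch (my own illustration, not code from the post, assuming nonnegative contributions): each user's daily contribution is clamped, and the daily total gets Laplace noise scaled to that clamp, so each daily release is \(\varepsilon\)-DP with respect to one user's data on one day.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(42)

def daily_release(day_records, epsilon=1.0, clamp=5.0):
    """Toy daily release where the unit of privacy is (user, day).

    `day_records` is a list of (user_id, nonnegative value) pairs for one day.
    """
    # Sum each user's contributions for the day, then clamp them so that
    # changing one user's data on this day moves the total by at most `clamp`.
    per_user = defaultdict(float)
    for user_id, value in day_records:
        per_user[user_id] += value
    clamped_total = sum(min(max(v, 0.0), clamp) for v in per_user.values())
    # Laplace noise calibrated to that sensitivity gives epsilon-DP per day.
    return clamped_total + rng.laplace(scale=clamp / epsilon)

# The same user can show up every day: each day is protected on its own,
# but their total privacy loss keeps accumulating across releases.
print(daily_release([("alice", 3.0), ("bob", 12.0)]))
print(daily_release([("alice", 4.0), ("carol", 1.0)]))
```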

We can also extend this idea to other properties that we want to protect. Finding out that someone is in our database might not be that sensitive, but finding out the value of a specific field might be problematic! In this case, we can adapt the definition of DP, and have the two databases differ only in this field for a single individual.

Paille, can you capture the properties of your algorithm \(A\) this way? If it's too hard to get formal privacy guarantees for individuals, can you do it for smaller "units of privacy"?

Erm… it doesn't look like it. Even when the "unit of privacy" is smaller, the privacy loss is still too high to be meaningful.

Well, this doesn't look great. But let's persevere and try one last thing. What if we assume the attacker is uncertain about the initial dataset?

Assuming a weaker attacker

When using DP, we compare the output of the algorithm on two databases that differ in a single user. Implicitly, we assume that the attacker knows the data of everyone else in the database. What if we relax this assumption?

Doing this seems reasonable. After all, the only realistic way an attacker could know about everyone in a database is by having direct access to the database… And then there's not much left to protect. Some variants of DP attempt to formalize this idea. To do this, they capture the attacker's uncertainty using a probability distribution. The neighboring databases are no longer fixed: they're sampled from this distribution, conditioned on the data of a specific user.

The variants of differential privacy obtained this way³ have two major problems.

  • First, they don't compose. Say two algorithms are each "private" against an attacker with limited background knowledge. Each output, in isolation, doesn't leak too much information. Both outputs combined, though, might not be private at all, even under the same assumption (a toy sketch of this follows right after this list).
  • Second, these variants need us to model the database as a probability distribution. This distribution is supposed to capture the attacker's uncertainty… So you have to put yourself in the shoes of each possible attacker and model their knowledge somehow. This is difficult and very brittle: if you misjudge their knowledge even slightly, all privacy properties might break down.
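Here is a minimal sketch of that first failure mode, using a made-up scenario of my own (nothing like it appears in the post): two exact releases that each look harmless if the attacker only knows the distribution of the data, but that together pin down the target's record exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scenario: records are secret bits, and the attacker is assumed to
# only know that they look like fair coin flips (the uncertainty model).
secret_bits = rng.integers(0, 2, size=1000)
target_index = 0

# Release 1: the exact sum of all records.
release_1 = int(secret_bits.sum())
# Release 2: the exact sum of all records except the target's.
release_2 = int(secret_bits.sum() - secret_bits[target_index])

# Each release on its own tells the attacker very little about the target:
# one coin flip drowned among 999 others. Together, they reveal it exactly.
recovered_bit = release_1 - release_2
assert recovered_bit == secret_bits[target_index]
print("Recovered the target's bit:", recovered_bit)
```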

Because of these problems⁴, assuming a weaker attacker can be kind of a dangerous road. Paille, does your algorithm \(A\) satisfy one of these variants? It wouldn't be enough to fully convince me: I'd also need to take a long look at the underlying assumptions, and at how you're using it in practice. Nonetheless, it'd be a start, and it'd be better than nothing, I guess.

Well, let me check. Modeling the attacker's uncertainty is difficult, but… doing that doesn't give me convincing results either. I can make unrealistic assumptions on my data, and then it sort of works. But if I try to model the attacker in a more realistic way, I don't get great numbers at all.

That's… unfortunate.

What's left?

Let's recap what we know about Paille's algorithm \(A\). If we negate all the relaxations we've seen so far, what do we have left?

An attacker who looks at the output of \(A\):

  • can retrieve very fine-grained information
  • about many individuals
  • even if the attacker is not particularly lucky
  • and only has limited knowledge about the data.

This is not good! But this is the direct conclusion of the discussion so far. Paille's mechanism not being DP didn't seem so bad at first: after all, DP is quite a high bar. But if we can't say anything about \(A\) in the language of DP, even if we relax the definition a lot, then this is pretty damning. No need to dive deep into the original rationale for why \(A\) might be safe: we just showed it isn't.

Or, rather, we are unable to show that it is. This will be the last resort of people defending their custom anonymization method: « I can't prove that it's safe, but I still argue that it is. Prove me wrong! Show me an attack that works. » Reversing the burden of proof this way is, of course, a red flag. If you're anonymizing my data, you should have to convince me that what you're doing is safe, not the other way around.

Further, experience shows that if someone does find an attack, that won't be enough to end the debate. In practice, people slap a patch or two on their algorithm, and go right back to proclaiming its safety. The history of computer security is littered with such examples: people patch systems after an attack is discovered, but shortly after, a minor change to the attack proves successful. The early days of data privacy were no different. I hope that we learn from this past, and focus future efforts on stronger notions with provable guarantees!

So, next time you encounter a non-DP algorithm… Why don't you insist that its authors explain to you why it isn't DP?

Final note

There are many more variants and extensions of DP beyond those mentioned in this post. In fact, a colleague and I wrote a whole survey paper about it! In this paper, we classify all these variants, list their properties, and provide intuitions for each. For a short overview of this work, you can check out the recording of the talk I gave about it at PETS last summer.


The next article in this branch is about the US Census Bureau's reconstruction attack. Or you can also go to the table of contents of this blog post series to pick something else to read!


  1. This is Rényi DP, a definition often used for machine learning applications. Its additional parameter \(\alpha\) determines which averaging function is used: \(\alpha=1\) bounds the geometric mean of the ratio, \(\alpha=2\) bounds the arithmetic mean, \(\alpha=3\) bounds the quadratic mean, etc. 

  2. This is random DP, though a couple of other variants do essentially the same thing. 

  3. Like noiseless privacy, named this way because even algorithms that don't add any noise to data can be considered private under this variant. 

  4. … and because of other complicated technical reasons I won't get into here, though I co-authored an entire paper about these problems. 
