
Research post: Differential privacy under partial knowledge

Differential privacy (DP) makes very few assumptions about the attacker. The definition holds for all pairs of databases \(D_1\) and \(D_2\) that differ in one person. This means that even if the attacker knows everyone in the database except one person, they can't get information about this person. Or, rather, the probabilistic information they can get is bounded by \(e^\varepsilon\).

When I explain DP to people for the first time, this "perfect knowledge" assumption often raises eyebrows. It seems like overkill, right? If someone knows everyone in your database except one person… This probably means that they have direct access to your raw data. And in that case, you've already lost! It no longer matters how you anonymize the data afterwards.

This intuition was central to my PhD proposal, back in early 2016. At the time, I was observing two different worlds. In academia, researchers focused their efforts on differential privacy. Definitions like \(k\)-anonymity were a thing of the past. In industry, however, DP was still largely ignored. People thought it was a neat theoretical idea, but pretty much unusable in practice. Its strong assumptions were unrealistic, and the idea of adding noise to data was too painful.

So, I thought: I'm trying to start a part-time PhD, splitting my weeks between industry and academia. Can I work towards bridging this gap? What if we considered DP, but made its assumptions a little weaker… Would it be possible to prove something about older notions like \(k\)-anonymity? They might not be as robust as DP, but they might be good enough sometimes, right? Could one, for example, quantify their guarantees using the same Bayesian interpretation as with DP?

This line of study eventually led to this paper, a collaboration with Esfandiar Mohammadi, Elisabeth Krahmer, and my PhD advisor David Basin. We studied this natural question: what if an attacker only has partial knowledge of the data? How can we formalize this assumption? How does the DP definition change, and can we use this variant to prove interesting results?

This basic idea is not new: the formalism we used was based on work by Bhaskar et al., published in 2011. To capture the attacker's uncertainty, we model the input data by a probability distribution, denoted \(\theta\). And instead of comparing the output of the mechanism \(A\) on two fixed databases, we compare it on two samples from this distribution.
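To give a rough idea of what this looks like, here is a simplified way of writing such a definition, in the spirit of Bhaskar et al.'s "noiseless privacy" (the actual definitions in the paper are more careful, in particular about the attacker's background knowledge): for a database \(D\) sampled from \(\theta\), a target record \(i\), any two possible values \(a\) and \(b\) of this record, and all sets \(S\) of possible outputs,

\[
\mathbb{P}\left[A(D)\in S \mid D_i=a\right] \le e^\varepsilon \cdot \mathbb{P}\left[A(D)\in S \mid D_i=b\right] + \delta,
\]

where the probability is taken over both the sampling of \(D\) from \(\theta\) and the randomness of \(A\).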

This \(\theta\) captures both what the attacker knows and doesn't know. The more uncertainty the attacker has, the "more random" this probability distribution is. How do we choose \(\theta\)? Great question. We'll come back to it a little later.

Now: what did we find out?

Some positive results

Let's take the very simple case of a referendum, where everybody answers a question by Yes or No. The attacker is uncertain about whether some people vote Yes or No. This is captured by making each such vote \(i\) random: Yes with probability \(p_i\) and No with probability \(1-p_i\). And we capture "how uncertain" the attacker is by requiring that a certain number of the \(p_i\) are not too close to 0 or 1.

This modeling is the same as in prior work, but we get better bounds for \(\varepsilon\) and \(\delta\). This is due to a neat trick: we can reuse the results from amplification by shuffling! The context is different, but the underlying math is the same: take a bunch of small random things, mix them together, and you get a good amount of randomness in total. That's Theorem 1 in the paper, if you want to follow along. And it can easily be extended to settings where people choose among more than two options (Corollary 1).
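To make this concrete, here is a small numerical sketch. It is not taken from the paper, and the parameter values are made up for illustration: it just compares the distribution of the total Yes count in the two worlds where one target person votes Yes or No, assuming the attacker is uncertain about \(n\) other independent votes.

```python
# Illustrative sketch (not the paper's Theorem 1): with n unknown votes,
# each Yes with probability p, compare the distribution of the total Yes
# count when the target votes Yes (total = S + 1) vs. No (total = S),
# where S is the sum of the unknown votes.
import numpy as np
from scipy.stats import binom

n, p = 10_000, 0.3                 # made-up values for illustration
counts = np.arange(n + 1)          # possible values of S
pmf = binom.pmf(counts, n, p)

# Restrict to a high-probability region of S (within 5 standard deviations);
# the mass outside of it roughly plays the role of delta.
std = np.sqrt(n * p * (1 - p))
core = (counts > n * p - 5 * std) & (counts < n * p + 5 * std)

# Likelihood ratio at an observed total c: P[S = c - 1] / P[S = c].
ratio = pmf[np.flatnonzero(core) - 1] / pmf[core]
eps = np.abs(np.log(ratio)).max()
delta = 1 - pmf[core].sum()
print(f"outside a tail of mass ~{delta:.1e}, the likelihood ratio stays within e^{eps:.2f}")
```

With these made-up numbers, the attacker's uncertainty alone already gives something like \(\varepsilon \approx 0.1\) and \(\delta \approx 10^{-6}\), without adding any noise. The paper's bounds are derived analytically, of course, not numerically.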

We continue by showing that thresholding can provide some guarantees, under the right assumptions. Of course, thresholding only has an effect for rare events: when all the \(p_i\) are very small, and we only publish results if the total number of Yes votes is larger than a threshold. Then, depending on the exact parameters, we can get reasonably small values of \(\varepsilon\) and \(\delta\). Again, this captures an intuitive phenomenon: if very few people are in a group, and we suppress this group from the results entirely, then this can provide some protection (Theorem 4).
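Here is a rough back-of-the-envelope version of this phenomenon, again with made-up parameters rather than the paper's actual Theorem 4. If all the \(p_i\) are tiny and we only publish the count when it reaches a threshold \(T\), the published output is the same ("suppressed") in both worlds except with a small probability, which plays the role of \(\delta\).

```python
# Illustrative sketch of thresholding for rare events (not Theorem 4 itself):
# the published outputs in the two worlds (target Yes vs. target No) can only
# differ if the sum S of the other n votes gets within one of the threshold T,
# so P[S >= T - 1] upper-bounds the distinguishing advantage (a delta-style bound).
from scipy.stats import binom

n, p, T = 100_000, 1e-5, 20     # made-up values: very rare event, threshold 20
delta = binom.sf(T - 2, n, p)   # P[S >= T - 1]
print(f"delta <= {delta:.1e}")
```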

Putting these two results together, we get what we were looking for: a formal interpretation of the privacy of \(k\)-anonymity… Or, at least, of the simplest possible \(k\)-anonymity scheme, with a long list of assumptions. The high-level intuition is as follows. For each group in the output:

  • either there are many (more than \(k\)) people in this group, and then the attacker will have some uncertainty, because they don't know everyone there;
  • or there aren't that many people in the group, but then the count isn't published, which protects the people in it.

This is Theorem 5 in the paper. Mission accomplished! Right?

Well… Not really. For this theorem to hold, we have to make a number of weird assumptions. The "uncertainty distribution" \(\theta\) has to have exactly the right shape. There needs to be a "special group" whose count we don't publish, regardless of the number of people in it. And as we'll see in the rest of this article, the definition of privacy itself is more than a little shaky.

A distinction between active and passive attackers

A first difficulty we identified is the need to distinguish between active and passive attackers. When the attacker has partial knowledge of the input data, this can mean two things.

  • The attacker can inject some fake records into our data: they're active.
  • Or the attacker can get information on some records, but not influence them: they're passive.

Both situations can be realistic in different scenarios. For example, say you call people on the phone to take part in a survey. Then, an attacker might know some participants, but not be able to answer in their place. But what if you're releasing statistics about the use of an online service? Then, an attacker might be able to spin up bots to interact with it, and create artificial data.

With "real" differential privacy, this doesn't matter. Giving the attacker control over all the records doesn't change anything: you have to reason about the worst-case scenario anyway, so you end up with the same definition.

But with partial knowledge, you do get two distinct versions of the definition, depending on the attacker's capability. Some results, like the one about thresholding, only hold in the passive version of the definition. That makes sense: if the attacker can inject fake records, they can artificially boost low numbers to make them go above the threshold. In this case, thresholding as a privacy mitigation is pointless.
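A toy sketch of this, with made-up numbers: the attacker adds enough fake Yes records to push a suppressed count over the threshold, then subtracts their own contribution.

```python
# Why thresholding fails against an active attacker (toy illustration only):
T = 20            # suppression threshold
true_count = 2    # sensitive count of a rare group, normally suppressed

fake_records = T  # attacker injects T fake records into the group
total = true_count + fake_records
published = total if total >= T else None   # thresholded release

if published is not None:
    print("attacker recovers the suppressed count:", published - fake_records)
```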

In fact, this concern isn't just theoretical. In a paper investigating Facebook's ad targeting system, researchers used a similar method: by carefully controlling the data and queries, they were able to defeat a threshold-based protection.

Problems with dependencies in the data

Second, you might have raised your eyebrows at one of the hypotheses we made to prove our initial results: all records had to be independent. This assumption turns out to be both very important and quite brittle.

  • If we don't make this assumption, then the math gets difficult fast. You need to find a way to model possible dependencies, and take them into account when calculating the privacy guarantees. I'm not sure anybody has found a tractable way to do this. I tried and failed to get anything convincing.
  • But if this assumption is false, and the attacker can use some knowledge about the correlations in the data… Then everything breaks down. The privacy guarantees no longer hold, and maybe the attacker can get all the data they want.

We also found an additional subtlety. When modeling the attacker's partial knowledge, it is crucial to delineate what they know from what they want to achieve. If there are correlations between their partial knowledge and the sensitive information they try to find out… then the definition is meaningless: you end up in situations where an attacker who knows more about the data counts as less powerful than one who knows less.

We're not the first to point out such problems: Bassily et al. showed that in certain cases, the original notion can be buggy, and proposed an alternative. But we showed that this alternative can also have fundamental problems. This separation between partial knowledge and sensitive information seems essential to fix them.

This requirement, though, makes these definitions really tricky to use in practice. We have to think hard about what the attacker wants to achieve, and what they might know about the data. And if we're wrong, then everything we proved might be meaningless.

Difficulties making composition work

Finally, these variants of differential privacy don't compose very well. Imagine a database of referendum votes, where each person is associated with their vote. Then, consider the following two queries (without noise).

  • The count of everybody who voted Yes.
  • The count of everybody who voted Yes, excluding a specific user \(i\).

Each query, on its own, might be considered private in the partial knowledge setting: if the attacker has uncertainty over many votes, each query result won't give a lot of information about individual votes. But, of course, combining both results gives you the vote of user \(i\). So privacy doesn't degrade smoothly under composition, like it does for differential privacy. Instead, it can very quickly lead to catastrophic privacy loss.
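Here is this counterexample spelled out as a tiny simulation (illustrative only):

```python
# Composition counterexample: each exact count on its own is "hidden" in the
# attacker's uncertainty, but the difference of the two reveals user i's vote.
import numpy as np

rng = np.random.default_rng(1)
votes = rng.binomial(1, 0.5, size=1000)  # 1 = Yes, 0 = No (made-up data)
i = 42                                    # the targeted user

query_1 = votes.sum()                     # count of everybody who voted Yes
query_2 = np.delete(votes, i).sum()       # same count, excluding user i

print("user i voted", "Yes" if query_1 - query_2 == 1 else "No")
```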

Of course, this counterexample is a little artificial. It's natural to wonder: can we find a simple condition on queries allowing us to prove a composition result? We investigated, and found possible ways of achieving composition… But nothing was really natural and convincing.

The problem is, again, correlations: if queries give correlated results, then the math breaks down. That's bad news: the queries can't touch the same data; otherwise, the results are correlated. What if the queries look at different columns of a dataset? You need one more assumption: the different columns in the data must be uncorrelated. And that's pretty unrealistic.

One option is a little more viable: databases that constantly get new data. This situation is common in practical scenarios. And it seems natural to assume that this new data might add to the attacker's uncertainty… So, if we require that each new query gets some "fresh" data, we can get some composition results. They're quite preliminary for now, but maybe worth investigating further.

Conclusion & perspectives

I'm happy about some of the progress we made on this problem. We found important issues with prior work, and proposed a more robust definitional framework. The link we established with shuffled DP is interesting, and somewhat promising. I'm hopeful that both aspects might end up being useful to folks doing further research in this area.

But while the promise of leveraging an attacker's uncertainty in DP is alluring, a closer look revealed big challenges with this line of work. I'm not very optimistic about making this idea work well enough for real-world use cases. The assumptions needed to make it work are too brittle and unrealistic. The math seems to get messy too fast.

While I was working on this research project, I was also working on building infrastructure for differential privacy. And over time, I became convinced that the real gap between DP theory and practice was usability. We'll get to widespread adoption by building better tools and doing a better job of helping people use them. I've since given a whole talk about this idea.

This work also changed my mind about the assumptions behind differential privacy. They're not unrealistic, or too strong: instead, they seem necessary to get the basic properties you want out of a convincing notion of privacy. They might be an over-approximation of the capabilities of realistic attackers… But weakening these assumptions is dangerous, costs a lot, and doesn't buy much.

If you've found this post interesting, or if you disagree with some of the points I made in it, let me know! My contact info is at the footer of this page, and I'd love to hear more perspectives about this problem.


Thanks to Esfandiar Mohammadi and David Basin for their helpful feedback on drafts of this post.
