3.2.3 Application to $k$-anonymity
Sections 3.2.1 and 3.2.2 formalize two intuitive phenomena under partial knowledge. First, if the attacker has enough uncertainty about enough people, counting queries do not leak too much information about individuals. Second, for counting queries that apply to rare enough behavior, thresholding provides meaningful protection against a passive attacker. This suggests a link to an older anonymization notion: $k$-anonymity. In this section, we formalize that link, and combine these two intuitions to provide a relation between $k$-anonymity and differential privacy under partial knowledge.
$k$-anonymity, which we introduce in Section 2.1.1, requires each record in a database to be indistinguishable from at least $k-1$ other records. The intuition is that blending into a large enough crowd provides protection; this intuition is close to the results of Section 3.2.1. $k$-anonymity is generally obtained by generalizing the data to group similar records together, then dropping the groups with fewer than $k$ records; the link with the results of Section 3.2.2 is immediate.
To formalize it, we need to clarify the notion of a $k$-anonymity mechanism. For simplicity, we assume that such a mechanism groups records by their value, and returns a truncated histogram, where all values with fewer than $k$ records have been removed.
Definition 61 ($k$-anonymity mechanism). The $k$-anonymity mechanism $M_k$ takes a dataset $D = (D_1, \ldots, D_n) \in \mathcal{T}^n$ as input, and returns a histogram in $\mathbb{N}^{\mathcal{T}}$. For each value $t \in \mathcal{T}$, $M_k(D)_t$ is defined as:
- $M_k(D)_t = \left|\left\{ i \mid D_i = t \right\}\right|$ if this number is at least $k$;
- $M_k(D)_t = 0$ otherwise (if there are fewer than $k$ records with value $t$ in $D$).
If an input record is not in $\mathcal{T}$, it is ignored by $M_k$.
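To make Definition 61 concrete, here is a minimal sketch in Python; the function name k_anonymity_histogram and the BOTTOM sentinel (standing for a record value that lies outside $\mathcal{T}$ and is therefore ignored) are ours, not part of the definition.

```python
from collections import Counter
from typing import Hashable, Iterable

# Hypothetical sentinel standing for a record value outside the histogram's domain;
# such records are ignored by the mechanism (Definition 61).
BOTTOM = None

def k_anonymity_histogram(records: Iterable[Hashable], k: int) -> dict:
    """Sketch of the k-anonymity mechanism of Definition 61.

    Counts each value, and reports only the values whose count is at
    least k; all other counts are suppressed (equivalently, set to 0).
    """
    counts = Counter(r for r in records if r is not BOTTOM)
    return {value: count for value, count in counts.items() if count >= k}

# Example: only values appearing at least twice survive.
print(k_anonymity_histogram(["flu", "flu", "cold", None, "flu"], k=2))  # {'flu': 3}
```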
Note that we skipped the generalization step. The results below can easily be extended to any fixed generalization strategy, i.e. a fixed mapping between $\mathcal{T}$ and an arbitrary space forming the support of the histogram. It is important that this strategy is fixed: if this mapping depends on the data, arbitrary correlations can be embedded in the output, which might leak additional information; minimality attacks [389] provide an example of this phenomenon.
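As a toy illustration of a fixed generalization strategy, the snippet below buckets ages into decades before applying the mechanism; the bucketing function is a hypothetical example of ours and, crucially, does not depend on the data (it reuses the k_anonymity_histogram sketch above).

```python
def generalize_age(age: int) -> str:
    """Fixed, data-independent generalization: map an age to a 10-year bucket."""
    low = (age // 10) * 10
    return f"{low}-{low + 9}"

ages = [23, 27, 24, 31, 88]
buckets = [generalize_age(a) for a in ages]
print(k_anonymity_histogram(buckets, k=3))  # {'20-29': 3}
```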
Now, under which conditions is such a mechanism private? The distribution that captures the attacker’s uncertainty must be such that for every possible value $t \in \mathcal{T}$, either this value is rare enough to be thresholded with high probability, or there is sufficient randomness in the input data that releasing the exact count does not leak too much information.
In addition, we assume that it is possible for a given record to have a special value $\bot$, representing its absence from the dataset. The count corresponding to $\bot$ is never released. We discuss the importance of this special value, and its practical interpretation, later in this section.
Theorem 5. Let $\theta$ be a distribution that generates $n$ independent records $D_1, \ldots, D_n$ in $\mathcal{T}$. Assume that there is a $p \le 1/2$ such that for every value $t \in \mathcal{T}$:
- either $\mathbb{P}\left[D_i = t\right] \le p$ for all indices $i$,
- or $p \le \mathbb{P}\left[D_i = t\right] \le 1-p$ for all indices $i$;
furthermore, assume that $p \le \mathbb{P}\left[D_i = \bot\right] \le 1-p$ for all indices $i$, and that the attacker does not have any background knowledge.
Let $k$ be the threshold used by the mechanism. Then $M_k$ is $(\varepsilon,\delta)$-APKDP, where $\varepsilon$ is the privacy loss bound of Theorem 3, instantiated with randomized response parameter $p$, and $\delta$ is the bound of Theorem 4, instantiated with threshold $k$ and with two independent random variables sampled from a binomial distribution with $n$ trials and success probability $p$.
Proof. For a given index $i$ and a possible record value $t \in \mathcal{T}$, we compare the distribution of the output conditioned on $D_i = t$ with the one conditioned on $D_i = \bot$. If we find $\varepsilon$ and $\delta$ such that these two distributions are $(\varepsilon,\delta)$-indistinguishable for every $t$, then combining the corresponding bounds gives indistinguishability between any two values of $D_i$, which would conclude the proof immediately.
There are two cases to consider: either $\mathbb{P}\left[D_j = t\right] \le p$ for all indices $j$, or $p \le \mathbb{P}\left[D_j = t\right] \le 1-p$ for all indices $j$.
In the first case, we can reuse the analysis of Section 3.2.2: with probability at least $1-\delta$, the count for $t$ is thresholded, and the corresponding privacy loss is bounded by $\varepsilon$, following the same reasoning as in the proof of Theorem 4. Importantly, in this case, comparing $D_i = t$ and $D_i = \bot$ allows us to restrict our analysis to the count for $t$: the distributions of the counts of all other released values are the same whether we condition on $D_i = t$ or on $D_i = \bot$, since the count for $\bot$ is never released.
In the second case, we reuse the analysis of Section 3.2.1: the distribution of the count for $t$ over the other records can be seen as the sum of $n-1$ records, each of which has been randomized using a binary randomized response with parameter $p$. Since the record $D_i$ also follows this binary randomized response process, we can directly apply the proof of Theorem 3 with parameter $p$.
Combining both cases directly leads to the desired result, using the indistinguishability between $D_i = t$ and $D_i = \bot$ to obtain indistinguishability between arbitrary values $t$ and $t'$. □
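The sketch below gives a numerical feel for the second case of the proof, under illustrative parameters of our choosing: the released count for a common value $t$ is a Poisson-binomial sum over the other records, and conditioning on $D_i = t$ versus $D_i = \bot$ only shifts that sum by one, so the pointwise privacy loss stays bounded outside a negligible-probability tail.

```python
import math
import random

def poisson_binomial_pmf(qs):
    """Exact pmf of a sum of independent Bernoulli(q) variables, via dynamic programming."""
    pmf = [1.0]
    for q in qs:
        nxt = [0.0] * (len(pmf) + 1)
        for count, prob in enumerate(pmf):
            nxt[count] += prob * (1 - q)      # this record does not have value t
            nxt[count + 1] += prob * q        # this record has value t
        pmf = nxt
    return pmf

random.seed(1)
n, p = 1000, 0.3
# Illustrative attacker uncertainty: each other record has value t with probability in [p, 1-p].
qs = [random.uniform(p, 1 - p) for _ in range(n - 1)]
pmf = poisson_binomial_pmf(qs)

# Privacy loss at an observed count c:
#   ln( P[count = c | D_i = t] / P[count = c | D_i = bottom] ) = ln( pmf[c-1] / pmf[c] ).
# Restricting to counts outside a tiny-probability tail keeps this loss moderate.
losses = [abs(math.log(pmf[c - 1] / pmf[c])) for c in range(1, n) if pmf[c] > 1e-12]
print(f"max privacy loss outside the tail: {max(losses):.2f}")
```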
Theorem 5 is relatively complex, and depends on a number of conditions. Let us discuss its limitations. Some of these conditions are necessary for the result to hold; others could be relaxed with a more careful analysis, at the cost of simplicity.
First, we assume that the attacker has no partial knowledge of the data. The result can easily be extended to the case where the attacker has passive partial knowledge of $m$ records: for the counting case, we can simply remove these records and obtain the same results with $n-m$ instead of $n$, and for the thresholding case, we can apply Theorem 4 directly. The discussion following Theorem 4 shows that the result cannot easily be extended to the case where the attacker has the ability to influence the data, unless only a very small number of records can be influenced (as in Proposition 28). This captures the correct intuition that $k$-anonymity is vulnerable to active attackers.
Second, the conditions on the distribution might seem artificial, carefully chosen so the previous results can be applied. Why would there be a value $p$ such that every record has either a probability lower than $p$ of being in a fixed category, or a probability larger than $p$? The first option is reasonable: many real-life distributions are long-tailed, and some types of actions or characteristics are simply very rare. The second option is less natural: a characteristic that is common among many people might be extremely rare in others, so requiring all records to have a high enough probability for this value seems too restrictive. However, note that this probability captures the attacker’s uncertainty: if the attacker knows that some records have a particularly low probability of having a certain value, it is possible to over-approximate this knowledge, and simply consider these records as known by the attacker. We can then use the previous point to still get an upper bound on the attacker’s information gain.
Third, what is the meaning of the special value $\bot$, and is it necessary for the proof of Theorem 5 to work? We use it to prove the desired indistinguishability property in the second case of the proof. Without it, subtle problems can arise. Suppose, for example, that $\mathcal{T} = \{a, b, c\}$, and that for all $i$, $\mathbb{P}\left[D_i = c\right]$ is infinitesimally small, while $\mathbb{P}\left[D_i = a\right]$ and $\mathbb{P}\left[D_i = b\right]$ are both close to $1/2$. If the total number of records $n$ is fixed (and implicitly assumed to be known by the attacker), then thresholding the count for $c$ is pointless: with high probability, we can retrieve it by computing the difference between $n$ and the counts for $a$ and $b$. This phenomenon is a real vulnerability of $k$-anonymity when the total number of participants is known: any result showing that $k$-anonymity protects privacy under partial knowledge must find a way of guaranteeing that this does not happen.
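A hedged simulation of this subtraction attack, with a hypothetical three-value distribution of our choosing (no $\bot$ value, total size $n$ known to the attacker):

```python
from collections import Counter
import random

random.seed(0)
n, threshold = 1000, 5
# Hypothetical distribution: two common values 'a' and 'b', one very rare value 'c'.
data = random.choices(["a", "b", "c"], weights=[0.4995, 0.4995, 0.001], k=n)

counts = Counter(data)
released = {v: c for v, c in counts.items() if c >= threshold}  # 'c' is almost surely suppressed

# If the attacker knows n, the suppressed count leaks by simple subtraction:
inferred_c = n - sum(released.values())
print(inferred_c, counts["c"])  # equal whenever 'c' is the only suppressed value
```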
Creating an artificial category whose count is never released solves this problem, assuming that this category has sufficient uncertainty: it hides the total number of participants and mitigates this vulnerability. Another way would be to require that the distribution has multiple values whose counts will likely be thresholded, and that these together have enough uncertainty to hide the total count. This is also realistic in practice, given that most distributions are long-tailed, but it would likely require a more complex analysis and complicate the theorem statement.
Note that a link between $k$-anonymity and differential privacy was already introduced in [255]. We use the same notion of a $k$-anonymity mechanism; however, we model the attacker’s partial knowledge differently. In [255], the attacker is assumed to know the value of every single record from the original dataset, but not which records have been randomly sampled from it. Arguably, the only way to satisfy that assumption in practice is to have the mechanism actually sample the data before applying $k$-anonymity; in that case, the original differential privacy definition is satisfied. By contrast, our setting assumes an attacker that has some uncertainty about the values of the records themselves; we argue that this is a much more natural way of capturing the assumption that the attacker has partial knowledge of the data.
How strong are the privacy parameters provided by Theorem 5 for realistic use cases? First, note that as shown in Figure 3.4, whenever the parameter $\delta$ from the thresholding operation is reasonably low, the corresponding $\varepsilon$ is much lower than the $\varepsilon$ values from the results on the privacy of noiseless aggregations (Section 3.2.1). Thus, to use Theorem 5 in practice, one would presumably need to:
1. first, fix a target $\varepsilon$ and $\delta$ that we want to obtain;
2. these privacy parameters give a range of acceptable values of $p$, according to Theorem 3; we then select one such value that will serve as a boundary between “rare events” and “common events”;
3. finally, calculate the threshold $k$ based on $p$ and $\delta$, according to Theorem 4 (a small numerical sketch of this step follows the list).
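As an illustration of step 3, the sketch below picks the smallest threshold $k$ whose binomial tail probability falls below the target $\delta$; treating $\delta$ as the tail probability $\mathbb{P}\left[\mathrm{Binomial}(n, p) \ge k\right]$ is a simplifying assumption on our part, and the exact relation should be taken from Theorem 4.

```python
def binomial_tail(n: int, p: float, k: int) -> float:
    """P[Binomial(n, p) >= k], via an iterative pmf computation (numerically naive)."""
    pmf = (1 - p) ** n          # P[X = 0]
    cdf = pmf
    for i in range(1, k):
        pmf *= (n - i + 1) / i * p / (1 - p)
        cdf += pmf
    return 1.0 - cdf

def smallest_threshold(n: int, p: float, delta: float) -> int:
    """Smallest k such that the (assumed) binomial tail bound is at most delta."""
    for k in range(1, n + 1):
        if binomial_tail(n, p, k) <= delta:
            return k
    return n + 1  # no threshold small enough

# Example: n = 10_000 records, "rare" boundary p = 1e-4, target delta = 1e-6.
print(smallest_threshold(10_000, 1e-4, 1e-6))  # a small threshold, around 10 here
```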
Admittedly, the above process is not trivial to apply, and the constraints on $p$ make our second point above even more salient: this choice is brittle. All in all, our result provides an interesting link between two important notions, and formalizes a natural intuition about the inherent privacy of simple aggregation and thresholding mechanisms, but it is probably not suited for practical applications. Can it be significantly improved or extended without adding more brittle assumptions? We leave this as an open question.