Lowering the cost of anonymization

a PhD thesis

3.2.4  Composition

Composition theorems enable the modular analysis of complex systems and the continued use of mechanisms over time. In this section, we study two kinds of composition: sequential composition, introduced in Section 2.1.6, and nested composition, where post-processing noise is added to the result of the aggregation.

Sequential composition

We saw in the previous section that noiseless mechanisms can be private under partial knowledge. For such mechanisms, composition does not hold in general. We explain why dependencies between mechanisms are the root cause of this failure, and how bounding these dependencies allows us to derive usable composition results. First, we show that composition of noiseless mechanisms fails in general.

Example 8. Going back to the voting example, consider the queries “How many people voted Yes?” and “How many people who are not $i$ voted Yes?”, for some individual $i$. As shown in Section 3.2.1, each query can be private on its own. However, publishing both results reveals $i$’s vote: the composition of both queries is not private.
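To make the leak explicit (the names $C$ and $C_{-i}$ are just shorthand for the two released counts, not notation used elsewhere): if $C$ is the total number of Yes votes and $C_{-i}$ is the number of Yes votes among everyone except $i$, then

$$\text{vote}_i = C - C_{-i} \in \{0, 1\},$$

so the pair of exact answers deterministically reveals $i$'s vote, no matter how uncertain the attacker is about the other records.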

Are there special cases where noiseless counting queries can be composed? In this section, we propose a criterion, $(\varepsilon,\delta)$-bounded dependency, under which sequential composition does hold.

The core problem with Example 8 is that the two queries are heavily dependent on each other. In fact, knowing the result of the first query only leaves two options for the result of the second query: it drastically reduces an attacker’s uncertainty about the second query’s result. We show that this dependency between queries is the main obstacle to a composition result, and prove that mechanisms whose dependency is bounded (Definition 63) can actually be composed (Theorem 6).

How can we formalize the bounded dependency between mechanisms? A natural approach is to quantify how much the additional knowledge of the first mechanism impacts the privacy loss of the second mechanism.

Definition 62. Given two mechanisms $M_1$ and $M_2$, two records $a$ and $b$, two outputs $O_1$ and $O_2$, a distribution $\theta$, an index $i$, and a possible value $B$ of the background knowledge compatible with $a$ and $b$, the dependency of $M_2$ on $M_1$ to distinguish $a$ and $b$ is the function $\omega$ of $(O_1, O_2)$ that measures the difference between the privacy loss of $M_2$ once $O_1$ is known and the privacy loss of $M_2$ on its own, using the usual conventions for zero probabilities.

Intuitively, this value quantifies the amount of additional information that $O_1$ gives the attacker when analyzing the privacy loss of $M_2$. In Example 8, the first term is infinite, as knowing the results of both $M_1$ and $M_2$ leaks the value of $i$’s vote, while the second term is typically finite. So $\omega$ takes infinite values, which captures the fact that the two mechanisms together leak a lot of information.
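One natural way to write such a quantity, consistent with the description above (the notation is only an illustrative sketch: $D_i$ denotes the target record, and conditioning on $B$ means conditioning on the attacker's background knowledge taking the given value):

$$\omega(O_1, O_2) = \ln\frac{\Pr[M_2(D) = O_2 \mid M_1(D) = O_1,\, D_i = a,\, B]}{\Pr[M_2(D) = O_2 \mid M_1(D) = O_1,\, D_i = b,\, B]} - \ln\frac{\Pr[M_2(D) = O_2 \mid D_i = a,\, B]}{\Pr[M_2(D) = O_2 \mid D_i = b,\, B]}$$

where $D$ is drawn from $\theta$: the first term is the privacy loss of $M_2$ once $O_1$ is known, and the second term is the privacy loss of $M_2$ alone.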

Bounding this dependency can be done in the same way as using the PLRV to define differential privacy: we bound $\omega$ by $\varepsilon$ almost everywhere, and use a small quantity $\delta$ to capture rare events where $\omega > \varepsilon$.

Definition 63 ($(\varepsilon,\delta)$-bounded dependency). Given a family of distributions $\Theta$, two mechanisms $M_1$ and $M_2$ are $(\varepsilon,\delta)$-bounded dependent for $\Theta$ if for all $\theta \in \Theta$, all indices $i$ and records $a$ and $b$, and all compatible values $B$ of the background knowledge, the probability (over the outputs $O_1$ and $O_2$) that the dependency of $M_2$ on $M_1$ exceeds $\varepsilon$ is smaller than or equal to $\delta$.
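In the same illustrative notation as above, this condition can be sketched as a tail bound on the dependency $\omega$, over the randomness of the two outputs:

$$\Pr_{(O_1, O_2)}\big[\omega(O_1, O_2) > \varepsilon\big] \le \delta.$$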

This notion formalizes the intuition that the result of the first mechanism should not impact the result of the second mechanism “too much”. As we show in the following theorem, the dependency of $M_2$ on $M_1$ can be used to express the PLRV of the composed mechanism as a function of the PLRVs of the two original mechanisms. As a direct consequence, we show that two $(\varepsilon,\delta)$-bounded dependent mechanisms can be sequentially composed.

Theorem 6. Given a distribution $\theta$, two mechanisms $M_1$ and $M_2$, an index $i$, records $a$ and $b$, and a compatible value $B$ of the background knowledge, the PLRV of the composed mechanism $(M_1, M_2)$ decomposes as the sum of the PLRVs of $M_1$ and $M_2$, plus the dependency of $M_2$ on $M_1$ from Definition 62.
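In the same illustrative notation, writing $\mathcal{L}_{M}$ for the PLRV of a mechanism $M$ (for the fixed $\theta$, $i$, $a$, $b$, and $B$), this decomposition can be sketched as:

$$\mathcal{L}_{(M_1, M_2)}(O_1, O_2) = \mathcal{L}_{M_1}(O_1) + \mathcal{L}_{M_2}(O_2) + \omega(O_1, O_2).$$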

As a corollary, if $M_1$ and $M_2$ are $(\varepsilon_3,\delta_3)$-bounded dependent, if $M_1$ is $(\varepsilon_1,\delta_1)$-APK, and if $M_2$ is $(\varepsilon_2,\delta_2)$-APK for the same family of distributions, then the composed mechanism $(M_1, M_2)$ is $(\varepsilon_1+\varepsilon_2+\varepsilon_3,\ \delta_1+\delta_2+\delta_3)$-APK.

Proof. Fix a distribution $\theta$, an index $i$, records $a$ and $b$, and a compatible background knowledge value $B$. The main statement is straightforward to prove by decomposing the PLRV of the composed mechanism into the PLRV of $M_1$ plus the PLRV of $M_2$ conditioned on the output of $M_1$, and plugging in the definition of the dependency $\omega$.

To prove the composition theorem, we have to show that the composed mechanism is $(\varepsilon, \delta)$-APK with $\varepsilon = \varepsilon_1 + \varepsilon_2 + \varepsilon_3$ and $\delta = \delta_1 + \delta_2 + \delta_3$.

Note that the function $f : x \mapsto \max(0, 1 - e^{-x})$ is subadditive: for all $x$ and $y$, $f(x+y) \le f(x) + f(y)$. Indeed, if $x \le 0$, then $f(x) = 0$, and it is straightforward to verify that $f(x+y) \le f(y) \le f(x) + f(y)$. The same reasoning holds when $y \le 0$. Finally, if $x > 0$ and $y > 0$, then we must show that $1 - e^{-x-y} \le (1 - e^{-x}) + (1 - e^{-y})$, which is equivalent to $(1 - e^{-x})(1 - e^{-y}) \ge 0$, which trivially holds.

We then express the $\delta$ term of the composed mechanism as $\mathbb{E}[f(\mathcal{L}_{12} - \varepsilon)]$, where $\mathcal{L}_{12}$ denotes the PLRV of the composed mechanism, and we use the subadditivity property: since $\mathcal{L}_{12} = \mathcal{L}_1 + \mathcal{L}_2 + \omega$, we have $f(\mathcal{L}_{12} - \varepsilon) \le f(\mathcal{L}_1 - \varepsilon_1) + f(\mathcal{L}_2 - \varepsilon_2) + f(\omega - \varepsilon_3)$. Taking expectations, the first two terms are bounded by $\delta_1$ and $\delta_2$ by the APK assumptions, and the last term is bounded by $\Pr[\omega > \varepsilon_3] \le \delta_3$, since $f \le 1$ and $f(x) = 0$ for $x \le 0$. This shows the required bound of $\delta_1 + \delta_2 + \delta_3$. □

Note that the characterization of the composed PLRV in Theorem 6 enables the use of more sophisticated composition bounds for differential privacy, such as the advanced composition theorem [131], Rényi differential privacy [284], or privacy buckets [278]. For simplicity, we only used the standard (non-tight) composition bound for DP here.

A common special case directly leads to $(0,0)$-bounded dependent mechanisms: two mechanisms that work on distinct parts of a database are $(0,0)$-bounded dependent if these two parts are independent.

Proposition 29. Let $\Theta$ be a family of distributions, and let $M_1$ and $M_2$ be mechanisms. Assume that for any $\theta \in \Theta$, there are functions $f_1$ and $f_2$ such that $f_1(D)$ and $f_2(D)$ are independent for $D \sim \theta$, and functions $g_1$ and $g_2$ such that $M_1(D) = g_1(f_1(D))$ and $M_2(D) = g_2(f_2(D))$. Then $M_1$ and $M_2$ are $(0,0)$-bounded dependent.

We now present a natural example of a practical scenario where we can use these composition results.

Example 9. Consider a regularly updated database, like usage information about an online service. Statistics are computed from this database: for example, how many registered users used a specific feature on any given day. This count is released daily, and we want to understand how the privacy of a particular user is impacted over time.

This can be represented by a database where each record is a series of binary values $d_{i,1}, \ldots, d_{i,T}$, where $d_{i,t}$ indicates whether user $i$ used the feature on day $t$, and we release a series of mechanisms $M_1, \ldots, M_T$, where $M_t$ publishes the exact count for day $t$. The results of Section 3.2.1 can be used to determine the privacy of each $M_t$ depending on the data-generating distribution $\theta$. The goal is to determine the privacy of multiple queries, assuming independence between the records of different users.
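To make the setup concrete, here is a minimal simulation sketch in Python (the Markov-style correlation model and all parameter names are illustrative assumptions, not taken from the analysis):

    # Minimal sketch: n users, T daily binary values per user, correlated across
    # days through a simple "stay with yesterday's value" Markov model, and one
    # noiseless count released per day. All parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    n, T = 10_000, 7       # number of users, number of daily releases
    p_first = 0.3          # P[d[i, 0] = 1]
    p_stay = 0.8           # P[d[i, t] = d[i, t-1]]: within-user correlation

    d = np.zeros((n, T), dtype=int)
    d[:, 0] = rng.random(n) < p_first
    for t in range(1, T):
        keep = rng.random(n) < p_stay
        d[:, t] = np.where(keep, d[:, t - 1], rng.random(n) < p_first)

    # One noiseless mechanism per day: the exact count of users who used the feature.
    daily_counts = d.sum(axis=0)
    print(daily_counts)

The parameter p_stay controls how correlated the values of a single record are across days, which is exactly the quantity the rest of this example needs to bound.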

The analysis of the privacy guarantees offered by this setting over time depends on $\theta$, and on the correlations between the different values of a record. If $d_{i,t}$ is independent from $d_{i,t'}$ for all $t \neq t'$, then the result is direct. Otherwise, we must quantify the maximum amount of correlation between $d_{i,t}$ and $d_{i,t'}$. Quantifying this can be done using indistinguishability: we can assume, for example, that there is an $\varepsilon_c$ such that for all days $t \neq t'$ and all possible values, the conditional distributions of one value given another are within an $e^{\varepsilon_c}$ factor of each other, as sketched below.
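A minimal way to write such an assumption (the symbol $\varepsilon_c$ and the exact conditioning are an illustrative sketch): for all days $t \neq t'$ and all values $x, y, y' \in \{0, 1\}$,

$$\Pr[d_{i,t'} = x \mid d_{i,t} = y] \le e^{\varepsilon_c} \cdot \Pr[d_{i,t'} = x \mid d_{i,t} = y'].$$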

Under this assumption, it is easy to verify that the mechanisms $M_t$ and $M_{t'}$ have bounded dependency in the sense of Definition 63, so we can use the composition result of Theorem 6 and derive bounds on the privacy leakage over time.

This approach can be extended to other scenarios, for example if a referendum contains multiple questions whose answers are correlated, or if only a subset of users participates in each update. The latter can be represented by having each value be either a categorical value (which encodes e.g. the type of interaction) or a special value that encodes “user did not participate in this update”. The probabilities and correlation relationships of the different values associated with the same user can be set to capture different scenarios (e.g. the probability of the special value can be large, to capture a scenario where few users participate in every round).

Nested composition

The results of Sections 3.2.1 to 3.2.3 show that noiseless mechanisms can be considered private, under some additional assumptions on the attacker’s background knowledge. With enough records, even pessimistic assumptions (considering an attacker who knows a large fraction of records) can lead to very small values of $\varepsilon$ and $\delta$. However, one could still consider these assumptions too brittle, and decide to add a small amount of additional noise to the mechanism to have it satisfy differential privacy in its original form.

Such mechanisms have a double privacy guarantee: under realistic assumptions, their privacy level is very high thanks to the attacker’s uncertainty, and the additional noise provides a “worst-case” privacy level that the mechanism satisfies independently of the attacker’s capabilities. Without noise, we can use results like Theorem 3 to show that a given aggregation over $n$ records is $(\varepsilon_k,\delta_k)$-APKDP (or PPKDP), where $k$ is the number of records that the attacker knows. In situations like the ones we have seen so far, $\varepsilon_k$ and $\delta_k$ can be very small when $k$ is small, but might become unacceptably high when $k$ gets close to $n$. Adding noise can be a way to guarantee that $\varepsilon$ and $\delta$ never get above a certain point: when there is not enough randomness coming from the data anymore, the guarantee from the post-processing noise takes over. Figure 3.5 illustrates this phenomenon.
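To illustrate the worst-case floor provided by post-processing noise (this is the standard Laplace mechanism guarantee, using the noise scale from Figure 3.5 below): for a counting query, which has sensitivity 1, adding Laplace noise of scale $\lambda$ guarantees $\varepsilon$-DP with

$$\varepsilon_{\text{noise}} = \frac{1}{\lambda},$$

regardless of the attacker’s knowledge; with $\lambda = 2$ and sensitivity 1, this floor is $\varepsilon = 0.5$.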


Figure 3.5: $\varepsilon$ from the closed-form formula of Theorem 3, for fixed values of the other parameters, as a function of the number of records $k$ known by the attacker. We compare two scenarios: either we do not add any post-processing noise, or we add Laplace noise of scale 2 to the output.

Of course, it is also natural to wonder whether the two sources of uncertainty could be combined. The privacy guarantees of Theorem 3 come from the shape of the binomial distribution, just like the shape of the Laplace distribution is the reason why adding Laplace noise to the result of an aggregation can provide $\varepsilon$-DP. It seems intuitive that combining two sources of noise would have a larger effect.

In some cases, this effect can be numerically estimated. Given a noise distribution $X$ added to a mechanism of sensitivity $\Delta$, the PLRV can be obtained by comparing the distributions of $X$ and $X + \Delta$. To estimate the PLRV coming from two noise sources summed together (for example, binomial and geometric noise), we can simply compute the convolution of the two corresponding distributions, use the result to compute the PLRV, and thus obtain the $(\varepsilon,\delta)$ graph. We demonstrate this approach in Figure 3.6, where we add two-sided geometric noise (see Definition 8) to a noiseless counting query.
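The following Python sketch illustrates this numerical approach (the parameter values and helper names are ours, purely for illustration): it builds the two probability mass functions, convolves them, and computes $\delta$ as a function of $\varepsilon$ for a sensitivity-1 counting query.

    # Numerical sketch: combine binomial "noise" coming from the attacker's
    # uncertainty with two-sided geometric post-processing noise, by convolving
    # the two PMFs, then compute delta(epsilon) for a sensitivity-1 count.
    import numpy as np
    from scipy.stats import binom

    def two_sided_geometric_pmf(alpha, radius):
        """PMF of two-sided geometric noise, truncated to {-radius, ..., radius}."""
        support = np.arange(-radius, radius + 1)
        pmf = (1 - alpha) / (1 + alpha) * alpha ** np.abs(support)
        return pmf / pmf.sum()  # renormalize after truncation

    def delta_for_epsilon(pmf, epsilon):
        """delta(epsilon) for an additive noise mechanism with sensitivity 1."""
        p0 = np.concatenate((pmf, [0.0]))  # noise PMF for database D
        p1 = np.concatenate(([0.0], pmf))  # same PMF shifted by the sensitivity
        d1 = np.maximum(p0 - np.exp(epsilon) * p1, 0).sum()
        d2 = np.maximum(p1 - np.exp(epsilon) * p0, 0).sum()
        return max(d1, d2)

    # Attacker uncertainty: 1,000 unknown records, each equal to 1 with
    # probability 0.5, which acts as binomial noise on the true count.
    data_noise = binom.pmf(np.arange(1001), 1000, 0.5)

    # Post-processing noise: two-sided geometric noise with decay factor 0.5.
    post_noise = two_sided_geometric_pmf(alpha=0.5, radius=200)

    combined = np.convolve(data_noise, post_noise)  # PMF of the sum of both noises

    for eps in [0.1, 0.2, 0.5]:
        print(eps, delta_for_epsilon(data_noise, eps), delta_for_epsilon(combined, eps))

Printing the two $\delta$ values side by side shows how much the post-processing noise improves on the data-driven uncertainty alone, which is what Figure 3.6 visualizes.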


Figure 3.6: Numerical computation of the bounds given by Theorem 3 (in dashed blue), compared with the bounds obtained by adding two-sided geometric noise of parameter $p = 0.5$ or $p = 0.6$ (see Definition 8) and combining both probability distributions.

It is natural to ask whether we could obtain generic results that quantify the combined effect of noise coming from the input data and noise added after the aggregation mechanism, without numerical evaluation. In [176], the authors propose such a result, based on the fact that Gaussian distributions are closed under convolution. The noise from the input data is approximated by a Gaussian using the central limit theorem, and their Theorem 6 shows that adding Gaussian noise leads to a smaller $\delta$. However, since this $\delta$ term comes from the central limit theorem approximation, it cannot be made arbitrarily small in general.

We could solve this by simply assuming that the input data unknown to the attacker actually follows a Gaussian distribution. Sadly, the corresponding result would be very brittle: if the data distribution does not conform exactly to this assumption, then the result no longer holds. This is a major criticism of privacy definitions that assume the input data has inherent randomness [350]. Our results are not as brittle: the privacy guarantees degrade gracefully with the assumptions we make on the attacker’s partial knowledge (e.g. the number of records known, or the parameters of the data-generating distribution in Theorems 3 or 4).

Another approach would be to choose the noise added as post-processing based on the natural noise distribution emerging from the partial knowledge assumption. For example, since the proof of Theorem 3 uses the fact that the attacker’s uncertainty corresponds to binomial noise, we could also add binomial noise as post-processing, since $\mathrm{Bin}(n_1, p) + \mathrm{Bin}(n_2, p) \sim \mathrm{Bin}(n_1 + n_2, p)$ for independent binomials with the same parameter. However, this property depends on the exact value of $p$, which again creates the kind of brittleness we were trying to avoid.

The question of computing the privacy loss in situations where multiple sources of randomness are combined appears in other scenarios: amplification by sampling and amplification by shuffling are examples of such results. These two classes of results are generic: they do not depend on the exact mechanism used to obtain the initial $\varepsilon$-DP guarantee. It is unlikely that such generic results exist when combining two arbitrary sources of noise, each of which satisfies $\varepsilon$-DP on its own.

Other results depend on additional assumptions on the noise distribution, like amplification by iteration [149] or amplification by mixing and diffusion mechanisms [25]. These do not seem to bring significant improvements in scenarios like Theorem 3 with post-processing noise: amplification by iteration is tailored to the case where noise is added many times (not only once), while amplification by mixing and diffusion requires stronger assumptions on the original noise distribution.

A generic result on the privacy guarantees of chained $\varepsilon$-DP mechanisms appears in [139] (Appendix B). This tight result is only valid for pure $\varepsilon$-DP, but the main building block holds for $(\varepsilon,\delta)$-DP mechanisms: proving a fully generic chained composition result is equivalent to solving the special case where the input and output of both mechanisms have values in $\{0, 1\}$. This result can likely be extended to $(\varepsilon,\delta)$-DP, although the analysis is surprisingly non-trivial, and fully generic optimality results do not necessarily mean optimality for the special case of additive noise mechanisms.
