Lowering the cost of anonymization

a PhD thesis

2.2.2  Quantification of privacy loss (Q)

The risk model associated with differential privacy is a worst-case property: it quantifies not only over all possible neighboring datasets, but also over all possible outputs. However, in many real-life risk assessments, events with vanishingly small probability are ignored, or their risk is weighted according to their probability. It is natural to consider analogous relaxations, especially since these relaxations often have better composition properties, and enable natural mechanisms like the Gaussian mechanism to be considered private [131].

Most of the definitions within this section can be expressed using the privacy loss random variable, first defined in [110] as the adversary’s confidence gain, so we first introduce this concept. Roughly speaking, it measures how much information is revealed by the output of a mechanism.

Definition 13 (Privacy loss random variable [110]). Let $\mathcal{M}$ be a mechanism, and $D_1$ and $D_2$ two datasets. The privacy loss random variable (PLRV) between $\mathcal{M}(D_1)$ and $\mathcal{M}(D_2)$ is defined as:

$$\mathcal{L}_{\mathcal{M}(D_1)/\mathcal{M}(D_2)}(O)=\ln\left(\frac{\mathbb{P}\left[\mathcal{M}(D_1)=O\right]}{\mathbb{P}\left[\mathcal{M}(D_2)=O\right]}\right)$$

if neither $\mathbb{P}\left[\mathcal{M}(D_1)=O\right]$ nor $\mathbb{P}\left[\mathcal{M}(D_2)=O\right]$ is 0; in case only $\mathbb{P}\left[\mathcal{M}(D_2)=O\right]$ is zero then $\mathcal{L}_{\mathcal{M}(D_1)/\mathcal{M}(D_2)}(O)=\infty$, otherwise $\mathcal{L}_{\mathcal{M}(D_1)/\mathcal{M}(D_2)}(O)=-\infty$. When the mechanism $\mathcal{M}$ is clear from context, we simply write $\mathcal{L}_{D_1/D_2}$.

For simplicity, we only consider the case where the set of possible outputs of the mechanism, $\mathcal{O}$, is countable; otherwise, the PLRV can be reformulated using the density functions of $\mathcal{M}(D_1)$ and $\mathcal{M}(D_2)$ instead of the probabilities of specific outputs.
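To make Definition 13 concrete, here is a minimal Python sketch (my own illustration, not code from the referenced works; all function names are hypothetical) that computes the privacy loss random variable of binary randomized response, a simple mechanism with a countable output set.

    import math

    def randomized_response_dist(true_value: bool, p: float) -> dict:
        # Output distribution of binary randomized response: report the
        # true value with probability p, and flip it otherwise.
        return {True: p if true_value else 1 - p,
                False: (1 - p) if true_value else p}

    def privacy_loss_rv(dist1: dict, dist2: dict) -> dict:
        # Maps each output O to L(O) = ln(P[M(D1)=O] / P[M(D2)=O]),
        # with the infinite edge cases of Definition 13.
        loss = {}
        for output in set(dist1) | set(dist2):
            p1 = dist1.get(output, 0.0)
            p2 = dist2.get(output, 0.0)
            if p1 == 0 and p2 == 0:
                continue  # this output never occurs under either dataset
            elif p2 == 0:
                loss[output] = math.inf
            elif p1 == 0:
                loss[output] = -math.inf
            else:
                loss[output] = math.log(p1 / p2)
        return loss

    # Two datasets differing in one record, reduced here to a single bit:
    d1 = randomized_response_dist(True, 0.75)
    d2 = randomized_response_dist(False, 0.75)
    print(privacy_loss_rv(d1, d2))  # {True: ln(3), False: -ln(3)}

The largest absolute value taken by this privacy loss, $\ln 3\approx1.1$, is exactly the $\varepsilon$ for which randomized response with $p=0.75$ is $\varepsilon$-DP.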

Differential privacy bounds the maximum value of $\mathcal{L}_{D_1/D_2}$. Instead of considering the maximum value, which corresponds to the worst possible output, the relaxations of this section allow a small probability of error, consider the average of the privacy loss random variable, or describe its behavior in finer ways.

Allowing a small probability of error

The first option, whose introduction is commonly attributed to [126], relaxes the definition of $\varepsilon$-indistinguishability by allowing an additional small density of probability on which the upper bound does not hold. This small density, denoted $\delta$, can be used to compensate for outputs for which the privacy loss is larger than $\varepsilon$. This led to the definition of approximate differential privacy, often simply called $(\varepsilon,\delta)$-DP. This is, by far, the most commonly used relaxation in the scientific literature.

Definition 14 ($(\varepsilon,\delta)$-differential privacy [126]). Two random variables $A$ and $B$ are $(\varepsilon,\delta)$-indistinguishable, denoted $A\approx_{\varepsilon,\delta}B$, if for all measurable sets $S$ of possible events:

$$\mathbb{P}\left[A\in S\right]\le e^{\varepsilon}\,\mathbb{P}\left[B\in S\right]+\delta\quad\text{and}\quad\mathbb{P}\left[B\in S\right]\le e^{\varepsilon}\,\mathbb{P}\left[A\in S\right]+\delta$$

A privacy mechanism $\mathcal{M}$ is $(\varepsilon,\delta)$-DP (or $\delta$-approximate DP) if for any datasets $D_1$ and $D_2$ that differ only on one record, $\mathcal{M}(D_1)\approx_{\varepsilon,\delta}\mathcal{M}(D_2)$.

This definition is equivalent to Max-KL stability [37], a special case of algorithmic stability, which requires that one change in an algorithm's inputs does not change its output "too much".

The $\delta$ in $(\varepsilon,\delta)$-DP is sometimes explained as the probability that the privacy loss of the output is larger than $\varepsilon$ (or, equivalently, that the $\varepsilon$-indistinguishability formula is not satisfied). In fact, this intuition corresponds to a different definition, first introduced in [267] as probabilistic DP, also called $(\varepsilon,\delta)$-DP in distribution in [60]. A detailed explanation of the distinction between the two definitions can be found in [277].

Definition 15 ($(\varepsilon,\delta)$-probabilistic differential privacy [277]). A privacy mechanism $\mathcal{M}$ is $(\varepsilon,\delta)$-probabilistically DP (ProDP) if for any datasets $D_1$ and $D_2$ that differ only on one record, there is a set $O_1\subseteq\mathcal{O}$ with $\mathbb{P}\left[\mathcal{M}(D_1)\in O_1\right]\le\delta$, such that for all measurable sets $S\subseteq\mathcal{O}$:

$$\mathbb{P}\left[\mathcal{M}(D_1)\in S\setminus O_1\right]\le e^{\varepsilon}\cdot\mathbb{P}\left[\mathcal{M}(D_2)\in S\setminus O_1\right]$$

It is straightforward to show that $(\varepsilon,\delta)$-ProDP is stronger than $(\varepsilon,\delta)$-DP (with no change in parameters); a proof of the reverse implication (with a change in parameters) is given in [408]. Both definitions can be reformulated using the privacy loss random variable.

Proposition 8. A mechanism $\mathcal{M}$ is:

  • $\varepsilon$-DP iff $\mathcal{L}_{D_1/D_2}(O)\le\varepsilon$ for all outputs $O$ and all neighboring $D_1$ and $D_2$.
  • $(\varepsilon,\delta)$-DP iff $\mathbb{E}_{O\sim\mathcal{M}(D_1)}\left[\max\left(0,1-e^{\varepsilon-\mathcal{L}_{D_1/D_2}(O)}\right)\right]\le\delta$ for all neighboring $D_1$ and $D_2$.
  • $(\varepsilon,\delta)$-ProDP iff $\mathbb{P}_{O\sim\mathcal{M}(D_1)}\left[\mathcal{L}_{D_1/D_2}(O)>\varepsilon\right]\le\delta$ for all neighboring $D_1$ and $D_2$.
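The last two characterizations translate directly into code. The following sketch (again my own illustration, reusing privacy_loss_rv from the example above) computes the smallest $\delta$ for which two output distributions satisfy the $(\varepsilon,\delta)$-DP bound in one direction, and the corresponding ProDP $\delta$; for the symmetric guarantee, one takes the maximum over both orderings of the two distributions.

    import math

    def tightest_delta(dist1: dict, dist2: dict, epsilon: float) -> float:
        # Second bullet of Proposition 8:
        # delta = E_{O ~ dist1}[ max(0, 1 - exp(epsilon - L(O))) ].
        loss = privacy_loss_rv(dist1, dist2)
        delta = 0.0
        for output, l in loss.items():
            p1 = dist1.get(output, 0.0)
            if l == math.inf:
                delta += p1  # mass that e^epsilon can never mask
            elif l > epsilon:
                delta += p1 * (1 - math.exp(epsilon - l))
        return delta

    def prodp_delta(dist1: dict, dist2: dict, epsilon: float) -> float:
        # Third bullet: probability that the privacy loss exceeds epsilon.
        loss = privacy_loss_rv(dist1, dist2)
        return sum(dist1.get(o, 0.0) for o, l in loss.items() if l > epsilon)

On the randomized response example with $\varepsilon=0.5$, tightest_delta returns about $0.34$ while prodp_delta returns $0.75$: the $(\varepsilon,\delta)$-DP characterization only counts the excess privacy loss, while ProDP counts the full probability of the bad outputs.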

Approximate and probabilistic differential privacy can be combined to form $(\varepsilon,\delta_1,\delta_2)$-relaxed DP (RelDP) [407], which requires $(\varepsilon,\delta_1)$-DP with probability at least $1-\delta_2$.

Averaging the privacy loss

As $\varepsilon$-DP corresponds to a worst-case risk model, it is natural to consider relaxations that allow a larger privacy loss for some outputs. It is also natural to consider average-case risk models: allowing larger privacy loss values only if lower values compensate for them in other cases. One such relaxation is called Kullback-Leibler privacy [31, 88]: it considers the arithmetic mean of the privacy loss random variable, which measures how much information is revealed when the output of a private algorithm is observed.

Definition 16 ($\varepsilon$-Kullback-Leibler privacy [31, 88]). A privacy mechanism $\mathcal{M}$ is $\varepsilon$-Kullback-Leibler private (KLPr) if for all $D_1$, $D_2$ differing in one record:

$$\mathbb{E}_{O\sim\mathcal{M}(D_1)}\left[\mathcal{L}_{D_1/D_2}(O)\right]\le\varepsilon\qquad(2.2)$$

Note that this formula can be expressed as $D_{KL}\left(\mathcal{M}(D_1)\,\|\,\mathcal{M}(D_2)\right)\le\varepsilon$, where $D_{KL}$ is the Kullback-Leibler divergence.
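Continuing the running Python example, the quantity bounded by Equation (2.2) is simply the mean of the privacy loss random variable under $\mathcal{M}(D_1)$ (a sketch under the same assumptions as before, reusing privacy_loss_rv):

    def kl_divergence(dist1: dict, dist2: dict) -> float:
        # Arithmetic mean of the privacy loss random variable, i.e. the
        # left-hand side of (2.2); assumes no infinite privacy loss values.
        loss = privacy_loss_rv(dist1, dist2)
        return sum(dist1[o] * l for o, l in loss.items() if dist1.get(o, 0.0) > 0)

    # For the randomized response example: 0.75*ln(3) - 0.25*ln(3) = 0.5*ln(3).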

$\varepsilon$-KL privacy considers the arithmetic mean of the privacy loss random variable or, equivalently, the geometric mean of $e^{\mathcal{L}_{D_1/D_2}}$. This choice of averaging function does not attribute a lot of weight to worst-case events, where $\mathcal{L}_{D_1/D_2}$ takes high values. Rényi DP extends this idea by adding a parameter $\alpha$, which allows controlling the choice of averaging function by bounding the $\alpha$-th moment of the privacy loss random variable.

Definition 17 ($(\alpha,\varepsilon)$-Rényi differential privacy [284]). Given $\alpha>1$, a privacy mechanism $\mathcal{M}$ is $(\alpha,\varepsilon)$-Rényi DP (RenyiDP) if for all pairs of neighboring datasets $D_1$ and $D_2$:

$$\mathbb{E}_{O\sim\mathcal{M}(D_1)}\left[e^{(\alpha-1)\,\mathcal{L}_{D_1/D_2}(O)}\right]\le e^{(\alpha-1)\varepsilon}$$

Note that this formula can be expressed as $D_\alpha\left(\mathcal{M}(D_1)\,\|\,\mathcal{M}(D_2)\right)\le\varepsilon$, where $D_\alpha$ is the Rényi divergence of order $\alpha$.

This definition can be naturally extended by continuity to $\alpha=1$ (where it is equivalent to $\varepsilon$-KL privacy) and $\alpha=\infty$ (where it is equivalent to $\varepsilon$-DP). Larger values of $\alpha$ lead to more weight being assigned to worst-case events: $(\alpha_1,\varepsilon)$-Rényi DP implies $(\alpha_2,\varepsilon)$-Rényi DP if and only if $\alpha_1\ge\alpha_2$. Besides $\alpha=1$ and $\alpha=\infty$, Rényi DP has a simple interpretation for some values of $\alpha$: $\alpha=2$ imposes a bound on the arithmetic mean of $e^{\mathcal{L}_{D_1/D_2}}$, $\alpha=3$ imposes it on the quadratic mean, $\alpha=4$ on the cubic mean, etc. A related technique is the moments accountant [1], which keeps track of a bound on the moments of the privacy loss random variable during composition.
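The moments accountant idea can be illustrated with a short sketch. It relies on three standard facts about Rényi DP that this section does not derive (they are assumptions of the example, not claims made in the surrounding text): the Gaussian mechanism with sensitivity 1 and noise standard deviation $\sigma$ is $(\alpha,\alpha/(2\sigma^2))$-Rényi DP, Rényi DP parameters add up under composition, and $(\alpha,\varepsilon)$-Rényi DP implies $(\varepsilon+\ln(1/\delta)/(\alpha-1),\delta)$-DP.

    import math

    def gaussian_rdp_epsilon(sigma: float, alpha: float) -> float:
        # Rényi DP parameter of the Gaussian mechanism with sensitivity 1.
        return alpha / (2 * sigma ** 2)

    def rdp_to_approximate_dp(alpha: float, rdp_eps: float, delta: float) -> float:
        # Standard conversion from (alpha, eps)-Rényi DP to (eps', delta)-DP.
        return rdp_eps + math.log(1 / delta) / (alpha - 1)

    # Accounting for 100 adaptive Gaussian queries with sigma = 20: track the
    # composed guarantee at several orders, then keep the best conversion.
    delta = 1e-6
    orders = [1.5, 2, 4, 8, 16, 32, 64]
    best_eps = min(rdp_to_approximate_dp(a, 100 * gaussian_rdp_epsilon(20, a), delta)
                   for a in orders)
    print(f"overall ({best_eps:.2f}, {delta})-DP")  # ~ (2.92, 1e-06)-DP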

It is possible to use other divergence functions to obtain other relaxations. For example, in [380], the authors introduce two technical definitions, binary-$|\chi|^\alpha$ DP (b-$|\chi|^\alpha$ DP) and ternary-$|\chi|^\alpha$ DP (t-$|\chi|^\alpha$ DP), as part of a proof on amplification by sampling. Other examples of divergences can lead to other variants, like $\varepsilon$-total variation privacy [31] ($\varepsilon$-TVPr, using the total variation distance) and quantum DP [81] (QDP, using the quantum divergence).

Another possibility to average the privacy loss is to use mutual information to formalize the intuition that any individual record should not "give out too much information" on the output of the mechanism (or vice-versa). This is captured by $\varepsilon$-mutual-information DP (MIDP) [88], which guarantees that the mutual information between $\mathcal{M}(D)$ and any single record of $D$, conditioned on all other records of $D$, is under a certain threshold $\varepsilon$. The bound is taken over all possible priors on $D$, which avoids having to reason about the attacker's background knowledge. This definition, along with KL privacy, is technically stronger than approximate DP, but the change in parameters was criticized for not providing a strong enough guarantee [272].

Proposition 9. For all $\varepsilon>0$, $\alpha>1$, and $\delta\in(0,1)$:

  • $\varepsilon$-DP $\Rightarrow$ $\varepsilon\left(e^\varepsilon-1\right)$-KLPr (Lemma 1 in [88])
  • $\varepsilon$-KLPr $\Rightarrow$ $\varepsilon$-MIDP $\Rightarrow$ $\left(0,\sqrt{\varepsilon/2}\right)$-DP (Lemma 1 and 2 in [88])
  • $\varepsilon$-DP $\Rightarrow$ $(\alpha,\varepsilon)$-RényiDP $\Rightarrow$ $\left(\varepsilon+\frac{\ln(1/\delta)}{\alpha-1},\delta\right)$-DP (Theorem 21 in [26])

Controlling the tail distribution of the privacy loss

Some definitions go further than simply considering a worst-case bound on the privacy loss, or averaging it across the distribution. They try to obtain the benefits of $\varepsilon$-DP with a smaller $\varepsilon$ that holds in most cases, but control the behavior of the bad cases better than $(\varepsilon,\delta)$-DP, which allows for catastrophic privacy loss in rare cases.

The first attempt to formalize this idea was proposed in [132], where the authors introduce concentrated DP (later renamed mean-concentrated DP (mCoDP) in [57]). In this definition, a parameter $\mu$ controls the expected value of the privacy loss random variable, while a second parameter $\tau$ allows some outputs to have a larger privacy loss, requiring that the difference between the privacy loss and its expected value is subgaussian with parameter $\tau$ (i.e., its tails decay at least as fast as those of a Gaussian distribution). In [57], the authors show that this definition does not satisfy the post-processing axiom, and propose another formalization of the same idea called zero-concentrated DP (zCoDP) [57], which requires that the privacy loss random variable is concentrated around zero.

Definition 18 ($(\xi,\rho)$-zero-concentrated differential privacy [57]). A mechanism $\mathcal{M}$ is $(\xi,\rho)$-zero-concentrated DP if for all pairs of neighboring datasets $D_1$ and $D_2$ and all $\alpha>1$:

$$D_\alpha\left(\mathcal{M}(D_1)\,\|\,\mathcal{M}(D_2)\right)\le\xi+\rho\alpha$$
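As a standard worked example (not part of the original text): the Rényi divergence between two Gaussian distributions with the same variance has a simple closed form,

$$D_\alpha\left(\mathcal{N}(\mu_1,\sigma^2)\,\|\,\mathcal{N}(\mu_2,\sigma^2)\right)=\frac{\alpha(\mu_1-\mu_2)^2}{2\sigma^2},$$

so adding noise $\mathcal{N}(0,\sigma^2)$ to a query of sensitivity $\Delta$ yields $D_\alpha\le\alpha\Delta^2/(2\sigma^2)$ for all $\alpha>1$: the Gaussian mechanism is $\left(0,\Delta^2/(2\sigma^2)\right)$-zero-concentrated DP.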
Four more variants of concentrated DP exist.

  • $\delta$-approximate $(\xi,\rho)$-zero-concentrated DP [57] (AzCoDP), which relaxes $(\xi,\rho)$-zCoDP by only taking the Rényi divergence on an event of probability at least $1-\delta$ instead of on the full distribution.
  • $(\xi,\rho,\omega)$-bounded CoDP [57] (bCoDP) relaxes $(\xi,\rho)$-zCoDP by requiring the inequality to hold only for $\alpha\le\omega$.
  • $(\rho,\omega)$-truncated CoDP [56] (tCoDP[56]) relaxes $\rho$-zCoDP in the same way.
  • $(\rho,\omega)$-truncated CoDP [81] (tCoDP[81]) requires the Rényi divergence of order $\alpha$ to be smaller than $\rho$ for all $\alpha\le\omega$.

The relations between these definitions and other notions in this section are well understood. Besides the special cases (e.g., $(\rho,\infty)$-tCoDP[56] is the same as $\rho$-zCoDP) and the relations that are a direct consequence of the definitions (e.g., $(\xi,\rho)$-zCoDP is the same as the condition "$(\alpha,\xi+\rho\alpha)$-RényiDP for all $\alpha>1$"), we list known relations below.

Proposition 10. For all $\varepsilon$, $\mu$, $\tau$, $\xi$, $\rho$, and $\delta$:

  • $\varepsilon$-DP $\Rightarrow$ $\left(\frac{\varepsilon(e^\varepsilon-1)}{2},\varepsilon\right)$-mCoDP (Theorem 3.5 in [132])
  • $\varepsilon$-DP $\Rightarrow$ $\varepsilon\cdot\frac{e^\varepsilon-1}{e^\varepsilon+1}$-zCoDP (Lemma 8.3 in [57])
  • $\varepsilon$-DP $\Rightarrow$ $\frac{1}{2}\varepsilon^2$-zCoDP (Lemma 3.2 in [57])
  • $(\mu,\tau)$-mCoDP $\Rightarrow$ $\left(\mu-\frac{\tau^2}{2},\frac{\tau^2}{2}\right)$-zCoDP (Lemma 4.2 in [57])
  • $(\xi,\rho)$-zCoDP $\Rightarrow$ $\left(\xi+\rho,\sqrt{2\rho}\right)$-mCoDP (Lemma 4.3 in [57])
  • $(\xi,\rho)$-zCoDP $\Rightarrow$ $\left(\xi+\rho+2\sqrt{\rho\ln(1/\delta)},\delta\right)$-DP for all $\delta>0$ (Lemma 3.5 and 3.6 in [57])
  • $(\varepsilon,\delta)$-DP with $\delta>0$ does not imply $\rho$-zCoDP for any $\rho$ (Lemma 3.7 in [57])
  • $(\rho,\omega)$-tCoDP[56] $\Rightarrow$ $(\varepsilon,\delta)$-DP, where $\varepsilon=\rho+2\sqrt{\rho\ln(1/\delta)}$ if $\ln(1/\delta)\le(\omega-1)^2\rho$, and $\varepsilon=\rho\omega+\frac{\ln(1/\delta)}{\omega-1}$ otherwise (Lemma 6 in [56])
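The zCoDP-to-DP conversion above is straightforward to compute. The sketch below (my own illustration, assuming the $\varepsilon=\xi+\rho+2\sqrt{\rho\ln(1/\delta)}$ form stated in Proposition 10) applies it to the Gaussian mechanism example given earlier:

    import math

    def zcdp_to_approximate_dp(xi: float, rho: float, delta: float) -> float:
        # (xi, rho)-zCoDP implies (xi + rho + 2*sqrt(rho*ln(1/delta)), delta)-DP.
        return xi + rho + 2 * math.sqrt(rho * math.log(1 / delta))

    # Gaussian mechanism, sensitivity 1, sigma = 2: rho = 1 / (2 * 2**2) = 0.125.
    print(zcdp_to_approximate_dp(0.0, 0.125, 1e-6))  # ~2.75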

Extension

Most definitions of this section can be seen as bounding the divergence between $\mathcal{M}(D_1)$ and $\mathcal{M}(D_2)$, for different possible divergence functions. In [31], the authors use this fact to generalize them and define $(f,\varepsilon)$-divergence DP (DivDP), which takes the particular divergence used as a parameter $f$.

Definition 19 ($(f,\varepsilon)$-divergence differential privacy [31]). Let $f$ be a convex function such that $f(1)=0$. A privacy mechanism $\mathcal{M}$ is $(f,\varepsilon)$-divergence DP if for all pairs of neighboring datasets $D_1$, $D_2$:

$$D_f\left(\mathcal{M}(D_1)\,\|\,\mathcal{M}(D_2)\right)=\sum_{O\in\mathcal{O}}\mathbb{P}\left[\mathcal{M}(D_2)=O\right]\,f\!\left(\frac{\mathbb{P}\left[\mathcal{M}(D_1)=O\right]}{\mathbb{P}\left[\mathcal{M}(D_2)=O\right]}\right)\le\varepsilon$$

An instance of this definition was presented in [118] as $|\chi|^\alpha$-divergence DP, which requires that $D_{|\chi|^\alpha}\left(\mathcal{M}(D_1)\,\|\,\mathcal{M}(D_2)\right)\le\varepsilon$, where $D_{|\chi|^\alpha}$ is the divergence obtained with $f(x)=|x-1|^\alpha$. This definition is mainly used to prove technical results on privacy/utility trade-offs in the local model. For any $\alpha\ge1$, $\varepsilon$-DP implies $|\chi|^\alpha$-divergence DP for a suitable parameter, and when $\alpha=2$, it is equivalent (up to a change of parameters) to $(2,\varepsilon)$-RényiDP (Section 2 in [118]).
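To illustrate Definition 19, here is a small sketch of a generic $f$-divergence for countable distributions (hypothetical names, same dict-based representation as the earlier examples); instantiating $f$ recovers divergences seen earlier in this section.

    import math

    def f_divergence(f, dist1: dict, dist2: dict) -> float:
        # D_f(dist1 || dist2) = sum over outputs of p2 * f(p1 / p2).
        # Outputs where dist2 is zero are ignored in this simple sketch.
        total = 0.0
        for output, p2 in dist2.items():
            if p2 > 0:
                p1 = dist1.get(output, 0.0)
                total += p2 * f(p1 / p2)
        return total

    kl_f = lambda x: x * math.log(x) if x > 0 else 0.0  # Kullback-Leibler
    tv_f = lambda x: abs(x - 1) / 2                     # total variation

With kl_f, this matches the kl_divergence function above; with tv_f, it computes the total variation distance used by $\varepsilon$-TVPr.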

Moreover, capacity bounded differential privacy (CBDP) was introduced in [69], which uses the $\mathcal{H}$-restricted $f$-divergence: $D_{f,\mathcal{H}}\left(P\,\|\,Q\right)=\sup_{h\in\mathcal{H}}\left(\mathbb{E}_{x\sim P}\left[h(x)\right]-\mathbb{E}_{x\sim Q}\left[f^*(h(x))\right]\right)$, where $f$ is a divergence function, $\mathcal{H}$ is a family of functions, and $f^*$ is the Fenchel conjugate⁵. In other words, it requires the supremum condition to hold only for a selected set of functions (queries) instead of all possible ones. The interpretation of this definition is slightly different from other definitions in this section: $\mathcal{H}$ represents the possible attacks that the attacker is allowed to perform.

Finally, most definitions in this section taking two real-valued parameters can be extended to use a family of parameters rather than a single pair. As shown in [348] (Theorem 2) for approximate DP, probabilistic DP, and Rényi DP, finding the tightest possible family of parameters (for any of these definitions) for a given mechanism is equivalent to specifying the behavior of its privacy loss random variable entirely.

Multidimensional definitions

Allowing a small probability of error by using the same concept as the $\delta$ in $(\varepsilon,\delta)$-DP is very common; many new DP definitions were proposed in the literature with such a parameter. Unless it creates a particularly notable effect, we do not mention it explicitly and present the definitions without this parameter.

Definitions in this section can be used as standalone concepts: $(\varepsilon,\delta)$-DP is omnipresent in the literature, and the principle of averaging risk is natural enough for Rényi privacy to be used in practical settings, like posterior sampling [163] or resistance to adversarial inputs in machine learning [317]. Most variants in this section, however, are only used as technical tools to get better results on composition or privacy amplification [131, 149, 245, 380].

⁵The Fenchel conjugate of a function $f$ with domain $X$ is $f^*(y)=\sup_{x\in X}\left(\langle x,y\rangle-f(x)\right)$.
