In this section, we summarize the results of the previous seven sections in a table that lists the known relations between definitions; for each property, we either refer to the original proof or provide a novel one.
In Table 2.20, we list the differential privacy variants and extensions introduced in this work. For each, we specify its name, parameters, and where it was introduced (column 1); the dimensions it belongs to (column 2); the axioms it satisfies (column 3, post-processing on the left and convexity on the right); whether it is composable (column 4); and how it relates to other differential privacy notions (column 5). We do not list definitions whose only difference is that they apply DP to other types of input, like those from Section 2.2.3, or geolocation-specific definitions.
The references for each claim present in Table 2.20 are listed after the table, and the proofs follow in Section 2.2.9. As in the rest of this chapter, the following abbreviations are used for dimensions:
- Q: Quantification of privacy loss
- N: Neighborhood definition
- V: Variation of privacy loss
- B: Background knowledge
- F: Formalism of privacy loss
- R: Relativization of knowledge gain
- C: Computational power
|Name & references||Dimensions||P.P.¹||Cv.²||Cp.³||Relations|
| -DP, or -approximate DP , also known as max-KL stability |
| -probabilistic DP [267, 277], also known as -DP in distribution ||Q||4||6||11||-DP -ProDP -DP|
|-Relaxed DP ||Q||4||6||11||-ProDP -RelDP -DP|
|-Kullback-Leibler Pr [31, 88]||Q||5||5||11||-DP -KLPr -DP|
|-Rényi DP ||Q||5||5||11||-KLPr -RenyiDP -DP|
|binary- DP ||Q||?||?||?||b- DP -DP|
|ternary- DP ||Q||?||?||?||t- DP -DP|
|-total variation Pr ||Q||?||?||?||-TVPr b- DP|
|-quantum DP ||Q||?||?||?|
|-mutual-information DP ||Q||5||5||11||-DP -MIDP -KLPr|
|-mean concentrated DP ||Q||4||?||11||-DP -mCoDP -DP|
|-zero concentrated DP ||Q||5||5||11||-zCoDP -mCoDP|
|-approximate CoDP ||Q||5||?||11||-DP -ACoDP -zCoDP|
|-bounded CoDP ||Q||5||5||11||-bCoDP -zCoDP|
|-truncated CoDP ||Q||5||5||11||-tCoDP -DP|
|-truncated CoDP ||Q||5||5||11||-tCoDP -bCoDP|
|-divergence DP ||Q||5||5||?||-DivDP most definitions in Q|
|-divergence DP ||Q||5||5||?||-DivDP -DivDP|
|-capacity bounded DP ||Q||5||5||?||-CBDP -DivDP|
|-unbounded DP ||N||4||4||12||-DP -uBoDP -GrDP|
| -bounded DP , also known as per-person DP |
|-attribute/bit DP ||N||4||4||12||-BitDP -AttDP -BoDP|
|-element DP ||N||4||4||12||-ELDP -DP|
|-one-sided DP ||N||4||4||12||-OnSDP -BoDP|
|-sensitive privacy ||N||4||4||12||-SenPr --OnSDP|
|-anomaly-restricted DP ||N||4||4||12||-ARDP -DP|
| -group DP , also known as DP under correlation |
|-dependent DP ||N||4||4||12||-DepDP -GrDP|
|-Bayesian DP ||N||4||4||12||-BayDP -DepDP|
|-correlated DP [392, 393]||N||4||4||12||-CorDP -BayDP|
|-prior DP ||N||4||4||12||-PriDP -BayDP|
|-free lunch Pr ||N||4||4||12||-FLPr all definitions in N|
| -individual DP , also known as conditioned DP |
|-per-instance DP ||N||4||4||12||-PIDP -IndDP|
|-generic DP [145, 227]||N||4||4||12||-GcDP most definitions in N|
| -constrained DP , also known as adjacent DP and DP under a neighborhood |
|-distributional Pr ||N||4||4||12||-DlPr -GcDP|
|-sensitivity-induced DP ||N||4||4||12||-SIDP -GcDP|
|-induced-neighbors DP ||N||4||4||12||-INDP -GcDP|
|-blowfish Pr [187, 193]||N||4||4||12||-BFPr -GcDP -INDP|
|-adjacency-relation div. DP ||Q,N||5, 4||5, 4||?||-GcDP -ARDDP -DivDP|
| -personalized DP [134, 165, 212, 261, 302], also known as heterogeneous DP |
|-tailored DP ||V||7||7||12||-TaiDP -PerDP|
|-outlier Pr ||V||7||7||12||-OutPr -TaiDP|
|-simple-outlier Pr ||V||7||7||12||-SOPr -OutPr|
|-simple outlier DP ||V||7||7||12||-DP -SODP -OutPr|
|-staircase outlier DP ||V||7||7||12||-SCODP -OutPr|
|-Pareto DP ||V||7||7||12||-ParDP -TaiDP|
|-random DP ||V||?||8||11||-RanDP -DP|
| -predictive DP , also known as model-specific DP ||V||?||?||?||-RanDP -PredDP -DP|
|-generalized DP ||V||?||?||?||-GdDP -DP|
| -Pr , also known as extended DP |
|-weighted DP ||N,V||7||7||12||-WeiDP -Pr|
|-smooth DP ||N,V||7||7||12||-SmoDP -Pr|
|-earth mover’s Pr ||N,V||7||7||12||-EMDP -Pr|
|-DP on location set ||N,V||7||7||12||-LocSetDP|
|-distributional Pr [333, 409]||N,V||?||?||?||-FLPr -DlPr[53, 333]|
|-endogenous DP ||Q,V||7||7||12||-DP -EndDP -PerDP|
|-weak Bayesian DP ||Q,V||4||?||12||-DP -WBDP -RanDP|
| -on average KL Pr [150, 381], also known as average leave-one-out KL stability ||Q,V||5,4||?||?¹⁶||-KLPr -avgKLPr -RanDP|
|-Bayesian DP ||Q,V||4||8||12||-ProDP -BayDP -RanDP|
|-privacy at risk ||Q,V||4||8||12||-ProDP -PAR -RanDP|
|-pseudo-metric DP ||Q,N,V||?||?||11||-DP -PsDP -Pr|
|-extended divergence DP ||Q,N,V||7||7||?||-Pr -EDivDP -Div DP|
|-generic DP ||Q,N,V||4||4||?||-GcDP -DP|
|-abstract DP ||Q,N,V||4||4||?||-AbsDP -GcDP|
|-noiseless Pr [46, 116, 164, 391]||B||4||4||13||-DP -NPr|
|-causal DP ||B||4||4||13||-CausDP -DP|
|-DP under sampling ||Q,B||4||4||13||-NPr -SamDP -DP|
|-active PK DP [36, 46, 100]||Q,B||4||4||13||-APKDP -NPr|
|-passive PK DP ||Q,B||4||4||13||-APKDP -PPKDP -NPr|
|-pufferfish Pr [206, 219, 228]||N,B||4||4||13||-NPr -PFPr -GcDP|
|-Bayesian DP ||N,B||4||4||13||BayDP -PFPr|
|-distribution Pr ||Q,N,B||7||7||13||-DnPr -APKDP|
|-profile-based DP ||Q,N,B||7||7||13||-PBDP -DnPr|
|-probabilistic DnPr ||Q,N,B||4||6||13||-ProDP -PDnPr -DnPr|
|-divergence DnPr ||Q,N,B||7||7||13||-DP -DDnPr -DnPr|
|-extended DnPr ||N,V,B||7||7||13||-Pr -EDnPr -DnPr|
|-ext. div. DnPr ||Q,N,V,B||7||7||13||-DDPr -EDDnPr -EDnPr|
|-indistinguishable Pr ||F||14||14||14||-IndPr -DP|
|-DP ||Q,F||4||?||4||-DP -DP|
|Gaussian DP ||Q,F||4||?||4||GaussDP -Pr|
|-positive membership Pr ||B,F||9||9||13||-PMPr -BoDP|
|-negative membership Pr ||B,F||9||9||13||-NMPr -BoDP|
|-membership Pr ||B,F||9||9||13||-PMPr -MPr -NMPr|
| -adversarial Pr [326, 391], also known as information privacy ||Q,B,F||9||9||13||-DP -AdvPr -PMPr|
|-aposteriori noiseless Pr ||B,F||9||9||?||-ANPr -NPr|
|-semantic Pr [158, 217]||F||?||?||?||-SemPr -DP|
|-range-bounded Pr ||F||?||?||?||-RBPr -DP|
|-inference-based causal DP ||B,F||?||?||?||-IBCDP -CausDP|
|-information Pr ||N,B,F||?||?||?||-InfPr -DP|
|-zero-knowledge Pr ||R||4||4||?¹⁶||-ZKPr -DP|
|-bounded leakage DP ||Q,R||4||4||11||-BLDP -DP|
|-coupled-worlds Pr ||N,B,R||4||4||7||-CWPr -DP|
|-distributional DP ||N,B,R||4||4||7||-CWPr -DistDP -DP|
|-inference-based CW Pr ||Q,N,B,F,R||?||?||7||-IBCWPr -CWPr|
|-inference-based DistDP ||Q,N,B,F,R||?||?||7||-DDP -IBDDP -IBCWPr|
|-typical stability ||Q,V,R||4||?||11|
|-SIM-computational DP ||C||10||10||17||-SimCDP -DP|
|-IND-computational DP ||C||10||10||17||-IndCDP -SimCDP|
|-DP for Record Linkage ||C||10||10||17||-RLDP -OCDP|
|-output constrained DP ||N,C||10||10||17||-OCDP -IndCDP|
|-computational ZK Pr ||R,C||10||10||?||-CZKPr -ZKPr|
4 See Proposition 11.
5 See Proposition 12.
6 See Proposition 13.
7 See Proposition 14.
8 See Proposition 15.
9 See Proposition 16.
10 See Proposition 17.
11 See Proposition 18.
12 See Proposition 19.
13 See Proposition 20.
14 Follows directly from its equivalence to -DP.
15 A modified definition was presented in , which is an instance of PF Pr.
16 A proof for a restricted scenario appears in the paper introducing the definition.
17 This claim appears in , its proof is in the unpublished full version.
We first list known results on which variants and extensions satisfy the privacy axioms and prove additional results; we then do the same for composition.
- ProDP, ACoDP, and mCoDP do not satisfy the post-processing axiom [57, 277].
- AbsDP satisfies neither privacy axiom, while GlDP satisfies both [225, 228]²².
- WBDP satisfies the post-processing axiom.
- TypSt satisfies the post-processing axiom.
- GaussDP satisfies the post-processing axiom.
- PFPr satisfies both privacy axioms.
- CWPr satisfies both privacy axioms²³.
- APKDP and PPKDP satisfy both privacy axioms.
- BLDP satisfies both privacy axioms.
Proof. The post-processing axiom follows directly from the monotonicity property of the -divergence. The convexity axiom follows directly from the joint convexity property of the -divergence. □
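For concreteness, the two standard divergence properties invoked above can be stated explicitly. Writing $D$ for a generic divergence (the specific divergence symbol is elided above), they are the data-processing inequality and joint convexity:

```latex
% Data-processing inequality: post-processing both distributions through the
% same channel K cannot increase the divergence (yields the post-processing axiom).
D\bigl(K \circ P \,\|\, K \circ Q\bigr) \le D(P \,\|\, Q)

% Joint convexity: a mixture of mechanisms has divergence at most the
% corresponding mixture of divergences (yields the convexity axiom).
D\bigl(\lambda P_1 + (1-\lambda) P_2 \,\big\|\, \lambda Q_1 + (1-\lambda) Q_2\bigr)
  \le \lambda\, D(P_1 \,\|\, Q_1) + (1-\lambda)\, D(P_2 \,\|\, Q_2)
```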
Proof. Consider the following mechanisms and , with input and output in .
- , with probability , and with probability .
Both mechanisms are -ProDP. Now, consider the mechanism which applies with probability and with probability . is a convex combination of and , but the reader can verify that it is not -ProDP. The result for -ACoDP is a direct corollary, since it is equivalent to -ProDP when . □
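The concrete mechanisms in the proof above were lost in extraction; a standard counterexample of the same flavor, with hypothetical binary mechanisms M1 and M2 on neighboring inputs 0 and 1, can be checked numerically:

```python
import math

# Hypothetical reconstruction of a ProDP convexity counterexample (not the
# exact mechanisms from the proof above, which were lost in extraction).
# Inputs and outputs are in {0, 1}; the two inputs are neighbors.
delta = 0.1
eps = math.log(1 / (1 - delta))  # both mechanisms are (eps, delta)-ProDP

# M1(0) = 0 always; M1(1) = 1 w.p. delta, else 0. M2 is the mirror image.
m1 = {0: {0: 1.0, 1: 0.0}, 1: {0: 1 - delta, 1: delta}}
m2 = {0: {0: delta, 1: 1 - delta}, 1: {0: 0.0, 1: 1.0}}

def bad_mass(mech, d, d_prime, eps):
    """Probability mass of outputs whose absolute privacy loss exceeds eps."""
    mass = 0.0
    for o, p in mech[d].items():
        q = mech[d_prime][o]
        if p > 0 and (q == 0 or abs(math.log(p / q)) > eps + 1e-9):
            mass += p
    return mass

# Each mechanism alone is (eps, delta)-ProDP in both directions:
for m in (m1, m2):
    assert max(bad_mass(m, 0, 1, eps), bad_mass(m, 1, 0, eps)) <= delta + 1e-9

# The uniform mixture is not: every output has privacy loss
# log((1+delta)/(1-delta)) > eps, so the "bad" mass is 1, far above delta.
mix = {d: {o: (m1[d][o] + m2[d][o]) / 2 for o in (0, 1)} for d in (0, 1)}
assert bad_mass(mix, 0, 1, eps) > delta
```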
Proof. The proof of Proposition 11 for PFPr (Appendix B in ) is a proof by case analysis on every possible protected property. The fact that is the same for every protected property has no influence on the proof, so we can directly adapt the proof to -Pr, and its combination with PFPr. Similarly, the proof can be extended to arbitrary divergence functions, like in Proposition 12. □
Proof. Let be the uniform distribution on , let be generated by picking records according to , and by flipping one record at random. Let return if all records are , and otherwise. Let return if all records are , and otherwise.
Note that both mechanisms are -RanDP. Indeed, will only return for with probability , and for with probability (if only has one , which happens with probability , and this record is flipped, which happens with probability ). In both cases, will return for the other database; which will be a distinguishing event. Otherwise, will return for both databases, so . The reasoning is the same for .
However, the mechanism obtained by applying either or uniformly randomly does not satisfy -RanDP: the indistinguishability property does not hold if or have all their records set to either or , which happens twice as often as either option alone. □
Proof. We prove it for AdvPr. A mechanism satisfies -AdvPr if for all , , and , . We first prove that it satisfies the convexity axiom. Suppose is a convex combination of and . Simplifying into , we have:
Denoting for , this gives:
The proof for the post-processing axiom is similar, summing over all possible outputs . It is straightforward to adapt the proof to all other definitions which change the shape of the prior-posterior bounds. □
Proposition 17. Both versions of CDP satisfy both privacy axioms, where the post-processing axiom is modified to only allow post-processing with functions computable by a probabilistic polynomial-time Turing machine. As a direct corollary of Proposition 11 for CWPr, CZKPr also satisfies both privacy axioms.
Proof. For Ind-CDP and the post-processing axiom, the proof is straightforward: if post-processing the output could break the -indistinguishability property, the attacker could do this on the original output and break the -indistinguishability property of the original definition.
For Ind-CDP and the convexity axiom, without loss of generality, we can assume that the sets of possible outputs of both mechanisms are disjoint (otherwise, this gives strictly less information to the attacker). The proof is then the same as for the post-processing axiom.
For SimCDP, applying the same post-processing function to the “true” differentially private mechanism immediately leads to the result, since DP satisfies post-processing. The same reasoning holds for convexity. □
In this section, if $M_1$ and $M_2$ are two mechanisms, we denote by $M_{1,2}$ the mechanism defined by $M_{1,2}(D) = \left(M_1(D), M_2(D)\right)$.
- -DP and -DP, then is -DP .
- -ProDP and -ProDP, then is -ProDP [60, 267, 277].
- -RelDP and -RelDP, then is -RelDP .
- -MIDP and -MIDP, then is -MIDP .
- -KLDP and -KLDP, then is -KLDP [31, 88].
- -RenDP and -RenDP, then is -RenyiDP .
- -mCoDP and -mCoDP, then is -mCoDP [57, 132].
- -zCoDP and -zCoDP, then is -zCoDP .
- -ACoDP and -ACoDP, then is -ACoDP .
- -bCoDP and -bCoDP, then is -bCoDP .
- -CCoDP and -CCoDP, then is -CCoDP .
- -DP and -DP, then is -DP .
- -RanDP and -RanDP, then is -RanDP .
- -BayDP and -BayDP, then is -BayDP. The same result holds for WBDP as an immediate consequence of Theorem 1 in .
- -TypSt and -TypSt, then is -TypSt .
- -PsDP and -PsDP, then is -PsDP .
- -BLDP and -BLDP, then is -BLDP, where is the concatenation of and (where the randomness is not shared between and , nor between and ) .
Proof. The proof is essentially the same as for -DP. $M_1$'s randomness is independent from $M_2$'s, so:
$$\Pr\left[M_{1,2}(D) = (x, y)\right] = \Pr\left[M_1(D) = x\right] \cdot \Pr\left[M_2(D) = y\right] \le e^{\varepsilon_1 + \varepsilon_2} \Pr\left[M_1(D') = x\right] \cdot \Pr\left[M_2(D') = y\right] = e^{\varepsilon_1 + \varepsilon_2} \Pr\left[M_{1,2}(D') = (x, y)\right].$$
Each definition listed in Proposition 18 can also be combined with -privacy, and the composition proofs can be similarly adapted. □
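As a concrete sanity check of this composition pattern (a hypothetical illustration, not part of the elided proof): for two independent Laplace mechanisms answering a sensitivity-1 query, the joint density factors, so the privacy loss of the pair is the sum of the individual losses:

```python
import math

def laplace_pdf(x, mu, b):
    # Density of the Laplace distribution with mean mu and scale b.
    return math.exp(-abs(x - mu) / b) / (2 * b)

# Hypothetical example: a sensitivity-1 counting query with true answers 0
# and 1 on two neighboring databases, answered by independent Laplace
# mechanisms with budgets eps1 and eps2 (scale 1/eps each).
eps1, eps2 = 0.5, 0.3
b1, b2 = 1 / eps1, 1 / eps2

grid = [i / 10 for i in range(-100, 101)]

def worst_loss(b):
    # Worst-case absolute privacy loss of one Laplace mechanism over the grid.
    return max(abs(math.log(laplace_pdf(x, 0, b) / laplace_pdf(x, 1, b)))
               for x in grid)

# Each mechanism alone satisfies its own eps_i bound ...
assert worst_loss(b1) <= eps1 + 1e-9
assert worst_loss(b2) <= eps2 + 1e-9

# ... and since the randomness is independent, the loss of the pair at output
# (x, y) is the sum of the two losses, hence bounded by eps1 + eps2.
joint_loss = max(
    abs(math.log(laplace_pdf(x, 0, b1) / laplace_pdf(x, 1, b1)))
    + abs(math.log(laplace_pdf(y, 0, b2) / laplace_pdf(y, 1, b2)))
    for x in grid for y in grid)
assert joint_loss <= eps1 + eps2 + 1e-9
```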
Proof. The proof of Proposition 19 cannot be adapted to a context in which the attacker has limited background knowledge: as the randomness partially comes from the data-generating distribution, the two probabilities are no longer independent. A typical example considers two mechanisms which answer e.g., queries “how many records satisfy property ” and “how many records satisfy property and have an ID different from 4217”: the randomness in the data might make each query private, but the combination of two queries trivially reveals something about a particular user. Variants of this proof can easily be obtained for all definitions with limited background knowledge. □
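The two-query attack described above can be made concrete (a hypothetical sketch; the predicate and database size are invented for illustration, the record ID 4217 is from the example above). Each count in isolation inherits "noise" from the random data, but the difference of the two answers deterministically reveals record 4217:

```python
import random

# Hypothetical sketch: each record satisfies property P independently at
# random, so either count alone looks noisy, but the pair of answers exactly
# reveals whether record 4217 satisfies P.
random.seed(0)
records = {uid: random.randint(0, 1) for uid in range(10000)}  # bit = P holds

q1 = sum(records.values())                                # records with P
q2 = sum(v for uid, v in records.items() if uid != 4217)  # ... and ID != 4217

assert q1 - q2 == records[4217]  # the combined answers are not private at all
```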