2.2.10 Related work
In this section, we elaborate on our criteria for excluding certain data privacy definitions from our work, list relevant definitions that were excluded by the criteria presented in Section 2.2.1, and discuss related work and existing surveys in the field of data privacy.
Out-of-scope definitions
As detailed in Section 2.2.1, we considered certain data privacy definitions to be out of scope for our work, even when they seem to be related to differential privacy. This section elaborates on such definitions.
Some definitions do not provide clear semantic privacy guarantees, or are only used as tools to prove links between existing definitions. As such, we did not include them in our survey.
- ε-privacy, introduced in [MGG09], was a first attempt at formalizing an adversary with restricted background knowledge. Its formulation does not provide a semantic guarantee, and it was superseded by noiseless privacy [Dua09, BBG11] (introduced in Section 2.2.5).
- Relaxed indistinguishability, introduced in [RHMS09], is a relaxation of adversarial privacy that provides plausible deniability by requiring, for each tuple, that at least k tuples exist with ε-indistinguishability. It does not provide any guarantee against Bayesian adversaries.
- Differential identifiability, introduced in [LC12], bounds the probability that a given individual’s information is included in the input dataset, but does not measure the change in probabilities between the two alternatives. As such, it does not provide any guarantee against Bayesian adversaries24.
- Crowd-blending privacy, introduced in [GHLP12], combines differential privacy with k-anonymity. As it is strictly weaker than the guarantee offered by any mechanism which always returns a k-anonymous dataset, the guarantees it provides against a Bayesian adversary are unclear. It is mainly used to show that combining crowd-blending privacy with pre-sampling implies zero-knowledge privacy [GHLP12, LP15].
- Membership privacy25, introduced in [SMS19], is tailored to membership inference attacks on machine learning models; the guarantees it provides are not clear.
- (k, ε)-anonymity, introduced in [HABMA17], first performs k-anonymisation on a subset of the quasi-identifiers and then applies ε-DP on the remaining quasi-identifiers, with different parameter settings for each equivalence class of the k-anonymous dataset. The semantic guarantees of this definition are not made explicit.
- Posteriori DP, introduced in [WYZ16], compares two posteriors in a way similar to inferential privacy, but does not make the prior (and thus, the attacker model) explicit.
- Noiseless privacy26, introduced in [Far19], limits the change in the number of possible outputs when one record in the dataset changes. As it does not bound the change in probabilities of the mechanism, it does not seem to offer clear guarantees against a Bayesian adversary.
- Weak DP, introduced in [WSMD20], adapts DP for streams, but it only provides a DP guarantee for the average of all possible mechanism outputs27, rather than for the mechanism itself. Thus, its semantic guarantees are also unclear.
- Error Preserving Privacy, introduced in [DNGRV18], states that the variance of the adversary’s error when trying to guess a given user’s record does not change significantly after accessing the output of the mechanism. The exact adversary model is not specified.
An important technical tool used when designing differentially private mechanisms is the sensitivity of the function one tries to compute. There are many variants of the initial concept of global sensitivity [DMNS06], including local sensitivity [NRS07], smooth sensitivity [NRS07], restricted sensitivity [BBDS13], empirical sensitivity [CZ13], empirical differential privacy28,29 [ASV13], recommendation-aware sensitivity [ZLR13], record and correlated sensitivity [ZXLZ15], dependence sensitivity [LCM16], per-instance sensitivity [Wan17], individual sensitivity [CD20], elastic sensitivity [JNS18], and derivative sensitivity [LPM18]. These notions only change how to achieve a given privacy definition (typically DP), and are not relevant to the definition itself, so we did not consider them in our work.
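To make the role of sensitivity concrete, the following sketch (in Python; the function and variable names are ours, for illustration only) shows the classical Laplace mechanism calibrated to the global sensitivity of a counting query, which has sensitivity 1 since adding or removing one record changes the count by at most 1.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release `true_value` with epsilon-DP by adding Laplace noise
    whose scale is calibrated to the query's global sensitivity."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Toy example: counting how many records satisfy a predicate.
ages = [12, 35, 7, 42, 18]                      # hypothetical dataset
true_count = sum(1 for a in ages if a >= 18)    # global sensitivity is 1
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(noisy_count)
```

The refined sensitivity notions listed above change how the `sensitivity` value is computed (for instance, locally around the actual dataset), not the privacy definition that the resulting mechanism targets.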
Local model and other contexts
In this work, we focused on DP variants/extensions typically used in the global model, in which a central entity has access to the whole dataset. It is also possible to use DP in other contexts, without formally changing the definition. The main alternative is the local model, where each individual randomizes their own data before sending it to an aggregator. This model, formally introduced in [DJW13], is used, e.g., by Google [EPK14], Apple [Tea17], and Microsoft [DKY17]. These models can be thought of as different ways of deploying a given privacy definition, rather than as distinct definitions.
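As an illustration of the local model, here is a minimal sketch (in Python; all names and parameters are ours) of the classical randomized response mechanism: each user perturbs a single binary attribute before sending it, and the aggregator debiases the noisy counts. With the reporting probability used below, each report satisfies ε-local DP.

```python
import math
import random

def randomized_response(true_bit: bool, epsilon: float) -> bool:
    """Per-user randomizer: report the true bit with probability
    e^eps / (e^eps + 1), otherwise flip it (epsilon-local DP)."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return true_bit if random.random() < p_truth else not true_bit

def estimate_proportion(reports, epsilon):
    """Aggregator side: invert the known randomization to estimate
    the true proportion of 1s from the noisy reports."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

# Toy example: 10,000 users, 30% of whom have the sensitive attribute.
epsilon = 1.0
truth = [random.random() < 0.3 for _ in range(10_000)]
reports = [randomized_response(b, epsilon) for b in truth]
print(estimate_proportion(reports, epsilon))  # close to 0.3 in expectation
```

Real deployments such as [EPK14] use more elaborate encodings (e.g., for strings or histograms), but the principle of randomizing on the user's device is the same.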
Many definitions we listed were initially presented in the local model, such as d_X-privacy [CABP13], geo-indistinguishability [ABCP13], earth mover’s privacy [FDM19], location privacy [EG16], profile-based DP [GC19], divergence DP and smooth DP from [BD14], and extended DP, distribution privacy, and extended distribution privacy from [KM19b].
Below, we list definitions that are identical to previously listed ones but are used in a different attacker setting; the list also includes alternatives to the local and global models.
- In [SCR11], the authors introduce distributed DP, which corresponds to local DP, with the additional assumption that only a portion of participants are honest.
- In [KPRU14], the authors define joint DP to model a game in which each player cannot learn the data of any other player, but can still observe the influence of their own data on the mechanism output. In [WHWX16], the authors define a slightly different version of this idea, multiparty DP, in which the view of each subgroup of players is differentially private with respect to the other players’ inputs.
- In [BEM17], the authors define DP in the shuffled model, which falls in between the global and the local models: the local model is augmented with an anonymous channel that randomly permutes a set of user-supplied messages, and differential privacy is only required to hold for the output of the shuffler (a minimal sketch of this setting is given after this list).
- In [JLT18], the authors define localized information privacy, a local version of information privacy (mentioned in Section 2.2.6).
- In [MK19], the authors define utility-optimized local DP, a local version of one-sided differential privacy (mentioned in Section 2.2.3) which additionally guarantees that if the data is considered sensitive, then a certain set of outputs is forbidden.
- In [DPZ18, NYH18, ABK20], the authors define personalized local DP, a local version of personalized DP (mentioned in Section 2.2.4).
- In [ACPP18], the authors define d_X-local DP, a local version of d_X-privacy (mentioned in Section 2.2.4); this was redefined as condensed local DP in [GTT19].
- In [LKCT20], the authors define task-global DP and task-local DP, which are equivalents of element-level DP (mentioned in Section 2.2.3) in a meta-learning context.
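The following sketch (in Python; all names are ours, and the local randomizer is plain randomized response) illustrates the shuffled model mentioned in the list above: users randomize their data locally, a shuffler forwards the messages in a uniformly random order so that reports cannot be linked to users, and the analyzer only sees the shuffled multiset. It illustrates the architecture only, not the privacy amplification analysis of [BEM17].

```python
import math
import random

def local_randomizer(true_bit: bool, epsilon: float) -> bool:
    """Per-user randomizer (randomized response), as in the local model."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return true_bit if random.random() < p_truth else not true_bit

def shuffler(messages):
    """Anonymous channel: output the messages in a uniformly random order,
    destroying the link between users and their reports."""
    shuffled = list(messages)
    random.shuffle(shuffled)
    return shuffled

def analyzer(shuffled_reports):
    """The analyzer only sees the shuffled reports; here it simply
    counts how many 1s were reported."""
    return sum(shuffled_reports)

# Toy pipeline: local randomization, then shuffling, then aggregation.
epsilon_local = 1.0
user_bits = [random.random() < 0.4 for _ in range(1_000)]
reports = [local_randomizer(b, epsilon_local) for b in user_bits]
print(analyzer(shuffler(reports)))
```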
Other related work
The relation between the main syntactic models of anonymity and DP was studied in [CT13], in which the authors claim that syntactic models are designed for privacy-preserving data publishing (PPDP), while DP is more suitable for privacy-preserving data mining (PPDM). We disagree with this assessment, and discuss differentially private data publishing at length in Chapter 4.
In [HZNF15], the authors classify different privacy-enhancing technologies (PETs) into 7 complementary dimensions. Indistinguishability falls into the Aim dimension, but within this category, only k-anonymity and oblivious transfer are considered; differential privacy is not mentioned. In [AGM18], the authors survey privacy concerns, measurements, and privacy-preserving techniques used in online social networks and recommender systems. They classify privacy into 5 categories; DP falls into Privacy-preserving models along with, e.g., k-anonymity. In [WE18], the authors classify more than 80 privacy metrics into 8 categories based on the output of the privacy mechanism. One of their classes is Indistinguishability, which contains DP as well as several variants. Some variants are classified into other categories; for example, Rényi DP is classified into Uncertainty and mutual-information DP into Information gain/loss. The authors list 8 differential privacy variants; our taxonomy can be seen as an extension of their work (and in particular of the Indistinguishability category).
In [WYZ16], the authors establish connections between differential privacy (seen as the additional disclosure of an individual’s information due to the release of the data), identifiability (seen as the posterior probability of recovering the original data from the released data), and mutual-information privacy (which measures the average amount of information about the original dataset contained in the released data).
The appropriate selection of the privacy parameters for DP has also been studied extensively. This problem is not trivial, and many factors can be considered: in [HGH14], the authors use economic incentives; in [LC11, Kre19, PTB19], the authors look at individual preferences; and in [LHC19, LP19], the authors take into account an adversary’s capability in terms of hypothesis testing and guessing advantage, respectively.
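As a reminder of the standard relationship behind the hypothesis-testing viewpoint (a well-known folklore bound, stated here in our own notation rather than as the specific results of [LHC19, LP19]): an adversary who tries to decide, from the output of an (ε, δ)-DP mechanism, which of two neighboring datasets was used must incur type I and type II error probabilities α and β satisfying
\[
\alpha + e^{\varepsilon} \beta \;\ge\; 1 - \delta
\qquad \text{and} \qquad
\beta + e^{\varepsilon} \alpha \;\ge\; 1 - \delta.
\]
Small values of ε and δ therefore force both errors to be large, which gives one concrete way of translating a choice of parameters into a bound on the adversary’s distinguishing power.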
The earliest surveys focusing on DP summarize algorithms for achieving DP and their applications [Dwo08, Dwo09]. The more detailed “privacy book” [DR14] presents an in-depth discussion of the fundamentals of DP, techniques for achieving it, and applications to query-release mechanisms, distributed computations, and data streams. Other textbooks have focused on the empirical performance of various algorithms [LLSY16], on asymptotic upper and lower bounds for various tasks [Vad17], or have tried to make differential privacy more approachable to non-experts [NSW17]. Other surveys focus on the release of histograms and synthetic data with DP [HMM16a, NR19].
Finally, some surveys focus on location privacy. In [MH18], the authors highlight privacy concerns in this context and list mechanisms with formal provable privacy guarantees; they describe several variants of differential privacy for streaming (e.g., pan-privacy) and location data (e.g., geo-indistinguishability) along with extensions such as pufferfish and blowfish privacy. In [CEP17], the authors analyze different kinds of privacy breaches and compare metrics that have been proposed to protect location data.
24Differential identifiability was reformulated in [LQS13] as an instance of membership privacy.
25Another definition with the same name is introduced in [LQS13]; we mention it in Section 2.2.6.
26Another definition with the same name is introduced in [Dua09, BBG11]; we mention it in Section 2.2.5.
27It also assumes that some uncertainty comes from the data itself, similarly to the definitions in Section 2.2.6.
28Even though it is introduced as a variant of DP, it was later shown to be a measure of sensitivity [CH16].
29Another definition with the same name is introduced in [BD19]; we mention it in Section 2.2.5.