
The privacy loss random variable


This post is part of a series on differential privacy. Check out the table of contents to see the other articles!


Remember the notion of « almost » differential privacy? We changed the original definition to add a new parameter, \(\delta\). We said that \(\delta\) was « the probability that something goes wrong ». This was a bit of a shortcut: this nice and easy intuition is sometimes not exactly accurate. In this post, I'll do two things. I'll introduce a crucial concept in differential privacy: the « privacy loss random variable ». Then, I'll use it to explain what \(\delta\) really means.

Friendly heads-up: this post has slightly more math than the rest of this series. But don't worry! I made it as nice and visual as I could, with graphs instead of equations. All the equations are in a proof hidden by default.

The privacy loss random variable

Recall the setting of the definition of \(\varepsilon\)-DP (short for differential privacy). The attacker tries to distinguish between two databases \(D_1\) and \(D_2\), which differ by only one record. If a mechanism \(A\) is \(\varepsilon\)-DP, then \(A\left(D_1\right)\) and \(A\left(D_2\right)\) return any given output \(O\) with similar probability:

$$ \mathbb{P}[A(D_1)=O] \le e^\varepsilon\cdot\mathbb{P}[A(D_2)=O]. $$

The inequality also holds in the other direction, since the relation between \(D_1\) and \(D_2\) is symmetrical; to keep things simple, we only write this one direction.
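
To make this concrete, here is a small Python sketch of this inequality for a counting query with Laplace noise of scale \(1/\varepsilon\) (my own illustration, not code from this series): the densities of \(A(D_1)\) and \(A(D_2)\) never differ by more than a factor \(e^\varepsilon\).

```python
import numpy as np

def laplace_density(x, mu, scale):
    """Density at x of Laplace noise of the given scale, centered on mu."""
    return np.exp(-np.abs(x - mu) / scale) / (2 * scale)

epsilon = np.log(3)
scale = 1 / epsilon      # Laplace scale for a counting query with sensitivity 1
outputs = np.linspace(-10, 10, 2001)

# Densities of A(D1) and A(D2) when the two true counts are 0 and 1.
p1 = laplace_density(outputs, 0, scale)
p2 = laplace_density(outputs, 1, scale)

# epsilon-DP: P[A(D1)=O] <= e^epsilon * P[A(D2)=O] for every output O.
assert np.all(p1 <= np.exp(epsilon) * p2 + 1e-12)
```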

We said before that the \(\varepsilon\) in \(\varepsilon\)-DP was the maximal knowledge gain of the attacker. We defined this knowledge gain in Bayesian terms, where the attacker is trying to guess whether the real database \(D\) is \(D_1\) or \(D_2\). We saw that \(\varepsilon\) bounds the evolution of the attacker's betting odds. For each \(O\), we had:

$$ \frac{\mathbb{P}\left[D=D_1\mid A(D)=O\right]}{\mathbb{P}\left[D=D_2\mid A(D)=O\right]} \le e^\varepsilon\cdot\frac{\mathbb{P}\left[D=D_1\right]}{\mathbb{P}\left[D=D_2\right]} $$

What if we don't just want to bound this quantity, but calculate it for a given output \(O\)? Let us define:

$$ \mathcal{L}_{D_1,D_2}(O) = \ln\frac{ \frac{\mathbb{P}\left[D=D_1\mid A(D)=O\right]}{\mathbb{P}\left[D=D_2\mid A(D)=O\right]} }{ \frac{\mathbb{P}\left[D=D_1\right]}{\mathbb{P}\left[D=D_2\right]} }. $$

This formula looks scary, but the intuition behind it is pretty simple. The denominator corresponds to the initial betting odds for \(D_1\) vs. \(D_2\): how likely one option is compared to the other, before looking at the result of the mechanism. In Bayesian terms, this is called the "prior". Meanwhile, the numerator of the fraction is the betting odds afterwards: the "posterior". Differential privacy guarantees that \(\mathcal{L}_{D_1,D_2}(O)\le\varepsilon\) for all \(O\).

Bayes' rule allows us to reformulate this quantity:

$$ \mathcal{L}_{D_1,D_2}(O) = \ln\left(\frac{\mathbb{P}\left[A(D_1)=O\right]}{\mathbb{P}\left[A(D_2)=O\right]}\right). $$

This is called the privacy loss random variable (PLRV for short). Intuitively, the PLRV is the « actual \(\varepsilon\) value » for a specific output \(O\). Why is it a random variable? Because we typically consider \(\mathcal{L}_{D_1,D_2}(O)\) where the output \(O\) is distributed according to \(A(D_1)\), with \(D_1\) playing the role of the "real" database.
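
If you want to see why the priors disappear, here is the short Bayes' rule computation behind this reformulation, spelling out a step left implicit above. For each \(i\), Bayes' rule gives:

$$ \mathbb{P}\left[D=D_i\mid A(D)=O\right] = \frac{\mathbb{P}\left[A(D_i)=O\right]\cdot\mathbb{P}\left[D=D_i\right]}{\mathbb{P}\left[A(D)=O\right]}. $$

So the posterior odds are the prior odds multiplied by \(\mathbb{P}\left[A(D_1)=O\right]/\mathbb{P}\left[A(D_2)=O\right]\); dividing by the prior odds and taking the logarithm leaves exactly the expression above.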

OK, this is very abstract. We need a concrete example.

A concrete example

Suppose that we're counting the number of people with blue eyes in the dataset. We make this differentially private by adding Laplace noise of scale \(1/\ln(3)\), to get \(\varepsilon=\ln(3)\). The attacker hesitates between two possible datasets: one with \(1000\) blue-eyed people, the other with \(1001\). The real number is \(1000\), but the attacker doesn't know that. The two distributions look like this:

Graph showing two Laplace distributions with scale 1/ln(3), centered on 1000 and 1001

Let's consider three possible outputs of the mechanism, given the "real" database is \(D_1\). We represent them below as \(O_1\), \(O_2\), and \(O_3\).

Graph showing the previous Laplace distributions, with three points O1, O2 and O3 marked respectively at x=999, x=1000.5 and x=1003

Say the attacker is very uncertain: initially, they give equal probabilities to \(D_1\) and \(D_2\). What are they going to think once we give them the output of the mechanism?

  • If we return \(O_1\), the attacker is starting to suspect that the real database is \(D_1\). There's a larger chance to get that output if \(D=D_1\) than if \(D=D_2\). How much larger? Exactly 3 times larger: the attacker's knowledge is tripled.
  • If we return \(O_2\), the attacker is like: ¯\_(ツ)_/¯. This is not giving them much information. This output could have come from \(D_1\), but it could just as well have come from \(D_2\). The attacker's knowledge doesn't change.
  • If we return \(O_3\), the attacker is getting tricked with wrong information. They will think it's more likely that the real database is \(D_2\). Their "knowledge" is divided by 3. (These three ratios are checked numerically in the short sketch below.)
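
Here is a quick Python check of these three claims; a sketch of mine, using the outputs \(O_1=999\), \(O_2=1000.5\), and \(O_3=1003\) read off the graph above.

```python
import numpy as np

epsilon = np.log(3)
scale = 1 / epsilon  # Laplace noise of scale 1/ln(3)

def laplace_density(x, mu):
    """Density at x of Laplace noise of the given scale, centered on mu."""
    return np.exp(-np.abs(x - mu) / scale) / (2 * scale)

def privacy_loss(output):
    """L_{D1,D2}(O) = ln(density under D1 / density under D2), counts 1000 vs. 1001."""
    return np.log(laplace_density(output, 1000) / laplace_density(output, 1001))

for o in [999, 1000.5, 1003]:
    print(o, np.exp(privacy_loss(o)))  # multiplicative factors of about 3, 1, and 1/3
```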

Let's look at all possible outputs \(O\) of \(A(D_1)\), and order them. We'll put the ones that help the attacker most first, and look at the value of \(\mathcal{L}_{D_1,D_2}(O)\). Let's call this \(\mathcal{L}\), for short, and plot it.

Graph showing the PLRV for the Laplace distribution depending on the output

This is why Laplace noise is so nice: look at this neat horizontal line. Oh my god. It even has a straight diagonal. It never goes above \(\varepsilon\approx1.1\): a beautiful visual proof that Laplace noise gives \(\varepsilon\)-DP.
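
If you prefer a formula to a picture: for Laplace noise of scale \(1/\ln(3)\) centered on \(1000\) and \(1001\), a one-line computation on the densities gives

$$ \mathcal{L}(O) = \ln(3)\cdot\left(|O-1001| - |O-1000|\right), $$

and since \(\left||O-1001|-|O-1000|\right|\le1\), we always have \(|\mathcal{L}(O)|\le\ln(3)\approx1.1\).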

Let's change the graph above to more accurately represent that \(\mathcal{L}\) is a random variable. On the \(x\)-axis, we represent all events according to their probability. We're also more interested in \(\exp(\mathcal{L})\), so let's plot that instead of \(\mathcal{L}\).

Graph showing the exponential of the PLRV for the Laplace distribution, where the x-axis represents the probability space
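
If you want to reproduce this curve (approximately), one way is to sample many outputs and sort them; this is a sketch of mine, not the code behind the original figure.

```python
import numpy as np

rng = np.random.default_rng(0)
epsilon = np.log(3)
scale = 1 / epsilon

# Sample outputs O from A(D1) = 1000 + Laplace(1/ln 3) and compute exp(L) for each.
outputs = 1000 + rng.laplace(scale=scale, size=100_000)
exp_loss = np.exp((np.abs(outputs - 1001) - np.abs(outputs - 1000)) * epsilon)

# Sort the events from most to least helpful to the attacker.
curve = np.sort(exp_loss)[::-1]

# Plotting `curve` against np.linspace(0, 1, curve.size) gives the shape above:
# flat at 3 for the first half, a decreasing middle part, then flat at 1/3.
```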

Now, what if you were using some other type of noise? Say, from a normal distribution? It would make data analysts happier: Laplace noise is weird to them, it never shows up in the real world. Normal distributions, by contrast, are familiar and friendly. A lot of natural data distributions can be modeled with them.

In the context of differential privacy, noise drawn from a normal distribution is called « Gaussian noise ». Let's try to add Gaussian noise, of variance \(\sigma^2=3\):

Graph showing two normal distributions with variance 3, centered on 1000 and 1001

OK, looks reasonable, now let's see what \(e^\mathcal{L}\) looks like:

Graph showing the exponential of the PLRV for the normal distribution, where the x-axis represents the probability space

Ew. Look at this line going up to infinity on the left side. Gross. We can't just draw a line at \(e^\varepsilon\) and say "everything is underneath". What do we do, then? We cheat, and use a \(\delta\).
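
For the record, here is why no finite \(\varepsilon\) can work. With Gaussian noise of variance \(\sigma^2\) and the two counts above, a quick computation on the densities gives the privacy loss at output \(O\):

$$ \mathcal{L}(O) = \frac{(O-1001)^2-(O-1000)^2}{2\sigma^2} = \frac{2001-2O}{2\sigma^2}. $$

This is a linear function of \(O\): outputs far below the true count are unlikely, but they make \(\mathcal{L}\) arbitrarily large.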

\(\delta\) and the PLRV

In a previous article, we said that the \(\delta\) in \((\varepsilon,\delta)\)-DP is the probability that something terrible happens. What does that mean in the context of Gaussian noise? First, we pick an arbitrary \(\varepsilon\), say, \(\varepsilon=\ln(3)\). Then, we look at how likely it is for \(e^\mathcal{L}\) to be above the \(e^\varepsilon=3\) line. It's easy to do: the \(x\)-axis is the probability space, so we can simply measure the width of the bad events.

Same graph, but with δ marked at x=0.05, where the curve is approximately equal to 3

This simple intuition is correct: this mechanism is \((\ln(3),\delta_1)\)-DP, with \(\delta_1\approx0.054\). But it misses an important subtlety. Let's zoom in on the part where things go wrong, and consider two possible outputs.

Same graph, zoomed on the "bad events" part before 0.05, with two points O1 and O2 marked respectively at x=0.045 and x=0.002

Returning \(O_1\) is not great: \(e^\mathcal{L}>e^\varepsilon\). But it's not terrible: the privacy loss is only a tiny bit larger than we'd hope. Returning \(O_2\), however, is scary news: \(e^\mathcal{L}\) is huge. Intuitively, \(O_2\) leaks much more information than \(O_1\).

With our way of quantifying \(\delta\), we don't account for this. We only measure the \(x\)-axis. What we count is whether \(e^\mathcal{L}\) is above the line, not how much it's above the line. For each bad event of probability \(p\), we're adding \(p\times1\) to the \(\delta\). A finer approach is to weigh the bad events by "how bad they are". We want to give a "weight" of \(\approx1\) to the very bad events, and a weight of \(\approx0\) to the "not too bad" ones.

To do this, we transform the curve above a bit, in two steps. First, we take its inverse: very bad events are now close to \(0\) instead of very large. Second, we normalize it by taking the ratio \(e^\varepsilon/e^\mathcal{L}\). This way, events that are "not too bad" are close to \(1\).

Plotting exp(ε)/exp(PLRV) and highlighting the area under 1

This allows us to consider the area between the curve and the \(y=1\) line. When \(\mathcal{L}\) is very large, the ratio is close to \(0\), so the distance to \(1\) is almost \(1\). And when \(\mathcal{L}\) is close to \(\varepsilon\), the ratio is close to \(1\), and the distance is almost \(0\). Very bad events count more than sort-of-bad events.
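
In formula form, this just restates the picture above: each output \(O\) now contributes

$$ \mathbb{P}\left[A(D_1)=O\right]\cdot\max\left(0,\ 1-e^{\varepsilon-\mathcal{L}(O)}\right) $$

to \(\delta\), instead of the all-or-nothing \(\mathbb{P}\left[A(D_1)=O\right]\cdot\mathbf{1}\left[\mathcal{L}(O)>\varepsilon\right]\) from before.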

This is the tighter, exact characterization of \(\delta\). In \((\varepsilon,\delta)\)-DP, the \(\delta\) is the area highlighted above: the mass of all possible bad events, weighted by how likely they are and how bad they are. This tells us that the mechanism is \((\ln(3),\delta_2)\)-DP with \(\delta_2\approx0.011\), a much smaller \(\delta\) than before.
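
As a sanity check, here is a short numerical sketch (mine, not from the post) that recomputes both quantities for this Gaussian example by discretizing the output space; the exact values depend on the noise parameters assumed above.

```python
import numpy as np
from scipy import stats

epsilon = np.log(3)
sigma = np.sqrt(3.0)          # Gaussian noise of variance sigma^2 = 3
d1 = stats.norm(1000, sigma)  # distribution of A(D1)
d2 = stats.norm(1001, sigma)  # distribution of A(D2)

# Discretize the output space and weight each output by its probability under A(D1).
outputs = np.linspace(1000 - 12 * sigma, 1000 + 12 * sigma, 200_001)
weights = d1.pdf(outputs)
weights /= weights.sum()
loss = np.log(d1.pdf(outputs) / d2.pdf(outputs))  # privacy loss at each output

delta_1 = weights[loss > epsilon].sum()  # simple version: width of the bad events
bad_weight = np.clip(1 - np.exp(epsilon - loss), 0, None)  # how bad each event is
delta_2 = (weights * bad_weight).sum()   # tighter version: weighted by badness
print(delta_1, delta_2)  # approximately 0.053 and 0.011 with these parameters
```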

The typical definition of \((\varepsilon,\delta)\)-DP doesn't use this complicated formulation. A mechanism \(A\) is \((\varepsilon,\delta)\)-DP if for any neighboring \(D_1\) and \(D_2\), and any set \(S\) of possible outputs:

$$ \mathbb{P}[A(D_1)\in S] \le e^\varepsilon\cdot\mathbb{P}[A(D_2)\in S]+\delta. $$

This definition is equivalent to the previous characterization.

What about infinite values?

Using Gaussian noise, all possible values of \(\mathcal{L}\) are finite. But for some mechanisms \(A\), there are outputs \(O\) such that \(\mathbb{P}[A(D_1)=O]>0\), but \(\mathbb{P}[A(D_2)=O]=0\). In that case, \(\mathcal{L}(O)=\infty\). This kind of output is called a distinguishing event. If we return a distinguishing event, the attacker immediately finds out that \(D\) is \(D_1\) and not \(D_2\). This is the case for the "thresholding" example we looked at previously.
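
If you want a self-contained toy example of a distinguishing event (separate from that thresholding mechanism, and deliberately not a good way to do DP), consider adding noise with bounded support, like a uniform value in \([-1,1]\).

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_mechanism(true_count):
    """Counting query with uniform noise on [-1, 1]: bounded support, so not epsilon-DP."""
    return true_count + rng.uniform(-1, 1)

# If the true count is 1000, outputs below 1000 are possible; if it is 1001,
# they are impossible. Every such output is a distinguishing event.
outputs = np.array([toy_mechanism(1000) for _ in range(100_000)])
print(np.mean(outputs < 1000))  # about 0.5: total probability of distinguishing events
```

Here, no matter which \(\varepsilon\) we pick, \(\delta\) cannot be smaller than \(1/2\): half of the time, the output alone reveals which of the two databases was used.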

Our interpretation of \(\delta\) captures this nicely. Since we inverted the curve, if \(\mathcal{L}=\infty\), we simply have \(e^\varepsilon/e^\mathcal{L}=0\). The distance to \(1\) is exactly \(1\), so we count these events with maximal weight. The graph looks like this:

Plotting exp(ε)/exp(PLRV) and highlighting the area under 1 when that function is 0 below 0.006 and 1 everywhere else

In that case, \(\delta_1=\delta_2\): all "bad" events are worst-case events. For such a mechanism, the two characterizations of \(\delta\) are the same.

Final note

You might be wondering: why use Gaussian noise at all if it requires \(\delta>0\)?

This is an excellent question. I'm glad you asked it, because it is exactly the topic of the next blog post in this series. Or you can, as always, select another article to read next in the table of contents!


Thanks to Sebastian Meiser, who wrote the reference paper about the subtleties with \(\delta\). It makes for excellent reading if you want to dig a bit deeper into this. Thanks also to Antoine Amarilli for proofreading this blog post, and to Ivan Habernal for detecting a mistake in an earlier version.

All opinions here are my own, not my employer's.   |   Feedback on these posts is very welcome! Please reach out via e-mail (se.niatnofsed@neimad) or Twitter (@TedOnPrivacy) for comments and suggestions.   |   Interested in deploying formal anonymization methods? My colleagues and I at Tumult Labs can help. Contact me at oi.tlmt@neimad, and let's chat!