This is another one of those posts where I answer a question I get regularly. What is it like being a privacy engineer? Besides a cool job title, what does the job actually entail?
Before I even start, I need to point out a few major caveats about this post.
- My experience is not representative of privacy engineering as a whole. I'm sure it's different across companies, and even across people within the same company.
- I have a particularly narrow view. I work on a sub-field of privacy (anonymization), and I've never led a team or hired privacy engineers.
- "Privacy engineer" is not even my official job title. I still have my initial job title at Google: software engineer (part-time). Privacy engineering is definitely the job I do, though — they don't care much about job titles. I might switch at some point.
- Like in all other entries from this blog, all opinions are mine and mine only. I'm only talking about my own experiences and feelings, and this post isn't vetted by my employers.
So, what do I do, on a daily basis, while working as a privacy engineer? I would split the job responsibilities into three broad categories.
Consulting with teams
A large part of my time is spent helping product teams get privacy right.
Back when I did generic privacy reviews, I checked future product launches for a wide variety of things. Will users understand what happens with the data they share? Is the data appropriately protected in transit and at rest? Can the product be misused and allow bad people to do evil1? Is the system collecting only what is required for the product's functionality? Will it behave harmfully in specific cases, or will it protect at-risk users properly2? Are deletions handled correctly?
Some of this is compliance: making sure that the product aligns with existing policies and regulations is important. This often isn't the main focus, though. Everything that can be ticked off a checklist is usually pretty straightforward. What's most complex and interesting is identifying what can go wrong in specific scenarios.
How do I do this in practice? I read through design docs, slide decks, and sometimes code or demos to understand the product. I ask for additional documentation when it's lacking, and I poke at the system to see where it could fail in unexpected and problematic ways. Then, I communicate the findings to the team, and help them correct possible issues.
Nowadays, I'm in a group focused on anonymization, so I have a narrower focus. Product teams come to us when they need to anonymize data, and we help them get it right. We make sure they understand what they need to do and how to do it. Then, we give them the green light once we land on a solution we're comfortable with.
This process is much easier when teams consult with privacy folks as soon as possible. I love to be involved in early design discussions! It's in everyone's interest. It avoids making choices that we'll regret later, which can save a lot of engineering time. And if the team does the right thing in the first place, that makes my job much easier at review time!
This part of the job requires lots of empathy. It's necessary to relate with users, and understand what will create issues. It's also crucial to build productive and respectful relationships with product teams. Luckily for me, it's something that can be learned and improved over time. I wasn't very good at it at first!
Software engineering

This is why the job title has "engineer" in it!
Finding issues in products is only the first step. Can we automate some of these investigation methods? Make sure certain classes of problems don't happen in the future? Detect failures early to prevent them from causing harm? These follow-up questions can lead to impactful engineering projects.
- Technical improvements to processes: this is not a phenomenon limited to privacy. When engineers run processes, they'll detect inefficiencies, and identify automation opportunities. Building tooling to assist with checklist-type things is often a good idea. Time is better spent focusing on the complex and unique aspects of consultations!
- Improving infrastructure: baking privacy into your tech stack is an excellent investment. Suppose that some vetted storage system takes care of encryption and deletion correctly. You no longer need to worry about these aspects in a product that uses this system. That's a great way to save time and avoid problems! Privacy engineers are uniquely positioned to notice when this is worth doing.
- Monitoring: how do you check that products continue to behave as expected over time? Some problems might arise after the privacy review. Bugs happen. Code evolves over time. Changes might appear harmless, but have unintended consequences. Catching potential failures with automated monitoring before they harm anyone is very rewarding. And again, to know what to look for and how to detect privacy issues, you need domain experts.
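To illustrate the kind of automated monitoring check this can involve, here's a toy sketch in Python. The threshold, field names, and row format are all made up for the example; no real system looks exactly like this.

```python
# Toy monitoring check: flag aggregate output rows that cover too few users.
# MIN_USER_COUNT and the row format are illustrative, not from a real pipeline.

MIN_USER_COUNT = 20  # hypothetical k-anonymity-style threshold


def find_risky_rows(rows):
    """Return rows whose user_count falls below the threshold.

    `rows` is an iterable of dicts like {"bucket": ..., "user_count": int}.
    In a real pipeline, a check like this would run on each release of the
    data and block publication (or page someone) when it returns anything.
    """
    return [row for row in rows if row["user_count"] < MIN_USER_COUNT]


report = [
    {"bucket": "city=Zurich", "user_count": 154},
    {"bucket": "city=Appenzell", "user_count": 3},
]
risky = find_risky_rows(report)  # only the Appenzell row is flagged
```

The value of such a check isn't the ten lines of code; it's knowing which invariant to assert in the first place, which is exactly where the domain expertise comes in.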
Sometimes, it makes sense for a privacy engineer to take on such projects on their own. This is especially true when there is a lot of specific expertise involved, which is the case for my group: a large chunk of our work is about building tools to make anonymization easier and safer. At other times, building a new thing yourself is not the optimal move: it might make more sense to collaborate with existing teams, or influence their roadmap.
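To make the "tools for anonymization" point a bit more concrete, here's a minimal sketch of the textbook Laplace mechanism for differentially private counts. This is a generic illustration, not any particular team's tool; the function name and defaults are mine, and real tooling also has to handle floating-point subtleties, privacy budget accounting, and bounding each user's contribution.

```python
import random


def noisy_count(true_count, epsilon=1.0):
    """Differentially private count via the Laplace mechanism (sensitivity 1).

    Textbook sketch only: adds Laplace(0, 1/epsilon) noise to a count
    where each user contributes at most 1.
    """
    scale = 1.0 / epsilon
    # A Laplace(0, scale) sample is the difference of two independent
    # exponential samples with mean `scale`.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_count + noise
```

Smaller `epsilon` means more noise and stronger privacy: a single call to `noisy_count(100, epsilon=0.1)` can be far from 100, even though the answers average out to it over many runs.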
This part of the job is like a classic software engineering job: writing design docs, getting resources, writing and reviewing code, improving documentation, writing tests… In my experience, though, there's often more communication involved than on a typical project. The problems tackled are often very horizontal: pain points shared across an entire organization, a new rule that applies across the board… In these cases, maintaining technical alignment between related efforts is crucial. Privacy engineers are in the perfect central position to help with this aspect.
Policy work

Privacy engineers try to make their organization do the right thing. But who decides what the "right thing" is in the first place?
Turns out, it's also part of the job. Privacy engineers take high-level goals or regulations and translate them into concrete, actionable requirements. Typical privacy principles are broad and vague by nature, and admit many interpretations. This isn't great: people will have some creative ideas about how to put them into practice.
Being at the interface between non-technical stakeholders and engineers is tricky, but necessary: if nobody does this job, it's not going to end well. And it goes both ways! Helping policy makers understand what makes sense from a technical perspective is crucial. Otherwise, you end up with inapplicable rules, or counter-productive efforts.
In addition, policies don't all come from top-down regulations or principles. It's common to stumble upon questions for which there is no existing guidance. You often have to make judgment calls, and when you do, it's important to document these decisions: it's the only way to keep them consistent across products and over time. Doing so is, in effect, setting an unofficial policy. Depending on how generalizable it is, it might be worth turning it into official guidance.
I find policy work way more complex than consulting or engineering work. It involves long discussions with a wide variety of people: lawyers, executives, engineers, product managers… And it's critical to get right. Spending the time to write great guidance is an investment that pays back many times over. And inconsistent or inapplicable policies can have a huge damaging impact3. So it's sometimes frustrating, but also very challenging and rewarding.
In practice, what does policy work look like? Many meetings, long discussions on docs, unending email threads. Yum!
All the other stuff
I lied! There's a fourth category. It's a catch-all, for all the extra responsibilities besides the core ones.
- Education: you can't make every person in your organization a privacy expert, but it doesn't hurt to try! Giving talks and doing outreach is useful for many reasons. It helps orient people in the right general direction when designing new products. It increases their awareness of potential issues, and makes them more likely to consult with experts early. It's also a great way to recruit! People you reach this way might later join your team, or become local privacy experts in their own team.
- Proactive investigations: poking at existing products outside of structured consultations can be worthwhile. Especially if nobody has looked at them in a while…
- User advocacy: privacy engineers try to make sure products aren't harmful to marginalized communities. As such, they have a duty to speak up when that's the case, even if it's not technically about privacy4.
- Incident response: bad stuff happens, and you need a process to make sure some people are ready to put out fires! Privacy engineers can be in such incident response roles. I've never held one myself, though, so I don't know what it's like.
- External outreach: few privacy engineers do external outreach, even though they're in a unique position to do so. We could do better in that area, and collaborate more with academia and civil society. Luckily, most excellent folks are actively improving the situation on that front!
- And a bunch of other engineering-related things. You sometimes need to analyze data to quantify and prioritize issues. You want to keep educating yourself on new developments in the privacy space. You also need to pay attention to public discourse; it's crucial to better understand users, and anticipate new regulations. You sometimes want to run user studies to align your UIs with user expectations.
I'm sure that I'm forgetting some aspects. You might be a privacy engineer and this list might not feel very familiar. If that's the case, drop me a line! I'd love to understand other perspectives on that topic. I hope you agree with me on one thing, though: being a privacy engineer is challenging, fun and rewarding!
Now, there's a related question I also get sometimes: "being a privacy engineer sounds awesome, how can I become one?". Unfortunately, I don't have a great answer to that one. For me, it was pretty random: I joined Google as a software engineer, switched to the privacy team because it sounded fun, and learned everything I know there. I've heard good things about a Master's degree at CMU… but apart from that, I don't know of many educational opportunities. I'm not sure how to hunt for privacy engineering jobs, either. I hope someone else writes a good answer to that question, because we definitely need more privacy engineers in industry!
There are lots of sub-questions here! Can a domestic abuser use the product to spy on their spouse? Can criminals take advantage of it to run online scams? Can political actors use it to spread disinformation?
Will your system deadname trans people, or unmask political activists? Are privacy-critical surfaces accessible and understandable to people with disabilities? Can the product amplify hurtful or triggering content? Needless to say, this is why you need your privacy engineering team to be diverse.
Things like ethics or ML fairness deserve specific processes and technical expertise. They're like security and privacy: complex and crucial to get right. But when these processes don't exist, they fall by default under the privacy umbrella. So existing practitioners have a duty to keep an eye on this sort of thing in the meantime.