We Can’t Trust Facebook to Regulate Itself
I led Facebook’s efforts to fix privacy problems on its developer platform in advance of its 2012 initial public offering. What I saw from the inside was a company that prioritized data collection from its users over protecting them from abuse. As the world contemplates what to do about Facebook in the wake of its role in Russia’s election meddling, it must consider this history. Lawmakers shouldn’t allow Facebook to regulate itself. Because it won’t.
Facebook knows what you look like, where you are, who your friends are, what your interests are, whether you're in a relationship, and what other pages you look at on the web. This data allows advertisers to target the more than one billion people who visit Facebook each day. It's no wonder the company has ballooned into a $500 billion behemoth in the five years since its I.P.O.
The more data it has on offer, the more value it creates for advertisers. That means it has no incentive to police the collection or use of that data — except when negative press or regulators are involved. Facebook is free to do almost whatever it wants with your personal information, and has no reason to put safeguards in place.
For a few years, Facebook's developer platform hosted a thriving ecosystem of popular social games. Remember the age of FarmVille and Candy Crush? The premise was simple: Users agreed to give game developers access to their data in exchange for free use of addictive games.
Unfortunately for the users of these games, there were no protections around the data that was passed through Facebook to outside developers. Once data went to the developer of a game, there was not much Facebook could do about misuse except to call the developer in question and threaten to cut off access. As the I.P.O. approached and the media reported allegations of data misuse, I was given the task of solving the problem, as the manager of the team responsible for protecting users on the developer platform from abuse of their data.
In one instance, a developer appeared to be using Facebook data to automatically generate profiles of children, without their consent. When I called the company responsible for the app, it claimed that Facebook's policies on data use were not being violated, but we had no way to confirm whether that was true. Once data passed from the platform to a developer, Facebook had no view of the data or control over it. In other cases, developers asked for permission to get user data that their apps obviously didn't need, such as a social game asking for all of your photos and messages. People rarely read permission requests carefully, so they often authorize access to sensitive information without realizing it.
At a company that was deeply concerned about protecting its users, this situation would have been met with a robust effort to cut off developers who were making questionable use of data. But the typical reaction I recall from my time at Facebook looked like this: quash any negative press coverage as quickly as possible, with no sincere effort to put safeguards in place or to identify and stop abusive developers. When I proposed a deeper audit of developers' use of Facebook's data, one executive asked me, "Do you really want to see what you'll find?"
The message was clear: The company just wanted negative stories to stop. It didn’t really care how the data was used.
When Russians decided to target Americans during the 2016 election, they didn’t buy TV or newspaper ads, or hire a skywriter. They turned to Facebook, where their content reached at least 126 million Americans. The fact that Facebook prioritized data collection over user protection and regulatory compliance is precisely what made it so attractive. Now the company is arguing that it should be allowed to regulate itself to prevent this from happening again. My experience shows that it should not.
Facebook’s chief operating officer, Sheryl Sandberg, mentioned in an October interview with Axios that one of the ways the company uncovered Russian propaganda ads was by identifying that they had been purchased in rubles. Given how easy this was, it seems clear the discovery could have come much sooner than it did — a year after the election. But apparently Facebook took the same approach to this investigation as the one I observed during my tenure: react only when the press or regulators make something an issue, and avoid any changes that would hurt the business of collecting and selling data.
This makes for a dangerous mix: a company that reaches most of the country every day and has the most detailed set of personal data ever assembled, but has no incentive to prevent abuse. Facebook needs to be regulated more tightly, or broken up so that no single entity controls all of its data. The company won’t protect us by itself, and nothing less than our democracy is at stake.