FaceApp Makes Today’s Privacy Laws Look Antiquated

Cameras are everywhere, and data brokers are vacuuming up information on individuals. But regulations have not kept pace.


Americans give billions of dollars a year to industries that promise to make them look younger. FaceApp became wildly popular, seemingly overnight, for doing the exact opposite. Applying a filter powered by artificial intelligence, the photo-editing app modifies photos of its users’ faces to show them what they might look like when they’re much older. The resulting images aren’t the only thing about FaceApp that strikes some people as creepy. FaceApp is the handiwork of a relatively unknown company in Russia—a provenance that, amid evidence of election interference and other misdeeds by Russian hackers, has raised widespread concerns in Washington. The Democratic National Committee and Senate Minority Leader Chuck Schumer are now calling out the app as a privacy threat.

Which it is. Yes, you should stop using FaceApp, because there are few controls on how your data, including your face data, will be used. But the problems that FaceApp poses aren't unique. Simply walking around in public can get your face captured and added to a facial-recognition database. How that information can be mined, manipulated, bought, or sold is minimally regulated in the United States and elsewhere. Militaries, law-enforcement agencies, and commercial interests alike envision far-reaching uses of AI and facial recognition, but legal and regulatory controls lag far behind the pace of the technology.

For most people, never going outside is not an option. So laws in the United States and elsewhere need to be tuned up quickly—and not just because of FaceApp.

The suddenly ubiquitous portrait-aging app collects user-submitted photos and other user data and stores some or all of that data on cloud servers. In response to criticism of its privacy practices, FaceApp released a statement claiming that “most” photos are deleted within 48 hours. But its privacy policy offers no legal guarantee of this. Wireless Lab, which developed the app, also says users can request that their data be deleted, yet the policy does not explain how to do that, either.

FaceApp is not the only app with weak privacy protections. It’s not even the only photo-editing app with weak privacy protections. Consider China’s Meitu, or even Snapchat and Instagram. Like FaceApp, all three of those apps allow users to submit their own photos and apply an AI-powered filter to transform their image. Apps developed by large American companies, such as Snapchat and Instagram, do generally at least try to comply with existing privacy laws. However, as we saw with the Cambridge Analytica scandal—in which a third-party app developer harvested user data that Facebook had promised to protect, and then used that data to help sway elections—major tech platforms with highly sophisticated engineering capabilities can still fail at privacy on a large scale.

A key difference between FaceApp and Facebook is that a Russian company developed the former. The New York Post published an explosive headline claiming, “Russians Now Own All Your Old Photos.” But falling back into Cold War–style rhetoric can be misleading. Concerns about Russian apps stem from the close relationship between the Russian government and industry, and the likelihood that Russian companies will be unable to resist government requests for data. Then again, companies in even the most liberal, democratic nations often have to share data with their governments. In the United States, tech companies and their users do generally enjoy a higher level of legal protection from government than their counterparts in Russia do. But users of non-Russian apps should still be concerned about where their data will end up.

Regardless of origin, tech companies need to do better at protecting the privacy of their consumers. Part of this is simply making users more aware of how their data are being used; that is the rationale behind privacy policies. However, many users don't read those policies. Developers need to go further and build actual privacy protections into their apps. These can include notifications about how data (or photos) are being used, clear internal policies on data retention and deletion, and easy workflows for users to request data correction and deletion. Additionally, the platforms and app stores that host third-party apps, such as those run by Apple, Microsoft, and Facebook, should build in more safeguards for those apps.

But asking tech companies to make a few fixes will not be enough to solve the larger systemic problem, which is simply that our society hasn’t figured out how to deal with privacy in a way that actually protects individuals. The way we conceptualize privacy—by focusing, for instance, on the point at which a user decides to enter personal data into a website—is inadequate for the realities of today’s technology. Data are being collected all the time, often in ways that are all but impossible for consumers to know about. You cannot expect every traffic camera to include a privacy policy. Meanwhile, data sets are often sold, bought, aggregated, and transformed by third-party data brokers in ways unimaginable to consumers.

Many privacy regulations, including the European Union’s General Data Protection Regulation, include a right to request that your data be deleted. But that right doesn’t apply when your data are used to train AI and machine-learning systems, or when your face is added to a facial-recognition data set without your knowledge.

Facial recognition is only the tip of the iceberg. License-plate readers, shopping beacons, and a whole suite of mobile trackers follow individuals both online and offline. Amazon Alexa and Google Home devices listen to everything you say. Roombas map your floor plans. Strava and Fitbit record your location. 23andMe and Ancestry.com collect and essentially own your very sensitive genetic information. Our current laws cannot handle the sheer amount of data collected on individuals in what the Harvard professor Shoshana Zuboff calls the “age of surveillance capitalism.”

We need better privacy laws that address the new harms and risks arising from facial recognition, AI and machine learning, and other technological advances. To deal with privacy risks in the larger data ecosystem, we need to regulate how data brokers can use the personal information they obtain. We need safeguards against the practical harms that invasions of privacy can cause; that could mean, for example, limiting the use of facial-recognition algorithms for predictive policing. We also need laws that give individuals power over data they have not voluntarily submitted.

We can’t have effective laws until we expand our understanding of privacy to reflect the data-hungry world we now live in. The FaceApp privacy controversy is not overblown, but some attacks are misdirected. The problem isn’t photo-editing apps or third-party developers or Russian tech companies. What we are facing as a society is a systemic failure to protect privacy when new technologies force our preconceived notions of privacy to collapse.

Tiffany C. Li is a tech attorney and fellow at Yale Law School’s Information Society Project.