
YOUR FACE IS THE PRODUCT: FACIAL RECOGNITION TECHNOLOGY AND THE RIGHT OF PUBLICITY

By: Emily Hanson

Tech companies like Google, Amazon, and IBM have developed facial recognition technology by feeding the enormous number of publicly available photos on social media through highly advanced machine-learning algorithms. Facial recognition technology has serious implications for privacy both when it malfunctions and when it functions as intended. Social media users need an extension of the right of publicity doctrine to keep these companies from profiting from their images and from using them to develop technologies that erode the right to move through the world without being recognized.


One of the axioms of the internet age is that if you get to use a service or platform for free, you are the product.[i] Companies like Facebook, Google, and YouTube provide services that are free to individual consumers and that are algorithmically tailored to hold consumers’ attention for as long as possible.[ii] These giants and others like them sell your attention to still other companies in the form of advertising.[iii] Thus, you are the product. 

There is nothing particularly innovative about this model (it is essentially the same concept as newsprint advertising). What is troubling is that, in recent years, consumers of free internet content are becoming the product in a whole new way. The data we all generate by posting photos on social media sites like Facebook, Instagram, and LinkedIn is the fuel for a technological machine that none of us signed up for: facial recognition technology.

Companies like Google, Amazon, and IBM have used the tremendous volume of publicly available photos to develop facial recognition technology through a process called machine learning.[iv] Machine learning involves feeding a massive amount of data (in this case photos) into an algorithm and letting the algorithm sort through them until it learns how to distinguish different faces and match up photos of the same face.[v]
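
To make that process concrete, the following toy sketch (in Python) shows the matching step a trained system performs. Everything here is an illustrative assumption rather than any company’s actual pipeline: the embed() function stands in for a trained neural network, and the 128-dimensional vectors and 0.8 similarity threshold are arbitrary choices.

import numpy as np

def embed(photo_id: str) -> np.ndarray:
    # Stand-in for a trained model that maps a face photo to a numeric
    # "embedding" vector; faked here with a fixed random vector per
    # identity so the example runs without any model or photos.
    seed = abs(hash(photo_id)) % (2**32)
    return np.random.default_rng(seed).normal(size=128)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: close to 1.0 for the same face, near 0.0 otherwise.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A "database" of labeled embeddings, built from scraped public photos.
database = {name: embed(name) for name in ("alice", "bob", "carol")}

def identify(photo_id: str, threshold: float = 0.8) -> str | None:
    # Compare an unknown face to every stored face; report the best
    # match above the threshold, or None if nobody matches.
    query = embed(photo_id)
    best_name, best_score = None, threshold
    for name, vector in database.items():
        score = similarity(query, vector)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

print(identify("alice"))    # "alice" -- embeddings match exactly here
print(identify("mallory"))  # None -- no stored face is similar enough

The point of the sketch is how little the matching step requires once the embeddings exist; the hard part, and the part the scraped photos pay for, is training the model that embed() stands in for.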

It is not difficult to imagine how many entities, public and private, might find the ability to identify anonymous faces useful. Imagine the possibilities for law enforcement’s power to solve and prosecute crime, or for high-end stores and restaurants to identify loyal customers the moment they walk in the door. However, it is also not difficult to imagine how this kind of technology could be put to nefarious purposes; indeed, it already has been. During the pro-democracy protests in Hong Kong in 2019, the Chinese government made use of pole-mounted surveillance cameras and facial recognition technology to identify protesters.[vi] The penalty for rioting in Hong Kong can be up to ten years’ imprisonment, and the difference between a protest and a riot is, of course, in the eye of the beholder.[vii]

In the U.S., there are already examples of law enforcement’s use of facial recognition technology sweeping an innocent person into the criminal justice system. In January 2020, police in Detroit arrested Robert Julian-Borchak Williams for larceny after facial recognition software identified him as the individual captured in grainy surveillance camera footage.[viii] It later transpired that Williams had nothing to do with the crime, and the American Civil Liberties Union is now suing the police department in Detroit for civil rights violations.[ix] A handful of U.S. cities have banned the use of facial recognition technology as a law enforcement tool.[x] In October 2019, for example, the city council of Berkeley, California, unanimously adopted such a ban, citing the degree of automation in the software as “fundamentally undermin[ing] the community’s liberty.”[xi]

The kind of false positive that occurred in Detroit is likely to exacerbate existing disparities in the way race informs an individual’s encounters with the criminal justice system. Machine-learning algorithms, on average, do a better job of telling the difference between two white faces than between two black faces.[xii] The disparity may be due in part to the overwhelming whiteness of the tech industry and the blind spot that forms when racial minorities are underrepresented in the rooms where these algorithms are developed.[xiii]

As disturbing as it is to contemplate the consequences when facial recognition gets it wrong, an even worse prospect is what happens when it gets it right. The privacy implications of being recognizable anywhere and everywhere you go cannot be overstated. Law professor Woodrow Hartzog and philosopher Evan Selinger argue that there should exist a right to obscurity, which is to say a right to move about in public without being identified.[xiv] What protects an average person from violations of this right, they argue, is that the transaction cost of manually comparing a photo of someone to a database of names and faces is far too high to be practical.[xv] Surveillance tools like facial recognition, however, practically eliminate that transaction cost.
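
A rough, hypothetical calculation shows why (the database size and hardware here are illustrative assumptions, not figures from any deployed system): a human checking one face per second against a million records would need nearly two weeks of nonstop work, while the same search done as a single vectorized comparison takes a fraction of a second.

import time
import numpy as np

rng = np.random.default_rng(0)

# One million stored 128-dimensional face embeddings (about 0.5 GB).
database = rng.normal(size=(1_000_000, 128)).astype(np.float32)
query = rng.normal(size=128).astype(np.float32)

start = time.perf_counter()
scores = database @ query          # a raw similarity score (dot product) for every stored face
best = int(np.argmax(scores))      # index of the closest match
elapsed = time.perf_counter() - start

print(f"Searched {len(database):,} faces in {elapsed:.3f} seconds (best: row {best})")

At that speed, the transaction cost Hartzog and Selinger describe effectively disappears.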

Reliable facial recognition software will enable public actors such as intelligence services and law enforcement to target dissidents and quash popular movements. It has serious implications for the right to assemble and the right to protest. Likewise, this technology will enable private actors to target individual customers in physical stores the way they already do online. Imagine what the in-person version of a targeted advertisement might be like. 

The rub in all of this is that the technology is made possible by the data we all voluntarily put online, and by the data about us that our friends, family, and acquaintances voluntarily put online. The average private person with an average online presence (a degree of publicity that would have been unthinkable just twenty years ago) needs legal protection to keep billion-dollar corporations from using our images for their own ends. One option would be an extension of the doctrine of the right of publicity to cover this kind of appropriation.

The right of publicity is a common law tort wherein the tortfeasor “appropriates to his own use or benefit the name or likeness of another.”[xvi] The interest at stake here is “the interest of the individual in the exclusive use of his own identity, in so far as it is represented by his name or likeness, and in so far as the use may be of benefit to him or to others.”[xvii] Famous examples include the use of a Bette Midler sound-alike in a commercial for Ford automobiles[xviii] and the use of a robot version of Vanna White to invoke a futuristic world in a Samsung commercial.[xix] In both of these cases, the Ninth Circuit examined the appropriation of an aspect of a celebrity’s identity and ruled in her favor.[xx]

Some states have enacted legislation that prohibits tech platforms from appropriating their users’ photos for commercial purposes along the more traditional lines of right of publicity (i.e., use of a photo for promotional purposes).[xxi] However, this does not address the problem that arises when these companies use photos in enormous numbers not for publicity or advertising purposes, but as fuel for machine learning. Classically, the right of publicity can only be invoked when there is some commercial exploitation of a particular likeness.[xxii] This creates a problem when the commercial value lies not in any particular face, but in the aggregate of millions or billions of faces.

The reader might ask why, given that the photos are not being appropriated in the classic sense (in the way that Vanna White’s image was appropriated, for example), users of social media sites should care what these companies do with them. After all, search history and browsing data have been fair game for commercial use for years now,[xxiii] and that data is arguably far more private and less curated than the photos we post to social media. But users’ interests here are similar to a celebrity’s interest in her image. Vanna White had an interest in maintaining control over her image because it had value, and Samsung used it without permission to sell something that White did not necessarily endorse.[xxiv] Likewise, the sound of Bette Midler’s voice, also a commodity with value, was used without her permission to sell something she did not necessarily endorse.[xxv]

Something very similar happens with facial recognition technology: the images of millions or billions of people are being appropriated without their knowledge to develop a product that many of us would not endorse. The consequences of this technology are equally troubling when it malfunctions, as it did in the case of Mr. Williams, and when it functions as intended, as it does under surveillance regimes like the one the Chinese government has deployed in Hong Kong. As a matter of policy, society has a strong interest in making it more difficult to develop, or at least to find a market for, these technologies.

The doctrine of the right of publicity should be extended to cover this activity. Users of social media sites need a cause of action against tech companies’ use of personal photos to develop and train Orwellian technologies that strip away our ability to pass through public places anonymously. These companies have been allowed to appropriate our images behind the scenes, and the consequences for individual privacy in both private and public spheres could be disastrous.


[i] Scott Goodson, If You’re Not Paying For It, You Become The Product, Forbes (Mar. 5, 2012), https://www.forbes.com/sites/marketshare/2012/03/05/if-youre-not-paying-for-it-you-become-the-product/#278e0d65d6ee.

[ii] Matthew Yglesias, The case against Facebook, Vox (Apr. 9, 2018), https://www.vox.com/policy-and-politics/2018/3/21/17144748/case-against-facebook.

[iii] See, e.g., YouTube Ads, https://youtube.com/ads (last visited Sept. 10, 2020).

[iv] Joss Fong, What facial recognition steals from us, Vox (Dec. 10, 2019), https://www.vox.com/recode/2019/12/10/21003466/facial-recognition-anonymity-explained-video.

[v] Id.

[vi] Steve Mollman, China’s new weapon of choice is your face, Quartz (Oct. 5, 2019), https://qz.com/1721321/chinas-new-weapon-of-choice-is-facial-recognition-technology/.

[vii] Id.

[viii] Bobby Allyn, ‘The Computer Got It Wrong’: How Facial Recognition Led to False Arrest of Black Man, NPR (June 24, 2020), https://www.npr.org/2020/06/24/882683463/the-computer-got-it-wrong-how-facial-recognition-led-to-a-false-arrest-in-michig.

[ix]  Id.

[x] Tom McKay, Berkeley Becomes Fourth U.S. City To Ban Face Recognition In Unanimous Vote, Gizmodo (Oct. 16, 2019), https://gizmodo.com/berkeley-becomes-fourth-u-s-city-to-ban-face-recogniti-1839087651.

[xi] Id.

[xii] Irina Ivanova, Why face-recognition technology has a bias problem, CBS News (June 12, 2020), https://www.cbsnews.com/news/facial-recognition-systems-racism-protests-police-bias/.

[xiii] Alina Tugend, Exposing the Bias Embedded in Tech, N.Y. Times (June 17, 2019), https://www.nytimes.com/2019/06/17/business/artificial-intelligence-bias-tech.html.

[xiv] See Woodrow Hartzog & Evan Selinger, Surveillance as Loss of Obscurity, 72 Wash. & Lee L. Rev. 1343, 1377 (2015).

[xv] Id. at 1345-46.

[xvi] Restatement (Second) of Torts § 652C (Am. Law Inst. 1977).

[xvii] Restatement (Second) of Torts § 652C cmt. a (Am. Law Inst. 1977).

[xviii] Midler v. Ford Motor Co., 849 F.2d 460 (9th Cir. 1988).

[xix] White v. Samsung Elecs. Am., Inc., 971 F.2d 1395 (9th Cir. 1992).

[xx]  Midler, 849 F.2d at 463-64; White, 971 F.2d at 1402.

[xxi] William K. Smith, Saving Face: Adopting a Right of Publicity to Protect North Carolinians in an Increasingly Digital World, 92 N.C. L. Rev. 2065, 2067 (2014).

[xxii] Restatement (Second) of Torts § 652C cmt. b (Am. Law Inst. 1977).

[xxiii] See, e.g., Google, Privacy & Terms: Technologies, https://policies.google.com/technologies/partner-sites?hl=en-US (last visited Sept. 10, 2020).

[xxiv] White, 971 F.2d at 1399.

[xxv] Midler, 849 F.2d at 463.
