Mini Data Ethics Consultation: Google Photos Facial Recognition

https://www.google.com/photos/about/

Organization

Google LLC is an American multinational corporation and technology company that
specializes in consumer electronics, cloud computing, computer software, quantum
computing, online advertising, search engine technology, e-commerce, and artificial
intelligence (AI).1 Google Photos is a service from Google where you can store, edit, and
share your pictures. One of its features is that it automatically groups people or pets
using AI facial recognition technology.
Google has formulated certain AI principles.2 To keep this DEC concise, I have
shortened and paraphrased them as follows. For the complete principles, see the
website of Google AI.
1. Bold innovation
2. Responsible development and deployment: putting safeguards in place to lessen potentially harmful outcomes, providing suitable human oversight in line with user objectives and social responsibility, and promoting privacy, security, and respect for intellectual property rights
3. Collaborative progress, together

AI technologies employed

Google Photos uses a technology called Face Groups to recognize faces in photos. It
groups similar faces, which makes organizing photos easier and saves the user time.3
This facial recognition technology consists of the following steps: face detection, face
alignment, feature extraction, face recognition, and clustering.
Even though Google does not disclose the exact AI models used, sources claim that it
makes use of Convolutional Neural Networks (CNNs) for feature extraction.4 These
networks create numerical representations of each face. A picture consists of many
pixels, and each pixel holds information about color and intensity, so a single pixel can
be represented by three values: red, green, and blue. The smaller the distance between
the numerical representations of two faces, the more similar the faces are considered
to be, and similar faces are grouped together.5
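
To make the distance-based grouping concrete, here is a minimal sketch in Python of how
such clustering could work. It is only an illustration, not Google's actual
implementation: the 128-dimensional embeddings, the Euclidean distance metric, and the
distance threshold are all assumptions chosen for the example.

    import numpy as np

    def group_faces(embeddings, threshold=0.9):
        """Greedily group face embeddings whose Euclidean distance to a group's
        first member is below the threshold. Real systems use more sophisticated
        clustering, but the principle is the same: a small distance between
        numerical representations means the faces are treated as the same person."""
        groups = []
        for i, emb in enumerate(embeddings):
            for group in groups:
                if np.linalg.norm(emb - embeddings[group[0]]) < threshold:
                    group.append(i)  # Close enough: add to the existing face group.
                    break
            else:
                groups.append([i])  # No close match: start a new face group.
        return groups

    # Toy data: five 128-dimensional embeddings, as a CNN feature extractor might
    # produce. Indices 0/1 and 2/3 are near-duplicates; index 4 is a new face.
    rng = np.random.default_rng(0)
    face_a, face_b = rng.normal(size=128), rng.normal(size=128)
    embeddings = np.stack([
        face_a, face_a + 0.01 * rng.normal(size=128),
        face_b, face_b + 0.01 * rng.normal(size=128),
        rng.normal(size=128),
    ])
    print(group_faces(embeddings))  # [[0, 1], [2, 3], [4]]

In a production system, the threshold and distance metric would be tuned on labeled
data, but the core idea of grouping by embedding distance is the same.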
To improve accuracy and personalization, Google Photos might ask for your help in
labeling these groups (for example, by providing names). Although accuracy can vary,
Google estimates it at between 80 and 85 percent.6 Users can always review and change
the groupings if necessary. If users wish to turn off the Face Grouping feature, they
can do so in the settings; this deletes the face groups, the models used, and the
labels that were created.

Ethical concerns

Google has an elaborate privacy policy, which clearly states what types of data are
collected and for what purpose.7 To make it accessible to a broader audience, Google
has also created short, animated videos with explanations. However, based on the DEDA
(Data Ethics Decision Aid), some ethical principles are at risk when AI is used for
facial recognition: privacy, transparency, and bias all play a role in how this
technology unfolds.

Privacy

Privacy is “the ability of an individual or group to seclude themselves or information
about themselves and thereby express themselves selectively”.8 Google Photos’ facial
recognition makes use of people’s personal photos. This kind of data is highly personal
and potentially sensitive: the pictures contain information about people’s closest
acquaintances and everyday lives, and their location can be revealed through the
metadata. Even if you do not supply names for certain faces, Google Photos can still
recognize patterns through faces tagged in pictures on other platforms, contextual data
from your Google environment, and repetition (having more pictures of the same face
improves the capacity to identify it correctly).9
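
To illustrate how location can be exposed by an ordinary photo, the following sketch
reads the GPS block from a JPEG's EXIF metadata with the Pillow library. The file name
is a placeholder; the point is only that photos routinely carry this kind of
information when they are uploaded.

    from PIL import Image  # Pillow

    # Placeholder file name; most smartphone JPEGs carry EXIF metadata.
    img = Image.open("holiday_photo.jpg")
    exif = img.getexif()

    # Tag 0x8825 is the standard EXIF pointer to the GPS block
    # (GPSLatitude, GPSLongitude, GPSTimeStamp, ...).
    gps = exif.get_ifd(0x8825)
    if gps:
        print("GPS metadata present:", dict(gps))
    else:
        print("No GPS metadata found in this photo.")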
There is a way to opt out: the face grouping feature, which is on by default, can be
turned off in the settings. Turning it off deletes the face groups, the face models
used to create those groups, and the face labels the user created.
Privacy is based on the principle that individuals should have the right to make
informed decisions about their lives and, in this case, their data. Having the face
grouping feature on by default, without explaining to users how it operates, therefore
has an impact on people’s privacy.
Regulators have taken notice of the face grouping technology. In Illinois, Google was
sued in a class action for gathering and keeping biometric information without the
required authorization. Instead of contesting the suit, Google quietly reached a $100 million
settlement in 2022.10 This goes to show that Google tends to prioritize developing its
AI technology over its own AI principle: ‘promoting privacy, security, and respecting
intellectual property rights’.

Transparency

In the context of artificial intelligence, transparency has the following elements:
high-quality data collection, ethical handling of that data, and unambiguous
accountability. Transparent data practices give stakeholders visibility into how AI
systems are built and enable informed trust in the systems’ outcomes.11
Nevertheless, this trust has been damaged since Google Photos became the center of
public outrage. In a 2015 incident, the app mistakenly labeled a Black person as a
‘gorilla’.12 People were horrified by the racist overtones of the label. Google
executive Yonatan Zunger acknowledged that this was ‘100% not okay’ and made sure that
the necessary steps were taken to prevent a similar mistake. The example shows the
consequences that mistakes by the AI technology can have, and it is not in line with
two of Google’s AI principles: putting safeguards in place to lessen potentially
harmful outcomes, and putting in place suitable human oversight in line with user
objectives and social responsibility.
As AI becomes more integrated into everyday tools, users are left with little clarity,
because the core privacy policy does not state whether photos in Google Photos are
automatically included in or excluded from AI training.13 Although users can turn off
the Face Grouping feature, having it on by default is harmful, as users are not
adequately informed.

Bias

The problematic gorilla incident described above indicates a bias either in the data or
in the AI model itself. This could be due to the underrepresentation of Black people in
the model’s training data. However, designing a demographically balanced dataset might
not be as easy as it seems. A paper by Raji et al., written by Google researchers to
investigate the ethical concerns of facial recognition auditing, states that “by
actively seeking to include members of underrepresented groups, the privacy risk is
disproportionately increased for that group” (Raji et al., 2020). The incident goes to
show that there is still work to do in order to comply with Google’s AI principle of
putting in place suitable human oversight in line with user objectives and social
responsibility.
Another problematic aspect of this technology is that it works better the more labels
and pictures you provide. Making the quality of a product depend on the amount of data
shared has a consequence: it acts as an incentive to give away more and more. The
question is whether this is in line with Google’s AI principle of promoting privacy,
security, and respecting intellectual property rights.

Recommendations

Google offers countless services and states that the kind of data it collects depends
on the service. The dense and extensive privacy policy does not invite users to read it
and inform themselves about their rights, even though Google has included some short,
animated clips. Transparency can be increased by providing more information in a more
accessible way, instead of placing the responsibility for knowing the rules on the
user. Having features off by default would give users more awareness of, and autonomy
over, each feature.
The choice not to reveal the precise AI model, and not to tell users about the training
process and the data used, has an impact on users’ privacy and trust. Instead of having
Face Grouping active by default, it should be deactivated by default, with users given
the choice to activate it once they have been informed about the data use and how the
model works.
Moreover, raising awareness of the flaws of facial recognition software can help in
managing potential mistakes and harmful outcomes. There will always be biases in data
and in AI algorithms, but that does not mean we cannot use such algorithms at all.
Google does, however, have a social responsibility to inform its users properly.

Works Cited

Wikipedia. (n.d.). Google. Retrieved 2025 from Wikipedia: https://en.wikipedia.org/wiki/Google
Google AI. (n.d.). Our AI Principles. Retrieved 2025 from Google AI: https://ai.google/responsibility/principles/
Luxand Cloud. (n.d.). How Does Google Photos Recognize the Names and Faces? Retrieved 2025 from Luxand Cloud: https://luxand.cloud/face-recognition-blog/how-does-google-photos-recognize-the-names-and-faces/?utm_source=devto&utm_medium=how-does-google-photos-recognize-the-names-and-faces
Cardete, J. (n.d.). Convolutional Neural Networks: A Comprehensive Guide. Retrieved 2025 from Medium: https://medium.com/thedeephub/convolutional-neural-networks-a-comprehensive-guide-5cc0b5eae175
Google. (n.d.). Google Privacy & Terms. Retrieved 2025 from Privacy Policy: https://policies.google.com/privacy#infocollect
Wikipedia. (n.d.). Privacy. Retrieved 2025 from Wikipedia: https://en.wikipedia.org/wiki/Privacy
Constantinescu, E. (2025, April 22). Is Google Photos safe? For sharing private photos, not so much. Retrieved 2025 from Proton: https://proton.me/blog/is-google-photos-safe#privacy
Dittmar, L. (n.d.). What Does Transparency Really Mean in the Context of AI Governance? Retrieved 2025 from OCEG: https://www.oceg.org/what-does-transparency-really-mean-in-the-context-of-ai-governance/#:~:text=Data%20transparency%20is%20an%20outcome,trust%20in%20the%20systems'%20outcomes.
BBC. (2015, July 1). Google apologises for Photos app's racist blunder. Retrieved 2025 from BBC News: https://www.bbc.com/news/technology-33347866
Raji, I. D., Gebru, T., Mitchell, M., Buolamwini, J., Lee, J., & Denton, E. (2020). Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing. Proceedings of the 2020 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’20), 145-151.

