
The rapid growth of artificial intelligence is one of the most significant changes of our time, reshaping industries and daily life. The excitement about new AI and machine learning tools is understandable: they can automate tedious tasks, improve decision-making, and enable new kinds of experiences. One of the most prominent and most contested applications of AI today is facial recognition. Facial recognition is a branch of computer vision that allows machines to identify or verify people by comparing their facial features against a database of stored templates. The technology has significant uses in security, marketing, and social media; law enforcement agencies, for example, use it to locate suspects, which can speed investigations and potentially improve public safety.

Despite these benefits, facial recognition raises ethical concerns that must be addressed. Misuse of the technology can lead to serious harms, including invasive surveillance, racial profiling, and privacy violations. Studies show that many current systems are biased, misidentifying people from marginalized communities at higher rates. This raises pressing questions about fairness and accountability, especially in law enforcement: if facial recognition is deployed widely without proper oversight, it could deepen social inequalities and erode trust in public institutions.

Consent is another central issue in this debate. Many people do not know that their faces are being captured, analyzed, and stored, which threatens personal autonomy and privacy rights. The rise of “surveillance capitalism” complicates the situation further, as personal data is routinely collected and monetized without meaningful consent. Clear rules for the use of facial recognition technology are therefore essential.

In conclusion, while there are real advantages to making facial recognition technology available for commercial use, we must carefully weigh the ethical issues and societal risks it brings. We need a thoughtful approach that balances innovation with the protection of individual rights and freedoms. As we continue to explore AI and machine learning, it is vital to have open discussions and to create strong guidelines about whether, and how, facial recognition technology should be used commercially. Balancing these interests is the key to navigating this complex area responsibly.
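To make the matching step concrete: a typical system converts each face image into a numeric “embedding” and declares a match when two embeddings are sufficiently similar. The sketch below is a minimal, hypothetical illustration of that comparison, using made-up embeddings in place of a real model’s output; no production system is this simple.

```python
# A minimal, hypothetical sketch of the matching step in face verification.
# Real systems use a trained neural network to produce embeddings; here the
# embeddings are invented numbers so the example runs on its own.
import math

def cosine_similarity(a, b):
    """Similarity between two face embeddings (lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify(probe, enrolled, threshold=0.6):
    """Declare a match when the embeddings are close enough. The threshold
    is one place bias can enter: a single global cutoff can produce more
    false matches for groups the model represents less accurately."""
    return cosine_similarity(probe, enrolled) >= threshold

# Made-up embeddings standing in for a real model's output.
alice_enrolled = [0.90, 0.10, 0.30]
alice_probe    = [0.88, 0.12, 0.29]   # same person, new photo
stranger       = [0.10, 0.90, 0.20]

print(verify(alice_probe, alice_enrolled))   # True  -- same person
print(verify(stranger, alice_enrolled))      # False -- different person
```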
Source Summary
de Seta, Gabriele, and Anya Shchetvina. “Imagining Machine Vision: Four Visual Registers from the Chinese AI Industry.” *AI & Society*, vol. 39, no. 5, Oct. 2024, pp. 2267–84. EBSCOhost, https://doi-org.citytech.ezproxy.cuny.edu/10.1007/s00146-023-01733-x.
de Seta and Shchetvina analyze the evolving landscape of machine vision within the Chinese AI industry, identifying four distinct visual registers: surveillance, object recognition, artistic expression, and data visualization. Each register reflects differing social and cultural implications of machine vision technologies. The authors argue that these visual registers not only shape the public perception of AI but also influence regulatory and ethical considerations. Their examination reveals how the interplay between technology and society informs the development and deployment of AI systems in China.
Key Quotes:
- “The four visual registers highlight the multifaceted nature of machine vision and its impact on social structures.”
- “Understanding these registers is crucial for assessing the ethical implications of AI technologies in contemporary society.”
Process Writing:
This source has provided valuable insights into the cultural and social dimensions of machine vision technologies, particularly within the Chinese context. The identification of the four visual registers offers a framework for analyzing how these technologies are perceived and utilized. This framework will enhance my research by allowing me to explore the broader implications of AI deployment in different societal settings. The authors’ nuanced perspective will aid in constructing a well-rounded argument about the interplay between technology, culture, and ethics in AI development.
Murray, Daragh. “Police Use of Retrospective Facial Recognition Technology: A Step Change in Surveillance Capability Necessitating an Evolution of the Human Rights Law Framework.” *Modern Law Review*, vol. 87, no. 4, July 2024, pp. 833–63. EBSCOhost, https://doi-org.citytech.ezproxy.cuny.edu/10.1111/1468-2230.12862.
Summary
Murray explores the rapid development and deployment of retrospective facial recognition technology by police forces and its implications for human rights law. He argues that these technologies significantly enhance surveillance capabilities, which in turn challenges existing legal frameworks designed to protect privacy and civil liberties. Murray emphasizes the need for a rethinking of human rights protections in light of these advancements, urging lawmakers to adapt existing laws to ensure that they address the growing power of state surveillance while balancing security and individual freedoms.
Key Quotes:
- “The rise of retrospective facial recognition technology represents a significant shift in the capacity of the state to monitor and control individuals, challenging fundamental human rights protections.”
- “Human rights law must evolve to keep pace with these new surveillance capabilities, ensuring that the balance between state security and individual privacy is properly maintained.”
Process Writing:
This source has been highly useful in my research on the intersection of surveillance technology and human rights. Murray’s exploration of the legal and ethical challenges posed by facial recognition technology provided important context for understanding how existing frameworks might fail to protect citizens’ privacy. His argument that human rights law must evolve in response to technological advances has informed my own thinking on the topic. It helped me to see that my research should not only focus on the technical aspects of surveillance tools but also consider the broader societal and legal impacts, which is central to my thesis.
Guo, Shangwei, et al. “Towards Efficient Privacy-Preserving Face Recognition in the Cloud.” *Signal Processing*, vol. 164, Nov. 2019, pp. 320–28. EBSCOhost, https://doi-org.citytech.ezproxy.cuny.edu/10.1016/j.sigpro.2019.06.024.
Summary of Major Findings:
Guo and colleagues examine methods to enhance privacy protection in facial recognition systems, specifically in cloud computing environments. The paper proposes a novel approach to improve the efficiency and security of privacy-preserving facial recognition by integrating encryption techniques. This approach allows for facial recognition to be conducted in a cloud environment without exposing sensitive data to unauthorized access. The authors suggest that their method offers a balance between accuracy, computational efficiency, and privacy, making it a viable solution for scalable cloud-based face recognition systems that comply with privacy standards.
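The authors’ specific encryption scheme is not reproduced here, but their underlying goal, computing a match score without exposing the raw biometric vector to any single party, can be illustrated with a different and deliberately simpler technique: additive secret sharing between two non-colluding servers. The following is a hedged toy sketch of that general idea, not the paper’s method.

```python
# Toy sketch only -- NOT the authors' encryption-based protocol. It shows
# the same underlying goal with a simpler tool: additive secret sharing.
# The client splits its probe embedding into two random shares and sends
# one to each of two non-colluding servers; each server computes a partial
# dot product against the enrolled template, and only the recombined sum
# reveals the similarity score. Neither server alone sees the probe.
import random

P = 2**61 - 1        # prime modulus for the arithmetic shares
SCALE = 10_000       # fixed-point scale so float embeddings become integers

def encode(vec):
    return [round(x * SCALE) % P for x in vec]

def share(encoded):
    """Split an encoded vector into two additive shares mod P."""
    s1 = [random.randrange(P) for _ in encoded]
    s2 = [(v - a) % P for v, a in zip(encoded, s1)]
    return s1, s2

def partial_dot(share_vec, template):
    """Run by each server on its own share; the template is server data."""
    return sum(s * t for s, t in zip(share_vec, encode(template))) % P

def recombine(p1, p2):
    r = (p1 + p2) % P
    if r > P // 2:                 # map back to a signed value
        r -= P
    return r / (SCALE * SCALE)

probe    = [0.12, -0.05, 0.33]     # client's private face embedding
template = [0.10, -0.02, 0.35]     # server-side enrolled template

s1, s2 = share(encode(probe))
score = recombine(partial_dot(s1, template), partial_dot(s2, template))
print(f"similarity score ~= {score:.4f}")   # equals dot(probe, template)
```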
Key Quotes:
- “Our proposed approach achieves both high recognition accuracy and strong privacy protection, ensuring that sensitive biometric data remains secure even when processed in the cloud.”
- “The integration of encryption techniques not only maintains the privacy of individuals but also reduces the computational burden on the cloud infrastructure, making it an efficient solution for large-scale deployment.”
Process Writing:
This article has been invaluable to my research as it provides a technical perspective on addressing privacy concerns in facial recognition systems. Guo et al.’s work on privacy-preserving techniques in cloud-based systems is particularly relevant for understanding how privacy can be maintained even when facial recognition technologies are deployed at scale. This source has informed my exploration of the ethical and technological aspects of face recognition, as it highlights potential solutions to mitigate privacy risks. It has helped shape my argument that while facial recognition technology can offer security benefits, it is crucial to integrate privacy safeguards to ensure responsible implementation.
Source Analysis
Xiang, Alice. “Being ‘Seen’ Versus ‘Mis-Seen’: Tensions between Privacy and Fairness in Computer Vision.” *Harvard Journal of Law & Technology*, vol. 36, no. 1, Oct. 2022, pp. 1–60. EBSCOhost, research.ebsco.com/linkprocessor/plink?id=9de29a78-0ef2-3bf7-b955-229062e1bcf0.
Xiang’s article explores the inherent tensions between privacy and fairness in the realm of computer vision technologies. It argues that while these technologies can enhance visibility and security, they also risk perpetuating biases and infringing on individual privacy. The paper highlights case studies that demonstrate how algorithmic decisions can misrepresent individuals, leading to adverse outcomes. Xiang advocates for a more nuanced understanding of visibility, urging the development of frameworks that prioritize ethical considerations alongside technological advancements.
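One way audits make the “mis-seen” problem concrete (in the spirit of the Gender Shades study the article mentions) is by comparing error rates across demographic groups. The sketch below is a hypothetical illustration with made-up records, not data from Xiang’s paper: it computes a false-match rate per group, the kind of disparity that drives the fairness argument.

```python
# Hedged sketch of a per-group fairness audit on made-up records.
# Each record: (group label, system predicted "match", ground-truth "match").
from collections import defaultdict

results = [
    ("group_a", True,  True),
    ("group_a", False, False),
    ("group_a", True,  False),   # false match: system said yes, truth was no
    ("group_b", True,  True),
    ("group_b", False, False),
    ("group_b", False, False),
    ("group_b", True,  True),
]

counts = defaultdict(lambda: [0, 0])   # group -> [false matches, total trials]
for group, predicted_match, actual_match in results:
    counts[group][1] += 1
    if predicted_match and not actual_match:
        counts[group][0] += 1

for group, (fm, n) in sorted(counts.items()):
    print(f"{group}: false-match rate = {fm}/{n} = {fm / n:.2f}")
```

A disparity like the one this toy data produces (0.33 for one group, 0.00 for the other) is exactly the sort of unequal burden Xiang argues a single-threshold system can impose.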
Alice Xiang is the Global Head of AI Ethics at Sony. She manages the team responsible for conducting AI Ethics assessments across Sony’s business units and implementing Sony’s AI Ethics guidelines. Additionally, as the Lead Research Scientist for AI Ethics at Sony AI, Alice leads a lab of AI researchers working on cutting-edge research to enable the development of more responsible AI solutions. She also served as a Visiting Scholar at Tsinghua University’s Yau Mathematical Sciences Center, where she taught a course on Algorithmic Fairness, Causal Inference, and the Law. Alice is both a lawyer and statistician, with experience developing machine learning models and serving as legal counsel for technology companies. Alice holds a Juris Doctor from Yale Law School, a Master’s in Development Economics from Oxford, a Master’s in Statistics from Harvard, and a Bachelor’s in Economics from Harvard.
The primary audience of this paper is likely other AI ethics researchers and scholars, since it was published in the Harvard Journal of Law & Technology. A secondary audience is inquisitive or concerned citizens who worry about the use of facial recognition technology for mass surveillance in their city or country. Xiang wrote the text in response to growing concerns about the rise of facial recognition and related computer vision technologies, and to newly proposed AI regulations such as the EU AI Act. Appearing in a legal, academic, and policy-oriented publication, the article aims to inform stakeholders about the current landscape of AI regulation, particularly the proposed EU AI Act and its implications for privacy rights and technology use. The mention of influential studies like the Gender Shades paper reflects a growing recognition of the ethical implications of AI technologies, especially regarding bias and fairness.
The article first explores the conflict between privacy and fairness in addressing algorithmic bias in human-centric computer vision (HCCV), arguing that the key tension is the desire to remain “unseen” by AI while avoiding being “mis-seen” by it. It then examines proposed strategies for resolving this conflict, such as delegating data collection to trusted third-party entities, and assesses how well they address the technical, operational, legal, and ethical issues that arise. The tone is informative and academic: when investigating algorithmic bias in HCCV, Xiang provides granular technical detail about how these algorithms are executed. This tone suits the intended audience of fellow researchers and lawmakers. The text is also timely, since facial recognition is an active and rapidly evolving field of research, with new developments appearing regularly. The paper is highly relevant to my research topic because it addresses both the underlying technology and its societal implications in the context of bias and fairness. The information it conveys appears factual, as it closely aligns with other texts I have read on the subject and with my prior knowledge of the topic.
This source has been helpful in shaping my understanding of the ethical implications of computer vision technologies. Xiang’s nuanced discussion of privacy versus fairness provides a foundational perspective for my research, highlighting the complexities involved in technological implementation. The case studies presented illustrate real-world consequences, reinforcing the importance of incorporating ethical considerations in technology development. This source will help inform my arguments about the need for responsible innovation in this field.
Shein, Esther. “Using Makeup to Block Surveillance.” *Communications of the ACM*, vol. 65, no. 7, July 2022, pp. 21–23. EBSCOhost, https://doi-org.citytech.ezproxy.cuny.edu/10.1145/3535192.
Esther Shein is a freelance journalist with extensive experience writing and editing for publications and content providers, focusing on business and technology as well as education and general-interest features. She also operates a college essay consulting business, working with high school seniors and graduate students to craft essays that bring out their personalities and best represent who they are.
Her primary intended audience is likely female technology and business professionals who are curious about facial recognition technology. Given that the article was published in *Communications of the ACM*, which advertises itself with the tagline “Reach the innovators and thought leaders working at the cutting edge of computing and information technology through ACM’s magazines, websites and newsletters,” the intended audience also includes the magazine’s broader readership.
The topic is timely because the effectiveness of anti-surveillance makeup is actively debated, particularly by racial justice protesters who do not want to be tracked, as Magee notes in the article.
Nitzan Guetta, a Ph.D. candidate at Ben-Gurion University in Israel, was among a group of researchers who spent two years exploring “how deep learning-based face recognition systems can be fooled using reasonable and unnoticeable artifacts in a real-world setup.” The researchers conducted an adversarial machine learning attack using natural makeup that prevents a participant from being identified by facial recognition models, she says. They “chose to focus on a makeup attack since at that time it was not explored, especially in the physical domain, and since we identified it as a potential and unnoticeable means that can be used for achieving this goal” of evading identification, Guetta explains.
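The researchers’ actual method applies physical makeup guided by a model; the sketch below is only a hedged toy illustration of the general idea behind such evasion attacks: search for a small, bounded perturbation of the input that drags its embedding away from the enrolled template until the match score falls. Here the “image” is just a vector and embed() is a stand-in identity map, so the example is self-contained and runnable.

```python
# Toy black-box evasion sketch (random search), NOT the researchers' makeup
# method: find a perturbation within a per-coordinate budget (analogous to
# makeup being inconspicuous) that minimizes the match score.
import random

def embed(x):
    """Stand-in for a real face-embedding model (identity map on a vector)."""
    return x

def similarity(a, b):
    """Dot-product match score between two embeddings."""
    return sum(p * q for p, q in zip(a, b))

def evade(image, template, budget=0.3, iters=2000, seed=0):
    """Random search for a bounded perturbation that lowers the score."""
    rng = random.Random(seed)
    best, best_score = list(image), similarity(embed(image), template)
    for _ in range(iters):
        candidate = [x + rng.uniform(-budget, budget) for x in image]
        score = similarity(embed(candidate), template)
        if score < best_score:
            best, best_score = candidate, score
    return best, best_score

face     = [0.9, 0.1, 0.3]    # the attacker's own "face" (a toy vector)
template = [0.9, 0.1, 0.3]    # the template the system enrolled for them

before = similarity(embed(face), template)
_, after = evade(face, template)
print(f"match score before: {before:.3f}, after: {after:.3f}")
```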
The author’s purpose is to highlight innovative ways individuals can resist or mitigate surveillance, particularly through the use of makeup as a form of camouflage. Shein aims to raise awareness about the implications of facial recognition technology and to encourage dialogue about privacy, agency, and the intersection of technology with everyday life. She uses the genre of the scholarly article to organize her work and to best address her intended audience; the conventions of this genre include evidence-based argumentation, critical analysis, and an objective tone. Shein departs slightly from strict academic objectivity by incorporating more personal and relatable elements, such as the practical use of makeup as a countermeasure against surveillance. This choice makes the topic more accessible and engages readers on a personal level, bridging the gap between scholarly discourse and everyday experience. Generally, though, an academic tone is maintained, emphasizing rationality over emotion.

By discussing the practical use of makeup as a form of resistance, she evokes a sense of empowerment and creativity, suggesting a proactive approach to privacy. Shein includes factual information about surveillance technologies, such as facial recognition, to inform readers about the scope and implications of these tools. Personal anecdotes and relatable scenarios (like makeup as a countermeasure) evoke emotional responses, encouraging readers to feel connected to the topic, and by citing research and discussing current trends she establishes credibility, positioning herself as knowledgeable about both technology and privacy issues. These appeals resonate with a tech-savvy audience concerned about privacy: the logical facts educate, the emotional connections engage, and the credibility reassures readers that the information is reliable.
The article, published in July 2022, is timely given the ongoing discussions about surveillance technology and privacy concerns. Its recent publication means it addresses current trends and developments, making its insights particularly relevant. Shein’s exploration of makeup as a means of blocking surveillance directly answers questions about personal privacy strategies and the societal implications of surveillance technologies. She cites relevant authorities and studies, supporting her claims with evidence, which bolsters the accuracy of her arguments. The language is analytical and objective, focusing on facts and evidence rather than emotional appeals or sensationalism; this academic tone indicates a commitment to providing reliable information rather than promoting a specific agenda.