In a recent interview at TechCrunch Disrupt 2023, Signal president Meredith Whittaker spoke about the relationship between artificial intelligence (AI) and surveillance technology. According to Whittaker, many companies that rely on monetizing user data are interested in AI because it allows them to expand their surveillance business models. She argues that AI and big data are fundamentally connected, and that AI technology can itself function as surveillance. For example, facial recognition cameras with emotion recognition capabilities collect data about individuals and can be used by employers, governments, and border control agencies to make determinations with far-reaching consequences.
Whittaker also highlights the role of human labor in developing AI systems. She explains that thousands of workers are paid very little to organize and annotate the data that underlies these systems. In some cases, this labor is obscured behind the label of reinforcement learning from human feedback (RLHF). When the curtain is pulled back, the intelligence behind these systems may not be as advanced as it appears.
However, not all AI and machine learning systems are exploitative. Whittaker mentions that Signal uses a small on-device model for the face blur feature in its media editing toolset. While it may not be perfect, this AI tool helps detect and blur faces in crowd photos to protect individuals' biometric data. Whittaker acknowledges that such applications of AI are beneficial but notes that the economic incentives driving the development of facial recognition technology go beyond these positive use cases.
In summary, Whittaker's perspective emphasizes the intersection of AI and surveillance, the role of underpaid human labor in building AI systems, and the potential misuse of AI technology beyond its current applications. While some AI tools can be used responsibly, the economic forces behind surveillance technology suggest a broader and potentially more exploitative future for AI.