Meta’s Ray-Ban smart glasses are recording more than just your daily life: they’re capturing deeply personal moments, and those recordings are being reviewed by human contractors. A recent investigation by Swedish news outlets Svenska Dagbladet and Göteborgs-Posten reveals that Meta outsources the analysis of user-recorded footage to workers in Kenya, who are tasked with labeling data to train AI models.

Data Labeling and Privacy Concerns

The process, known as data labeling, involves humans manually reviewing raw footage before it’s fed into AI systems. This ensures the AI can accurately identify objects, scenes, and even behaviors in future recordings. According to the report, workers have been exposed to disturbing content, including footage taken in private spaces like bathrooms, explicit sexual material, and recordings containing sensitive personal information such as bank account details.

The issue isn’t just the graphic content; it’s the absence of meaningful consent. Many of these recordings appear to have been captured without the subjects’ knowledge. Workers describe a culture of silence in which questioning the nature of the work is discouraged and can cost them their jobs. “You understand that it is someone’s private life you are looking at, but at the same time you are just expected to carry out the work. You are not supposed to question it. If you start asking questions, you are gone,” one contractor told the publications.

Exploitation and Working Conditions

The Meta contractor involved, Sama, already faces a class-action lawsuit alleging exploitation of content moderators forced to review traumatic material without adequate support. This latest revelation adds another layer to these concerns, raising questions about the ethical responsibilities of companies relying on outsourced labor to fuel AI development. Meta’s Terms of Service explicitly reserve the right to share user interactions with human moderators.

Sales, Backlash, and Surveillance Concerns

Sales of Meta’s Ray-Ban smart glasses tripled in 2025, surpassing 7 million units. However, the device has drawn growing criticism over potential misuse, including viral videos of wearers secretly recording strangers. Users have even found ways to disable the recording indicator light, turning the glasses into an undetectable surveillance tool.

Beyond privacy violations, experts warn of a broader trend toward unregulated facial recognition and surveillance tech. Meta’s planned live AI features, including potential facial recognition capabilities, could further expand this reach. The technology raises serious questions about data security, government access, and the potential for militarized surveillance.

The rapid integration of AI-powered wearable devices is outpacing ethical oversight, creating a dangerous gap between innovation and accountability. The case of Meta’s Ray-Ban glasses is a stark example of how easily intimate data can be exploited, outsourced, and weaponized without adequate safeguards.