In 2021, the US-based video interviewing software vendor HireVue removed facial coding APIs from its system following an FTC complaint alleging that the platform disproportionately threatened the privacy rights and livelihoods of minority candidates by favoring certain speaking styles, tones of voice, and facial expressions.
This wasn’t an isolated incident. Industry giants Google, Microsoft, and Nielsen followed suit by retiring facial coding features from their own emotion recognition tools, signaling a reassessment of the technology’s limitations and risks. Despite this, many companies continue to develop and use products that detect, categorize, and respond to human emotions in environments from retail stores to workplaces, vehicles, and public spaces.
To be clear, emotion recognition technology (ERT) can provide companies with valuable insights that help personalize experiences and enhance efficiency. These tools have, for example, successfully predicted the hit potential of songs on Spotify and detected signs of fatigue in drivers. But without ethical frameworks and boundaries, their widespread use threatens consumers’ right to emotional privacy. Autonomy over emotional information shouldn’t be seen as a limitation to overcome, but as a critical aspect of humanity worth protecting.
This article provides chief privacy officers (CPOs), chief technology officers (CTOs), and HR technology leaders with a framework for evaluating emotion recognition tools—not to reject them entirely, but to implement them more carefully in ways that respect boundaries, mitigate legal exposure, and align with AI governance best practices.
How emotional data is different from other personal data
While data protection frameworks like Europe’s GDPR have established clear categories for sensitive personal data, such as genetic information, biometric identifiers, and health records, emotional data exists in a regulatory blind spot. A 2022 Journal of Law and the Biosciences article offers the following as a working definition of emotional data: “Emotions are not per se ‘sensitive data,’ but they might be if they are collected through emotion detection tools based on biometric tools such as facial recognition.”
Emotional data is hard to pinpoint because it’s uniquely positioned at the intersection of what can be observed and what’s felt. A grocery store receipt, for instance, can show you bought eggs, berries, and ice cream. But data that captures your furrowed brow when seeing the price of those eggs, your hesitation while deciding between organic or conventional berries, and the way your pupils dilate when you’ve spotted your favorite ice cream flavor—the last carton, too!—reveals parts of yourself that corporations can’t typically access.
Consider also that facial expressions aren’t universal across cultures or neurodiverse populations, which makes accurate interpretation of emotional signals another challenge entirely. In fact, in its 2019 annual report, the research institute AI Now argued that because there’s little scientific basis for ERT, it should be banned from use in decisions that affect people’s lives. The institute contends that attempting to “read” consumers’ inner emotions by interpreting physiological data is invasive whether or not it offers conclusive insights.
Given these scientific uncertainties and ethical concerns, implementing ERT without established compliance frameworks or industry standards presents the following risks:
Legal liability and regulatory penalties: Emotion recognition technologies span multiple regulatory domains, including privacy laws, biometric data protection, and workplace monitoring. Without established frameworks, organizations are subject to penalties like the 200,000-euro fine imposed on the organizers of Mobile World Congress, a global trade show, for subjecting attendees to facial recognition for identity verification in 2021.
Algorithmic bias and discrimination claims: Current emotion recognition systems show significant performance disparities across demographic groups. When these tools influence decisions about employment, service prioritization, or security assessments, companies risk discrimination claims, as HireVue did, and as Amazon did in 2018, when its machine-learning-based recruitment program was found to be biased against women.
Data security and governance challenges: Emotional data requires exceptional security measures and careful lifecycle management. Organizations that don’t implement appropriate collection limitations, retention policies, access controls, and minimization strategies (a brief sketch of such controls follows this list) risk the same fate as facial recognition startup Clearview AI, which was fined 30.5 million euros by the Dutch data protection authority in 2024 after failing to obtain consent from the billions of people whose faces appear in its database.
Erosion of consumer and stakeholder trust: Using this technology without transparent guidelines and proper oversight damages trust with employees, customers, and regulators, potentially resulting in decreased engagement, consumer backlash, or investor concerns about governance practices. Zoom, for example, received pushback in 2022 from 27 human rights groups over its use of emotion detection AI in its Zoom IQ (now Zoom AI Companion) feature.
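To make the governance risk above more concrete, here’s a minimal sketch of what collection limitation and retention enforcement can look like in code. The field names, 30-day window, and purge routine are illustrative assumptions rather than a description of any vendor’s actual pipeline; the point is that only coarse, purpose-scoped labels are stored and that deletion happens by default.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative assumption: only coarse, purpose-scoped labels are stored,
# never raw video, audio, or biometric templates (data minimization).
RETENTION_WINDOW = timedelta(days=30)

@dataclass(frozen=True)
class EmotionInference:
    record_id: str
    purpose: str        # the single declared purpose this record was collected for
    label: str          # e.g., "fatigued" / "alert"; no raw signal is retained
    collected_at: datetime

def purge_expired(records: list[EmotionInference],
                  now: datetime | None = None) -> list[EmotionInference]:
    """Drop anything older than the retention window, so deletion is the default, not the exception."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r.collected_at <= RETENTION_WINDOW]
```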
Put simply, the stakes are high. And while established boundaries around emotional data collection and use are an important first step toward more ethical practices, another challenge awaits: obtaining informed consent from consumers.
Why obtaining informed consent to emotional data use is so complicated
It’s easy to suggest that companies like Clearview AI obtain consent from the people whose faces are featured in their database. But given the power imbalances that are inherent in employer, commercial, and government applications, it’s extremely difficult to identify contexts where consumers can give meaningful consent to emotional data use.
This sentiment is echoed in a 2023 report from the University of Pennsylvania’s Annenberg School for Communication, “Americans Can’t Consent to Companies’ Use of Their Data”:
When you phone an 800 number to complain, do you want the company to infer your emotional state by the sound of your voice? Based on that, do you want the firm to decide how long to keep you waiting or to triage you to an agent who is successful at satisfying and even “upselling” people with your emotions and purchase history? That already happens, and it indicates the rise of marketers peering into the human body for data.
What’s more, “[b]iometric data cannot be changed like email addresses” or other identifiers that make it easy for consumers to opt in or out as they choose. And in scenarios where employers, commercial entities, and government services use ERT, employees, customers, and private citizens don’t always have a clear way to opt out:
Employer use: A 2023 AI & Society article mentions Walmart’s use of “performance metric” bracelets that use sensors to measure employee productivity, as well as UK company Moonbeam’s wrist-wearable “happiness tracker” that purports to enhance workplace social dynamics. Those who wish to remain employed likely feel pressured to comply with policies that require them to wear such devices, especially if they work in states with at-will employment.
Commercial use: It’s one thing to consent to cookies when shopping online, but another for a brick-and-mortar store to collect data off your person. Retail tech startup Wayvee does just that by analyzing in-store shoppers’ breathing rate, heart rate, and body gestures through radio frequency waves and proprietary AI algorithms.
The company claims that because its product operates without cameras, it “ensur[es] customer privacy while providing instant insights into 100% of customer interactions.” But customers who don’t want to participate may simply have to avoid places that use this technology, which is easier said than done at establishments like grocery stores and banks.
Government use: The U.S. Special Operations Command has already used voice analytics software to vet Afghan commando recruits, and Deloitte posits further that government agencies like the IRS could one day use affective computing to gain insights into citizens’ emotional states to tailor communications and enhance compliance.
China has also used ERT on Uyghur citizens as a surveillance tool to identify nervous or anxious mindsets. Human rights advocates have condemned this: “It’s people who are in highly coercive circumstances, under enormous pressure, being understandably nervous, and that’s taken as an indication of guilt.”
When companies consider emotion recognition tools, commercial incentives typically outweigh actual human benefit. But those that are intentional about implementing robust security protocols, establishing clear compliance frameworks, and balancing analytical insights with transparent employee protections will find it easier to mitigate risks and build stakeholder trust.
How to create a better path forward that respects boundaries and preserves privacy
Organizations that aspire to use ERT in ways that prioritize informed consent and privacy should take the following actions:
Establish comprehensive transparency protocols: Inform users when they’re interacting with ERT and provide them with accessible explanations of data collection and use that go beyond standard privacy policies. Conduct regular third-party audits with publicly available results to maintain accountability.
Implement ethical consent and purpose limitation: Create straightforward opt-in processes where users can make informed decisions about sharing emotional data, with the ability to revoke consent at any time (see the sketch after this list). Define specific purposes for data collection and implement safeguards to prevent unauthorized repurposing.
Maintain rigorous risk assessment and compliance programs: Conduct ongoing monitoring to identify emotional manipulation risks and evaluate systems against regulatory frameworks such as the EU AI Act. Prohibit manipulative tactics and regularly assess for unintended consequences or potential misuse.
Develop robust organizational capabilities: Train all team members on responsible ERT deployment practices. Establish clear accountability structures with designated individuals responsible for implementation, issue resolution, and staying current with evolving ethical standards and regulations.
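To illustrate the consent and purpose-limitation item above, here’s a minimal sketch of an opt-in consent record that is scoped to a single declared purpose, expires on its own, and can be revoked at any time. The purpose names, field layout, and 90-day window are assumptions made for the example, not a description of any particular product.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative purposes an organization would declare before any collection begins.
ALLOWED_PURPOSES = {"driver_fatigue_alerts", "aggregate_store_analytics"}

@dataclass
class EmotionDataConsent:
    """One subject, one narrowly defined purpose, one time-boxed grant."""
    subject_id: str
    purpose: str
    granted_at: datetime
    expires_at: datetime
    revoked_at: datetime | None = None

    def is_valid(self, requested_purpose: str, now: datetime | None = None) -> bool:
        """Consent covers only the exact purpose it was granted for and can be withdrawn at any time."""
        now = now or datetime.now(timezone.utc)
        return (
            self.revoked_at is None
            and requested_purpose == self.purpose      # purpose limitation: no silent repurposing
            and self.purpose in ALLOWED_PURPOSES
            and now < self.expires_at                  # consent lapses rather than living forever
        )

    def revoke(self) -> None:
        """Revocation takes effect immediately; jobs must re-check is_valid() before processing."""
        self.revoked_at = datetime.now(timezone.utc)


# Usage: a processing job checks consent for its specific purpose before touching emotional data.
consent = EmotionDataConsent(
    subject_id="driver-1042",
    purpose="driver_fatigue_alerts",
    granted_at=datetime.now(timezone.utc),
    expires_at=datetime.now(timezone.utc) + timedelta(days=90),
)
assert consent.is_valid("driver_fatigue_alerts")
assert not consent.is_valid("aggregate_store_analytics")  # reuse for another purpose is blocked
consent.revoke()
assert not consent.is_valid("driver_fatigue_alerts")
```

The design choice worth noting is that every downstream job must name the purpose it is acting on and re-check validity before processing, so revocation and expiry take effect without manual intervention.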
Key takeaways
The future of ERT depends on a careful balance of innovation and responsibility. The most successful deployments will treat emotional privacy as a fundamental human right rather than a regulatory hurdle. Organizations that demonstrate this respect through transparent practices and meaningful consent mechanisms will build lasting trust with stakeholders while minimizing legal and reputational risks.
As ERT continues to evolve, deployment decisions made today will shape industry standards tomorrow. By approaching these tools with appropriate caution and ethical consideration, leaders can harness the benefits of emotional insights while preserving the authentic human connections that drive their organization’s success.