The reality of workplace surveillance

Amazon warehouse workers already live with constant tracking. Algorithms log their walking speed and bathroom breaks to decide who stays and who gets fired. This isn't a futuristic warning; it is the current standard for thousands of employees. AI monitoring is moving into offices and remote setups, fundamentally changing how we are managed.

We’re seeing a shift beyond simple monitoring of computer activity. While tracking emails or website visits isn't new, AI introduces a level of granularity and automation that’s profoundly different. Tools now include productivity scoring, which assigns a numerical value to employees based on a range of metrics, emotion AI that attempts to read emotional states, and even keystroke monitoring to analyze typing patterns.

These technologies aren’t just about measuring output; they’re about predicting behavior and identifying “risks” before they materialize. The promise, from an employer’s perspective, is increased efficiency and reduced costs. But the reality, as we’re beginning to see, is a workforce under immense pressure and facing serious privacy concerns. It’s a fundamental shift in the employer-employee dynamic.

The speed of this adoption is what’s particularly concerning. Businesses are eager to implement these tools, often without fully considering the ethical or legal implications. It’s a bit like the early days of the internet – a rush to innovate without a clear understanding of the long-term consequences. Workers need to be aware of what’s happening and what rights they have.

State laws coming in 2026

The legal response to AI workplace monitoring is gaining momentum, and 2026 marks a significant turning point. While no single federal law currently governs this area, several states are taking the lead in establishing worker privacy rights. These laws are often complex and vary considerably, so staying informed is essential.

California’s Assembly Bill 2773, effective January 1, 2026, requires employers to disclose to applicants and employees if AI is being used in the hiring process or to make decisions related to their employment. This disclosure must be clear and understandable, outlining the type of AI used and how it impacts decisions. It’s a significant step towards transparency.

New York is also considering legislation similar to California’s, with a focus on automated employment decision tools. A key difference is the potential for a private right of action, allowing employees to sue employers for violations. Illinois, with its existing Biometric Information Privacy Act (BIPA), is already a challenging state for employers utilizing facial recognition or other biometric data for monitoring.

Washington State is focusing on algorithmic accountability, requiring impact assessments for high-risk AI systems used in employment. These assessments must identify and mitigate potential biases. What’s common across these states is a growing awareness that AI isn’t neutral: it can perpetuate and even amplify existing inequalities. The laws are attempting to address that.

These laws aren't just about disclosure. They're also starting to restrict what data can be collected and how it can be used. For example, some states are limiting the use of AI to analyze protected characteristics, such as race or gender. The overarching goal is to protect worker privacy and ensure fair treatment.

State AI Workplace Monitoring Laws (as of Late 2023/Early 2024)

| State | Disclosure Requirements | Data Collection Restrictions | Worker Access Rights | Penalties for Non-Compliance |
| --- | --- | --- | --- | --- |
| California | Employers must disclose if and how AI is used in decisions related to hiring, promotion, and discipline. (Effective March 2025) | Restrictions on the use of AI to discriminate based on protected characteristics. Focus on algorithmic bias. | Workers have the right to request information about the AI systems used and the data they contain about the worker. | Violations can lead to civil penalties and potential legal action. |
| New York | Employers using 'automated employment decision tools' (AEDTs) must provide notice to candidates and employees. (Effective January 1, 2024) | Requires bias audits of AEDTs annually. Focus on disparate impact. | Workers can request information about how the AEDT works and obtain a summary of the assessment results. | Fines up to $500 per violation, plus potential legal action. |
| Illinois | The Illinois Artificial Intelligence Video Recording Act regulates the use of AI-powered video surveillance in the workplace. | Requires clear and conspicuous notice if AI is used for biometric data collection or video monitoring. | Workers have the right to know when they are being recorded and how the data is being used. | Allows individuals to sue for violations, including unauthorized collection or disclosure of biometric data. |
| Maryland | Requires employers to disclose the use of automated decision-making tools in employment decisions. | Focuses on transparency regarding the types of data collected and the purpose of the tool. | Workers can request an explanation of the decision made by the automated tool. | Penalties include fines and potential legal action. |
| Connecticut | Requires employers to disclose the use of AI in certain employment decisions. | Focuses on algorithmic transparency and fairness. | Workers have the right to access information about the AI systems used in decision-making. | Penalties for non-compliance are still being defined. |
| Washington | Requires employers to provide notice before using automated employment decision systems. | Focuses on preventing discriminatory outcomes. | Employees have the right to request information about the automated system’s decision-making process. | Potential for legal action and damages for violations. |

Illustrative comparison based on the article research brief. Verify the current statutory text, effective dates, and requirements in official sources before relying on it.

Productivity Scoring: What’s Being Tracked?

Productivity scoring tools are arguably the most widespread form of AI workplace monitoring. They work by collecting data on a vast array of metrics: keystrokes, email response times, meeting attendance, even the length of bathroom breaks, as we’ve seen with Amazon. This data is then fed into an algorithm that assigns a score, ostensibly reflecting an employee’s productivity.
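Stripped of vendor branding, most of these tools boil down to a weighted sum over tracked metrics. The sketch below is a hypothetical illustration; the metric names and weights are invented for this example, not taken from any real product:

```python
# Hypothetical productivity score: a weighted sum of tracked metrics.
# Metric names and weights are illustrative, not any vendor's actual model.

WEIGHTS = {
    "keystrokes_per_min": 0.4,
    "email_response_min": -0.3,   # slower replies lower the score
    "meeting_attendance": 0.2,
    "break_minutes": -0.1,        # time away from the desk is penalized
}

def productivity_score(metrics: dict) -> float:
    """Combine raw activity metrics into a single score via fixed weights."""
    return sum(WEIGHTS[name] * value for name, value in metrics.items())

worker = {
    "keystrokes_per_min": 60,
    "email_response_min": 15,
    "meeting_attendance": 8,
    "break_minutes": 45,
}
print(round(productivity_score(worker), 1))  # prints 16.6
```

Notice that nothing in the formula measures the quality or value of the work, which is exactly why a fast typist can outscore a more thoughtful colleague.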

These metrics rarely capture actual value: a fast typist isn’t always a productive one. A 2024 CNBC report found that 45% of workers say constant surveillance harms their mental health, driving spikes in stress and anxiety.

This system can incentivize presenteeism (showing up and appearing busy) over actual results. Employees may feel pressured to prioritize metrics over meaningful work, leading to burnout and decreased job satisfaction. It creates a culture of surveillance and distrust. The focus shifts from doing good work to looking good on the score sheet.

The potential for bias is also significant. Algorithms are trained on data, and if that data reflects existing biases, the algorithm will perpetuate them. For example, an algorithm might unfairly penalize employees who take more breaks due to medical conditions or caregiving responsibilities. It's a serious concern that needs to be addressed.
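A toy example makes the break-time problem concrete. The two hypothetical workers below produce identical output; only the penalty applied to break minutes, which may reflect medical or caregiving needs, separates their scores. The weight is invented for illustration:

```python
# Hypothetical: two workers with identical output, differing only in
# break time (e.g. medical or caregiving needs). The penalty term alone
# creates the score gap, even though neither produced more work.

BREAK_PENALTY = -0.1  # illustrative weight, not a real vendor's value

def score(output_units: float, break_minutes: float) -> float:
    """Output plus a flat penalty for minutes spent on breaks."""
    return output_units + BREAK_PENALTY * break_minutes

alex = score(output_units=50, break_minutes=20)  # 48.0
sam = score(output_units=50, break_minutes=90)   # 41.0: medically necessary breaks
print(alex - sam)  # prints 7.0: the entire gap comes from break time
```

An algorithm like this never sees the reason for the breaks, so it silently converts a health condition into a lower rating.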

  1. Tracked metrics include keystroke speed, email response times, and how long you spend in specific apps.
  2. Potential biases: Algorithms trained on biased data can discriminate against certain groups of employees.
  3. Negative impacts: Increased stress, anxiety, burnout, and decreased job satisfaction.

Is Your Employer Using AI-Powered Workplace Monitoring?

  • Have you noticed a significant increase in the amount of data your employer collects about your work habits?
  • Are you being evaluated based on metrics you don't fully understand, or that seem unrelated to your job description?
  • Has your employer implemented new software that tracks your keystrokes, mouse movements, or application usage?
  • Are you receiving automated feedback or 'coaching' based on your activity data, potentially without direct human oversight?
  • Has your employer informed you about the specific types of AI monitoring being used and the purpose of that monitoring?
  • Do you have access to the data collected about you, and an opportunity to correct any inaccuracies?
  • Has your employer explained how the data collected through AI monitoring will be used, and who will have access to it?
You've taken a crucial first step in understanding your workplace privacy. If you answered 'yes' to several of these questions, it’s important to research your rights under the new 2026 AI workplace monitoring laws in your state and consider consulting with an employment attorney.

The rise of emotion AI

Emotion AI, also known as affective computing, attempts to identify employees’ emotional states using facial recognition, voice analysis, and even physiological sensors. The idea is to detect signs of stress, frustration, or disengagement. It’s a deeply unsettling technology with serious privacy and ethical implications.

The accuracy of emotion AI is highly questionable. Studies have shown that these systems are often unreliable and can misinterpret facial expressions and vocal cues. A forced smile or a moment of concentration could be misconstrued as disengagement, leading to unfair evaluations and potential disciplinary action.

The legality of using emotion AI to make employment decisions is also uncertain. Several legal challenges have been filed, arguing that these systems violate privacy laws and discriminate against individuals with mental health conditions. It’s a rapidly evolving legal landscape, and the outcome of these cases could have significant implications.

Beyond the legal concerns, there are fundamental ethical questions. Is it appropriate for employers to monitor and analyze employees’ emotions? Does this create a hostile work environment? This technology treads into very sensitive territory and raises serious concerns about the future of work.

  • Companies use facial recognition, voice analysis, and heart rate monitors to guess how you feel.
  • Accuracy concerns: Emotion AI systems are often unreliable and prone to misinterpretation.
  • Ethical concerns: Invasion of privacy, potential for discrimination, creation of a hostile work environment.

AI Workplace Monitoring: Your Rights

Your Rights: Accessing and Correcting Your Data

As AI monitoring becomes more prevalent, understanding your rights regarding your data is crucial. Several of the new state laws grant employees the right to access data collected about them by AI systems. This means you can request a copy of the information the employer has gathered, including productivity scores, emotion analysis results, and any other data used for evaluation.

The right to correct inaccurate data is also emerging, although it’s not yet universally guaranteed. If you believe the data collected about you is incorrect or biased, you should have the opportunity to challenge it and request a correction. The procedures for making these requests vary by state and employer.

Typically, you’ll need to submit a written request to your employer’s HR department or designated privacy officer. Be specific about the data you’re requesting and the reason for your request. Employers are generally required to respond within a reasonable timeframe, often 30 days. What happens if they refuse?

If your employer refuses to provide access to your data or correct inaccuracies, you may have legal recourse. Depending on the state, you may be able to file a complaint with a regulatory agency or pursue legal action. It's always a good idea to consult with an attorney specializing in employment law.

How to Request Your Workplace Data Under New 2026 AI Monitoring Laws

1. Understand Your Rights: The 2026 AI Monitoring Laws

As of 2026, several states have enacted legislation regarding employer use of Artificial Intelligence (AI) for workplace monitoring. These laws generally grant employees the right to know what data is being collected about them, how it’s being collected (e.g., through keystroke logging, video surveillance, or analysis of communication data), and why it’s being collected. Familiarize yourself with the specific laws in your state, as requirements vary. Resources like your state’s Department of Labor website and Weary Worker can provide detailed information.

2. Draft a Formal Request Letter

Begin by drafting a formal, written request to your employer. This letter should clearly state your intention to access the personal data collected about you through AI-powered monitoring systems. Be specific: request information about the types of data collected, the methods used for collection, the purpose of the data collection, and how long the data is retained. Include your full name, employee ID (if applicable), and contact information. Keep the tone professional and factual.
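For illustration only, the elements listed in this step can be assembled from a simple template. The wording below is a hypothetical sketch, not legal language; adapt it to your situation and have it reviewed before sending:

```python
# Illustrative data-request letter built from the elements described above.
# All wording, names, and fields are placeholders, not legal advice.
from string import Template

LETTER = Template("""\
Dear $recipient,

I am writing to request access to the personal data your AI-powered
monitoring systems have collected about me. Specifically, please provide:

  - the types of data collected,
  - the methods used to collect it,
  - the purpose of the data collection, and
  - how long the data is retained.

Name: $name
Employee ID: $employee_id
Contact: $contact

Sincerely,
$name
""")

print(LETTER.substitute(
    recipient="Human Resources",
    name="Jordan Smith",          # placeholder
    employee_id="12345",          # placeholder
    contact="jordan.smith@example.com",
))
```

Keeping the letter as a template also makes it easy to reuse the same factual, professional wording if you need to resend or escalate the request.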

3. Identify the Appropriate Contact

Determine the correct person or department to send your request to. This may be your Human Resources department, your direct manager (though HR is generally preferred), or a designated privacy officer if your company has one. If you are unsure, start with HR. Your employee handbook or company intranet may also list the appropriate contact for data requests. Sending it to the wrong person could delay the process.

4. Send the Request and Retain Proof of Delivery

Send your request via a method that provides proof of delivery, such as certified mail with return receipt requested, or through an internal company system that logs delivery confirmation (e.g., a secure messaging platform). Keep a copy of the request letter and the proof of delivery for your records. This documentation is crucial if you need to escalate the issue.

5. Follow Up on Your Request

The 2026 laws often specify a timeframe within which employers must respond to data requests (e.g., 30 days). If you haven’t received a response within the legally mandated timeframe, follow up with the contact person. A polite email or phone call reiterating your request and referencing your original letter is appropriate. Document the date and details of your follow-up attempts.
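A quick date calculation tells you when a follow-up is warranted. The 30-day window below is an assumption for illustration; substitute the actual deadline from your state's statute:

```python
# Compute the follow-up date for a data request. The 30-day default is
# an example only; statutory response deadlines vary by state.
from datetime import date, timedelta

def follow_up_date(sent: date, window_days: int = 30) -> date:
    """Date after which a non-response exceeds the assumed window."""
    return sent + timedelta(days=window_days)

print(follow_up_date(date(2026, 1, 15)))  # prints 2026-02-14
```

Logging the send date and the computed deadline alongside your proof of delivery keeps the paper trail complete if you later escalate.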

6. Review the Provided Data

Upon receiving the data, carefully review it to ensure it's complete and understandable. If the information is unclear or you suspect something is missing, note those concerns. The employer is generally required to provide the data in a reasonably accessible format.

7. Consider Legal Counsel if Your Request is Denied or Incomplete

If your request is denied, or if the data provided is insufficient or raises further concerns, consult with an attorney specializing in employment law and data privacy. They can advise you on your legal options, which may include filing a complaint with the appropriate state agency or pursuing legal action. A lawyer can help you understand your rights and navigate the complexities of these new laws.

Employers have a legitimate interest in monitoring employee productivity and ensuring compliance with company policies. However, that interest must be balanced against employees’ privacy rights. Generally, employers can monitor employee activity on company-owned devices and networks, but they must be transparent about their monitoring practices.

What crosses the line? Collecting data on employees’ personal lives, using AI to discriminate against protected groups, and failing to disclose the use of AI monitoring are all potentially illegal. Employers should also avoid using emotion AI to make employment decisions without a clear legal basis.

The key is to implement AI monitoring systems responsibly and ethically. This includes providing clear notice to employees, limiting data collection to what’s necessary, ensuring data security, and providing opportunities for employees to challenge inaccurate data. It’s not just about avoiding legal trouble; it’s about building trust and maintaining a positive work environment.

Employers should also be aware of the potential legal risks of violating worker privacy rights. These risks include lawsuits, regulatory fines, and reputational damage. A proactive approach to compliance is essential. Ignoring these laws isn’t just unethical; it’s bad for business.

Transparency is paramount. Employers should clearly articulate the purpose of any AI monitoring, the types of data being collected, and how that data will be used. A well-defined AI monitoring policy can help mitigate legal risks and foster a more trusting relationship with employees.

Does your employer currently use AI to monitor your work?

As new AI workplace monitoring laws take shape for 2026, we want to hear from you. Understanding how widespread AI monitoring is right now helps us all stay informed about the privacy rights that matter most to workers. Vote below and share your experience!