Discussions about data privacy tend to focus on the consumer-seller dynamic. What personal information do companies have a right to collect, and how should they be expected to use and care for it? But another dynamic, between employer and worker, raises even thornier questions.
For years people analytics — the science of using data to manage employees — drew on details such as age, gender, and tenure, along with performance-review ratings, for insights. But that paltry harvest limited its usefulness.
More recently, sensor technology and real-time data collection have produced bumper crops of employee information for companies. Now managers can access second-by-second feedback on what a worker is doing and, to some extent, what a worker is feeling. Data from emails, chats, and calendar systems can be analyzed alongside traditional HR data. Sensors can gather incredibly granular data on workers’ habits — everything from who speaks with whom and how often people interrupt one another to where they spend time and even their stress levels. And as ID badges and office furniture join the internet of things, the information that companies have on their workers will expand by orders of magnitude. HR departments now have the potential to know nearly everything about employees.
Already, the new measurement tools have had an immensely positive impact — when deployed correctly and ethically. Companies have used data from wearable sensors and digital communication to quantify and reduce gender bias at work, increase alertness and reduce fatigue, significantly lift performance, and lower attrition, in industries from railways to finance to quick-service restaurants. And we are just beginning to tap the potential of these new technologies.
For workers, though, the value of all this data gathering isn’t as clear. Advanced people analytics may even hinder employees’ ability to freely manage their time and experiment. The numbers might suggest, for instance, that a new way of working isn’t productive, even though it could eventually lead to long-term company gains. Worse still, analytical tools open up the risk of abuse through Tayloristic overmonitoring.
Just because you can measure something doesn’t mean you should. Workers’ advocates worry that data-based surveillance gives employers unreasonable power over employees, and they aren’t sure companies can be trusted not to lose or abuse sensitive personal information.
After all, companies’ systems are frequently breached. And it’s not a long leap from monitoring employees’ stress to using health care data to predict medical conditions and take preemptive action. Data also gives a false sense of validity. That is, it can make certain conclusions seem true (employee X is not productive because he generates 10% less output) even if there are legitimate alternative points of view (employee X is productive in a different way — by, say, reducing errors or training others).
Given this new reality, managers now face challenging questions: Should they use analytical tools that examine employees’ worktime habits to assess their performance? What data should firms have access to? Should they share their analyses with employees? Should they look at individual data? What about using data to determine the risk that an employee will develop a mental illness? Companies, lawmakers, and regulators are already starting to grapple with rules for the use of monitoring tools in the workplace.
In the meantime, managers need guidance on how to run effective and ethical people analytics programs that will avoid an employee backlash or a heavy-handed legislative response. Through my work at MIT with Sandy Pentland and in designing products and services for my own analytics company, I have identified several scientifically backed ground rules for the use of monitoring technology. I’ve seen these techniques effectively mitigate potential issues, and I’ve seen serious problems arise when they weren’t used.
In general, successful rollouts of people analytics technologies take four to six weeks. While faster implementation may be possible in some organizations, it’s important to do it right. That shows employees that management is being thoughtful about thorny ethical issues and ensures that the findings’ validity will be respected. Blowing off any one of the steps below can cause opt-in rates to plummet and undermine a program for years.
Here’s your playbook for the ethical, smart use of employee data:
Opt in. It starts with one of the simplest and oldest privacy guidelines: If you launch a program collecting new kinds of data, requiring employees to opt in to it (and leaving out all who don’t) is essential. Forcing people to give up data about themselves at work may be strictly legal in the United States and several other countries, but that is not the case globally. Regulations such as GDPR, while not explicitly focused on the workplace, do spell out restrictions that would make data collection difficult for a multinational organization.
But even in jurisdictions that permit it, coerced monitoring or requiring employees to opt out (especially if the choice is obscured by, say, being buried in the fine print during onboarding) opens many ethical and business concerns. First and foremost, it may backfire from a purely economic perspective. Groundbreaking research by Harvard Business School’s Ethan Bernstein has shown that when employees feel that everything they do is completely transparent, the result is often reduced performance. And when competition for talent is intense, workers may leave companies that compel them to give up their data. Beyond that, firms face reputational risk. For example, Amazon, Tesco, and the Daily Telegraph all experienced weeks of negative media coverage for their proposed or poorly executed monitoring efforts. Some of those programs were very well intended: The Telegraph’s was aimed at improving energy efficiency — something few employees would probably object to — through the use of desk sensors. But the media company rushed the rollout and provided little information to its employees before foisting the sensors on them. It was forced to quickly withdraw them after hard internal pushback and skewering in the media.
Setting up an opt-in program is challenging and time-intensive in the short term. The program must include strong protections for employees who choose not to participate so that they don’t feel coerced or penalized. Chief among those protections is data aggregation to prevent individuals’ behavior from being identified. But I also advise further precautions, such as consent forms and data anonymization at the source of collection (so overeager, curious-to-a-fault managers can’t snoop on the minute-by-minute activities of employees).
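To make “anonymization at the source of collection” concrete, here is a minimal sketch of how a sensor gateway might replace raw badge IDs with keyed hashes before any record ever reaches an analytics database. This is my own illustration, not the actual pipeline my company uses; the field names, the salt handling, and the record_interaction helper are hypothetical.

```python
import hashlib
import hmac
import os

# Secret salt held only by the collection service, never by analysts.
# (Illustrative: in practice this would live in a secrets manager.)
SALT = os.environ.get("COLLECTION_SALT", "change-me").encode()

def pseudonymize(badge_id: str) -> str:
    """Replace a raw badge ID with a keyed hash before storage."""
    return hmac.new(SALT, badge_id.encode(), hashlib.sha256).hexdigest()[:16]

def record_interaction(badge_a: str, badge_b: str, timestamp: str) -> dict:
    """Build the only record that leaves the sensor gateway:
    pseudonymous IDs plus event metadata, with no names or raw IDs."""
    return {
        "person_a": pseudonymize(badge_a),
        "person_b": pseudonymize(badge_b),
        "timestamp": timestamp,
    }

# Example: the stored record contains no recoverable badge numbers.
print(record_interaction("BADGE-0042", "BADGE-0137", "2018-05-14T10:32:00Z"))
```

The point of doing this at the point of collection is that even an overeager manager with database access never sees an identifiable individual.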
To design opt-in consent forms that are clear, concise, and easy to understand, companies should take their cue from institutional review boards (IRBs) at universities, which have stringent procedures for how researchers interact with human subjects. On IRB forms, researchers must clearly specify what data is collected and how it will be used. Employees should be provided with appendices that spell out the specific database tables that will be populated, so they can see exactly what kind of information will be stored. Finally, companies need to sign the forms themselves, creating legally binding contracts with employees. (For an example, see the consent form we use at my company.)
Communication and transparency. Blindly sending out consent forms to all employees and hoping for high opt-in rates isn’t a winning strategy. The rollout of ethical people analytics involves lots of communication and constant transparency from start to finish.
In some cases companies have opted to compensate workers for participating in analytics programs, either with small amounts of money or rewards such as Amazon gift cards or company T-shirts. However, in my experience this is problematic and ineffective. For one, it gives the employer specific information about who’s participating. But these incentives also typically yield no measurable increase in participation. Employees seem to feel that payment for their data means they’re signing away their right to privacy, which produces a more negative reaction. “If they have to pay me to participate, they must be taking a lot, and who knows what they’re doing with it?” seems to be the thinking.
With all programs, managers should prepare for a backlash. Emotional reactions, tough questions, and accusations are common even with well-intentioned monitoring. Expecting reasonably universal buy-in is a mistake, because employees not only need to understand exactly what’s being done but must trust managers’ assurances that the company is being honest and open. In cultures where trust or morale is low, this is a massive hurdle. Simply telling employees you will behave responsibly isn’t good enough; you have to show them with completely transparent program operations.
Often as I learn about people analytics initiatives in other organizations, I discover that companies intentionally withhold information from workers about what data’s being collected and why. Companies naively assume that these practices won’t be discovered by employees, but they often are. Doing something legal but unethical tends to incur a severe backlash. There are many examples of this in the marketplace, and typically companies that engage in unethical monitoring behaviors suffer both internal and external consequences.
Aggregation. Companies often assume that data becomes anonymous when you detach a name from it. It doesn’t. Because human behavior is unique, it’s possible to identify people in data without their names, particularly with communication network data.
Imagine Anna has a private office and a Bluetooth beacon on her ID tag that detects her precise location in the office at all times. Anna is a workaholic. If we showed data on how each worker spent time in the office without revealing anyone’s name, we would likely see that one person spent much more time there than anyone else did. That would be Anna, and she and everyone who works with her would know that beyond any doubt. That’s just a simplified example involving a single type of data. In fact, data analysis and machine learning allow us to identify individuals with less obvious data. For instance, it’s extremely easy to identify individual people through their location patterns, and semantic analysis can determine with high probability who the author of a text is, just by recognizing the author’s language habits.
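A toy example, with made-up numbers, shows how little it takes: strip the names and the pattern itself still points to the person.

```python
# Toy illustration (invented figures): "anonymized" weekly office hours
# still single out the outlier, because the behavior is identifying.
anonymous_hours = {"user_1": 41, "user_2": 39, "user_3": 44, "user_4": 71}

# Anyone who knows the team can match the extreme record to the one
# colleague known to work very long hours.
outlier = max(anonymous_hours, key=anonymous_hours.get)
print(f"{outlier} logged {anonymous_hours[outlier]} hours a week -- effectively re-identified")
```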
Company-issued cell phones are often used for location tracking, but they can be problematic. If only data associated with the office is collected, the phone effectively is just like an ID tag. In practice, however, information on employees’ whereabouts when outside the office may be logged and collected. That data, besides having quite limited business applications, is extremely sensitive and should be avoided.
Steering clear of these pitfalls isn’t difficult, and it’s actually beneficial. Beyond creating privacy risks, analyzing the behavior of individuals or singling out one person for tracking is a methodologically inferior approach to data analysis. Why? Because any one person’s data is a small, noisy sample shaped by idiosyncratic circumstances; the patterns that matter to an organization become reliable only when measured across groups.
Instead of individual data, companies should ask their analytics teams to report aggregate data: group averages or correlations. Given that companies should care about distributions of behavior and not individuals’ patterns, this practice also fits nicely with organizational needs.
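As a rough sketch of what that looks like in practice, an analytics team might enforce a minimum group size before reporting any average, so that no individual can be inferred from the output. The threshold, column names, and data below are illustrative assumptions, not a prescribed standard.

```python
import pandas as pd

MIN_GROUP_SIZE = 5  # assumption: suppress any group smaller than this

# Hypothetical pre-pseudonymized records: one row per person-week.
df = pd.DataFrame({
    "team":        ["sales", "sales", "sales", "sales", "sales", "legal", "legal"],
    "mtg_hours":   [11.0, 9.5, 14.0, 8.0, 12.5, 22.0, 18.5],
    "focus_hours": [19.0, 22.5, 15.0, 24.0, 17.5, 9.0, 12.0],
})

# Report only group-level averages, and only for groups large enough
# that no individual's behavior can be read off the aggregate.
grouped = df.groupby("team")
report = grouped.mean()[grouped.size() >= MIN_GROUP_SIZE]
print(report)
```

Here the two-person group is suppressed entirely, because an average over two people reveals nearly as much as the raw records do.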
Look beyond the numbers. No matter how sophisticated a company’s data gathering is, it will be useless if the firm doesn’t measure the right things.
For example, while it’s natural to think that the content of communication is more important to examine than communication patterns, that’s not true. At a company we advised, my team and I found that top management spent fewer than five hours a month communicating with one division. That division had more than 10,000 employees and was responsible for over 10% of the company’s revenue. Not surprisingly, it was consistently underperforming and not strategically aligned with the organization. The exact substance of the few conversations that did occur was immaterial. The bigger issue was that management rarely talked with people in the division. We could confidently predict that simply increasing management’s communication with the division would boost its performance.
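For readers who wonder what “patterns, not content” means operationally: an analysis like the one above needs only calendar metadata, such as who met whom and for how long. Here is a hedged sketch with hypothetical data and group labels of the kind of calculation involved.

```python
from datetime import datetime

# Hypothetical calendar metadata: (organizer_group, attendee_group, start, end).
# Only "who met whom, for how long" is used, never the content of the meetings.
meetings = [
    ("executive", "division_a", "2018-03-05 09:00", "2018-03-05 09:30"),
    ("executive", "division_b", "2018-03-05 10:00", "2018-03-05 12:00"),
    ("executive", "division_a", "2018-03-19 16:00", "2018-03-19 16:45"),
]

def hours_between(group_a: str, group_b: str) -> float:
    """Total meeting hours between two groups, computed from metadata alone."""
    fmt = "%Y-%m-%d %H:%M"
    return sum(
        (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600
        for org, att, start, end in meetings
        if {org, att} == {group_a, group_b}
    )

print(f"Executives and division A: {hours_between('executive', 'division_a'):.2f} hours this month")
```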
It’s also important to keep in mind that no algorithm or data set, no matter how complete or advanced, will be able to capture the entire complexity of work. And you shouldn’t try to build such an algorithm or, worse, buy into a consultant who promises one. People within the organization already have an understanding of the work’s complete scope. Casting that aside in favor of blindly following an algorithm will lead to many stupid decisions. Contextual, qualitative information helps organizations understand how to weight quantitative metrics.
I remember one case in which an engineering organization wanted to use behavioral data to improve team performance. In such situations, a metric like cohesion (group strength, gathered from chat and sensor data) is often correlated with higher performance. A pilot showed that increasing cohesion helped teams hit their key performance indicator, on-time delivery. Looking at those results alone, management thought it would roll out policies to increase cohesion across all teams. That would have been an error. After all, some teams were trying to invent radically new products. Management should expect them to miss their milestones more frequently than other teams, because their timelines are harder to estimate. Applying the cohesion algorithm there wouldn’t have been optimal. Other behaviors, like exploration (interacting more with other teams), predicted their success better. So if the company had blindly set up programs to increase cohesion across all teams, it would have reduced the performance of the ones focused on innovation.
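To give a sense of how such metrics might be computed (this is a simplification of my own, not the exact measures used in that engagement), cohesion can be proxied by the share of a team’s interactions that stay within the team, and exploration by the share that reach other teams.

```python
from collections import defaultdict

# Hypothetical interaction counts from chat and sensor data:
# (team of person A, team of person B, number of interactions observed)
interactions = [
    ("platform", "platform", 120),   # within-team
    ("platform", "research", 15),    # cross-team
    ("research", "research", 60),
    ("research", "platform", 55),
    ("research", "design",   40),
]

within = defaultdict(int)
total = defaultdict(int)
for team_a, team_b, count in interactions:
    total[team_a] += count
    if team_a == team_b:
        within[team_a] += count

for team in total:
    cohesion = within[team] / total[team]   # share of interactions kept inside the team
    exploration = 1 - cohesion              # share spent reaching other teams
    print(f"{team}: cohesion={cohesion:.2f}, exploration={exploration:.2f}")
```

The numbers themselves are meaningless without context; the point is that which of these two metrics should be encouraged depends on whether a team is executing or inventing.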
My colleagues and I encounter this problem all the time, and because of it we make sure to work with internal stakeholders to understand why one group’s data analysis won’t always apply to another’s. Their deep contextual knowledge points us to what data collection and analysis matter for each part of the organization.
The potential of people analytics to improve decision making is astounding. It can help workers like their jobs better, make more money, and spend more time with their families. In Japan, for example, monitoring technology is starting to be used to reduce the tremendous human cost of overwork. While in the past companies there would implement a workload reduction program and consider it a success if after a year no one had committed suicide, today they’re able to see immediately whether workloads are actually reduced. Rather than continuing to do something ineffective, they can quickly figure out what will improve their work environments and adjust. This literally saves lives.
However, companies have a responsibility to resist using analytics for outcomes workers may blanch at. Firms need to start putting protections in place today. If they don’t, a wave of overreactive legislation will hit them; you can already see the glimmerings of one in GDPR. That could wipe out people analytics’ enormous potential for good. So it’s incumbent upon the analytics industry and companies to advocate for strong protections too. The stakes are too high not to.
About the author: Ben Waber is the president and CEO of the organizational analytics company Humanyze and the author of People Analytics: How Social Sensing Technology Will Transform Business and What It Tells Us About the Future of Work.