- According to Brian Honan, a cybersecurity consultant and former advisor to Europol, introducing AI-powered work tracking tools like facial recognition brings a whole host of risks for companies.
- A recent report said that algorithmic systems typically used in monitoring the performance of warehouse workers have pervaded more and more industries.
- The majority of respondents in one survey said they were uncomfortable with the likes of camera monitoring or keystroke monitoring.
With many companies working from home during the pandemic, managers and employers have found themselves in the difficult position of running scattered teams away from the office.
Some have turned to technology to help, but they may be walking a dangerous path by using tools like artificial intelligence and algorithms to track employees and their work throughout the day, or even facial recognition to ensure that someone is at their desk.
A recent report by the Institute for the Future of Work, a British research and development group, said that algorithmic systems typically used in monitoring the performance of warehouse workers or delivery riders have pervaded more and more industries.
Andrew Pakes, deputy general secretary at U.K.-based trade union Prospect, told CNBC that these "digital leash" technologies have been on an upward trend for some time and that Covid-19 remote working accelerated it.
"This was an issue we were picking up before Covid but over the last year, it's grown rocket boosters as companies have turned to technology," Pakes said.
"On the one hand, technology has been really important in keeping us safe and connected whilst being at home but there's another side to it and that's the worry we're seeing around it."
Prospect has published some research into workers' attitudes to these technologies. The majority of respondents in one survey said they were uncomfortable with the likes of camera monitoring or keystroke monitoring.
This technology is catching more and more attention from critics. Microsoft faced a backlash over its "productivity score" in Microsoft 365, which allowed managers to track an employee's output. Microsoft has since rowed back on the product's features, minimizing the data collected on individuals.
PwC was criticized last year for developing a facial recognition tool for finance firms that would monitor an employee and ensure they are at their desk when they're supposed to be. A PwC spokesperson told CNBC that the tool was a "conceptual prototype."
But these types of backlashes haven't stopped others from tinkering with the technology. Fujitsu has developed an AI tool that can determine how hard someone is concentrating in an online meeting or class by analyzing muscle movements in the face.
As more and more tech like this hits the market, employers will need to be careful around what they deploy.
According to Brian Honan, a cybersecurity consultant and former advisor to Europol, introducing AI-powered work tracking tools like facial recognition or keystroke monitoring brings a whole host of risks for companies.
"Companies do have a duty of care to protect their business and they do have a legitimate interest in ensuring their business interests are taken care of, but they have to be balanced against the rights of the individual in the workplace," Honan said.
He suspects that many tools, like keystroke monitoring or programs that snap screenshots of a person's desktop, could be illegal under the EU's sweeping GDPR rules, given "all the information that these tools could be gathering as people are working," he said.
Honan added that the power of these tools is heavily weighted toward the employer and may overreach into a worker's personal space.
He said camera monitoring to verify that a person is at their desk can be particularly problematic in a work-from-home scenario: the camera could capture footage of the employee's family or housemates, he said, violating their privacy.
Beyond the regulatory risks at play, he added, the use of these technologies does little to foster a positive culture in the workplace.
"Invariably what you're saying to your employees is 'I don't trust you to do your job, what I'm paying you to do'," he said.
Pakes said that GDPR provides a good framework for employers to follow when considering any technology for managing employees, but that stricter rules specifically for the workplace are needed in the age of hybrid and remote working.
Prospect advocates for a "right to disconnect" law in the U.K., which sets a clear line for when communication between a worker and their boss ends. Pakes said such regulations are necessary for protecting workers from overreach by employers through technology. Right to disconnect laws have been passed in France and Ireland.
Separately, tighter rules in the EU around artificial intelligence are coming down the track, which will rein in how AI is used in various industries. Any employer dabbling with facial recognition will need to be wary of new obligations.
"Most of the employment laws in Europe were designed in the last century around physical harm and risk, health and safety. They weren't designed for this digital age of AI and decisions about data being taken in clouds or black boxes, which is why we very much argue that data is the new health and safety," Pakes said.
"We need to update our employment laws to keep them fit for purpose for the way AI is being used on us."