AI & HR Data Security: The Pros and Cons


The use of AI in HR has revolutionised the way HR departments improve efficiency, streamline processes, make informed decisions, and strengthen data security, all while personalising the employee experience.

However, HR is also responsible for handling significant amounts of sensitive people data, and using AI to manage it can pose a security risk: AI cannot contextualise information or be held accountable for mistakes in the way a human can.

Recent research by PwC revealed that 85% of employees are concerned about the security of their personal data as the use of AI in HR becomes more widespread.

It’s therefore critical that HR teams using AI rigorously safeguard employee data, strengthen HR data privacy, and comply with data protection laws.


AI security risks

The biggest concern for HR when using AI to handle sensitive data is privacy and security, with data breaches and the misuse of personal data being very real risks.

“Many AI systems are poorly trained, which can lead to sensitive data being mishandled or inadequately anonymised and protected,” comments Chris Stouff, chief security officer at cybersecurity specialists Armor. “This can result in information being exposed or misused, violating employee data protection regulations, such as the General Data Protection Regulation, and damaging employee trust.”

Another key security risk is unintentional bias, he adds. “AI systems learn from the training data put into them, which means incomplete or biased datasets can severely hamper the performance and reliability of AI models, leading to skewed outcomes and inaccurate predictions.”


Organisations also need to be aware that AI systems may carry out inadequate machine unlearning or data destruction, failing to fully erase data they were asked to forget.

“If an AI system does not properly remove sensitive data, that information remains vulnerable to unauthorized access and security breaches,” says Stouff. “It may also mean that privacy regulations or organisational policies have been broken, which could lead to prosecution or financial penalties.”

Data poisoning is also a risk as AI technology continues to develop and more companies incorporate it into the workplace, remarks Aidan Cramer, CEO and founder of AiApply. “This is a process where cybercriminals feed misleading or false information into the training datasets that AI systems rely on to work effectively. This can influence the decisions the software makes, allowing hackers to steer the AI towards choices that benefit them, such as collecting sensitive information, potentially leading to identity theft.”


Minimise the risks

There are several steps that HR leaders can take to mitigate AI risks, such as training staff, researching thoroughly before using AI within the business, and putting robust security measures and data protection policies into place.

“If you’re using AI within your business, it’s very important that proper training and awareness is provided to employees who will be using it on a regular basis,” advises Aidan Cramer. “A thorough training session will help employees understand exactly how to use AI systems, mitigating the risk of misuse and mistakes being made when handling sensitive data. It may also help employees identify when AI systems aren’t working properly, which could help flag any potential security breaches much quicker and ensure compliance with data privacy laws.”

Adam Biddlecombe, AI expert at AutoGPT and Mindstream, adds: “Organisations can enforce strict data encryption and access controls, regularly audit AI algorithms for biases, provide transparent explanations for AI-driven decisions, and prioritise employee consent and data protection with regulations such as the EU’s GDPR in all AI initiatives.”
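The encryption and access controls Biddlecombe describes can be illustrated with a minimal sketch. This assumes the third-party `cryptography` package is available, and the role names and access policy are purely hypothetical examples, not a prescribed configuration:

```python
from cryptography.fernet import Fernet

# Hypothetical role list for illustration only -- real policies would
# come from an identity and access management system.
AUTHORISED_ROLES = {"hr_admin", "data_protection_officer"}

key = Fernet.generate_key()   # in practice, keep this in a secrets manager
cipher = Fernet(key)

def encrypt_field(value: str) -> bytes:
    """Encrypt a sensitive HR field (e.g. salary, home address) at rest."""
    return cipher.encrypt(value.encode("utf-8"))

def decrypt_field(token: bytes, role: str) -> str:
    """Decrypt only for roles on the access-control list."""
    if role not in AUTHORISED_ROLES:
        raise PermissionError(f"role '{role}' may not read this field")
    return cipher.decrypt(token).decode("utf-8")

token = encrypt_field("48000")
print(decrypt_field(token, "hr_admin"))   # an authorised role recovers the value
```

The point of the sketch is that the stored record only ever holds the encrypted token, so a leaked database extract does not expose the underlying value, and the decryption path is gated by an explicit role check.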

HR teams must also be disciplined in the amount of personal data they input and store, says Stouff. “Collect only the data required and, where possible, anonymize data to enhance employee data protection and protect employee identities in the event of a breach.”
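Data minimisation of this kind can be sketched in a few lines of Python using only the standard library. The field names and the salted-hash pseudonym are illustrative assumptions; note that salted hashing is pseudonymisation rather than full anonymisation under the GDPR:

```python
import hashlib

def minimise_record(record: dict, needed_fields: set, salt: str) -> dict:
    """Keep only the fields a process actually needs, and replace the
    direct identifier with a salted hash so individuals cannot easily be
    re-identified from a leaked extract."""
    pseudonym = hashlib.sha256((salt + record["employee_id"]).encode()).hexdigest()[:12]
    slim = {k: v for k, v in record.items() if k in needed_fields}
    slim["pseudonym"] = pseudonym
    return slim

employee = {
    "employee_id": "E1042",
    "name": "A. Example",
    "home_address": "1 Sample Street",
    "department": "Finance",
    "absence_days": 3,
}

# A hypothetical absence-analysis task needs only department and absence figures;
# name and address never leave the system of record.
print(minimise_record(employee, {"department", "absence_days"}, salt="rotate-me"))
```

If a breach did occur, the exposed extract would contain no direct identifiers, only the working fields and a pseudonym that is useless without the salt.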


The flip side

On the other hand, using AI can also help organisations to improve their data security through, for example, threat detection, real-time alerts, and the reduction of human error.

“Manual entry of data in HR can run the risk of human error with incorrect inputs, which could create security vulnerabilities in the future,” comments Cramer. “Incorporating a good quality AI system will help HR teams to automate these manual tasks, ensuring that data is entered correctly and reducing security risks.”

He adds: “As AI systems continue to develop, so too does their ability to detect threats and anomalies in the inputted data. Detecting anomalies in data naturally has practical benefits, allowing HR teams to address any wrongly inputted data in their systems. It could help identify potential security threats too, allowing teams to act fast to prevent any potential data leaks.”
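The kind of anomaly detection Cramer refers to can be as simple as a statistical outlier check. This standard-library sketch flags values far from the mean, a crude stand-in for the checks a real system might run over newly entered HR data; the salary figures and threshold are invented for illustration:

```python
from statistics import mean, stdev

def flag_anomalies(values, threshold=2.0):
    """Return the indices of entries more than `threshold` standard
    deviations from the mean. The threshold is tunable; production
    systems would use far more sophisticated models."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Monthly salaries with one likely typo (an extra zero on the fifth entry).
salaries = [2800, 2950, 3100, 2875, 29500, 3050, 2990, 3020]
print(flag_anomalies(salaries))   # flags the mistyped entry at index 4
```

A flagged index could then be routed to a human for review, matching the screen-then-escalate pattern described below.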

In addition, AI can significantly enhance HR data security protocols and eliminate threats by detecting patterns in very large sets of data, adds Leon Gordon, founder and CEO of Onyx Data.

“Also, AI can be leveraged as an enhancement to human processes by initially screening and then passing on the findings to a human counterpart. AI can be programmed based on rules, which have little to no margin of error and do not get fatigued over long periods of time like humans do.”


What does the future hold?

Looking ahead, HR departments and the wider business may be wondering how they can realistically use AI to manage their data in future, and what impact this might have on employee experience and workplace norms.

“There’s no doubt that datasets within businesses and their HR departments will continue to grow, becoming a real effort to manage, classify, and control,” says Stouff. “In the future, AI can help businesses to scale, providing support to reduce the human input required for routine and everyday tasks. These include automated data entry and validation, document filing and management, data cleansing and deduplication, and record retention adherence.”

Rolling out AI systems that are compliant with GDPR, and have high levels of security in place, will help businesses store sensitive information more securely in the future, adds Aidan Cramer. “AI can monitor data and spot any potential data modifications or login attempts in real time, allowing businesses to intervene quickly and prevent any further threats. Implementing multi-factor authentication can further enhance security measures.”

The increased use of AI could also see a more personalised approach from HR teams, helped by the data collected from employees, he adds. “This could allow HR teams to tailor training programmes and development plans specific to the employee to ensure they continue to grow within the business, rather than a ‘one size fits all’ approach.”

AI will also facilitate more self-service actions, enabling employees to carry out HR-related tasks independently, therefore enhancing the employee experience.

“Employees will be able to access and update their personal information, view pay stubs, manage benefits, manage their own work schedules, request time off, and swap shifts with minimal HR intervention for greater efficiency and a better employee experience,” remarks Chris Stouff.

Yet, when using AI to manage people data, there’s one thing worth remembering, he adds. “Whilst organisations can transfer a lot of the risk associated with HR data protection to managed providers, monitoring services and insurers, they can never transfer the accountability for HR data privacy and security for the data generated.

“They are responsible and liable for that data from cradle to grave. Therefore, they must understand that they are using AI as a tool that interfaces between themselves and that data – it does not absolve them of the risks and responsibilities.”