A new report has revealed that HR departments in tech companies are using AI to draft termination letters.
The findings show that more than 1 in 10 HR employees at tech firms are using ChatGPT to craft employee terminations as part of wider time-saving measures. Now, HR and management experts from DLC Training share their advice to the wider industry on how to use AI ethically.
Neil Finegan, Leadership and Management Tutor at DLC Training, reflects on the current state of AI within HR:
“Like many other sectors, those in HR are feeling the pinch when it comes to time and cost-saving measures, which means it can be tempting to fall back on AI to help us in our day-to-day. While neither inherently good nor bad, it’s important to remember that AI lacks the most important element of the HR profession – humans. It’s vital that if you do decide to start incorporating AI into your role, you do so responsibly, ethically and within reason.”
Lay-offs and workforce reductions are never an easy task; however, crafting a respectful termination letter is an essential part of the employee dismissal process. While tools like ChatGPT can be a good starting point for researching the essential elements of a termination letter, they should not be relied on to do all the heavy lifting.
AI-generated text can often lack depth, as these tools have a limited understanding of the nuances of natural language. Without human oversight, it’s easy for AI-generated termination letters to read as cold and robotic, which only serves to make the process more difficult for everyone involved.
Always sense-check AI-generated text and try to add individual details on the reasons for termination and the employee’s history with the company.
AI bias in the hiring process
AI tools gather their information through large-scale data collection, which can often reflect existing human biases. This is perhaps one of the most essential things to remember for HR professionals who are keen to start using AI, particularly within the recruitment process.
Diversity, equality and inclusion remains a central focus for most professionals in the recruitment process. While it may be tempting to think that AI will bypass the unconscious and conscious biases of a human team, this is not always the case, so AI recruitment programmes must be closely monitored. In fact, in 2018 Amazon scrapped its machine learning-based recruitment programme after discovering that the algorithm was biased against women. The tool had observed patterns in resumes submitted over a 10-year period – the majority of which had been submitted by men.
Relying on AI to screen CVs can of course help to shortlist the best candidates; however, it’s important to continually review your processes and to regularly and ethically audit the results of AI screening. Always use your human judgement where possible to identify candidates with atypical work experience, or those whose work ethic and character may also make them a good fit.
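One common audit described above can be made concrete. A widely used check in recruitment is the "four-fifths rule", which flags possible adverse impact when one group's shortlisting rate falls below 80% of the highest group's rate. The sketch below is illustrative only, assuming hypothetical screening data and function names; it is not a substitute for a full, legally informed audit.

```python
def selection_rates(outcomes):
    """outcomes: list of (group, shortlisted_bool) tuples.
    Returns each group's shortlisting rate."""
    totals, passed = {}, {}
    for group, shortlisted in outcomes:
        totals[group] = totals.get(group, 0) + 1
        if shortlisted:
            passed[group] = passed.get(group, 0) + 1
    return {g: passed.get(g, 0) / totals[g] for g in totals}

def adverse_impact(outcomes, threshold=0.8):
    """Flag groups whose rate is below `threshold` times the
    highest group's rate (the four-fifths rule)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]

# Hypothetical screening results: (group, was_shortlisted)
results = [("A", True), ("A", True), ("A", False), ("A", True),
           ("B", True), ("B", False), ("B", False), ("B", False)]
print(adverse_impact(results))  # group B: rate 0.25 vs 0.75 -> flagged
```

Running a check like this on each batch of AI-screened applications gives a simple, repeatable signal for when human review of the tool's output is needed.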
Educate yourself and your team
If you wish to start incorporating AI into your day-to-day processes, it’s important first of all to educate your wider HR team on the ethical frameworks and guidelines currently in place for AI. While ever-changing, these frameworks can serve as a reference point when challenges arise with your application of AI.
Likewise, aim to provide adequate training and support to help employees understand AI and its role in the organisation. This can include circulating resources, organising workshops, or hosting Q&A sessions to address your team’s concerns or fears about the use of AI in the workplace.