Generative AI has caused widespread insecurity because it touches one of the most critical aspects of our daily lives – our jobs. Just as the world was recovering from COVID-era layoffs, OpenAI launched its disruptive GPT technology in late 2022, which has since led to many new businesses, more layoffs, and a wave of AI influencers who claim to know everything about it.
This is why the adoption of Generative AI in the human resource management and recruitment space needs special supervision. There are many manual and analytical tasks that Generative AI tools for HR can potentially automate. We have covered 10 Generative AI use cases for HR and detailed at least 5 tasks that these GPTs can help automate.
However, we must also consider how such automation affects the lives of ordinary people, especially those who are not equipped to access GPTs.
In this article, we cover four such critical risks of adopting Generative AI for HR operations, along with mitigation strategies.
Key takeaways:
- 4 risks of adopting Generative AI for HR and recruitment operations
- A risk assessment for each – including mitigation strategies, assigned responsibilities, contingency plans, employee communication, and ongoing monitoring for further risks.
Identifying risks associated with Generative AI adoption for HR and recruitment operations
Generative AI will help many organizations save on hiring costs and improve the quality of the candidates they hire. But it is a rapidly evolving technology, with notable improvements arriving every 3-6 months. It is not perfect yet, and companies must weigh its limitations and risks before adopting it at organizational scale.
Here’s a simple framework for risk assessment which we will use to gauge various risks of Generative AI adoption for the HR industry:
- Define: define and describe the potential risk
- Impact: assess the impact of the defined risk
- Ownership: assign responsibility for the risk to help delegate and form teams
- Response: define measures HR teams can take to prevent or mitigate the risk
- Monitor: continuously monitor the defined risks and watch for emerging consequences
Now, let us understand four key risks of using Generative AI for HR and people operations using the above risk assessment framework:
1. Using historical data sets to train LLMs, which may amplify historical bias
Bias in – Bias out.
That is the big problem with LLMs trained on historical data. It is very important to build and update LLMs so that ethical considerations are reflected in their output. This matters even more for HR professionals, who may unknowingly make decisions that are out of line with modern, global workplace standards.
To learn more about how bias enters AI systems, see the report titled 'Discriminating Systems: Gender, Race, and Power in AI'.

While AI hallucination incidents are declining as the technology improves, it is important that your organization is sensitized to and trained on using LLMs, so it can detect bias and keep humans in the decision-making loop.
Impact of using biased LLMs in recruitment operations:
- Losing out on bright candidates: when LLMs are trained on datasets with historical bias, that bias gets reinforced in the algorithm. Your HR teams may come to depend on skewed candidate screening and unknowingly miss out on great talent. Left uncorrected, such biases compound against underrepresented groups, causing systematic discrimination without the knowledge of your company’s leadership.
- Reduced diversity in the candidate pool: HR teams at rapidly scaling organizations often use software to curate candidate pools for ad-hoc hiring. If that software is built on a biased LLM, it will reduce diversity in your candidate pool. Even if you have good diversity hiring practices on paper, biased tooling can undermine their execution.
- Cost of discrimination lawsuits: candidates are legally protected and can sue organizations that reject them on the basis of gender or racial discrimination. Depending on the country, your organization can face serious legal notices, resulting in compensation payouts and litigation costs.
Mitigation response for ensuring bias-free hiring using Generative AI tools:
- Conduct LLM algorithm audits to detect bias: implement regular audit checks to detect anomalies in the candidate screening process.
- Manually collect data sets: using Generative AI itself to create datasets for training LLMs is not a wise option. Use zero-party or first-party datasets and manually curate them to ensure adequate societal diversity.
- Maintain transparency with stakeholders and impacted candidates: your employees, candidate pool, HR teams, and other organizational teams must be aware of the use of LLMs in making hiring choices. Ensure adequate training and transparency are provided before scaling your Generative AI usage.
Assigning responsibility for mitigating the use of biased LLMs in hiring:
Here are three key roles that should be assigned the responsibility of mitigating the risk of using biased LLMs:
- Data scientists or AI specialists: help with understanding the system’s test cases, fine-tuning the LLMs to suit the organization’s hiring practices, cleaning data sets, and conducting algorithm audits to detect bias.
- Legal advisors: train your in-house legal team to work on reducing the risks of using LLMs and define a framework for the same. Your company can also hire specialists to help draft AI usage policies and design ethical guidelines.
- HR Technology specialists: conduct training sessions for employees and HR teams to detect bias, mitigate the same, and communicate discrepancies to relevant AI specialists for fixing.
Contingency plans in case of biased LLM usage in hiring operations:
- Flagging of biased results: your chosen Generative AI software should have provisions to flag any potential bias it discovers and to act only after a human reviews the candidate screening decision made by the AI. You can also make the workflow adaptive, so that when a biased decision is flagged, the candidate is immediately offered an alternative hiring route (a minimal sketch of such a gate follows this list).
- Alternative hiring method: Your company should have ready-to-use alternative software which includes data integrations and API availability to transfer existing data. You should also inform your candidate pool about any changes in hiring criteria to ensure transparency. Or, you can deploy additional evaluation procedures, such as manual screenings or human-in-the-loop reviews, to validate or override biased AI-generated assessments.
- Notify impacted candidates: on detecting bias, have your AI specialists identify the impacted candidates and notify them about the issue and how you are working to resolve it. Revise your hiring practices for affected candidates and run a separate selection process to ensure fair play.
- Provisions to re-train LLMs: on detecting biased hiring, update your training data on priority to include diverse datasets and prevent the bias from perpetuating in subsequent hiring cycles.
- Set up an emergency response team: your legal, HR, and leadership teams should form a suitable emergency response team to help mitigate biased hiring incidents. Activate collaborative efforts to swiftly address biased outputs, implement corrective measures, and ensure compliance with legal and ethical standards.
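As a rough illustration of the flagging-plus-human-review idea above, here is a minimal sketch of such a gate in Python. The field names, the `route_to_manual_track` fallback, and the review flow are assumptions for illustration, not features of any particular Generative AI product.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScreeningDecision:
    candidate_id: str
    ai_recommendation: str                 # e.g. "advance" or "reject"
    bias_flagged: bool = False             # set by your audit / monitoring step
    human_decision: Optional[str] = None   # filled in by a reviewer if flagged

def resolve(d: ScreeningDecision) -> str:
    """Let the AI recommendation stand only when no bias flag is raised;
    otherwise require a recorded human decision or fall back to the manual track."""
    if not d.bias_flagged:
        return d.ai_recommendation
    if d.human_decision is not None:
        return d.human_decision
    return "route_to_manual_track"  # alternative hiring method from the plan above

print(resolve(ScreeningDecision("c-101", "reject", bias_flagged=True)))
# -> 'route_to_manual_track' until a reviewer records a decision
```

The key design choice is that a flagged decision never becomes final on the AI’s recommendation alone: it either receives a recorded human decision or falls back to the alternative hiring method described above.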
Periodically review the prepared contingency plans and consistently provide a bias-free hiring experience for your candidates.
Monitoring the risk of bias while using Generative AI for the hiring process:
To monitor occurrences of biased hiring, set up reporting and analytics for it. Design metrics for bias detection and set goals for each metric (one such metric is sketched below). Discuss these reports with HR managers, leadership, and AI specialists to identify issues and align your hiring process. Continuously refine monitoring strategies based on insights from ongoing assessments.
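For instance, a common starting metric is the selection-rate ratio between groups, often checked against the 'four-fifths' rule of thumb from US employment guidance. The sketch below is a minimal, assumed example of computing it from exported screening results; your actual data schema and thresholds will differ.

```python
from collections import defaultdict

def selection_rates(candidates):
    """Compute the share of candidates advanced by the AI screen, per group.

    `candidates` is assumed to be a list of dicts like
    {"group": "female", "advanced": True} exported from your screening tool.
    """
    counts = defaultdict(lambda: {"total": 0, "advanced": 0})
    for c in candidates:
        counts[c["group"]]["total"] += 1
        counts[c["group"]]["advanced"] += int(c["advanced"])
    return {g: v["advanced"] / v["total"] for g, v in counts.items() if v["total"]}

def disparate_impact_flags(candidates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the illustrative 'four-fifths' cut-off)."""
    rates = selection_rates(candidates)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items() if rate / best < threshold}

# Example: any flagged group should trigger human review and a deeper audit.
sample = [
    {"group": "male", "advanced": True}, {"group": "male", "advanced": True},
    {"group": "male", "advanced": False}, {"group": "female", "advanced": True},
    {"group": "female", "advanced": False}, {"group": "female", "advanced": False},
]
print(disparate_impact_flags(sample))  # e.g. {'female': 0.5}
```

Any group falling below the threshold should be treated as a signal for human review and a deeper audit, not as a verdict on its own.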
2. Cost of implementing Generative AI-enabled recruitment processes
Generative AI is not a cheap technology, and it will take time for the full-scale commercialization of LLMs. Given the complexity and evolving nature of Generative AI technologies, the cost of implementation can vary. Organizations that opt for advanced AI systems might encounter higher initial and ongoing expenses. The financial impact encompasses not just initial setup costs but also ongoing expenses related to data management, AI model training, infrastructure, and potential staffing requirements.
Here are six key risk factors to consider in terms of the cost of adopting Generative AI for HR:
- Licensing costs: many of these technologies are either proprietary or commercially licensed, and such services are usually charged on a usage-fee basis. If you do not calculate expected usage or consult an expert, you may overshoot your bills (a rough calculation sketch follows this list).
- Data set purchase costs: purchasing and managing data sets adds costs, since you need to train the LLMs to your business needs. You must also ensure the data sets are unbiased and regularly audited so the LLM output stays aligned with your requirements.
- Employee training costs: hiring experts to train your employees in using Generative AI tools adds cost. This also means implementing measures to onboard candidates to your updated Generative AI hiring workflows.
- Infrastructure maintenance costs: once your company sets up the Generative AI-enabled recruitment and employee management workflows, your HR teams have to maintain their uptime, secure data, repair hardware if required, and update AI systems periodically.
- Scalability costs: if your AI systems are not optimized for costs, it will lead to a surge in expenses as your usage increases.
- Legal costs: there is a high chance of lawsuit exposure, especially if you train your models on copyrighted data or face lawsuits from candidates over unfair AI-driven hiring. Failing to keep up with future changes in government AI regulations can also result in legal expenses.
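To make the licensing-cost point concrete, here is a back-of-the-envelope sketch for estimating monthly usage fees under token-based pricing. The prices, token counts, and screening volumes are placeholder assumptions; substitute your vendor’s actual rate card.

```python
def monthly_llm_cost(screens_per_month, tokens_in_per_screen, tokens_out_per_screen,
                     price_in_per_1k, price_out_per_1k):
    """Rough monthly spend for a usage-fee (per-token) pricing model."""
    cost_in = screens_per_month * tokens_in_per_screen / 1000 * price_in_per_1k
    cost_out = screens_per_month * tokens_out_per_screen / 1000 * price_out_per_1k
    return cost_in + cost_out

# Placeholder numbers: 2,000 resume screens per month, ~3k input and ~500 output
# tokens each, at hypothetical rates of $0.01 / $0.03 per 1k tokens.
estimate = monthly_llm_cost(2000, 3000, 500, 0.01, 0.03)
print(f"Estimated monthly usage fee: ${estimate:,.2f}")  # -> $90.00
```

Even a rough estimate like this, refreshed as hiring volumes change, makes budget overshoots far less likely.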
Impact of adopting Generative AI technology for HR on the overall company environment
The major risk of adopting an evolving technology is that its impact on expenses is hard to foresee. Your company may overshoot its budget or find that Generative AI tools are not delivering the anticipated benefit. You may lose not only money during the adoption and off-boarding process but also valuable productive time for your employees.
Here’s an infographic by Gartner about how you can understand the value and ROI of your Generative AI use case and the cost + risk + complexity associated with it.

Mitigation response – implementing cost control measures for adopting Generative AI-enabled HR infrastructure:
- Conduct a cost-benefit analysis: hire an AI consultant to understand how you can truly benefit from adopting AI technology for your daily operations. Calculate usage beforehand based on your current employee strength and future hiring requirements.
- Negotiate with vendors: before implementing software, conduct a thorough vendor selection process. Negotiate prices and try for flexible payment plans to control costs.
- Audits for AI expenses: regular auditing will help notice any significant usage spikes and help you take measures to control the same. Adopt a thorough audit process that includes various factors like legal, technology, expenses, employee productivity, data management, etc.
- Perform scenario-based planning: develop scenarios to anticipate potential cost escalations and devise preemptive strategies to mitigate them.
Assigning responsibility to mitigate cost escalations while adopting Generative AI:
- Leadership: your management team should be involved in knowing the complete cost of adopting Generative AI solutions for hiring.
- Vendor oversight: keep your AI technology vendor’s customer support informed about your implementation results so that they can help strategize cost management.
- Rapid response team: prepare a multi-disciplinary team of AI specialists, HR professionals, managers, and IT staff to manage costs and mitigate unforeseen cost escalations.
Contingency plans in case of unmonitored cost escalations in Generative AI adoption for HR operations:
- Prepare flexible budgets: assign separate budgets for the various stages of AI adoption across the hiring and employee management process. This makes it easier to detect escalating costs and to adapt when budgets are reassessed.
- Keep alternative workflows ready: do not abandon your previously followed HR operations workflows. Keep them as a backup in case you need to pause AI usage for budget reasons.
- Design escalation protocols: train your employees to follow certain protocols to help escalate any cost spikes observed and have a plan of action ready.
- Implement temporary cost control measures: implement a temporary halt or controlled spending measures to prevent further escalations until the cause is identified and addressed.
Thorough budget planning before adopting Generative AI solutions helps avoid unmonitored expenses.
Monitoring the risk of cost escalation while using Generative AI for HR operations:
Monitoring is key to catching cost upticks when implementing a Generative AI solution for HR workflows. Here are some ways to do it:
- Create KPIs and monitor them: when implementing Generative AI tools for HR, design KPIs for each use case and build a dashboard that tracks them. Monitor activity across each KPI and update your cost-versus-benefit analysis to truly measure the return on investment of AI digitization (a minimal spend-tracking sketch follows this list).
- Set up an approval hierarchy: assign approval levels and vendor checks whenever you make additions, removals, or changes to Generative AI-enabled HR workflows. This lets multiple people spot potential mistakes and fix them before deployment.
- Conduct vendor contractual compliance checks: Regularly evaluate costs associated with AI vendors, ensuring adherence to agreed-upon pricing and services. Review vendor contracts to verify cost stipulations and avoid unexpected financial implications.
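As a minimal illustration of the KPI-dashboard idea in the first point above, the snippet below compares monthly AI spend against a budget and flags months that cross an alert threshold. The figures and the 80% threshold are assumptions for the example.

```python
monthly_budget = 5000.0          # assumed budget for Generative AI HR tooling
alert_ratio = 0.8                # warn once 80% of the budget is consumed

spend_by_month = {               # placeholder figures pulled from vendor invoices
    "2024-01": 3200.0,
    "2024-02": 4100.0,
    "2024-03": 5350.0,
}

for month, spend in spend_by_month.items():
    ratio = spend / monthly_budget
    if ratio >= 1.0:
        print(f"{month}: OVER BUDGET ({ratio:.0%}) - trigger escalation protocol")
    elif ratio >= alert_ratio:
        print(f"{month}: approaching budget ({ratio:.0%}) - review usage with vendor")
    else:
        print(f"{month}: within budget ({ratio:.0%})")
```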
3. Rapid adoption of Generative AI can cause anxiety among employees about their jobs
Technology should empower people to achieve better outcomes, not put them at risk of losing their jobs. Good companies with ethical employment practices should consider upskilling their employees and re-aligning their roles rather than choosing to lay anyone off.
As your organization adopts Generative AI into its workflows, many workers may feel insecure about their jobs and overwhelmed by rapidly changing workflows and job scope. Here are some risk factors to consider concerning employee well-being while adopting Generative AI for people operations:
- Miscommunication and rumors: make sure your employees understand how your organization plans to use Generative AI. If any role may face job cuts, communicate this beforehand so affected employees can focus on a job transition. Not doing so can spread rumors and fuel anxiety among employees.
- Resistance to change: many employees have mastered a certain way of doing their job and may not appreciate having to change the tools they use for it.
- Uncertainty with reskilling: your employees may wonder if reskilling makes sense as Generative AI evolves to become smarter and more capable at performing tasks.
Impact of adopting Generative AI solutions in HR on the organization:
While these risk factors may seem manageable, ignoring them can have drastic consequences: a negative impact on employee mental health, bad PR from layoffs, or reduced employee productivity. Such outcomes will prevent you from realizing the return on your Generative AI investment and may instead prove counterproductive. Since human resources is a critical function that directly affects workers, it is important to be careful about the pace of Generative AI adoption.
According to Gartner, the right implementation of Generative AI leads to an insight- and value-driven organization that supports more strategic roles. Your HR teams must therefore keep this transition smooth, focusing on automating processes and redefining job roles rather than making impulsive hiring-and-firing choices.

Mitigation response – implementing measures to maintain employee well-being and engagement during Generative AI adoption for HR
- Conduct reskilling programs: Generative AI is truly a tool for workers, and your employees must understand this. Since it will impact job roles and the scope of work, conduct prompt-engineering workshops and reskilling sessions to help them navigate workflow changes.
- Include employees in testing Generative AI workflows: involve employees in the AI integration process, seeking their feedback and addressing concerns proactively. This also helps you observe beforehand if Generative AI workflows are truly beneficial for employee productivity or not.
- Be proactive in addressing concerns: acknowledge and address potential anxieties or uncertainties about job roles or security due to Generative AI integration. Create a dedicated support channel through which employees can seek clarification and support.
- Provide clarity about career path: be transparent and honest about potential career paths available post-AI integration, emphasizing opportunities for growth and advancement. Follow an open-door policy where employees can approach managers and leadership for guidance and discuss concerns.
- Celebrate adopting AI: recognize and celebrate successful adaptability and innovation resulting from AI adoption. Implement reward mechanisms to appreciate employee contributions towards adapting to AI-driven HR processes.
Assigning responsibility to mitigate employee well-being challenges while adopting Generative AI for HR:
- Form an employee representative body: allow your employees to form a community where they can share feedback or raise concerns, and ensure that employees’ voices are heard during the AI adoption process.
- Form an internal communications team: to ensure HR teams communicate the adoption of AI in HR processes, hire an internal communications specialist who can help craft the narrative, address concerns, highlight benefits, and emphasize the organization’s commitment to employee well-being.
- Have active IT support in place: your vendors’ customer support and internal IT support should collaborate to help employees adopt new Generative AI tools for HR and AI-enabled HR processes.
Contingency plans in case of declining employee mental health and productivity due to Generative AI adoption for people operations:
It is important to have a contingency plan in place so that you have alternatives available in case of a spike in employee dissatisfaction:
- Make work flexible: Bill Gates has predicted a 3-day work week as AI removes manual workflows. Offer flexible work arrangements, including remote work options or flexible hours, to accommodate different work preferences and alleviate potential stress associated with the new AI systems.
- First, automate manual work to justify adoption: the key benefit of using Generative AI is that it helps your employees focus on value-generating work rather than manual tasks. Hence, as a first step, identify the tasks you can automate or make your employees more productive at. Introduce these changes in small increments: experiment with small teams first and observe any productivity improvements. For tasks that require a ‘decision,’ it is always better to have a human approve the LLM’s output. Doing so helps your employees understand the value of LLMs and their positive impact.
- Emergency response plan: provide training for managers to recognize signs of stress, burnout, or declining mental health in their teams. In doing so, have an emergency response plan ready to address acute mental health crises promptly. Ensure that employees are aware of emergency support services and resources.
Monitoring the risk of declining employee mental health during Generative AI adoption for HR:
- Monitor employee sentiment: conduct a baseline assessment of employee mental health before the Generative AI implementation to understand the existing state. Identify key stressors and concerns that might be exacerbated by the AI adoption. Then, regularly assess employee sentiment and engagement levels post-AI integration to identify areas needing improvement. Use benchmarks and industry standards to assess the organization’s performance relative to peers (a minimal pulse-survey sketch follows this list).
- Identify communication gaps: analyze changes in collaboration patterns, communication breakdowns, or isolation instances that may suggest increased stress or decreased well-being.
- Regular check-ins: schedule regular one-on-one check-ins between managers and employees to discuss their experiences and concerns.
- Anonymous reporting: establish anonymous reporting channels for employees to express concerns about mental health or the impact of Generative AI without fear of reprisal.
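One lightweight way to operationalize the sentiment-monitoring point above is to compare recurring pulse-survey scores against the pre-adoption baseline. The 1-5 scale, the sample scores, and the 0.3-point drop threshold below are illustrative assumptions.

```python
from statistics import mean

baseline_scores = [3.9, 4.1, 3.8, 4.0]        # pre-adoption pulse survey (1-5 scale)
latest_scores = [3.4, 3.6, 3.2, 3.5, 3.3]     # most recent post-adoption survey

baseline, latest = mean(baseline_scores), mean(latest_scores)
drop = baseline - latest

# Assumed rule of thumb: a drop of more than 0.3 points warrants follow-up check-ins.
if drop > 0.3:
    print(f"Sentiment down {drop:.2f} points vs baseline - schedule manager check-ins")
else:
    print(f"Sentiment stable (baseline {baseline:.2f}, latest {latest:.2f})")
```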
4. Adopting commercial AI technology that lacks regulations and government oversight
A 16-year-old using Generative AI for school homework is very different from an enterprise implementing Generative AI for business operations. OpenAI will potentially face many lawsuits in the coming years over the unauthorized use of copyrighted data for training its AI models. As per The Guardian, OpenAI has offered to pay its customers’ legal costs in such cases and is working on a legal strategy to combat these issues. Such incidents are proof of what happens when a technology is commercially scaled without regulatory oversight.
The challenge of employing AI in HR is that the nature of employee relations, workplace investigations, and managing company culture problems are highly contextual and interpreted on a case-by-case basis.
Gartner – Understanding Regulations Before Embracing AI in HR
Here’s a summary of various risk factors associated with adopting a technology like AI which is yet to fully commercialize:
- Internal or client data leaks: Generative AI has led to a massive boom in new-age startups that claim to equip your company with Generative AI capabilities. Scrutinize the ethics and cybersecurity measures of these vendors, especially if you handle client data or confidential internal data.
- Transparency challenges: for an evolving technology, it may get difficult to explain AI-driven decisions once they are made. For example, many governments now make it mandatory for organizations to transparently state any usage of AI for firing decisions.
- Regulatory risks: your organization may face legal and regulatory non-compliance due to the absence of clear guidelines. For example, the use of AI for conducting surgeries is still in the research and experimental phase – and demands regulatory oversight since it involves life and death impact.
- Vendor lock-in: there are not many players in the Generative AI space right now, so there is a high chance of relying on a single vendor for AI technology. Your company may therefore face vendor lock-in and limited flexibility.
Impact of adopting unregulated Generative AI solutions in HR:
For an established business, getting entangled in lawsuits is not a desirable outcome of investing in Generative AI technology. This matters even more if you belong to a highly regulated industry like healthcare or cybersecurity. Human resources is a critical function, and wrong hiring-and-firing decisions can cause severe distress for the company and the broader economy.
Regulations concerning AI in HR within the EMEA and APAC regions are indirectly influenced by broader regulatory initiatives related to privacy. For instance, in the EU, the General Data Protection Regulation (GDPR) and, in the case of Indian businesses, the Digital Personal Data Protection Bill (DPDPB) play a significant role in shaping the regulatory landscape.
Affected employees or candidates can invoke these regulations to seek protection against any unfair use of AI in HR practices.
Mitigation response – how to safeguard your organization from the lack of AI regulations
- Publish AI ethics and principles publicly: this helps your company anchor its Generative AI activities and adoption pace to a framework. For example, Salesforce published its 'Trusted AI Principles' as it adopted Generative AI into its products. Publishing and following such manifestos gives your employees and customers assurance that your Generative AI applications are released ethically. Of course, an organization must follow through on its principles across organizational levels and disciplines.

- Use zero-party or first-party data: while publishing our guide to using Generative AI for consultants, we observed how management consultants build their own LLMs because their client data needs to stay secure; their knowledge base may involve trade secrets. Similar considerations apply to other industries, so prefer data that your customers voluntarily provide, that your company owns, or that is publicly available. External data may not only expose you to the risk of copyright infringement but may also be inaccurate, unenriched, or poorly cleaned. Before using any data set, verify its authenticity and remove bias and toxic language.
- Share best practices and cautionary tales with the industry: Amazon’s AI-driven termination of warehouse employees based on productivity metrics raised questions over its methodology. In many jurisdictions, especially in Europe, organizations must provide algorithmic transparency and cannot allow AI to be the sole decision-maker for actions that have a legal impact on the people concerned. Such reports help make the world aware of the risks and impact of Generative AI adoption for HR operations and encourage safeguards against them.
- Run small experiments: Generative AI is still a new and evolving technology. Its implications are still being studied, while OpenAI keeps releasing updates and new GPT versions that bring new capabilities to the market. So when implementing any Generative AI tool to automate HR operations, keep the rollout iterative and run small experiments to gather data on its impact on workplace culture and productivity.
- Document and report: all processes related to Generative AI in HR operations must be well-documented to demonstrate compliance. For example, maintain records of policy changes, training sessions, and audits.
- Establish data consent mechanisms: set up clear mechanisms for obtaining employee and candidate consent for data processing activities. They should be informed about how Generative AI will be used in people operations within your organization.
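A minimal sketch of what such a consent record could look like is shown below. The fields and the 'ai_screening' purpose label are assumptions; align the actual schema with your legal and data governance teams and the regulations that apply to you (for example, GDPR or the DPDPB).

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    person_id: str                 # candidate or employee identifier
    purpose: str                   # e.g. "ai_screening", "ai_performance_summary"
    granted_at: datetime
    notice_version: str            # which AI-usage notice the person was shown
    withdrawn_at: Optional[datetime] = None

    def is_active(self) -> bool:
        """Consent counts only if it was granted and has not been withdrawn."""
        return self.withdrawn_at is None

record = ConsentRecord("cand-042", "ai_screening",
                       granted_at=datetime.now(timezone.utc), notice_version="v1.2")
assert record.is_active()
```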
Assigning responsibility to mitigate regulatory risks of adopting Generative AI for HR operations:
- Set up a strong legal team: if you’re adopting Generative AI, especially if you’re a large-cap organization, you must set up a legal team that has oversight on various AI adoption strategies implemented. They should track various regulatory movements in AI adoption and commercialization. Conduct impact assessments to determine how Generative AI aligns with legal requirements. Accordingly, make changes in your company’s policies to ensure compliance.
- HR teams: have someone from your HR team collaborate with the legal team to define and update HR policies on Generative AI usage. Provide training to HR staff on legal compliance and the ethical use of AI, then have them implement and enforce policies to mitigate legal and regulatory risks across the rest of the organization.
- Data governance team: this team must ensure your company adheres to applicable data protection regulations. Their job is to implement data anonymization and encryption practices while also ensuring compliance with data access and retention rules (a small pseudonymization sketch follows this list).
- Set up an ethics committee: hire ethics specialists who can help your organization provide oversight on the ethical impact of Generative AI adoption in HR practices. Establish communication channels for employees to report concerns to this committee. Engage external consultants or auditors to provide independent assessments of the organization’s compliance with regulations.
- Regulatory Liaison Officer: appoint a designated person to serve as a liaison between the organization and regulatory authorities. Seek guidance and stay up to date with regulatory changes.
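To illustrate the data governance team’s anonymization duty mentioned above, here is a minimal sketch that pseudonymizes candidate PII before any record is passed to an external LLM. The field list and salted-hash approach are assumptions; real anonymization and encryption practices need review against your actual data protection obligations.

```python
import hashlib

PII_FIELDS = {"name", "email", "phone"}          # assumed fields to pseudonymize
SALT = "replace-with-a-secret-from-your-vault"   # never hard-code this in practice

def pseudonymize(record: dict) -> dict:
    """Replace direct identifiers with salted hashes so downstream prompts
    never contain raw PII, while keeping records linkable for audits."""
    cleaned = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            digest = hashlib.sha256((SALT + str(value)).encode()).hexdigest()[:12]
            cleaned[key] = f"anon_{digest}"
        else:
            cleaned[key] = value
    return cleaned

print(pseudonymize({"name": "Jane Doe", "email": "jane@example.com",
                    "years_experience": 7, "skills": ["python", "sql"]}))
```

Keeping the salt in a secrets manager rather than in code, and logging which fields were transformed, also makes later audits easier.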
Contingency plans in case of non-compliance or legal issues on usage of Generative AI for HR and recruitment:
- Temporarily halt AI usage: on discovering non-compliance or facing bad PR, you may want to pause AI adoption and implementation activities. Use this time to thoroughly investigate and rectify compliance issues. Implement alternative HR and recruitment processes that comply with regulations during the suspension period. For this, you must have alternate workflows ready as a mitigation plan so that essential operations can continue without reliance on the problematic AI systems.
- Share progress: You should also demonstrate compliance efforts and publicly assure candidates about improvement in your hiring processes. Work towards building a positive relationship with regulatory authorities by sharing the implemented corrective measures and preventive actions.
Monitoring the risk of regulatory hurdles of using Generative AI for people operations:
- Regularly monitor regulatory news: your legal teams must stay informed about changes in relevant laws and regulations related to AI, data privacy, employment, and any other areas impacting people operations.
- Regular and independent audits: legal audits are a great way to align with changes in the regulatory landscape of AI to identify emerging risks and adjust strategies accordingly. Obtain third-party validation of adherence to legal standards.
- Participate in industry forums: your HR teams should attend various industry forums and events that share the latest developments and best practices on AI adoption and beyond. Collaborate with peers to share insights on regulatory challenges and solutions.
How are you mitigating the risks of Generative AI for your people operations workflows?
Generative AI will change the way we approach and perform our work – and not adopting it as it matures would be a missed opportunity. With the right risk mitigation strategies, it is possible to adopt Generative AI for HR operations safely and inclusively.
Did you experience any of the above-mentioned risks while implementing Generative AI solutions for HR operations? We would love to feature your experience on this blog post – email us at content@merrative.com
You can subscribe to our newsletter to get notified when we publish new guides – shared once a month!
This blog post is written using resources of Merrative – a publishing talent marketplace that helps you create publications and content libraries.
Get in touch if you would like to create a content library like ours in the niche of Applied AI, Technology, Machine Learning, or Data Science for your brand.
