Preparing for Generative AI

Enrico Foglia

Amidst the fervor ignited by ChatGPT and generative artificial intelligence (AI), excitement is growing about harnessing the increasingly sophisticated potential of this technology. Yet according to the 2022 North America AI Survey conducted by Baker McKenzie, many business leaders may be underestimating the risks AI poses to their organizations: only 4% of senior respondents rated the risks of AI use as “significant,” and fewer than half reported having AI experience on their corporate boards.

These findings reveal a concerning situation: many organizations are ill-prepared for the era of AI, lacking the oversight and expertise from key decision makers needed to address the risk. Left unaddressed, these organizational gaps in the ethical and effective implementation of the technology could overshadow its transformational opportunities and leave companies unable to keep up with its explosive growth.

How is generative AI changing the risk landscape? AI advancement and adoption are occurring at an exponential, some would say excessive, pace. Even before this surge, industry professionals, academics, scientists, policy makers, and legal experts were advocating for the ethical and legal use of AI, particularly in the workforce, where AI applications in human resources are already widespread.

A survey found that 75% of companies already use AI-powered tools and technologies for human resources selection and management. At this advanced stage of generative AI, foundational principles of governance, accountability, and transparency are more crucial than ever, and so are concerns about the consequences of poor AI implementation.

For instance, algorithms deployed without human oversight can produce biased and discriminatory outcomes, exacerbating inequalities and hindering efforts to build a diverse workforce. Data protection and privacy breaches are another concern, often caused by inadequate anonymization and the indiscriminate collection of employee data.

Furthermore, generative AI has introduced new considerations regarding intellectual property, raising questions about ownership of both input and output from third-party programs and subsequent copyright infringement concerns.

On the regulatory front, governments and regulatory authorities are moving to enact AI laws and enforcement mechanisms. In the United States, the use of AI in human resources selection and management will be a key focus of emerging regulations. Legal actions, including class-action lawsuits, are also on the horizon: the first litigation over generative AI and intellectual property ownership has already emerged in the United States, and these early rulings are helping shape the legal framework in the absence of consolidated regulation.

Organizations adopting generative AI must also consider that data entered into AI tools and queries is often collected by third-party providers. In some cases, those providers may have the right to use and/or disclose that data.

Where employers introduce generative AI tools to augment their workforce, the vulnerability of sensitive personal data and confidential business information becomes a pressing concern.
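
One practical safeguard is to scrub obvious identifiers from prompts before they reach an external provider. The Python sketch below illustrates the idea; the patterns and the `EMP-` badge format are hypothetical placeholders, and a real deployment would rely on a vetted PII-detection tool rather than a handful of regular expressions.

```python
import re

# Hypothetical patterns for illustration; a production system would use a
# vetted PII-detection library and cover many more identifier types.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),    # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),        # US SSN-shaped numbers
    (re.compile(r"\bEMP-\d{6}\b"), "[EMPLOYEE_ID]"),        # assumed internal badge format
]

def redact(prompt: str) -> str:
    """Strip obvious personal identifiers before a prompt leaves the organization."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Summarize the performance review for jane.doe@example.com, badge EMP-004217."))
# -> Summarize the performance review for [EMAIL], badge [EMPLOYEE_ID].
```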

What are the right moves for organizations?

To keep pace with generative AI and address the specific risks of various use cases, companies will need to move beyond isolated efforts and establish a robust governance framework that brings together all relevant functions. While many rely on data scientists to drive AI-related initiatives, involving all stakeholders, including legal, senior management, boards of directors, privacy, compliance, and human resources, throughout the decision-making process is crucial.

The results of our survey highlighted a lack of representation in this regard. Currently, only 54% of respondents involve human resources in the decision-making process regarding AI-powered tools, and only 36% stated having a Chief AI Officer (CAIO) in their company.

In this high-risk context, the CAIO will play a fundamental role: ensuring adequate governance and oversight at the C-suite level and bringing human resources into the training and support of a cross-functional AI team.

In parallel, companies should develop and implement a robust internal governance framework that considers business risks across all use cases, enabling timely corrective compliance actions when issues arise.

The risk for companies lacking an AI governance structure and adequate oversight from key stakeholders or blindly relying on third-party tools is that they might use AI in a way that generates legal liabilities (such as discrimination claims).

In practice, any decision-making process, whether based on AI or other criteria, is inherently subject to bias. Companies adopting these technologies must develop a framework for assessing and reducing bias while ensuring compliance with data privacy regulations, and anti-bias efforts should be backed by effective pre- and post-implementation testing.
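
As one example of such a test, the widely used “four-fifths” heuristic from US EEOC guidance flags potential adverse impact when any group’s selection rate falls below 80% of the highest group’s rate. The sketch below shows the arithmetic on toy data; it illustrates the heuristic only and is not a compliance standard.

```python
from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs from a screening tool."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(outcomes, threshold=0.8):
    """Return, per group, whether its selection rate is at least `threshold`
    times the highest group's rate (the EEOC four-fifths heuristic)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best >= threshold for g, rate in rates.items()}

# Toy data: group A selected 40/100, group B selected 25/100.
decisions = ([("A", True)] * 40 + [("A", False)] * 60
             + [("B", True)] * 25 + [("B", False)] * 75)
print(four_fifths_check(decisions))  # {'A': True, 'B': False} -> B shows adverse impact
```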

Furthermore, companies implementing AI should establish processes that ensure a clear understanding of the data used, algorithmic logic, and technological limitations, as future AI laws are likely to demand reporting on these aspects.
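
What such a process might record is sketched below: a minimal inventory entry capturing dataset provenance, a plain-language summary of the logic, and known limitations. The field names, tool, and vendor are invented for illustration; the actual content of any report will depend on the laws that emerge.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class AIToolRecord:
    """One entry in an internal AI inventory. The fields are assumptions about
    what reporting rules might ask for, not a legal standard."""
    tool_name: str
    vendor: str
    use_case: str
    datasets: list          # provenance of training/evaluation data, as far as known
    logic_summary: str      # plain-language description of how outputs are produced
    known_limitations: list
    last_bias_review: date

record = AIToolRecord(
    tool_name="ResumeScreener",           # hypothetical tool
    vendor="ExampleVendor Inc.",          # hypothetical vendor
    use_case="First-pass resume triage for open requisitions",
    datasets=["Vendor training set; provenance documentation requested"],
    logic_summary="Ranks applications against role requirements; vendor model is partially opaque",
    known_limitations=["Not validated for non-English resumes"],
    last_bias_review=date(2023, 6, 1),
)
print(json.dumps(asdict(record), default=str, indent=2))
```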

The takeaway is simple: AI is being adopted widely and rapidly, and it offers numerous benefits. But its rapid implementation demands strategic attention and rigorous governance to ensure responsible use and mitigate risk. Many companies are not adequately prepared for AI, tend to underestimate its risks, and so risk adopting the technology without the necessary precautions.

Fortunately, by establishing robust governance and oversight structures, organizations can navigate technological challenges, wherever they are on their AI journey.

Ultimately, the long-term management of AI-related risks will require collaboration among stakeholders, including legal professionals, regulators, and privacy experts, to develop laws, ethical codes, or guidelines that recognize both the opportunities and risks presented by this technology.

With a solid framework in place, organizations will be able to implement AI securely and leverage its benefits with greater confidence.