The past few weeks have seen a number of significant developments in the global discussion of AI risk and regulation. The emerging theme, from both the US congressional hearing with OpenAI’s Sam Altman and the EU’s announcement of its amended AI Act, has been a call for more regulation.
But what has been surprising to some is the consensus among governments, researchers and AI developers on this need for regulation. In his testimony before Congress, Altman, the CEO of OpenAI, proposed creating a new government agency that would issue licenses to develop large-scale AI models.
He offered several suggestions for how such a body could regulate the industry, including “a mix of licensing and testing requirements,” and said companies like OpenAI should be independently audited.
While there is growing agreement on the risks, including potential impacts on people’s jobs and privacy, there is still little agreement on what such regulation should look like or what potential audits should focus on. At the first Generative AI Summit held by the World Economic Forum, where AI leaders from companies, governments and research institutions gathered to discuss how to address these new ethical and regulatory considerations, two main themes emerged:
The need for responsible, auditable AI
First, we need to update our requirements for companies developing and deploying AI models. This is especially important when we ask what “responsible innovation” actually means. The UK is leading this discussion: its government recently issued guidance for AI built around five core principles, including safety, transparency and fairness. Recent research from Oxford likewise highlights that “LLMs like ChatGPT are in dire need of an update in our concept of responsibility.”
A major driver behind this push for new responsibilities is the increasing difficulty of understanding and controlling the new generation of AI models. To see this evolution, compare “traditional” AI with large language model (LLM) AI in the example of recommending candidates for a job.
If a traditional AI model is trained on data that identifies the race or gender of employees in senior positions, it can learn those biases and recommend people of the same race or gender for jobs. Fortunately, this can be caught by inspecting both the data used to train the model and the recommendations it outputs.
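As a concrete illustration, here is a minimal sketch of what that kind of inspection can look like, using toy data and hypothetical column names (gender, senior, recommended); a real audit would cover more attributes and use vetted statistical tests:

```python
import pandas as pd

# Toy stand-ins for a model's training data and output recommendations.
training = pd.DataFrame({
    "gender": ["m", "m", "f", "m", "f", "m"],
    "senior": [1, 1, 0, 1, 0, 0],
})
outputs = pd.DataFrame({
    "gender":      ["m", "m", "m", "f", "f", "f"],
    "recommended": [1, 1, 0, 1, 0, 0],
})

# Check 1: who holds senior roles in the training data?
senior_share = training.loc[training["senior"] == 1, "gender"].value_counts(normalize=True)
print(senior_share)  # m: 1.0 -- the model never sees a senior woman

# Check 2: how often does the model recommend each group?
rec_rate = outputs.groupby("gender")["recommended"].mean()
print(rec_rate)      # f: 0.33, m: 0.67 -- a gap worth investigating
```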
With new LLM-powered AI, this kind of bias auditing is becoming increasingly difficult, and sometimes impossible. Not only do we not know what data a “closed” LLM has been trained on, but a conversational recommendation may introduce biases or “hallucinations” that are far more subjective.
For example, if you ask ChatGPT to summarize a presidential candidate’s speech, who judges whether it is a biased summary?
So it’s more important than ever for products that include AI recommendations to take on new responsibilities, such as making recommendations traceable, so that the models behind them can actually be checked for bias rather than relying solely on opaque LLMs.
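What traceability means in practice is open to interpretation; one simple reading is that every recommendation carries enough provenance to be audited later. A minimal sketch of that idea, with all field names illustrative rather than any standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RecommendationRecord:
    """Provenance stored alongside each recommendation, so it can be audited."""
    candidate_id: str
    recommended: bool
    model_version: str            # which model produced this output
    features_used: list[str]      # inputs the model saw, for later bias review
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

record = RecommendationRecord(
    candidate_id="c-1042",
    recommended=True,
    model_version="ranker-2024-05",
    features_used=["years_experience", "skills_match"],  # no protected attributes
)
```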
It is this boundary between a recommendation and a decision that is key to new AI regulation in HR. For example, New York City’s new AEDT law calls for bias audits of technologies used specifically in employment decisions, such as tools that can automatically decide whom to hire.
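The audits the NYC law requires center on selection rates and impact ratios across demographic categories. A minimal sketch of that calculation, with hypothetical column names and without the law’s full reporting requirements:

```python
import pandas as pd

def impact_ratios(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.Series:
    """Each group's selection rate divided by the highest group's rate."""
    rates = df.groupby(group_col)[selected_col].mean()  # selection rate per group
    return rates / rates.max()

decisions = pd.DataFrame({
    "sex":      ["f", "f", "f", "f", "m", "m", "m", "m"],
    "selected": [1, 0, 0, 1, 1, 1, 1, 0],
})

print(impact_ratios(decisions, "sex", "selected"))
# f: 0.67, m: 1.0 -- ratios well below 1.0 (a common rule of thumb
# flags anything under 0.8) warrant a closer look.
```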
However, the regulatory landscape is rapidly evolving beyond how AI makes decisions and into how AI is built and used.
Transparency in communicating AI standards to consumers
This brings us to the second main theme: the need for governments to define clearer and broader standards for how AI technologies are built and how these standards are communicated to consumers and employees.
At the recent OpenAI hearing, Christina Montgomery, IBM’s chief privacy and trust officer, stressed that we need standards to ensure consumers are informed every time they interact with a chatbot. This kind of transparency about how AI is developed, along with the risk of bad actors using open-source models, is central to the EU AI Act’s recent considerations of banning LLM APIs and open-source models.
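Such a disclosure standard could take many forms in products; one simple sketch, with hypothetical names, is to make the machine-generated label part of every reply a chatbot returns:

```python
from dataclasses import dataclass

@dataclass
class ChatReply:
    text: str
    ai_generated: bool = True  # surfaced to the user in the UI
    disclosure: str = "This response was generated by an AI system."

def wrap_model_output(raw_text: str) -> ChatReply:
    """Attach the disclosure metadata before the reply reaches the user."""
    return ChatReply(text=raw_text)

reply = wrap_model_output("Here is a summary of your benefits policy...")
print(f"{reply.disclosure}\n\n{reply.text}")
```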
The question of how to manage the proliferation of new models and technologies will require further discussion before the trade-offs between risks and benefits become clearer. But what is becoming increasingly clear is that as the impact of AI grows, so does the urgency for standards and regulations, as well as awareness of both the risks and opportunities.
Implications of AI regulation for HR teams and business leaders
The impact of AI is perhaps being felt most quickly by HR teams, who are being asked both to meet new pressure to provide employees with upskilling opportunities and to give their executive teams forecasts and workforce plans for the new skills their business strategies will require.
At the two recent WEF summits on Generative AI and the Future of Work, I spoke with AI and HR leaders, as well as policymakers and academics, about an emerging consensus: all companies need to push for responsible AI adoption and awareness. The WEF has just released its “Future of Jobs Report,” which projects that 23% of jobs will change over the next five years, with 69 million jobs created but 83 million eliminated, a net loss of 14 million jobs.
The report also highlights that not only will six in ten workers need new skills before 2027 to do their jobs, requiring upskilling and reskilling, but only half of workers currently have access to adequate training opportunities.
So how should teams keep employees engaged through this AI-accelerated transformation? By driving an internal transformation centered on employees: carefully building a compliant, connected set of people and technology experiences that give employees more transparency into their careers and the tools to develop themselves.
The new wave of regulation sheds light on how bias in decisions about people, such as hiring, should be considered. And as these technologies are adopted both in and out of the workplace, business and HR leaders carry greater responsibility than ever to understand both the technology and the regulatory landscape, and to drive a responsible AI strategy across their teams and companies.
Sultan Saidov is president and co-founder of Beamery.