The matter of AI regulation is appearing more and more in the news due to the unprecedented pace of development we have all witnessed this year. The ChatGPT breakthrough opened a Pandora's box: tech companies are now scrambling to gain dominance in the industry.
Naturally, such an obsession with AI raises major concerns and prompts calls for immediate regulation to make this fast-advancing technology safer and hold companies responsible for their creations. However, this is exactly where the real can of worms opens, as there is little agreement on the means of controlling artificial intelligence.
Monitoring the AI regulation dispute, experts draw parallels with another global threat: climate change. That issue is widely agreed to be serious, and it is backed by a strong international regulatory framework. While a warmer climate might offer marginal benefits in specific regions, authoritative scientific reports from the UN's Intergovernmental Panel on Climate Change (IPCC) emphasize the scarcity of positive outcomes for human health. The main problem, however, is that numerous loopholes have reduced the effectiveness of the established regulations. Many businesses that should have adhered to the laws and mitigated pollution have exploited these regulatory imperfections, which has brought us to where we are now: on the brink of climate catastrophe.
While both issues demand global solutions, it is crucial to avoid the pitfalls that hindered climate change mitigation efforts over the past three decades, especially given the immense potential AI holds. The IPCC, established back in 1988, took years to issue its substantial reports. AI regulation cannot afford such delays.
The matter of AI regulation is in even worse shape, because climate experts at least reached a consensus through scientific analysis and reporting, which helped shape climate change policies. The AI community lacks a similar consensus: experts remain politically and technologically divided on the potential harms and existing risks associated with AI.
Given this disparity, the idea of exploring an IPCC-like model for AI regulation has gained traction as a working strategy for now. Professor Robert Trager from the Centre for the Governance of AI acknowledges the active consideration among policymakers for such an approach.
One source of inspiration is the climate secretariat within the UN, formed after the 1992 treaty on climate change, whose annual "Conference of the Parties" (COP) meetings have produced crucial climate agreements. However, COPs have often been hampered by the requirement for consensus among nearly 200 countries, impeding progress and obstructing decision-making.
While COPs have served valuable purposes, they have also exposed the need for agility and effectiveness in global collaboration. Therefore, any future Intergovernmental Panel on Artificial Intelligence must be agile and adaptable to keep pace with the rapid advancements in AI.
Beyond the IPCC, other models are being considered to guide the regulation of AI. Some advocate for a structure akin to the International Atomic Energy Agency, while others prefer the less intrusive approach of the International Civil Aviation Organization.
In conclusion, the urgent need for global AI regulation necessitates drawing lessons from the international approach to climate change. While recognizing the distinct challenges and benefits associated with each, policymakers and experts must collaborate to establish an agile and efficient regulatory framework. By avoiding the shortcomings of climate change governance, we can harness the power of AI to address pressing global issues, improve lives, and shape a future that is beneficial for all.
You can also read about the governmental approach to regulating AI copyright law in our previous news coverage.