Adrien Book

The Case Against AI Regulation Makes No Sense

Ever since OpenAI released ChatGPT into the wild in late 2022, the world has been abuzz with talk of Generative Artificial Intelligence and the future it could create. Capitalism’s fanboys see the technology as a net positive: the logical continuation of the digital world, which has contributed to the creation of untold wealth… for a select few. Skeptics, meanwhile, recall the best of Sci-Fi and fear we may be well on our way to creating our own HAL / SHODAN / Ultron / SkyNet / GLaDOS.

These are the loud minorities. Most people presented with the possibilities offered by Generative Artificial Intelligence understand that technology is merely a tool, without a mind of its own. The onus is on users to “do good” with it. And if that is not possible because “good” is inherently subjective… then democratic governments need to step in and regulate.

If (and how) this is to be done is still hotly debated. The European Union was first out of the gate with the proposed AI Act. It is an imperfect first draft, but it has the benefit of being a real attempt at managing a highly disruptive technology rather than letting tech billionaires call the shots. Below is a summary of the proposed law, and the pros and cons of such regulation.


What is in the EU’s AI Act

The AI Act puts risk at the core of the discussion: “The new rules establish obligations for providers and users depending on the level of risk from artificial intelligence. While many AI systems pose minimal risk, they need to be assessed.”

  • AI posing “unacceptable” levels of risk (behavioural manipulation, real-time and remote biometrics, social scoring…) will be banned

  • High-risk AI systems (relating to law enforcement, education, immigration…) “will be assessed before being put on the market and also throughout their lifecycle”

  • Limited-risk AI systems will need to “comply with minimal transparency requirements that would allow users to make informed decisions.”

Generative AI gets a special mention within the proposed regulation. Companies using the technology will have to:

  • Disclose AI-generated content

  • Design safeguards to prevent the generation of illegal content

  • Publish summaries of copyrighted data used for training

If that seems satisfyingly pragmatic while remaining overly broad, trust your instincts. Companies failing to comply could face fines of up to 6% of their annual turnover (for a company with €10 billion in revenue, that is up to €600 million) and be kept from operating in the EU. The region is estimated to represent between 20% and 25% of a global AI market projected to be worth more than $1.3 trillion within 10 years… which is why tech companies may say they’ll leave… but never will. The law is expected to pass around 2024.
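For the developers in the room, the tiered logic above is easy to picture as code. Below is a deliberately simplified sketch: the tier names and example use cases paraphrase the summary above rather than the legal text, and every function and label name is invented for illustration.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified mirror of the AI Act's risk tiers (illustrative, not legal text)."""
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring -> banned outright
    HIGH = "high"                  # e.g. law enforcement -> assessed pre-market and over lifecycle
    LIMITED = "limited"            # e.g. chatbots -> minimal transparency requirements
    MINIMAL = "minimal"            # everything else

# Hypothetical mapping of use cases to tiers, paraphrasing the summary above.
EXAMPLE_USE_CASES = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "realtime_remote_biometrics": RiskTier.UNACCEPTABLE,
    "exam_grading": RiskTier.HIGH,        # education is a high-risk domain
    "immigration_triage": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def max_fine(annual_turnover_eur: float) -> float:
    """Upper bound on non-compliance fines: 6% of annual turnover, per the proposal."""
    return 0.06 * annual_turnover_eur

# A company with EUR 10 billion in turnover risks fines of up to EUR 600 million.
print(f"{max_fine(10_000_000_000):,.0f}")  # 600,000,000
```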

Why Generative Artificial Intelligence should not be regulated

Plenty has been written about the fact that tech billionaires say they want AI to be regulated. Let’s make one thing clear: that is a front. Mere PR. They do not want regulation, and if it comes, they want it on their own terms. Below are some of the best arguments presented by them and their minions over the past few months.

Stifling Innovation and Progress

The case could be made that regulations will slow down AI advancements and breakthroughs, and that not allowing companies to test and learn will make them less competitive internationally. However, we have yet to see definitive proof that this is true. Even if it were, the question would remain: is unbridled innovation right for society as a whole? Profits are not everything. Maybe the EU will fall behind China and the US when it comes to creating new unicorns and billionaires. Is that so bad, as long as we still have social safety nets, free healthcare, parental leave and six weeks of holiday a year? If having all this, thanks to regulations, means a multi-millionaire cannot become a billionaire, so be it.

The domestic competitiveness argument is far more relevant to the discussion at hand: regulation can create barriers to entry (high costs, standards, or requirements on developers or users) for new companies, strengthening the hand of incumbents. The EU already saw this when implementing the GDPR. Regulations will need to carve out a space for very small companies to experiment, something that is already being discussed at the EU level. And if they’re so small, how much harm can SMEs do anyway, given the exponential nature of AI’s power?

Complex and Challenging Implementation

Regulations relating to world-changing technologies can often be too vague or broad to be applicable. This can make them difficult to implement and enforce across different jurisdictions. This is particularly true when accounting for the lack of clear standards in the field. After all, what are risks and ethics if not culturally relative?

This makes the need to balance international standards and sovereignty a particularly touchy subject. AI operates across borders, and regulating it requires international cooperation and coordination. This can be complex, given varying legal frameworks and cultural differences. That, at least, is what they will say.

There are, however, few voices calling for a single worldwide regulation. AI is (in so many ways) not the same as the atomic bomb, whatever the doomsayers calling for a “New START” approach may claim. The EU will have its own laws, and so will other world powers. All we can ask for is a common understanding of the risks posed by the technology, and limited cooperation to cover blind spots within and between regional laws.

Potential for Overregulation and Unintended Consequences

Furthermore, we know that regulation often fails to adapt to the fast-paced nature of technology. AI is a rapidly evolving field, with new techniques and applications emerging regularly. New challenges, risks and opportunities continuously appear, and we need to remain agile and flexible enough to deal with them. Keeping up with the advancements and regulating cutting-edge technologies can be challenging for governing bodies… but that has never stopped anyone, and the world still stands.

Meanwhile, governments must make sure that new industries (not considered AI) are not caught up in the scope of existing regulation, with unexpected consequences. We wouldn’t want, for example, climate efforts to suffer because a carbon capture system uses a technology akin to generative AI to recommend regions to target for cleanup.

It is important to avoid excessive bureaucracy and red tape… but that is not a reason to do nothing. The EU’s proposed risk-based governance is a good answer to these challenges. Risks are defined well enough to apply to all people across the territory, while allowing for changes should the nature of artificial intelligence evolve.

There are, in truth, few real risks in regulating AI… and plenty of benefits.

Why Generative Artificial Intelligence needs to be regulated

There are many reasons to regulate generative AI, particularly when looking through the prism of risks to under-privileged or defenceless populations. It can be easy not to take automated and wide-scale discrimination seriously… when you’ve never been discriminated against. Looking at you, tech bros.

Ensuring Ethical Use of Artificial Intelligence

Firstly (and obviously), regulation is needed to apply and adapt existing digital laws to AI technology. This means protecting the privacy of users (and their data). AI companies should invest in strong cyber-security capabilities when dealing with data-heavy algorithms… and forgo some revenue, as user data should not be sold to third parties. This is a concept American companies seem to wilfully misunderstand in the absence of regulation.

As mentioned in the AI Act, it is also crucial that tech companies remove the potential for bias and discrimination from algorithms dealing with sensitive topics. That entails (a) ensuring none is purposefully injected, and (b) ensuring naturally occurring biases are removed so they are not reproduced at scale. This is non-negotiable, and if regulatory crash-testing is needed, so be it.
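What might such crash-testing look like in its simplest form? Here is a minimal sketch of one classic fairness check, the “four-fifths” disparate-impact ratio. The group labels, the sample decision data and the 0.8 threshold are illustrative assumptions, not anything prescribed by the AI Act.

```python
from collections import defaultdict

def disparate_impact_ratio(decisions: list[tuple[str, bool]]) -> float:
    """Ratio of the lowest group's approval rate to the highest group's.

    `decisions` pairs a (hypothetical) protected-group label with the
    model's yes/no outcome. A ratio below ~0.8 is the classic
    "four-fifths rule" red flag borrowed from US employment law.
    """
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, outcome in decisions:
        total[group] += 1
        approved[group] += outcome
    rates = {g: approved[g] / total[g] for g in total}
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: (group, loan_approved)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(disparate_impact_ratio(sample))  # 0.5 -> would fail a four-fifths check
```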

More philosophically, regulation can help foster trust, transparency, and accountability among users, developers, and stakeholders of generative AI. By having all actors disclose the source, purpose, and limitations of AIs’ outputs, we will be able to make better choices… and trust the choices of others. The fabric of society needs this.

Safeguarding Human Rights and Safety

Beyond the “basics”, regulation needs to protect populations at large from AI-related safety risks, of which there are many.

Most will be human-related risks. Malicious actors can use Generative AI to spread misinformation or create deepfakes. This is very easy to do, and companies have not put a stop to it themselves, mostly because they are unwilling (not unable) to tag AI-generated content. Our next elections may depend on regulations being put in place… while our teenage daughters may ask why we didn’t do it sooner.
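The technical bar for tagging is not the problem; the will is. As a toy illustration of how low that bar can be, here is a sketch of a disclosure envelope for generated text. Every field name and the model identifier are invented for this example; real-world approaches, such as C2PA provenance metadata or statistical watermarking, are more robust.

```python
import hashlib
import json
from datetime import datetime, timezone

def label_ai_content(text: str, model_name: str) -> str:
    """Wrap generated text in a disclosure envelope (a toy stand-in for
    real provenance standards; field names are invented for this sketch)."""
    return json.dumps({
        "content": text,
        "ai_generated": True,  # the disclosure itself
        "model": model_name,   # hypothetical identifier
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(text.encode()).hexdigest(),  # tamper check
    })

print(label_ai_content("Sample output.", "some-model-v1"))
```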

We also need to keep humans from using generative Artificial Intelligence to do physical harm to other humans: it has been reported that AI can be used to describe the best way to build a dirty bomb. Here again, if a company cannot prevent this to the best of its abilities, I see no reason for us to continue to allow it to exist in its current form.

All this is without even going into the topic of AI-driven warfare and autonomous weapons, the creation of which must be avoided at all costs. That scenario is, however, so catastrophic that it is often used to hide the many other problems with AI. Why concentrate on data privacy when Terminator is right around the corner, right? Don’t let the doomers distract you from the very boring, but very real fact: without strong AI regulation tackling the above, society may die a death of a thousand cuts rather than fall to one singular weaponized blow.

This is why we must ensure that companies agree to create systems that align with human values and morals. Easier said than done, but having a vision is a good start.

Mitigating Social and Economic Impact

There are important topics that the AI Act (and every other proposed regulation) does not completely cover. They will need to be further assessed over the coming years, but their very nature makes regulating without over-regulating difficult, though no less needed.

Firstly, rules are needed to fairly compensate people whose data is used to train algorithms that will bring so much wealth to so few. Without this, we are only repeating the mistakes of the past, and making a deep economic chasm deeper. This is going to be difficult; there are few legal precedents to inform what is happening in the space today.

It will also be vital to address AI-driven job displacement and unemployment. Most roles are expected to be impacted by artificial intelligence, and with greater automation often comes greater unemployment. According to a report by BanklessTimes.com, AI could displace 800 million jobs (30% of the global workforce) by 2030.

Finally, it will be important to continuously safeguard the world’s economies against AI-driven monopolies. Network effects mean that catching up to an internet giant is almost impossible today, for lack of data or compute. Anti-trust laws have been left largely untouched for decades, and that can no longer go on. Regulation will not make us less competitive in this case; it may make the economy more so.

 

The regulatory game has just started. Moving forward, governments will need to cooperate to establish broad frameworks while promoting knowledge sharing and interdisciplinary collaboration.

These frameworks will need to be adaptive and collaborative, lest they become unable to keep up with AI’s latest developments. Regular reviews and updates will be key, as will agile experimentation in sandbox environments.

Finally, public engagement and inclusive decision-making will make or break any rules brought forward. We need to involve diverse stakeholders in regulatory discussions and engage the public in AI policy decisions. These rules are for all of us, and communicating that fact well will help governments counteract tech companies’ lobbying.

The regulatory road ahead is long: today, no foundational LLM complies with the EU's AI Act. Meanwhile, China’s regulation concentrates on content control rather than risks, further tightening the Party’s grip on free expression.

The regulatory game has just started. But… we’ve started, and that makes all the difference.

Good luck out there.

 

This article was originally written for wearedevelopers.com, Europe’s developer-focused job platform.
