Adrien Book

Exploring the Ethics of Artificial Intelligence: A Guide to Navigating the Risks and Benefits of AI

Is Artificial Intelligence (A.I) a revolution or a war? A god or a pet? A hammer or a nail? Do we really need more metaphors to describe it? Nowadays, A.I dictates what information is presented to us on social media, which ads we see, and what prices we’re offered both on and offline. An algorithm can technically write and analyse books, beat humans at just about every game conceivable, make movies, compose classical music and help magicians perform better tricks. Beyond the arts, it also has the potential to encourage better decision-making, make medical diagnoses, and even solve some of humanity’s most pressing challenges. It is becoming intertwined with criminal justice, retail, education, recruiting, healthcare, banking, farming, transportation, warfare, insurance, media… the list goes on.

Yet we’re so often busy discussing the ins and outs of whether A.I CAN do something that we seldom ask whether we SHOULD design it at all.

This is where ethics comes in. Companies and governments alike have come to realise that statistics on steroids are capable of great harm, and are studying various ways to deal with the potential fallout without impacting their bottom line or strategic geopolitical advantage. They have come up with dozens of “principles”, each more unenforceable than the next, while failing to agree on even a basic framework. Discussing war, automation, mass surveillance and authoritarianism is paramount; but these discussions cannot take place before key ethical principles and red lines are agreed upon.

As such, below is a “quick” guide to the discussions surrounding A.I and ethics. It aims to help democratise the conversation: we do not necessarily need smarter people at the table (and nothing I write will be news to an expert), but we DO need a bigger table. Or more tables. Or more seats. Or some sort of video-conference solution.

I hate metaphors. 

Ethics Can Mean Many Different Things

Before we dive into the contemporary discussion about ethics, we first need to understand what ethics is. It has a pretty straightforward dictionary definition: “moral principles that govern a person’s behaviour or the conducting of an activity”.

That’s about as far as anyone can get before contrarians such as myself come to ruin the fun for everyone. You see, even if we separate normative ethics (the study of ethical action) from its lamer cousins, meta-ethics and applied ethics, there’s still no single definition of what is good/bad and/or wrong/right. Indeed, what is good may be wrong, and what is bad may be right.

Here are the schools of thought to know about in order to best understand why current proposals on A.I ethics have little to do with moral principles:

  • Consequentialism; TL;DR = The greatest happiness of the greatest number is the foundation of morals and legislation, aka “the ends justify the means”. Close cousin: utilitarianism.

  • Deontology; TL;DR = It is our duty to always do what is right, even if it produces negative consequences. “What thou avoidest suffering thyself seek not to impose on others” (Epictetus, aka the guy with the most epic name in philosophy, also a Stoic). Close cousin: Kantianism.

  • Hedonism; TL;DR = Maximising self-gratification is the best thing we can do as people.

  • Moral intuitionism; TL;DR = It is possible to know what is ethical without prior knowledge of other concepts such as good or evil.

  • Pragmatism; TL;DR = Morals evolve, and rules should take this into account.

  • State consequentialism; TL;DR = Whatever is good for the state is ethical.

  • Virtue ethics; TL;DR = A virtue is a character trait that stems from prioritising good over evil through knowledge; it is distinct from any single action or feeling. Close cousin: Stoicism.

Lesson 1: If a company or government tells you about its ethical principles, it is your duty to dig and ask which ethical branch those principles are based on. Such definitions reveal a great deal.

It’s important to ask, because as we’ll see below, institutions like to use the word ethics without ever going near anything resembling a moral principle (please refer to the title of this article for a regular sanity check). The good news, however, is that there is literally no correlation between knowing a lot about ethics and behaving ethically.

“Ethics Theater” Plagues Companies

Companies exist to reward shareholders. At least, that’s the business philosophy that has been espoused for the past 50 years. As such, companies have no incentive to do either “right” or “good” unless their profits are at risk. All that matters to them, technically, is that customers see them as doing good/right. Ethics theater is the idea that companies will do all they can to APPEAR to behave ethically, without actually doing so, in order to prevent consumer backlash. A perfect way to do this is to announce grand, non-binding principles and rules in no way linked to actual ethics, and point to them should any challenge arise.

Below are such principles, as defined by a few large A.I companies. This is in no way exhaustive (yet is exhausting), but provides an insight into corporation-sponsored ethics-washing. These rules generally fall into 4 categories.

Accountability / Responsibility

“Designate a lead AI ethics official” (IBM); “AI designers and developers are responsible for considering AI design, development, decision processes, and outcomes” (IBM); “Be accountable to people” (Google); “AI systems should have algorithmic accountability” (Microsoft).

Why it’s B-S: Firstly, and like many of the points below, none of this is about ethics per se, even though some of the papers actually have the word in their title. Secondly, nowhere is it written that executives should be accountable to the law of the land, giving them free rein to do whatever the hell they want. Indeed, few laws exist to rein in A.I, but that is literally why we have ethics: nowhere is it stated which standards companies will be held accountable to. Deontology? Consequentialism? It’s anyone’s guess at this point.

Transparency

“Don’t hide your AI” (IBM); “Explain your AI” (IBM); “AI should be designed for humans to easily perceive, detect, and understand its decision process” (IBM); “AI systems should be understandable” (Microsoft).

Why it’s B-S: I won’t go into much detail here because this is more technical than theoretical (here’s a quick guide), but A.I is a black box by its very nature. For full transparency, companies would have to make parts of their code available, something that has been discussed but is (obviously) fiercely opposed. The other solution comes from the GDPR’s “right to explanation”, which concentrates on input rather than output: users can demand the data behind the algorithmic decisions made about them. This is a great idea, but it is not implemented anywhere outside of Europe.

Fairness / Bias

“Test your AI for bias” (IBM); “AI must be designed to minimize bias and promote inclusive representation” (IBM); “Avoid creating or reinforcing unfair bias” (Google); “AI systems should treat all people fairly” (Microsoft).

Why it’s B-S: A system created to find patterns in data might find the wrong patterns. That is the simplest definition of A.I bias. Such a buzzword helps companies shy away from hard topics such as sexism, racism and ageism. God forbid they have to ask themselves hard questions, or be held accountable for the data-sets they use. We have every right (and duty) to demand to know exactly which biases are being addressed, and how.
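
To make “which biases, and how” concrete: below is a minimal, hypothetical audit sketch (in Python) that checks a model’s decisions for disparate impact across groups. The group labels, toy data and the 0.8 threshold (the “four-fifths rule” often cited in US hiring contexts) are illustrative assumptions, not something any of the companies above has committed to.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions, threshold=0.8):
    """Flag groups whose approval rate falls below `threshold` times
    the most-favoured group's rate (toy version of the four-fifths rule)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: round(r / best, 2) for g, r in rates.items() if r / best < threshold}

# Invented decisions: group B is approved far less often than group A.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
print(disparate_impact(decisions))  # {'B': 0.5} -> group B gets flagged
```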

Data and Privacy 

“AI must be designed to protect user data and preserve the user’s power over access and uses” (IBM); “Incorporate privacy design principles” (Google); “AI systems should be secure and respect privacy” (Microsoft).

Why it’s B-S: If they really cared, they would have implemented the European standard (all hail the GDPR). They have not. Case closed.

Ethics is only truly mentioned twice in the many reports I’ve read:

“AI should be designed to align with the norms and values of your user group in mind” (IBM); “We will not design or deploy AI in the following application areas: technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints” (Google).

This tells us that IBM believes in pragmatism (fair enough), while Google is a consequentialist company. This is odd, because “don’t be evil”, the company’s longtime motto, is technically deontology. Such a dichotomy highlights a glaring carelessness: one of the world’s largest companies is defining A.I principles that may reach far into society while simultaneously going against its own internal culture. This sounds like over-analysis until you realise that there have been many internal employee revolts at Google over the past few months for this very reason.

You may have noted that only three companies are named above (Google, IBM, Microsoft). That’s because the other major A.I companies have yet to produce anything worth picking apart, choosing instead to invest in think-tanks that will ultimately influence governments. This points to a major flaw common to all these principles: none of them commit the companies to enforceable rules. Why, then, do companies bother with ethics theater? The first reason, as explained above, is to influence governments and steer the conversation in the “right” direction (note the similarities below between company and government priorities). Secondly, it’s good to be seen as ethical by customers and employees, so as to avoid boycotts. Thirdly, and maybe most importantly, there is big money to be made in setting a standard: Patents x universal use = $$$.

Lesson 2: Companies know very little about ethics, and have no incentives to take a stand on what is good or right. Corporate ethics is an oxymoron.

Governments are Doing their Best

There are many government-published white papers out there, but they are either vague as all hell or shamefully incomplete. Furthermore, many see A.I through the lens of economic and geopolitical competition. One notable exception is the clear emphasis on ethics and responsibility in the EU’s A.I strategy and vision, especially relative to the US and China (both morally discredited to the bone). To get an overall look at what countries believe A.I ethics should be, I’ve sorted their principles into 7 categories, most of which closely resemble those highlighted in the above analysis of corporations.

Note that this is merely a (relevant) oversimplification of thousands of pages written by people much smarter and more informed than myself. I highly recommend reading the linked documents as they provide in-depth information about the listed principles. 

Artificial Intelligence Ethics

Accountability / Responsibility

“Principle of Accountability by Design” (UK); “Those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems (…)” (Australia); “All AI systems must be auditable” (Norway); “DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities” (US DoD); “the principle of liability” (China); “Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes (…)” (EU); “Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles” (OECD); “those who design and deploy the use of AI must proceed with responsibility and transparency” (the Vatican).

Accountable TO WHAT?! TO WHOM?! How is this question so very systematically avoided? 

Transparency

“Process and outcome transparency principles” (UK); “There should be transparency and responsible disclosure to ensure people know when they are being significantly impacted by an AI system (…)” (Australia); “AI-based systems must be transparent” (Norway); “The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology (…)” (US DoD); “the data, system and AI business models should be transparent (…)” (EU); “There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them” (OECD); “in principle, AI systems must be explainable” (the Vatican).

How about we start by forcing companies to reveal whether or not they are REALLY using A.I?

Fairness / Bias

“Principle of discriminatory non-harm” (UK); “AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups” (Australia); “AI systems must facilitate inclusion, diversity and equal treatment” (Norway); “The Department will take deliberate steps to minimize unintended bias in AI capabilities” (US DoD); “Unfair bias must be avoided, as it could have multiple negative implications, from the marginalization of vulnerable groups, to the exacerbation of prejudice and discrimination (…)” (EU); “do not create or act according to bias, thus safeguarding fairness and human dignity” (the Vatican).

As a reminder, bias can be mitigated by ensuring that the input data is representative of reality without reproducing reality’s existing prejudices.

Data and Privacy

“AI systems should respect and uphold privacy rights and data protection, and ensure the security of data” (Australia); “AI must take privacy and data protection into account” (Norway); “besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimised access to data” (EU); “AI systems must work securely and respect the privacy of users” (the Vatican).

Oh, China and the US aren’t on that list? Cool, cool, cool… just a coincidence, I’m sure. I’m sure it’s also a coincidence that 3 completely different organisations came up with principles that are VERY similarly phrased.

Safety / Security / Reliability

“Accuracy, reliability, security, and robustness principles” (UK); “AI systems should reliably operate in accordance with their intended purpose” (Australia); “AI-based systems must be safe and technically robust” (Norway); “The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security (…)” and “The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences (…)” (US DoD); “AI systems need to be resilient and secure (…)” (EU); “AI systems must function in a robust, secure and safe way (…) and potential risks should be continually assessed and managed” (OECD); “AI systems must be able to work reliably” (the Vatican).

Easier said than done, when a simple sticker can make an algorithm hallucinate.
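
For the curious, here is roughly what such an attack looks like in code: a minimal sketch of the fast gradient sign method (FGSM), one of the simplest adversarial attacks, assuming PyTorch; `model`, `image` and `true_label` are placeholders. The sticker mentioned above is a physical-world cousin of this digital trick.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.03):
    """Return a copy of `image`, perturbed so the model is likely to
    misclassify it, while looking unchanged to a human eye."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Nudge every pixel by +/- epsilon in the direction that most
    # increases the model's loss, then clamp to a valid pixel range.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```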

Stakeholder inclusion / Societal good

“Stakeholder Impact Assessment Principle” (UK); “AI systems should benefit individuals, society and the environment” (Australia); “AI must benefit society and the environment” (Norway); “principle of human interests” (China); “AI systems should benefit all human beings, including future generations. It must hence be ensured that they are sustainable and environmentally friendly” (EU); “AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being” (OECD); “the needs of all human beings must be taken into consideration so that everyone can benefit (…)” (the Vatican).

Rights

“AI systems should respect human rights, diversity, and the autonomy of individuals” (Australia); “When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or output of the AI system” (Australia); “AI-based solutions must respect human autonomy and control” (Norway); “The ‘consistency of rights and responsibilities’ principle” (China); “AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights” (EU); “AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards (…) to ensure a fair and just society” (OECD).

5-point analysis 

  • Only the EU, Norway and Australia address all 7 principles; much can be inferred from what certain countries have omitted. This lack of consensus is also worrying because an entity torn between several international guidelines, its home country’s national policy, and recommendations from companies and nonprofits might end up doing nothing.

  • No list of principles ventures outside of these 7 points, and they rarely stray far from one another. This highlights a very real risk of groupthink (one that benefits the private sector). For example, nowhere is the right to self-determination mentioned, even though A.I could easily be used to nudge people one way or another (say, during an election).

  • Red lines are shamefully absent: no country has forbidden itself certain uses of A.I, and none of the principles are legally binding. FYI, strong regulation looks like this:

  • Technical definitions are entirely absent from the discussion, as are any relevant KPIs that could measure these principles. Who cares if some things are currently technically out of reach? Claiming so means misunderstanding the very definition of strategy (also, threaten to fine companies and they’ll find technical solutions pretty darn quickly).

  • The lack of actual ethical guidelines is not obvious at first, and neither is their necessity, until we ask: what happens if one principle goes against another? Are they ranked? Is there an order of importance? What happens if foregoing privacy rights benefits society? When we start dealing with multiple, often competing, objectives, or try to account for intangibles like “freedom” and “well-being”, a satisfactory mathematical solution doesn’t exist. This is where a clear ethical philosophy would be useful: if state consequentialism is prioritised (as is generally the case in China), we at least have a clue as to what will win out when principles collide (Asimov’s three laws of robotics were pretty great at this).

Lesson 3: Governments go a step further than companies in setting relevant principles. However, they still lack the courage of their convictions, as well as the technical know-how to make these principles enforceable.

Ethics is Easy, But Courage Isn’t

Now that we’ve established the basics of what ethics has to offer (not a whole lot at face value), and analysed various attempts by companies and governments alike, below are a few recommendations based not only on ethics, but also on courage with regard to the BIG issues (war, politics, autonomous cars, justice…). I mention courage because it is what is missing from the current A.I discourse. The principles below have probably been thought of before, but were likely dismissed because of what they entail (loss of competitiveness, strategic advantage, cool-guy points…). I risk nothing by bringing them up, because I wield no real power in this conversation; I might not hold the same discourse were I representing a people or a company.

Principle of Rationality 

Sartre famously wrote that “Hell is other people”. This is particularly true when it comes to A.I: not because people are forcing algorithms to be bad, but because our actions may create a world in which bad behaviours have been enshrined in algorithms, forcing each of us to adopt those behaviours or suffer the consequences (e.g. a woman removing gendered words from her CV) (you may need a primer on machine learning if this confuses you). Tocqueville referred to this as the Tyranny of the Majority: a decision “which bases its claim to rule upon numbers, not upon rightness or excellence”. Under the Principle of Rationality, key golden rules would be enforced within all A.I companies through public consultation and technical consulting, ensuring that even if people lose their minds, algorithms enshrining that madness are not built. May I recommend starting with this little-known piece of deontological history.
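
To illustrate the mechanism with a deliberately naive sketch (the CV example echoes a well-publicised recruiting-tool incident): a model trained on biased historical decisions ends up penalising a gendered word itself. The data, vocabulary and scoring scheme below are all invented.

```python
from collections import defaultdict

# Biased historical decisions (1 = hired, 0 = rejected). All invented.
historical = [
    ("captain of chess club", 1),
    ("captain of women's chess club", 0),  # prejudiced past decision
    ("debate team lead", 1),
    ("women's debate team lead", 0),       # prejudiced past decision
]

# Naive bag-of-words "model": each word's weight is the sum of the
# labels (+1/-1) of the CVs it appears in.
weights = defaultdict(float)
for text, label in historical:
    for word in text.split():
        weights[word] += 1.0 if label == 1 else -1.0

print(weights["women's"])  # -2.0: the word itself is now penalised,
# so rational applicants strip it from their CVs, and yesterday's
# prejudice quietly becomes tomorrow's required behaviour.
```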

Principle of Ranking

Let’s assume that the principle above is applied worldwide (ha!). How can companies deal with competing fundamental rights when creating an algorithm? For example, can we forego articles 9 and 12 to better enforce article 5? Can we produce an A.I that would scour communication channels to find potential criminal activity? These questions are the very reason we need an ethical stand, which would help develop a stable ranking of values, ethics and rights, wherein some stand above others. Take the infamous Trolley Problem, for example, and apply it to autonomous cars. Given the choice, should an autonomous car prioritise saving two pedestrians over a passenger? What if the passenger is a head of state? What if the pedestrians are criminals? Choosing one school of thought, as hard as that may be, would help create algorithms in line with our beliefs (team Deontology FTW).
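
What would such a stable ranking look like once encoded? Below is a minimal sketch, assuming an invented order of principles and toy scores. The point is only structural: a lexicographic comparison never trades away a higher principle for any amount of a lower one.

```python
# Higher-priority principles come first; ties fall through to the next.
# The principle names and candidate scores are illustrative assumptions.
PRINCIPLE_ORDER = ["human_life", "human_rights", "privacy", "efficiency"]

def score_vector(scores):
    """Turn per-principle scores into a tuple comparable in priority order."""
    return tuple(scores[p] for p in PRINCIPLE_ORDER)

candidates = {
    "swerve":   {"human_life": 2, "human_rights": 1, "privacy": 1, "efficiency": 0},
    "brake":    {"human_life": 2, "human_rights": 1, "privacy": 1, "efficiency": 1},
    "continue": {"human_life": 0, "human_rights": 1, "privacy": 1, "efficiency": 2},
}

# Python compares tuples element by element, so "human_life" always
# dominates "efficiency", no matter how large the efficiency gain.
best = max(candidates, key=lambda a: score_vector(candidates[a]))
print(best)  # brake
```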

Principle of Ambivalence

The above example is not random: the largest study on moral preferences ever conducted was started in 2014, inviting users all over the world to respond to variations of the “trolley problem”. The results, though expected, are clear: different cultures believe different things when it comes to ethics. Japan and China, for example, are less likely to harm the elderly. Poorer countries are more tolerant of law-benders. Individualistic countries generally prefer to spare more lives. Ethics is dynamic, but code is static. This is why no single algorithm should ever be created to make decisions for more than one population. The way I see it, at least three sets based on different worldviews should be made: West, East and South.

Put simply, if I get into a Chinese autonomous car, I’d like to be able to choose a Western standard in case of an accident. 
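
In software terms, that wish is just a swappable decision policy. A hypothetical sketch, with invented region labels and weights loosely inspired by the study’s three clusters:

```python
# Invented policy weights; a real system would derive these from
# public consultation, not from a blog post.
POLICIES = {
    "west":  {"spare_more_lives": 0.9, "protect_elderly": 0.5, "obey_law": 0.6},
    "east":  {"spare_more_lives": 0.7, "protect_elderly": 0.9, "obey_law": 0.8},
    "south": {"spare_more_lives": 0.8, "protect_elderly": 0.6, "obey_law": 0.4},
}

def choose_policy(region, user_override=None):
    """The passenger's explicit choice beats the regional default."""
    return POLICIES[user_override or region]

# A passenger in a Chinese car opting for a Western standard:
policy = choose_policy("east", user_override="west")
print(policy["protect_elderly"])  # 0.5
```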

Principle of Accountability 

This principle may appear blasphemous to many free-market proponents, raised as they are in countries where tobacco groups do not cause cancer, distilleries do not cause alcoholism, guns do not cause school shootings and drug companies do not cause overdoses. Silicon Valley has understood this, and its go-to excuse when its products cause harm (unemployment, bias, deaths…) is to claim that its technologies are value-neutral, and that it is powerless to influence how they are implemented. That’s just an easy way out. Algorithms behaving unexpectedly are now a fact of life, and just as car makers must now account for emissions and European companies must protect their customers’ data, tech executives (as opposed to scientists, whose very raison d’être is pushing boundaries, and so it should be) must closely track an algorithm’s behaviour as it changes over time and across contexts, and mitigate malicious behaviour when needed, lest they face a hefty fine or prison time.

Can’t handle it? Don’t green-light it. 

If your signature is at the bottom of the page, you are accountable to the law. 
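
What might that signature require in practice? One hedged sketch: bucket the model’s output scores, freeze the distribution the executive signed off on, and alert when live behaviour drifts away from it. The Population Stability Index (PSI) and its 0.2 alert threshold are common industry heuristics, assumed here purely for illustration.

```python
import math

def psi(baseline, current):
    """Population Stability Index between two lists of bucket
    proportions (each summing to 1). Higher means more drift."""
    eps = 1e-6  # avoid log(0) on empty buckets
    return sum((c - b) * math.log((c + eps) / (b + eps))
               for b, c in zip(baseline, current))

baseline = [0.25, 0.25, 0.25, 0.25]  # score buckets at sign-off
current  = [0.10, 0.20, 0.30, 0.40]  # same buckets, months later

drift = psi(baseline, current)
if drift > 0.2:
    print(f"PSI={drift:.2f}: behaviour has drifted, review before it does harm")
```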

Principle of Net Positive

Is A.I really worth it? Currently, even the simplest algorithm is unethical by its very nature: mining, smelting, logistics, black box upon black box of trade secrets, data-center resources, modern slavery, mountains of e-waste in Ghana… none of this is sustainable, even though the UK, Australia and the EU all name the environment in their grand principles. Is it really worth it for minor pleasures and simplifications? Once, just once, it’d be good to have a bit of sanity in the discussion.

And by sanity I mean being able to see the whole damn supply chain, or your algorithm isn’t entering production.

Environmental issues cannot take the back seat any longer, even when discussing something as seemingly innocent as the digital world. 

Conclusion

In the face of a limited technology and a plethora of potential uses, the benefits of A.I clearly outweigh the risks. This is, however, no reason not to have a conversation about its implementation before the robots start doing the talking for us (yes, this is hyperbole; sue me).

Let me say it loudly for the people at the back: A.I is not something to be trusted or distrusted. It is merely a man-made tool which is “fed” data in order to automate certain tasks at scale. Do you trust your washing machine? Your calculator? (Yeah, me neither. Math is black magic.) It is all too easy to assume the agency of something that has none. A.I cannot be good or evil. Humans are good or evil (and so often both at once). At the end of the day, A.I merely holds a dark mirror up to society, its triumphs and its inequalities. This, above all, is uncomfortable. It’s uncomfortable because we keep finding out that we’re the a-holes.

A.I Ethics does not exist.

Let me say it loudly for the people at the back: Algorithms serve very specific purposes. They cannot stray from those purposes. What matters is whether or not a company decides that a given purpose is worthy of being automated within a black box. As such, the question of A.I ethics should be rephrased as “do we trust (insert company name here)’s managers to have our best interests at heart?” and, if yes, “do we trust the company’s programmers to implement that vision flawlessly while taking potential data flaws into account?” That’s trickier, isn’t it? But more realistic.

A.I Ethics does not exist.

Let me say it loudly for the people at the back: The vague checklists and principles, the powerless ethics officers and the toothless advisory boards are there to save face, avoid change, and evade liability. If you come away with one lesson from this article, let it be this:

A.I Ethics does not exist.

 

This article was originally written for Honeypot.io, Europe’s developer-focused job platform.
