Fundamental Concepts in AI Ethics

1. AI Consciousness

1.1. Definition

1.1.1. AI consciousness is a hotly debated topic within cognitive robotics, and it tends to be discussed in terms of whether AI could be held morally responsible if consciousness were achieved. Here, consciousness is believed to differ from 'sentience', the ability to feel or perceive things, which is a function that AI can arguably already perform by virtue of processing and learning from its own data outputs. Conscious AI or 'strong AI', by contrast, would involve self-awareness by the machine. However, there is not even a shared understanding of how consciousness arises in humans, as demonstrated by the 'hard problem of consciousness'. If we don't understand how consciousness arises in a carbon substrate, it is quite difficult to determine whether consciousness could arise in silicon. Our inability to understand human consciousness will likely limit our ability to determine the existence, or absence, of consciousness in AI.

1.2. Relevance in AI Ethics

1.2.1. AI ethics focuses on the connection between AI consciousness and moral responsibility. As automation and 'black box' algorithms increase in prevalence, establishing moral responsibility becomes more pressing. Acknowledging 'strong AI' would help settle doubts about responsibility, but it would also present a whole host of new challenges in accountability and justice. For example, if an AI can be deemed a conscious and thus morally responsible agent, how would it be held accountable for any crimes it commits? Would we require an AI bill of rights? Hence, AI consciousness is intrinsically connected with AI ethics, and may become a hot topic in the future.

1.3. Example

1.3.1. The classic Trolley Problem can be used to demonstrate the importance of AI consciousness. Here, a self-driving car is faced with an impending crash as well as a moral dilemma: whether to save the passenger in the car or a mother and her child. Whatever the outcome, the vehicle has made a decision. Can the vehicle's AI therefore be blamed as the entity responsible for either the passenger's or the mother and child's death?

1.4. Helpful Links

1.4.1. https://www.frontiersin.org/articles/10.3389/fpsyg.2019.01535/full

1.4.2. https://www.washingtonpost.com/local/trafficandcommuting/when-driverless-cars-crash-who-gets-the-blame-and-pays-the-damages/2017/02/25/3909d946-f97a-11e6-9845-576c69081518_story.html

1.4.3. https://link.springer.com/article/10.1007/s10677-016-9745-2

1.4.4. The Problem of AI Consciousness

1.5. Common Contexts

1.5.1. #aiethics #trolleyproblem #autonomouscars #consciousness #morality #responsibility #accountability #justice

2. Fairness

2.1. Definition

2.1.1. Algorithmic fairness is the principle that the outputs of an AI system should be uncorrelated with particular sensitive characteristics, such as gender, race, or sexuality. There are many ways a model can be considered fair. Common approaches to AI fairness include: equal false positive rates across sensitive characteristics, equal false negative rates across sensitive characteristics, or minimizing the 'worst group error', i.e. the number of mistakes the algorithm makes on the least represented group. While it is possible for an AI to be considered fair across each sensitive characteristic independently, the AI may still be unfair from an intersectional perspective (discriminating against those at the intersection of multiple sensitive characteristics). A common argument against manipulating models to satisfy fairness constraints is the loss of accuracy that may result.
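
A minimal sketch of how such criteria can be checked in code; the predictions, true labels, and group memberships below are invented toy values, not output from any real system:

```python
import numpy as np

# Toy data: binary predictions and labels for two groups, A and B.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1, 0, 0])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

def rates(y_true, y_pred):
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    fpr = fp / max(np.sum(y_true == 0), 1)  # false positive rate
    fnr = fn / max(np.sum(y_true == 1), 1)  # false negative rate
    err = np.mean(y_pred != y_true)         # overall error for the group
    return fpr, fnr, err

per_group = {g: rates(y_true[group == g], y_pred[group == g]) for g in ["A", "B"]}
for g, (fpr, fnr, err) in per_group.items():
    print(f"group {g}: FPR={fpr:.2f} FNR={fnr:.2f} error={err:.2f}")

# "Worst group error": the highest error rate across the groups.
print("worst group error:", max(err for _, _, err in per_group.values()))
```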

2.2. Relevance in AI Ethics

2.2.1. AI ethics examines how social values such as fairness can be upheld in AI systems. The difficulty is that fairness criteria such as demographic parity and equal opportunity are mathematically difficult to satisfy with equal accuracy when that parity does not exist in the underlying data. If AI fairness in, say, loan applications entails achieving demographic parity between two groups of people, models might refuse loans to repaying applicants and instead give loans to defaulting applicants. One solution might be to distribute the mistakes evenly over the number of loans given, so that the two groups have the same false rejection rates. Some might consider this fair to the collective but not to the individual. Even in cases where AI models do protect sets of individual sensitive attributes, we can end up with a phenomenon known as fairness gerrymandering, where specific subgroups of the population are unfairly discriminated against. AI ethics will have to grapple with these conflicting notions of algorithmic fairness and their trade-offs when determining what constitutes an objective and just AI system. It will also need to account for the inequalities present in society, especially when these biases are compounded by AI systems.
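
The trade-off can be illustrated with a toy simulation; the repayment rates and the 65% parity target below are invented for the sketch, and no real lending data is involved:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy assumption: group A repays 80% of the time, group B 50%.
repay_a = rng.random(1000) < 0.8
repay_b = rng.random(1000) < 0.5

# A perfectly accurate lender approves exactly the repayers, so approval
# rates differ (~0.8 vs ~0.5), violating demographic parity.
print("accurate approval rates:", repay_a.mean(), repay_b.mean())

# Enforcing demographic parity at, say, a 65% approval rate in both groups
# forces mistakes: rejecting repayers in A and approving defaulters in B.
target = 0.65
wrongly_rejected_a = max(repay_a.mean() - target, 0)   # repayers turned away
wrongly_approved_b = max(target - repay_b.mean(), 0)   # defaulters approved
print(f"share of group A wrongly rejected: {wrongly_rejected_a:.2f}")
print(f"share of group B wrongly approved: {wrongly_approved_b:.2f}")
```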

2.3. Example

2.3.1. One example of an AI system that brings up questions of fairness is the Allegheny Family Screening Tool (AFST), which supports social workers in deciding whether to remove a child from their home for reasons of neglect or abuse. The goal of the AFST was to optimize accuracy and reduce incidences of false positives (children wrongly removed from a loving home in which no abuse or neglect has occurred). The AFST team found that their algorithm was biased against poor families. In fact, the team found that one quarter of the variables used to predict abuse and neglect were direct measures of poverty (e.g. whether the family relied on food stamps). As a result, families relying on food stamps were often rated as higher risk even though this metric is not directly correlated with child neglect or abuse. Thus, even in cases where AI developers do their best to design fair and impartial models, these models cannot be separated from the biases and injustices embedded in the data.

2.4. Helpful Links

2.4.1. Algorithmic Bias and Fairness: Crash Course AI #18

2.4.2. Michael Kearns: Algorithmic Fairness, Privacy, and Ethics in Machine Learning | AI Podcast

2.5. Common Contexts

2.5.1. #proxyindicators #injustice #humanrights #blackbox

3. Explainability

3.1. Definition

3.1.1. Explainability is the principle that humans should be able to interpret and understand how an AI system derived its output. The goal of explainability is for a human to be able to explain, in non-technical terms, how the AI's inputs led to the AI's outputs. The term can refer to either global explainability or local explainability. Global explainability implies that humans can understand the relationships the model has found between inputs and outputs as a whole. For example, global explainability would be required to communicate to humans whether an algorithm uses racial characteristics to determine recidivism rates. Local explainability, on the other hand, refers to humans understanding why the algorithm gave a particular output for a particular input, rather than explaining a general relationship that exists in the model.
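
A minimal sketch of the distinction, using a scikit-learn random forest on invented data: the feature importances give a rough global explanation, while a crude perturbation of a single input gives a local one. The perturbation approach here is a simplification of real local-explanation methods such as LIME or SHAP:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy data: 3 features, the label mostly depends on feature 0.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Global explainability: which features the model relies on overall.
print("global feature importances:", model.feature_importances_)

# Local explainability (crude perturbation): for one input, zero out each
# feature and see how the predicted probability moves.
x = X[:1].copy()
base = model.predict_proba(x)[0, 1]
for j in range(X.shape[1]):
    x_pert = x.copy()
    x_pert[0, j] = 0.0
    delta = base - model.predict_proba(x_pert)[0, 1]
    print(f"feature {j}: removing it shifts this prediction by {delta:+.3f}")
```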

3.2. Relevance in AI Ethics

3.2.1. Explainability is critical if we ever hope to understand AI's decision making process. Since AI can be trained using data that contains latent bias, the algorithm may optimize itself to perpetuate that bias. Once AI systems have been optimized to reproduce bias, the AI's decisions will entrench systemic discrimination, inequality and a lack of access to essential services.

3.3. Example

3.3.1. In August 2019, Apple and Goldman Sachs released their joint venture credit card. In November, a couple using the card realized that the husband was receiving 20x the credit limit of his wife, who had a better credit score. Goldman Sachs, the card's issuing bank, was then investigated by the New York State Department of Financial Services for discrimination based on gender. Although the company claimed that its algorithm does not make decisions based on age, gender, or sexual orientation, the inability to explain how the AI made its decision left the companies unable to show that they had protected against biased decision making.

3.4. Helpful Links

3.4.1. https://www.forbes.com/sites/cognitiveworld/2019/07/23/understanding-explainable-ai/#13a3b887c9ef

3.4.2. https://www.seattletimes.com/business/judge-ibm-owes-indiana-78m-for-failed-welfare-automation/

3.5. Common Contexts

3.5.1. #optimizedforracism #blackbox #injustice

4. Artificial Intelligence

4.1. Machine Learning

4.1.1. Deep Learning

4.1.1.1. Definition

4.1.1.1.1. Deep Learning (DL) is a Machine Learning (ML) technique that simulates how human beings learn. This technique uses what computer scientists call "neural networks" to make predictions. Similar to how neural networks work in the human brain, AI neural networks use multiple layers of processing units that communicate with one another and prioritize variables, which inform the computer's prediction. For example, when considering whether an image of a cat is in fact an image of a cat, the computer might prioritize the look of the eyes over the shape of the tail. This process requires extremely large datasets, is computationally intensive, and can lead to incredibly accurate outputs.
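
A minimal sketch of the layered structure described above, assuming nothing beyond NumPy; the four "image features" and the labels are invented toy data, and real deep learning systems use many more layers and parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "is it a cat?" data: 4 made-up input features, binary label.
X = rng.normal(size=(200, 4))
y = (X[:, 0] - X[:, 2] > 0).astype(float).reshape(-1, 1)

# Two layers of processing units ("neurons") with learnable weights.
W1, b1 = rng.normal(scale=0.5, size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)

def forward(X):
    h = np.maximum(X @ W1 + b1, 0)          # hidden layer (ReLU)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))    # output layer (sigmoid)
    return h, p

lr = 0.1
for _ in range(500):                         # gradient descent on cross-entropy
    h, p = forward(X)
    grad_out = (p - y) / len(X)
    grad_h = (grad_out @ W2.T) * (h > 0)     # backpropagate through ReLU
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(0)

_, p = forward(X)
print("training accuracy:", ((p > 0.5) == y).mean())
```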

4.1.1.2. Relevance in AI Ethics

4.1.1.2.1. Like humans' neural networks, computers' neural networks cannot be seen. However, unlike humans, computers cannot explain which variables they considered when making their decisions. This dilemma has been deemed the "explainability problem", and it points to the black box nature of these algorithms. Without insight into how the AI made its decision in the first place, it is difficult to challenge the decision or change the computer's mind, so to speak. Furthermore, it is hard to know whether the computer made its decision based on racist, homophobic, or sexist data and values.

4.1.1.3. Example

4.1.1.3.1. Deep Learning is often used in image recognition. For example, when teaching a Deep Learning algorithm to recognize a cat, one might "feed" an algorithm many pictures of a cat and other animals that look similar. Through trial and error, the Deep Learning algorithm would learn which features are most relevant to determining whether an image contains a cat. Those relevant features are then prioritized in the computer's neural network and heavily considered in the computer's subsequent decision making.

4.1.1.4. Helpful Links

4.1.1.4.1. https://www.youtube.com/watch?v=-SgkLEuhfbg&feature=emb_logo

4.1.1.4.2. Deep learning series 1: Intro to deep learning

4.1.1.4.3. Simple Image classification using deep learning — deep learning series 2

4.1.1.5. Common Contexts

4.1.1.5.1. #datarobustness #bigdata #neuralnetworks #blackbox

4.1.2. Definition

4.1.2.1. Machine Learning (ML) is a technique within the field of Artificial Intelligence (AI). Unlike AI, whose definition is more theoretical, ML provides a more technical explanation of how the computer makes its predictions. When someone says that their AI uses ML, it means that their machine is teaching itself to recognize patterns. Machines don't just teach themselves to recognize any old pattern; they teach themselves, using a very specific dataset, to recognize patterns within that data. Based on the data it observes, and using its own statistical tools, an ML system adapts its own algorithms to improve the accuracy of its pattern detection. The ML process allows computers to continue to learn from new input data and to continue to derive meaningful and relevant outputs. This process can be compared to how humans learn about horses. We have an initial dataset when it comes to horses, which may include seeing a couple of horses in the wild or seeing pictures online, and from this dataset we feel we are in a good position to determine whether future animals that have a tail and hooves are in fact horses. However, when we get data about horses that differs from our initial dataset (e.g. that a pony is also a horse), we will refine our belief about horses and, in the future, will be able to determine more accurately what is a horse without getting stumped by horses of a different size and weight.
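
A minimal sketch of this "refining beliefs with new data" idea, using scikit-learn's partial_fit to update a model incrementally; the "horse" features and all numbers are invented for the illustration:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Toy "horse or not" data: two made-up features (e.g. size, tail length).
X_horses = rng.normal(loc=[5.0, 4.0], size=(100, 2))
X_other  = rng.normal(loc=[1.0, 0.0], size=(100, 2))
X = np.vstack([X_horses, X_other])
y = np.array([1] * 100 + [0] * 100)

model = SGDClassifier(random_state=0)
model.partial_fit(X, y, classes=np.array([0, 1]))     # initial "beliefs"

# New data arrives: ponies (smaller, but still horses). The model refines
# its decision boundary instead of being retrained from scratch.
X_ponies = rng.normal(loc=[3.0, 4.0], size=(50, 2))
model.partial_fit(X_ponies, np.ones(50, dtype=int))

print("pony classified as horse?", model.predict([[3.0, 4.0]]))  # likely [1]
```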

4.1.3. Relevance in AI Ethics

4.1.3.1. The data collection that is necessary to "fuel" a Machine Learning model presents a host of ethical questions, which arise from how the data is obtained, how it is used to train the model, and how the model is deployed. Ethical questions include, but are by no means limited to: whether the data is collected with the consent of individuals; whether the data, either outright or by proxy, includes information about an individual being part of a minority group; whether the dataset is robust enough to support consistently accurate decisions; and whether the AI makes decisions that perpetuate bias, racism, etc.

4.1.4. Example

4.1.4.1. Facebook's Home Page uses Machine Learning to post content that it predicts will be of most interest to you. Facebook's Machine Learning makes this prediction based on the data it has collected about you, including the content you like and what you're tagged in. The ML model improves its predictive capacity over time as it observes which content you spend more time reading and which content you scroll right past.

4.1.5. Helpful Links

4.1.5.1. Machine Learning Tutorial for Beginners

4.1.6. Common Contexts

4.1.6.1. #prediction #algorithm #supervisedlearning #unsupervisedlearning

4.2. Definition

4.2.1. Artificial Intelligence (AI) is a term used to describe computer systems that perform tasks and functions once thought to be the exclusive domain of intellectual living beings (e.g. recognizing faces). AI is designed to optimize its chances of achieving a particular goal. The goals of computer systems can be quite similar to those of humans, including optimized learning, reasoning, and perception. Alternatively, computers can be designed to optimize for capabilities that exceed what is possible for humans, such as finding the variables that have the most influence on an outcome (e.g. AI might determine that height has the biggest influence on basketball performance). Although "Artificial Intelligence" has retained its definition over time, examples of the technology have changed as computers' ability to mimic human thought and behavior advances. For example, calculators were once considered an example of Artificial Intelligence. Over time, however, this function has been taken for granted as an inherent computer capability rather than evidence of artificial intelligence. Currently, more advanced technologies such as self-driving cars and machine translation are cited as examples of "Artificial Intelligence".

4.3. Relevance in AI Ethics

4.3.1. Artificial Intelligence has famously been described as a "prediction machine". The term references AI's ability to use large amounts of data and, based on the patterns it finds, make inferences about the likelihood of future events. Common ethical concerns arising from this process include: data rights, privacy, security, transparency, accountability, explainability, robustness, and fairness. On top of these concerns, which relate to the functioning of the technology, there are also ethical questions surrounding the trade-offs made when the technology is implemented. For example, the accuracy and productivity of AI has already given it a competitive advantage in the workplace. Therefore, if a company implements AI technology, it may be to the detriment of human employment, which has socio-economic consequences.

4.4. Example

4.4.1. Voice assistants, such as Siri and Alexa, are prime examples of Artificial Intelligence. This technology is capable of mimicking human behavior in terms of understanding language, "thinking" about a relevant response, and translating that response into speech. Although natural-language communication was once thought to be the exclusive domain of humans, computer programs have become capable of performing this function too. Thus, we call it "Artificial Intelligence".

4.5. Helpful Links

4.5.1. https://www.youtube.com/watch?v=nASDYRkbQIY&feature=emb_logo

4.6. Common Contexts

4.6.1. #AIethics #robotics #futureofwork #productivity #labormarket #databias #racistmachines

5. Neural Networks

5.1. Definition

5.1.1. Artificial neural networks (or simply 'neural networks') get their name from being structured somewhat similarly to the neurons in the human brain. Neural networks are used to perform Deep Learning. The name 'neural network' designates a Machine Learning model that has hundreds of thousands (sometimes even billions) of artificial "neurons" (which are actually perceptrons). Each perceptron receives an input in the form of a number, and has an assigned weight within the overall model. Perceptrons are usually arranged in numerous layers, and each is directly connected to the ones in the layer before it and after it. As the model is trained, the weight assigned to each perceptron can be adjusted to improve how accurately the model performs its task. There are different types of neural networks (e.g. convolutional or recurrent). The features of each neural network type render them more or less useful depending on the specific task or application.
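
A minimal sketch of the arithmetic a single perceptron performs, and of how a layer of them amounts to a matrix multiplication; every number here is invented for illustration:

```python
import numpy as np

# A single artificial "neuron": a weighted sum of numeric inputs plus a
# bias, passed through a nonlinearity. The weights are what training adjusts.
inputs  = np.array([0.5, -1.2, 3.0])
weights = np.array([0.8, 0.1, -0.4])
bias = 0.2

activation = np.dot(inputs, weights) + bias
output = max(0.0, activation)                 # ReLU nonlinearity
print(output)

# A "layer" is many such units evaluated at once: a matrix multiply.
layer_weights = np.array([[0.8, 0.1, -0.4],
                          [0.0, 0.5,  0.2]])  # 2 units, 3 inputs each
layer_out = np.maximum(layer_weights @ inputs, 0)
print(layer_out)
```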

5.2. Relevance in AI Ethics

5.2.1. There are two prominent ethical concerns surrounding neural networks. First, neural networks create very complex Machine Learning models. As a result, the models' outputs are often not explainable. This means that we humans don't fully understand, and cannot clearly explain, why a Machine Learning model that uses neural networks gave a particular answer or outcome. This becomes particularly problematic when a person wants to contest a decision made by an AI system that uses neural networks, for instance if they are arrested through a facial recognition match or denied a loan by their bank. Second, although neural networks have allowed for outstanding progress in computer science, this progress often brings greater risk. For example, neural networks have made it possible for computers to write articles that are perceived to be written by humans. Although this signifies progress in the realm of computer science, the same capability can spread misinformation online at unprecedented rates.

5.3. Example

5.3.1. Neural networks are often compared to neurons in the human brain. Our brains are constantly reacting to stimuli: information is transported to our neurons, which causes some neurons to fire and trigger other neurons in turn. The precise way our neurons fire dictates our responses to external stimuli. Similarly, the perceptrons activated in artificial neural networks fire in a pattern dictated by the "weights" the computer system has determined. It is through this process that the model derives its outputs.

5.4. Helpful Links

5.4.1. How Convolutional Neural Networks work

5.4.2. Neural networks

5.5. Common Contexts

5.5.1. #deeplearning #machinelearning #imagerecognition #explainability #transparency #blackbox

6. Proxy Variables

6.1. Definition

6.1.1. Proxy indicators are seemingly neutral data points about an individual which, in practice, reveal sensitive information about that individual. Proxy data does this by serving as a "proxy" for another variable: although explicit race or gender data, for example, is not collected, data points such as ZIP codes, grade point averages, or credit card purchase information can serve as proxies for race, gender, sex, sexual orientation, or religion.
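
A minimal sketch of the proxy effect on synthetic data: the model below is never shown the sensitive attribute, yet recovers it from the "ZIP code" column alone. The 90% association strength is an invented assumption:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic illustration: a "ZIP code" feature strongly associated with a
# sensitive attribute (as residential segregation can make it in practice).
sensitive = rng.integers(0, 2, size=1000)            # e.g. minority status
zip_code = np.where(rng.random(1000) < 0.9,          # 90% of the time, ZIP
                    sensitive, 1 - sensitive)        # "tracks" the attribute

# The sensitive attribute is never given to the model, yet it can be
# recovered from the proxy alone with high accuracy.
model = LogisticRegression().fit(zip_code.reshape(-1, 1), sensitive)
recovered = model.predict(zip_code.reshape(-1, 1))
print("sensitive attribute recovered from ZIP alone:",
      (recovered == sensitive).mean())               # roughly 0.90
```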

6.2. Relevance in AI Ethics

6.2.1. On the surface, AI systems that do not collect data about an individual's protected class status may be perceived as fair: if those data points aren't collected, the AI can't make racist or sexist decisions, right? Unfortunately, this is not the case. Other variables can disclose sensitive information by proxy and give rise to biased algorithms. Therefore, it is not sufficient for a system simply to lack data about an individual's sex, gender, racial identity, or sexual orientation; the system must also be shown to be free of proxies for those data points.

6.3. Example

6.3.1. Amazon deployed an AI-based hiring tool that sifted through candidates' applications and recommended applicants to Amazon's talent team. The model worked by prioritizing candidates whose applications were similar to those of people Amazon had hired in the past. After Amazon stopped feeding the algorithm information about candidates' gender, to prevent the AI from perpetuating bias against women, developers found that the AI began to favor candidates who described themselves using verbs commonly found on male engineers' resumes, such as "executed" and "captured". This case is evidence of discrimination by proxy variable.

6.4. Helpful Links

6.4.1. https://towardsdatascience.com/how-discrimination-occurs-in-data-analytics-and-machine-learning-proxy-variables-7c22ff20792

6.5. Common Contexts

6.5.1. #bias #discrimination #injustice #fairness

7. AI Justice

8. Ethics Washing

8.1. Definition

8.1.1. Ethics washing (also called ethics-shopping) is when organizations adopt vague ethical frameworks and/or internal policies that signal to policymakers, and/or the public, a commitment to responsible AI development. However, these frameworks and policies often do not entail any genuine obligation or accountability process. Ethics washing is therefore used to persuade regulators that the ethical obligations they might seek to impose are unnecessary, and thereby to avoid deeper public scrutiny. Ethical policies developed in this context (especially in the private sector) are less concerned with practical ethical frameworks than they are with the organization's own political goals.

8.2. Relevance in AI Ethics

8.2.1. Policymakers believe it is important for companies to strive for ethical AI development; however, it is very difficult to identify what an effective ethical AI framework looks like. The conversation in AI ethics revolves around whether voluntary ethical approaches are genuine and effective. Ethics washing is particularly evident among companies that advertise their policies and investment in ethical behavior yet fail to follow through on those policies or to create internal governance mechanisms that police offending behavior. Such companies are effectively using their ethics principles to distract the public and deflect scrutiny of their practices. Furthermore, since even genuine ethical commitments are difficult to entrench across a whole organization, those commitments can amount to ethics washing if the company's own standards are barely enforced.

8.3. Example

8.3.1. Google's DeepMind has been considered a leader in the ethical AI field and has even established its own Ethics & Society department to uphold its stated priority, which is ethics. However, DeepMind was involved in the illegal breach of 1.6 million people's health data in a project it undertook with the UK's National Health Service. DeepMind's co-founder Mustafa Suleyman wrote of the scandal in a blog post, stating, "We were almost exclusively focused on building tools that nurses and doctors wanted, and thought of our work as a technology for clinicians rather than something that needed to be accountable to and shaped by patients, the public and the NHS as a whole. We got that wrong, and we need to do better." Clearly, internal ethical frameworks are not sufficient to eliminate the need for external oversight and regulation.

8.4. Helpful Links

8.4.1. https://www.privacylab.at/wp-content/uploads/2018/07/Ben_Wagner_Ethics-as-an-Escape-from-Regulation_2018_BW9.pdf

8.5. Common Contexts

8.5.1. #easylaw #softlaw #self-regulation #education #legallybinding

9. Supervised Learning

9.1. Classification

9.1.1. Definition

9.1.1.1. Classification is one approach to machine learning. Classification teaches machine learning models to sort input data into designated categories, so the machine's output is a determination of which category the input data belongs to. In AI, classification is achieved using different algorithms (e.g. Decision Tree, Random Forest, Naive Bayes, and Support Vector Machine). There are four main types of classification tasks: i) Binary Classification (2 class types); ii) Multi-class Classification (more than 2 class types); iii) Multi-label Classification (2+ class types, and the model predicts multiple class types per input); iv) Imbalanced Classification (uneven distribution of items across class types).
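
A minimal multi-class sketch using one of the algorithms named above (a decision tree); the features, values, and fruit labels are invented toy data:

```python
from sklearn.tree import DecisionTreeClassifier

# Invented features per item: [weight_g, diameter_cm].
X = [[120, 7], [130, 7], [150, 8], [300, 12], [320, 12], [10, 2], [12, 2]]
y = ["apple", "apple", "apple", "grapefruit", "grapefruit", "grape", "grape"]

# The model learns which category each input belongs to.
model = DecisionTreeClassifier(random_state=0).fit(X, y)
print(model.predict([[310, 11], [11, 2]]))  # likely ['grapefruit' 'grape']
```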

9.1.2. Relevance in AI Ethics

9.1.2.1. The classification technique is vulnerable to adversarial attacks. These attacks have ethical implications insofar as they trick the model into performing poorly, which is easiest when the data put through the model lies on the margins between classification types. The resulting misclassifications have implications ranging from mild (spam mail not being detected) to severe (inappropriate content on YouTube being recommended to children).
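
A minimal sketch of the idea on a toy linear model: a point near the decision boundary can be pushed across it by a small, targeted perturbation. Real adversarial attacks on deep models are more sophisticated, but the principle is similar; all data here is synthetic:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy model: classify points by which side of a line they fall on.
X = rng.normal(size=(500, 2))
y = (X @ np.array([2.0, -1.0]) > 0).astype(int)
model = LogisticRegression().fit(X, y)

# A point near the decision boundary ("on the margins of classification").
x = np.array([[0.05, 0.05]])
print("before:", model.predict(x))

# A tiny, targeted nudge against the model's weight vector typically flips
# the label, even though the input has barely changed.
w = model.coef_[0]
x_adv = x - 0.2 * np.sign(w) * np.sign(model.decision_function(x))
print("after:", model.predict(x_adv))
```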

9.1.3. Example

9.1.3.1. Classification can be compared to the process we undertake when sorting our recycling. During the recycling process, we group our plastic, cardboard and glass recycling together and match that recycling to the appropriate bin. In this scenario, we are classifying our recycling items by putting them into the appropriate bin.

9.1.4. Helpful Links

9.1.4.1. 4 Types of Classification Tasks in Machine Learning

9.1.5. Common Contexts

9.1.5.1. #machinelearning #adversarialattacks

9.2. Regression

9.2.1. Definition

9.2.1.1. Regression analysis is an approach to machine learning that teaches a machine learning model to predict a value based on the relationships between the data points it is trained on. Once the machine understands how, for example, the size of my house affects my house's retail price, it can make quantitative predictions about the retail price of other homes based on their size. For this to work, the variables in the data must have some relationship with the outcome. Note also that regression predicts quantities: a regression analysis cannot tell you whether a picture contains a cat or a dog, because classifying a photo is not a quantitative prediction problem, but it can predict an individual's height given variables such as age, weight, and geography. The most common type of regression analysis is the 'linear regression'. Performing a linear regression analysis involves characterizing the relationship between variables as a line of best fit. You can imagine this line of best fit running through the points on a graph, with the slope and y-intercept chosen to best characterize the relationship between the incoming data points and the AI's prediction.
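
A minimal sketch of a line of best fit, assuming scikit-learn; the house sizes and prices are invented numbers:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data: house size (sqft) versus price (in thousands).
size_sqft = np.array([[750], [900], [1100], [1400], [1800], [2200]])
price_k   = np.array([150, 180, 215, 260, 330, 400])

# Fit the line of best fit: price ~ slope * size + intercept.
model = LinearRegression().fit(size_sqft, price_k)
print("slope:", model.coef_[0], "intercept:", model.intercept_)

# Quantitative prediction for an unseen house.
print("predicted price for 1600 sqft:", model.predict([[1600]])[0])
```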

9.2.2. Relevance in AI Ethics

9.2.2.1. There are three major AI Ethics problems associated with regression analyses: 1) biased input data, which overvalues variables with low levels of predictive value (e.g. when predicting the risk of individual drivers, the independent variable of zip codes is overvalued relative to an individual's driving record. As a result, interest rates on car insurance are higher for individuals living in minority neighborhoods regardless of their driving record); 2) poor inferences about variables merely because they exist in the dataset (e.g. using facial recognition technology to determine an individual's IQ); and, 3) regression algorithms that perpetuate discrimination (e.g. extending credit based on zip code, which correlates with race and minority status and can amount to redlining).

9.2.3. Example

9.2.3.1. Regression analyses are designed to solve quantitative problems. For example: given an individual's age and employment status, how much time will they spend on YouTube in one sitting?

9.2.4. Helpful Links

9.2.4.1. https://towardsdatascience.com/supervised-learning-basics-of-linear-regression-1cbab48d0eba#:~:text=Regression%20analysis%20is%20a%20subfield%20of%20supervised%20machine,that%20someone%20will%20spend%20watching%20a%20video.%202.

9.2.4.2. Introduction to Statistical Learning

9.2.5. Common Contexts

9.2.5.1. #lineofbestfit #quantitativepredictions #linearregression #bias #racism #discrimination #correlationisnotcausation

9.3. Definition

9.3.1. Supervised learning is a technique for training artificial intelligence systems such as machine learning models and neural networks. This approach relies on the software's pattern recognition skills. It works by teaching the algorithm which data is associated with which label. Labels are simply tags attached to particular data points that provide information about that data. The technique teaches algorithms to recognize data inputs and to "know" the corresponding data output, or label. For example, supervised learning is used to teach machine learning systems to recognize pictures of cats: the algorithm is given photos of cats with corresponding labels that say "cat". With enough training data, the computer is able to recognize future photos of cats and apply the "cat" label on its own. The algorithm can even assign a probability of having successfully labelled the data input it was given. To perform this function, supervised learning leverages approaches such as classification and regression.
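
A minimal sketch of supervised learning with labels, on invented "image feature" data; note the model can report both its chosen label and the probability it assigns to it:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical labeled data: each row of made-up image features is tagged
# "cat" or "not cat"; the label is the output the model learns to produce.
X_cats     = rng.normal(loc=[2.0, 2.0], size=(100, 2))
X_not_cats = rng.normal(loc=[-2.0, -2.0], size=(100, 2))
X = np.vstack([X_cats, X_not_cats])
labels = np.array(["cat"] * 100 + ["not cat"] * 100)

model = LogisticRegression().fit(X, labels)

# The trained model labels a new input on its own, and can also report
# the probability it assigns to each label.
x_new = [[1.8, 2.2]]
print(model.predict(x_new))        # likely ['cat']
print(model.predict_proba(x_new))  # probability per label, alphabetical order
```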

9.4. Relevance in AI Ethics

9.4.1. In order to assess the ethics of a particular algorithm, we must be able to understand how the algorithm derived its output. However, black box supervised learning models, such as complex trees and neural networks, lack interpretability and transparency. These black box models are increasingly common and are used to capture relationships between variables that are not linear. Transparency is important not only to validate that the model works but also to ensure that the model is not biased. For example, black box algorithms may determine consumer credit risk based on age, race, etc., and without the ability to determine how an algorithm made its decision, we cannot be assured that the algorithm's output is not racist, homophobic, sexist, etc.

9.5. Example

9.5.1. If you want a machine learning model to predict the time it will take you to get to work, you might teach the model using supervised learning. You would feed your algorithm information relevant to the length of your commute, such as weather, time of day, and chosen route, together with the label: the time it actually took you to get to work. Once trained, the algorithm can recognize the relationships between these data points and predict how long your commute will take based on new input data about the weather, time of day, and chosen route. The machine may also see connections in the labeled data that you had not realized; for example, it may be able to detect that one route takes longer than another only at a particular time and in particular weather conditions.

9.6. Helpful Links

9.6.1. Supervised Learning: Crash Course AI #2

9.6.2. https://arxiv.org/pdf/1612.08468.pdf#:~:text=black%20box%20supervised%20learning%20models%20%28e.g.%2C%20complex%20trees%2C,regard%20is%20their%20lack%20of%20interpretability%20or%20transparency.

9.6.3. 6 Types of Supervised Learning You Must Know About in 2020

9.7. Common Contexts

9.7.1. #teachingalgorithms #labeling #blackboxalgorithms #bias

10. Accountability

10.1. Definition

10.1.1. Ostensibly, AI takes the burden of decision making off of humans' shoulders and puts it directly into the metaphorical hands of technology. As a result, AI creators may deny responsibility for an AI's decisions and their consequences. Commonly, AI companies insist that the black box nature of their AI means they have no insight into its decision making processes and shouldn't be responsible for its decisions. However, in our current market, this logic isn't going to cut it. If no one can be held accountable for AI's decisions, then: A) what's to stop all sorts of private and public sector organizations from releasing harm-inducing AI? B) how can those whose rights have been violated by AI systems pursue justice? and C) how can we trust any of the AI being developed? Accountability is fundamental to our justice system and critical for trust in our economy. For these reasons, it is critical for AI companies to be held accountable for the decisions and consequences of the AI they have deployed onto the market. That being said, it is a fallacy that human judgment is not involved in an AI's decision making. Humans play a role in writing the algorithms, collecting the data, defining how a successful model performs, determining how the system will be used, and deploying the technology. Not to mention, it is humans, not algorithms, who are negatively affected by an AI's problematic decisions. Yet there is a gap in the current legal framework when it comes to determining who is responsible for an AI's decisions, and this lack of precedent has been abused by a wide variety of powerful actors.

10.2. Relevance in AI Ethics

10.2.1. Without accountability infrastructure, the culture of "moving fast and breaking things" pervades. Those involved in all stages of an AI's development then face perverse incentives that work to the detriment of social good. Consequently, each of the ethical AI issues we know today is due in large part to a lack of accountability.

10.3. Example

10.3.1. In March 2016, Microsoft released Tay, a chatbot designed to mimic the language of a 19-year-old American girl. The chatbot was designed to learn this language from its interactions with other users on Twitter. However, users realized that the chatbot learned from its interactions with them and decided to teach Tay inappropriate behavior. Although none of Tay's targets sued for defamation, some of Tay's statements could well have been considered defamatory in a court of law. Had such a lawsuit been filed, it is unclear who would have been held liable. Would it have been the company with a proprietary interest in the software? The developers who failed to program constraints into Tay's vocabulary? The people who fed Tay bad data? The current legal framework is not equipped to make this determination or to apportion accountability among the actors in this situation.

10.4. Helpful Links

10.4.1. Accountability

10.4.2. https://www.ic.gc.ca/eic/site/133.nsf/vwapj/3_Discussion_Paper_-_Accountability_in_AI_EN.pdf/$FILE/3_Discussion_Paper_-_Accountability_in_AI_EN.pdf

10.5. Common Contexts

10.5.1. #trust #abuseofpower #justice #transparency #AIlaw