The digital era, in which exponential technological innovation coincides with largely unbounded globalization, is perpetually in unprecedented territory. This poses a challenge for governing bodies: how do we govern without precedent? One method for identifying innovative solutions to complex problems is the use of Artificial Intelligence (AI). During the COVID-19 pandemic, political stakeholders raced to employ technological applications to prevent and mitigate the effects of COVID outbreaks within their jurisdictions. The global COVID pandemic is just one of many causes behind the continuous growth of AI in the public sector. This paper seeks to understand how and why AI is so popular as a problem-solving tool, and what ethical risks arise from rapid AI adoption. To generate actionable insights from such a multifaceted issue, we will focus on applications of AI with significant impact on human society, and the public sector’s obligation to AI Governance. AI Governance will be used throughout this paper to mean a methodology for holding AI systems accountable, ethical, and effective for their intended purpose and audience.
The need to prevent potentially biased AI is more pertinent than ever, as more and more of our everyday lives become intertwined with, and even influenced by, AI. AI is being utilized in the public sector all over the world, most prominently in automating menial but time-consuming tasks. However, utilizing AI to inform complex decisions is also growing in popularity. This advanced technology requires a high level of specialized knowledge to build, so AI is commonly built by private third parties and purchased as software. Despite common perception, AI cannot make decisions; it can only generate insights from patterns in a data set. Nonetheless, many AI models are utilized by the public and private sectors as if they had the capability to draw comprehensive conclusions. Because AI is so new, many of the individuals implementing and regulating this technology do not have the specialized knowledge necessary to effectively safeguard AI models against bias or unethical use. AI bias can often be attributed to the creators of the AI model or dataset. However, bias can also be introduced at each of the many steps of building, implementing, and regulating an AI model.
While there are specific policies and task forces working on AI initiatives in the U.S., the AI regulatory and risk field is significantly underdeveloped. Furthermore, bias is often introduced to AI through existing systemic bias in the U.S., such as unequal representation of minorities in STEM fields or datasets populated by biased policies. While harmful bias can never be completely eradicated from AI, more comprehensive policies regarding AI ethics could be a pivotal step toward U.S. governments utilizing AI for good.
In 2020, 14 countries’ governments and the EU joined together to create the Global Partnership on Artificial Intelligence (GPAI), an agreement that facilitates the development of responsible AI (Shearer et al., 2020). This highlights a larger trend of increased recognition of AI’s influence, and the resulting ethical duty of the public sector. From developing new treatments in healthcare to optimizing public transport routes, AI can greatly improve the delivery of public services and the efficiency of governing bodies. Governments utilized AI more than ever during the COVID-19 pandemic; researchers believe that an AI system spotted the COVID-19 outbreak in Wuhan, China before experts classified it as an outbreak, and AI later helped identify a rheumatoid arthritis drug as a possible treatment (Hanson et al., 2020). Because AI can efficiently detect patterns in large datasets, it has been particularly useful in the containment, treatment, response planning, and diagnosis of COVID-19. Governments have been leveraging AI in their COVID-19 response plans everywhere from tracking and forecasting viral spread (Dong et al., 2020; City of Los Angeles, 2021), shaping reopening plans (Gu, 2021), analyzing patient symptoms, conducting laboratory testing of treatments and vaccines (Lalmuanawma et al., 2020), investigating new epidemiological plans (Strategy of Things, 2020), contact tracing infected individuals, installing automated air ventilation and filtration systems (Hanson et al., 2020), and even detecting asymptomatic individuals through sensors or wastewater testing (Murray, 2020; Perez, 2020). Where there is a decision to be made, AI can help inform it quickly and efficiently.
“Decision making” AI models generate insights that humans can leverage to supplement their judgment. To ensure ethical AI decision making, humans must avoid overvaluing AI insights, maintain unbiased algorithms, and responsibly employ AI that promotes citizens’ wellbeing. Modern computer processing and AI are built upon the same fundamentals as early computing; at their core they are mathematical devices that use a series of 1s and 0s, or binary, to represent information. Computer “thinking” today relies on logic developed in the 1850s by English mathematician George Boole, in which Boolean logic functions are defined by specifying AND, OR, and NOT (Computer History Museum). Logical by design, computers require us to translate our thinking into computational logic in order to communicate. Logic-based “decision-making” applied to vast amounts of data leads many to believe AI insights are immune to things like bias or misinterpretation. This misconception has proven especially harmful as AI is increasingly adopted in everyday life. When we attempt to automate decision-making through AI, we should proceed with caution; AI findings should inform decisions, not make them.
The technology field is advancing so rapidly that regulation can be perpetually outdated. To effectively regulate a field, a strong understanding of the inner workings of the subject is essential. In the technology field, this results in a limited group of specialized individuals holding all regulatory responsibilities, or a larger group of ill-equipped individuals attempting to regulate the unknown. Often, the top experts in technology fields, who have the capability to accurately regulate technology, are employed in private organizations. While these individuals may maintain personal ethics, there is a strong likelihood that their professional interests also reflect those of a corporation. As AI technology becomes more ubiquitous, the government and public sector move to adopt AI either internally or through a third-party contract, and the question of who owns regulatory responsibility remains contested.
While the individuals building AI technology work to avoid encoding bias into AI, governments have a constitutional responsibility to promote and protect the best interests of the citizen. Thus, policy tools should be exercised alongside technological safeguards, to ensure that AI models that affect people’s lives are being built and applied responsibly. The following will explore AI through technological and socio-political lenses to understand how AI works, its potential risks and ethical considerations, and some fundamental ways in which government stakeholders can safeguard constituents from bias, inaccuracies, and unethical practices in AI.
Big data refers to the plethora of data created by humans and our devices, often collected by information technologies such as sensors, cell phone GPS signals, photographs, and RFID technology (Herschel & Miori, 2017). Many smart cities have implemented multi-modal sensors and surveillance systems, which can capture raw data from the environment discreetly. Once mass amounts of raw data are collected, researchers extract trends and generate predictions through Artificial Intelligence, or AI. AI works by processing large quantities of data through algorithms, which, simply put, are the set of instructions given to the computer on how exactly to handle the data. Intelligent iterative processing allows software to learn from the patterns in the data. An AI model is the output of these algorithms, and consists of a prediction (“Artificial Intelligence: What it is and Why it Matters,” n.d). The AI development lifecycle consists of planning and design, collection and processing of data, building, training, validating, and deploying the model, and then monitoring its operation (“OECD AI Principles”). AI is most notably used to inform decision making where there is uncertainty, a lack of precedent, or complexity that would obscure insights from humans. So, while a team of humans oversees the process, AI is often designed to mimic the neural networks of the human brain, also known as “deep learning,” where it can work semi-autonomously (Wang, 2003).
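To make this lifecycle concrete, the sketch below walks through the collection, training, validation, and deployment steps using the open-source scikit-learn toolkit. The file name, column names, and model choice are illustrative assumptions rather than a reference to any system discussed in this paper, and the features are assumed to already be numeric.

```python
# A minimal sketch of the AI development lifecycle described above, using
# scikit-learn. "service_requests.csv" and the "outcome" column are
# hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# 1. Collection and processing: load and lightly clean raw data.
data = pd.read_csv("service_requests.csv").dropna()

# 2. Building: separate input features from the outcome to predict.
X = data.drop(columns=["outcome"])
y = data["outcome"]

# 3. Training and validating: hold out data the model never sees.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))

# 4. Deploying and monitoring: the model now outputs predictions (insights),
#    which still require human review before any decision is made.
print(model.predict(X_val.head()))
```

Even in this toy pipeline, the model’s output is a prediction to be reviewed, not a decision.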
AI can save us time and money while supposedly providing certainty in the ambiguous. Breakthroughs in artificial intelligence technology have coincided with growth in computer processing, faster Internet, and a surplus of data from IoT devices (Coeckelbergh, 2020). The allure of supplementing decision-making with AI is that it appears to replace human faults like bias and irrationality with data and statistics. Indeed, forms of artificial intelligence have the potential to make life in the public sector safer and smarter (“Artificial Intelligence: What it is and Why it Matters,” n.d). According to a study conducted by Deloitte, using AI in everyday tasks could save U.S. Government employees between 96.7 million and 1.2 billion hours a year, resulting in savings of between $3.3 billion and $41.1 billion per year (Eggers et al., 2017). A decrease in menial work could lead to re-employing workers’ time and effort in more productive and rewarding pursuits, avoiding the displacement of human employees by AI. For example, Australian government agencies have widely incorporated AI-based chatbots to redirect and answer simple questions. This simultaneously offers citizens easier access to information and frees government personnel from answering repetitive questions; studies have found that these chatbots resolve around 81% of inquiries at first contact (Artificial intelligence in government, 2021). AI can be useful in existing processes of government including (but not limited to) facilitating elections, digitizing benefits or social services, tracking metrics such as public health and crime, predicting traffic patterns, identifying fraud, hiring and recruiting employees, and digitizing repetitive administrative tasks (Artificial intelligence in government, 2021). The potential to incorporate deep learning in the provision of government services is high, as government processes are often criticized as slow and ineffective and elicit low levels of citizen participation.
UK-based policy research organization Oxford Insights built the first AI readiness index, which assessed how prepared governments are for implementing AI innovation, based on 33 key metrics. These metrics can be sorted into four high-level clusters: economy and skills, public service reform, governance, and digital infrastructure. Oxford Insights found high levels of preparedness for AI integration in the U.S., largely attributed to private sector innovation in ‘Silicon Valley’ (Shearer et al., 2020). Other high-scoring countries included the UK, Finland, Germany, Sweden, Singapore, the Republic of Korea, Denmark, the Netherlands, and Norway. The U.S., Singapore, and the United Arab Emirates scored the highest in government adoption of AI. In 2020, there was a drastic shift towards AI utilization in global governments, as COVID-19 prompted shifts towards automation in research, virtual healthcare or education, and delivery of public services. Many government and non-government organizations utilized deep learning AI in detecting COVID-19 outbreaks, contact tracing, identifying asymptomatic cases, assessing trends in the viral spread, and testing vaccines (Zhou et al., 2020). The OECD (Organisation for Economic Co-operation and Development) Principles on Artificial Intelligence were developed by 50 members of governments, academia, business, international bodies, and civil society, and signed by 36 countries. These guidelines are intended to help governments and other actors implement trustworthy and effective AI systems, with a long-term goal of making communities safer, more effective, and more livable through trustworthy AI (“Forty-two countries adopt new OECD principles on artificial intelligence,” 2019). In the U.S., the AI boom is largely private, requiring governments or other private industries to build partnerships in order to utilize AI; in 2019 the United States was the world’s largest investor in privately owned AI companies (Arnold, 2020). However, AI is becoming embedded in everyday life all over the world. Singapore, known for pioneering AI innovation, enacted a S$500 million grant that facilitates AI innovation projects in logistics, transportation, manufacturing, retail, housing, healthcare, safety and security, research and policy plans, and education (Singapore, Info-communications Media Development Authority, 2020). The UAE, the third most prominent AI adopter according to Oxford Insights, has a comprehensive AI strategy with a core goal of boosting government performance. Key initiatives include training government officials in AI technology, shifting transportation to be 25% autonomous by 2030, fully integrating AI in medical and security fields, going paperless in government, replacing immigration officers in airports with an AI system, and launching comprehensive AI policy guidelines (Dubai World Trade Centre, 2020). Both public and private applications of AI have the potential to impact human lives in tangible ways, often without clear visibility into how or when they are operating. The public sector has the opportunity and obligation to intervene in AI, beyond funding innovation or implementing early adoption.
Concrete metrics of profit or efficiency can help paint a picture of an AI model’s impact. Some AI applications are easier to analyze than others due to the type and quantity of relevant metrics that can be collected. However, qualitative and quantitative analysis are equally important in measuring cause and effect, particularly when humans are impacted. It is a challenge to accurately assess the multifaceted impacts of AI in complex environments, or with many end users, particularly for qualitative analysis without universal metrics. How can we impose ethical safeguards on AI without universal agreement on what an ethical AI model looks and behaves like? Many Americans are wary of increasing automation because they fear infringements on their personal privacy. However, biased AI is an arguably more insidious risk that we face in the digital age. Bias can simply mean a deviation from an accurate statistical norm. In AI, bias is expected, and it would be simplistic to seek to eliminate it completely (Danks & London, 2017). AI becomes susceptible to bias through a few key aspects of how we build and interact with it. It is pertinent that we understand how AI can become biased so that we can begin to standardize assessment of the health and impact of AI models. Parameters for assessing AI health would then be central to ethical AI maintenance and regulation legislation.
I. Data Cleansing
Today, we are generating so much new data that 90% of the data in the world has been generated in the last two years (Herschel & Miori, 2017). The intense velocity of this data collection means that, by nature, big data is often incomplete, unstructured, originating from many unstandardized sources, and just generally messy. Aggregated big data provides us with a precise, time-sensitive capture of the world around us, enabling countless opportunities for analytics. However, big data sets being used in decision making are often incomplete, biased, or missing context, making them prone to misleading results (Herschel & Miori, 2017). In extremely large data sets, cleansing the data set of these errors is impossible to do manually, so the process must be automated to some extent. Cleansing includes merging separate databases, verifying information, reassembling structure, documenting, elementizing, and standardizing. This process must be done with care, as simple oversights may alter the very nature and meaning of data sets. Statistical outliers may be deemed erroneous but hold actual insight. Data points might lose their context or be incomplete. Due (in part) to this, raw data rarely captures the full story with true accuracy.
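As a rough illustration of these cleansing steps, the sketch below merges two hypothetical sources, standardizes units, imputes missing values, and flags (rather than deletes) outliers. Every file and column name is made up, and the specific choices shown are exactly the kind of judgment calls that can quietly alter a dataset’s meaning.

```python
# A minimal data-cleansing sketch in pandas, illustrating the steps named
# above (merging, standardizing, handling missing values, flagging outliers).
# File names and column names are hypothetical.
import pandas as pd

sensors = pd.read_csv("sensor_readings.csv")   # raw, unstructured source
registry = pd.read_csv("site_registry.csv")    # second, separate database

# Merge separate databases on a shared key.
df = sensors.merge(registry, on="site_id", how="left")

# Standardize units and text fields.
df["temp_c"] = (df["temp_f"] - 32) * 5 / 9
df["neighborhood"] = df["neighborhood"].str.strip().str.title()

# Missing values: dropping or imputing them changes the dataset's meaning,
# so the choice should be documented rather than automated blindly.
df["temp_c"] = df["temp_c"].fillna(df["temp_c"].median())

# Outliers: flag rather than delete, since apparent "errors" may hold insight.
z = (df["temp_c"] - df["temp_c"].mean()) / df["temp_c"].std()
df["possible_outlier"] = z.abs() > 3
```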
II. Framing Complex Problems
Oftentimes AI is utilized to routinize otherwise time-intensive but menial tasks. These AI applications carry a low risk that bias will creep into the model or significantly alter the output of the process. The risk of bias increases as we attempt to use AI to solve very complex problems, with many factors influencing the solutions. Translating a multifaceted problem-solving process into a computer simplifies complex, unpredictable issues into standardized, logic-based solutions. As Neera Jain, a Purdue professor who studies human–machine interaction, sees it, “Humans are good at seeing nuance in a situation that automation can’t” (Waddell, 2019). This highlights a knowledge gap in automated complex decision making that traditionally would rely on circumstantial consideration.
AI has recently been utilized in the criminal justice system to predict criminal recidivism and determine how long a sentence an individual should receive, a practice which has faced criticism for condensing a nebulous issue into a structured set of algorithms. Most notably, Northpointe, Inc.’s COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) AI model, used to predict recidivism and inform sentencing in multiple U.S. state courts, came under criticism after a 2016 investigation by ProPublica found that the software’s “risk scores” were unreliable and racially biased against Black individuals. This software and others like it utilize AI to perform risk assessments on offenders, following a draft by the American Law Institute for the Model Penal Code allowing the use of “actuarial risk assessment tools”. Many of these software packages include a questionnaire to be filled out by an individual, collecting information like sex, age, previous criminal activity, education level, etc., to predict recidivism (Warrier, 2020). Attempting to translate a complex sociological issue that affects real people into a set of data points and predictions ignores many outside factors and contextual evidence that would be considered in a traditional case-by-case process. Furthermore, these AI risk assessment processes have been shown to undergo little to no validity checking once a risk score is assigned. Nonetheless, in July 2016 the Wisconsin Supreme Court ruled that algorithmic risk scores like COMPAS may be considered by judges during sentencing, but with adequate warnings given on the model’s limitations (COMPAS Software, 2021).
It is difficult to create an AI model that assesses individuals uniformly while still being equally fair towards each individual. How the AI developers determine which factors are noteworthy in the algorithm will completely shift the outcome. Furthermore, missing or incomplete data, the “unknown unknowns,” make it difficult to predict downstream effects until the model is already deployed. Then, developers must retroactively identify the source of bias and determine how to get rid of it (Hao, 2019). There is no true set standard of what is fair or unbiased. Daniel Acuna explains this by noting that an algorithm may be biased for one person and not for another, and in some cases bias may even be positive. Andrew Selbst, a postdoctoral researcher at the Data & Society Research Institute, explains, “‘Fixing’ discrimination in AI systems is not something that can be solved easily. It’s an ongoing process, just like discrimination in any other aspect of society” (Hao, 2019). Simply put, we cannot expect to make a complex issue any less complex through the introduction of AI.
III. Human Bias
There is a common misperception that AI is inherently fairer than humans because a logical, data-driven algorithm is impartial to emotions, irrationality, or predispositions. While there is some truth to this notion, AI will, by default, encode the biases of its surroundings into its learning.
Google Machine Learning researchers found that their natural language processing model programmed to detect negative language exhibited a skewed negative association towards female and African American names when it was trained on news articles (Packer et al., 2018). Our datasets often reflect and reinforce human biases and stereotypes; AI trained on skewed datasets will only amplify these biases. This issue is further magnified with a smaller dataset, or when the model is deployed in an environment with a more significantly biased historical record. Back in 1988, the UK Commission for Racial Equality found St George’s Hospital Medical School guilty of using a biased computer program to select candidates for admissions interviews. The program favored applicants who were male with European-sounding names, mimicking the admissions panel’s historical selections; the program had a 90-95% accuracy rate (Manyika et al., 2019). This instance serves as a glimpse into the future, rather than a cautionary tale, as the number of organizations utilizing AI in their hiring or application processes has increased exponentially (Bertrand & Mullainathan, 2003; Dastin, 2018).
Flawed data sampling, in which groups are over- or underrepresented, increases the likelihood of classification errors, another way AI bias can arise. Researchers at MIT’s Media Lab discovered that facial analysis technology misclassified women and people with darker skin tones at higher rates because the test datasets used were overwhelmingly composed of male, lighter-skinned subjects (Buolamwini & Gebru, 2018). Automatic Speech Recognition (ASR) services like YouTube’s closed captioning, Amazon’s Alexa, Google’s Assistant, and Microsoft’s Cortana had increased error rates for non-white American speakers and speakers born outside of a NATO country, due to discrepancies in ASR service training data (Tatman & Kasten, 2017; DiChristofano et al., 2022). The National Institute of Standards and Technology (NIST) conducted an independent study on misclassifications caused by skewed samples and concluded that these findings were accurate. Two of the facial recognition AI models assessed in the NIST study were later used by law enforcement in Detroit, Michigan, to identify and arrest Robert Williams, a Black man, for a crime he did not commit (Hill, 2020). Williams was later released from jail, and his record was expunged, but he still suffered a traumatic arrest in front of his family and missed work.
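One way such sampling skew surfaces is in a disaggregated evaluation, where error rates are reported per demographic group instead of as a single overall figure. The sketch below illustrates the idea on entirely synthetic results; the group labels and values are placeholders, not data from the studies cited above.

```python
# A minimal sketch of a disaggregated evaluation: instead of reporting one
# overall error rate, compute error rates per demographic subgroup so that
# sampling skew becomes visible. All values are hypothetical.
import pandas as pd

results = pd.DataFrame({
    "group":   ["lighter_male", "lighter_female", "darker_male", "darker_female"] * 25,
    "correct": [True, True, True, False] * 25,   # stand-in for real predictions
})

error_rates = (
    results.assign(error=lambda d: ~d["correct"])
           .groupby("group")["error"]
           .mean()
           .sort_values(ascending=False)
)
print(error_rates)  # a large gap between groups signals a skewed training set
```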
Behind each step of the AI system lifecycle lies a team of people, each with their own set of biases. Research in human psychology reveals that every person has some degree of innate cognitive bias; while it is possible to notice and interrupt our own implicit biases, it is an ongoing, disruptive process that requires attention and care (Steinhauser, 2020). This risk is compounded in an industry that is predominantly male, white, heterosexual, and English-speaking (Rangarajan, 2018). Homogeneous teams of developers, dogfooders, and auditors are likely to share unconscious blind spots that may cause them to miss AI that is consuming and amplifying bias. It is impossible to create AI fully impartial to human subjectivity; there is human interaction, and therefore potential for human bias, at each step of the AI development lifecycle.
IV. Private Innovation of AI Technology
AI innovation today is driven by the private sector. As of 2019, privately owned technology companies disclosed $40 billion in investments, with 64% of these investments in U.S. companies. There is a unique opportunity for the U.S. to leverage private sector AI innovation for security in the global economy and advanced domestic innovation. However, privately owned U.S. technology companies also have the unique ability to influence national and international public interests through AI applications in varying industries. Potential consequences of this influence have led the public sector to consider measures like sanctions, export policies, and security protocols for the protection of the U.S. technology industry, the American public, and national security (Arnold et al., 2020). Beyond shifting power dynamics, this ethical dilemma can also enable bias in individual AI applications.
The motivation for purchasing and interacting with an AI model may differ from the motivation for developing and regulating it. A private company is incentivized to measure AI health by its accuracy and ability to deliver on the client’s requests, while assessment of its social impact is an additional consideration. Many of the largest technology companies developing AI have value-based guidelines for ethical conduct, but they are often not binding and require voluntary compliance (Medeiros, 2020). In the field of AI, both the AI developers and the regulatory individuals work within the private organization. Thus, there are no objective regulatory processes with a commitment to public welfare to correct for the risk of conflicting standards for ethical AI (Ghazavi & Wolff, 2020). Ideally, internal AI engineers and regulatory individuals at private companies have intrinsic motivations for maintaining ethical AI. However, individual pressure or even retribution for critical regulation of an AI model creates a power imbalance that hinders objective maintenance of AI ethics. The most notorious instance of this occurred in late 2020, when a leading ethical AI researcher at Google was fired after writing a critical review of Google’s AI technology. Timnit Gebru, one of the most high-profile Black women in the field of ethical AI and a powerful voice on bias and fairness in AI, suspects that she was unfairly let go after criticizing the unequal representation of races in Google’s language processing AI (Tiku, 2020). Many in Silicon Valley see this controversy as underpinning a larger trend of lax internal self-policing that can enable biased AI.
The fact that AI is largely developed by private corporations also affects its applicability and portability for widespread use. When the same few private companies implement their AI software in many different places, the biases will reappear uniformly, having a greater effect than if AI algorithms were created specifically for each area. Andrew Selbst, at the Data & Society Research Institute, calls this the “portability trap.” Technology companies build AI intended to be applied to different tasks in different contexts, but it is difficult to effectively implement a system designed on a small scale in different large-scale settings without missing some social context. “You can’t have a system designed in Utah and then applied in Kentucky directly because different communities have different versions of fairness. Or you can’t have a system that you apply for ‘fair’ criminal justice results then applied to employment. How we think about fairness in those contexts is just totally different,” says Selbst (Hao, 2019). For example, the criminal recidivism AI models mentioned previously are built once at a technology company to mitigate a widespread systemic issue but fail to account for the unique circumstances of the many cities in which they are implemented. Static 99, an AI model built to predict recidivism in sex offenders, was trained in Canada, where only 3% of the population is Black, but applied in the United States, where 12% of the population is Black (Heaven, 2020). Differences in socioeconomic conditions between even just cities and states can create algorithms that are effective and unbiased in one place, but ineffective and biased in another. This is why Wolff claims that AI has the ability to “automate inequality”: AI models are largely produced by a few companies and then purchased for use by third parties, who don’t necessarily understand the full extent of the inner workings of the algorithms.
V. The Black Box Model
AI developers are responsible for the processes of the product they create, and third-party stakeholders are responsible for the consequences of the AI models they adopt. However, developers of AI often don’t know themselves which data points and algorithms their AI models are pulling from to make assessments. As AI grows more advanced, particularly as machine learning deep neural networks become more autonomous and complex, the inner workings of AI become an impenetrable “black box.” The “black box model” simply refers to an AI model which is so complex that its inner workings are opaque to the human eye. Deep learning neural nets build a hierarchy of interconnected layers, modeled after the brain’s neurological makeup. The digital “neurons,” or nodes, receive raw inputs, assess inputs mathematically, and pass the outputs on to the next layer of nodes. These models can have hundreds of layers, which together produce a prediction about the nature of the input. If the prediction is incorrect, the model will take this into account and self-adapt to come to a correct conclusion (Bleicher, 2017). This is how AI has been able to beat humans at games of chess and Jeopardy! or respond in human-like diction through devices like Apple’s Siri. This complex process requires the computer to self-learn, so the development team only builds an initial structure of the algorithm and feeds the AI test data to refine its learning. As the AI model becomes more and more opaque, the nature of the algorithm grows more significant. As Daniel Acuna sees it, we will eventually pass a threshold where the processes of AI are no longer understandable to the team building it. In certain circumstances, like online language translation, Acuna believes that this has already occurred. Understanding the inner workings of AI-based online translators is less important because if they fail, the outcome is not particularly detrimental. However, as we move to more consequential decisions, we must be mindful that we may not know what the algorithm is utilizing to feed insights, or how to regulate this process (D. Acuna, personal communication, September 9, 2020). Cathy O’Neil (2017), an accomplished data scientist and mathematician, explains that “if people being evaluated are kept in the dark, the thinking goes, they’ll be less likely to attempt to game the system. Instead, they’ll simply have to work hard, follow the rules, and pray that the model registers and appreciates their efforts. But if the details are hidden, it’s also harder to question the score or to protest against it” (p. 8). Thus, black box AI makes it much more difficult to target sources of bias and regulate against them.
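The sketch below gives a feel for this opacity using a small multi-layer network from scikit-learn on synthetic data: the fitted model produces predictions readily, but its learned parameters are just stacks of numeric arrays that say nothing interpretable about any individual assessment. The architecture and data are arbitrary assumptions chosen for illustration.

```python
# A minimal sketch of the layered, self-adjusting structure described above.
# The point is not the task but the opacity: the fitted weights do not
# explain individual predictions.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))              # 500 synthetic cases, 10 features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)

net = MLPClassifier(hidden_layer_sizes=(64, 64, 32), max_iter=2000, random_state=0)
net.fit(X, y)                               # the model self-adjusts across layers

print(net.predict(X[:5]))                   # predictions are easy to obtain...
print([w.shape for w in net.coefs_])        # ...but the learned weights are opaque
```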
In 2007, Washington D.C.’s new mayor Adrian Fenty implemented a plan to improve underperforming public schools by utilizing AI, highlighting a national trend of school districts using data science to optimize the education system. The D.C. school system implemented a teacher assessment tool called IMPACT, based on a Princeton-based Mathematica research team’s model for evaluating teachers’ success through their students’ educational progress. The algorithms were extremely complex to account for students’ varying backgrounds and learning styles. At the end of the 2009-2010 school year the lowest performing 2% of teachers were fired, and the next year the bottom 5% were fired, resulting in the firing of hundreds of teachers overall. Teachers this AI software identified as underperforming were automatically fired without any insight into what led to that conclusion, due to the opaque nature of the black box model. Teachers later guessed that standardized scores were weighed heavily in the evaluation model. The Washington Post and USA Today estimated that as many as 70% of teachers may have corrected answers on standardized tests (O’Neil, 2017, p. 10). This AI model was later found to be biased against teachers in lower-funded schools, as it did not account for external socioeconomic factors contributing to a student’s standardized test score. However, because the AI model was so complex, no one could identify this bias, and well-liked teachers were blindsided by the AI’s inaccurate assessment of them. In the end, those assessed by this black box model still attempted to game the system, as the biased model no longer aligned with a true measure of a good teacher, but instead valued unknown variables that they had to identify and maximize. Thus, the black box model introduces risk on two fronts: bias is undetectable, and those affected by the model are incentivized to game a system whose rules are unknown.
VI. The Machine Heuristic and Toxic Feedback Loops
People often display higher levels of trust in the abilities of computers and AI than in a human counterpart. Researchers at the Penn State Institute for CyberScience (ICS) coined this phenomenon the “machine heuristic,” after discovering that people are more likely to give their credit card information to a website than to a human travel agent (Hu & Shyam Sundar, 2010). In 2016, a pharmacist in Canada gave a patient the incorrect medication because a new automated prescription dispensing software made a mistake. Despite noticing the mistake, she trusted the AI insight and gave the patient the incorrect medication, resulting in heart complications for the patient (Canada, ISMP, 2016). This highlights a larger cultural shift in the perception of computers as more ethical, trustworthy, and accurate than humans. As previously discussed, AI has inaccuracies and biases that can be difficult to spot. However, the machine heuristic suggests that even when we are aware of AI inaccuracies, we feel compelled to trust the AI over ourselves or others. The effects of AI bias are exacerbated when end-users treat AI-automated insights as conclusive truth. As Ryan Kennedy, a University of Houston professor who researches automation and trust, explains it, “When people have to make decisions in relatively short timeframes, with little information—this is when people will tend to just trust whatever the algorithm gives them” (Kennedy et al., 2018). As the applications for AI continue to expand, so does our reliance on their accuracy; trust in AI is a precursor to the widespread adoption of AI. However, expanded adoption of AI also increases the complexity of development and implementation, in order to account for broader datasets, environments, end-users, and goals. Thus, there is a corresponding relationship between AI adoption and reliance, trust, and potential for bias.
Blind trust in AI may cause real harm when we act on an inaccurate insight. However, increased trust can also encode bias into the algorithm, with compounding effects over time. Machine learning, a complex form of AI, is structured as a self-adjusting loop so it can continuously correct itself as it learns from input. End-users who treat a skewed or biased insight as accurate encourage the model to confirm that bias and generate similar outputs. This process can create a self-fulfilling prophecy in many real-world applications. In her book Weapons of Math Destruction, Cathy O’Neil explains that these destructive AI models “define their own reality and use it to justify their results… [They are] self-perpetuating, highly destructive, and very common” (2017, p. 7). These so-called “toxic feedback loops” occur when an AI model creates an outcome that affirms its predictions, rather than predicting independent outcomes.
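A toy simulation can make the mechanics of such a loop visible. In the hypothetical sketch below, a “risk” prediction determines where new data is collected, and the new records then confirm the prediction; the numbers are invented and the setup is deliberately simplistic.

```python
# A toy simulation of a "toxic feedback loop": a model's output shapes where
# new data is collected, and retraining on that data confirms the model's
# original skew. Entirely synthetic numbers.
import numpy as np

rng = np.random.default_rng(1)
true_rate = np.array([0.05, 0.05])        # two districts, identical true rates
recorded = np.array([30.0, 20.0])         # district 0 starts slightly over-recorded

for step in range(5):
    share = recorded / recorded.sum()     # model: predicted "risk" from records
    patrols = (1000 * share).astype(int)  # resources follow the prediction
    # New records depend on where attention went, not only on the true rate.
    new_events = rng.binomial(patrols, true_rate)
    recorded += new_events
    print(f"step {step}: predicted shares {share.round(2)}")

# District 0 keeps "looking" riskier in every round, even though the two
# districts are identical by construction: the initial skew is never corrected
# because the model's own output decides where evidence is gathered.
```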
If an AI model used for menial tasks or in predictable environments makes a mistake, the outcome can often be easily seen and adjusted, with the potential loss being a decrease in profits or wasted time. However, when an AI model that directly affects people is biased, there are so many situational variables that it is difficult to determine the extent of the impact. For example, law enforcement agencies in the U.S. have been utilizing AI to predict where future crimes will occur. Startups including PredPol, HunchLab, and CompStat predict crime hot spots by monitoring geospatial crime data, incorporating this into patterns, and predicting where crimes will occur next. Local and state police departments purchase this software and utilize AI to determine where to allocate resources and personnel (O’Neil, 2017, p. 85). These programs operate under the public-policy “broken-windows policing” theory, which purports that low-level crimes and misdemeanors create an atmosphere of criminal activity and could indicate more serious crimes in the vicinity. However, many petty crimes are only logged because police happened to be there, scouting these often low-income neighborhoods deemed crime-heavy, and creating a self-fulfilling prophecy for petty criminals (Babuta & Oswald, 2019). Predictive policing AI parallels the “stop and frisk” policy implemented by Mayor Bloomberg in New York City. An overwhelming number of these stop and frisk encounters were found to disproportionately target African American and Latino men (around 85%). The NYCLU sued the Bloomberg administration, citing that this racist policy disproportionately incarcerated BIPOC, yet still today a Black person is five times more likely than a white person to be stopped without just cause, according to U.S. Department of Justice records (Bureau of Justice Statistics, 2020). This example highlights the long-term effect of associative policing, but it also shows how historical data can be tainted by systemic racism. Associations of Black men or BIPOC with crime populate algorithms, thus perpetuating the marginalization of minorities, even though the AI model does not log individuals’ race. This algorithm also disproportionately criminalizes poverty, as tracking petty crimes is prioritized over larger crimes such as rape, assault, and burglary, which inadvertently targets lower-income communities. Implementing this AI technology in a real-world setting perpetuates a toxic feedback loop of the systemic criminalization of poverty and skin color, as it is never possible to tell whether a future crime was truly prevented through this technology. In the case of criminal recidivism AI models, studies have even shown that the longer a person spends in prison, the greater the chance that they will fail to find employment upon release or assimilate into society, and the more likely they will be to require public assistance or commit another crime (Mueller-Smith & Schnepel, 2021). According to these models, non-white, low-income prisoners are more likely to commit more crimes and should receive higher sentences. However, these individuals are also more heavily policed and punished by circumstance within a system that is statistically likely to only increase their recidivism, creating a self-fulfilling prophecy between the model’s prediction and outcome.
When applied to real people’s lives, there are no controlled factors by which to confirm that a misaligned AI model is the source of observed effects. If a model incorrectly judges a person as “high risk” for criminal recidivism and they are sentenced to a longer period of time in prison, they are more likely to return to prison due to the longer sentence they just received (Teich & Research, 2018). Thus, the model claims this decision as a correct prediction, when in reality it played a hand in creating the result. This is one example of how a toxic feedback cycle works (O’Neil, 2017, p. 25). Therefore, the risk of bias increases as AI grows in real-world application. Human lives are difficult to predict through a model, and it is nearly impossible to discern whether decisions were correct once relinquished into real applications. Often, AI models are tested by splitting a dataset in half, using one half to train and the other to test. Thus, many AI models are built and evaluated using a single dataset, and it is difficult to predict how they will perform when deployed in the real world (Hao, 2019).
VII. Inaccurate Assessments of Intersectional Identities
AI systems tasked with identifying and assessing people often rely on mathematical generalizations and stereotypes, making them unable to accurately capture the unique intersectionality of individuals. Former NSA chief General Michael Hayden claimed, “We kill people based on metadata” (Castells, 2009). Since 2008, the National Security Agency has supported “precision” drone attacks and anonymously tracked perceived terrorists based on their cellphone and satellite data. An individual’s metadata (data points from one’s online activity) can be compared against a preexisting pattern of what a criminal or terrorist says or does (Cheney-Lippold, 2017, p. 39). These characteristics could label a person a terrorist based on what their metadata makes them out to be. In life-or-death applications like this, it is of utmost importance that algorithms working to identify people are correct.
University of Michigan Professor John Cheney-Lippold believes that the role of AI is to capture the essence of a population, but it lacks the ability to capture human intersectionality. Cheney-Lippold (2017) coined the term “measurable types,” as “a data template, a nexus of different datafied elements that construct a new, transcoded interpretation of the world. These templates are most often used to assign users an identity, an AI identification that compares streams of new data to existing datafied model” (p. 49). Thus, in the example of the terrorist model, data makes one appear as if they are a terrorist, building a separate, digitized identity distinct from our real, living identity. Cheney-Lippold (2017) explains the process of identifying us through our measurable types via the metaphor of finding a needle in a haystack: the NSA aggregates a set of datafied elements, then constructs an image of a terrorist through data. Thus, the NSA is not looking for a terrorist, or a needle, but a measurable type of a terrorist, a datafied representation of a needle (p. 44). The problem with this is that the human experience is more complex than can be reduced to a set of data points. All of the data points that capture who we are and what we do only create an abstraction of self and fit us to a measurable type that fails to accurately represent the complex, contradictory nature of humans. And yet, our lives are being assessed by algorithms that treat us as if we were the product of our inaccurate, datafied abstract selves. Science and technology scholar Sheila Jasanoff (2004) describes this process as “the simplifying moves that are needed to convert the messy realities of people’s personal attributes and behaviors into the objective, tractable language of numbers” (p. 27). This simplification creates an opportunity for bias.
While most people will never be suspected of being terrorists based on their internet history, inaccurate, data-based assessments of individuals can still affect people in everyday ways. Many university admissions offices and corporate hiring departments process applications and personality quizzes with AI to filter out “good” and “bad” applicants (O’Neil, 2017, p. 114). In the public sector, policy is often influenced by datafied assessments of the population. For example, the U.S. government’s public health response to the 2009 flu epidemic was largely based on Google Flu Trends, an AI software that predicted how the flu outbreak would unfold in the U.S., and which was later found to be inaccurate (Cheney-Lippold, 2017, p. 123). In Australia, the Victoria State Government’s “syndromic surveillance” program tracked patients in hospitals to identify and create policy for six new public health concerns, while in Quebec, economic policy is fine-tuned for each local subregion by differences in citizens’ data (Patel et al., 2020). During the COVID-19 pandemic, datafied assessments of the public were used to determine where and when to enforce stricter quarantining or social distancing measures, and how to allocate stimulus funding (Greenman, 2020). While these policies aim to further the public good, standardized data points can be devoid of context, individuality, and the option for unique self-identification, and yet they are used as substitutes for “us”. The philosopher Antoinette Rouvroy (2013) writes, “this ‘algorithmic governmentality’ simply ignores the embodied individuals it affects and has as its sole ‘subject’ a ‘statistical body’ that is, a constantly evolving ‘data body’ or network of localisations in actuarial tables” (p. 11). Datafied assessments of self fail to evolve continuously the way humans naturally do, and thus perpetually shape inaccurate policies.
Eliminating all bias from an AI model is impossible, and even harmful in some cases. Researchers are working on different solutions for preventing destructive algorithmic bias while still allowing AI to perform autonomously and naturally. The following sections present a few solutions that have been proposed and utilized in the pursuit of more ethical, effective, and accurate algorithms, and how legislation can enable them.
I. Reduce Opacity
A major source of toxic feedback loops lies in the opacity of autonomous and semi-autonomous AI. Many AI models, like criminal recidivism models and teacher assessment models, are seen as permanent and final decision makers. People cannot see how these decisions were made and if they were biased or untrustworthy. One means of preventing bias is attempting to reduce opacity, and peer into the data and metrics informing decisions.
A widely proposed policy solution for reducing opacity is forcing AI models to be “explainable”, a measure most notably implemented in Europe’s General Data Protection Regulation (GDPR). Article 15(1)(h) and Recital 71 of the GDPR explain that individuals whose data is used in an AI model have the right to “meaningful information about the logic involved”. However, it is still unclear how to fully enforce explainable AI legally and ethically without compromising the full ability of the AI model. Researchers at Microsoft proposed a form of “classifier interrogation” which creates a simulation that labels data and explores the space of parameters that may cause bias (McDuff et al., 2019). In 2016, the LIME method, short for Local Interpretable Model-Agnostic Explanations, was introduced (Ribeiro et al., 2016). This tool, and other subsequent explainable AI tools, can be applied to any machine learning algorithm to show which features drove a given prediction. This serves to explain predictions, generating a level of trust in a decision.
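The core idea behind LIME-style explanation can be sketched in a few lines: perturb a single input, query the opaque model, and fit a small weighted linear model around that point so each feature’s local influence becomes visible. The code below is a simplified, from-scratch illustration of that idea on synthetic data, not the reference LIME implementation or the Microsoft method described above.

```python
# A simplified illustration of local, perturbation-based explanation
# (the idea behind LIME): explain one prediction of an opaque model by
# fitting a weighted linear surrogate around that single input.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 1] - X[:, 4] > 0).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(X, y)   # the opaque model

x0 = X[0]                                               # the instance to explain
samples = x0 + rng.normal(scale=0.5, size=(500, 5))     # local perturbations
preds = black_box.predict_proba(samples)[:, 1]          # query the black box
weights = np.exp(-np.linalg.norm(samples - x0, axis=1) ** 2)  # closer = heavier

local = Ridge(alpha=1.0).fit(samples, preds, sample_weight=weights)
for i, coef in enumerate(local.coef_):
    print(f"feature {i}: local influence {coef:+.2f}")
```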
AI researchers are seeking to apply cognitive psychology methods to the AI field to develop techniques for detecting AI bias. The field of psychophysics within cognitive psychology is particularly relevant to AI because it sets aside verbal and other cognitive responses and instead uses simple behavioral responses to uncover the inner workings of the mind. Offering a model a two-alternative forced choice task has been proposed as a way to measure biases, and the strength of those biases, in an AI model (Liang & Acuna, 2019). Lizhen Liang and Daniel Acuna are conducting research at Syracuse University, attempting to “psychoanalyze AI” by equating the black box model to the mysterious and complex human mind (Liang & Acuna, 2019). By utilizing experimental psychology methods such as signal detection theory, Liang and Acuna are working to uncover and extract biases from word embeddings and sentiment analysis predictions. This work serves to uncover bias in more complex algorithms, as they hypothesize that experimental psychology methods are highly applicable to examining AI decisions.
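As a loose illustration of how bias can be quantified inside a model’s learned representations, the sketch below computes a simple association gap between occupation words and gendered words using cosine similarity, in the spirit of common embedding-association tests. It is not Liang and Acuna’s method, and the toy vectors merely stand in for a trained embedding.

```python
# A small illustration of measuring association bias in word embeddings with
# cosine similarity. The 4-dimensional vectors are hypothetical stand-ins for
# a real trained embedding.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

emb = {
    "engineer": np.array([0.9, 0.1, 0.3, 0.0]),
    "nurse":    np.array([0.1, 0.9, 0.2, 0.1]),
    "he":       np.array([0.8, 0.2, 0.1, 0.0]),
    "she":      np.array([0.2, 0.8, 0.1, 0.0]),
}

for word in ("engineer", "nurse"):
    gap = cosine(emb[word], emb["he"]) - cosine(emb[word], emb["she"])
    print(f"{word}: association gap (he minus she) = {gap:+.2f}")
# A systematically positive or negative gap across many occupation words
# would indicate a gendered skew learned from the training text.
```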
Another emerging approach to understanding opaque AI models is to build realistic simulations in which to deploy the model, then work backwards to understand the AI’s process by controlling and isolating factors. For example, researchers at DeepMind have created Psychlab, a complex synthetic environment that can test AI models as if they were being implemented in a human world. This method utilizes psychophysics and cognitive psychology to test algorithms and help scientists ethically analyze AI by isolating controls and tasks. The Psychlab environment attempts to compare agent and human cognitive abilities, and Psychlab tasks can be versions of classical behavior paradigms to explore specific cognitive or perceptual faculties (Leibo et al., 2018).
Although new methods of peering into the AI black box are emerging, this is still a fairly new and experimental line of work pursued by a few AI experts, so it is difficult to utilize this research in creating policy proposals just yet. The current international trend in AI policy that the U.S. Government could reasonably adopt is forced explainability. Explainability measures are a natural tool for preventing bias; however, they can significantly reduce the accuracy of AI, as they trade complexity for understandability. Explainability also requires limiting the size and independent autonomy of the datasets and AI model, which may hinder the adoption of AI models at any large scale (Shokri et al., 2020). In the meantime, governments can build upon current AI policy initiatives to promote innovation in AI testing by specifically allocating educational grants, mandating minimum AI testing requirements, designating a portion of the current budget towards research in AI testing technologies, and working to leverage partnerships with corporations and universities to encourage more newcomers to this vein of AI research (Future of Life Institute, 2020).
II. Decrease Moral Value Misalignment in AI Algorithms
According to AI philosophers, there is an existential risk in creating technology as a substitute for human decision making, when human decision making supposedly considers values like morality and equality. Vafa Ghazavi, a political theory researcher under the supervision of Jonathan Wolff at Oxford University, claims that “AI poses a distinctive moral challenge,” in that it makes decisions but does not have innate moral values. According to Ghazavi’s Value Misalignment Theory, failure arises when we inadvertently create technology with values that don’t align with our own. Often, AI is incentivized to make logical decisions with the information it has access to, and it may even have external incentives programmed within it, like producing a profit. Ghazavi argues that AI amplifies societal disadvantages that may not align with the wider ethical and moral values of the teams encoding them, or the people utilizing them. This effect is only amplified by a lack of accountability due to the black box model.
Some ethicists propose that AI algorithms should be encoded with values that align with ours, along with the ability to prioritize those values, if the model is to be utilized in a range of locations with varying socioeconomic settings (Ghazavi & Wolff, 2020). Others argue that it is impossible to accurately encode moral theories into an algorithm because morals are uniquely human and subjective. Computer scientist Stuart Russell proposes an alternate path to developing AI with morals: asking machines to defer to humans and ask for guidance at every possible turn (Wolchover, 2015). AI created with this kind of humility would learn human preferences only from direct observation and would seek only to maximize the realization of those preferences. Russell’s theory of “inverse reinforcement learning” would create a purely altruistic intelligence that has no objective but to replicate human preferences, and would default to doing nothing in cases of uncertainty. Thus, AI could be created with respect for value pluralism and human agency. This concept draws parallels to Europe’s GDPR explainable AI provisions, in that where we can’t program in an ethical philosophy, we can make it standard practice for AI developers to question how their technology will be used, and its potential impacts, throughout each stage of the development lifecycle. If this fails to bake in human morality, at least we will have multiple opportunities to intervene against bias and blind trust in AI.
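A minimal version of this “defer under uncertainty” behavior can be expressed as a confidence threshold: the model returns an insight only when its predicted probability is high, and otherwise hands the case to a human. The sketch below is an illustrative assumption about how such a default might be wired, not a formalization of Russell’s inverse reinforcement learning; the threshold, data, and model are all placeholders.

```python
# A minimal sketch of deferring to humans under uncertainty: the model only
# returns an answer when its confidence clears a threshold, and otherwise
# flags the case for human review. Threshold and data are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + rng.normal(scale=1.0, size=300) > 0).astype(int)
model = LogisticRegression().fit(X, y)

def predict_or_defer(model, x, threshold=0.85):
    proba = model.predict_proba(x.reshape(1, -1))[0]
    if proba.max() < threshold:
        return "defer_to_human"        # uncertain: take no automated action
    return int(proba.argmax())         # confident: return the insight

print([predict_or_defer(model, X[i]) for i in range(10)])
```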
While we may not have a definite consensus on the role of morals in AI, we do know that an AI model is more likely to accurately fit the society in which it is implemented if it does not succumb to the “portability trap.” While continuing to fund and support research in fields like human-defaulting and explainable AI, AI regulatory legislation can empower more localized, region-specific AI models to increase the likelihood of fairness. Potential regulatory requirements might include a case study analysis or impact assessment for AI that has considerable impact on the general public, or minimum requirements for how accurately training and testing datasets reflect an area’s demographics. Purchase of AI applications from a private company for public use (especially when the buyer is a government entity) may necessitate training sessions on the AI model and its development, or continued oversight to ensure a base level of understanding of potential implications and responsible deployment. AI regulation may require that technology companies clearly convey that their software provides “best-fit” insights, not decisions, and develop AI models that default to human decision if an insight is not supported by a strong level of confidence. Regulatory bodies could also empower communities to exercise a degree of independent autonomy over their own regulations and oversight. City or state-wide interest committees can draw members from different demographics and institutions to discuss the role of AI governance in their community, or appropriate funding for local technology development grants.
III. Reduce Dataset and Data Collection Bias
Bias in an algorithm often stems from biased data and is reinforced through training and testing with that same dataset. The most direct way to prevent bias in an algorithm is to ensure that the data and data collection methods themselves are unbiased.
Dataset bias researchers Antonio Torralba and Alexei Efros have theorized that programmers invite bias into object recognition AI models when they train and test the algorithms on the same dataset (Torralba & Efros, 2011). Torralba and Efros have found that there is a natural performance drop that occurs when AI is implemented in real-world settings after testing and training. Thus, having varied datasets that accurately encapsulate the world around us is necessary for building unbiased and accurate AI. The widely used MS-CELEB-1M data set disproportionately gathers data from people from North America and Western Europe (UK and Germany), and over 75% of the people included are men (Guo et al., 2016). Overrepresentation of Caucasian men is not an anomaly among datasets; many other commonly used datasets are similarly stratified (Rothe et al., 2015). This discrepancy is most evident in datasets and AI models utilizing facial recognition technology (Buolamwini & Gebru, 2018).
Often, datasets are collected through random sampling, and resampling can help make for fairer models (Li & Vasconcelos, 2019). After building a dataset, data scientists might analyze classes and calculate whether groups match their proportions in the larger society. Data scientists also often cleanse data by calculating averages and eliminating missing or extreme values. This process must be done with the intent to represent general populations as accurately as possible. Thus, resampling may need to occur with reweighting of some demographics over others, and with an understanding of the implications of missing, underrepresented, or overrepresented values. Because it is often unclear whether a dataset is biased, researchers at Google have proposed a method of reweighting data in a dataset through labeling, to attempt to create an unbiased model from a biased dataset (Jiang & Nachum, 2019). This framework scores data points on a series of metrics to determine how close a classifier is to full fairness, and then assigns the data different weights when it is filtered through an algorithm.
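The general reweighting idea can be illustrated in a few lines: compare each group’s share of the sample to an assumed share of the wider population and weight records accordingly. This is a deliberately simplified sketch, not the label-based method proposed by the Google researchers, and the population shares are made-up figures.

```python
# A minimal sketch of reweighting a skewed dataset so that each demographic
# group contributes in proportion to its (assumed) share of the population.
import pandas as pd

df = pd.DataFrame({"group": ["A"] * 800 + ["B"] * 200})    # skewed sample
population_share = {"A": 0.6, "B": 0.4}                     # assumed true shares

sample_share = df["group"].value_counts(normalize=True)
df["weight"] = df["group"].map(lambda g: population_share[g] / sample_share[g])

print(df.groupby("group")["weight"].first())
# These weights can be passed to most training routines (e.g. as a
# sample_weight argument) so that group B is not drowned out by group A.
```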
Research teams collecting data also have a responsibility to adhere to ethical guidelines for data collection. For academic research, data collection must abide by principles of research ethics, which mandate that collecting data and performing research must minimize the risk of harm, obtain informed consent, protect anonymity and confidentiality, avoid deception, and provide the right to withdraw (Lund Research Ltd., 2012). In professional data science, ethical guidelines are far less clearly codified. The U.S. Federal Policy for the Protection of Human Subjects, or the “Common Rule,” outlines provisions of informed consent and other ethics in data collection (U.S. Department of Health & Human Services, Office for Human Research Protections, 2018). However, it doesn’t address the ethics of data collection in all settings, so many researchers feel that these guidelines don’t apply to them (Baron, 2019). It is generally up to the researchers’ discretion to ensure ethical data collection and maintenance. “Informed consent” also poses an ethical gray area, as internet services can be withheld if individuals do not agree to relinquish the right to their data privacy, often without fully understanding what they are agreeing to. Standards for ethical data collection in academic, government, individual, and private research need to be more consistent and explicit to reduce potential bias or discrimination.
To promote more accessible datasets, U.S. policy initiatives can facilitate open access and sharing of datasets among government, academia, and private institutions, while continuing to outline individual privacy rights and protections that safeguard the public. Currently, the Federal Advisory Committee on the Development and Implementation of Artificial Intelligence is working on dataset and data-sharing legislation. In recent legislation, House Resolution 153 outlined individual data protection guidelines, and the Algorithmic Accountability Act directs the Federal Trade Commission to conduct impact assessments on any entity that uses personal data in AI (Future of Life Institute, 2020). In July 2022, Congress passed the CHIPS and Science Act, which established a National Secure Data Service (NSDS) to facilitate accessible, high-quality government data (Yabsley, 2022). Looking towards the future, more concise policies on data collection and dataset access for academia and private institutions could help develop AI technology with less risk of dataset bias. More open datasets and data-sharing channels throughout the U.S. could also make AI models more portable, as they could be retrained and retested on accessible datasets that accurately represent specific communities' demographics. However, the public sector could make the greatest impact on AI portability (and AI bias as a whole) through big data. Increasing the velocity of data we capture can help create more timely, accurate, and specific standardized datasets. We have seen the power of big data for good in bringing AI to public transport, public health, environmental protection, and public safety (Singh et al., 2020; Government of Singapore, 2021; Hanson et al., 2020). However, many Americans distrust data collection; a survey by Pew Research Center found that 81% of Americans believe the potential risks of data collection by private companies outweigh the benefits, and 66% of Americans believe the same about government data collection (Auxier et al., 2019). Implementing wide-scale big data collection brings countless threats, ethical dilemmas, and hurdles: big data stores are targets for cybersecurity attacks, and individuals may lose personal autonomy and privacy by relinquishing control of their data. Still, surveys have also found that Americans overwhelmingly prefer the experience of data-driven customization (Morgan, 2020). These findings suggest that the apprehension lies in the privacy and ethical risks surrounding data collection. Before making any strides in data collection and sharing, AI governance should focus on solidifying cybersecurity networks, data privacy laws, and data collection and ownership laws. Only then can we look towards data collection as a means of reducing bias and inaccuracies in AI and research, and of making societies more intelligent and human-centric.
Steps must also be taken to ensure that the individuals creating and working with AI technologies accurately represent the populations in which their models will be implemented. Namely, these teams should take measures to ensure accurate representation of women, people of color, disabled people, and otherwise underrepresented individuals in positions of influence. Making STEM fields more inclusive is easier said than done: according to the National Science Board, the STEM workforce in the U.S. is 89% white and 72% male, while the overall national workforce is 78% white and 53% male (National Science Board, 2018). Many low-income schools with higher proportions of students of color have less STEM funding and fewer resources, discouraging or preventing students from pursuing higher education in STEM fields. Additionally, students who are racial minorities, low-income, LGBTQ+, disabled, or otherwise underrepresented face outside hardships that can be further barriers to entering the specialized technology field. According to a 2019 study, women and people of color published as many academic papers as white males when they felt "accepted by their faculty and peers, had clear departmental expectations and felt prepared for their graduate courses," indicating that STEM can become more inclusive through workplace culture and greater representation (Langin, 2019). Another study found that many minorities faced discrimination while pursuing a STEM education, leading many to doubt their abilities (Grossman & Porche, 2014). Moreover, a diverse professional team is more likely to outperform a homogeneous team, even when the homogeneous team has "greater relative ability," according to a study published in the Proceedings of the National Academy of Sciences (Hong & Page, 2004). Proposed policy measures to increase diversity in STEM fields include implicit bias training in schools and workplaces, additional grants for STEM education in underfunded public schools, extra educational support and resources for underrepresented individuals, and scholarships for minority students pursuing STEM higher education (Rollins, 2020). The American Association of Colleges and Universities has proposed an "Inclusive Excellence" model, but many argue that schools need to do more to provide resources for students who may be at a disadvantage, starting in early elementary and secondary education (Killpack & Melón, 2016). STEM fields have higher barriers to entry, so privilege stratifies students more dramatically. Policy initiatives that encourage diversity in STEM fields could therefore lead to more effective and less biased AI models, as a team with more varied backgrounds is less likely to share uniform cognitive biases.
IV. Increase Technology Literacy within Government Agencies
For our government agencies to adopt more innovative technologies and create effective AI policy that protects individual rights, increased technical literacy is necessary. The United States Congress is responsible for creating legislation on ethical policies and guidelines for AI use, but it does not have the high-level technical knowledge of AI to effectively fulfill this duty. Accounts of U.S. politicians using flip phones, avoiding email, or writing letters on an IBM Selectric II can seem inconsequential (Gedye, 2019), but reliance on outdated technology poses serious consequences for those tasked with legislating on innovative technology. When experts on quantum computing testified before Congress on the impacts of this technology, Illinois Representative Adam Kinzinger joked, "I can understand about 50 percent of the things you say" (Potential Applications of Quantum Computing, 2018). Google CEO Sundar Pichai testified before Congress on Google's data and privacy ethics, but important discussion was delayed as members of Congress stumbled over basic fundamentals of technology (Gedye, 2019). The governance of ethical issues in technology is largely left to lawmakers without the specialized knowledge to make the most informed decisions. The Office of Technology Assessment was intended to help Congress legislate on technical issues, but it was terminated in 1995, at the start of the digital era (West, 2021). Other government agencies, like the Government Accountability Office (GAO), are growing their specialized technology expertise teams (U.S. Government Accountability Office, 2019). However, foundational technology literacy should be a baseline requirement for anyone legislating on technology issues. Back in 2012, Rep. Bill Foster (D-Illinois), formerly a particle physicist, noted that only about 4 percent of federal lawmakers have technical backgrounds (Bloudoff-Indelicato, 2012). This gap has not appeared to trouble Congress; representatives from both parties have used a lack of technical knowledge as a personal shield when voting against expert opinion or declining to take a stand on contentious issues (Zetter, 2016). Currently, the primary occupational backgrounds for members of Congress are law, politics, and business, and the average age in the 117th Congress is 58.4 years for Representatives and 64.3 years for Senators (Carey & Stalbaum, 2022). While 96% of Congress is college-educated, a study conducted by the U.S. Department of Education found that older age is one of the largest indicators of digital illiteracy (Mamedova & Pawlowski, 2018). Many members of our highest level of government are not equipped with the highly specialized knowledge it takes to create policy on technology ethics, implement technology in government, or notice AI bias in effect. This effect trickles down to smaller local and regional governments as well.
Currently, AI policy initiatives in the U.S. are spearheaded by the American AI Initiative, which relies on the National Institute of Standards and Technology (NIST) to create AI policy. However, AI policy requires input and agreement from all executive government agencies, which would necessitate that every government employee who interacts with AI have some background knowledge. The first step towards building concise technology policy is training current lawmakers in the basics of how technology works and how it could harm or improve their constituents' lives. The implementation of information and communication technologies (ICTs) in government is commonly called e-government. Studies have found that while ICTs can help innovate government processes, insufficient technical training is a large barrier to innovation (Chohan & Hu, 2020). The International Journal of Electronic Government Research has published work on assessing and improving technology literacy among government leadership (Streib & Navarro, 2008). Other researchers have built training programs for both government employees (Bose, 2004) and citizens (Chohan & Hu, 2020) to learn how to interact with technology, mitigate the digital divide, and build a more innovative government. Particularly in the wake of the COVID-19 pandemic, many cities have begun implementing information technologies to become "Smart Cities." The Smart Cities Council (2020), Strategy of Things (2020), and the Global Digital Health Network (2020) are just a few of the many independent smart city organizations that have built training frameworks for local governments to follow in implementing smart city technologies like contact tracing and broadband connectivity. Similarly, corporations like Microsoft (2020), Intel (2020), Deloitte (2020), and IBM (2020) have each issued public reports, technology resources, and guidelines for governments to follow in adopting new technology to mitigate the effects of COVID-19. Overall, increased training on technology can help government employees adopt innovative tools more easily and create more informed policy on AI and technology ethics. Federal or state governments could also build communication platforms where local governments share advice on how they utilized AI, complete with training guides.
The next step to enabling effective technology policy is attracting individuals with high-level knowledge of technology and AI to the public sector. As previously mentioned, most individuals with specialized AI and technology knowledge work for private companies. One way to increase ethical adoption of AI technologies is to incentivize AI and technology experts to work within the public sector, where they would be less likely to be influenced by private interests. Government agencies have increased their technology talent in recent years by offering security and reliability, in contrast to the highly competitive workplace environments commonplace at many technology companies (Metzger, 2019). Individuals are also drawn to the public sector when they see a larger impact of their work on people's lives. To attract more top technology talent, the public sector needs to offer mission-driven projects, competitive salaries, and a willingness to invest in innovative new technologies. According to Deloitte's 2015 Millennial Survey, 60% of Millennial respondents said they chose to work for their current employer partly because of the organization's "sense of purpose." Deloitte's 2022 survey, however, found that issues like compensation and workplace culture had become more important to Millennial and Gen Z respondents (Deloitte, 2022). People may be more willing to work in government if their government is investing in innovative technology and they are assigned to an engaging, fulfilling project. Mission-driven employees are more likely to stay at their organization for five or more years and report higher levels of trust and fulfillment (Craig, 2018). However, government agencies often lack the budget or resources to invest in the most innovative technology or to compensate technology experts with competitive benefits and salaries. While prioritizing government-led technology funding could help, a simpler solution is increasing public-private innovation partnerships. For example, the Defense Innovation Unit has reservists working out of SpaceX (Metzger, 2019), and consultants from firms like Deloitte work within the Government Accountability Office. Partnerships like these give AI projects the financial flexibility and personnel of private corporations while enabling government innovation. If more government agencies paired up with private companies, the public sector could draw on the strengths of the private sector without having to compete with it for talent.
V. Third Party AI Legislation and Regulation
The OECD AI Policy Observatory, which builds upon the OECD's Recommendation on Artificial Intelligence (the "OECD AI Principles"), reports that AI legislation in the U.S. is developed by a combination of governmental organizations (OECD.AI, 2021). These include the Office of Science and Technology Policy, the Department of Defense, the National Institutes of Health, and the Department of Commerce. The American Artificial Intelligence Initiative, launched by Executive Order, directs the National Science and Technology Council (NSTC) Select Committee on Artificial Intelligence to fulfill initiatives supporting AI growth in the U.S. (Exec. Order No. 13859, 2019). This committee consists of White House officials who conduct research and prepare proposals in collaboration with organizations within and outside of government (The White House, 2020). AI is such a diverse field that ethical guidelines and effective legislation are best shaped through the collaboration of many different stakeholders, including academic institutions and other third-party organizations. Ashkan Soltani, former chief technologist at the Federal Trade Commission, believes we need government collaboration with nonpartisan technical experts who do not represent lobbyists or private companies that stand to profit from lawmakers' decisions. Soltani argued that experts can "basically be an encyclopedia for how things work, and can really help policymakers get to a good outcome…We had that in the OTA [Office of Technology Assessment] and that went away, and I think that was a huge mistake" (Zetter, 2016). While some lawmakers do consult technical experts individually, forging an organizational relationship in which policy defaults to unbiased experts would enable more effective AI legislation.
COVID-19 response plans around the world and within the U.S. highlight the effectiveness of cross-sector collaboration. Globally, the governments most successful in mitigating the spread of COVID-19 often quickly created task-force-style response teams consisting of some blend of policymakers, public health officials, academics, technology experts, and other stakeholders (Park et al., 2020). Existing smart city organizations also led by example, problem-solving through partnerships on a smaller scale. The city of Tempe partnered with Arizona State University (ASU) to develop a specialized Wastewater Analytics Team to test wastewater for COVID-19 genome copies (Park et al., 2020; City of Tempe, 2020). Louisville, Kentucky mayor Greg Fischer leveraged relationships with local private companies, Microsoft, and educational institutions to create the COVID-19 Reskilling Initiative, which offers virtual e-learning courses encouraging citizens to use periods of self-isolation or unemployment to build employable skills, growing Louisville's technology workforce (Microsoft Future of Work Initiative, 2021). Governments facing a complex public health crisis best served their constituents when they called upon external partnerships. We can carry this notion forward to problem-solve other complex issues, such as AI governance, in times of relative calm.
Beyond legislation, ongoing regulation and audit in the AI field is significantly lacking. Jonathan Wolff, Blavatnik Chair in Public Policy at Oxford University, has come to this conclusion after researching the ethics of risk and regulation policy for nearly 20 years. Wolff has found that, in regulation and audit in particular, as the specialization of knowledge within a field increases, the number of people able to perform regulation sharply decreases. Regulation and audit are thus significantly underutilized in technology, where standards in the rapidly advancing field are less settled and barriers to entry are higher (Ghazavi & Wolff, 2020). The U.S. National Science and Technology Council (NSTC) Select Committee on Artificial Intelligence and organizations like the American National Standards Institute are working to create a national framework for AI regulatory standards (Future of Life Institute, 2020). The American Institute of Certified Public Accountants (AICPA), the standards setter for public accounting firms, and the Institute of Internal Auditors (IIA) currently have no professional guidelines for auditing AI technologies (Bone, 2020). External regulation of advancing technology is not standard practice, and yet AI is deployed on the assumption that technology companies themselves are conducting ethical regulatory processes. In other industries, audit and regulation are performed by specialized external experts. This has not fully carried over to technology, as very few individuals have specialized knowledge in both the most advanced technology and in audit. Wolff argues that positive and effective regulation can give people peace of mind and trust, but industry has a financial incentive to avoid strict regulation. The public sector can therefore enable effective AI ethics regulation by continuing to standardize industry-wide regulation guidelines and practices. External agencies could adopt and enforce this framework, which would help legitimize and develop the specialized field of technology regulation. In turn, third-party regulatory organizations could continuously advise specialized government committees on adapting the framework to anticipate technological advances. Even limited policy intervention in AI regulation could create a standard level of oversight of private companies, defined communication pathways between AI stakeholders, and an industry-standard definition of healthy AI.
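As one example of what even a lightweight external audit could check, the sketch below computes per-group selection rates and false-positive rates from a model's logged decisions and reports the gap in selection rates. The decision log, column names, and the idea of flagging the gap against an agreed tolerance are hypothetical illustrations, not part of any existing AICPA, IIA, or NIST standard.

```python
# Minimal sketch of a per-group fairness check an external auditor might run
# on a vendor's logged decisions (hypothetical data and column names).
import pandas as pd

def audit_rates(log: pd.DataFrame, group_col: str,
                pred_col: str, label_col: str) -> pd.DataFrame:
    """Per-group selection rate and false-positive rate."""
    def summarize(g: pd.DataFrame) -> pd.Series:
        negatives = g[g[label_col] == 0]
        return pd.Series({
            "selection_rate": g[pred_col].mean(),
            "false_positive_rate": (negatives[pred_col].mean()
                                    if len(negatives) else float("nan")),
            "n": len(g),
        })
    return log.groupby(group_col).apply(summarize)

# Hypothetical decision log exported by the AI system under audit.
log = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "pred":  [1, 1, 0, 1, 0, 0, 1, 0],
    "label": [1, 0, 0, 1, 0, 1, 1, 0],
})

report = audit_rates(log, "group", "pred", "label")
print(report)
# Flag the system if the selection-rate gap exceeds an agreed tolerance.
gap = report["selection_rate"].max() - report["selection_rate"].min()
print("Selection-rate gap between groups:", round(gap, 2))
```

A standardized audit framework would specify which metrics, tolerances, and reporting formats such checks must use, which is precisely what is missing today.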
AI bias is a complex topic. Bias stems from many different sources and is perpetuated by many aspects of AI. While recent years have brought extraordinary advances in the AI field, it is still a very new science. Many fear the unknown risks of implementing a new technology in society. While these fears are not unfounded, there is also incredible potential for innovation, particularly in the public sector. To best facilitate such a complex science as AI in the public sector, a combination of protocols, guidelines, and changes is necessary. This paper recommends a hybrid approach to making AI less biased so that it works better for everyone it affects.
References
Arnold, Z. (2020, September 29). What investment trends reveal about the global AI landscape. Retrieved March 09, 2021, from https://www.brookings.edu/techstream/what-investment-trends-reveal-about-the-global-ai-landscape/
Arnold, Z., Rahkovsky, I., & Huang, T. (2020). Tracking AI investment (Rep.). Retrieved October 21, 2022, from https://doi.org/10.51593/20190011
Artificial Intelligence: What it is and why it matters. (n.d.). Retrieved March 09, 2021, from https://www.sas.com/en_us/insights/analytics/what-is-artificial-intelligence.html
Auxier, B., Rainie, L., Anderson, M., Perrin, A., Kumar, M., & Turner, E. (2019, November 15). Americans and privacy: Concerned, confused and feeling lack of control over their personal information. Pew Research Center: Internet, Science and Tech. Retrieved October 21, 2022, from https://www.pewresearch.org/internet/2019/11/15/americans-and-privacy-concerned-confused-and-feeling-lack-of-control-over-their-personal-information/
Babuta, A., & Oswald, M. (2019). Data analytics and algorithmic bias in policing. UK Centre for Data Ethics and Innovation. https://www.gov.uk/government/publications/report-commissioned-by-cdei-calls-for-measures-to-address-bias-in-police-use-of-data-analytics
Baron, J. (2019, May 9). Researchers have few guidelines when it comes to using your data ethically. Forbes. Retrieved March 9, 2021, from https://www.forbes.com/sites/jessicabaron/2019/05/09/researchers-have-few-guidelines-when-it-comes-to-using-your-data-ethically/?sh=2299a4056fe8
Bertrand, M., & Mullainathan, S. (2003). Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination. American Economic Review, 94(991). https://doi.org/10.3386/w9873
Bleicher, A. (2017, August 9). Demystifying the black box that is AI. Scientific American. Retrieved March 9, 2021, from https://www.scientificamerican.com/article/demystifying-the-black-box-that-is-ai/
Bloudoff-Indelicato, M. (2012). Physicist elected to Congress calls for more scientists-statesmen. Nature. https://doi.org/10.1038/nature.2012.11839
Bone, J. (2020, July 27). Auditing artificial intelligence. Corporate Compliance Insights. Retrieved March 9, 2021, from https://www.corporatecomplianceinsights.com/auditing-artificial-intelligence/
Bose, R. (2004). Information technologies for education & training in e-government. 203–203. https://doi.org/10.1109/ITCC.2004.1286632
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Conference on Fairness, Accountability and Transparency, 77–91. http://proceedings.mlr.press/v81/buolamwini18a.html
Bureau of Justice Statistics. (2020, November 16). Estimated number of arrests by offense and race, 2019. OJJDP Statistical Briefing Book. https://www.ojjdp.gov/ojstatbb/crime/ucr.asp?table_in=2
ISMP Canada. (2016). Understanding human over-reliance on technology (5th ed., Vol. 16). Retrieved March 16, 2021, from https://www.ismp-canada.org/download/safetyBulletins/2016/ISMPCSB2016-05_technology.pdf
Carey, S., & Stalbaum, M. (2022, June 15). 117th United States congress: A survey of books written by members: Introduction. Library of Congress Research Guides. Retrieved October 21, 2022, from https://guides.loc.gov/117th-congress-book-list
Castells, M. (2009). Communication power. Oxford: Oxford University Press.
Cheney-Lippold, J. (2017). We are data: Algorithms and the making of our digital selves.
Chohan, S. R., & Hu, G. (2020). Strengthening digital inclusion through e-government: Cohesive ICT training programs to intensify digital competency. Information Technology for Development, 0(0), 1–23. https://doi.org/10.1080/02681102.2020.1841713
City of Los Angeles. (2021, April 26). COVID-19: Keeping Los Angeles safe. COVID-19: Keeping Los Angeles Safe. https://corona-virus.la/data
City of Tempe. (2020). Innovation in advancing community health and fighting Covid-19. Retrieved March 29, 2021, from https://covid19.tempe.gov/
Coeckelbergh, M. (2020). AI Ethics. The MIT Press.
Computer History Museum. (n.d.). How do digital computers "think"? Computer History Museum: Revolution: The First 2000 Years of Computing. Retrieved October 21, 2022, from https://www.computerhistory.org/revolution/digital-logic/12/269
Craig, W. (2018, May 29). The importance of having a mission-driven company. Retrieved March 10, 2021, from https://www.forbes.com/sites/williamcraig/2018/05/15/the-importance-of-having-a-mission-driven-company/?sh=43f7512a3a9c
Danks, D., & London, A. J. (2017). Algorithmic bias in autonomous systems. Proceedings of the 26th International Joint Conference on Artificial Intelligence, 4691–4697.
Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. Retrieved October 21, 2022, from https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
Deloitte. (2020). Connecting for a resilient world. Retrieved March 10, 2021, from https://www2.deloitte.com/global/en/pages/about-deloitte/articles/covid-19-insights-collection-by-topic.html
Deloitte. (2022). The Deloitte global 2022 gen Z and millennial survey. Retrieved October 21, 2022, from https://www2.deloitte.com/global/en/pages/about-deloitte/articles/genzmillennialsurvey.html?icid=wn_
DiChristofano, A., Shuster, H., Chandra, S., & Patwari, N. (2022). Performance disparities between accents in automatic speech recognition. ArXiv, abs/2208.01157.
Domingos, P. (2017). The master algorithm: How the quest for the ultimate learning machine will remake our world. London: Penguin Books.
Dong, E., Du, H., & Gardner, L. (2020). An interactive web-based dashboard to track COVID-19 in real time. The Lancet Infectious Diseases, 20(5), 533–534. https://doi.org/10.1016/S1473-3099(20)30120-1
Dubai World Trade Centre. (2020). AI Everything. Retrieved March 11, 2021, from https://ai-everything.com/home
Eggers, W., Schatsky, D., & Viechnicki, P. (2017, April 16). How artificial intelligence could transform government. Deloitte Insights. https://www2.deloitte.com/us/en/insights/focus/cognitive-technologies/artificial-intelligence-government-summary.html
Exec. Order No. 13859, 84 Fed. Reg. 3967 (2019, February 14).
Forty-two countries adopt new OECD principles on artificial intelligence. (2019, May 22). Retrieved March 09, 2021, from https://www.oecd.org/science/forty-two-countries-adopt-new-oecd-principles-on-artificial-intelligence.htm
Future of Life Institute. (2020, March 31). AI policy – United States. Retrieved March 09, 2021, from https://futureoflife.org/ai-policy-united-states/?cn-reloaded=1
Gedye, G. (2019, April 7). How Congress got dumb on tech-and how it can get smart. Washington Monthly. Retrieved October 21, 2022, from https://washingtonmonthly.com/2019/04/07/how-congress-got-dumb-on-tech-and-how-it-can-get-smart/
Ghazavi, V., & Wolff, J. (Hosts). (2020, February 10). Values and AI: View from public policy (No. 3) [Audio podcast episode]. In Ethics in AI. University of Oxford. http://media.podcasts.ox.ac.uk/philfac/ethics-in-ai/S3_C_Wolff_Ghazavi-audio.mp3
Global Digital Health Network. (2020). Coronavirus solutions (Responses). Retrieved March 10, 2021, from https://docs.google.com/spreadsheets/d/15hkhdtGNzx7oHkO8Y2MOiY83JsHjqxL4MhMGvlA_J6I/edit#gid=1323340101.
Government of Singapore. (2021, February 26). Smart nation sensor platform. Smart Nation Singapore. https://www.smartnation.gov.sg/what-is-smart-nation/initiatives
Grossman, J. M., & Porche, M. V. (2014). Perceived gender and racial/ethnic barriers to STEM Success. Urban Education, 49(6), 698–727. https://doi.org/10.1177/0042085913481364
Gu, Y. (2021, March 7). COVID-19 Projections Using Machine Learning. https://covid19-projections.com/
Guo, Y., Zhang, L., Hu, Y., He, X., & Gao, J. (2016). MS-Celeb-1M: A dataset and benchmark for large-scale face recognition. ArXiv:1607.08221 [Cs]. http://arxiv.org/abs/1607.08221
Hanson, K., Nazary, S., Zhu, G., Stewart, W., & Altun, E. (2020). COVID-19: Re-imagining life in a post-pandemic world. Intel. https://www.intel.com/content/www/us/en/connected-transportation-logistics/resources/reimagining-life-cities-ebook.html
Hao, K. (2019, April 04). This is how AI bias really happens-and why it's so hard to fix. Retrieved March 09, 2021, from https://www.technologyreview.com/2019/02/04/137602/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/
Heaven, W. D. (2020, July 17). Predictive policing algorithms are racist. They need to be dismantled. MIT Technology Review. https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/
Herschel, R., & Miori, V. M. (2017). Ethics & big data. Technology in Society, 49, 31–36. https://doi.org/10.1016/j.techsoc.2017.03.003
Hill, K. (2020, August 3). Wrongfully accused by an algorithm. The New York Times. https://www.nytimes.com/2020/08/03/podcasts/the-daily/algorithmic-justice-racism.html
Hong, L., & Page, S. E. (2004). Groups of diverse problem solvers can outperform groups of high-ability problem solvers. Proceedings of the National Academy of Sciences of the United States of America, 101(46), 16385. https://doi.org/10.1073/pnas.0403723101
Hu, Y., & Shyam Sundar, S. (2010). Effects of online health sources on credibility and behavioral intentions. Communication Research, 37(1), 105-132.
IBM. (2020, March 23). IBM's response to Covid-19. Retrieved March 10, 2021, from https://www.ibm.com/impact/covid-19/
Intel. (2020). Smart city & transportation Technology Response guide to Covid-19. Retrieved March 10, 2021, from https://www.intel.com/content/www/us/en/connected-transportation-logistics/resources/reimagining-life-cities-ebook.html
Jasanoff, S. (2004). Ordering knowledge, ordering society. States of Knowledge: The Co-Production of Science and the Social Order. Ed. Sheila Jasanoff. Routledge, 2004.
Jiang, H., & Nachum, O. (2019). Identifying and correcting label bias in machine learning. ArXiv:1901.04966 [cs, stat]. http://arxiv.org/abs/1901.04966
Kang, C., & Frenkel, S. (2018, April 4). Facebook says Cambridge Analytica harvested data of up to 87 million users. The New York Times. Retrieved March 9, 2021, from https://www.nytimes.com/2018/04/04/technology/mark-zuckerberg-testify-congress.html
Kennedy, R., Waggoner, P., & Ward, M. (2018). Trust in public policy algorithms (SSRN Scholarly Paper ID 3339475). Social Science Research Network. https://doi.org/10.2139/ssrn.3339475
Killpack, T. L., & Melón, L. C. (2016). Toward inclusive STEM classrooms: What personal role do faculty play? CBE life sciences education, 15(3), es3. https://doi.org/10.1187/cbe.16-01-0020
Kim, J., Ah-Reum An, J., Jackie Oh, S., Oh, J., & Lee, J. (2021). Emerging COVID-19 success story: South Korea learned the lessons of MERS. Our World in Data. Retrieved March 29, 2021, from https://ourworldindata.org/covid-exemplar-south-korea#licence
Lalmuanawma, S., Hussain, J., & Chhakchhuak, L. (2020). Applications of machine learning and artificial intelligence for Covid-19 (SARS-CoV-2) pandemic: A review. Chaos, solitons, and fractals, 139, 110059. https://doi.org/10.1016/j.chaos.2020.110059
Langin, K. (2019, January 16). A sense of belonging matters. That’s why academic culture needs to change. ScienceMag. Retrieved March 9, 2021, from https://www.sciencemag.org/careers/2019/01/sense-belonging-matters-s-why-academic-culture-needs-change
Leibo, J. Z., d’Autume, C. de M., Zoran, D., Amos, D., Beattie, C., Anderson, K., Castañeda, A. G., Sanchez, M., Green, S., Gruslys, A., Legg, S., Hassabis, D., & Botvinick, M. M. (2018). Psychlab: A psychology laboratory for deep reinforcement learning agents. ArXiv:1801.08116 [Cs, q-Bio]. http://arxiv.org/abs/1801.08116
Li, Y., & Vasconcelos, N. (2019). REPAIR: Removing representation bias by dataset resampling. ArXiv:1904.07911 [CS]. http://arxiv.org/abs/1904.07911
Liang, L., & Acuna, D. E. (2019). Artificial mental phenomena: Psychophysics as a framework to detect perception biases in AI models. ArXiv:1912.10818 [CS]. https://doi.org/10.1145/3351095.3375623
Lund Research Ltd. (2012). Principles of research ethics: Lærd dissertation. Retrieved March 29, 2021, from https://dissertation.laerd.com/principles-of-research-ethics.php
Mamedova, S., & Pawlowski, E. (2018). A description of U.S. adults who are not digitally literate (Rep. No. NCES 2018-161). U.S. Department of Education.
Manyika, J., Silberg, J., & Presten, B. (2019, October 25). What do we do about the biases in AI? Harvard Business Review. Retrieved October 21, 2022, from https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai
McDuff, D., Ma, S., Song, Y., & Kapoor, A. (2019). Characterizing bias in classifiers using generative models. arXiv preprint arXiv:1906.11891.
Medeiros, M. (2020, November 16). Public and private dimensions of AI technology and security. Centre for International Governance Innovation. Retrieved October 21, 2022, from https://www.cigionline.org/articles/public-and-private-dimensions-ai-technology-and-security/#footnote1
Metzger, M. (2019, August 12). The government's challenge is not attracting top tech talent-it's keeping it. Retrieved March 10, 2021, from https://www.nextgov.com/ideas/2019/08/governments-challenge-not-attracting-top-tech-talentits-keeping-it/159095/
Microsoft. (2020). Understanding our progress against COVID-19. Retrieved March 10, 2021, from https://www.microsoft.com/en-us/ai/ai-for-health-covid-data
Microsoft Future of Work Initiative. (2021). Louisville future of work initiative. Retrieved March 29, 2021, from https://www.futurelou.com/
Morgan, B. (2020, February 18). 50 stats showing the power of personalization. Forbes. Retrieved October 21, 2022, from https://www.forbes.com/sites/blakemorgan/2020/02/18/50-stats-showing-the-power-of-personalization/?sh=1017b5d02a94
Mueller-Smith, M., & Schnepel, K. (2021). Diversion in the criminal justice system. The Review of Economic Studies, 88(2), 883–936. https://doi.org/10.1093/restud/rdaa030
Murray, S. (2020, October 28). Testing sewage to home in on Covid-19. Massachusetts Institute of Technology. https://news.mit.edu/2020/testing-sewage-for-covid-19-1028
National Science Board. (2018). Women and minorities in the S&E workforce (Vol. 2018-1, Rep.). VA: National Science Board.
OECD.AI (2021), powered by EC/OECD (2021), STIP Compass database, accessed on 21/02/2020, http://oecd.ai.
O’Neil, C. (2017). Weapons of math destruction. Penguin Books.
Packer, B., Halpern, Y., Guajardo-Céspedes, M., & Mitchell, M. (2018, April 13). Text embedding models contain bias. Here's why that matters. [web log]. Retrieved October 21, 2022, from https://developers.googleblog.com/2018/04/text-embedding-models-contain-bias.html.
Park, J., Su, L., Fielder, L., & Weatherhead, M. (2020, May 29). How data-driven cities respond swiftly and effectively to covid-19. Retrieved March 29, 2021, from https://whatworkscities.medium.com/how-data-driven-cities-respond-swiftly-and-effectively-to-covid-19-4de7a96d53e3
Patel, J., Manetti, M., Mendelsohn, M., Mills, S., Felden, F., Littig, L., & Rocha, M. (2021, March 24). AI brings science to the art of policymaking. BCG Global. https://www.bcg.com/publications/2021/how-artificial-intelligence-can-shape-policy-making
Perez, J. (2020, March 23). Innovating to fight COVID-19: Four Ways drones are contributing. https://enterprise-insights.dji.com/blog/innovating-to-fight-covid-19-four-ways-drones-are-contributing
Potential Applications of Quantum Computing. (2018). C-SPAN. https://www.c-span.org/video/?445771-1/house-panel-explores-benefits-quantum-computing
Rangarajan, S. (2018, June 25). Bay Area tech diversity: White men dominate Silicon Valley. Reveal. Retrieved October 21, 2022, from https://revealnews.org/article/heres-the-clearest-picture-of-silicon-valleys-diversity-yet/
Rollins, M. (2020, October 13). Diversity in STEM: What is it, why does it matter, and how do we increase it? Retrieved March 09, 2021, from https://caseagrant.ucsd.edu/blogs/diversity-in-stem-what-is-it-why-does-it-matter-and-how-do-we-increase-it
Rothe, R., Timofte, R., & Gool, L. V. (2015). DEX: Deep EXpectation of apparent age from a single image. 2015 IEEE International Conference on Computer Vision Workshop (ICCVW), 252–257. https://doi.org/10.1109/ICCVW.2015.41
Rouvroy, A. (2013). The end(s) of critique: Data behaviourism versus due process. In Privacy Due Process and the Computational Turn: The Philosophy of Law Meets the Philosophy of Technology (pp. 143-167). Taylor & Francis. https://doi.org/10.4324/9780203427644
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should I trust you?”: Explaining the Predictions of Any Classifier. ArXiv:1602.04938 [CS, Stat]. http://arxiv.org/abs/1602.04938
Shearer, E., Stirling, R., & Pasquarelli, W. (2020). Government AI readiness index 2020. Retrieved March 09, 2021, from https://www.oxfordinsights.com/government-ai-readiness-index-2020
Shokri, R., Strobel, M., & Zick, Y. (2020). On the privacy risks of model explanations. ArXiv:1907.00164 [Cs, Stat]. http://arxiv.org/abs/1907.00164
Singapore, Info-communications Media Development Authority, Singapore Digital. (2020). Model artificial intelligence governance framework. Retrieved March 11, 2021, from https://www.pdpc.gov.sg/help-and-resources/2020/01/second-edition-of-model-artificial-intelligence-governance-framework
Singh, R. P., Javaid, M., Haleem, A., & Suman, R. (2020). Internet of things (IoT) applications to fight against COVID-19 pandemic. Diabetes & metabolic syndrome, 14(4), 521–524. https://doi.org/10.1016/j.dsx.2020.04.041
Smart Cities Council. (2020). 2020 COVID-19 mitigation roadmap (US). Retrieved March 10, 2021, from https://scc.smartcitiesactivator.com/shared/3/#/projects/Mzg5MHwwOTQzMmI4YjUyZTYxY2VhZmJh%0A
Steinhauser, K. (2020, March 16). Everyone is a little bit biased. Retrieved March 09, 2021, from https://www.americanbar.org/groups/business_law/publications/blt/2020/04/everyone-is-biased/
Strategy of Things. (2020, September 08). COVID-19 resources page for smart cities, innovation and technology companies. Retrieved March 10, 2021, from https://strategyofthings.io/covid-19
Streib, G., & Navarro, I. (2008). City managers and e-government development: Assessing technology literacy and leadership needs. International Journal of Electronic Government Research (IJEGR), 4(4), 37-53. doi:10.4018/jegr.2008100103
Tatman, R., & Kasten, C. (2017). Effects of talker dialect, gender & race on accuracy of Bing Speech and YouTube automatic captions. Proc. Interspeech 2017, 934-938. doi:10.21437/Interspeech.2017-1746
Teich, D. A., & Research, T. (2018, January 24). Management AI: Bias, criminal recidivism, and the promise of machine learning. Forbes. Retrieved from https://www.forbes.com/sites/tiriasresearch/2018/01/24/management-ai-bias-criminal-recidivism-and-the-promise-of-machine-learning/?sh=fba03237c8a2
The White House. (2020). National Science and Technology Council. Retrieved March 10, 2021, from https://www.whitehouse.gov/ostp/nstc/
Tibken, S. (2018, April 11). Questions to Mark Zuckerberg show many senators don't get Facebook. CNET. Retrieved March 9, 2021, from https://www.cnet.com/news/some-senators-in-congress-capitol-hill-just-dont-get-facebook-and-mark-zuckerberg/
Tiku, N. (2020, December 23). Google hired Timnit Gebru to be an outspoken critic of unethical AI. Then she was fired for it. Washington Post. Retrieved March 29, 2021, from https://www.washingtonpost.com/technology/2020/12/23/google-timnit-gebru-ai-ethics/
Torralba, A., & Efros, A. A. (2011). Unbiased look at dataset bias. Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition, 1521–1528. https://doi.org/10.1109/CVPR.2011.5995347
U.S. Department of Health & Human Services, Office for Human Research Protections. (2018). Federal Policy for the Protection of Human Subjects ('Common Rule'). https://www.hhs.gov/ohrp/regulations-and-policy/regulations/45-cfr-46/revised-common-rule-regulatory-text/index.html
U.S. Government Accountability Office. (2019, January 29). Our new science, technology assessment, and analytics team. U.S. GAO WatchBlog. Retrieved October 21, 2022, from https://www.gao.gov/blog/2019/01/29/our-new-science-technology-assessment-and-analytics-team
Waddell, K. (2019, October 19). In AI we trust - too much. Retrieved March 16, 2021, from https://www.axios.com/ai-automation-bias-trust-62ee0445-1fda-4143-b3d8-7d7ee8e328f6.html
Wang SC. (2003) Artificial Neural Network. In: Interdisciplinary computing in Java programming. The Springer International Series in Engineering and Computer Science, vol 743. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-0377-4_5
Warrier, R. (2020, April 14). Analysing The Use of Artificial Intelligence in Criminal Sentencing through the Loomis decision. The Criminal Law Blog. https://criminallawstudiesnluj.wordpress.com/2020/04/14/analysing-the-use-of-artificial-intelligence-in-criminal-sentencing-through-the-loomis-decision/
West, D. M. (2021, February 10). It is time to restore the US Office of Technology Assessment. Brookings. Retrieved October 21, 2022, from https://www.brookings.edu/research/it-is-time-to-restore-the-us-office-of-technology-assessment/
Wolchover, N. (2015, April 21). Artificial intelligence aligned with human values. Quanta Magazine. https://www.quantamagazine.org/artificial-intelligence-aligned-with-human-values-qa-with-stuart-russell-20150421/
Yabsley, J. (2022, July 28). Congress authorizes establishment of National Secure Data Service to improve data analytics. Data Foundation Coalition Initiative. Retrieved October 21, 2022, from https://www.datacoalition.org/press-releases/12886810
Zetter, K. (2016, April 21). Of course Congress is clueless about tech—it killed its tutor. Wired. Retrieved October 21, 2022, from https://www.wired.com/2016/04/office-technology-assessment-congress-clueless-tech-killed-tutor/
Zhou, Y., Wang, F., Tang, J., Nussinov, R., & Cheng, F. (2020). Artificial intelligence in COVID-19 drug repurposing. The Lancet. Digital health, 2(12), e667–e676. https://doi.org/10.1016/S2589-7500(20)30192-8