— By Shreya Ahlawat
The use of artificial intelligence has become widespread in recent times, particularly in fields like healthcare, justice systems and recruitment. While there is no denying that AI has been immensely helpful, we do have to question whether it can be wholly relied upon to make sensitive and crucial decisions.

Computers pick up on biases the same way children do: through interactions with us. Like a child, a computer will latch on to words and phrases that appear together or are associated with one another. In short, an algorithm is only as good as the data you put into it. If the data fed into an algorithm is inherently biased, the AI will be biased too. At a time when more and more companies are relying on AI for various purposes, it is important to take note of these flawed algorithms. Distorted input data, faulty logic or simply prejudiced programmers mean AIs not only reproduce but also magnify human biases. Rushed or incomplete training is another source of bias in programmes. For instance, a chatbot designed to become more personalized through conversation can pick up abusive language unless programmers take the time to train the algorithm not to.

Unsurprisingly, biased AIs are harmful to everyone, since AIs are now used by police departments, hiring tools, the justice system, and so on. People can be falsely accused, overlooked in a job search, or even unfairly prosecuted. Extensive evidence suggests that AI models can embed human and societal biases and deploy them at scale. Instances of AIs used for job selection by HR departments in America screening out women and minorities are not uncommon. This is largely because the tools relied on keywords that target men more than women, and went by past employee records that were mainly of Caucasian men.
Julia Angwin and others at ProPublica have shown how COMPAS, a tool used to predict the tendency of a convicted criminal to reoffend in Broward County, Florida, incorrectly labeled African-American defendants as "high-risk" at nearly twice the rate it mislabeled white defendants.

Besides simply being biased, AIs can also amplify these biases. More than 90 million US smartphone owners use voice assistants at least once a month, and interacting with these devices on a regular basis can affect us and our ideas. One such instance is how tech companies commonly gender their voice assistants, like Siri, Alexa and Cortana, as female, since research shows that when people need help, they prefer to hear it delivered in a female voice, whereas male voices are preferred for authoritative statements. Companies also probably design the assistants to be constantly cheerful and courteous, even when users are outright mean and use crude language, because that sort of behaviour maximizes a person's desire to keep using the device. The more we're exposed to a certain gender association, the likelier we are to adopt it; therefore, the more our tech teaches us to associate women with submissive assistants, the more we'll come to view actual women that way, reinforcing the already skewed gender dynamics.

Thus, AIs have to be developed with the clear understanding that they will be applied across various demographics and fields, and that they cannot cater only to a particular subsection of society. The responsibility ultimately falls on the researchers and creators of an AI to make sure the data used is unbiased and that their own biases do not intervene. Little can be done to control how users affect AIs, since we cannot tell every user to modify their behaviour, but programmers can at least curb how strongly a particular bias is reinforced.
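Disparities like the one ProPublica measured in COMPAS can be surfaced by auditing a model's error rates per group. The records below are invented toy data, not the ProPublica dataset, and the grouping is deliberately abstract; the sketch only shows the shape of such an audit, comparing the false-positive rate (non-reoffenders wrongly flagged as high-risk) across groups.

```python
# Hypothetical audit records: (group, predicted_high_risk, reoffended)
records = [
    ("A", True,  False), ("A", True,  False), ("A", True,  True),
    ("A", False, False), ("A", False, True),
    ("B", True,  False), ("B", False, False), ("B", False, False),
    ("B", False, True),  ("B", True,  True),
]

def false_positive_rate(group: str) -> float:
    """Share of non-reoffenders in `group` wrongly labelled high-risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    wrongly_flagged = [r for r in non_reoffenders if r[1]]
    return len(wrongly_flagged) / len(non_reoffenders)

# In this invented data, group A's false-positive rate is twice group B's,
# mirroring the kind of disparity ProPublica reported.
print(false_positive_rate("A"))  # 2 of 3 non-reoffenders flagged
print(false_positive_rate("B"))  # 1 of 3 non-reoffenders flagged
```

An audit like this only detects the disparity; deciding what to do about it is the human responsibility the next paragraph turns to.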
To do so, we will have to hold humans accountable for the biases that AIs propagate and amplify. Simply identifying and correcting a faulty algorithm isn't enough; the human-driven processes behind the wheel need to be improved as well.