Virtual assistants have come under scrutiny for the ways they are designed to embody gender and race stereotypes. Ms. Dewey, a virtual assistant launched in 2006, was revealed to cater to a white, straight male user through lines like “Hey, if you can get inside of your computer, you can do whatever you want to me,” and through an algorithm that showed her eating a banana when users searched for “blow job”. The pushback against ChatGPT and similar conversational AI has renewed warnings against anthropomorphising digital assistants: we do not need to perceive them as sapient for that perception to be exploited by profiteers. What has become clear is that past criticisms of AI abuse were not only correct but anticipated the more dangerous digital landscape we face now. The real reason the critique has shifted from “people are too mean to bots” to “people are too nice to them” is that the political economy of AI has dramatically changed, and with it, tech companies’ sales pitches.
The previous generation of AI was sold to us as perfect servants; as new chatbot search engines reach into daily life, they will be sold to us as confidants and even therapists. Yet, on a certain level, it was precisely the degree to which people mistook their virtual assistants for real human beings that encouraged them to abuse them. The desire to see someone as less than human is the basis of dehumanization, and as such, violence requires the perpetrator to first see the victim as human. The rise of AI assistants has raised new questions about the ethics of their design, their impact as they integrate further into people’s daily lives, and the political implications that follow. The tech industry has already shown that it can design biased algorithms and technology that enable hate speech and misinformation to spread.
As conversational AI technology evolves to appear ever more human-like, designers will have to strike a balance between the need for natural interaction and the ethical implications of perpetuating biases through language, personality, and conversational style. It is not enough to simply code conversational AI to reject explicit references to offensive content; improving the technology’s language understanding and the experiences of underrepresented users will require tackling more nuanced issues such as stereotypes and power imbalances. The potential for conversational AI to become our confidants and therapists raises the stakes for the industry, and for the wider discussion of how we should treat intelligent systems like bots and AI.