Social media users recently noticed discrepancies in how Twitter displays people with different skin tones in photo previews. This has revived a debate over whether advanced computer programs, particularly algorithms that “learn”, can manifest or amplify real-world biases such as racism and sexism.
The race bias problem first came to light when education-technology researcher Colin Madland posted about how the video-calling platform Zoom cropped out the head of a Black colleague during a video call; the tool had failed to detect the head as a human face. When Madland later posted a two-image composite in which the colleague was visible, Twitter's image-preview algorithm surfaced only the face of Madland, who is white.
Following that, many users replicated the Twitter photo-crop tool's apparently racially biased way of prioritising faces. In one tweet, cryptography engineer Tony Arcieri showed that Twitter featured the face of Republican senator Mitch McConnell, who is white, as the preview of a combined photo that also included former US President Barack Obama, who is of partly African descent.
Twitter’s photo preview algorithm
A Twitter spokesperson acknowledged the issue and said the company was looking into it. “Our team conducted tests for bias before shipping the model and didn't find evidence of racial or gender bias in our testing. However, it's clear from these examples that we have more analysis to do. We're looking into the issue and will continue to share what we learn and what actions we take.”
Twitter’s chief design officer Dantley Davis responded to a few tweets, noting variations in how the AI-based system responded as an image was further manipulated. Davis also linked to an older blog post by Twitter engineers that explained how the auto-cropping feature works. The feature uses neural networks to choose image previews, a machine-learning approach loosely inspired by how the human brain processes data.
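Twitter has not published its production model, but the general idea behind this kind of auto-cropping is saliency: a model scores each pixel by how “interesting” it is, and the crop window is centered on the highest-scoring region. A minimal sketch of that mechanism, with a simple contrast heuristic standing in for the learned saliency model (both functions here are illustrative, not Twitter's code):

```python
import numpy as np

def fake_saliency(image: np.ndarray) -> np.ndarray:
    """Stand-in for a learned saliency model: plain local contrast
    (absolute deviation from the mean brightness) serves as the
    per-pixel 'interestingness' score."""
    gray = image.mean(axis=2) if image.ndim == 3 else image
    return np.abs(gray - gray.mean())

def saliency_crop(image: np.ndarray, crop_h: int, crop_w: int) -> np.ndarray:
    """Center a crop window on the most salient pixel, clamped to the
    image bounds, and return the cropped region."""
    sal = fake_saliency(image)
    y, x = np.unravel_index(np.argmax(sal), sal.shape)
    top = min(max(y - crop_h // 2, 0), image.shape[0] - crop_h)
    left = min(max(x - crop_w // 2, 0), image.shape[1] - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

# A tall black image with a bright square near the bottom:
# the preview crop follows the bright (high-saliency) region.
img = np.zeros((200, 100, 3))
img[160:180, 40:60] = 1.0
crop = saliency_crop(img, 50, 100)
```

The bias question then becomes: what does the trained model find salient, and does that systematically favour some faces over others?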
Twitter’s racist image-cropping algorithm
Several researchers have found that such AI-based technologies are prone to reflecting sociological biases in addition to design flaws. “Automated systems are not entirely neutral. They highlight the priorities, preferences, and prejudices of those who have the power to mould artificial intelligence,” stated the authors of the Gender Shades project.
The researchers used images of lawmakers from three African and three European countries. They found that all three popular software tools classified white male faces most accurately, followed by white women's faces. According to the research, Black women's faces were the most likely to be misclassified.
“Whatever biases prevail among humans enter the systems too and, even worse, they are amplified because of complex socio-technical systems like the Web. As a result, algorithms may create or substantially increase existing inequalities or discrimination,” a research review by Leibniz University Hannover’s Eirini Ntoutsi stated. This can have significant implications for applications where AI-based technologies such as facial recognition tools are used in law enforcement and health care.