A recent survey of 2,360 data science students, academics, and professionals by software firm Anaconda found that only 15% of university and college professors said they are teaching AI ethics, and just 18% of students said they are learning about the subject.
Emergence and expansion of artificial intelligence (AI)
We live in a world engulfed in data, and that data is what sustains artificial intelligence (AI). In the 21st century, AI has become an essential part of the technology industry. AI techniques have undergone a revival following simultaneous advances in computing power, the availability of vast amounts of data, and theoretical understanding.
However, for all its promised benefits, AI has a bias problem. The data that AI systems take as input can carry built-in biases, despite the best efforts of AI programmers. This is where the ‘Ethics of Artificial Intelligence’ comes in. It is concerned with two things: first, the moral responsibility of humans as they design, construct, use, and treat artificially intelligent systems; and second, machine ethics, which concerns the moral behavior of artificial moral agents themselves.

Only 15% of university and college professors said that they’re teaching AI ethics.
Despite agreement among private companies, research institutions, and public sector organizations that AI should be ‘ethical’, there is still debate about what constitutes ‘ethical AI’ and which ethical requirements, technical standards, and best practices are needed to realize it.
Artificial intelligence is an immensely powerful technology that is not going anywhere; its reach will only keep growing, which makes it all the more important to find ways to standardize it and use it ethically without causing undue harm. AI has already become integral to facial and voice recognition systems. Some of these systems have a direct impact on people, as well as significant business implications. The biases and errors these systems are vulnerable to are typically introduced by their human makers and can take many forms. In addition, there can be biases in the data used to train the AI systems themselves.
Social biases in AI data systems
According to a variety of reports from 2018 and 2019, facial recognition algorithms made by Microsoft, IBM, and Face++ all showed bias when detecting people’s gender: these AI systems detected the gender of white men more accurately than that of darker-skinned men. Further, a 2020 study that reviewed voice recognition systems from Amazon, Apple, Google, IBM, and Microsoft found that they had higher error rates when transcribing Black people’s voices than white people’s. Similarly, Amazon.com Inc’s scrapping of its AI hiring and recruitment tool is another example showing that AI is not inherently fair. The algorithm preferred male candidates over female ones because Amazon’s system was trained on data collected over a 10-year period that came mostly from male applicants.
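To make that mechanism concrete, here is a minimal sketch, using entirely synthetic data and scikit-learn, of how a classifier trained on skewed historical hiring labels reproduces the bias those labels encode. This is an illustration of the general failure mode, not Amazon’s actual system.

# Hypothetical sketch: a model trained on biased historical hiring
# decisions learns to reproduce that bias. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic "resumes": one skill score plus a gender flag (1 = male).
gender = rng.integers(0, 2, size=n)
skill = rng.normal(0.0, 1.0, size=n)

# Historical labels that favour male candidates regardless of skill.
hired = ((skill + 1.5 * gender + rng.normal(0.0, 0.5, size=n)) > 1.0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)

# Two candidates with identical skill, differing only in the gender flag:
probs = model.predict_proba([[1.0, 1], [1.0, 0]])[:, 1]
print(probs)  # the "male" candidate gets a markedly higher hiring probability

Because the gender flag was predictive of the historical label, the model learns to use it; nothing in the training process distinguishes a genuine signal from an encoded prejudice.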
There are other cases of racial and gender biases sneaking into algorithms. The technology operating self-driving cars can recognize white pedestrians more easily than pedestrians of color, putting the latter at higher risk of being hit. Another example is discrimination slipping into lending algorithms, making it harder for people of color to obtain loans in ways that remain largely hidden.
Researchers studying these growing biases had hoped that the next generation of computer and data scientists would learn about and be more aware of such issues than their predecessors, but this new survey has made them more pessimistic. That only 15% of instructors are teaching AI ethics, and only 18% of students are learning about it, is a striking result in itself. The Anaconda study covered data scientists from more than ten countries, and according to its findings, these low figures are not due to a lack of interest among students. Around half of the respondents named the social impacts of bias the “biggest problem to tackle in the AI/ML arena today.” Yet such issues are hardly discussed or studied at universities.
Few Students Are Learning About AI Ethics
The neglect of AI ethics extends from universities to industry. Amid growing criticism of AI’s racial and gender biases, several tech giants have launched their own ethics initiatives, though their intent is questionable. These initiatives are presented as philanthropic efforts to make technology serve humanity, but critics allege they are a way to evade regulation and scrutiny through ‘ethics washing’. While organizations can mitigate the problem with fairness tools and explainability solutions, neither seems to be gaining mass adoption. Only 15% of respondents said their organization had implemented a fairness system, and just 19% reported having an explainability tool in place. The study’s researchers warned that this could have far-reaching consequences: “Above and beyond the ethical concerns at play, a failure to address these areas proactively poses a strategic risk to enterprises and institutions across competitive, financial, and even legal dimensions”. The survey further uncovered concerns around the protection of open-source tools, business training, and data drudgery.
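As an illustration of the kind of check such fairness tooling automates, here is a minimal sketch, with toy data and function names invented for this example, that compares a model’s selection rates and per-group error rates, one common notion of fairness (demographic parity):

# Hedged sketch of a basic fairness audit: compare positive-prediction
# rates and error rates across demographic groups. Toy data only.
import numpy as np

def selection_rate_gap(y_pred, group):
    # Demographic parity difference: largest gap in positive-prediction
    # rates between any two groups.
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def error_rate_by_group(y_true, y_pred, group):
    # Misclassification rate computed separately for each group.
    return {int(g): float((y_true[group == g] != y_pred[group == g]).mean())
            for g in np.unique(group)}

# Toy predictions for two demographic groups (0 and 1):
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(selection_rate_gap(y_pred, group))        # 0.5: group 0 is selected far more often
print(error_rate_by_group(y_true, y_pred, group))

Production fairness systems go far beyond this, but even a simple per-group breakdown like the one above can surface the disparities the survey’s respondents worry about.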
With movements such as Black Lives Matter gaining momentum, we can only hope that organizations take ethical steps, supported by strict government regulations.