Is AI Ethical?
By Iness ben Guirat, 2018-2019 Open Internet Leader
Artificial intelligence has increasingly become a hot topic, with lawyers, academics, civil society, and the private sector all examining the issues from different perspectives. Many questions have been raised, and yet after years of research and discussion, people are still arguing about the impacts of AI. We can simply look at Elon Musk’s tweets about AI, and the replies to them, to see how many different aspects of AI concern people. My own informal research suggests that every week on Twitter, the hashtag #AI gets over 20 million engagements from users between tweets and retweets.
So what are the main questions and concerns around AI?
Is AI racist, sexist, and biased?
There are many examples that show how racist AI can be: facial recognition systems that fail to recognize Black people or that flag Asian faces as always blinking, and sentencing tools that use racially biased data to predict how many years criminals should spend in prison.
Another critique of AI is that it is biased because the companies building these tools are mostly from the Global North and usually do not consider other cultures when designing them. Research conducted by Genevieve Bell, a former Vice President at Intel, indicated that "human interaction with technology is not culturally universal. It is neither the same nor objective, and we encode culture within and throughout technology at a conscious and unconscious level." AI has also been accused of being sexist; for example, Google’s online advertising system showed high-income jobs to men much more often than to women. These examples and more have been at the heart of discussions at conferences, inside the IT companies that develop AI tools, and in the news.
As an IT person by training, and an activist for the last few years, I must say no: the technology itself is not biased. Technology does not distinguish between genders unless the developer tells it to; it does not distinguish race unless the developer tells it to; and it definitely will not be culturally sensitive unless the developer tells it to. The examples listed above come from biased training data, in which one or more groups may be under-represented or not represented at all. Saying AI is racist is like saying prison walls are racist because we have more Black people in jail than white people, or saying IT conferences are sexist because we have fewer women than men. Unfortunately the world itself is biased, racist, and sexist, even without any AI, and in some countries more so. AI is trained on current and past data; AI gives back what we feed our machines. So if our machines are being fed racist or sexist data, it is in no way the fault of the machines, but rather of those who feed them, and most importantly of the social structures that created them.
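The point that a model simply reflects what it is fed can be sketched in a few lines of code. The example below is a minimal illustration with deliberately made-up, skewed data, not any real system: a naive "classifier" that always predicts the label it saw most often in training will faithfully reproduce whatever imbalance the training set contains.

```python
from collections import Counter

def train_majority_classifier(labels):
    """Return a 'model' that always predicts the most frequent training label."""
    most_common_label, _ = Counter(labels).most_common(1)[0]
    return lambda features: most_common_label

# Hypothetical, deliberately skewed training data:
# group A appears nine times as often as group B.
training_labels = ["group_A"] * 90 + ["group_B"] * 10
model = train_majority_classifier(training_labels)

# Whatever input we give it, the model echoes the skew in its data.
print(model({"age": 30}))  # group_A
```

The machine here is not "biased" in any intentional sense; it is doing exactly what it was trained to do on the data it was given, which is precisely the problem.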
Is AI taking our jobs?
The overly simplistic answer to this question is crystal clear: YES. Studies estimate that 47 percent of currently existing jobs in America are at high risk of automation in the coming decades. However, the deeper underlying question is: did humans want these jobs in the first place, or are they a consequence of class struggle, poverty, and inequality? Consider people who work in textile factories in African countries or Southeast Asia, whose jobs will be at risk from future uses of AI: most of these workers come from poor areas and have no college degrees. They may not have many other job options, and they need these jobs to survive.
One might say it is still a job that pays the bills. But won’t AI free us to invest in more creative work, such as art and research?
One of the greatest projects investigating the impact of AI on the future of employment is "Plaine Commune", a concept elaborated by the French philosopher Bernard Stiegler. Plaine Commune is a neglected area in the suburbs of Paris, but since 2016 it has hosted a territorial experimentation project aimed at making it a "pilot territory" on questions relating to the future of work, the contributory economy, urbanization, and education and research in the context of digital transformation. The main goal of this project is to engage citizens in the digital world. The future is technological, so developing local digital knowledge is a must. It is no secret that the path is hard: we need to think of a new economic plan, build a multi-stakeholder approach with IT companies and industries, and develop new policies. All of this is for good reason, the end goal being better human dignity for all.
How can AI help us?
AI is all about data being processed to produce more efficient results, whether in decision making or pattern recognition, but it can also be used for better societies and a better world in general: freeing up resources for better use, reducing inequality, and making government programs more efficient. Project Cybersyn, for example, was a Chilean project from 1971–1973 that aimed at constructing a distributed decision support system to aid in the management of the national economy. The project unfortunately failed for political reasons, but we cannot hide the fact that in its first year it did improve the efficiency of the country’s economy, and it is still a reference for current and future research on how to use AI in the global economy. AI has also shown positive impacts for persons with disabilities, for example by expanding their access to computers through reading and hearing aids. New apps such as Ava convert spoken conversations into text, and Microsoft’s Seeing AI scans text and reads it aloud for the visually impaired.
AI is starting to demand a radical cultural shift: from seeing machines as passive receivers of commands to seeing them as collaborators. AI alone is not going to solve today’s problems of racism or ethics. The world we build needs to be fixed by us humans.