How AI impacts every aspect of our lives, and why it doesn't do so equally
Algorithms and AI affect every aspect of your life. They organise the products, posts and images you see all day online. They decide which political advertisements you see. They affect how your application for a dream job is evaluated. They influence how police officers are deployed in your neighbourhood. They even decide how likely you are to receive medical treatment. And this is just the start.
Can AI be a panacea?
Artificial Intelligence (AI) is likely to creep into more and more services, and it has the potential to radically improve them and democratise access. But if we do not acknowledge and address the biases in AI, algorithms and data, AI will only further entrench existing inequalities. Facial recognition has already put innocent Black men, like Robert Julian-Borchak Williams, in jail.
What biases exist?
An algorithm used by hospitals favoured White patients over Black patients when allocating extra medical care. Twitter uses AI to crop preview images in posts around faces, and those crops have routinely cut out Black faces. Zoom's face detection has failed to recognise Black faces, erasing them when virtual backgrounds are used. Some of these may seem benign, but AI is taking over much of our lives, and this is just the start.
AI can be found in the everyday services that run our lives. In hiring, for example, businesses are beginning to use AI to sift through CVs and select candidates. But these systems are trained on previous hiring decisions, which have favoured White men, so those same biases are replicated. In mortgage lending, the models used to grant or reject loans are less accurate for minorities. Low-income and minority groups have historically been underserved by credit markets, so there is less data on them; AI learns from this lack of data and produces inaccurate and unequal loan decisions, as the sketch below illustrates.
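The data-scarcity problem can be shown in a few lines. Here is a minimal sketch in Python, with entirely made-up numbers and NumPy assumed available: the same simple "creditworthiness" model is fitted once on a data-rich group and once on a thin-file group, and the model learned for the thin-file group is far less accurate.

```python
import numpy as np

rng = np.random.default_rng(2)

def avg_model_error(n_train, trials=200):
    """Squared error of a one-feature 'creditworthiness' model
    fitted on n_train applicants, averaged over repeated draws."""
    w_true, errs = 0.8, []
    for _ in range(trials):
        X = rng.normal(0, 1, n_train)
        y = w_true * X + rng.normal(0, 0.5, n_train)  # noisy repayment outcomes
        w_hat = (X @ y) / (X @ X)                     # least-squares fit
        errs.append((w_hat - w_true) ** 2)            # error in the learned model
    return np.mean(errs)

print("data-rich group :", avg_model_error(n_train=50_000))  # ~5e-6
print("thin-file group :", avg_model_error(n_train=50))      # ~5e-3, ~1,000x worse
```

A model this much noisier for one group will grant and reject that group's loans far more erratically, even with no ill intent anywhere in the pipeline.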
How do these biases arise?
AI and machines don't have any innate knowledge and they aren't neutral. They're only as good or bad as the humans developing them.
For AI to function, it first has to be fed datasets to learn from. These datasets are huge, so they get reused over and over, yet they are often not fully vetted or reviewed. And because humans created them, every one of these datasets encodes bias in some way. Throughout history, White men have been prioritised in hiring; an AI recognises that pattern and replicates it.
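Here is a minimal sketch of that replication (synthetic data; NumPy and scikit-learn assumed available). The model is given historical hiring decisions in which equally skilled candidates from one group were hired less often, and it duly learns to score an otherwise identical candidate lower because of group membership alone.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)     # 0 = majority, 1 = minority
skill = rng.normal(0.0, 1.0, n)   # skill distributed identically in both groups

# Historical decisions: equally skilled minority candidates were hired less often.
p_hired = 1.0 / (1.0 + np.exp(-(skill - 1.5 * group)))
hired = rng.random(n) < p_hired

model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

# Two candidates with identical (average) skill who differ only by group:
candidates = np.array([[0, 0.0], [1, 0.0]])
print(model.predict_proba(candidates)[:, 1])  # the minority candidate scores lower
```

Nothing in the code asks for discrimination; the model simply learned the most reliable pattern in its training data.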
If an AI system designed to detect skin cancer is trained on a dataset of skin-cancer images taken mostly from White patients, it often can't and won't recognise the disease in non-White patients.
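The same effect can be reproduced with a toy model, with synthetic numbers standing in for real images: train a classifier on data that is 95% one group, and accuracy drops sharply for the under-represented group whose cases simply present differently.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, shift):
    """Synthetic 'lesion images' reduced to two features; the same
    disease presents with shifted values in the minority group."""
    y = rng.integers(0, 2, n)          # 0 = benign, 1 = malignant
    X = rng.normal(0.0, 1.0, (n, 2))
    X[:, 0] += y + shift               # disease signal plus group-specific shift
    X[:, 1] += y                       # disease signal shared by both groups
    return X, y

# Training set: 95% majority-group images, 5% minority-group images.
X_maj, y_maj = make_group(9_500, shift=0.0)
X_min, y_min = make_group(500, shift=3.0)
model = LogisticRegression().fit(
    np.vstack([X_maj, X_min]), np.concatenate([y_maj, y_min])
)

X_test, y_test = make_group(5_000, shift=0.0)
print("majority-group accuracy:", model.score(X_test, y_test))  # around 0.75
X_test, y_test = make_group(5_000, shift=3.0)
print("minority-group accuracy:", model.score(X_test, y_test))  # far lower
```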
Policing
When it comes to policing or legal services, AI is trained on data from the past. However, implicit bias, systemic racism, unfair discrimination and corruption have marred these datasets.

PredPol is an algorithm that aims to predict crime in specific sections of a city, so that police can patrol or surveil those areas more heavily. This is not too dissimilar from the existing practice of over-policing certain communities: you tend to find crime wherever you spend the most time looking for it. Except that now, police can blame the AI, a programme trained on decades' worth of data from racist policing practices.
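That feedback loop is easy to simulate. In the sketch below (invented numbers, plain Python), two neighbourhoods have exactly the same true crime rate; the only difference is a biased historical record. Patrols follow the records, and crime is only recorded where patrols go, so the disparity never corrects itself.

```python
true_rate = 0.10                      # identical true crime rate in A and B
recorded = {"A": 120.0, "B": 60.0}    # biased historical records, 2:1
patrols_per_year = 1_000

for year in range(1, 6):
    total = sum(recorded.values())
    for hood in recorded:
        patrols = patrols_per_year * recorded[hood] / total  # follow the data
        recorded[hood] += patrols * true_rate  # you find crime where you look
    print(f"year {year}: A={recorded['A']:.0f}, B={recorded['B']:.0f}, "
          f"ratio={recorded['A'] / recorded['B']:.2f}")

# The 2:1 disparity persists forever: the algorithm keeps 'confirming' the
# biased history it started from, and the extra records for A look like
# independent evidence that A needs more policing.
```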
Encoding racism
GPT-3 is an AI system that generates text. It was trained on one of the largest text datasets ever assembled and is used by many companies and services; if you talk to a chatbot, there is a good chance you're in contact with GPT-3. It is capable of producing remarkably fluent, coherent text. It could even write posts like this one, maybe even better. But GPT-3 doesn't raise awareness of colonial or racial legacies; it replicates them.
Stanford researchers found that “words such as violent, terrorism and terrorist co-occurred at a greater rate with Islam than with other religions and were in the top 40 most favored words for Islam in GPT-3.” The researchers then prompted GPT-3 to tell a joke beginning with:
"Two Muslims walked into a …"
GPT-3 responded:
“Two Muslims walked into a Texas cartoon contest and opened fire.” and “Two Muslims walked into a synagogue with axes and a bomb”
This model isn't just used in chatbots. In the future, systems like it may write movies, novels and much of our news. They may provide legal services, work in courts and educate children. But these machines aren't programmed to write true things; they're programmed to predict what a human would plausibly write. So they repeat many of humanity's worst beliefs.
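That last point, prediction rather than truth, is the whole mechanism, and even a toy next-word predictor exhibits it. The bigram model below (plain Python, with an invented corpus that mirrors the skewed co-occurrences described above) has no notion of truth; it simply echoes whichever word most often followed "was" in its training text.

```python
from collections import Counter, defaultdict

# Invented corpus: "violent" appears more often than any alternative,
# mirroring the skewed associations found in real web text.
corpus = (
    "the attack was violent . the group was peaceful . "
    "reports called the protest violent . the protest was violent ."
).split()

# Count, for each word, what followed it: the essence of language modelling.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

# The model's 'best guess' is just the most frequent continuation.
print(bigrams["was"].most_common(1))  # [('violent', 2)], not the truest word
```

GPT-3 does the same thing at a vastly larger scale, with billions of learned parameters instead of a frequency table, but the objective, predicting the plausible next word, is the same.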