Whether it's ChatGPT or a 'photo' of the Pope in a trendy coat, you can no longer ignore AI these days. There are many positive stories, but it is also a technology that attracts a lot of criticism. One of the objections to AI is that the technology is not inclusive. But why is that? And what can we do about it?
First, an example to demonstrate how AI lacks inclusivity: if you ask AI to create an image of a woman on a horse, the result will most likely be a white or Asian woman with long locks. That's not surprising, because the most common images of women on horses on the internet look like this.
Another example: if you show AI a photo of a fish, the technology will most likely only recognize the fish as a fish if fingers are also visible in the photo. That's because there are many photos online of people holding a fish. The online world is the only frame of reference AI has and, unfortunately, this world is not as inclusive as we would like. We humans power the internet, and because we are not inclusive, the online world is not inclusive either. And neither, in turn, is AI.
AI is basically like a 5-year-old.
After all, AI is like a small child: it imitates our human behavior and does what we tell it to do. When a small child does something that is not allowed, we often blame the parents: they have not raised the child properly. So why is it that with AI we blame the technology instead of ourselves? We cannot expect this technology to know what it's doing simply because we call it 'intelligent'. So we have to start with ourselves.
Let’s bring a bit more “Switzerland” to the internet.
The internet and the news are full of extremes. After all, a moderate opinion is less interesting to read. AI is therefore only familiar with those extremes and has barely been introduced to the “Switzerland” of the internet. Our task is therefore to fill the internet with nuanced information, so that AI also gets to know this intermediate area.
This means that we should go back to the drawing board and learn to have a real conversation with each other, both offline and online. Only in this way can extremes be brought together into a more neutral, gray zone, which AI can then be fed with. After all, ChatGPT is built to have conversations and works on probability, yet it is often used as an information tool, a purpose for which it lacks reliable sources. That is why we need something with the certainty of mathematics: we all know that 1 + 1 = 2, and that remains the only answer, no matter what anyone else says.
Don’t trust everything AI tells you
So before we ask AI for information, we must first be critical and think carefully about the input we give the technology: first the information we put online, and then the questions we ask it. After all, AI is not 'the holy truth' and will sometimes give incorrect output, or output we do not like simply because it does not confirm our own thinking.
Whatever your opinion of AI and however you approach it, it is always important to keep in mind that AI's learning curve takes a lot of time. After all, a more inclusive version of AI starts with humans, and we must give ourselves the time and space to develop that inclusivity first.