
Addressing harmful bias from AI systems

Image created by #Copilot. Image description: An illustration showing a person using a computer with various digital-like symbols and icons around it, and a digitally created robot head coming up from the keyboard, representing artificial intelligence. The background and elements use multiple shades of blue. The image suggests how biases can be embedded within AI systems and technology.

My wish for 2025? To reduce the harmful bias we see coming from AI systems and create a more equitable future.

For the past year I’ve been working in the AI space as a Communications Manager, learning about AI and helping businesses communicate about this complex technology.

And here’s the thing: AI is (basically) holding a mirror to what we’re already doing, what we’re already experiencing in society.

It’s easy to blame the AI for “failing” or “creating bias” – but it’s actually humans who created this problem: our biased beliefs show up in the datasets AI is trained on.

For a more equitable outcome we must challenge these biases – AI or otherwise – and I believe this requires a combination of two approaches:

  1. A human-first approach, where we address these systemic issues at scale: challenging our own unconscious bias, training to understand systemic implications, holding others & organisations accountable, and so on. And I mean fully addressing this, from school age all the way through to the workplace, because generations of biased beliefs have worked their way into the systems our society runs on.
  2. A data-first approach, where we modify the data we train AI on to correct for these biases where we humans haven’t yet caught up (a rough sketch of what this could look like follows this list). This won’t be easy – far from it – but it might be possible.
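
To make the data-first idea slightly more concrete, here’s a minimal Python sketch of one common pre-processing technique: reweighting training examples so that each group contributes equal weight, however over- or under-represented it is in the raw data. The field names and toy records are hypothetical, purely for illustration – real debiasing pipelines are far more involved than this.

```python
from collections import Counter

def group_balance_weights(records, group_key="group"):
    """Give each record a weight inversely proportional to the
    frequency of its group, so every group contributes equal total
    weight to training. A simplified sketch of dataset reweighting."""
    counts = Counter(r[group_key] for r in records)
    n_groups = len(counts)
    total = len(records)
    # Weight = (total / n_groups) / group_count: equal mass per group.
    return [total / (n_groups * counts[r[group_key]]) for r in records]

# Toy, hypothetical data: group A is over-represented 4:1.
records = [
    {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "B", "label": 1},
]
weights = group_balance_weights(records)
print(weights)  # group-A rows get 0.625 each; the group-B row gets 2.5
```

In many training libraries, weights like these can be passed in as per-example sample weights, so the model no longer learns the skew in the raw data quite so directly.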

Of course, I’ve massively oversimplified things for the sake of a LinkedIn post… but you get the idea: it’s up to us humans to create the future technologies we want as our reality. And I don’t want to hear that something is biased “because of the AI” – because the AI might not have made that recommendation if the bias hadn’t shown up in the data it was trained on…
