
My wish for 2025? To reduce the harmful bias we see coming from AI systems, and to create a more equitable future.
For the past year I’ve been working in the AI space as a Communications Manager, learning about AI and helping businesses communicate about this complex technology.
And here’s the thing: AI is (basically) holding a mirror to what we’re already doing, what we’re already experiencing in society.
It’s easy to blame the AI for “failing” or “creating bias” – but it’s actually humans who created this problem, through biased beliefs that showed up in the datasets AI is trained on.
For a more equitable outcome we must challenge these biases – AI or otherwise – and I believe this requires a combination of two approaches:
- A human-first approach, where we address these systemic issues at scale: challenging our own unconscious bias, training people to understand systemic implications, holding others and organisations accountable, and so on. And I mean fully addressing this, from school age all the way through to the workplace, because there are generations of biased beliefs that have worked their way into the systems our society runs on.
- A data-first approach, where we modify the data we train AI with to correct for these biases where we humans haven’t yet caught up. This won’t be easy – far from it – but it might be possible.
Of course, I’ve massively oversimplified the complexities for the sake of a LinkedIn post… but you get the idea: it’s up to us humans to create the future technologies we want as our reality. And I don’t want to hear that something is biased “because of the AI” – because the AI might not have made that recommendation if the bias hadn’t shown up in the data it was trained on…
