Racial and gender bias in AI systems – Alexander Fefegha

AI systems are only as good as the data we put into them. Bad data can contain implicit racial, gender, or other biases, and many AI systems will continue to be trained on such data, making this an ongoing problem. A crucial principle, for both humans and machines, is to identify the steps we can take to limit the impact of bias. Bias in AI systems arises mainly in the data or in the algorithmic model. As we develop AI systems, it is critical to train them on data that is as unbiased as possible and to build algorithms that can be easily explained. (Event language: English)
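To make the idea of bias in data and models concrete, here is a minimal sketch (an assumed illustration, not material from the talk) of one simple fairness check, the demographic parity gap: comparing how often a model's positive decision falls on each group in toy hiring data.

```python
# Toy model decisions: (group, decision), where decision 1 = "hire".
# Groups and data are hypothetical, for illustration only.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def positive_rate(records, group):
    """Fraction of positive (hire) decisions for one group."""
    outcomes = [d for g, d in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = positive_rate(decisions, "group_a")  # 3 of 4 hired -> 0.75
rate_b = positive_rate(decisions, "group_b")  # 1 of 4 hired -> 0.25
parity_gap = abs(rate_a - rate_b)             # 0.5: a large disparity
```

A gap near zero means both groups receive positive decisions at similar rates; a large gap, as here, is one warning sign that biased training data has shaped the model's behaviour. Demographic parity is only one of several competing fairness definitions, which is part of why explainable models matter.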


Alexander Fefegha

Alex is the co-founder and head of making at Comuzi, a design and innovation studio working at the intersection of emerging technology and humans. Comuzi's clients include Nike, ASOS, Uber, the BBC, University of the Arts London, Ustwo, Waltham Forest Council and the NHS. Alex's work investigating the ethical implications of AI, algorithmic bias with regard to race and gender, and the future technological interfaces we as humans will interact with has recently been recognised internationally. Alex holds a Master's degree in Innovation from Central St Martins.
With the support of the British Council