Data used to train artificial intelligence does not ‘reflect the demographic groups we have in Australia’, says researcher
A recent Australian study has raised significant concerns about the use of artificial intelligence (AI) in job recruitment, highlighting potential discrimination risks, especially for candidates with non-American accents or disabilities.
Dr. Natalie Sheard from the University of Melbourne conducted research revealing that many AI recruitment systems are trained on biased datasets predominantly sourced from the United States, which reduces their accuracy in understanding non-native English speakers. For instance, the word error rate for AI transcription can rise to 22% for speakers with Chinese accents, compared with under 10% for U.S.-based English speakers.
The study also underscores a lack of transparency in AI hiring decisions, leaving job seekers and recruiters without clear feedback. While AI hiring tools have not yet faced court challenges in Australia, past incidents, such as Services Australia promotions that were overturned due to flaws in an AI-assisted recruitment process, highlight the potential risks. Dr. Sheard advocates for specific AI legislation and stronger anti-discrimination laws to address these concerns.
Further emphasizing these issues, a viral video showed a candidate interacting with an AI assistant named “Alex the recruiter” from Club Pilates. Viewers described the experience as “dystopian,” “dehumanizing,” and “disrespectful,” lamenting the lack of human interaction. AI bots, which can interview hundreds of candidates in a short time, are being adopted by companies such as L'Oréal to streamline hiring by analyzing candidates' tone and facial expressions. Despite arguments that AI reduces bias and increases efficiency, critics argue it lacks the instinct, empathy, and nuance of human recruiters.
Recruitment expert Tammie Ballis warns that AI interviews can be “irresponsible and dangerous,” citing a lack of transparency, candidates' inability to ask questions or seek feedback, and the potential for technical malfunctions. While AI may help with tasks such as resume screening or writing job ads, Ballis insists that interviewing remains fundamentally a human responsibility.
In response to these concerns, the Australian Human Rights Commission and the Actuaries Institute have collaborated to produce guidance on preventing discrimination in AI applications. They emphasize the necessity for rigorous protections to ensure the integrity of anti-discrimination laws in the face of rapid technological advancement.
As AI continues to permeate the recruitment landscape, experts urge organizations to implement thorough reviews and human oversight to ensure ethical and responsible use of AI. This includes evaluating AI tools both before deployment and at regular intervals afterwards, to support equity and diversity while mitigating the risk of bias.
The study serves as a critical reminder of the potential pitfalls of relying heavily on AI in recruitment and the importance of maintaining human elements in the hiring process to ensure fairness and inclusivity.