Diversity within the artificial intelligence industry

Artificial intelligence (AI) technology holds incredible potential to positively impact the way society works, communicates, cares and understands. In fact, AI innovations are already being implemented in sectors such as health, computing, science and marketing to complement the expertise of professionals worldwide. This presents the possibility that AI can be beneficial to all diverse populations around the globe. 

However, without a fair reflection of these diverse communities within the AI industry, this may not come to fruition. That’s because an unjust representation of the various gender identities, ethnicities and sexual orientations typical of our global population at a decision-making level will not only engender disadvantages within the workforce, but also limit the capacity of AI to serve the world.

Drawing on statistics that demonstrate the current monoculture of the AI industry, this article will examine the importance of a diverse workforce in AI and explore the ways in which IntelliHQ strives to right the scales.


An emerging industry already unequal 

Although the industry is still emerging, the world's diverse communities are already clearly underrepresented in the AI workforce.

Gender identity
  • In computer science departments of the world's top universities, less than 17% of tenure-track professors identify as women, non-binary or transgender.
  • In two thirds of the countries leading AI innovation, the AI skills penetration rate for women, non-binary people and transfolk is lower than it is for men.
Race and ethnicity
  • In computer science departments of the world's top universities, 67% of tenure-track professors are white.
  • Less than 1% of these tenure-track professors are Black, African, Indigenous or of Hispanic, Latino or Spanish origin.
Sexual orientation
  • Over 40% of LGBTQIA+ people in the AI field have experienced discrimination or harassment at some point in their study or career.
  • Almost 82% of LGBTQIA+ people in the AI field consider a lack of role models in the industry to be a major career obstacle.

Statistics courtesy of Stanford University's Artificial Intelligence Index Report 2021


Risks of a homogenous AI workforce 

An AI workforce unrepresentative of the communities it serves can foster damaging internal cultures and undermine the viability of AI itself. The risks stem from homogeneity at the decision-making level and extend to the efficacy of the final innovations.

  • Uneven distribution of power in leadership and decision-making 

The diversity crisis within the AI workforce trickles down from the leadership level. With a disproportionate share of power currently in the hands of a predominantly white, male-identifying demographic, industry priorities for AI advancement, data collection and use are not representative of a true cross-section of society.

Timnit Gebru, cofounder of Black in AI, told the MIT Technology Review that a damaging bias has already taken hold in the emerging field.

“There is a bias to what kinds of problems we think are important, what kinds of research we think are important and where we think AI should go,” she said. “If we don’t have diversity in our set of researchers, we are not going to address problems that are faced by the majority of people in the world”.

It is only when historically underrepresented and vulnerable communities fairly populate the decision-making space within the AI industry that the issues of the most privileged will cease to dominate the sector’s progression.

  • Reduced scope of datasets

David Quigley is the Founder and Director of Medmin, a consultancy agency that helps hospitals improve the quality of clinical information. He acknowledges that dataset diversity and control should be a chief consideration in AI and machine-learning frameworks.

“It’s less about implementing large amounts of data, but the right data that accurately reflects the population,” he says. “Ethically governed datasets are inclusive and key to relevant AI technology”.

This involves collecting data that accounts for the factors that shape the population's everyday experiences, including ethnicity, gender identity and sexual orientation. With the AI industry's relatively homogenous makeup, this data is not sought, and the distinct issues that these diverse communities face are consequently not accounted for in AI models.

With the vast majority of AI studies assuming gender is binary, and just 2% of studies funded by the National Cancer Institute meeting diversity goals, it's clear that the resulting AI models will grow disconnected from society at large.

  • Perpetuation of social hierarchies and inequality in AI modelling

Misrepresentative datasets fed to AI models can encode bias that reflects historical patterns of discrimination, underrepresentation and devaluation. The resulting systems are fundamentally flawed, widening inequalities and diminishing their population-wide applicability.

As seen in Amazon’s facial recognition technology failing to accurately perceive darker skin tones, Microsoft’s Twitter-powered chatbot, Tay, devolving into a misanthropic beast, and Berlin Transport’s gender recognition technology neglecting the existence of non-binary and gender-diverse people, insufficient AI models are already causing damage.

Sociologist Ruha Benjamin, author of Race After Technology: Abolitionist Tools for the New Jim Code, notes that AI is revealing that the social hierarchies in our everyday lives have corresponding virtual ones. She stresses that the potential to break these hierarchies lies with the diverse makeup of those creating the technology.

“I realise that we put so much investment in being saved by these objects we create – by these technologies,” says Benjamin. “But our real resource is ourselves, our communities, our relationships, our stories, our narratives”.


Breaking the bias at the beginning

The AI industry is just emerging. That means there is a chance to ensure its development is in safe hands for tomorrow. 

That’s why initiatives like IntelliHQ’s Diversity in AI program have been established to extend opportunities to women, non-binary people, transfolk and people of colour to become leaders in the industry. 

Dr Steph Chaousis of IntelliHQ says that equalising opportunity now is the only way to set the groundwork for an applicable future for AI.

“An industry that fairly represents the communities that will put AI into play secures an illustrious future for AI technologies,” she says. “There is so much potential for these technologies to revolutionise the way we live and work, solve problems and protect those who are vulnerable, but the change needs to begin at the top – before it’s too late.”

“The Diversity in AI program has been designed to uplift the right voices, find the right data and innovate for the actual society we live in today. AI will energise an equitable future only if we build an equitable base”.

With initial funding from the Australian Government’s Women in STEM and Entrepreneurship (WISE) program, the Diversity in AI program brings together Australia’s brightest underrepresented innovators to form a community of today’s and tomorrow’s AI leaders.

Participants learn valuable skills that help them to excel in an AI-focused career with confidence, support and knowledge that they’re setting the groundwork for underrepresented populations of the present and future. 

IntelliHQ’s Diversity in AI program provides the tools for women, non-binary people, transfolk and POC to be pivotal in the technological transformation ahead of us.
