
Editor's Note: This article originally appeared on Autodesk’s Redshift, a site dedicated to inspiring designers, engineers, builders and makers.

Remember those set-and-forget robot vacuum cleaners that were all the rage several years ago? In addition to being a fun (and useful) novelty, they unintentionally provided a vivid example of why diversity in artificial intelligence is essential.

One night in South Korea, where it’s common to sleep on the ground, a vacuum robot “ate” a woman’s hair while she slept. The robot had no malicious intent; it acted as it was programmed to do. But that’s just it: The implications of different cultures weren’t considered during the product-development process. Nobody asked, “Does everybody who will use this product sleep on a high bed, and what needs to be considered for those who don’t?”

As artificial intelligence (AI) becomes increasingly pervasive—in many more ways than household robotic tools—it’s more important than ever to ensure diversity in the development teams creating it. Organizations typically focus on the more obvious aspects of diversity: ethnicity, gender, and age. This ignores some of diversity’s most vital elements: culture, tradition, and religion. Failing to consider the full scope of diversity could have seriously damaging impacts on many populations that AI is designed to assist.

The Three Dimensions of Diversity

That full scope of diversity comprises three primary dimensions: human diversity, cultural diversity, and systems diversity. Human diversity refers to things about people that are immutable—such as race, ethnicity, and age—the traditional dimensions of diversity. Cultural diversity includes qualities that are core to who a person is but are changeable, such as learning, thinking, and working styles; religion; ethics; and language. And finally, systems diversity underscores how systems—education, empowerment, and performance management, for example—interact with one another.

These dimensions of diversity are applicable to any business situation, particularly when it comes to artificial intelligence. When putting a team in place to develop an AI system, it's crucial for that team to consider whether any elements of human, cultural, or systems diversity have been overlooked while building the system. The challenge is that unless the team members themselves represent those dimensions of diversity, it's almost impossible to have people present to ask the necessary questions: If something is not part of their reality (like sleeping on the floor instead of in a bed), they won't even think about it. That creates the danger of blind spots, and with AI, those blind spots compound and become more dramatic over time.

The Negative Effects of Ignoring Human Populations

I first started thinking about AI diversity and its consequences while working in talent acquisition. Anyone who has ever run a search on a profile-based recruiting or job-search website may have seen the compounding negative effects of learned intelligence.

Consider a standard candidate search for an engineer, just entering the technical qualifications for a role—which produces a page full of Caucasian males. Now, keep all of the technical qualifications, and add in “Society of Women Engineers” as a search term to see what changes. Not surprisingly, a page full of women, who hadn’t shown up on the first search, appears. And if you swap out “Society of Women Engineers” for “Florida International University,” a college in Miami, that search returns a list of Latino engineers who hadn’t come up on the first search, either.

Those last two searches aren’t surprising at all. What’s notable is the first search—the default. AI systems are designed not to create disruptive experiences; by definition, they aim to create seamless experiences. AI is not going to say: “You haven’t done this before. Try something entirely different that I have no evidence you’ll like at all.”


With AI, if I select profiles from the first search, the system learns my preference and continues to surface that same type of profile and candidate time and time again. In this way, whole groups of people can be systematically filtered out. And if certain groups are absent from the data sets an AI takes into consideration, then in the long run, problems or challenges that fall outside those data sets may never be solved at all.
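The feedback loop described above can be sketched in a few lines, purely as an illustration; the candidate pool, the group labels, and the scoring rule here are all hypothetical, not any real recruiting system:

```python
# Hypothetical sketch of a ranker that learns from which profiles a recruiter
# clicks. If early selections all come from one group, that group's score
# rises and the other groups gradually vanish from the first page of results.
from collections import Counter
import random

random.seed(0)

# A toy candidate pool: equal numbers of profiles from three groups.
candidates = [{"id": i, "group": g}
              for i, g in enumerate(["A", "B", "C"] * 20)]

weights = {"A": 1.0, "B": 1.0, "C": 1.0}  # learned preference per group

def top_results(n=10):
    """Rank candidates by learned group weight (ties broken randomly)."""
    return sorted(candidates,
                  key=lambda c: (weights[c["group"]], random.random()),
                  reverse=True)[:n]

# Simulate 50 rounds in which the user only ever selects group "A" profiles,
# and the system nudges its weights toward whatever was selected.
for _ in range(50):
    for c in top_results():
        if c["group"] == "A":            # the user's (biased) selections
            weights["A"] += 0.1          # positive feedback
        else:
            weights[c["group"]] -= 0.02  # shown but not chosen

final = Counter(c["group"] for c in top_results())
print(final)  # group "A" dominates; "B" and "C" no longer appear on page one
```

Although the pool starts perfectly balanced, a few rounds of one-sided feedback are enough to make the other groups disappear from the results entirely, which is exactly the "seamless experience" problem: the system never volunteers candidates it has no evidence the user wants.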

History bears this out in the pre-AI world in architecture for public housing. Cities took idealized designs from Europe and set them next to freeways and other less desirable locations, with little to no public or green space. Over time, that adverse physical environment had an impact on how people grew and developed as human beings. Although that environment gave birth to hip-hop culture, many other negatives far outweighed the positives. Imagine an AI world making largely autonomous decisions about space planning and architecture. Who ensures the system is thinking about available space and light so that people aren’t living in shadows and without parks?

The Need for Diverse Creators of AI

Nothing compares to having a place at the table, influencing changes that will shape the future. When it comes to AI, including a wider range of decision makers will require hand-in-hand cooperation among businesses, schools, government agencies, and other institutions.

Currently, however, no government agency exists to address this issue. AI Now, an initiative created to address these ethical considerations, is still taking shape. The Obama administration produced a white paper on the topic, in the context of AI's impact on the workforce. What the Trump administration might do is still unclear.

That means a lot of the burden of ensuring diversity will fall to the businesses that are building AI systems and the business leaders who are forming the development teams. To start, if everyone on the development team looks the same, the team probably isn't diverse. Beyond race and ethnicity, it's key to consider qualities such as language and nationality. The worldview of someone from a less developed country in Asia will likely be very different from that of someone from Germany, and each will ask different but important questions. From there, it's a matter of weighing other qualities to assemble the right team for the problem the organization is looking to solve through AI.

The Time for Diversity in Artificial Intelligence Is Now

AI and machine learning are relatively young fields, which means these diversity discussions are happening early, while the technology is still taking shape and the key players are still emerging. That's good news.

It's also encouraging that a different consciousness around diversity exists today. Unlike 20 years ago, the world understands what "diversity" means. That will go a long way toward ensuring other cultures and worldviews are considered in these systems; after all, diversity in AI is a global issue with implications for everyone.

Building complete AI systems is not just about U.S. businesses solving problems for others, but about populations being able to solve problems for themselves. This, again, underscores why having these populations at the table is important. If they aren’t saying, “You should consider this in your artificial intelligence,” then they never get served.

That’s the challenge for the AI community. Thankfully, it’s still early enough to do something about it. That means the future of AI can still be shaped, for the better, through strategic diversity efforts.


Danny Guillory is the head of global diversity and inclusion at Autodesk, where he works to integrate all dimensions of diversity and inclusion into many parts of the organization.
