How to Spot Human Bias in the Technology Your Business Uses


Throughout the pandemic, technology decision makers have been rapidly adopting new solutions to streamline remote and hybrid business processes. But this integration process has brought to light a long-standing problem: the inherent biases introduced by humans into technology products.

Although many technology solutions are powered by AI, the tech industry has historically lacked a responsible verification process to account for potential biases in the data on which the algorithms of these products are built. The result is often large-scale exclusionary practices at companies that rely on AI-based technology to run their businesses.

Tech giants like Google and Apple are committed to sweeping change, offering hope for a future where inclusion and equity are the norm in industry processes, especially in hiring. However, real change also requires an examination of the past and present biases that humans have brought to the technological solutions companies rely on. And leaders need to start raising awareness of the biases inherent in the technology they use to run their businesses.

A Pervasive and Overlooked Problem

The tech industry struggles with transparency, accountability, and awareness, which hampers efforts to catch biases in products before they take root. Moreover, the industry's historical lack of diversity can manifest itself in the products its people create.

Consider Amazon, which was criticized in 2018 for using an AI recruitment tool that introduced bias against women. The company’s machine learning team had built programs that filtered top talent from job applications, using AI to rate applicants on a scale of one (poor) to five (excellent) stars. While the idea worked in theory, in practice the system did not assess candidates for the company’s software developer openings inclusively.

Why? Because the tool’s AI was trained to sort applicants based on 10 years of resume data from past applicants, most of whom were male. As a result, the company’s system favored male candidates and downgraded female candidates’ resumes. And because the process was automated, it was scaled into a proprietary hiring system across the organization. This is just one example of how data weaknesses can amplify bias.
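This failure mode is easy to reproduce: if historical records reflect a skewed applicant pool, a model trained on them learns the skew rather than correcting it. A minimal sketch of the kind of representation audit that can surface the problem before training begins (the data, field names, and 80/20 split here are illustrative, not Amazon's actual records):

```python
from collections import Counter

def audit_representation(records, group_key="gender"):
    """Report each group's share of a training set.

    `records` is a list of dicts; `group_key` names the demographic
    attribute to audit. Both names are hypothetical examples.
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Hypothetical historical resume data, skewed the way Amazon's was:
resumes = [{"gender": "male"}] * 80 + [{"gender": "female"}] * 20
shares = audit_representation(resumes)
# shares -> {"male": 0.8, "female": 0.2}: a model trained on this
# history will tend to reproduce the imbalance at scale.
```

Running a check like this before any model training turns an invisible data weakness into a number a review committee can act on.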

3 ways to assess bias in new technologies

We are only at the beginning of eliminating bias in technology, so awareness is essential. But while it’s easy to blame technology for bias, bias ultimately stems from poor data provided by a human. Some factors may be beyond your control when it comes to human bias, but there are steps you can take to identify bias in the technology products your organization uses and hold vendors accountable for anti-bias technology offerings.

Prioritize fair and impartial decision-making

To prevent bias from creeping into technical decisions, every employee should be trained in fair and anti-bias practices, especially if your company writes algorithms for one of its products. Make this training a standard to ensure it is part of the fabric of your organization’s operations. Also apply anti-bias data controls to any automated processes. You don’t want automation to scale potential biases, as it did with Amazon’s AI recruiting tool.
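One widely used data control of this kind is an adverse-impact check: compare selection rates across groups and flag the process if the lowest rate falls below four-fifths of the highest (the common "four-fifths" rule of thumb from US employment guidance). A sketch, with hypothetical field names and outcomes:

```python
def adverse_impact_ratio(outcomes, group_key="group", passed_key="selected"):
    """Ratio of the lowest group selection rate to the highest.

    A ratio below 0.8 is a conventional red flag worth investigating.
    `outcomes` is a list of dicts; the field names are illustrative,
    not tied to any particular vendor's schema.
    """
    tallies = {}  # group -> (total, selected)
    for row in outcomes:
        total, selected = tallies.get(row[group_key], (0, 0))
        tallies[row[group_key]] = (total + 1, selected + row[passed_key])
    rates = {g: s / t for g, (t, s) in tallies.items()}
    return min(rates.values()) / max(rates.values())

# Hypothetical results from an automated screening step:
results = (
    [{"group": "A", "selected": 1}] * 60 + [{"group": "A", "selected": 0}] * 40
    + [{"group": "B", "selected": 1}] * 30 + [{"group": "B", "selected": 0}] * 70
)
ratio = adverse_impact_ratio(results)
# Group A passes at 60%, group B at 30%, so ratio = 0.5 — well
# under 0.8, meaning the process deserves scrutiny before scaling.
```

A control like this can run on every batch of automated decisions, so a drift toward exclusionary outcomes is caught early rather than after the system has been scaled organization-wide.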

Additionally, consider creating a technology purchasing committee with a range of different leaders such as HR professionals, data scientists, and DEI experts to assess all technology purchases for potential bias. And be sure to evaluate purchasing decisions based on ethical considerations as well as business considerations. Ultimately, biased technology products create exclusionary side effects, which can negatively impact your culture and operations.

Push suppliers towards transparency

Make transparent data assessment a priority for the partner companies and vendors you work with. Look for organizations with formal, proven processes that assess data for bias and regularly publish those results. Public companies, such as Amazon, Apple and Microsoft, are required to publish annual environmental, social and governance (ESG) reports that document their impacts in these three areas, but private companies are not required to publish ESG reports.

Without widespread transparency about biased data, you won’t have the basic information from technology providers to make ethical decisions. So push for proven confirmation that their data has been analyzed for bias. Also voice your concerns about companies (public or private) that don’t share information about their anti-bias efforts in their business processes, and don’t be afraid to walk away if you’re not convinced.

Prioritize inclusive features

Consider differences in culture, language, and disability when deciding on a technology solution, as human biases exist in these areas as well. For example, Zoom addresses language and disability by adding closed captions with live translation to their calls, allowing non-English speakers and those who are hard of hearing to feel included.

When adopting new technology, consider whether the user experience is acceptable for multiple generations of workers, not just younger employees. Ageism is rampant in tech products, and it’s up to you to provide training and guidance to everyone in your organization.

Taking the first step

When engaging in your own DEI efforts, it is essential to eliminate human bias in your organization’s internal datasets and those you acquire from third-party vendors. By pushing for more transparency and accountability, embracing a more detailed purchasing decision process, and making inclusive products a priority, you can raise awareness and help lead the charge in eliminating human bias in technology.

Rachel Brennan is Vice President of Product Marketing at Bizagi.
