Microsoft's chief diversity officer says diversity and investment in the workforce can help fix AI's bias problems.
At the beginning of 2023, Microsoft found itself in a PR firestorm. The company was working to demonstrate its progress in artificial intelligence following a multi-billion-dollar investment in OpenAI, the maker of ChatGPT. It added an AI-powered chatbot to its Bing search engine, which placed it among the first legacy tech companies to fold AI into a flagship product, but almost as soon as people started using it, things went sideways.
A New York Times journalist sparked international intrigue over a conversation he had with Bing that left him "deeply unsettled". Soon, users began sharing screenshots that appeared to show the tool using racial slurs and announcing plans for world domination. Microsoft quickly announced a fix, limiting the AI's responses and capabilities. In the following months, the company replaced its Bing chatbot with Copilot, which is now available as part of its Microsoft 365 software and Windows operating system.
Microsoft is far from the only company to face AI controversy, and critics say the debacle is evidence of broader carelessness around the dangers of artificial intelligence within the tech industry. For example, Google's Bard tool famously answered a question about a telescope inaccurately during a live press demo, a mistake that wiped $100bn (£82bn) off the company's value. The AI model, now called Gemini, later came under fire for "woke" bias after the tool seemed unwilling to produce images of white people for certain prompts.
Still, Microsoft says AI can be a tool to promote equity and representation – with the right safeguards. One solution it's putting forward to help address the issue of bias in AI is increasing diversity and inclusion of the teams building the technology itself.
"It's never been more important as we think about building inclusive AI and inclusive tech for the future," says Lindsay-Rae McIntyre, Microsoft's chief diversity officer, who joined the firm in 2018.
A former teacher for the deaf, McIntyre has spent over 20 years in human resources in the tech industry, including at IBM, and has lived and worked throughout the US as well as in Singapore and Dubai. Now, she says her team at Microsoft is increasingly focused on embedding inclusion practices into the firm's AI research and development to make sure there is better representation "at all levels of the company".
There's good reason for this focus. Embracing AI products has brought new life to the nearly 50-year-old brand. In July, the company reported quarterly revenue was up 15% to $64.7bn (£49.2bn), due largely to growth from its Azure cloud business, which has greatly benefited from the AI boom as customers train their systems on the platform.
The efforts also push the company closer to its long-time goal of building technology "that understands us", as its CEO Satya Nadella recently put it. But in order for AI – or more specifically large language models such as the ones behind ChatGPT – to be empathic, relevant and accurate, McIntyre says, they need to be trained by a more diverse group of developers, engineers and researchers.
This may not be a guaranteed fix. The large language models that underpin tools such as Copilot, ChatGPT and Gemini are built by scraping massive data sets from across the internet. If bias is present in the training data, it's exceedingly difficult to stop it from cropping up later, which is of particular concern as artificial intelligence is used out in the real world.
Companies are embracing tools to help with hiring, for example, but experts warn they can reinforce race and gender biases as they filter candidates. Recent research from MIT found that AI interpreted professions such as "flight attendant", "secretary", and "physician's assistant" as feminine jobs, while "fisherman", "lawyer", and "judge" were deemed masculine jobs. It also labelled emotions including "anxious", "depressed" and "devastated" as feminine. Meanwhile, some US police departments have begun using AI to write up crime reports.
Still, Microsoft believes that AI can support diversity and inclusion (D&I) if these ideals are built into AI models in the first place. The company maintains that inclusion has always been a large part of its culture, from the Xbox Adaptive Controller designed for users with special needs to the many accessibility features in its Microsoft 365 products. But the pressure to keep up with the pace and scale at which AI is growing is intensifying Microsoft's need to invest in its own diversity.
In its 2023 diversity report, Microsoft reported that 54.8% of its core workforce is made up of racial and ethnic minorities, roughly in line with rivals such as Apple and Google. Meanwhile, Microsoft's workforce is 31.2% women, a few percentage points lower than Apple and Google.
Below, McIntyre talks with the BBC about addressing biases in generative AI, how inclusive allyship looks across different cultures, and what Microsoft is doing to ensure its own talent can keep pace with AI's rapid evolution.
How is Microsoft addressing bias in generative AI that afflicts these types of tools?
We do a tremendous amount of investment [around this] but we also thread education on elements like bias and inclusion generally through all the work that we do.
Microsoft believes AI technologies should perform fairly. We are continuing to invest in research on identifying, measuring and mitigating different types of fairness-related harms, and we are innovating in new ways to proactively test our AI systems, as outlined in our Responsible AI Standard. To do this, Microsoft works with a wide array of experts, including anthropologists, linguists and social scientists – all of whom bring valuable perspectives to the table that advance and challenge the thinking of our engineers and developers.
To develop and deploy AI systems responsibly, we are putting D&I at the centre by ensuring a wide range of backgrounds, skills and experiences are represented across the Microsoft teams that are envisioning and building AI so that it's developed in a way that's inclusive of all users. We also ensure our leaders who are making the decisions about these teams and products have the tools to understand issues of privilege, power and bias.
Microsoft received backlash this summer following headlines around cutting its diversity and inclusion team. The company later clarified the team remained intact. Has there been a change to the company's commitment here?
We are really fortunate Microsoft's commitment to diversity and inclusion is unwavering and unchanged. We're expanding. In that news cycle, I was reminded of how many people are depending on Microsoft to have inclusive tech, to make sure that we are continuing to stay on the forefront of the diversity and inclusion experiences, to share our learnings and to listen to how other people are experiencing this same moment in the industry. Not because we know more than anybody else – because for sure we don't, we're learning all the time – but we do have resources that not all companies have.
Are there any other focus areas when it comes to AI and inclusivity?
One that is pretty dynamic is [making AI accessible] in more languages. When you're operating in a language different from your primary language, your brain has to work harder and you're less productive and less able to be your true authentic self. There's a real ongoing opportunity in AI to be even more thoughtful and relevant on the totality of global language. It's an enormous task, but we’re learning so much every day and the AI is just getting smarter.
Beyond AI, we did a partnership with external organisations who support the LGBTQ+ communities and our own employee resource group about adding pronouns to user profiles in Microsoft 365. [This allows people] to feel seen, recognised and cared for in the technology, which is the experience we want them to have.
How do you help a workforce of 230,000 understand allyship when it looks different in various cultures?
We have a core strategy but also localise it. In India, for example, we just ran a D&I experience at the intersection of race, ethnicity and religion for managers and leaders. Through feedback, we were also able to have Copilot pop up with responses that are both qualitative and quantitative so that we can be more effective as we deploy it to the whole of the workforce.
We're also doing that in Australia and New Zealand for the indigenous communities. And in the Middle East and Africa, we wanted more support for employees and their families who are going through menopause – something that isn't always talked about in cultures around the world.
We also have to think about something as simple as the devices and the languages people are using, along with the accessibility features. Everyone wants an empathetic AI that is knowledgeable and relevant. We want to feel like the AI understands us and what we're trying to achieve. So, being able to have cultural context in the AI experience will result in a more satisfying experience – and those expectations that we put on the technology ultimately come from having a diverse workforce working on and building and creating the AI from the beginning.
Microsoft by the numbers:
228,000: Microsoft’s global headcount
190: Number of countries in which Microsoft operates
9: Global employee-led resource groups that foster an inclusive workplace
5 billion+: Number of chats and images generated through Copilot to date
60,000+: Azure AI cloud customers building AI into their apps and services
How do you make sure talent doesn't get left behind as AI is evolving?
We have an incredible AI learning hub that allows us to access the latest and greatest learnings… and the employee resource group is doing upskilling, leveraging some of the content to even teach themselves. Whether they are an engineer or a field seller or a program manager, we offer AI [courses] to help make us more productive – even those of us in HR. We have a huge HR team that's focused on how to leverage AI for the HR experience at Microsoft.
How is Microsoft leveraging AI in HR?
For example, Copilot is helping us get back to employees with questions faster. We are also deploying AI skilling courses, so all employees have a shared understanding of AI technology as we use it more and more in our work across business areas.
For any organisation considering how to bring AI into HR work, there are three strategies to consider. First, form a community of learning with experts to build acumen together. Get curious with the technology, such as engaging with Copilot to learn how it can help you build inclusion in your organisation. And finally, ensure a human-centred design that puts empathy and people experience at the centre.