AI voice assistants reinforce harmful gender stereotypes, new UN report says

Artificial intelligence-powered voice assistants, many of which default to female-sounding voices, are reinforcing harmful gender stereotypes, according to a new study published by the United Nations.
Titled “I’d blush if I could,” after a response Siri utters when receiving certain sexually explicit commands, the paper explores the effects of bias in AI research and product development and the potential long-term negative implications of conditioning society, particularly children, to treat these digital voice assistants as unquestioning helpers who exist only to serve their owners unconditionally. It was authored by the United Nations Educational, Scientific, and Cultural Organization, otherwise known as UNESCO.
The paper argues that by giving voice assistants traditionally female names, like Alexa and Siri, and rendering their voices as female-sounding by default, tech companies have already preconditioned users to fall back on antiquated and harmful perceptions of women. Going further, the paper argues that tech companies have failed to build in proper safeguards against hostile, abusive, and gendered language. Instead, most assistants, Siri among them, tend to deflect aggression or chime in with a sly joke. For instance, ask Siri to make you a sandwich, and the voice assistant will respond with, “I can’t. I don’t have any condiments.”
“Companies like Apple and Amazon, staffed by overwhelmingly male engineering teams, have built AI systems that cause their feminized digital assistants to greet verbal abuse with catch-me-if-you-can flirtation,” the report states. “Because the speech of most voice assistants is female, it sends a signal that women are ... docile and eager-to-please helpers, available at the touch of a button or with a blunt voice command like ‘hey’ or ‘OK’. The assistant holds no power of agency beyond what the commander asks of it. It honours commands and responds to queries regardless of their tone or hostility.”
Much has been written about the pitfalls of tech companies having built their entire consumer-facing AI platforms in the image of traditional, Hollywood-influenced ideas of subservient intelligences. With the rise of so-called ambient computing, in which internet-connected gadgets of every kind surround us at all times, voice assistants are likely to become the primary way we interact with hardware and software. (Think Spike Jonze’s Her, which remains perhaps the most accurate film depiction of that near future.) How we interact with the increasingly sophisticated intelligences powering these platforms could have profound cultural and sociological effects on how we interact with other human beings, with service workers, and with humanoid robots that take on more substantial roles in daily life and the labor force.
However, as Business Insider reported last September, Amazon chose a female-sounding voice because market research indicated it would be received as more “sympathetic” and therefore more helpful. Microsoft, on the other hand, named its assistant Cortana to bank on the existing recognition of the very much female-identifying AI character in its Halo video game franchise; you can’t change Cortana’s voice to a male one, and the company hasn’t said when it plans to let users do so. Siri, for what it’s worth, is a Scandinavian female name that means “beautiful victory” in Old Norse. In other words, these decisions about the gender of AI assistants were made deliberately, and after what sounds like extensive feedback.
Tech companies have made an effort to move away from these early design decisions steeped in stereotypes. Google now labels its various Assistant voice options, which include a range of accents with male- and female-sounding versions of each, by color: you can no longer select a “male” or “female” version, and each color is randomly assigned to one of the eight voice options for each user. The company has also rolled out an initiative called Pretty Please that rewards young children for using phrases like “please” and “thank you” when interacting with Google Assistant. Amazon released a similar feature last year to encourage polite behavior when talking to Alexa.
Yet as the report says, these features and gendered voice options don’t go far enough; the problem may be baked into the AI and tech industries themselves. The field of AI research is predominantly white and male, a separate report published last month found: 80 percent of AI academics are men, and women make up just 15 percent of AI researchers at Facebook and 10 percent at Google.
UNESCO’s proposed solutions include making assistant voices as gender-neutral as possible and building systems that discourage gender-based insults. The report also says tech companies should stop conditioning users to treat AI as they would a lesser, subservient human being, and argues that the surest way to avoid perpetuating harmful stereotypes like these is to remake voice assistants as purposefully non-human entities.
Source: The Verge