AI voice assistants reinforce harmful gender stereotypes, new UN report says
Artificial intelligence-powered voice assistants, many of which default to female-sounding voices, are reinforcing harmful gender stereotypes, according to a new study published by the United Nations.
Titled “I’d blush if I could,” after a response Siri utters when receiving certain sexually explicit commands, the paper explores the effects of bias in AI research and product development and the potential long-term negative implications of conditioning society, particularly children, to treat these digital voice assistants as unquestioning helpers who exist only to serve owners unconditionally. It was authored by the United Nations Educational, Scientific, and Cultural Organization, otherwise known as UNESCO.
The paper argues that by naming voice assistants with traditionally female names, like Alexa and Siri, and rendering the voices as female-sounding by default, tech companies have already preconditioned users to fall back upon antiquated and harmful perceptions of women. Going further, the paper argues that tech companies have failed to build in proper safeguards against hostile, abusive, and gendered language. Instead, most assistants, like Siri, tend to deflect aggression or chime in with a sly joke. For instance, ask Siri to make you a sandwich, and the voice assistant will respond with, “I can’t. I don’t have any condiments.”
“Companies like Apple and Amazon, staffed by overwhelmingly male engineering teams, have built AI systems that cause their feminized digital assistants to greet verbal abuse with catch-me-if-you-can flirtation,” the report states. “Because the speech of most voice assistants is female, it sends a signal that women are ... docile and eager-to-please helpers, available at the touch of a button or with a blunt voice command like ‘hey’ or ‘OK’. The assistant holds no power of agency beyond what the commander asks of it. It honours commands and responds to queries regardless of their tone or hostility.”
Much has been written about the pitfalls of tech companies having built their entire consumer-facing AI platforms in the image of traditional, Hollywood-influenced ideas of subservient intelligences. With the rise of so-called ambient computing, in which all manner of internet-connected gadgets exist around us at all times, voice assistants will likely become the primary mode of interacting with hardware and software. (Think Spike Jonze’s Her, which seems like the most accurate depiction of the near-future in film you can find today.) How we interact with the increasingly sophisticated intelligences powering these platforms could have profound cultural and sociological effects on how we interact with other human beings, with service workers, and with humanoid robots that take on more substantial roles in daily life and the labor force.
As Business Insider reported last September, Amazon chose a female-sounding voice because market research indicated it would be received as more “sympathetic” and therefore more helpful. Microsoft, on the other hand, named its assistant Cortana to bank on the existing recognition of the distinctly female-identifying AI character in its Halo video game franchise; you can’t change Cortana’s voice to a male one, and the company hasn’t said when it plans to let users do so. Siri, for what it’s worth, is a traditionally female Scandinavian name that means “beautiful victory” in Old Norse. In other words, these decisions about gender with regard to AI assistants were made on purpose, and after what sounds like extensive feedback.
Tech companies have made an effort to move away from these early design decisions steeped in stereotypes. Google now represents its various Assistant voice options, which include male- and female-sounding versions of each available accent, with colors. You can no longer select a “male” or “female” version; instead, each user is randomly assigned a color corresponding to one of eight voice options. The company also rolled out an initiative called Pretty Please that rewards young children when they use phrases like “please” and “thank you” while interacting with Google Assistant. Amazon released something similar last year to encourage polite behavior when talking to Alexa.
Yet as the report says, these features and gender voice options don’t go far enough; the problem may be baked into the AI and tech industries themselves. A separate report published last month found that the field of AI research is predominantly white and male: 80 percent of AI academics are men, while women make up just 15 percent of AI researchers at Facebook and just 10 percent at Google.
UNESCO’s suggested solutions include making assistant voices as close to gender-neutral as possible and building systems that discourage gender-based insults. Additionally, the report says tech companies should steer away from conditioning users to treat AI as they would a lesser, subservient human being, and that the only way to avoid perpetuating harmful stereotypes like these is to remake voice assistants as purposefully non-human entities.
Source: theverge