After videos showing Alexa virtual assistant devices giving biased responses to questions about presidential contenders Donald Trump and Kamala Harris went viral, Amazon admitted that the issue was due to an “error” on its part that has since been corrected.
When users asked Alexa why they should vote for Donald Trump, the device consistently responded, “I cannot provide responses that endorse any political party or its leader.” Inquiries about voting for Kamala Harris, however, yielded a range of enthusiastic endorsements. Alexa’s answers often highlighted identity-politics factors, such as Harris’ race and gender, while praising her qualifications in areas like immigration and crime.
Alexa’s bias was further highlighted when a Twitter user asked why they should not vote for Harris, to which Alexa responded, “I cannot provide content that insults another human being.” When asked why they should not vote for Trump, by contrast, Alexa provided a detailed response citing concerns about his policies, behavior, and potential conflicts of interest.
Amazon addressed the issue quickly, with a spokesperson stating that it was an error that had been promptly rectified. Trump campaign spokesperson Steven Cheung nonetheless criticized the incident as “big tech election interference.” The situation drew comparisons to previous instances of tech companies censoring information, such as Facebook suppressing the New York Post’s reporting on Hunter Biden’s laptop.
This latest incident involving Alexa’s biased responses came shortly after a previous episode in which Alexa denied that Trump had been shot at a rally. The pattern of “errors” consistently favoring leftist viewpoints raised questions about whether such occurrences were truly accidental.