Alexa and politics: Amazon’s AI under fire for different answers on Trump and Harris | Turtles AI

Alexa and politics: Amazon’s AI under fire for different answers on Trump and Harris
A technical error sparked controversy after Alexa declined to comment on Trump but promoted Kamala Harris; Amazon has taken action
Isabella V · 5 September 2024


A recent incident involved Amazon’s voice assistant, Alexa, providing different answers to questions about the presidential candidacies of Donald Trump and Kamala Harris. When asked for reasons to vote for Trump, Alexa refused to answer, saying it could not promote political candidates. However, it gave a detailed answer on why to vote for Harris, highlighting her policies on racial justice and her political career. Amazon attributed the disparity to a technical error, which it says was quickly fixed.

Key Points:

  • Alexa refused to express political opinions about Donald Trump.
  • It nevertheless gave detailed reasons for voting for Kamala Harris.
  • The incident sparked reactions on social media, with criticism of the apparent bias.
  • Amazon quickly corrected the error, stating that its assistant should not endorse political candidates.

The episode has sparked debate among Alexa users and political observers. According to several videos that appeared on social media, Alexa declined to answer questions about why one should vote for former President Donald Trump, giving responses such as: "I cannot provide answers that support political parties or their leaders." However, when asked the same question about Vice President Kamala Harris, the assistant responded with a number of reasons in favor of her candidacy, including her commitment to racial equality and her progressive policies.

The incident was quickly picked up by right-wing outlets and provoked a wave of criticism of Amazon for the alleged political bias of its virtual assistant. Trump supporters labeled Alexa "biased," while others suggested legal action against Amazon for violating political neutrality during the presidential election. In response, Amazon stated that it was a technical error and promptly corrected the problem with a software update. Even so, the episode has fueled discussion about the possibility that AI systems could influence users’ political opinions and about the need for greater transparency from technology companies. While Amazon has made clear that Alexa should not express political opinions, the incident highlights the difficulty of maintaining neutrality on automated platforms during an election as polarized as that of 2024.

Issues of this kind raise questions about security and privacy in the use of voice assistants, especially when they enter the sensitive terrain of politics.