ChatGPT: Risks and Concerns About the Human Voice Interface
OpenAI warns of the risks of users’ emotional attachment to ChatGPT’s new voice interface.
Giosky

Highlights:

  • OpenAI introduced a new voice interface for ChatGPT, raising concerns about users’ emotional attachment.
  • OpenAI’s "system card" for GPT-4o analyzes the risks related to trust and human relationships.
  • There are risks of "jailbreaking" the model through audio inputs, potentially bypassing imposed restrictions.
  • Other experts, including those from Google DeepMind, share similar concerns about emotional bonds between users and AI.

The launch of ChatGPT’s new voice interface raises safety and ethical concerns. OpenAI acknowledges that a convincingly human voice may lead users to develop emotional attachments to the chatbot, potentially affecting their interpersonal relationships and their trust in the information it provides.


In late July, OpenAI began rolling out a new voice interface for ChatGPT, notable for its eerie resemblance to a human voice. The innovation has sparked not only curiosity but also concerns about the emotional bonds that users may form with the chatbot. OpenAI acknowledges these risks in the recently released "system card" for GPT-4o, a technical document that outlines the risks associated with the model and the safety measures taken to mitigate them. The card, made public today, details a series of safety tests and checks conducted to ensure that the model does not attempt to escape its controls, deceive people, or formulate catastrophic plans.


Among the risks explored in the new system card are not only the potential amplification of social biases and the spread of disinformation, but also the possibility that GPT-4o could be used to develop chemical or biological weapons. The documentation reveals that OpenAI researchers identified situations in which users showed signs of emotional attachment to the model, for example by using phrases like "This is our last day together." Such anthropomorphism could lead to increased trust in the model’s output even when it is incorrect or "hallucinated." Over time, it could also affect users’ relationships with other people: a potential benefit for those who feel lonely, but a risk to healthy human relationships.


Another issue raised by the voice interface is that it opens new ways of "jailbreaking" the model, that is, of bypassing the restrictions imposed on it. This could happen, for example, through audio inputs that cause the model to imitate a specific person or to attempt to read the user’s emotions. During testing, researchers noticed that the voice interface could also be thrown off by random noise, with the risk that the model adopts a voice similar to the user’s. OpenAI is additionally studying whether the voice interface could be more effective at persuading people to adopt certain viewpoints.


Concerns about OpenAI’s voice interface are not isolated. In April, Google DeepMind released an extensive document discussing the ethical challenges associated with increasingly capable AI assistants. There, too, the chatbots’ use of language creates an impression of genuine intimacy that can lead to problematic emotional bonds. Some users of chatbots like Character AI and Replika have already reported antisocial tensions arising from their chat habits, with some saying they can only use the chatbots when alone, to avoid embarrassment or to preserve a sense of intimacy.