ALBAWABA – OpenAI, the American artificial intelligence research organization, has expressed concerns over the new human-like voice mode in ChatGPT, its chatbot and virtual assistant. The company is particularly worried that users may come to rely on ChatGPT for social and romantic relationships.
ChatGPT human-like voice mode
The new advanced human-like voice mode in ChatGPT is remarkable: it responds in real time, adapts to interruptions, and carries on conversations with laughs and whispers, much as humans do.
Additionally, the voice mode can gauge a speaker's emotional state from their tone of voice. Users talk to ChatGPT's voice mode in natural, conversational language, and the bot replies with human-like sounds, which can lead users to trust the tool more than they should.

Experts say that artificial intelligence tools still make mistakes, and users may place too much confidence in the bot, which can be dangerous. Recent surveys showed that some users have formed romantic relationships with artificial intelligence chatbots, alarming experts and the team behind ChatGPT at OpenAI.
OpenAI responds
OpenAI stated that users’ interactions with artificial intelligence chatbots can affect real social interactions over time. The company has significant concerns that the way humans interact with chatbots might shape their real-life relationships if people are influenced by the AI.