OpenAI has announced that it is rolling out the Advanced Voice Mode feature to ChatGPT Plus and Team subscribers. The new feature makes conversations with the AI chatbot feel more natural and friendly, enhancing the voice experience. The rollout to paid subscribers is planned to be completed before the end of the current week.
The feature is not available in the EU and some other regions, OpenAI confirms
OpenAI has also revealed that Advanced Voice Mode is not supported in the European Union or in some other countries, including the UK, Switzerland, Iceland, Norway, and Liechtenstein.
What some people found surprising was a short message from OpenAI’s official account on X, which read, “Advanced Voice is not available in EU, the UK, Switzerland, Iceland, Norway and Liechtenstein.” In response, X user Dean W Ball shared an extract from the EU AI Act, the piece of EU legislation that prohibits placing on the market, putting into service, or using AI systems to infer the emotions of a natural person.
The provision cited as potentially disallowing OpenAI’s Advanced Voice Mode –
“the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons.” (Related: Recital 44)
On this reading, ChatGPT’s Advanced Voice Mode would be illegal in EU workplaces and schools, since it is capable of identifying the user’s emotions.
OpenAI may yet be granted some kind of exception, but as things stand, the AI Act risks leaving European countries trailing behind the rest of the world as AI progresses.
Of course, there is the distinction between the letter and the spirit of the law to consider. Even so, since Advanced Voice Mode can detect the user’s emotions and adjust its responses accordingly, it would currently appear to breach this provision as written.
The latest update to Advanced Voice Mode
The update adds five new voices – Arbor, Maple, Sol, Spruce, and Vale – joining the previous Juniper, Breeze, Ember, and Cove. These nature-oriented names reflect one of Advanced Voice Mode’s objectives: more fluid, natural conversation.
The new voices bring the total number to nine. In the previous version, Voice Mode operated with average latencies of 2.8 seconds for GPT-3.5 and 5.4 seconds for GPT-4. That latency came from a data processing pipeline involving three separate models: one transcribes the user’s voice into text, GPT-3.5 or GPT-4 generates a text response, and a third model converts that response back into speech. OpenAI also found that this multi-model process is lossy for GPT-4: the language model never hears the audio directly, so information such as tone of voice is discarded along the way.
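The three-stage pipeline described above can be sketched in code. The function names and stub implementations below are hypothetical stand-ins (the real OpenAI components are not public); the sketch only illustrates the data flow, why the stage latencies add up, and where the loss happens.

```python
# Minimal sketch of the legacy Voice Mode pipeline, under the assumption
# that it chains three independent models. All functions here are
# hypothetical stubs, not real OpenAI APIs.

def transcribe(audio: bytes) -> str:
    """Speech-to-text stage: tone, pauses, and background sound are
    discarded here, which is why the pipeline is lossy."""
    return audio.decode("utf-8")  # stub: pretend the audio is encoded text


def generate_reply(text: str) -> str:
    """GPT-3.5/GPT-4 stage: the model only ever sees the transcript,
    never the original audio."""
    return f"You said: {text}"


def synthesize(text: str) -> bytes:
    """Text-to-speech stage: turns the model's text reply back into audio."""
    return text.encode("utf-8")


def legacy_voice_mode(audio: bytes) -> bytes:
    # Three sequential models mean three sequential latencies, which is
    # how the reported 2.8s / 5.4s averages accumulate.
    return synthesize(generate_reply(transcribe(audio)))
```

Because each stage must finish before the next begins, reducing the number of hops (for example, one natively multimodal model handling audio end to end) is the obvious way to cut both the latency and the information loss.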