Amazon's Alexa May Soon Start Speaking To You In A Dead Family Member's Voice
Announcing the new feature at Amazon’s Re:Mars conference in Las Vegas, Alexa head scientist Rohit Prasad said the idea is to build greater trust in users’ interactions with the virtual assistant
Amazon may soon add a unique capability to virtual assistant Alexa — replication of the voices of family members, even those who are dead. Unveiled at Amazon’s Re:Mars conference in Las Vegas, which concluded on June 24, the feature under development would allow Alexa to mimic the voice of a specific person based on less than a minute of recorded audio, news agency Associated Press reported.
Speaking at the Amazon event, Rohit Prasad, senior vice-president and head scientist for Alexa, said the objective is to build greater trust in users’ interactions with the virtual assistant by adding more “human attributes of empathy”.
“These attributes have become even more important during the ongoing pandemic when so many of us have lost ones that we love,” Prasad was quoted as saying. “While AI can’t eliminate that pain of loss, it can definitely make their memories last.”
How The New Alexa Feature Was Created
Showcasing the new capability, the Amazon event played a video showing a young child who asks, “Alexa, can Grandma finish reading me the Wizard of Oz?” Acknowledging the request, Alexa then switches to a different voice mimicking the child’s grandmother, and continues to read the book in that same voice, the AP report said.
Explaining how the feature was created, Prasad said Amazon had to learn how to make a “high-quality voice” with a shorter recording, as opposed to doing hours of recording in a studio. The company did not provide further details about the feature.
The Amazon offering comes after competitor Microsoft said it is phasing out its synthetic voice offerings and setting stricter guidelines to “ensure the active participation of the speaker” whose voice is being recreated.
“This technology has exciting potential in education, accessibility, and entertainment, and yet it is also easy to imagine how it could be used to inappropriately impersonate speakers and deceive listeners,” reads a blog post by Natasha Crampton, who heads the AI ethics division at Microsoft.