This new AI can simulate your voice from just 3 seconds of audio

Microsoft’s new language model Vall-E is reportedly able to imitate any voice using just a three-second sample recording. 

The recently announced AI model was trained on 60,000 hours of English speech data. Researchers said in a paper posted to arXiv, the research repository hosted by Cornell University, that it could replicate the emotions and tone of a speaker.

That reportedly held true even when the model created recordings of words the original speaker never actually said.

“Vall-E emerges in-context learning capabilities and can be used to synthesize high-quality personalized speech with only a 3-second enrolled recording of an unseen speaker as an acoustic prompt. Experiment results show that Vall-E significantly outperforms the state-of-the-art zero-shot [text to speech] system in terms of speech naturalness and speaker similarity,” the authors wrote. “In addition, we find Vall-E could preserve the speaker’s emotion and acoustic environment of the acoustic prompt in synthesis.”

Microsoft Corporation booth signage is displayed at CES 2023 at the Las Vegas Convention Center on January 6, 2023, in Las Vegas, Nevada. (Photo by David Becker/Getty Images)

The Vall-E samples shared on GitHub are eerily similar to the speaker prompts, although they range in quality.

In one sample synthesized from the Emotional Voices Database, Vall-E sleepily says: “We have to reduce the number of plastic bags.”

However, the research into text-to-speech AI comes with a warning.

“Since Vall-E could synthesize speech that maintains speaker identity, it may carry potential risks in misuse of the model, such as spoofing voice identification or impersonating a specific speaker,” the researchers warn on the project’s GitHub page. “We conducted the experiments under the assumption that the user agree to be the target speaker in speech synthesis. When the model is generalized to unseen speakers in the real world, it should include a protocol to ensure that the speaker approves the use of their voice and a synthesized speech detection model.”

Corporate signage of Microsoft Corp. at the Microsoft India Development Center in Noida, India, on Friday, Nov. 11, 2022. (Photographer: Prakash Singh/Bloomberg via Getty Images)

At the moment, Vall-E, which Microsoft calls a “neural codec language model,” is not available to the public.
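For readers curious what the workflow described in the paper amounts to, here is a minimal sketch in Python. It only illustrates the idea of zero-shot synthesis from a short acoustic prompt plus a text transcript; every name in it, including AcousticPrompt and synthesize, is hypothetical, since Microsoft has not released Vall-E’s code or an API.

```python
# Hypothetical sketch of a zero-shot text-to-speech workflow like the one
# described for Vall-E: a ~3-second "acoustic prompt" from the target speaker
# plus a text transcript go in, and synthesized audio in that voice comes out.
# None of these names come from Microsoft; Vall-E is not publicly available.

from dataclasses import dataclass


@dataclass
class AcousticPrompt:
    """A short enrolled recording of the target speaker (about 3 seconds)."""
    samples: list[float]  # raw waveform samples
    sample_rate: int      # e.g., 16,000 samples per second


def synthesize(text: str, prompt: AcousticPrompt) -> list[float]:
    """Placeholder for the model call: condition on the prompt's codec tokens,
    generate codec tokens for `text`, then decode them back into a waveform."""
    raise NotImplementedError("Vall-E is not available to the public.")


if __name__ == "__main__":
    # Three seconds of silence at 16 kHz stands in for a real enrollment clip.
    prompt = AcousticPrompt(samples=[0.0] * 48_000, sample_rate=16_000)
    try:
        audio = synthesize("We have to reduce the number of plastic bags.", prompt)
    except NotImplementedError as err:
        print(err)
```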
