
This new AI can simulate your voice from just 3 seconds of audio

Microsoft’s new language model Vall-E is reportedly able to imitate any voice using just a three-second sample recording. 

The recently released AI tool was trained on 60,000 hours of English speech data. In a paper posted to arXiv, the preprint server hosted by Cornell University, the researchers said it could replicate the emotions and tone of a speaker.

That reportedly held true even for synthesized recordings of words the original speaker never actually said.

“Vall-E emerges in-context learning capabilities and can be used to synthesize high-quality personalized speech with only a 3-second enrolled recording of an unseen speaker as an acoustic prompt. Experiment results show that Vall-E significantly outperforms the state-of-the-art zero-shot [text to speech] system in terms of speech naturalness and speaker similarity,” the authors wrote. “In addition, we find Vall-E could preserve the speaker’s emotion and acoustic environment of the acoustic prompt in synthesis.”


Microsoft Corporation booth signage is displayed at CES 2023 at the Las Vegas Convention Center on January 6, 2023, in Las Vegas, Nevada. (Photo by David Becker/Getty Images)

The Vall-E samples shared on GitHub are eerily similar to the speaker prompts, although they range in quality.

In one sample synthesized from the Emotional Voices Database, Vall-E sleepily says: “We have to reduce the number of plastic bags.”



However, the research in text-to-speech AI comes with a warning. 

“Since Vall-E could synthesize speech that maintains speaker identity, it may carry potential risks in misuse of the model, such as spoofing voice identification or impersonating a specific speaker,” the researchers wrote on the GitHub page. “We conducted the experiments under the assumption that the user agree to be the target speaker in speech synthesis. When the model is generalized to unseen speakers in the real world, it should include a protocol to ensure that the speaker approves the use of their voice and a synthesized speech detection model.”

Corporate signage of Microsoft Corp at the Microsoft India Development Center in Noida, India, on Friday, Nov. 11, 2022. (Photographer: Prakash Singh/Bloomberg via Getty Images)


At the moment, Vall-E, which Microsoft calls a “neural codec language model,” is not available to the public.
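While the model itself is not public, the pipeline the researchers describe can be sketched at a high level: a roughly three-second enrollment clip is compressed by a neural audio codec into discrete tokens, a language model predicts new tokens conditioned on those tokens and the target text, and a codec decoder turns the predicted tokens back into audio. The Python below is a purely illustrative sketch of that flow; every class and function name is a hypothetical placeholder, not Microsoft’s code or any released API.

```python
# Purely illustrative sketch of a codec-language-model TTS pipeline.
# All names below are hypothetical placeholders, not Microsoft's
# implementation or a public API.

from dataclasses import dataclass
from typing import List


@dataclass
class CodecCodes:
    """Discrete acoustic tokens produced by a neural audio codec."""
    tokens: List[int]


def encode_prompt(prompt_waveform: List[float]) -> CodecCodes:
    """Stand-in for a codec encoder: compresses ~3 seconds of the target
    speaker's audio into a short sequence of discrete codes."""
    return CodecCodes(tokens=[hash(round(s, 3)) % 1024 for s in prompt_waveform[:240]])


def text_to_phonemes(text: str) -> List[str]:
    """Stand-in for a grapheme-to-phoneme front end."""
    return list(text.lower().replace(" ", "|"))


def generate_codes(phonemes: List[str], prompt: CodecCodes) -> CodecCodes:
    """Stand-in for the language model: conditioned on the phoneme sequence
    and the acoustic prompt, it predicts codec codes intended to carry the
    prompt speaker's timbre and emotion."""
    return CodecCodes(
        tokens=[(prompt.tokens[i % len(prompt.tokens)] + ord(p)) % 1024
                for i, p in enumerate(phonemes)]
    )


def decode_to_waveform(codes: CodecCodes) -> List[float]:
    """Stand-in for the codec decoder that turns codes back into audio."""
    return [t / 1024.0 for t in codes.tokens]


# Usage: three seconds of an unseen speaker plus arbitrary text in, speech out.
prompt_audio = [0.0] * (3 * 16000)  # placeholder 3-second recording at 16 kHz
codes = generate_codes(
    text_to_phonemes("We have to reduce the number of plastic bags."),
    encode_prompt(prompt_audio),
)
waveform = decode_to_waveform(codes)
```

The design insight, per the researchers’ description, is that treating text-to-speech as language modeling over discrete codec codes is what lets a three-second prompt carry the speaker’s identity, emotion, and acoustic environment into the synthesized output.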
