April 16, 2024
Largest text-to-speech AI model yet shows 'emergent abilities'

Researchers at Amazon have trained the largest text-to-speech model yet, which they claim exhibits “emergent” qualities that improve its ability to speak even complex sentences naturally. The breakthrough could be what the technology needs to escape the uncanny valley.

These models were always going to grow and improve, but the researchers specifically hoped to see the kind of leap in ability observed once language models grew past a certain size. For reasons unknown to us, once LLMs grow past a certain point, they start being far more robust and versatile, able to perform tasks they weren’t trained to do.

That is not to say they are gaining sentience or anything, just that past a certain point their performance on certain conversational AI tasks hockey sticks. The team at Amazon AGI — no secret what they’re aiming at — thought the same might happen as text-to-speech models grew as well, and their research suggests this is in fact the case.

The new model is called Big Adaptive Streamable TTS with Emergent abilities, which they have contorted into the abbreviation BASE TTS. The largest version of the model was trained on 100,000 hours of public domain speech, 90% of it in English and the remainder in German, Dutch and Spanish.

At 980 million parameters, BASE-large appears to be the biggest model in this category. They also trained 400M- and 150M-parameter models based on 10,000 and 1,000 hours of audio respectively, for comparison — the idea being, if one of these models shows emergent behaviors but another doesn’t, you have a range for where those behaviors begin to emerge.

As it turns out, the medium-sized model showed the jump in capability the team was looking for — not necessarily in ordinary speech quality (it was rated better, but only by a couple of points) but in the set of emergent abilities they observed and measured. Here are examples of tricky text mentioned in the paper:

  • Compound nouns: The Beckhams decided to rent a charming stone-built quaint countryside holiday cottage.
  • Emotions: “Oh my gosh! Are we really going to the Maldives? That’s unbelievable!” Jennie squealed, bouncing on her toes with uncontained glee.
  • Foreign words: “Mr. Henry, renowned for his mise en place, orchestrated a seven-course meal, each dish a pièce de résistance.”
  • Paralinguistics (i.e. readable non-words): “Shh, Lucy, shhh, we mustn’t wake your baby brother,” Tom whispered, as they tiptoed past the nursery.
  • Punctuations: She received an odd text from her brother: ‘Emergency @ home; call ASAP! Mom & Dad are worried…#familymatters.’
  • Questions: But the Brexit question remains: After all the trials and tribulations, will the ministers find the answers in time?
  • Syntactic complexities: The movie that De Moya who was recently awarded the lifetime achievement award starred in 2022 was a box-office hit, despite the mixed reviews.

“These sentences are designed to contain challenging tasks – parsing garden-path sentences, placing phrasal stress on long-winded compound nouns, producing emotional or whispered speech, or producing the correct phonemes for foreign words like “qi” or punctuations like “@” – none of which BASE TTS is explicitly trained to perform,” the authors write.

Such features normally trip up text-to-speech engines, which will mispronounce, skip words, use odd intonation or make some other blunder. BASE TTS still had trouble, but it did far better than its contemporaries — models like Tortoise and VALL-E.

There are a bunch of examples of these difficult texts being spoken quite naturally by the new model at the site the team made for it. Of course these were chosen by the researchers, so they’re necessarily cherry-picked, but it’s impressive regardless.

Because the three BASE TTS models share an architecture, it seems clear that the model’s size and the extent of its training data are what give it the ability to handle some of the above complexities. Bear in mind this is still an experimental model and process — not a commercial model or anything. Later research will have to identify the inflection point for emergent ability and how to train and deploy the resulting model efficiently.

Notably, this model is “streamable,” as the name says — meaning it doesn’t need to generate whole sentences at once but goes moment by moment at a relatively low bitrate. The team has also attempted to package speech metadata like emotionality, prosody and so on in a separate, low-bandwidth stream that could accompany vanilla audio.

It seems that text-to-speech models may have a breakout moment in 2024 — just in time for the election! But there’s no denying the usefulness of this technology, for accessibility in particular. The team does note that it declined to publish the model’s source and other data due to the risk of bad actors taking advantage of it. The cat will get out of that bag eventually, though.
