Alibaba wants you to compare its new AI video generator to OpenAI's Sora. Otherwise, why use it to make Sora's most famous creation belt out a Dua Lipa song?
On Tuesday, an organization called the "Institute for Intelligent Computing" within the Chinese e-commerce juggernaut Alibaba released a paper about an intriguing new AI video generator it has developed that's shockingly good at turning still images of faces into passable actors and charismatic singers. The system is called EMO, a fun backronym supposedly drawn from the words "Emote Portrait Alive" (though, in that case, why is it not called "EPA"?).
EMO is a peek into a future where a system like Sora makes video worlds, and rather than being populated by attractive mute people just kinda looking at each other, the "actors" in these AI creations say stuff — or even sing.
Alibaba put demo videos on GitHub to show off its new video-generating framework. These include a video of the Sora lady — famous for walking around AI-generated Tokyo just after a rainstorm — singing "Don't Start Now" by Dua Lipa and getting pretty funky with it.
The demos also reveal how EMO can, to cite one example, make Audrey Hepburn speak the audio from a viral clip of Riverdale's Lili Reinhart talking about how much she loves crying. In that clip, Hepburn's head maintains a rather soldier-like upright position, but her whole face — not just her mouth — really does seem to emote the words in the audio.
In contrast to this uncanny version of Hepburn, Reinhart in the original clip moves her head a whole lot, and she also emotes quite differently, so EMO doesn't seem to be a riff on the sort of AI face-swapping that went viral back in the mid-2010s and led to the rise of deepfakes in 2017.
Over the past few years, applications designed to generate facial animation from audio have cropped up, but they haven't been all that inspiring. For instance, the NVIDIA Omniverse software package touts an app with an audio-to-facial-animation framework called "Audio2Face" — which relies on 3D animation for its outputs rather than simply generating photorealistic video like EMO.
Despite Audio2Face only being two years old, the EMO demo makes it look like an antique. In a video that purports to show off its ability to mimic emotions while talking, the 3D face it depicts looks more like a puppet in a facial expression mask, while EMO's characters seem to express the shades of complex emotion that come across in each audio clip.
It's worth noting at this point that, as with Sora, we're assessing this AI framework based on demos provided by its creators, and we don't actually have our hands on a usable version to test. So it's tough to imagine that, right out of the gate, this software could churn out such convincingly human facial performances from audio without significant trial and error, or task-specific fine-tuning.
The characters in the demos mostly aren't expressing speech that calls for extreme emotions — faces screwed up in rage, or melting down in tears, for instance — so it remains to be seen how EMO would handle heavy emotion with audio alone as its guide. What's more, despite being made in China, it's depicted as a total polyglot, picking up on the phonics of English and Korean and making the faces form the appropriate phonemes with decent — though far from perfect — fidelity. In other words, it would be nice to see how well EMO performed if you fed it audio of a very angry person speaking a lesser-known language.
Also fascinating are the little embellishments between phrases — pursed lips or a downward glance — that insert emotion into the pauses rather than just the times when the lips are moving. These are examples of how a real human face emotes, and it's tantalizing to see EMO get them so right, even in such a limited demo.
According to the paper, EMO's model relies on a large dataset of audio and video (once again: from where?) to give it the reference points necessary to emote so realistically. And its diffusion-based approach apparently doesn't involve an intermediate step in which 3D models do part of the work. A reference-attention mechanism and a separate audio-attention mechanism are paired by EMO's model to produce animated characters whose facial animations match what comes across in the audio while remaining true to the facial characteristics of the provided base image.
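For readers curious what "pairing" those two attention mechanisms might look like in practice, here is a minimal sketch in PyTorch. It is not Alibaba's code; the class, argument names, and tensor shapes are all illustrative assumptions about how a diffusion denoising block could attend to both a reference portrait and an audio track.

```python
# A minimal sketch (not EMO's actual implementation) of a diffusion denoising
# block that pairs cross-attention to a reference image with cross-attention
# to audio features. All names and shapes here are illustrative assumptions.
import torch
import torch.nn as nn

class ReferenceAudioBlock(nn.Module):
    def __init__(self, dim: int, n_heads: int = 8):
        super().__init__()
        # Self-attention over the noisy video tokens being denoised.
        self.self_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        # Cross-attention to features of the still reference portrait,
        # which keeps the generated face true to the input identity.
        self.ref_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        # Cross-attention to audio embeddings, which drives lip shapes
        # and expression to follow the speech or singing.
        self.audio_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.norm3 = nn.LayerNorm(dim)

    def forward(self, latents, ref_feats, audio_feats):
        # latents:     (batch, video_tokens, dim) noisy frame tokens
        # ref_feats:   (batch, ref_tokens, dim)   reference-image features
        # audio_feats: (batch, audio_tokens, dim) audio features
        x = latents
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h)[0]
        x = x + self.ref_attn(self.norm2(x), ref_feats, ref_feats)[0]
        x = x + self.audio_attn(self.norm3(x), audio_feats, audio_feats)[0]
        return x
```

In a sketch like this, the reference attention is what keeps every generated frame looking like the person in the still image, while the audio attention is what makes the mouth and the rest of the face track the sound; a production system would presumably also need temporal layers to keep motion smooth across frames.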
It's an impressive collection of demos, and after watching them it's impossible not to imagine what's coming next. But if you make your money as an actor, try not to imagine too hard, because things get pretty disturbing pretty quickly.