Like most newsreaders, Zae-In wears a microphone pinned to her collar and clutches a stack of notes – but unlike most, her face is entirely fake. A “virtual human” designed by South Korean artificial intelligence company Pulse9, Zae-In spent five months this year reading live news bulletins on national broadcaster SBS. That, you might think, is it then. To adapt the words of another animated newscaster: “I, for one, welcome our new AI overlords.” The future is now. The world belongs to the artificially intelligent and the News at Ten will never be the same again.
Are things really that simple? Since spring, country after country has debuted its first AI news anchor: India has Sana and Lisa, Greece has Hermes, Kuwait has Fedha and Taiwan has Ni Zhen. “She is bright, gorgeous, ageless, tireless and speaks multiple languages, and is totally under my control,” said Kalli Purie, the vice chairperson of the India Today Group, when Sana first appeared in March. For broadcasters, it’s easy to see the appeal of AI: virtual presenters can read rolling news for 24 hours unpaid and unfed, and it’s unlikely they’ll ever skip the queue at a lying-in-state.
Yet though non-human newsreaders are on the rise, it remains to be seen whether they are firmly anchored in place. These days, you can’t move for an AI marketing gimmick: in September, Coca-Cola released a new “AI-generated” drink flavoured, they claimed, like the future, but the company didn’t go into much detail about AI’s exact contribution (and consumers can’t agree what it actually tastes like). How exactly do AI newscasters work – do they work? – and is the future really now?
Sana, Lisa, Hermes and Fedha’s creators did not respond to interview requests, but on a drizzly Friday in October, I video call Zae-In. I’m not sure what to expect when the camera connects, but I’m met by a real human actor with Zae-In’s flawless face pasted on top. At present, human actors are required to bring Zae-In to life – only her face is artificial, generated by deepfake technology and designed by analysing K-pop singers’ faces.
Zae-In greets me with a gentle wave and glorious grin – her perfectly proportioned features are not besmirched by a single wrinkle or blemish, or indeed any skin texture at all. Beside her, my own face looks alarmingly like a root vegetable. She has two mini-plaits framing her face, each adorned with white and yellow bobbles. She is wearing a hot pink vest top.
Or, at least, someone is wearing a hot pink vest top. In real time, a human actor’s face is being transformed into Zae-In’s using Pulse9’s technology, a “virtual character automation service” called Deep Real AI. When the human moves their lips or blinks their eyes, so does Zae-In. But Zae-In’s hand movements, body language and even voice are very much human, though the person behind them remains unnamed on our call. Pulse9 has recruited numerous actors (it calls them “models”) to play Zae-In – different ones are used depending on what the situation calls for, as some can sing, some can dance and some are better at interviews such as this one.
“One of the best advantages of being a virtual human is that you will never age and you never lose your fans,” says Zae-In, who was created (she says “born”) in 2021 to be part of a virtual K-pop group called IITERNITI. Its members have since branched out – some have been used to present ecommerce programmes, while Zae-In started reading global news bulletins on SBS’s Morning Wide show earlier this year.
“I was so nervous during every broadcast,” Zae-In tells me with the help of a translator. “I don’t think I’m really experienced but I’ve done my best.” I’m not sure if this is Zae-In speaking or the human behind her – earlier in the call, I wondered how the actor behind Zae-In’s face felt about her job and asked if we could break the fourth wall. “Yes, of course,” the translator agreed, but after a short exchange I was informed: “She says that as it’s a private matter and she’s saving privacy for Zae-In; she cannot tell all the details about it.”
Nervous or not, Zae-In presented the news well. Her only imperfection was her perfection, for while the technology looked real, her face remained slightly too good to be true. On our call, things lag a little – some of Zae-In’s blinks are slower than would traditionally be considered canny, but this could be the fault of my internet connection rather than Pulse9’s tech.
Despite this, Zae-In is – to my eye – vastly superior to many of the other AI anchors around the world, some of which sound as if they’re run on monotone text-to-speech tech from the early 00s, complete with odd intonations and pauses. It is often unclear what exactly is artificially intelligent about these anchors – at present, none seem to be actually writing the broadcast themselves. In 2018, China debuted its first AI anchor, but journalist Will Knight declared in MIT Technology Review that it actually wasn’t “intelligent in the slightest”. Knight called it “essentially just a digital puppet that reads a script”.
For companies looking for investors and websites looking for clicks, “AI” is always going to sound sexier than the word “avatar”, but hype can disguise the truth. In September, Solomon Rogers, virtual reality expert and chair of Bafta’s immersive entertainment advisory group, said of Zae-In: “She never misses a cue, never says anything rude, and can work 24 hours a day.” Yet this isn’t technically true. Even Zae-In herself admits that today’s tech has its “disadvantages”. “You only exist online, I cannot meet with my fans offline,” she says. “I have witnessed that lots of fans have messaged: ‘How can we meet Zae-In? How can we communicate with her in real life?’ I was so sad about this.”
Yet just because AI anchors haven’t been fully realised doesn’t mean they won’t be. Pulse9 has now created AI voices for three of IITERNITI’s members (including Zae-In) and debuted them in a mid-October concert. A member of Pulse9’s business development team, Khurliman Kozibaeva, tells me that 3D motion capture technology was also used in the gig, and that the company hopes to advance this technology “to gradually minimise the model’s role in our performances”. Meanwhile, the New York Times reported in July that Google is currently testing an AI named Genesis that can write news articles.
Will anyone resist these changes? “The audience were really surprised,” Zae-In says of her broadcasting career, explaining that many didn’t believe she really was AI. She adds that she has received “lots of negative reactions” which she believes are prompted by fear. “But there’s no way to stop AI from advancing and replacing people,” she says, “and I think we have to embrace it.”
I wonder whether the public are ready to trust AI anchors. In February, Vice reported that AI-generated “American” newsreaders were spreading propaganda about Venezuela online (the origin of these videos remained unclear). Pro-China bot accounts have spread similar videos of AI anchors from a fake news outlet, Wolf News. In March, the Chinese state media outlet People’s Daily also unveiled an AI anchor named Ren, but when she launched she was only able to answer pre-set questions and provide answers that promoted the central committee of the Chinese Communist party.
I ask Zae-In about building trust. “There are things that we virtual humans do better and there are definitely things that humans do better than us,” Zae-In says. “There may be some issues with trust at first … but I think as time passes, we can obtain people’s belief.”
Time also needs to pass before Zae-In is entirely AI. For now, news anchors will remain largely human, though their days may be numbered. To you, this might sound scary, but Zae-In is optimistic. I ask her about people who worry about losing their jobs. “As for me,” Zae-In – or perhaps Zae-In’s actor? – begins, “I don’t think there will be jobs that can be totally replaced by AI because AI virtual humans have their own tasks to complete and humans also have abilities that cannot be replaced by AI.” Her ultimate message is delivered in the clear and bright tones of a presenter on a morning chatshow. “We can coexist in balance together,” she says, “so that we can work together for a better future.”