A lot of you may not know this, but several of the team members behind Asia Tech Lens are actually traditional, mainstream broadcast journalists and producers. Folks who have spent decades in fast-paced newsrooms, chasing breaking news, reporting from the ground, and figuring out how to tell complex stories in under three minutes.
But now, AI tools are popping up everywhere that claim they can handle all of it (write the script, read it on camera, even generate footage), and in a fraction of the time.
Google’s Veo 3 got a lot of buzz recently. It promises high-definition, cinematic-style videos generated entirely from text prompts. It’s sleek, it’s ambitious, and it’s being positioned as the next big leap in AI-powered video.
But it is just one of many incredible tools out there.
There’s Kling AI from China, which also turns text into cinematic-style shots, fast and with impressive production polish.
Synthesia, a UK-based platform, and Virbo, developed by Chinese software company Wondershare, let you drop in a virtual presenter at the click of a button. And of course, there’s ChatGPT and DeepSeek for scriptwriting: both fast, both (as we found out) decent.
Don’t get us wrong, we understand that a lot of these tools are in early stages and have a ways to go before they can truly be used on a broader scale. And yes, some of the output is more sizzle reel than broadcast-ready. But the tools are out there. People are using them. And with each new release, they get a little more convincing and cheaper to use.
So we wanted to know: What happens when you actually try to use them the way a journalist would?
Enter Miro Lu: A Veteran Broadcast Journalist
We handed the assignment to Miro Lu, an international reporter with two decades of experience. Her task was to create a professional news-style video using only generative AI tools for scripting, visuals, and the reporter’s piece-to-camera, or as we call it in the news business, a PTC.
To keep things honest:
We got one of our team members to choose the tools at random. We wanted to simulate how an average user, curious but time-starved, might explore AI today. So, Miro didn’t get a chance to try out the tools before we began filming. The only ones she was familiar with were ChatGPT and DeepSeek.
We broke the assignment into three core tasks, essentially the building blocks of any TV news story:
Script generation
Avatar creation for PTC (piece-to-camera)
B-roll (footage) generation
For scripting, the obvious platforms to use were ChatGPT and DeepSeek.
Miro’s prompt was simple: “I am a TV reporter. Write a two-minute script on the topic of whether AI tools today can replace human reporters in producing news stories.”
We also asked the LLMs to generate a PTC script and footage cues.
The result: both models generated decent, broadcast-style scripts that were well-structured and on-topic. They each flagged the strengths and limitations of AI in journalism, but DeepSeek stood out for offering a more measured and balanced tone.
Next, we fed the PTCs generated by the models directly into Synthesia and Wondershare, which generated digital avatars of Miro delivering the lines on camera.
Synthesia is one of the most widely used AI video generation platforms. It transforms written scripts into avatar videos and is currently popular in corporate communications, training, and marketing.
Wondershare’s Virbo offers customizable AI avatars and voiceovers tailored for content creators and educators looking for quick, editable video solutions.
Miro’s Verdict: both tools produced usable results, but each had its quirks. Synthesia offered slightly more natural body language and delivery, while Wondershare prioritized speed and flexibility.
And finally, for the visuals, we turned to Google’s Veo 3 and Kuaishou’s Kling AI, asking them to bring to life some standard b-roll prompts.
This part took longer than expected. We started with basic commands, things like “a reporter closing a laptop.” Simple enough, right? Turns out, not quite. The platforms struggled to render anything usable with such vague prompts. To fix this, we returned to ChatGPT and DeepSeek for help.
Our takeaway: it is always better to give exact commands. Spell out what you need the subject to do, where they are, and how they move. The more detail you provide, the more realistic the video turns out to be.
Watch How It Turned Out
Watch the video for Miro’s unfiltered reactions - eye-rolls, laughter, raised eyebrows, and the occasional “impressive”.
Industry Lens: AI Is A Tool, Not A Replacement
To widen the frame beyond our experiment, we spoke to Claudia Hinterseer, a university lecturer in generative AI-assisted reporting with over two decades of experience in international media. Claudia looks at this through the lens of a journalist, an educator, and someone who uses AI tools in practice.
Her position is clear: AI tools are useful, but they’re not a substitute for credibility, originality, or sound editorial judgment.
“I really don't think it can take the place of journalists. It will take some places, but it will definitely never take the place of a journalist.”
She isn’t warning against AI. In fact, she encourages her students to “work with it, play around with it, understand it. The future is going to be you plus AI. And if you’re the kind of person saying, ‘I’m not so sure… I want to be a purist,’ you’re going to miss the boat.”
Claudia, like most journalists, sees real value in how AI boosts newsroom productivity, especially for time-consuming tasks like transcription and translation, data analysis, research, and trend prediction.
But there are some other areas where AI cannot replace a human just yet.
“There's two things AI cannot do for you. It cannot write for you because it's a very, very bad writer, and it cannot produce photorealistic images, because we don't want to fool people. We don't want to trick people. And so as long as you have these two things clear, there's a lot AI can do for you to assist your journalism.”
That includes refining headlines, generating keywords, and sharpening interview questions.
“But it (AI) can’t write for you…because if it writes for you, it becomes really not interesting. And not humane, oftentimes…just boring.”
Claudia has been tracking, in real time, how quickly these tools are evolving and gaining acceptance.
“When I took my AI-assisted reporting class, which started in January this year, DeepSeek wasn't accessible and by the end of my class in April, it was all about DeepSeek. So I think that definitely something that happened, that China has really stood up as being like a very, very serious player.”
And it’s not just about the tools. Claudia has noticed a fundamental shift in the public’s AI literacy, awareness, and acceptance of AI-generated content.
But this reduced skepticism has both positive and negative implications. Claudia says that we’re already living in what some are calling the ‘post-truth era’.
“There's going to be a lot of fake news. It's already there. It's so easy to create fake news and I don't mind so much about fake news, but fake news has the potential to have real life implications. And so it's dangerous.”
A growing body of research further highlights how AI-generated content, no matter how fluent, often struggles with confabulation: presenting false information as if it were true. These hallucinations are not just simple mistakes. They can look authoritative, cite fake sources, and mislead readers who assume the machine knows best.
A Time magazine article warns that advanced AI video tools will allow attackers to impersonate executives, vendors or employees at scale, convincing victims to relinquish important data.
The article quotes Nina Brown, a Syracuse University professor who specializes in the intersection of media law and technology. She says that while there are other large potential harms, including election interference, arguably most concerning is the erosion of collective online trust.
The article cites the example of a post on X, viewed 2.4 million times, that accused a Daily Wire journalist of sharing an AI-generated video of an aid distribution site in Gaza. A journalist at the BBC later confirmed that the video was authentic.
Final Reflections
At the end of the day, it all boils down to the question: Why are you using AI?
News, unlike other forms of content, carries an expectation of reality, of truth, of showing the facts, exactly how they are. AI doesn’t understand the consequences. And it certainly can’t build relationships or ask the right follow-up questions.
As AI continues to evolve, journalists must evolve with it, but never at the cost of credibility. The human element - curious, empathetic, accountable - remains irreplaceable.
Going forward, newsrooms won’t be run the same way. And that’s okay. The future of journalism is not about replacement, it’s about recalibration. It’s not about resisting the tools. It’s about using them wisely. And as long as most of us are able to move with the changing times, we and the practice of ethical journalism will survive.