The AI Agent Era Has Begun, and Privacy Risks Are Rising

While generative AI like ChatGPT and DeepSeek has dominated headlines, a quieter revolution is gaining momentum: agentic AI. These tools don’t just chat or create — they act. They can make decisions and carry out tasks for you, and they’re starting to become part of daily life.
Simply put, while generative AI responds to prompts with fresh content like text, images, or code, AI agents go a step further: they autonomously perform tasks, pursue goals, and sometimes take action without waiting for your explicit go-ahead.
AI agents come in many forms. There are simple reflex agents that react instantly to inputs, like a motion-sensor light that turns on when you walk by. Model-based agents go further by remembering past actions to make smarter decisions.
Then there are goal-based agents, designed to complete specific tasks like factory robots assembling products, and learning agents that improve over time through interaction and feedback, becoming more effective the more they’re used.
These systems are evolving rapidly, and Chinese tech companies are wasting no time. Smartphone makers such as Honor, Huawei, Vivo, OPPO, and Xiaomi are racing to perfect this next-generation technology, and many have already upgraded their mobile assistants with advanced AI agent capabilities.

Take Honor’s Yoyo, a smart digital assistant that’s integrated into some of the company’s higher-end smartphones. It's reportedly capable of understanding 600 user intentions, remembering 950 personal habits, and handling 270 complex task sequences. Imagine asking your phone to order coffee, cancel an online subscription, or even plan an entire trip—all in one sentence.
When you say “please, order a cup of coffee,” for instance, the AI agent processes your request, recognizes it as a food delivery task, sends the order to the relevant app, and returns a simple “coffee card.” Tap the card, and you’re taken directly to the app to complete the purchase.
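To make that flow concrete, here is a minimal Kotlin sketch of the request-to-card pipeline described above. The intent labels, the ActionCard type, and the keyword classifier are hypothetical stand-ins for illustration; Honor has not published Yoyo's internals, and a real agent would use an on-device language model rather than keyword matching.

```kotlin
// Structured result the assistant hands back to the UI layer as a tappable "card".
data class ActionCard(val title: String, val deepLink: String)

// Toy intent classifier (illustrative only; a real agent uses an on-device model).
fun classify(utterance: String): String? {
    val u = utterance.lowercase()
    return when {
        "coffee" in u -> "FOOD_DELIVERY"
        "subscription" in u -> "CANCEL_SUBSCRIPTION"
        else -> null
    }
}

// Recognize the task, pre-fill the order, and return a card that deep-links
// into the delivery app so the user can confirm and pay.
fun handleRequest(utterance: String): ActionCard? =
    when (classify(utterance)) {
        "FOOD_DELIVERY" -> ActionCard(
            title = "Coffee order ready to confirm",
            deepLink = "deliveryapp://order?item=coffee"  // hypothetical URI scheme
        )
        else -> null
    }

fun main() {
    println(handleRequest("Please, order a cup of coffee"))
    // -> ActionCard(title=Coffee order ready to confirm, deepLink=deliveryapp://order?item=coffee)
}
```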
The system isn’t perfect yet, as it often still needs a few back-and-forth steps. But the idea of giving your phone one simple command and letting it handle the rest is already here, and it's only going to get better.
Other companies are also rolling out AI agents or agent-like features. OpenAI launched Operator for Pro users in the U.S. this January, allowing them to offload repetitive browser tasks like filling out forms, ordering groceries, or even creating memes.

ByteDance’s Doubao AI recently introduced a Screen Sharing Call feature, letting users talk to the agent while it views and analyzes everything on their screen. Meanwhile, Microsoft reintroduced Recall in April (almost a year after its initial debut), which acts like a photographic memory for your PC, helping users quickly find past apps, websites, or documents by browsing snapshots of on-screen activity.
So What’s the Catch?
An article from the Chinese outlet 21jingji explains that AI agents on smartphones automate tasks through two main technical approaches.
The first approach is intent frameworks, where mobile agents collaborate with apps through Application Programming Interfaces (APIs) or Software Development Kits (SDKs) to trigger specific functions. Smartphone makers predefine common scenarios, allowing the AI agent to call the appropriate API and complete the task.
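On Android, that dispatch step can be pictured as a vendor-maintained table mapping predefined scenarios to deep links or SDK entry points that partner apps expose. The sketch below is an assumption-laden illustration, not any vendor's actual framework: the scenario names and URI schemes are invented, and the code would need to live inside an Android app.

```kotlin
import android.content.Context
import android.content.Intent
import android.net.Uri

// Vendor-maintained registry: predefined scenario -> deep link a partner app handles.
// (Hypothetical entries for illustration.)
val scenarioRegistry = mapOf(
    "FOOD_DELIVERY" to "deliveryapp://order",
    "RIDE_HAILING" to "rideapp://book"
)

fun dispatchScenario(context: Context, scenario: String, params: String) {
    val base = scenarioRegistry[scenario] ?: return  // unknown scenario: do nothing
    // Hand the task to the partner app through a standard VIEW intent; the app's
    // manifest declares an intent filter for this URI scheme.
    val intent = Intent(Intent.ACTION_VIEW, Uri.parse("$base?$params"))
    intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)  // needed when called outside an Activity
    context.startActivity(intent)
}
```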
The second and more widely used method involves Graphical User Interface (GUI) agents. These simulate human actions by reading what's on the screen, relying on the device's accessibility service, a feature originally designed to help users with disabilities operate their phones.
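The reach of that permission is easiest to see in code. Below is an illustrative, hypothetical Android accessibility service (the class name and log tag are invented): once a user enables it in system settings, it can walk the view tree of whatever app is in the foreground, read every visible string, and inject taps.

```kotlin
import android.accessibilityservice.AccessibilityService
import android.util.Log
import android.view.accessibility.AccessibilityEvent
import android.view.accessibility.AccessibilityNodeInfo

// Requires a service declaration with android.permission.BIND_ACCESSIBILITY_SERVICE
// in the manifest, plus explicit user enablement in system settings.
class GuiAgentService : AccessibilityService() {

    override fun onAccessibilityEvent(event: AccessibilityEvent?) {
        val root = rootInActiveWindow ?: return
        dumpText(root)  // the agent "reads the screen" of whatever app is open
    }

    // Recursively collect every piece of text currently displayed, which may
    // include messages, account balances, ID numbers, or home addresses.
    private fun dumpText(node: AccessibilityNodeInfo) {
        node.text?.let { Log.d("GuiAgent", it.toString()) }
        for (i in 0 until node.childCount) {
            node.getChild(i)?.let { dumpText(it) }
        }
    }

    // A GUI agent can also act, e.g. tap a button it located by its label.
    private fun tap(node: AccessibilityNodeInfo) {
        node.performAction(AccessibilityNodeInfo.ACTION_CLICK)
    }

    override fun onInterrupt() {}
}
```

The same capability that lets an agent tap "confirm order" on your behalf also lets it read whatever a banking or messaging app happens to display, which is exactly where the concerns below come from.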
The use of accessibility services raises significant security and privacy concerns, as it allows AI agents to access everything displayed on a user's screen—including highly sensitive information such as national ID numbers, home addresses, banking details, and personal messages. Alarmingly, some AI agents activate these permissions without obtaining clear and informed user consent. Many users remain unaware of the extent of access they are granting, making them vulnerable to misuse.
Given the vast amount of data these agents can tap into, the consequences of a data breach or malicious access can be severe. If fraudsters gain control over such information, the potential for harm increases dramatically.
In Indonesia, for example, it is not uncommon for scammers to use stolen personal data to apply for loans under someone else's name. There have also been reports of predatory lending apps that transfer money into users' bank accounts without their approval, then coerce repayment at exorbitant interest rates.
The risks of unauthorized data access go beyond financial fraud. In more disturbing cases, stalkers and criminals have exploited such data to track their victims’ daily routines—knowing when they leave home and how they commute—posing serious threats to personal safety and privacy.
Bracing for the Agent Era
While security concerns around generative AI have been widely debated among government agencies and industry experts, AI agents, still a relatively new development, haven’t received the same level of attention.
That said, some researchers and institutions have started sounding the alarm, working to raise awareness within the industry about the unique risks these autonomous systems can pose.
In April 2025, Microsoft’s AI Red Team released a whitepaper for developers and security professionals, offering practical ways to reduce the unique safety and security risks of AI agents. Recommendations include ensuring security checks, limiting access to sensitive tools or data, increasing transparency for user consent and oversight, and isolating agent environments to prevent unintended actions.
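As a rough illustration of two of those themes, least-privilege access to tools and explicit user consent, the sketch below gates an agent's tool calls behind an allowlist and a confirmation callback. The tool names and the API shape are assumptions made for illustration, not Microsoft's published design.

```kotlin
// Tools the agent may call freely vs. only with per-use user approval.
// (Hypothetical tool names.)
val autoAllowed = setOf("read_calendar", "web_search")
val needsConsent = setOf("send_payment", "read_messages")

fun runTool(
    name: String,
    args: Map<String, String>,
    confirm: (String) -> Boolean,                        // surfaces a prompt to the user
    execute: (String, Map<String, String>) -> String     // the actual tool call
): String = when {
    name in autoAllowed -> execute(name, args)
    name in needsConsent && confirm("Agent wants to call $name with $args") ->
        execute(name, args)
    else -> "denied: $name was not permitted"            // default-deny everything else
}

fun main() {
    val result = runTool(
        name = "send_payment",
        args = mapOf("to" to "coffee-shop", "amount" to "4.50"),
        confirm = { prompt -> println(prompt); false },  // user declines the prompt
        execute = { n, _ -> "executed $n" }
    )
    println(result)  // -> denied: send_payment was not permitted
}
```

The design choice worth noting is the default-deny branch: anything not explicitly allowlisted fails closed, which is the opposite of how broad accessibility permissions behave today.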
These efforts reflect broader government responses to AI risks. China has passed binding laws like the Algorithm Recommendation Law and the Deep Synthesis Regulation, requiring providers to block harmful content and protect vulnerable groups such as minors. The US has taken a more decentralized approach, relying on agencies like the FTC and voluntary frameworks such as NIST’s AI Risk Management Framework, which guides companies in assessing system reliability and potential risks.
In Southeast Asia, Singapore leads with AI Verify, a governance testing framework that helps organizations assess their AI systems against global principles. Its key recommendations include notifying users when AI is in use, making decisions explainable, and ensuring human oversight to manage risks.
Emerging markets like Indonesia, Thailand, and Vietnam, however, are still in the early stages of developing AI guidelines.
Conclusion
The rapid rise of AI-powered apps promises to significantly boost efficiency and deliver user experiences that once seemed confined to science fiction.
However, alongside these exciting advances come important risks that shouldn’t be overlooked. As tech companies race to enhance AI capabilities and governments work to establish appropriate regulations, it’s up to us—as users—to remain vigilant.
That means paying attention to the small but crucial details: reading app permission requests carefully, understanding what data we're sharing, and being mindful of the access we're granting.
Ultimately, we must remain in control of the technologies shaping our lives—before they begin to control us.