March 10, 2026
An interview with Asma Anjum, Regional Trust & Safety Lead, South Asia
Much of the debate around platform safety focuses on content removal and moderation. Far less attention is paid to what happens earlier, at the moment a user actively looks for information. Search sits at that juncture. It is where intent is explicit and where the consequences of design choices are often most acute.
As social platforms assume a more central role in how information is sought and understood, the mechanics of search raise questions that extend beyond technology into public trust, risk management, and accountability. These questions are particularly relevant in markets shaped by political complexity, environmental vulnerability, and rapid information flows.
The following conversation with Asma Anjum, Regional Trust & Safety Lead for South Asia at TikTok, examines how search-level decisions are approached, how risk is assessed, and how intervention is structured when discovery itself carries potential harm.
Q1. The ‘Search’ feature is increasingly where people go first, not just for entertainment, but for answers and clarity about a situation. When someone searches during moments of fear, confusion, or crisis, what responsibility does that place on TikTok?
Search is one of the most revealing moments in a user’s journey. People often type questions they would never articulate publicly. They search privately. And in many cases, that search reflects vulnerability.
When someone searches for information about self-harm, sexual harassment, voting procedures, or a flood emergency, the platform becomes more than a content host. It becomes an entry point to understanding. Our responsibility is not to determine what someone should believe. It is to ensure that the conditions in which they encounter information are not harmful. That means reducing the likelihood that high-risk searches surface misleading, exploitative, or context-free material as the primary response.
In Sri Lanka, for example, searches related to sexual harassment and abuse activate public service announcements and search guides that provide structured context and direct users toward safety and reporting resources. We do not assume why someone is searching. We do not interpret intent. But we do recognise that the moment may require care rather than algorithmic neutrality.
Search, to us, is not only a feature; it is often the first signal of distress or uncertainty. Treating it as a neutral list of results is no longer sufficient.
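Editor's note: as a rough illustration of the query-to-guide mechanism Anjum describes, the sketch below maps high-risk search categories to contextual resources. The category names, keywords, and resources are hypothetical, and the naive keyword matching stands in for what would, in practice, be multilingual classifiers; this is not TikTok's actual configuration.

```python
# Illustrative sketch only: mapping high-risk search categories to
# contextual guides shown alongside results. All names and keyword
# lists here are hypothetical examples, not a real configuration.

HIGH_RISK_CATEGORIES = {
    "harassment_abuse": {
        "keywords": {"sexual harassment", "abuse", "harassment help"},
        "guide": "Safety and reporting resources (public service announcement)",
    },
    "elections": {
        "keywords": {"how to vote", "voting procedure", "election results"},
        "guide": "Official election information hub",
    },
    "disaster": {
        "keywords": {"flood emergency", "flood safety", "evacuation"},
        "guide": "Verified emergency updates and safety instructions",
    },
}

def contextual_guide_for(query: str) -> str | None:
    """Return a contextual guide to show with results, if any applies.

    Substring matching is a stand-in; a production system would use
    multilingual classifiers rather than keyword lists.
    """
    normalized = query.lower().strip()
    for category in HIGH_RISK_CATEGORIES.values():
        if any(kw in normalized for kw in category["keywords"]):
            return category["guide"]
    return None  # no intervention: results render normally

print(contextual_guide_for("how to vote in the presidential election"))
```

Note that the guide is shown alongside results, not in place of them, which matches the distinction drawn throughout this interview between contextual framing and removal.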
Q2. Critics often argue that platforms respond after harm has already spread. How early does TikTok intervene at the search level, particularly in Sri Lanka?
Timing is everything in search safety. Intervene too late, and misinformation may already have shaped perception. Intervene too broadly without signal, and you risk undermining credibility. Our approach is based on risk assessment informed by real-time signals. We monitor shifts in search behaviour, real-world developments, and inputs from internal safety teams, fact-checking partners, and policy teams. When we see spikes in searches tied to unfolding events, that is often an early indicator of elevated risk.
In Sri Lanka, this has been particularly relevant during the 2024 Presidential and Parliamentary elections. Election-related search queries activated structured search guides, multilingual H5 information hubs, and video notice tags. These were not reactive afterthoughts. They were deployed as part of an anticipatory framework to provide procedural clarity and reduce confusion around voting processes. Similarly, during monsoon seasons and flood emergencies, search behaviour changes quickly. People search for casualty figures, safety instructions, predictions. In those moments, information is incomplete and speculation spreads easily. Search notices caution users about rapidly evolving information and direct them toward authoritative sources.
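Editor's note: the spike detection Anjum mentions can be pictured as anomaly detection over query volumes. The sketch below uses a rolling z-score over hourly counts; the window size, threshold, and class name are all hypothetical choices for illustration, not TikTok's system.

```python
# Illustrative sketch only: flagging sudden spikes in query volume
# as an early risk signal. Thresholds here are arbitrary examples.
from collections import deque
from statistics import mean, stdev

class QueryVolumeMonitor:
    """Track hourly counts for one query topic and flag anomalies."""

    def __init__(self, window: int = 24, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)  # recent hourly counts
        self.z_threshold = z_threshold

    def observe(self, count: int) -> bool:
        """Record an hourly count; return True if it looks like a spike."""
        is_spike = False
        if len(self.history) >= 8:  # need a baseline before judging
            mu = mean(self.history)
            sigma = stdev(self.history) or 1.0  # guard against zero variance
            is_spike = (count - mu) / sigma > self.z_threshold
        self.history.append(count)
        return is_spike

monitor = QueryVolumeMonitor()
for c in [110, 95, 102, 98, 105, 100, 97, 103]:  # quiet baseline
    monitor.observe(c)
print(monitor.observe(480))  # True: candidate for early human review
```

In this framing, a flagged spike is a trigger for review and possible deployment of guides and notices, not an automated enforcement action.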
Q3. There is always tension between safety and freedom of expression. How do you ensure search interventions do not cross into censorship?
This tension is real. And it is healthy that people question it. Search interventions are not removal mechanisms. They do not block access to sensitive topics; instead, they introduce context, friction, or visibility adjustments where risk warrants it. For example, in some safety-critical contexts, an opt-in screen can appear before potentially distressing content is shown. That pause is intentional: it reduces accidental exposure and gives users a moment to decide.
We also distinguish clearly between removal, reduced visibility, and contextual framing. Content that violates Community Guidelines is removed. Content that does not violate policy but poses risk in certain contexts may have its visibility limited in high-risk situations. That is a ranking decision, not a censorship decision.
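Editor's note: the three-way distinction Anjum draws between removal, reduced visibility, and contextual framing can be expressed as a small decision structure. The signal names and thresholds below are hypothetical; only the overall shape of the logic follows the interview.

```python
# Illustrative sketch only: separating removal from ranking and
# framing decisions. Signals and thresholds are invented examples.
from enum import Enum, auto

class SearchAction(Enum):
    REMOVE = auto()             # violates Community Guidelines
    REDUCE_VISIBILITY = auto()  # policy-compliant but risky in context
    ADD_CONTEXT = auto()        # show with a guide, notice, or opt-in screen
    RANK_NORMALLY = auto()

def decide(violates_guidelines: bool,
           contextual_risk: float,
           high_risk_event: bool) -> SearchAction:
    """Map moderation signals to a search-level action.

    Only the first branch removes content; the rest are ranking and
    framing decisions, which is the distinction drawn above.
    """
    if violates_guidelines:
        return SearchAction.REMOVE
    if high_risk_event and contextual_risk > 0.7:
        return SearchAction.REDUCE_VISIBILITY
    if contextual_risk > 0.3:
        return SearchAction.ADD_CONTEXT
    return SearchAction.RANK_NORMALLY

print(decide(False, 0.8, True))   # SearchAction.REDUCE_VISIBILITY
print(decide(False, 0.4, False))  # SearchAction.ADD_CONTEXT
```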
Q4. Sri Lanka has implemented search interventions around sexual harassment, abuse, elections, and disasters. Why focus so heavily on search in these categories?
Because search often surfaces before public speech does. When someone searches about harassment or abuse, they may be seeking clarity, validation, or next steps. Without contextual framing, they risk encountering content that trivialises, misinforms, or retraumatises.
By embedding search guides and public service messaging directly into the search experience, we ensure that credible information and support resources are visible at the moment of intent. That does not replace professional services. It does not assume vulnerability. It simply ensures that context exists alongside discovery.
Elections and disasters present a different dynamic. These are high-velocity environments where misinformation spreads quickly and emotional intensity is high. During Sri Lanka’s recent elections, structured search guides and multilingual information hubs were introduced to reduce confusion about voting procedures and eligibility. During flood-related events, notices caution users about rapidly changing information and guide them toward verified updates.
In both cases, the principle is the same. When stakes are high, search cannot function as an unstructured feed of results. It must operate as a structured entry point.
Q5. Artificial intelligence has complicated the safety landscape. AI-generated or manipulated media can look credible at speed. How does TikTok address that risk at the search level?
AI-generated misinformation presents a distinct challenge because it can appear credible, scale quickly, and blur the line between reality and fabrication. To address this risk, TikTok applies clear labeling to AI-generated videos to increase transparency. The platform also works with C2PA (Coalition for Content Provenance and Authenticity) standards to help identify and signal AI-generated media, strengthening content authenticity and reducing the risk of manipulation.
Moreover, during sensitive or critical moments, TikTok applies proactive search interventions designed to reduce harm and limit accidental exposure to misleading information. In such situations, TikTok redirects users to authoritative and verified sources. For instance, during elections, users are directed to official election commission websites to ensure they access reliable information rather than misleading content.
Overall, by combining search interventions, authoritative redirection, and AI-content labeling, TikTok aims to mitigate misinformation risks during sensitive and rapidly evolving events. The same safety standards apply to AI-generated content as to all other content, with additional transparency requirements.
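Editor's note: at a very high level, the labeling decision Anjum describes can be thought of as keying off provenance signals such as an embedded C2PA Content Credentials manifest or a creator's own disclosure. The field names and decision order below are hypothetical; real C2PA handling would use a conformant library and validate the manifest's cryptographic signatures.

```python
# Illustrative sketch only: deciding whether to surface an
# "AI-generated" transparency label from provenance signals.
# All field names are hypothetical; this is not TikTok's pipeline.
from dataclasses import dataclass

@dataclass
class ProvenanceSignals:
    has_c2pa_manifest: bool      # embedded Content Credentials found
    manifest_declares_ai: bool   # manifest marks generative-AI edits
    creator_disclosed_ai: bool   # creator toggled an AI disclosure

def ai_label(signals: ProvenanceSignals) -> str | None:
    """Return a transparency label to attach, or None."""
    if signals.has_c2pa_manifest and signals.manifest_declares_ai:
        return "AI-generated (verified via Content Credentials)"
    if signals.creator_disclosed_ai:
        return "Creator labeled as AI-generated"
    return None  # no provenance signal; other detection may still apply

print(ai_label(ProvenanceSignals(True, True, False)))
```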
Q6. Beyond crisis management, how does search safety intersect with young users and families?
Young users are prolific searchers. They are also still developing contextual judgment. That is why defaults matter. Age-appropriate settings, Restricted Mode, and privacy controls shape the baseline environment before parental oversight even begins. Tools such as Family Pairing allow parents and guardians to link accounts with teens and collaboratively manage screen time, discovery preferences, and interaction controls.
Importantly, Family Pairing is not covert monitoring. It is structured governance embedded within the product. It enables conversation rather than isolation. Search safety, in this context, is not isolated from broader well-being architecture. It sits within a system that includes reporting mechanisms, keyword filtering, comment controls, screen time management, and sleep reminders. Safety is cumulative. It is not only about extreme harm. It is also about sustained exposure and mental balance.
The industry needs to move beyond viewing child safety as solely a parental burden. Platforms are responsible for setting the baseline conditions. Families then operate within those conditions. In markets like Sri Lanka, where digital adoption is rapid and information flows are intense, search safety is not a niche feature. It is part of public trust. If we want digital environments that people can rely on during moments of uncertainty, we must design them to hold up under stress.