
Adopting large language models (LLMs) without adequate governance, verification, and oversight risks legal, financial, and reputational damage.
These findings come from the report “Risks of Unmanaged AI Reliance: Evaluating Regional Biases, Geofencing, Data Sovereignty, and Censorship in LLM Models,” the latest research from Trend Micro, a global cybersecurity leader.
AI systems can generate different results depending on geographic location, language, model design, and built-in controls. In industries where AI output reaches customers directly or informs important decisions, these inconsistencies can undermine trust, conflict with local regulations or cultural norms, and lead to costly business consequences.
The Trend Micro study covered over 100 AI models and used more than 800 targeted prompts designed to assess biases, political and cultural awareness, geofencing behaviors, data sovereignty signals, and contextual limitations. Thousands of experiments were conducted to measure how outputs vary over time and across locations, and more than 60 million input tokens and over 500 million output tokens were analyzed.
The results reveal that identical prompts can produce different responses depending on geographic region and model, and can even vary across repeated interactions with the same system. In politically sensitive scenarios, such as disputed territories or national identity, the models showed clear differences in alignment depending on location. In other tests, the models returned outdated or inconsistent results in areas that demand precision, such as financial calculations and time-critical information.
“In many organizations, there’s a misconception that AI behaves like traditional software and that the same input reliably produces the same output,” says Marco Fanuli, Technical Director at Trend Micro Italy. “Our research shows this is incorrect. LLMs can provide different responses based on geography, language, and guardrails, and can even vary from one interaction to the next. When AI results are used directly by customers or to make business decisions, there’s a risk of losing control over communication, compliance, and cultural norms.”
The study highlights that the risks are greater for organizations that operate globally or deploy AI across different geographic areas, where a single AI-based service may be subject to differing legal, political, and sociocultural frameworks. The public sector faces additional critical issues: AI-generated results could be interpreted as official guidance, and reliance on non-localized or unverified models could introduce sovereignty and accessibility risks.
“Artificial intelligence shouldn’t be treated as a plug-and-play productivity tool,” Marco Fanuli concludes. “Organizations must address dependency risks, adopt clear governance, define responsibilities, and introduce human verification for any user-facing output. This includes AI vendors being transparent about how models perform, what data they’re based on, and where guardrails are applied. AI fosters innovation and efficiency, but only when used with a clear understanding of its limitations and with controls that verify how systems perform in real-world environments.”
