Redazione RHC: 25 September 2025 07:04
Artificial intelligence is increasingly described as a double-edged sword, capable of offering enormous advantages but also opening new avenues for digital crime. During the "TRUST AICS – 2025" conference in Hyderabad, cybersecurity and legal experts emphasized how the same technology that strengthens defenses and drives innovation is increasingly used by fraudsters to orchestrate sophisticated scams that are difficult to detect with traditional tools.
The gravity of the phenomenon was underscored by data from the Telangana Cyber Security Bureau: nearly 250 cybercrime reports arrive every day, resulting in economic losses of approximately €60 million. This frequency shows that AI abuse is no longer a theoretical risk but a real emergency, with consequences for citizens, businesses, and institutions.
Despite the risks, artificial intelligence remains an indispensable ally for security. Companies and organizations are increasingly investing in governance tools based on intelligent algorithms. According to industry experts, AI-enhanced monitoring systems can detect signs of non-compliance in real time and stop incidents before they cause damage.
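As an illustration of the kind of monitoring the experts describe, the sketch below flags anomalous activity with an unsupervised model. It is a minimal example, not any specific vendor's system: the features (request rate, payload size, failed logins), the baseline data, and the thresholds are assumptions chosen for clarity.

```python
# Minimal sketch of real-time anomaly flagging for compliance monitoring.
# The features and baseline below are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical baseline of "normal" activity: one row per event, with
# features [requests/min, payload size in bytes, failed-login count].
baseline = rng.normal(loc=[10.0, 500.0, 0.5],
                      scale=[2.0, 50.0, 0.5],
                      size=(1000, 3))

# Train an isolation forest on the baseline; contamination is the
# assumed fraction of anomalous events in normal traffic.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline)

def is_suspicious(event: np.ndarray) -> bool:
    """Return True if the event deviates enough from baseline to review."""
    return model.predict(event.reshape(1, -1))[0] == -1

# A burst of failed logins with an unusually large payload stands out.
event = np.array([80.0, 4000.0, 12.0])
print(is_suspicious(event))  # True: flag for human review
```

In practice such a model would score a live stream of events and route flagged ones to analysts, which is the "detect before damage" loop the speakers alluded to.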
At the center of the discussions were large language models, which currently drive AI development. They offer enormous possibilities but also pose crucial challenges of data management, localization, privacy protection, and equal access. Without targeted attention to these aspects, the technology risks amplifying inequalities and vulnerabilities.
Several speakers highlighted the need for shared responsibility: developers must ensure the diversity and quality of training data, organizations adopting these solutions must monitor for bias and ensure impartiality, and regulators must provide clear guidelines and impose standards for the safe and ethical use of AI.
Finally, the issue of legal liability emerged forcefully. The current regulatory framework, according to legal experts who participated in the debate, appears inadequate to address the damage caused by artificial intelligence tools.
It is essential to define clearly who is liable in the event of fraud or abuse: the developers, the companies deploying the models, or the model providers. Only with clear, shared rules will it be possible to fully exploit the potential of AI without leaving citizens and businesses exposed to uncontrolled risks.