
Linus Torvalds, the creator of Linux, has taken a firm, uncompromising position in the debate over the use of artificial intelligence tools in writing and reviewing Linux kernel code.
In recent discussions among kernel developers, attention has focused on a proposal to treat contributions generated with the help of large language models (LLMs) separately or under specific rules. The idea was put forward by Oracle’s Lorenzo Stoakes, who called the view that LLMs are simply “another tool” naive.
Torvalds responded bluntly: in his view, the idea of singling out AI-generated or AI-influenced code with special labels is nonsense. He emphasized that those who use AI to produce low-quality patches are unlikely to honestly disclose it, which would render any formal regulation pointless.
In a second comment, Torvalds reiterated that the kernel’s technical documentation should not become “an ideological arena” between AI enthusiasts and detractors.
For this reason, he argues, the official description should remain neutral, defining AI simply as “a tool,” without further value judgments.
Despite his critical tone, Torvalds did not propose banning AI assistants: he implicitly acknowledged that a ban would be pointless, since their use would continue anyway, perhaps simply without being declared.
The discussion comes at a time when several teams involved in kernel development are drafting specific guidelines for the use of AI in preparing patches, and AI tools are increasingly being used in practice.
Torvalds has expressed less harsh views on AI in the past: in 2024 he called most AI marketing “hype” and admitted an interest in vibe coding for non-critical cases, though those comments were not universally welcomed by the technical community.
