
Redazione RHC: 19 November 2025 10:32
Google has announced the launch of Gemini 3, the next generation of its flagship artificial intelligence model, which the company is integrating into search, the Gemini app, cloud services, and developer tools. Google management calls Gemini 3 the smartest model in its lineup and the next step toward artificial general intelligence (AGI).
Nearly two years ago, the company launched the so-called Gemini era, and since then the scale of AI adoption has grown significantly. According to Google, the AI Overviews feature in Search now reaches approximately 2 billion monthly users, the Gemini app has roughly 650 million monthly active users, over 70% of Google Cloud customers already use the company’s AI services, and approximately 13 million developers have worked with its generative models.
Each generation of the Gemini family has built on the previous one. The first version introduced support for multiple data types and extended contexts, the second laid the groundwork for so-called agentic capabilities and improved the model’s reasoning, and Gemini 2.5 held the top spot in the popular LMArena rankings for several months. Gemini 3 now combines these advances into a single core and is expected to better understand complex user queries, take context and intent into account, and generally act as a more attentive digital conversationalist.
Google says Gemini 3 Pro achieves record results on numerous industry benchmarks for logic, mathematics, and data processing, significantly outperforming the previous generation, Gemini 2.5 Pro.
In tests such as GPQA Diamond and Humanity’s Last Exam, the model demonstrates expert-level reasoning, and on specialized mathematical problem sets it reaches new highs for frontier models. The company also highlights progress on multimodal benchmarks, which evaluate text, images, and video together.
The developers emphasize not just the numbers in a table, but also the model’s behavior in a typical conversation. According to Google, Gemini 3 is designed to give concise, relevant responses, avoiding empty compliments and clichés in favor of honest, useful answers that help the user understand the topic or see the problem from a new perspective.
One of Gemini 3’s key application areas is learning. The model was designed from the ground up to be multimodal: it can process text, images, video, audio, and code simultaneously, and its million-token context window allows it to handle extremely large volumes of material.
Google cites examples of how Gemini 3 “deciphers” old handwritten recipes, translates them from several languages, and compiles them into a family cookbook; transforms scientific articles and hours-long lectures into interactive notes and flashcards; and analyzes sports workout videos, highlighting common mistakes and suggesting a training plan.
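In practice, these multimodal scenarios correspond to a single API request. As a minimal sketch, assuming the google-genai Python SDK and a hypothetical file name, with the model ID given only as an assumption (AI Studio lists the identifiers actually available to each account), a scan of a handwritten recipe and a text instruction can be combined in one call:

```python
# Minimal multimodal request sketch using Google's google-genai SDK (pip install google-genai).
# The model ID and file name are assumptions for illustration only.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # or rely on the GEMINI_API_KEY env variable

# Upload an image of a handwritten recipe (hypothetical file name).
recipe_scan = client.files.upload(file="grandmas_recipe.jpg")

# Combine the uploaded image and a text instruction in a single request.
response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumed model ID
    contents=[
        recipe_scan,
        "Transcribe this handwritten recipe and translate it into English.",
    ],
)
print(response.text)
```

The same pattern extends to video, audio, and long documents, with the large context window absorbing hours of lecture recordings or entire article collections in one request.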
The second main application area is software development. Gemini 3 is positioned as Google’s best model to date for so-called vibe coding, where the developer describes what they want to achieve and the AI takes care of a significant portion of the routine programming and interface assembly. According to the company, Gemini 3 tops rankings such as WebDev Arena and is significantly better than its predecessors at tasks that require not only writing code, but also the correct use of tools, the terminal, and external APIs, as the sketch below illustrates.
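To show what “use of tools” can look like at the API level, here is a minimal function-calling sketch with the google-genai Python SDK; the model ID and the get_exchange_rate helper are assumptions for illustration, not part of Google’s announcement:

```python
# Tool-use sketch with the google-genai SDK: the model can decide to call a
# plain Python function passed as a tool. Model ID and helper are assumptions.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

def get_exchange_rate(base: str, target: str) -> float:
    """Hypothetical tool: return the exchange rate between two currencies."""
    rates = {("EUR", "USD"): 1.08}  # stubbed data for the sketch
    return rates.get((base, target), 1.0)

# The SDK accepts Python callables as tools; the model chooses when to invoke them.
response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumed model ID
    contents="How many US dollars are 250 euros right now?",
    config=types.GenerateContentConfig(tools=[get_exchange_rate]),
)
print(response.text)
```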

Coinciding with the launch of Gemini 3, Google unveiled a new platform, Google Antigravity. This agent-based development environment moves the AI from a side chat panel to the forefront, giving it direct access to the code editor, the terminal, and a built-in browser.
The agent can plan its work, split tasks into phases, run multiple processes in parallel, test and validate its own code, and leave detailed artifacts such as plans, logs, and screenshots so users can see exactly what the system has done. Antigravity uses not only Gemini 3 Pro, but also the specialized Gemini 2.5 Computer Use model for browser control, as well as the proprietary Nano Banana engine for image generation and editing.
For more complex tasks, Google is developing a dedicated Gemini 3 Deep Think mode. The company claims it’s even more effective at handling unconventional problems that require reasoning and finding solutions in the absence of an obvious correct answer. Deep Think is currently undergoing further security testing and is available to a limited number of testers. It will later be available to Google AI Ultra subscribers.
A separate section of the announcement is dedicated to security. Google says that Gemini 3 has undergone the most extensive testing of any of its models, making it more resistant to malicious prompts, less susceptible to user manipulation, and better protected against scenarios such as automated cyberattacks. To assess the risks, Google engaged not only its own teams but also external experts, including relevant UK government agencies and independent firms that audited the model.
The model is available in the Gemini app, in AI Mode in Search for paid Google AI Pro and Ultra subscribers, through the Gemini API and developer tools in AI Studio, and in the Gemini CLI command-line utility. For enterprise customers, it will be offered through Vertex AI and the Gemini Enterprise package. Some agent features, such as Gemini Agent for email, are already available for testing by users with advanced subscriptions, while Deep Think will be rolled out gradually as testing is completed.
Google now has not only new benchmark records, but also broader reach: the company will immediately implement the model in search and key products, where it has billions of users. The question is how useful this update will be for ordinary people and developers, and how quickly Gemini 3 can transform from a spectacular demonstration of AI capabilities into a truly indispensable work tool.
Redazione