Google Releases Gemini 2.5 Experimental, Challenging OpenAI Again

Technology | Author: EqualOcean News | Editor: Yiran Xing, Wanqi Xu | Mar 28, 2025 04:40 PM (GMT+8)

Silicon Valley tech giants continue to launch large AI models


On March 25, Google launched Gemini 2.5 Pro Experimental, which the company hails as its most advanced AI model to date. Koray Kavukcuoglu, Chief Technology Officer of Google DeepMind, said that Gemini 2.5 represents the next step in Google's goal of making artificial intelligence "smarter and better at reasoning."

In December 2024, Google introduced Gemini 2.0 Flash Thinking, a multimodal reasoning model built for fast processing that exposes its reasoning transparently. On January 22, 2025, Google officially released an enhanced version of that model.

The newly released Gemini 2.5 series is Google's answer to OpenAI's "o" series of reasoning models. As the series' most capable model for complex tasks, the experimental version of Gemini 2.5 Pro outperformed OpenAI o3-mini, Claude 3.7 Sonnet, Grok-3, and DeepSeek-R1 across multiple benchmark tests, and it ranks first on LMArena (an open-source platform for evaluating large language models) by a significant margin. However, Google has not released benchmark comparisons between Gemini 2.5 Pro and OpenAI's o1, o1-pro, and o3 models.

Just before Google's announcement, OpenAI got ahead with a livestream of its own, releasing GPT-4o's brand-new image generation capability at around the same time.

OpenAI noted that, in the past, AI could often generate stunning visuals but struggled to meet practical application needs. GPT-4o retains both the conversation context and the details of the prompt, and it lets users upload images for extension or modification, significantly improving the accuracy and practical value of its visual output. Sam Altman announced during the livestream that the native image generation feature is built on the GPT-4o model itself and no longer requires calling the separate DALL-E text-to-image model.