Google has released Gemini 3.1 Pro, the latest version of its large language model. The model is currently available in preview and is expected to reach general availability soon.
Gemini 3.1 Pro represents a significant upgrade over its predecessor, Gemini 3, which was launched in November and was already regarded as a highly capable AI system.
According to benchmark data shared by Google, the new model shows substantial performance gains over earlier versions. On independent evaluations such as Humanity’s Last Exam, Gemini 3.1 Pro scored significantly higher than Gemini 3.
Brendan Foody, CEO of AI startup Mercor, stated that Gemini 3.1 Pro now ranks at the top of the APEX Agents leaderboard. Mercor’s APEX benchmarking system is designed to measure how effectively AI models perform real-world professional tasks.
Foody noted that the results demonstrate rapid progress in agent capabilities, particularly in knowledge-intensive work requiring multi-step reasoning.
The release occurs amid intensified competition in the large language model sector. Major developers, including OpenAI and Anthropic, have recently introduced new models aimed at improving agentic performance and complex reasoning abilities.
Gemini 3.1 Pro positions Google at the forefront of current benchmark rankings as model developers race to deliver higher performance on both standardized benchmarks and real-world task evaluations.