Google Gemini 1.5 Pro Surpasses GPT-4 Turbo With A One-Million-Token Context Window. What Does It Mean?

What distinguishes Gemini 1.5 Pro is its long-context understanding across different modalities.

Google asserts that Gemini 1.5 Pro achieves results comparable to the recently launched Gemini 1.0 Ultra while using significantly less compute.

The standout feature of Gemini 1.5 Pro is its ability to consistently process information across a context window of up to one million tokens.

That is the longest context window of any large-scale foundation model to date.

To provide context, the Gemini 1.0 models offer a context window of up to 32,000 tokens, GPT-4 Turbo has 128,000 tokens, and Claude 2.1 has 200,000 tokens.

Google is allowing a select group of developers and enterprise customers to experiment with a context window of up to one million tokens.

Currently in preview, developers can test Gemini 1.5 Pro using Google's AI Studio and Vertex AI.
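
As a rough illustration of what such a preview call might look like, here is a minimal sketch using the google-generativeai Python SDK (the AI Studio path). The model name "gemini-1.5-pro-latest", the placeholder API key, and the file name are assumptions; the exact identifiers available in the preview may differ.

import google.generativeai as genai

# Configure the SDK with an AI Studio API key (placeholder value).
genai.configure(api_key="YOUR_API_KEY")

# Model name is an assumption; preview identifiers may differ.
model = genai.GenerativeModel("gemini-1.5-pro-latest")

# Load a long document to exercise the extended context window.
with open("large_report.txt") as f:
    document = f.read()

response = model.generate_content(
    ["Summarize the key findings of this report:", document]
)
print(response.text)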

Gemini 1.5 Pro can process approximately 700,000 words or 30,000 lines of code in a single prompt, a substantial upgrade over Gemini 1.0 Pro, which can handle about 35 times less.
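
One practical way to see how much of that budget a given input would actually use is to count its tokens before sending the request. The sketch below again assumes the google-generativeai SDK and the same model name; the file name is hypothetical.

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-pro-latest")  # assumed model name

with open("repository_dump.txt") as f:  # hypothetical concatenated codebase
    source = f.read()

# count_tokens reports how many tokens the input would consume,
# which can be checked against the one-million-token preview limit.
total = model.count_tokens(source).total_tokens
print(f"Input uses {total:,} tokens of the 1,000,000-token window")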

Additionally, Gemini 1.5 Pro can handle up to 11 hours of audio or 1 hour of video across various languages.
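
For audio or video, the usual pattern is to upload the media first and then reference it in the prompt. The sketch below assumes the SDK's file-upload helper (genai.upload_file) and a local recording named meeting.mp3; both are illustrative rather than confirmed details of the preview.

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-pro-latest")  # assumed model name

# Upload a long recording, then ask the model about its contents.
audio = genai.upload_file(path="meeting.mp3")  # hypothetical file
response = model.generate_content(
    [audio, "List the main topics discussed and who raised each one."]
)
print(response.text)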

Thanks for reading.