Gemini (language model)
{{Short description|Large language model developed by Google}}
{{Use American English|date=August 2023}}
{{Use list-defined references|date=August 2023}}
{{Use mdy dates|date=August 2023}}
{{Distinguish|Gemini (chatbot)}}
{{Infobox software
| logo = Google Gemini logo.svg
| logo_upright = 1.1
| author =
| developer = Google DeepMind
| released = {{Start date and age|2023|12|06}}
| discontinued =
| latest release version =
| latest release date = {{Start date and age|2023|12|06}}
| qid =
| programming language =
| replaces = PaLM 2
| replaced_by =
| language = English
| genre = Large language model
| license = Proprietary
| website = {{official url}}
}}
Gemini is a family of multimodal large language models (LLMs) developed by Google DeepMind, and the successor to LaMDA and PaLM 2. Comprising Gemini Ultra, Gemini Pro, Gemini Flash, and Gemini Nano, it was announced on December 6, 2023, and positioned as a competitor to OpenAI's GPT-4. It powers the chatbot of the same name. In March 2025, Gemini 2.5 Pro Experimental debuted atop the LMArena leaderboard and achieved highly competitive results on a range of industry benchmarks.
History
= Development =
{{further|Gemini (chatbot)#Background}}
{{Multiple image
| total_width = 270
| caption_align = center
| image1 = Sundar Pichai (cropped).jpg
| caption1 =
| image2 = Demis Hassabis Royal Society.jpg
| caption2 =
| footer = Google CEO Sundar Pichai ({{abbr|L|left}}) and DeepMind CEO Demis Hassabis ({{abbr|R|right}}) spearheaded the development of Gemini.
}}
Google announced Gemini, a large language model (LLM) developed by subsidiary Google DeepMind, during the Google I/O keynote on May 10, 2023. It was positioned as a more powerful successor to PaLM 2, which was also unveiled at the event, with Google CEO Sundar Pichai stating that Gemini was still in its early developmental stages. Unlike other LLMs, Gemini was said to be unique in that it was not trained on a text corpus alone and was designed to be multimodal, meaning it could process multiple types of data simultaneously, including text, images, audio, video, and computer code. It had been developed as a collaboration between DeepMind and Google Brain, two branches of Google that had been merged as Google DeepMind the previous month. In an interview with Wired, DeepMind CEO Demis Hassabis touted Gemini's advanced capabilities, which he believed would allow the algorithm to trump OpenAI's ChatGPT, which runs on GPT-4 and whose growing popularity had been aggressively challenged by Google with LaMDA and Bard. Hassabis highlighted the strengths of DeepMind's AlphaGo program, which gained worldwide attention in 2016 when it defeated Go champion Lee Sedol, saying that Gemini would combine the power of AlphaGo and other Google–DeepMind LLMs.
In August 2023, The Information published a report outlining Google's roadmap for Gemini, revealing that the company was targeting a launch date of late 2023. According to the report, Google hoped to surpass OpenAI and other competitors by combining the conversational text capabilities present in most LLMs with artificial intelligence–powered image generation, allowing Gemini to create contextual images and be adapted for a wider range of use cases. As with Bard, Google co-founder Sergey Brin was summoned out of retirement to assist in the development of Gemini, along with hundreds of other engineers from Google Brain and DeepMind; he was later credited as a "core contributor" to Gemini. Because Gemini was being trained on transcripts of YouTube videos, lawyers were brought in to filter out any potentially copyrighted materials.
With news of Gemini's impending launch, OpenAI hastened its work on integrating GPT-4 with multimodal features similar to those of Gemini. The Information reported in September that several companies had been granted early access to "an early version" of the LLM, which Google intended to make available to clients through Google Cloud's Vertex AI service. The publication also stated that Google was arming Gemini to compete with both GPT-4 and Microsoft's GitHub Copilot.
= Launch =
On December 6, 2023, Pichai and Hassabis announced "Gemini 1.0" at a virtual press conference. It comprised three models: Gemini Ultra, designed for "highly complex tasks"; Gemini Pro, designed for "a wide range of tasks"; and Gemini Nano, designed for "on-device tasks". At launch, Gemini Pro and Nano were integrated into Bard and the Pixel 8 Pro smartphone, respectively, while Gemini Ultra was set to power "Bard Advanced" and become available to software developers in early 2024. Other products into which Google intended to incorporate Gemini included Search, Ads, Chrome, Duet AI on Google Workspace, and AlphaCode 2. It was initially made available only in English. Google touted Gemini as its "largest and most capable AI model", designed to emulate human behavior, but stated that it would not be made widely available until the following year due to the need for "extensive safety testing". Gemini was trained on and powered by Google's Tensor Processing Units (TPUs); its name references both the DeepMind–Google Brain merger and NASA's Project Gemini.
Gemini Ultra was said to have outperformed GPT-4, Anthropic's Claude 2, Inflection AI's Inflection-2, Meta's LLaMA 2, and xAI's Grok 1 on a variety of industry benchmarks, while Gemini Pro was said to have outperformed GPT-3.5. Gemini Ultra was also the first language model to outperform human experts on the 57-subject Massive Multitask Language Understanding (MMLU) test, obtaining a score of 90%. Gemini Pro was made available to Google Cloud customers on AI Studio and Vertex AI on December 13, while Gemini Nano was set to be made available to Android developers as well. Hassabis further revealed that DeepMind was exploring how Gemini could be "combined with robotics to physically interact with the world". In accordance with an executive order signed by U.S. President Joe Biden in October, Google stated that it would share testing results of Gemini Ultra with the federal government of the United States. Similarly, the company was engaged in discussions with the government of the United Kingdom to comply with the principles laid out at the AI Safety Summit at Bletchley Park in November.
= Updates =
Google partnered with Samsung to integrate Gemini Nano and Gemini Pro into its Galaxy S24 smartphone lineup in January 2024. The following month, Bard and Duet AI were unified under the Gemini brand, with "Gemini Advanced with Ultra 1.0" debuting via a new "AI Premium" tier of the Google One subscription service. Gemini Pro also received a global launch.
In February 2024, Google launched Gemini 1.5 in a limited capacity, positioned as a more powerful and capable model than 1.0 Ultra. This "step change" was achieved through various technical advancements, including a new architecture, a mixture-of-experts approach, and a larger one-million-token context window, which equates to roughly an hour of silent video, 11 hours of audio, 30,000 lines of code, or 700,000 words. The same month, Google debuted Gemma, a family of free and open-source LLMs that serve as a lightweight version of Gemini. The models come in two sizes, with two billion and seven billion parameters, respectively. Multiple publications viewed this as a response to Meta and others open-sourcing their AI models, and a stark reversal from Google's longstanding practice of keeping its AI proprietary. Google announced an additional model, Gemini 1.5 Flash, on May 14 at the 2024 I/O keynote.
Gemma 2 was released on June 27, 2024.{{Cite web |date=2024-06-27 |title=Gemma 2 is now available to researchers and developers |url=https://blog.google/technology/developers/google-gemma-2/ |access-date=2024-08-15 |website=Google |language=en-us}}
Two updated Gemini models, Gemini-1.5-Pro-002 and Gemini-1.5-Flash-002, were released on September 24, 2024.{{Cite web |title=Updated production-ready Gemini models, reduced 1.5 Pro pricing, increased rate limits, and more- Google Developers Blog |url=https://developers.googleblog.com/en/updated-production-ready-gemini-models-reduced-15-pro-pricing-increased-rate-limits-and-more/ |access-date= |website=developers.googleblog.com |language=en}}
On December 11, 2024, Google announced Gemini 2.0 Flash Experimental,{{Cite web |date=2024-12-11 |title=Introducing Gemini 2.0: our new AI model for the agentic era |url=https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/#gemini-2-0 |access-date=2024-12-20 |website=Google |language=en-us}} a significant update to its Gemini AI model. This iteration boasts improved speed and performance over its predecessor, Gemini 1.5 Flash. Key features include a Multimodal Live API for real-time audio and video interactions, enhanced spatial understanding, native image and controllable text-to-speech generation (with watermarking), and integrated tool use, including Google Search.{{Cite web |title=The next chapter of the Gemini era for developers- Google Developers Blog |url=https://developers.googleblog.com/en/the-next-chapter-of-the-gemini-era-for-developers/ |access-date=2024-12-20 |website=developers.googleblog.com |language=en}} It also introduces improved agentic capabilities, a new Google Gen AI SDK,{{Cite web |title=Gemini 2.0 Flash (experimental) {{!}} Gemini API |url=https://ai.google.dev/gemini-api/docs/models/gemini-v2 |access-date=2024-12-20 |website=Google AI for Developers |language=en}} and "Jules," an experimental AI coding agent for GitHub. Additionally, Google Colab is integrating Gemini 2.0 to generate data science notebooks from natural language. Gemini 2.0 was available through the Gemini chat interface for all users as "Gemini 2.0 Flash experimental".
On January 30, 2025, Google released Gemini 2.0 Flash as the new default model, with Gemini 1.5 Flash still available for use. This was followed by the release of Gemini 2.0 Pro on February 5, 2025. Additionally, Google released Gemini 2.0 Flash Thinking Experimental, which details the language model's thinking process when responding to prompts.{{cite web |title=Gemini 2.0 is now available to everyone |url=https://blog.google/technology/google-deepmind/gemini-model-updates-february-2025/ |website=Google |language=en-us |date=February 5, 2025}}
Gemma 3 was released on March 12, 2025.{{cite web|title=Introducing Gemma 3: The most capable model you can run on a single GPU or TPU|url=https://blog.google/technology/developers/gemma-3/|website=The Keyword|date=March 12, 2025}}{{cite web|title=Welcome Gemma 3: Google's all new multimodal, multilingual, long context open LLM |url=https://huggingface.co/blog/gemma3|website=Hugging Face|date=March 12, 2025}} The next day, Google announced that Gemini in Android Studio would be able to understand simple UI mockups and transform them into working Jetpack Compose code.{{Cite web |last=Abner |first=Li |date=Mar 13, 2025 |title=Gemini in Android Studio can now turn UI mockups into code |url=https://9to5google.com/2025/03/13/gemini-android-studio-images-code/ |access-date=April 12, 2025 |website=9to5Google}}
Gemini 2.5 Pro Experimental was released on March 25, 2025. Google described it as its most intelligent AI model yet, featuring enhanced reasoning and coding capabilities, native multimodality, and a one-million-token context window at launch. It is a "thinking model", capable of reasoning through steps before responding using techniques such as chain-of-thought prompting.
At Google I/O 2025, Google announced significant updates to its Gemini core models.{{cite web |title=Gemini 2.5: Our most intelligent models are getting even better |url=https://blog.google/technology/google-deepmind/google-gemini-updates-io-2025/ |date=2025-05-20 |access-date=2025-05-21}}{{cite web |title=Google I/O 2025 announcements: Gemini 2.5 models, Imagen 4, Veo 3 and Flow |url=https://www.gsmarena.com/google_i_o_2025_announcements_gemini_25_models_imagen_4_veo_3_and_flow-news-67889.php |date=2025-05-21 |access-date=2025-05-21}} Gemini 2.5 Flash became the default model, delivering faster responses. Gemini 2.5 Pro was introduced as the most advanced Gemini model, featuring reasoning, coding capabilities, and the new Deep Think mode for complex tasks.{{cite web |title=Deep Think boosts the performance of Google’s flagship Gemini AI model |url=https://techcrunch.com/2025/05/20/deep-think-boosts-the-performance-of-googles-flagship-google-gemini-ai-model/ |date=2025-05-20 |access-date=2025-05-21 |website=TechCrunch}} Both 2.5 Pro and Flash support native audio output and improved security.
General availability for Gemini 2.5 Pro and Flash is scheduled for June 2025.{{cite web |title=Google upgrades Gemini 2.5 Pro with a new Deep Think mode for advanced reasoning abilities |url=https://the-decoder.com/google-upgrades-gemini-2-5-pro-with-a-new-deep-think-mode-for-advanced-reasoning-abilities/ |date=2025-05-21 |access-date=2025-05-21}}
= Model versions =
The following table lists the main model versions of Gemini, describing the significant changes included with each version:{{Cite web |title=Gemini Release updates |url=https://gemini.google.com/updates |website=Google |access-date=April 9, 2025 |archive-date=April 9, 2025 |archive-url=https://web.archive.org/web/20250409083428/https://gemini.google.com/updates |url-status=live }}{{Cite web |title=Gemini models |url=https://ai.google.dev/gemini-api/docs/models |website=Google |access-date=April 9, 2025 |archive-date=April 9, 2025 |archive-url=https://web.archive.org/web/20250409083428/https://ai.google.dev/gemini-api/docs/models |url-status=live }}
{| class="wikitable sortable"
! Version !! Release date !! Status !! Description
|-
| Bard || {{date|2023-03-21}} || {{eliminated|Discontinued}} || The first version
|-
| 1.0 Nano || {{date|2023-12-06}} || {{eliminated|Discontinued}} || For mobile devices
|-
| 1.0 Pro || {{date|2023-12-13}} || {{eliminated|Discontinued}} ||
|-
| 1.0 Ultra || {{date|2024-02-08}} || {{eliminated|Discontinued}} ||
|-
| 1.5 Pro || {{date|2024-02-15}} || {{eliminated|Discontinued}} ||
|-
| 1.5 Flash || {{date|2024-05-14}} || {{eliminated|Discontinued}} ||
|-
| 2.0 Flash || {{date|2025-01-30}} || {{eliminated|Discontinued}} || Default model as of January 2025; improved speed, multimodal, real-time audio/video, enhanced spatial understanding
|-
| 2.0 Flash Thinking || {{date|2025-02-05}} || {{eliminated|Discontinued}} || Experimental; exposes the model's reasoning process; available in the Gemini app and API
|-
| 2.0 Flash-Lite || {{date|2025-02}} || {{active}} || Most cost-efficient model; public preview in Google AI Studio and Vertex AI
|-
| 2.0 Pro || {{date|2025-02-05}} || {{eliminated|Discontinued}} ||
|-
| 2.5 Pro || {{date|2025-03-25}} || {{active}} ||
|-
| 2.5 Flash || {{date|2025-04-17}} || {{active}} || Default model as of May 2025
|}
Technical specifications
The first generation of Gemini ("Gemini 1") comprises three models sharing the same architecture. They are decoder-only transformers, with modifications to allow efficient training and inference on TPUs. They have a context length of 32,768 tokens and use multi-query attention. Two versions of Gemini Nano, Nano-1 (1.8 billion parameters) and Nano-2 (3.25 billion parameters), are distilled from larger Gemini models and designed for use on edge devices such as smartphones. As Gemini is multimodal, each context window can contain multiple forms of input. The different modes can be interleaved and do not have to be presented in a fixed order, allowing for a multimodal conversation. For example, the user might open the conversation with a mix of text, picture, video, and audio, presented in any order, and Gemini might reply with the same free ordering. Input images may be of different resolutions, while video is input as a sequence of frames. Audio is sampled at 16 kHz and then converted into a sequence of tokens by the Universal Speech Model. Gemini's dataset is multimodal and multilingual, consisting of "web documents, books, and code, and includ[ing] image, audio, and video data".
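The multi-query attention mentioned above differs from standard multi-head attention in that all query heads share a single key head and a single value head, which shrinks the key–value cache needed during decoding. The following is a minimal NumPy sketch of the mechanism; the dimensions and weight names are illustrative stand-ins, not Gemini's actual configuration.

```python
import numpy as np

def multi_query_attention(x, W_q, W_k, W_v, n_heads):
    """Multi-query attention: per-head query projections, but one shared
    key projection and one shared value projection across all heads."""
    seq_len, d_model = x.shape
    d_head = d_model // n_heads
    keys = x @ W_k                    # (seq_len, d_head) -- single shared key head
    vals = x @ W_v                    # (seq_len, d_head) -- single shared value head
    # causal mask: a decoder-only model cannot attend to future tokens
    mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
    outputs = []
    for h in range(n_heads):
        q = x @ W_q[h]                # (seq_len, d_head) -- per-head queries
        scores = q @ keys.T / np.sqrt(d_head)
        scores = np.where(mask, -1e9, scores)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        outputs.append(weights @ vals)
    return np.concatenate(outputs, axis=-1)  # (seq_len, d_model)

rng = np.random.default_rng(0)
seq_len, d_model, n_heads = 8, 32, 4
d_head = d_model // n_heads
x = rng.normal(size=(seq_len, d_model))
W_q = rng.normal(size=(n_heads, d_model, d_head))
W_k = rng.normal(size=(d_model, d_head))
W_v = rng.normal(size=(d_model, d_head))
out = multi_query_attention(x, W_q, W_k, W_v, n_heads)
print(out.shape)  # (8, 32)
```

Note that only one `keys`/`vals` pair must be cached per layer regardless of head count, which is the motivation for the design on memory-constrained inference hardware.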
The second generation of Gemini ("Gemini 1.5") has two models. Gemini 1.5 Pro is a multimodal sparse mixture-of-experts, with a context length in the millions, while Gemini 1.5 Flash is distilled from Gemini 1.5 Pro, with a context length above 2 million.
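A sparse mixture-of-experts layer of the kind described above routes each token to only a few of many expert sub-networks, so compute per token stays modest even as total parameter count grows. Below is a toy NumPy sketch of top-k gating; the expert count, dimensions, and linear "experts" are illustrative assumptions, not Gemini 1.5's actual design.

```python
import numpy as np

def moe_layer(x, gate_W, experts, top_k=2):
    """Sparse MoE: a gating network scores every expert per token, but only
    the top_k highest-scoring experts are evaluated and mixed."""
    logits = x @ gate_W                            # (n_tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]  # chosen expert indices
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        chosen = top[t]
        # softmax over only the selected experts' scores
        w = np.exp(logits[t, chosen] - logits[t, chosen].max())
        w /= w.sum()
        for weight, e in zip(w, chosen):
            out[t] += weight * experts[e](x[t])
    return out

rng = np.random.default_rng(1)
d, n_experts = 16, 8
# toy experts: independent linear maps standing in for feed-forward blocks
expert_mats = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [(lambda W: (lambda v: v @ W))(W) for W in expert_mats]
x = rng.normal(size=(4, d))
y = moe_layer(x, rng.normal(size=(d, n_experts)), experts, top_k=2)
print(y.shape)  # (4, 16)
```

With `top_k=2` of eight experts, each token touches only a quarter of the layer's parameters per forward pass.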
Gemma 2 27B is trained on web documents, code, and science articles. Gemma 2 9B was distilled from 27B, and Gemma 2 2B was distilled from a 7B model that remained unreleased.{{Citation |last1=Gemma Team |title=Gemma 2: Improving Open Language Models at a Practical Size |date=2024-08-02 |arxiv=2408.00118 |last2=Riviere |first2=Morgane |last3=Pathak |first3=Shreya |last4=Sessa |first4=Pier Giuseppe |last5=Hardin |first5=Cassidy |last6=Bhupatiraju |first6=Surya |last7=Hussenot |first7=Léonard |last8=Mesnard |first8=Thomas |last9=Shahriari |first9=Bobak}}
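Distillation, used above to produce the smaller Gemma 2 models, trains a small "student" model to match a large "teacher" model's full output distribution rather than just hard labels. A common formulation (a hypothetical sketch, not Google's published training code) minimizes the KL divergence between temperature-softened teacher and student token distributions:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T  # temperature T > 1 softens the distribution
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Mean KL divergence from the teacher's soft targets to the student's
    predictions, averaged over sequence positions."""
    p = softmax(teacher_logits, T)   # teacher "soft targets"
    q = softmax(student_logits, T)
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    return kl.mean()

rng = np.random.default_rng(2)
teacher = rng.normal(size=(5, 100))  # 5 positions, toy vocabulary of 100
loss_self = distillation_loss(teacher, teacher)            # perfect match
loss_other = distillation_loss(rng.normal(size=(5, 100)), teacher)
print(loss_self < 1e-9, loss_other > 0)  # True True
```

The loss is zero when the student reproduces the teacher exactly and positive otherwise, which is what gradient descent on the student's parameters would push toward.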
{{As of|2025|2}}, the models released include{{Cite web |title=Gemma explained: An overview of Gemma model family architectures- Google Developers Blog |url=https://developers.googleblog.com/en/gemma-explained-overview-gemma-model-family-architectures/ |access-date=2024-08-15 |website=developers.googleblog.com |language=en}}
- Gemma 1 (2B, 7B)
- CodeGemma (2B and 7B) - Gemma 1 fine-tuned for code generation.
- Gemma 2 (2B, 9B, 27B) - 27B trained from scratch; 2B and 9B distilled from larger models.
- Gemma 3 (1B, 4B, 12B, 27B) - Upgrade to Gemma 2, capable of multilinguality (supports 140 languages), longer context length (128k tokens), multimodality, and function calling.
- RecurrentGemma (2B, 9B) - Griffin-based, instead of Transformer-based.
- PaliGemma (3B) - A vision-language model that takes text and image inputs, and outputs text. It is made by connecting a SigLIP image encoder with a Gemma language model.{{Cite web |title=PaLI: Scaling Language-Image Learning in 100+ Languages |url=https://research.google/blog/pali-scaling-language-image-learning-in-100-languages/ |access-date=2024-08-15 |website=research.google |language=en}}
- PaliGemma 2 (3B, 10B, 28B) - Upgrade to PaliGemma, capable of more vision-language tasks.{{Cite web |title=Introducing PaliGemma 2 mix: A vision-language model for multiple tasks- Google Developers Blog |url=https://developers.googleblog.com/en/introducing-paligemma-2-mix/ |access-date=2025-02-22 |website=developers.googleblog.com |language=en}}
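The PaliGemma entries above describe wiring an image encoder into a language model. A common recipe for this kind of coupling is a learned linear projection that maps image-patch embeddings into the language model's token-embedding space, so visual "tokens" can be prepended to the text sequence. The sketch below illustrates that idea only; the dimensions and variable names are illustrative assumptions, not PaliGemma's actual ones.

```python
import numpy as np

rng = np.random.default_rng(3)
d_vision, d_model = 24, 32

# stand-ins for a SigLIP-style encoder output and embedded text tokens
image_patches = rng.normal(size=(16, d_vision))  # 16 patch embeddings
text_tokens = rng.normal(size=(10, d_model))     # 10 embedded text tokens

# learned linear projection aligning vision features with the LM's space
W_proj = rng.normal(size=(d_vision, d_model))
image_tokens = image_patches @ W_proj            # (16, d_model)

# the language model then consumes one interleaved sequence
sequence = np.concatenate([image_tokens, text_tokens], axis=0)
print(sequence.shape)  # (26, 32)
```

After projection, the transformer treats visual and textual positions uniformly, which is why the combined model can answer text questions about image content.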
Reception
{{See also|Gemini (chatbot)#Reception}}
Gemini's launch was preceded by months of intense speculation and anticipation, which MIT Technology Review described as "peak AI hype". In August 2023, Dylan Patel and Daniel Nishball of research firm SemiAnalysis penned a blog post declaring that the release of Gemini would "eat the world" and outclass GPT-4, prompting OpenAI CEO Sam Altman to ridicule the duo on X (formerly Twitter). Business magnate Elon Musk, who co-founded OpenAI, weighed in, asking, "Are the numbers wrong?" Hugh Langley of Business Insider remarked that Gemini would be a make-or-break moment for Google, writing: "If Gemini dazzles, it will help Google change the narrative that it was blindsided by Microsoft and OpenAI. If it disappoints, it will embolden critics who say Google has fallen behind."
Reacting to its unveiling in December 2023, University of Washington professor emeritus Oren Etzioni predicted a "tit-for-tat arms race" between Google and OpenAI. Professor Alexei Efros of the University of California, Berkeley praised the potential of Gemini's multimodal approach, while scientist Melanie Mitchell of the Santa Fe Institute called Gemini "very sophisticated". Professor Chirag Shah of the University of Washington was less impressed, likening Gemini's launch to the routineness of Apple's annual introduction of a new iPhone. Similarly, Stanford University's Percy Liang, the University of Washington's Emily Bender, and the University of Galway's Michael Madden cautioned that it was difficult to interpret benchmark scores without insight into the training data used. Writing for Fast Company, Mark Sullivan opined that Google had the opportunity to challenge the iPhone's dominant market share, believing that Apple was unlikely to have the capacity to develop functionality similar to Gemini with its Siri virtual assistant. Google shares spiked by 5.3 percent the day after Gemini's launch.
Google faced criticism for a demonstration video of Gemini, which was not conducted in real time.{{Cite web |last1=Kovach |first1=Steve |last2=Elias |first2=Jennifer |date=2023-12-08 |title=Google faces controversy over edited Gemini AI demo video |url=https://www.cnbc.com/2023/12/08/google-faces-controversy-over-edited-gemini-ai-demo-video.html |access-date=2023-12-09 |website=CNBC |language=en |archive-date=December 9, 2023 |archive-url=https://web.archive.org/web/20231209010937/https://www.cnbc.com/2023/12/08/google-faces-controversy-over-edited-gemini-ai-demo-video.html |url-status=live }}
Gemini 2.5 Pro Experimental debuted at the top position on the LMArena leaderboard, a benchmark measuring human preference, indicating strong performance and output quality. The model achieved state-of-the-art or highly competitive results across various benchmarks evaluating reasoning, knowledge, science, math, coding, and long-context performance, such as Humanity's Last Exam, GPQA, AIME 2025, SWE-bench and MRCR. Initial reviews highlighted its improved reasoning capabilities and performance gains compared to previous versions. Published benchmarks also showed areas where contemporary models from competitors like Anthropic, xAI, or OpenAI held advantages.
See also
- Gato, a multimodal neural network developed by DeepMind
- Gemini Robotics
References
{{reflist|refs=
{{Cite news |last=Grant |first=Nico |date=May 10, 2023 |title=Google Builds on Tech's Latest Craze With Its Own A.I. Products |url=https://www.nytimes.com/2023/05/10/technology/google-ai-products.html |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20230510180605/https://www.nytimes.com/2023/05/10/technology/google-ai-products.html |archive-date=May 10, 2023 |access-date=August 21, 2023 |newspaper=The New York Times |issn=0362-4331}}
{{Cite web |last=Ortiz |first=Sabrina |date=May 10, 2023 |title=Every major AI feature announced at Google I/O 2023 |url=https://www.zdnet.com/article/every-major-ai-feature-announced-at-google-io-2023/ |url-status=live |archive-url=https://web.archive.org/web/20230510224825/https://www.zdnet.com/article/every-major-ai-feature-announced-at-google-io-2023/ |archive-date=May 10, 2023 |access-date=August 21, 2023 |website=ZDNet}}
{{Cite news |last=Milmo |first=Dan |date=December 6, 2023 |title=Google says new AI model Gemini outperforms ChatGPT in most tests |url=https://www.theguardian.com/technology/2023/dec/06/google-new-ai-model-gemini-bard-upgrade |url-status=live |archive-url=https://web.archive.org/web/20231206162533/https://www.theguardian.com/technology/2023/dec/06/google-new-ai-model-gemini-bard-upgrade |archive-date=December 6, 2023 |access-date=December 6, 2023 |newspaper=The Guardian |issn=0261-3077}}
{{Cite magazine |last=Levy |first=Steven |author-link=Steven Levy |date=September 11, 2023 |title=Sundar Pichai on Google's AI, Microsoft's AI, OpenAI, and ... Did We Mention AI? |url=https://www.wired.com/story/sundar-pichai-google-ai-microsoft-openai/ |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20230911124432/https://www.wired.com/story/sundar-pichai-google-ai-microsoft-openai/ |archive-date=September 11, 2023 |access-date=September 12, 2023 |magazine=Wired}}
{{Cite magazine |last=Knight |first=Will |date=June 26, 2023 |title=Google DeepMind's CEO Says Its Next Algorithm Will Eclipse ChatGPT |url=https://www.wired.com/story/google-deepmind-demis-hassabis-chatgpt/ |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20230626121231/https://www.wired.com/story/google-deepmind-demis-hassabis-chatgpt/ |archive-date=June 26, 2023 |access-date=August 21, 2023 |magazine=Wired}}
{{Cite web |last=Victor |first=Jon |date=August 15, 2023 |title=How Google is Planning to Beat OpenAI |url=https://www.theinformation.com/articles/the-forced-marriage-at-the-heart-of-googles-ai-race |url-access=subscription |url-status=live |archive-url=https://web.archive.org/web/20230815134508/https://www.theinformation.com/articles/the-forced-marriage-at-the-heart-of-googles-ai-race |archive-date=August 15, 2023 |access-date=August 21, 2023 |website=The Information}}
{{Cite news |last=Grant |first=Nico |date=January 20, 2023 |title=Google Calls In Help From Larry Page and Sergey Brin for A.I. Fight |url=https://www.nytimes.com/2023/01/20/technology/google-chatgpt-artificial-intelligence.html |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20230120081118/https://www.nytimes.com/2023/01/20/technology/google-chatgpt-artificial-intelligence.html |archive-date=January 20, 2023 |access-date=February 6, 2023 |newspaper=The New York Times |issn=0362-4331}}
{{Cite news |last1=Kruppa |first1=Miles |last2=Seetharaman |first2=Deepa |date=July 21, 2023 |title=Sergey Brin Is Back in the Trenches at Google |url=https://www.wsj.com/articles/sergey-brin-google-ai-gemini-1b5aa41e |url-access=subscription |url-status=live |archive-url=https://archive.today/20230721010618/https://www.wsj.com/amp/articles/sergey-brin-google-ai-gemini-1b5aa41e |archive-date=July 21, 2023 |access-date=September 7, 2023 |newspaper=The Wall Street Journal |issn=0099-9660}}
{{Cite web |last=Carter |first=Tom |date=December 7, 2023 |title=Google confirms that its cofounder Sergey Brin played a key role in creating its ChatGPT rival |url=https://www.businessinsider.com/google-sergey-brin-back-from-retirement-to-build-gemini-2023-12 |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20231207213015/https://www.businessinsider.com/google-sergey-brin-back-from-retirement-to-build-gemini-2023-12 |archive-date=December 7, 2023 |access-date=December 31, 2023 |website=Business Insider}}
{{Cite web |last=Victor |first=Jon |date=September 18, 2023 |title=OpenAI Hustles to Beat Google to Launch 'Multimodal' LLM |url=https://www.theinformation.com/articles/openai-hustles-to-beat-google-to-launch-multimodal-llm |url-access=subscription |url-status=live |archive-url=https://web.archive.org/web/20230918174849/https://www.theinformation.com/articles/openai-hustles-to-beat-google-to-launch-multimodal-llm |archive-date=September 18, 2023 |access-date=October 15, 2023 |website=The Information}}
{{Cite news |author= |date=September 14, 2023 |title=Google nears release of AI software Gemini, The Information reports |url=https://www.reuters.com/technology/google-nears-release-ai-software-gemini-information-2023-09-15/ |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20230915113109/https://www.reuters.com/technology/google-nears-release-ai-software-gemini-information-2023-09-15/ |archive-date=September 15, 2023 |access-date=October 2, 2023 |publisher=Reuters}}
{{Cite web |last=Nolan |first=Beatrice |date=September 23, 2023 |title=Google is quietly handing out early demos of its GPT-4 rival called Gemini. Here's what we know so far about the upcoming AI model. |url=https://www.businessinsider.com/google-gemini-explainer-ai-model-2023-9 |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20230923121028/https://www.businessinsider.com/google-gemini-explainer-ai-model-2023-9 |archive-date=September 23, 2023 |access-date=October 16, 2023 |website=Business Insider}}
{{Cite news |last=Kruppa |first=Miles |date=December 6, 2023 |title=Google Announces AI System Gemini After Turmoil at Rival OpenAI |url=https://www.wsj.com/tech/ai/google-announces-ai-system-gemini-after-turmoil-at-rival-openai-10835335 |url-access=subscription |url-status=live |archive-url=https://archive.today/20231206152820/https://www.wsj.com/tech/ai/google-announces-ai-system-gemini-after-turmoil-at-rival-openai-10835335 |archive-date=December 6, 2023 |access-date=December 6, 2023 |newspaper=The Wall Street Journal |issn=0099-9660}}
{{Cite news |last1=Liedtke |first1=Michael |last2=O'Brien |first2=Matt |date=December 6, 2023 |title=Google launches Gemini, upping the stakes in the global AI race |url=https://apnews.com/article/google-gemini-artificial-intelligence-launch-95d05d02051e75e20b574614ae720b8b |url-status=live |archive-url=https://web.archive.org/web/20231206181414/https://apnews.com/article/google-gemini-artificial-intelligence-launch-95d05d02051e75e20b574614ae720b8b |archive-date=December 6, 2023 |access-date=December 6, 2023 |publisher=Associated Press}}
{{Cite web |last=Edwards |first=Benj |date=December 6, 2023 |title=Google launches Gemini—a powerful AI model it says can surpass GPT-4 |url=https://arstechnica.com/information-technology/2023/12/google-launches-gemini-a-powerful-ai-model-it-says-can-surpass-gpt-4/ |url-status=live |archive-url=https://web.archive.org/web/20231206182034/https://arstechnica.com/information-technology/2023/12/google-launches-gemini-a-powerful-ai-model-it-says-can-surpass-gpt-4/ |archive-date=December 6, 2023 |access-date=December 6, 2023 |website=Ars Technica}}
{{Cite web |last=Pierce |first=David |date=December 6, 2023 |title=Google launches Gemini, the AI model it hopes will take down GPT-4 |url=https://www.theverge.com/2023/12/6/23990466/google-gemini-llm-ai-model |url-status=live |archive-url=https://web.archive.org/web/20231206174404/https://www.theverge.com/2023/12/6/23990466/google-gemini-llm-ai-model |archive-date=December 6, 2023 |access-date=December 6, 2023 |website=The Verge}}
{{Cite web |last1=Fung |first1=Brian |last2=Thorbecke |first2=Catherine |date=December 6, 2023 |title=Google launches Gemini, its most-advanced AI model yet, as it races to compete with ChatGPT |url=https://www.cnn.com/2023/12/06/tech/google-launches-gemini-compete-with-chatgpt/index.html |url-status=live |archive-url=https://web.archive.org/web/20231206183632/https://www.cnn.com/2023/12/06/tech/google-launches-gemini-compete-with-chatgpt/index.html |archive-date=December 6, 2023 |access-date=December 6, 2023 |publisher=CNN Business}}
{{Cite web |last=Elias |first=Jennifer |date=December 6, 2023 |title=Google launches its largest and 'most capable' AI model, Gemini |url=https://www.cnbc.com/2023/12/06/google-launches-its-largest-and-most-capable-ai-model-gemini.html |url-status=live |archive-url=https://web.archive.org/web/20231206160659/https://www.cnbc.com/2023/12/06/google-launches-its-largest-and-most-capable-ai-model-gemini.html |archive-date=December 6, 2023 |access-date=December 6, 2023 |publisher=CNBC}}
{{Cite web |date=December 6, 2023 |title=Google launches Gemini, upping the stakes in the global AI race |url=https://www.cbsnews.com/sanfrancisco/news/google-ups-the-stakes-in-ai-race-with-gemini-a-technology-trained-to-behave-more-like-humans/ |url-status=live |archive-url=https://web.archive.org/web/20231207001332/https://www.cbsnews.com/sanfrancisco/news/google-ups-the-stakes-in-ai-race-with-gemini-a-technology-trained-to-behave-more-like-humans/ |archive-date=December 7, 2023 |access-date=December 7, 2023 |publisher=CBS News}}
{{Cite web |last=Henshall |first=Will |date=December 6, 2023 |title=Google DeepMind Unveils Its Most Powerful AI Offering Yet |url=https://time.com/6343450/gemini-google-deepmind-ai/ |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20231206232509/https://time.com/6343450/gemini-google-deepmind-ai/ |archive-date=December 6, 2023 |access-date=December 6, 2023 |magazine=Time}}
{{Cite news |last1=Metz |first1=Cade |last2=Grant |first2=Nico |date=December 6, 2023 |title=Google Updates Bard Chatbot With 'Gemini' A.I. as It Chases ChatGPT |url=https://www.nytimes.com/2023/12/06/technology/google-ai-bard-chatbot-gemini.html |url-status=live |archive-url=https://web.archive.org/web/20231206153133/https://www.nytimes.com/2023/12/06/technology/google-ai-bard-chatbot-gemini.html |archive-date=December 6, 2023 |access-date=December 6, 2023 |newspaper=The New York Times |issn=0362-4331}}
{{Cite magazine |last=Knight |first=Will |date=December 6, 2023 |title=Google Just Launched Gemini, Its Long-Awaited Answer to ChatGPT |url=https://www.wired.com/story/google-gemini-ai-model-chatgpt/ |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20231206151324/https://www.wired.com/story/google-gemini-ai-model-chatgpt/ |archive-date=December 6, 2023 |access-date=December 6, 2023 |magazine=Wired}}
{{Cite news |last1=Alba |first1=Davey |author-link1=Davey Alba |last2=Ghaffary |first2=Shirin |date=December 6, 2023 |title=Google Opens Access to Gemini, Racing to Catch Up to OpenAI |url=https://www.bloomberg.com/news/articles/2023-12-06/google-opens-access-to-gemini-racing-to-catch-up-to-openai |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20231206161403/https://www.bloomberg.com/news/articles/2023-12-06/google-opens-access-to-gemini-racing-to-catch-up-to-openai |archive-date=December 6, 2023 |access-date=December 7, 2023 |publisher=Bloomberg News}}
{{Cite magazine |last=Knight |first=Will |date=December 6, 2023 |title=Google DeepMind's Demis Hassabis Says Gemini Is a New Breed of AI |url=https://www.wired.com/story/google-deepmind-demis-hassabis-gemini-ai/ |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20231206153212/https://www.wired.com/story/google-deepmind-demis-hassabis-gemini-ai/ |archive-date=December 6, 2023 |access-date=December 7, 2023 |magazine=Wired}}
{{Cite news |last1=Gurman |first1=Mark |last2=Love |first2=Julia |last3=Alba |first3=Davey |date=January 17, 2024 |title=Samsung Bets on Google-Powered AI Features in Smartphone Revamp |url=https://www.bloomberg.com/news/articles/2024-01-17/samsung-unpacked-event-galaxy-s24-s24-s24-ultra-phones-include-google-ai |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20240117182045/https://www.bloomberg.com/news/articles/2024-01-17/samsung-unpacked-event-galaxy-s24-s24-s24-ultra-phones-include-google-ai |archive-date=January 17, 2024 |access-date=February 4, 2024 |publisher=Bloomberg News}}
{{Cite magazine |last=Chokkattu |first=Julian |date=January 17, 2024 |title=Samsung's Galaxy S24 Phones Call on Google's AI to Spruce Up Their Smarts |url=https://www.wired.com/story/samsung-galaxy-unpacked-2024/ |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20240117181252/https://www.wired.com/story/samsung-galaxy-unpacked-2024/ |archive-date=January 17, 2024 |access-date=February 4, 2024 |magazine=Wired}}
{{Cite news |last=Metz |first=Cade |date=February 8, 2024 |title=Google Releases Gemini, an A.I.-Driven Chatbot and Voice Assistant |url=https://www.nytimes.com/2024/02/08/technology/google-gemini-ai-app.html |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20240208132803/https://www.nytimes.com/2024/02/08/technology/google-gemini-ai-app.html |archive-date=February 8, 2024 |access-date=February 8, 2024 |newspaper=The New York Times |issn=0362-4331}}
{{Cite news |last=Dastin |first=Jeffrey |date=February 8, 2024 |title=Google rebrands Bard chatbot as Gemini, rolls out paid subscription |url=https://www.reuters.com/technology/google-rebrands-bard-chatbot-gemini-rolls-out-paid-subscription-2024-02-08/ |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20240208162345/https://www.reuters.com/technology/google-rebrands-bard-chatbot-gemini-rolls-out-paid-subscription-2024-02-08/ |archive-date=February 8, 2024 |access-date=February 8, 2024 |publisher=Reuters}}
{{Cite web |last=Li |first=Abner |date=February 8, 2024 |title=Google One AI Premium is $19.99/mo with Gemini Advanced & Gemini for Workspace |url=https://9to5google.com/2024/02/08/google-one-ai-premium/ |url-status=live |archive-url=https://web.archive.org/web/20240208162504/https://9to5google.com/2024/02/08/google-one-ai-premium/ |archive-date=February 8, 2024 |access-date=February 8, 2024 |website=9to5Google}}
{{Cite web |last=Mehta |first=Ivan |date=February 1, 2024 |title=Google's Bard chatbot gets the Gemini Pro update globally |url=https://techcrunch.com/2024/02/01/googles-bard-chatbot-gets-the-gemini-pro-update-globally/ |url-status=live |archive-url=https://web.archive.org/web/20240201150753/https://techcrunch.com/2024/02/01/googles-bard-chatbot-gets-the-gemini-pro-update-globally/ |archive-date=February 1, 2024 |access-date=February 4, 2024 |website=TechCrunch}}
{{Cite magazine |last=Knight |first=Will |date=February 15, 2024 |title=Google's Flagship AI Model Gets a Mighty Fast Upgrade |url=https://www.wired.com/story/google-deepmind-gemini-pro-ai-upgrade/ |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20240215152939/https://www.wired.com/story/google-deepmind-gemini-pro-ai-upgrade/ |archive-date=February 15, 2024 |access-date=February 21, 2024 |magazine=Wired}}
{{Cite magazine |last=Nieva |first=Richard |date=February 15, 2024 |title=Google Unveils Gemini 1.5, But Only Developers And Enterprise Clients Have Access For Now |url=https://www.forbes.com/sites/richardnieva/2024/02/14/google-deepmind-gemini/ |url-access=subscription |url-status=live |archive-url=https://web.archive.org/web/20240215182848/https://www.forbes.com/sites/richardnieva/2024/02/14/google-deepmind-gemini/ |archive-date=February 15, 2024 |access-date=February 21, 2024 |magazine=Forbes}}
{{Cite magazine |last=McCracken |first=Harry |date=February 15, 2024 |title=Google's new Gemini 1.5 AI can dive deep into oceans of video and audio |url=https://www.fastcompany.com/91029527/google-gemini-1-5-ai-llm |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20240217180943/https://www.fastcompany.com/91029527/google-gemini-1-5-ai-llm |archive-date=February 17, 2024 |access-date=February 21, 2024 |magazine=Fast Company}}
{{Cite web |last=Stokes |first=Samantha |date=February 15, 2024 |title=Here's everything you need to know about Gemini 1.5, Google's newly updated AI model that hopes to challenge OpenAI |url=https://www.businessinsider.com/google-gemini-1-5-retools-ai-into-one-advanced-model-2024-2 |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20240219195159/https://www.businessinsider.com/google-gemini-1-5-retools-ai-into-one-advanced-model-2024-2 |archive-date=February 19, 2024 |access-date=February 21, 2024 |website=Business Insider}}
{{Cite magazine |last=Kahn |first=Jeremy |date=February 21, 2024 |title=Google unveils new family of open-source AI models called Gemma to take on Meta and others—deciding open-source AI ain't so bad after all |url=https://fortune.com/2024/02/21/google-new-family-open-source-ai-models-gemma/ |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20240221135344/https://fortune.com/2024/02/21/google-new-family-open-source-ai-models-gemma/ |archive-date=February 21, 2024 |access-date=February 21, 2024 |magazine=Fortune}}
{{Cite news |last=Alba |first=Davey |date=February 21, 2024 |title=Google Delves Deeper Into Open Source with Launch of Gemma AI Model |url=https://www.bloomberg.com/news/articles/2024-02-21/google-releases-gemma-ai-model-for-open-source-developers |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20240221131813/https://www.bloomberg.com/news/articles/2024-02-21/google-releases-gemma-ai-model-for-open-source-developers |archive-date=February 21, 2024 |access-date=February 21, 2024 |publisher=Bloomberg News}}
{{Cite news |last1=Metz |first1=Cade |last2=Grant |first2=Nico |date=February 21, 2024 |title=Google Is Giving Away Some of the A.I. That Powers Chatbots |url=https://www.nytimes.com/2024/02/21/technology/google-open-source-ai.html |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20240221140434/https://www.nytimes.com/2024/02/21/technology/google-open-source-ai.html |archive-date=February 21, 2024 |access-date=February 21, 2024 |newspaper=The New York Times |issn=0362-4331}}
{{Cite web |last=Elias |first=Jennifer |date=May 14, 2024 |title=Google rolls out its most powerful AI models as competition from OpenAI heats up |url=https://www.cnbc.com/2024/05/14/google-announces-lightweight-ai-model-gemini-flash-1point5-at-google-i/o.html |url-status=live |archive-url=https://web.archive.org/web/20240514182111/https://www.cnbc.com/2024/05/14/google-announces-lightweight-ai-model-gemini-flash-1point5-at-google-i/o.html |archive-date=May 14, 2024 |access-date=August 13, 2024 |publisher=CNBC}}
{{Cite tech report |date=December 6, 2023 |title=Gemini: A Family of Highly Capable Multimodal Models |url=https://storage.googleapis.com/deepmind-media/gemini/gemini_1_report.pdf |url-status=live |archive-url=https://web.archive.org/web/20231206151650/https://storage.googleapis.com/deepmind-media/gemini/gemini_1_report.pdf |archive-date=December 6, 2023 |access-date=December 7, 2023 |publisher=Google DeepMind}}
{{Cite tech report |date=February 15, 2024 |title=Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context |url=https://storage.googleapis.com/deepmind-media/gemini/gemini_v1_5_report.pdf |url-status=live |archive-url=https://web.archive.org/web/20240226230625/https://storage.googleapis.com/deepmind-media/gemini/gemini_v1_5_report.pdf |archive-date=February 26, 2024 |access-date=May 17, 2024 |publisher=Google DeepMind}}
{{Cite magazine |last1=Heikkilä |first1=Melissa |last2=Heaven |first2=Will Douglas |date=December 6, 2023 |title=Google DeepMind's new Gemini model looks amazing—but could signal peak AI hype |url=https://www.technologyreview.com/2023/12/06/1084471/google-deepminds-new-gemini-model-looks-amazing-but-could-signal-peak-ai-hype/ |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20231206175747/https://www.technologyreview.com/2023/12/06/1084471/google-deepminds-new-gemini-model-looks-amazing-but-could-signal-peak-ai-hype/ |archive-date=December 6, 2023 |access-date=December 6, 2023 |magazine=MIT Technology Review}}
{{Cite web |last=Chowdhury |first=Hasan |date=August 29, 2023 |title=AI bros are at war over declarations that Google's upcoming Gemini AI model smashes OpenAI's GPT-4 |url=https://www.businessinsider.com/google-gemini-ai-model-smashes-gpt4-says-semianalysis-2023-8 |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20230829164940/https://www.businessinsider.com/google-gemini-ai-model-smashes-gpt4-says-semianalysis-2023-8 |archive-date=August 29, 2023 |access-date=September 7, 2023 |website=Business Insider}}
{{Cite magazine |last=Harrison |first=Maggie |date=August 31, 2023 |title=OpenAI Rages at Report that Google's New AI Crushes GPT-4 |url=https://futurism.com/the-byte/openai-report-google-ai-gpt-4 |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20230831232156/https://futurism.com/the-byte/openai-report-google-ai-gpt-4 |archive-date=August 31, 2023 |access-date=September 7, 2023 |magazine=Futurism}}
{{Cite web |last=Langley |first=Hugh |date=October 12, 2023 |title=Google VP teases Gemini's multimodal future: 'I've seen some pretty amazing things.' |url=https://www.businessinsider.com/google-sissie-hsiao-teases-gemini-ai-model-pretty-amazing-things-2023-10 |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20231012134647/https://www.businessinsider.com/google-sissie-hsiao-teases-gemini-ai-model-pretty-amazing-things-2023-10 |archive-date=October 12, 2023 |access-date=October 15, 2023 |website=Business Insider}}
{{Cite web |last=Madden |first=Michael G. |date=December 15, 2023 |title=Google's Gemini: is the new AI model really better than ChatGPT? |url=https://theconversation.com/googles-gemini-is-the-new-ai-model-really-better-than-chatgpt-219526 |url-status=live |archive-url=https://web.archive.org/web/20231215140239/https://theconversation.com/googles-gemini-is-the-new-ai-model-really-better-than-chatgpt-219526 |archive-date=December 15, 2023 |access-date=February 4, 2024 |website=The Conversation}}
{{Cite magazine |last=Sullivan |first=Mark |date=December 6, 2023 |title=Gemini-powered Google phones may make Siri even more of an Achilles' heel for the iPhone |url=https://www.fastcompany.com/90994058/gemini-ai-android-google-apple |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20231207063441/https://www.fastcompany.com/90994058/gemini-ai-android-google-apple |archive-date=December 7, 2023 |access-date=December 7, 2023 |magazine=Fast Company}}
{{Cite news |last=Soni |first=Aditya |date=December 7, 2023 |title=Alphabet soars as Wall Street cheers arrival of AI model Gemini |url=https://www.reuters.com/technology/alphabet-soars-wall-street-cheers-arrival-ai-model-gemini-2023-12-07/ |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20231207203954/https://www.reuters.com/technology/alphabet-soars-wall-street-cheers-arrival-ai-model-gemini-2023-12-07/ |archive-date=December 7, 2023 |access-date=February 4, 2024 |publisher=Reuters}}
{{Cite web |title=Gemini, Google's long-awaited answer to ChatGPT, is an overnight hit |url=https://www.marketwatch.com/story/gemini-googles-long-awaited-answer-to-chatgpt-is-an-overnight-hit-a523a817 |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20231207163911/https://www.marketwatch.com/story/gemini-googles-long-awaited-answer-to-chatgpt-is-an-overnight-hit-a523a817 |archive-date=December 7, 2023 |access-date=February 4, 2024 |website=MarketWatch}}
}}
Further reading
{{refbegin}}
- {{Cite magazine |last=Honan |first=Matt |date=December 6, 2023 |title=Google CEO Sundar Pichai on Gemini and the coming age of AI |url=https://www.technologyreview.com/2023/12/06/1084539/google-ceo-sundar-pichai-on-gemini-and-the-coming-age-of-ai/ |url-access=limited |url-status=live |archive-url=https://web.archive.org/web/20231206233355/https://www.technologyreview.com/2023/12/06/1084539/google-ceo-sundar-pichai-on-gemini-and-the-coming-age-of-ai/ |archive-date=December 6, 2023 |access-date=December 6, 2023 |magazine=MIT Technology Review}}
- {{Cite web |title=Gemma explained: An overview of Gemma model family architectures |url=https://developers.googleblog.com/en/gemma-explained-overview-gemma-model-family-architectures/ |access-date=August 15, 2024 |website=Google Developers Blog |language=en}}
{{refend}}
External links
- {{Official website}}
- [https://blog.google/technology/ai/google-gemini-ai/ Press release] via The Keyword
- White paper for [https://storage.googleapis.com/deepmind-media/gemini/gemini_1_report.pdf 1.0] and [https://storage.googleapis.com/deepmind-media/gemini/gemini_v1_5_report.pdf 1.5]
{{Google AI}}
{{Google LLC}}
{{Generative AI}}
{{Artificial intelligence navbox}}