Gemini AI vs ChatGPT: The Rat Race For AI’s Future


  • OpenAI launched ChatGPT in November 2022.
  • Google unveiled its own AI chatbot, Bard, and later introduced Gemini.
  • Gemini has proven to outperform ChatGPT in most benchmark tests.

When OpenAI launched ChatGPT in November 2022, it opened up a whole new world for us. The AI chatbot soon went viral, and people across the globe put it to all sorts of uses: some used it to create music, some to compose poetry, while others used it to conduct research, indulge in fun Q&A sessions, and so on.

Gemini Vs ChatGPT

And just a couple of days back, we all got to experience Gemini, Google's most powerful large language model yet. With multimodal abilities, Gemini promises to take the world by storm and let us experience the power of AI like never before. Google announced that Bard is now powered by Gemini, with more updates to be rolled out soon.

Now, there is no doubt that the future is going to be all about AI, and the fight for dominance in the area has already begun. While ChatGPT is credited with being the first AI chatbot to converse in a human-like manner, Google is widely regarded as the "800-pound gorilla" of the online search space.

Benchmark Tests

Gemini was first previewed in May, during Google's I/O event, and left tech enthusiasts intrigued. One of the things that generated excitement was the LLM's multimodal capabilities, and people couldn't wait to try out the new AI in town. Just last week, Google announced the launch of Gemini, calling it the "most capable and general model" the company has ever built. In a blog post, Google also shared that Gemini proved to be better than ChatGPT in most benchmark tests. Three variants of Gemini were launched: Ultra, Pro, and Nano.

Gemini was benchmarked in General, Reasoning, Math, and Code categories to evaluate how well it handles text. To test the LLM's multimodal capabilities, image, video, and audio benchmarks were also conducted. Gemini beat ChatGPT in all tests except for "HellaSwag", which is described as "common sense reasoning for everyday tasks." The company also revealed that Gemini Ultra, with a score of 90 percent, is the first model to "outperform human experts on MMLU (massive multitask language understanding), which uses a combination of 57 subjects such as math, physics, history, law, medicine and ethics for testing both world knowledge and problem-solving abilities." Google had also released a video showing Gemini's multimodal capabilities. However, the company later admitted that not everything shown in the video was 100 percent accurate, which left a lot of people disappointed.


Source: Indiatoday