Google uses Gemini to make Robots smarter | Turtles AI

Google uses Gemini to make Robots smarter
Isabella V

 GEMINI AI TURNS ROBOTS INTO MORE AUTONOMOUS AND INTELLIGENT ASSISTANTS


DeepMind is advancing robotics by integrating its Gemini AI into robotic systems. The work could greatly improve robots’ autonomy, efficiency, and ability to interact with their surroundings, with possible applications in areas such as home care and logistics.

 Gemini 1.5 Pro: A Leap Forward in Robotics

DeepMind’s robotics team recently published a research paper outlining how the Gemini 1.5 Pro language model makes interaction between users and its RT-2 robots more intuitive. Thanks to its long context window, Gemini can process large volumes of information, enabling robots to understand and respond to instructions given in natural language.

The learning process begins with a video tour of a specific area, such as a home or office. The robot "watches" this video and uses Gemini 1.5 Pro to memorize and analyze the environment, becoming capable of executing commands based on what it has learned. For example, if a user shows the robot a phone and asks "where can I charge this?", the robot can point to a nearby electrical outlet.

 Promising Results

According to DeepMind, Gemini-equipped robots have achieved a 90 percent success rate on more than 50 user instructions in an operational area of more than 800 square meters. In addition, there is "preliminary evidence" suggesting that Gemini 1.5 Pro enables robots to plan complex tasks, not limited to simple navigation. A practical example is a user’s request to check the availability of Coca-Cola in a refrigerator: the robot, guided by Gemini, knows to check the refrigerator and report the outcome to the user.

 Competition and the Future of Robotics

The video demonstrations presented by Google show impressive capabilities, although each instruction takes between 10 and 30 seconds to process. The technology may still be a few years away from everyday use in our homes, but current advances suggest that more capable robots could soon help us find lost keys or wallets.

Other tech giants are following similar paths. MIT has developed a navigation method that converts visual representations into language, while Microsoft is working on a new API that will allow ChatGPT to be used to control robots and drones. These developments show that the potential of artificial intelligence technologies is still far from being fully explored.

 Conclusions

Although Google has not provided specific details on how Gemini will be used in its robots, it is likely that AI will be employed to improve crucial aspects such as sensory perception, motion planning, decision making, and interaction with the environment. DeepMind’s research and the innovations of other institutions show that we are just beginning to understand the possibilities of AI in robotics, foreshadowing a future in which robots will be an integral part of our daily lives.