During its Google I/O 2024 presentation, alongside new artificial intelligence technologies, the software giant inadvertently demonstrated the main drawback of modern AI: errors and incorrect, even harmful, advice.
Google's "Search in the Gemini era" presentation showed off a new feature: video search. The example was a video of a film camera whose film-advance lever was stuck, accompanied by the question: "Why doesn't the lever move all the way?"
In response, the AI offered a rather "interesting" solution that could destroy the photos already taken: the language model suggested "opening" the camera and exposing the film. Anyone who has used a film camera knows this is a terrible idea. Opening the camera mid-roll lets light hit the film, which can ruin some or all of the frames. It is just about the worst advice possible in this situation.
It is also worth noting that this is not the first time Google's language models have been caught distorting facts. Last year, the Bard chatbot claimed that the James Webb Space Telescope was the first to photograph a planet outside the Solar System (it was not). Even earlier, Google employees had called Bard a "pathological liar."
More recently, Google's new Gemini AI was "caught" distorting historical facts: in generated images, it refused to depict white people in contexts where they clearly belonged.
Source: The Verge