GPT-4 and Gemini Misinterpret Ring Camera Footage

With Ring cameras, now available for $149.99 on Amazon, being increasingly used by homeowners for smart security, AI is set to become even more involved in protecting homes. But a new study raises concerns about whether future AI systems might be too quick to call the police, even when nothing wrong is going on. Researchers from MIT and Penn State examined 928 public Ring camera videos to determine how GPT-4, Claude, and Gemini decided whether to recommend calling the police.

GPT-4 and Gemini Misinterpret Ring Camera Footage

The researchers found that 39 percent of the videos contained actual criminal activity, yet the AI models did not always identify it. In the remaining cases, the models either said no crime took place or gave ambiguous responses; even so, they still recommended police involvement in some of those cases.

Misinterpretation by the AI models

One of the study's most important observations was that the AI models behaved differently across neighborhoods. Even though the models were given no specific information about the areas, they were still more likely to recommend reporting to the police in neighborhoods with larger minority populations.

In these areas, Gemini suggested police intervention in 65 percent of cases where a crime took place, compared with 51.2 percent in mostly white areas. The study also found that 11.9 percent of the police interventions recommended by GPT-4 occurred when no criminal activity was annotated in the footage.
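To illustrate how disparity figures like these can be tallied from annotated footage, here is a minimal Python sketch. The records, group labels, and values below are hypothetical stand-ins for illustration only; they are not the study's actual data or methodology.

```python
from collections import defaultdict

# Each record pairs a (hypothetical) neighborhood group with whether
# a model recommended calling the police for that video.
records = [
    ("majority-minority", True),
    ("majority-minority", True),
    ("majority-minority", False),
    ("majority-white", True),
    ("majority-white", False),
    ("majority-white", False),
]

def recommendation_rates(records):
    """Return the fraction of videos per group where police were recommended."""
    totals = defaultdict(int)
    recommended = defaultdict(int)
    for group, rec in records:
        totals[group] += 1
        if rec:
            recommended[group] += 1
    return {g: recommended[g] / totals[g] for g in totals}

rates = recommendation_rates(records)
```

With this toy data, the majority-minority group's rate (2 of 3) exceeds the majority-white group's (1 of 3), mirroring the kind of gap the study reports.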


Amazon has also been working on other AI-enhanced features for its Ring systems, such as facial recognition, emotional analysis, and behavior detection, according to recent patent filings. This means that, with the help of AI, future home security systems may be able to recognize more suspicious activities or persons than they can today.

For people with Ring cameras, there is no reason to panic at the moment. For now, Ring cameras have only basic AI functions, such as motion detection, and do not make such decisions independently.

Advanced AI models such as GPT-4 and Claude did not operate within the cameras themselves; the researchers used them to analyze the Ring footage. The bottom line is that future AI updates could provide a higher level of home surveillance, but with that comes the possibility of mistakes that will need to be corrected before these features are integrated into future Ring cameras.

Reported by Aisha Singh

Aisha Singh plays a multifaceted role at AyuTechno, where she is responsible for drafting, publishing, and editing articles. As a dedicated researcher, she meticulously analyzes and verifies content to ensure its accuracy and relevance. Aisha not only writes insightful articles for the website but also conducts thorough searches to enrich the content. Additionally, she manages AyuTechno’s social media accounts, enhancing the platform’s online presence. Aisha is deeply engaged with AI tools such as ChatGPT, Meta AI, and Gemini, which she uses daily to stay at the forefront of technological advancements. She also analyzes emerging AI features in devices, striving to present them in a user-friendly and accessible manner. Her goal is to simplify the understanding and application of AI technologies, making them more approachable for users and ensuring they can seamlessly integrate these innovations into their lives.
