In this way, the Japanese police hope to prevent serious crimes. Initially, the new technology will be used only to protect high-profile public figures. The idea emerged a year after the assassination of former Prime Minister Shinzo Abe. The police are concerned about further attacks by “lone offenders” and want AI to help identify such individuals in advance.
Artificial intelligence against lone offenders
The AI-powered security monitoring system being tested will deliberately avoid facial recognition technology. Instead, it will rely on machine learning to detect three types of patterns: AI-equipped cameras will flag suspicious behavior, objects that could pose a danger to people nearby, and attempted intrusions into restricted areas.
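The reports do not describe the system’s internals, so the sketch below is only a rough illustration of how detections might be routed into the three announced categories before reaching a human operator. All class names, fields, and thresholds are hypothetical, not the police system’s actual design.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ThreatType(Enum):
    SUSPICIOUS_BEHAVIOR = auto()   # e.g. nervous pacing, repeatedly looking around
    DANGEROUS_OBJECT = auto()      # e.g. an unattended bag or weapon-like item
    AREA_INTRUSION = auto()        # entering a restricted zone

@dataclass
class Detection:
    camera_id: str
    threat: ThreatType
    confidence: float              # model score in [0, 1]

def should_alert(detection: Detection, threshold: float = 0.8) -> bool:
    """Forward a detection to human operators only above a confidence threshold."""
    return detection.confidence >= threshold

# Example: one hypothetical detection from a single camera.
d = Detection(camera_id="gate-03", threat=ThreatType.DANGEROUS_OBJECT, confidence=0.91)
if should_alert(d):
    print(f"[ALERT] {d.threat.name} at camera {d.camera_id} (score {d.confidence:.2f})")
```

The point of such a filter is that the cameras would not act on their own: anything above the threshold would simply be surfaced to security staff for review.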
The test launch of the surveillance and security system in Japan is scheduled to take place by March 2024. According to some security experts, the new AI-based cameras will improve vigilance against terrorist threats and make police work significantly more effective. Not everyone shares this view, however: some are wary of deploying the technology because the algorithms behind it are opaque.
Following last year’s unexpected assassination of the former prime minister and this year’s failed attack on a newly elected official, the Japanese police are doing everything they can to prevent high-profile crimes, which are often committed by so-called “lone offenders.” In Japan, such people are also referred to as “otaku,” a term that loosely means “reclusive.”
Both terms refer to lonely young people who make up a disaffected part of Japanese society. These individuals are generally not involved in crime, but some of them occasionally turn to violence.
The system is currently in the testing phase.
Proponents of this new approach to combating lone offenders say that the cameras will be able to track suspicious individuals, while the AI will be trained to recognize genuine threats. For example, people who frequently and nervously look around, or who show other signs of a “guilty mind,” may come under scrutiny.
According to experts, AI surveillance systems will be able to pick out potentially threatening individuals in crowds and other distracting environments and to spot suspicious objects. Such tasks are difficult even for experienced security personnel, so automating them would significantly ease the specialists’ workload.
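Purely as an illustration of the kind of cue aggregation described here, and not the police system’s actual logic, one could imagine combining simple behavioral signals into a single score that decides whether an operator is notified. The cues, weights, and cutoff below are invented.

```python
# Hypothetical behavioral cues extracted per tracked person by upstream video analysis.
cues = {
    "head_turns_per_minute": 18,   # frequent looking around
    "loiter_seconds": 540,         # time spent lingering near the venue
    "restricted_zone_entries": 0,
}

def suspicion_score(c: dict) -> float:
    """Combine normalized cues into one score; weights are purely illustrative."""
    score = 0.0
    score += min(c["head_turns_per_minute"] / 20, 1.0) * 0.4
    score += min(c["loiter_seconds"] / 600, 1.0) * 0.4
    score += min(c["restricted_zone_entries"], 1) * 0.2
    return score

if suspicion_score(cues) > 0.6:
    print("Flag for a human security officer to review")
```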
For now, these technologies for detecting potential offenders are still being tested. Experts note that at this stage it is crucial to thoroughly assess the accuracy of the automated system before deciding whether to deploy it officially.
Other similar technologies in the field of security
It should be noted that the Japanese are not pioneers in this application of artificial intelligence. According to security expert Isao Itabashi, as of 2019 such technologies were being researched in 176 countries around the world and were already in use in 52 of them.
Moreover, Japan introduced a similar AI-based system on its railways last year to detect suspicious activity. Japan is also home to the startup Vaak, which developed software that identifies potential thieves by their movements and gestures.
There is speculation that this software forms the basis of the new technology used by the Japanese police, although there is no official confirmation of this.
As the Daily Mail notes, Vaak’s AI is programmed to recognize suspicious activity in security footage. According to its developers, the system can distinguish between normal and potentially criminal behavior. When it detects something suspicious in the video feed, it reports the potential threat through a dedicated app.
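In essence this is a classify-then-notify loop. As a loose sketch only, with the payload fields and threshold invented rather than taken from Vaak’s actual product, the notification step might look something like this:

```python
import json

def build_alert(camera_id: str, label: str, score: float) -> str:
    """Build the JSON alert that would be pushed to the companion app (fields are invented)."""
    return json.dumps({"camera": camera_id, "label": label, "score": round(score, 2)})

score = 0.87          # hypothetical model score for one video clip
THRESHOLD = 0.8       # only high-confidence detections reach a human
if score > THRESHOLD:
    alert = build_alert("store-cam-2", "possible_shoplifting", score)
    print("push to app:", alert)   # a real system would send this to the app's backend
```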
However, researchers warn that Vaak’s theft-detection system may carry racial and other biases. More broadly, a 2018 study found that many popular AI systems exhibit sexist and racist tendencies, and researchers urge developers to pay attention to this flaw in artificial intelligence.
Predictive AI systems in the service of security
Artificial intelligence can use data from past incident reports to predict where and when new crimes are likely to occur. The database used by Palantir Technologies draws on a wide range of information, from geography to criminal records and social media posts. Based on it, the AI identifies individuals or locations that may become involved in a crime.
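Palantir does not publish its models, so the following is only a toy illustration of the general idea of place-based prediction, not the company’s method: count past incidents per map grid cell and flag the cells with the most history.

```python
from collections import Counter

# Hypothetical past incident reports, each already mapped to a coarse grid cell.
past_incidents = [
    {"cell": (12, 7), "type": "theft"},
    {"cell": (12, 7), "type": "assault"},
    {"cell": (3, 14), "type": "theft"},
    {"cell": (12, 7), "type": "vandalism"},
    {"cell": (8, 2),  "type": "theft"},
]

counts = Counter(report["cell"] for report in past_incidents)

# Flag the busiest cells as "hotspots" for extra patrols; the cutoff is arbitrary.
hotspots = [cell for cell, n in counts.most_common() if n >= 2]
print("predicted hotspots:", hotspots)
```

Even this toy version shows the weakness critics point to: the prediction simply echoes where past reports were filed, and therefore where police were already looking.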
There are also many other predictive systems of this kind, which differ significantly from one another. For example, the Chicago police have used a so-called “heat list,” generated by an algorithm that identifies the individuals most likely to be involved in shootings.
Many experts, however, point to problems with this system. In particular, it has been shown that data on past crimes is not a reliable indicator of future criminal activity.