Google is using a new technique to hunt down potentially malicious apps on the Play Store before they have any chance to endanger devices.
The method is known as peer group analysis, and the company detailed its efforts to implement it earlier this week. Peer group analysis uses machine learning to group similar apps, then examines them for any standout differences. Here, the AI is essentially looking for anything unusual for the app category, or anything that could potentially be used for malicious purposes.
A good example of such potentially malicious intent is a currency conversion tool asking for permission to access your contacts, or a calculator trying to access your location. If the AI picks up anything unusual, a Google security engineer analyzes the suspicious app.
The technology inspects several signals to determine how to group apps. App name and category carry the most weight, followed by attributes like app description and app size. The aim is to create groups of comparable apps, so the AI can more easily infer an app's function and the permissions it needs to do its job. If an app requests permissions that are out of character for its category, the AI flags it.
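To make the idea concrete, here is a minimal sketch of that flagging logic in Python. This is an illustration only, not Google's actual implementation: the app data, the choice of category as the sole grouping key, and the rarity threshold are all assumptions made for the example.

```python
from collections import Counter

# Hypothetical sample data for illustration; real peer group analysis
# would also use app name, description, size, etc. to form groups.
APPS = [
    {"name": "QuickConvert", "category": "finance", "permissions": {"internet"}},
    {"name": "RateWatch", "category": "finance", "permissions": {"internet"}},
    {"name": "FX Today", "category": "finance", "permissions": {"internet", "contacts"}},
]

def flag_unusual_permissions(apps, threshold=0.5):
    """Flag (app, permission) pairs where fewer than `threshold` of the
    app's peers in the same category request that permission."""
    # Group apps into peer groups by category.
    groups = {}
    for app in apps:
        groups.setdefault(app["category"], []).append(app)

    flagged = []
    for peers in groups.values():
        # Count how often each permission appears within the peer group.
        counts = Counter(p for app in peers for p in app["permissions"])
        for app in peers:
            for perm in app["permissions"]:
                if counts[perm] / len(peers) < threshold:
                    flagged.append((app["name"], perm))
    return flagged

print(flag_unusual_permissions(APPS))  # → [('FX Today', 'contacts')]
```

In this toy example, the contacts permission is flagged because only one of the three finance apps requests it, while the internet permission is common to the whole peer group and passes unnoticed.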
Machine learning is one of Google’s biggest specialties, and this is not the first time the company has used it to protect its customers. For example, the Verify Apps function built into Google Play Services runs on any phone with Google’s app suite installed. If permitted, it scans apps for potentially malicious code or behavior.
However, that protection can be bypassed when users sideload third-party apps, so Google can only fully protect customers who download apps exclusively from the Google Play Store. Peer group analysis simply makes Google better at doing this.