Data confidentiality is a subtle issue, and people will always need tools to protect their personal information. This concern motivated engineers at the University of Toronto, who developed a special filter for personal photos. With it, you can post your pictures on the Internet freely, and no AI will recognize that it is you.
The idea is simple: individual pixels in an ordinary picture are altered so that AI face-recognition algorithms fail. The noise is invisible to the human eye, but the machine can no longer match the two photographs. This does not guarantee complete anonymity, but it greatly reduces the chance of being spotted by a scraper bot on a social network and becoming the target of a personal attack.
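The key constraint is that the perturbation must stay below the threshold of human perception while still changing what the machine sees. A minimal sketch of that constraint, assuming 8-bit pixel intensities and a hypothetical bound `eps` on how much any pixel may shift:

```python
import numpy as np

def perturb(image, noise, eps=2.0):
    """Apply a perturbation clipped to +/-eps intensity levels,
    then clip the result back to the valid 0-255 pixel range."""
    delta = np.clip(noise, -eps, eps)
    return np.clip(image + delta, 0.0, 255.0)

rng = np.random.default_rng(0)
# A stand-in 64x64 RGB "photo" with random pixel values.
img = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float64)
adv = perturb(img, rng.normal(0.0, 5.0, size=img.shape), eps=2.0)

# No pixel moves by more than eps out of 255 levels, which is
# far below what a human viewer can notice.
print(np.max(np.abs(adv - img)))
```

In a real attack the `noise` array is not random but chosen specifically to push the recognizer toward a wrong answer; the clipping step is what keeps the edit invisible.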
Rather than hand-crafting the deception, the Canadian team pitted two powerful neural networks against each other. The first was given several hundred photographs of the same people and tasked with recognizing and sorting them by identity. The second then retouched the pictures pixel by pixel, trying to confuse the first, and the images were sent back for re-recognition. The developers watched which changes produced the most misidentifications, and those changes formed the core of the filter.
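The article describes a learned attacker network, but the underlying principle can be illustrated with a much simpler gradient-sign step: nudge every pixel slightly in whichever direction lowers the recognizer's confidence. The sketch below is an assumption-laden toy, with a linear scorer standing in for the real deep face-recognition network so the gradient has a closed form:

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=100)   # toy "recognizer" weights (hypothetical)
x = rng.normal(size=100)   # flattened image features of the target face

def score(v):
    """Recognition confidence of the toy model; higher = recognized."""
    return float(w @ v)

# Gradient-sign step: for this linear scorer the gradient of the
# score with respect to the input is just w, so stepping each
# "pixel" by eps against sign(w) is guaranteed to lower the score.
eps = 0.05
x_adv = x - eps * np.sign(w)

print(score(x))      # original confidence
print(score(x_adv))  # strictly lower after the attack
```

A learned attacker network, as in the Toronto setup, effectively discovers such confidence-lowering directions automatically instead of reading them off a known gradient.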
In its current version, the filter reduces the likelihood of an AI identifying a person from a photo to about 1 in 200, and attributes such as ethnicity, age, and emotion are likewise determined only with significant error. The authors promise to release the technology as a standalone application for general use in the near future.