After 9/11, surveillance and security cameras became prevalent in cities. Machine learning and related technologies now make it possible to automate the analysis of this footage, even at very large scale. Government officials can monitor video surveillance to estimate the percentage of people wearing masks, transportation departments can monitor traffic density and flow, and business owners can better understand customer behavior.
Is privacy a concern? While surveillance technology keeps evolving, states have been slow to put in place regulations that protect the privacy of those being watched. Today the common safeguard is to blur faces or cover them with black boxes, but this often undermines the very analysis the data is collected for. Researchers at the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have therefore developed “Privid,” a system that protects the privacy of people filmed by surveillance cameras. Privid lets an analyst submit queries over video data and adds a small amount of statistical noise to the final result, so that individuals in the video cannot be identified from it.
Furthermore, the system satisfies a formal definition of privacy – differential privacy – which permits access to aggregate statistics about private data without revealing any personally identifiable information. Instead of letting analysts view the entire video, the system keeps the raw footage hidden from humans, while delivering query results with accuracy within 79%–99% of a non-private system.
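The core idea behind differential privacy can be illustrated with the classic Laplace mechanism: add noise to a released statistic, scaled to how much one individual can change that statistic. The sketch below is a generic illustration of that mechanism, not Privid's actual implementation; the function names and parameter choices are assumptions for the example.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample zero-mean Laplace noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_release(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a statistic with epsilon-differential privacy using the
    Laplace mechanism: noise scale = sensitivity / epsilon."""
    return true_value + laplace_noise(sensitivity / epsilon)

# Hypothetical example: a camera counted 412 pedestrians in an hour.
# One person can change that count by at most 1, so sensitivity = 1.
noisy_count = dp_release(412.0, sensitivity=1.0, epsilon=0.5)
```

A smaller epsilon means more noise and stronger privacy; the analyst sees only the noisy count, never the footage or the exact value.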
Using the system, the video is broken up into small segments, and each segment is processed individually. The results from the segments are then merged, and noise is added to the aggregate before it is released. According to news.mit.edu, analysts can also supply their own neural networks, which lets them define additional applications for the system.
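The segment-and-merge workflow described above can be sketched as follows. This is a toy reconstruction under stated assumptions, not Privid's code: `analyze_chunk` stands in for whatever analysis the analyst supplies (e.g. a neural network counting pedestrians per frame), and the noise calibration is the generic Laplace mechanism.

```python
import math
import random
from typing import Callable, Sequence

def laplace_noise(scale: float) -> float:
    """Sample zero-mean Laplace noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_video_query(
    frames: Sequence,
    chunk_size: int,
    analyze_chunk: Callable[[Sequence], float],  # analyst-supplied; hypothetical
    sensitivity: float,
    epsilon: float,
) -> float:
    """Split the video into fixed-size chunks, run the analysis on each
    chunk in isolation, merge (sum) the per-chunk results, and add
    Laplace noise to the aggregate before releasing it."""
    chunks = [frames[i:i + chunk_size] for i in range(0, len(frames), chunk_size)]
    per_chunk = [analyze_chunk(c) for c in chunks]  # each segment processed individually
    total = sum(per_chunk)                          # merge the segment results
    return total + laplace_noise(sensitivity / epsilon)  # noise on the final answer

# Toy usage: integers stand in for per-frame pedestrian counts.
frames = [1, 0, 2, 1, 3, 0, 1, 1]
result = private_video_query(frames, chunk_size=4, analyze_chunk=sum,
                             sensitivity=1.0, epsilon=1.0)
```

Processing chunks independently bounds how much any one person's brief appearance can influence the final number, which is what makes the calibrated noise sufficient to hide them.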