Real-time surveillance system analysis on the Hadoop image processing interface

Traditional security systems try to prevent crime as far as possible, but real-time surveillance offers the opportunity to stop crimes before they occur. Implementing conventional security measures is also time-consuming and usually requires human intervention. An autonomous security system makes security economically viable and fast. By applying face, object, and behavioral recognition to the video feed provided by CCTV cameras, various criminal activities can be detected and authorities can be assisted in taking action. Covering a large number of CCTV cameras distributed over a large area generates a lot of data and requires enormous processing power. We therefore use the Hadoop image processing interface to distribute the processing task over a cloud network, which also improves communication between authorities in different areas.

Nowadays, in almost all places, security systems work rather passively: CCTV cameras record video and transmit it to a human supervisor. Such a system is prone to human error, and the rapid actions needed in many situations to stop an adversary are not possible. Everything works locally, with limited cloud capability; such a static system is obsolete and is itself at risk of being misused or hacked. We therefore propose a modern, dynamic system that works in the cloud, provides powerful real-time surveillance, and is likely cheaper than existing systems. Footage from multiple CCTV cameras arrives at a local station, where the video feeds are passed through preliminary object recognition algorithms and undergo a selection process. After this initial step, the video feed is divided into small units, each containing multiple images.
These images are mapped to their respective nodes for processing, and the results are reduced to obtain the final output.

The authors in [1] proposed a scalable video processing system on a Hadoop network. The system uses FFmpeg for video encoding and OpenCV for image processing, and they also demonstrate a face tracking system that groups together multiple images of the same person. The captured video feed is stored in the Hadoop Distributed File System (HDFS). However, the system lacks adequate security mechanisms, and storing such a huge amount of data in HDFS is not cost-effective. The system in [2] enabled Hadoop clusters to use Nvidia CUDA, improving server performance with the parallel processing capability of the CUDA cores in Nvidia GPUs; they demonstrated an AdaBoost-based face detection algorithm on the Hadoop network. While equipping clusters with Nvidia GPUs increases their cost, CUDA cores can provide large improvements in image processing jobs. We, however, aim to implement the system on existing hardware to minimize costs. The authors in [3] used the Hadoop framework to process astronomical images, implementing a scalable image processing pipeline that performed cloud computation over astronomical images. They used an existing C++ library, accessed from Hadoop through JNI. While successful, many optimizations were not made, and Hadoop was not properly integrated with the C++ library. A survey in [4] describes the security services provided in the Hadoop framework: the services the framework needs, such as authentication, access control, and integrity, are discussed, along with what Hadoop provides and what it does not. Hadoop has multiple security flaws that can be exploited to mount a replay attack or to view files stored on an HDFS node, so, according to these authors, good integrity-checking and authorization-checking methods are needed.
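The map-and-reduce flow described above can be sketched in miniature. The following is a minimal, self-contained Python simulation, not actual Hadoop code: frame batches play the role of the small video units, a stand-in `detect_objects` replaces the real recognition algorithm (names and data here are invented for illustration), the map step emits (label, count) pairs per image, and the reduce step aggregates counts per label.

```python
from collections import defaultdict

# Hypothetical stand-in for a real detector (e.g. OpenCV); here an "image"
# is just a dict carrying precomputed labels for illustration.
def detect_objects(image):
    return image["labels"]

def map_phase(batch):
    """Map step: one node processes one batch of images, emitting (label, 1) pairs."""
    for image in batch:
        for label in detect_objects(image):
            yield (label, 1)

def reduce_phase(pairs):
    """Reduce step: aggregate per-label counts emitted by all nodes."""
    totals = defaultdict(int)
    for label, count in pairs:
        totals[label] += count
    return dict(totals)

# A video feed divided into small units ("batches") of multiple images,
# as described above; each batch would be mapped to a different node.
batches = [
    [{"labels": ["face", "car"]}, {"labels": ["face"]}],
    [{"labels": ["car"]}, {"labels": ["face", "bag"]}],
]

pairs = [p for batch in batches for p in map_phase(batch)]
result = reduce_phase(pairs)
print(result)  # {'face': 3, 'car': 2, 'bag': 1}
```

In a real deployment the map tasks would run on separate Hadoop nodes and the shuffle/reduce would be handled by the framework; the sketch only shows the data flow.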
The object recognition approach in [5] provides an efficient way to recognize a three-dimensional object from a two-dimensional image. In the stated methodology, some characteristics of the object remain constant regardless of the viewing angle; extracting only these features saves a huge amount of resources compared with older object recognition systems that reconstruct entire 3D objects using depth analysis. As illustrated in [6], the original eigenfaces method fails to classify faces accurately when the data come from different angles and light sources, as in our problem. We therefore use the concept of TensorFaces: a vector space of images captured at multiple angles is decomposed with N-mode SVD for multilinear analysis to recognize faces. Behavior recognition can be performed as indicated in [7]: features are extracted from the video feed and applied to feature descriptors, model events, and event/behavior models. The output is mapped from the feature space to the behavior label space, where a classifier labels it as normal or abnormal. The system proposed in [8] presents a cost-effective, reliable, efficient, and scalable surveillance system in which data is stored using a P2P scheme, avoiding the load on a single data center by dividing it across multiple peer nodes. It also provides an authentication module between peer nodes and directory nodes, but it has no method for computer vision or integrity checking. The work in [9] offers an open-source Hadoop video processing interface (HVPI) that integrates C/C++ applications into the Hadoop framework, providing a read/write interface that lets developers store, retrieve, and analyze video data from HDFS. Relying on the security available in the Hadoop framework for video data can give poor performance, and security is not addressed in the HVPI.
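The final classification step from [7], mapping a vector from feature space into the behavior label space, can be illustrated with a toy nearest-centroid classifier. Everything below (the feature meanings, sample values, and labels) is invented for illustration; a real system would use learned event/behavior models rather than hand-picked centroids.

```python
import math

def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(feature_vector, centroids):
    """Map a feature vector to the behavior label of the nearest centroid."""
    return min(centroids, key=lambda label: distance(feature_vector, centroids[label]))

# Invented training features, e.g. (mean speed, direction changes per second).
normal_samples = [[1.0, 0.1], [1.2, 0.2], [0.9, 0.15]]
abnormal_samples = [[4.0, 1.5], [3.5, 2.0]]

centroids = {
    "normal": centroid(normal_samples),
    "abnormal": centroid(abnormal_samples),
}

print(classify([1.1, 0.1], centroids))   # normal
print(classify([3.8, 1.8], centroids))   # abnormal
```

The point of the sketch is only the mapping itself: feature space in, behavior label out, with "normal"/"abnormal" as the label space.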
TensorFlow, the machine learning system described in [10], provides tools to implement many training algorithms and optimizations across many devices at scale. It uses dataflow graphs to represent computation state and the operations that change that state, and it can work well with the Hadoop framework to distribute processing across existing hardware.

To provide real-time recognition, various preprocessing steps are performed to improve the performance of Hadoop and the neural network. The whole process can be divided into the following steps. Video collection: the video feed coming from a capture device such as a CCTV camera is converted into a HIPI Image Bundle (HIB) object using tools such as hibImport and hibInfo. Next, the HIB is preprocessed with a Culler class and a video encoder such as FFmpeg. At this stage, various user-defined conditions can be applied, such as spatial resolution or image metadata criteria. Filters such as a grayscale filter improve various face detection algorithms. The..
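The culling and grayscale-filter steps above can be sketched without HIPI itself. The snippet below is a plain-Python stand-in, not the actual HIPI API: images are simple dicts, the resolution check mimics the kind of user-defined condition a Culler applies, and the grayscale conversion uses the standard ITU-R BT.601 luma weights; the bundle contents and threshold values are invented for illustration.

```python
MIN_WIDTH, MIN_HEIGHT = 640, 480  # assumed spatial-resolution criterion

def cull(image):
    """Reject images failing the user-defined resolution condition,
    in the spirit of HIPI's culling stage."""
    return image["width"] >= MIN_WIDTH and image["height"] >= MIN_HEIGHT

def to_grayscale(pixels):
    """Convert (r, g, b) pixels to single luma values (ITU-R BT.601 weights)."""
    return [round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in pixels]

bundle = [  # a toy stand-in for a HIB: a list of image records
    {"width": 1920, "height": 1080, "pixels": [(255, 0, 0), (0, 255, 0)]},
    {"width": 320, "height": 240, "pixels": [(0, 0, 255)]},  # too small, culled
]

kept = [img for img in bundle if cull(img)]
gray = [to_grayscale(img["pixels"]) for img in kept]
print(len(kept), gray)  # 1 [[76, 150]]
```

In the real pipeline the culling condition would run inside the HIPI job so that rejected images are never shipped to the map tasks at all.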