With the rise of big data in the Internet industry, cloud computing has matured and spread rapidly, and the applications built on it have grown ever richer; cloud computing is widely described as the third information technology revolution. How to locate multi-dimensional data quickly and effectively while providing mass storage, and how to uncover the potential correlations hidden across the many kinds of isolated ("island") data, has long been a problem we are committed to solving. This article discusses the core technologies of cloud computing for the security industry.
Large systems and big data are the most prominent features of cloud computing, and the security industry is a very typical big data application scenario. The checkpoint (bayonet) monitoring systems and video surveillance systems of the security industry consist of large numbers of devices, including front-end capture equipment, back-end platforms, and clusters of cloud computing servers, which generate geometrically growing volumes of data every day. As large-scale smart city projects continue to succeed, the security platform as a whole faces huge data volumes, diverse data types, and complex processing logic; data cleansing, data sharing, and data mining have become hard problems, posing great challenges to security vendors.
In the smart city domain this is most visible in intelligent transportation: massive traffic flow records and checkpoint capture images, together with huge volumes of video files, most of it unstructured data. The major users of the security industry need reliable storage for these massive images and videos, high-performance parallel computing, and intelligent data analysis and mining, all of which match the characteristics of cloud computing. While providing mass storage, how to locate multi-dimensional data quickly and effectively, and how to mine the potential multi-dimensional associations among the various kinds of isolated data, has always been a problem we are committed to solving. Cloud computing, big data, and related technologies are steadily penetrating the security industry, and as they mature they will have a revolutionary impact on it.
Large-scale hybrid computing technology
Processing the huge volume of video and image data generated by a monitoring system manually is extremely inefficient. With intelligent video processing algorithms, simple features can be extracted from the video and image data for comparison, or pattern matching can be used to raise alarm events, which improves processing efficiency. However, the amount of data that can be handled this way, the degree to which data can be combined, and the types of data supported remain limited, and cannot keep up with the massive data volumes and growing demands. The purpose of large-scale hybrid computing technology is to provide a unified data processing platform that integrates a variety of intelligent algorithms and computing models to process massive monitoring data comprehensively and extract more valuable data, faster.
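As a minimal sketch of what such a unified platform might look like, the toy code below registers several pluggable "intelligent algorithms" and runs each of them over every incoming record. All class, function, and field names are illustrative assumptions, not taken from any specific product.

```python
# Sketch: a unified processing platform that chains pluggable analysis
# algorithms over monitoring data records (names are hypothetical).
from typing import Callable, Dict, List


class ProcessingPlatform:
    """Runs every registered analysis algorithm over each incoming record."""

    def __init__(self) -> None:
        self._algorithms: Dict[str, Callable[[dict], object]] = {}

    def register(self, name: str, algorithm: Callable[[dict], object]) -> None:
        self._algorithms[name] = algorithm

    def process(self, record: dict) -> List[dict]:
        # Each algorithm turns the raw record into derived, higher-value data.
        return [{"algorithm": name, "result": algo(record)}
                for name, algo in self._algorithms.items()]


# Two toy "intelligent algorithms": a plate matcher and a motion alarm flag.
platform = ProcessingPlatform()
platform.register("plate_match", lambda r: r.get("plate") == "ZHE-A12345")
platform.register("motion_alarm", lambda r: r.get("motion_score", 0) > 0.8)

frame_metadata = {"camera": "cam-001", "plate": "ZHE-A12345", "motion_score": 0.91}
print(platform.process(frame_metadata))
```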
Uniform Resource Management Technology
The main data generated by a monitoring system is video and image data. Once the raw data has been processed, it yields much richer derived data, and the processing methods differ widely. For example, historical video can be processed in the background for later retrieval; license plate and face feature data from checkpoints require real-time watch-list monitoring; and historical checkpoint records must be searchable in real time. These workloads call for different computing frameworks. By introducing a unified resource management platform, different computing frameworks can run in the same resource pool, dramatically increasing resource utilization, and when a particular service needs to take over most of the resources, the platform can still extract the maximum performance from the system.
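The snippet below is a highly simplified sketch of the shared-pool idea: jobs from different frameworks request CPU and memory from one pool, and a grant succeeds only while capacity remains. The job types, sizes, and allocator logic are assumptions for illustration, not a real scheduler API.

```python
# Sketch: one resource pool shared by jobs from different computing frameworks.
from dataclasses import dataclass


@dataclass
class Job:
    framework: str   # e.g. "batch-video-retrieval" or "realtime-watchlist"
    cpus: int
    mem_gb: int


class ResourcePool:
    def __init__(self, total_cpus: int, total_mem_gb: int) -> None:
        self.free_cpus = total_cpus
        self.free_mem_gb = total_mem_gb

    def try_allocate(self, job: Job) -> bool:
        # Grant the job only if the pool still has enough capacity.
        if job.cpus <= self.free_cpus and job.mem_gb <= self.free_mem_gb:
            self.free_cpus -= job.cpus
            self.free_mem_gb -= job.mem_gb
            return True
        return False

    def release(self, job: Job) -> None:
        self.free_cpus += job.cpus
        self.free_mem_gb += job.mem_gb


pool = ResourcePool(total_cpus=64, total_mem_gb=256)
print(pool.try_allocate(Job("batch-video-retrieval", cpus=32, mem_gb=128)))  # True
print(pool.try_allocate(Job("realtime-watchlist", cpus=40, mem_gb=64)))      # False: not enough CPU left
```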
Real-time search technology
Traditional structured data is stored in relational databases: database clusters are built with technologies such as RAC, and queries are accelerated with indexes. At its core, however, this approach rests on row storage and relational operations, and it hits bottlenecks in many respects when faced with massive numbers of records. Real-time search technology introduces distributed databases, columnar storage, in-memory computing, and indexing engines to handle structured data at the ten-billion-record scale, with greatly improved storage capacity, scalability, and retrieval speed. It has significant research value and broad application prospects in video surveillance fields such as intelligent transportation and criminal investigation.
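The toy example below illustrates two of the ingredients named above, columnar layout and an in-memory index: a filter touches only the columns it needs, and a hash index on a hot column replaces a full scan. Real distributed databases and indexing engines are far more involved; the field names here are invented.

```python
# Sketch: columnar, in-memory storage with a hash index on one hot column.
from collections import defaultdict

# Column-oriented layout: one list per field instead of one dict per record.
columns = {
    "plate":  ["A111", "B222", "A111", "C333"],
    "camera": ["cam-1", "cam-2", "cam-3", "cam-1"],
    "ts":     [1000, 1005, 1010, 1020],
}

# Build an in-memory index over the "plate" column once.
plate_index = defaultdict(list)
for row_id, plate in enumerate(columns["plate"]):
    plate_index[plate].append(row_id)

# Query: all passages of plate A111 -- only the needed columns are touched.
hits = plate_index["A111"]
print([(columns["camera"][i], columns["ts"][i]) for i in hits])
```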
Complex Event Processing Technology
As the security industry develops, the business grows ever more complex. In intelligent transportation, for example, there is demand for vehicle data integration, vehicle behaviour analysis, and peer-vehicle (co-travelling vehicle) analysis. These requirements share several characteristics: the query conditions depend on intermediate results, the processing must happen in real time, and the volume of data to be processed is large.
The traditional approach uses a relational database, combining complex SQL statements to query and compare, and it increasingly fails to meet real-time requirements. Complex event processing introduces streaming computation and related technologies so the incoming data can be analysed dynamically in real time, greatly improving processing speed. Data that does not meet the criteria is discarded; only the processing results, or intermediate data that may still be useful, remain in the system, which also reduces storage requirements. The whole analysis runs in memory, so real-time performance is guaranteed.
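As a hedged sketch of this streaming style applied to peer-vehicle analysis, the code below keeps only a short in-memory window of recent passages per checkpoint, emits a "travelling together" event when two plates pass the same checkpoint within that window, and discards everything older. The window length, record format, and event shape are illustrative assumptions.

```python
# Sketch: sliding-window stream processing for peer-vehicle detection.
from collections import defaultdict, deque

WINDOW_SECONDS = 60
recent = defaultdict(deque)   # checkpoint -> deque of (timestamp, plate)


def on_passage(checkpoint: str, timestamp: int, plate: str):
    """Process one streaming passage record; return peer-vehicle events."""
    window = recent[checkpoint]
    # Drop records that fell out of the time window (keeps memory bounded).
    while window and timestamp - window[0][0] > WINDOW_SECONDS:
        window.popleft()
    events = [(plate, other, checkpoint) for ts, other in window if other != plate]
    window.append((timestamp, plate))
    return events


print(on_passage("K-01", 1000, "A111"))   # []
print(on_passage("K-01", 1030, "B222"))   # [('B222', 'A111', 'K-01')]
print(on_passage("K-01", 2000, "C333"))   # [] -- older records already expired
```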
Face retrieval technology application
Face retrieval technology on a single server is relatively mature and is applied to identity recognition, apprehending fugitives, investigating suspicious persons, duplicate ID card checking, and other fields. The face retrieval process can be divided into the following stages: video or image decoding, face detection, feature extraction, and feature comparison. In the first three steps each request corresponds to one computation, so the computational load is relatively controllable; in the last step each request must be compared against face features at the million scale, making it the most computation-intensive stage.
When real-time application requests reach several hundred per second, and each request involves a million face comparisons, the whole system must sustain on the order of a hundred million feature comparisons per second. Computation at this scale cannot be done on a single machine and must be completed by a cluster. The feature database itself is not large, but the number of comparisons is huge, making this a typical compute-intensive cluster; the entire feature database can be loaded into memory and the comparisons performed there.
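The sketch below illustrates the comparison stage only: a feature database held entirely in memory, split into shards as a cluster would split it (simulated here in one process), with each shard returning its best local match for a query. The 128-dimensional features, shard count, and database size are illustrative assumptions; real galleries run to millions of entries.

```python
# Sketch: in-memory, sharded face-feature comparison by cosine similarity.
import numpy as np

DIM, DB_SIZE, SHARDS = 128, 200_000, 4   # real deployments hold millions of features
rng = np.random.default_rng(0)

# Normalised gallery features, split into shards as a cluster would split them.
gallery = rng.normal(size=(DB_SIZE, DIM)).astype(np.float32)
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)
shards = np.array_split(gallery, SHARDS)

query = rng.normal(size=DIM).astype(np.float32)
query /= np.linalg.norm(query)

# Each shard computes similarities and reports its best hit; the dispatcher merges.
best = max(
    ((float(scores.max()), shard_id, int(scores.argmax()))
     for shard_id, scores in ((i, s @ query) for i, s in enumerate(shards))),
    key=lambda t: t[0],
)
print("best cosine similarity %.3f in shard %d at row %d" % best)
```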
Massive video retrieval
Image sensors capture video data and save it to back-end storage. At any time the user can select several cameras in a target region and submit a query to the video retrieval cluster, which quickly searches the video produced by all of the selected cameras according to the characteristics of the target object, finds the video containing those characteristics, and locates the exact points in time. Intelligent analysis technology is mainly used to convert video data into object features and structured data, supporting features such as vehicle colour, license plate, clothing colour, and human faces. On a unified computing resource pool the intelligent algorithms run in parallel, so retrieval efficiency scales linearly.
The structured data can be saved to a database, so subsequent searches can query the structured data directly, greatly improving retrieval efficiency.
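A small sketch of this "search the structured data instead of the video" idea: once analysis has turned video into per-object records, later queries become ordinary indexed table lookups. The schema and values below are invented for illustration.

```python
# Sketch: attribute search over structured detection records (SQLite in memory).
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE detected_objects (
    camera TEXT, ts INTEGER, object_type TEXT, color TEXT, plate TEXT)""")
db.execute("CREATE INDEX idx_attr ON detected_objects(object_type, color)")
db.executemany(
    "INSERT INTO detected_objects VALUES (?, ?, ?, ?, ?)",
    [("cam-1", 1000, "vehicle", "red", "A111"),
     ("cam-2", 1200, "person", "blue", None),
     ("cam-3", 1500, "vehicle", "red", "B222")])

# "Find red vehicles" becomes an index lookup, not a video scan.
for row in db.execute(
        "SELECT camera, ts, plate FROM detected_objects "
        "WHERE object_type = 'vehicle' AND color = 'red'"):
    print(row)
```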
Distributed object storage technology
In its architecture and design, the security cloud system fully accounts for the reality that large-scale clusters suffer hardware and software failures. Using advanced management concepts and software, it virtualizes and consolidates the storage space of large numbers of ordinary storage servers, achieves high tolerance of hardware and software faults, and builds a highly stable and reliable storage cluster.
The system separates the control flow from the data flow and fully optimizes the metadata node control system, giving it extremely high performance and good linear scalability. The system as a whole presents a unified namespace to applications, which gives it excellent data sharing capabilities. Load is balanced across all nodes in the cluster and the performance of every node is fully utilized, yielding good aggregate performance and ensuring system stability. The cluster adopts highly flexible ad hoc networking technology, making it easy to deploy and maintain. For data reliability, the system uses intelligent redundant reconstruction technology to achieve the highest disk utilization with the best redundancy strategy. In addition, the system fully considers the fault tolerance of hardware and software nodes and can mask every error that can be masked.
Fast File Indexing Technology
A cloud storage system may hold hundreds of millions of files while serving thousands of concurrent users. With metadata and concurrent access at this scale, the traditional approach of multi-level memory-plus-disk storage and multi-level indexing incurs very large addressing overhead, which directly affects system availability.
To improve response speed, the cloud storage system adopts coarse-grained management, indexing data in blocks of typically 64 MB, which greatly reduces the amount of metadata. Even so, the system's metadata still reaches the gigabyte level. For this reason the system keeps all metadata in memory, bringing file addressing times down to milliseconds.
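A back-of-the-envelope sketch of why 64 MB chunks keep the metadata small enough to hold in RAM, plus the offset-to-chunk lookup a client would perform. The cluster size and per-chunk record size are assumptions for illustration.

```python
# Sketch: chunk-granularity metadata sizing and in-memory chunk addressing.
CHUNK_SIZE = 64 * 1024 * 1024          # 64 MB blocks
BYTES_PER_CHUNK_RECORD = 64            # assumed in-memory metadata per chunk

stored_bytes = 10 * 1024 ** 5          # e.g. 10 PB of video in the cluster
chunks = stored_bytes // CHUNK_SIZE
print("chunks: %d, metadata: ~%.1f GB" %
      (chunks, chunks * BYTES_PER_CHUNK_RECORD / 1024 ** 3))

# Addressing: a read at a byte offset maps to one chunk id, found in memory.
def chunk_for_offset(file_chunk_ids, offset):
    return file_chunk_ids[offset // CHUNK_SIZE]

file_chunk_ids = ["chunk-00", "chunk-01", "chunk-02"]        # a 3-chunk file
print(chunk_for_offset(file_chunk_ids, 130 * 1024 * 1024))   # -> chunk-02
```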
To guarantee the reliability of the metadata, every metadata operation is logged, and the metadata is periodically persisted to disk.
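The following is a sketch of that journaling scheme: every mutation is appended to an operation log before being applied in memory, the full metadata map is periodically written out as a snapshot, and recovery replays the log on top of the last snapshot. File layout and record formats are illustrative only.

```python
# Sketch: in-memory metadata with an operation log plus periodic snapshots.
import json, os, tempfile

state_dir = tempfile.mkdtemp()
LOG, SNAP = os.path.join(state_dir, "oplog"), os.path.join(state_dir, "snapshot")
metadata = {}

def apply_op(op):
    # 1) append to the operation log (durability), 2) apply in memory (speed).
    with open(LOG, "a") as f:
        f.write(json.dumps(op) + "\n")
    metadata[op["path"]] = op["chunks"]

def checkpoint():
    with open(SNAP, "w") as f:
        json.dump(metadata, f)
    open(LOG, "w").close()          # the log can be truncated after a snapshot

def recover():
    state = json.load(open(SNAP)) if os.path.exists(SNAP) else {}
    if os.path.exists(LOG):
        for line in open(LOG):
            op = json.loads(line)
            state[op["path"]] = op["chunks"]
    return state

apply_op({"path": "/cam1/2024-01-01.mp4", "chunks": ["c0", "c1"]})
checkpoint()
apply_op({"path": "/cam2/2024-01-01.mp4", "chunks": ["c7"]})
print(recover())   # both files come back: snapshot plus replayed log
```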
Automatic load balancing technology
The cloud storage system uses a central-server model to manage the entire file system: all metadata is stored on the metadata server, while files are split into blocks of data stored on different data nodes.
The metadata server maintains a unified namespace and tracks the usage of every data node in the system. When a client sends a read or write request, the metadata server selects the node server with the lightest load, based on each data node's disk usage and network load, to serve the request, automatically balancing the load across the cluster.
Each data node also provides disk-level load balancing: based on disk I/O load and remaining capacity, the disk with the lightest load is automatically chosen to store new data files.
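The selection logic below is an illustrative sketch of the two levels just described: the metadata server ranks data nodes by a combined disk-usage/network-load score, and the chosen node then picks its least-loaded disk that still has room. The equal weighting is an assumption, not a documented formula.

```python
# Sketch: node-level and disk-level load-balanced placement choices.
def pick_data_node(nodes):
    # nodes: dicts with 'name', 'disk_used' (0-1), 'net_load' (0-1)
    return min(nodes, key=lambda n: 0.5 * n["disk_used"] + 0.5 * n["net_load"])

def pick_disk(disks, needed_gb):
    usable = [d for d in disks if d["free_gb"] >= needed_gb]
    return min(usable, key=lambda d: d["io_load"]) if usable else None

nodes = [
    {"name": "dn-1", "disk_used": 0.80, "net_load": 0.30},
    {"name": "dn-2", "disk_used": 0.40, "net_load": 0.20},
    {"name": "dn-3", "disk_used": 0.50, "net_load": 0.70},
]
node = pick_data_node(nodes)
disks = [{"id": 0, "io_load": 0.9, "free_gb": 500},
         {"id": 1, "io_load": 0.2, "free_gb": 120}]
print(node["name"], pick_disk(disks, needed_gb=64))   # dn-2 and disk 1
```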
When a data node goes offline because of a machine failure or for any other reason, the metadata server automatically isolates that machine and stops offering it to clients, and the data stored on it is automatically rebuilt on other available node servers, masking the impact of the single-node failure on the system.
In addition, data on a faulty data node can be recovered quickly: the hard disks are simply pulled from the failed node and inserted into other data nodes. This reduces the data-recovery pressure on the cluster without affecting client reads and writes.
High-speed concurrent access technology
When a client accesses cloud storage, it first contacts the metadata server to learn which data nodes it should exchange data with, and then accesses those data nodes directly for the data itself.
Only control flow passes between the client and the metadata server, never data flow, which greatly reduces the load on the metadata server and keeps it from becoming a performance bottleneck. Data flows directly between the client and the data nodes. Because each file is divided across multiple nodes, the client can access several node servers at once, so the I/O of the whole system is highly parallel and overall performance improves.
In general, the overall throughput of the system is proportional to the number of node servers.
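The sketch below mirrors this read path: one control-plane call resolves chunk locations, then the chunks are pulled from the data nodes in parallel, which is why throughput grows with the number of nodes. The lookup table and fetch function are stand-ins for real RPCs.

```python
# Sketch: metadata lookup followed by parallel chunk reads from data nodes.
from concurrent.futures import ThreadPoolExecutor

# Pretend metadata-server answer: chunk id -> data node holding it.
chunk_locations = {"c0": "dn-1", "c1": "dn-2", "c2": "dn-3"}

def fetch_chunk(chunk_id: str, node: str) -> bytes:
    # Placeholder for a direct client -> data-node transfer.
    return f"<{chunk_id} from {node}>".encode()

def read_file(chunk_ids):
    with ThreadPoolExecutor(max_workers=len(chunk_ids)) as pool:
        parts = pool.map(lambda c: fetch_chunk(c, chunk_locations[c]), chunk_ids)
        return b"".join(parts)

print(read_file(["c0", "c1", "c2"]))
```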
Metadata high-reliability technology
For the metadata, fault tolerance is provided through the operation log. The metadata server's local SSDs are configured as highly reliable RAID 1, providing strong reliability and fault tolerance. If the metadata server fails but its disk data survives, the metadata can be restored quickly. The operation log is also synchronized in real time between the active and standby metadata servers, achieving an even higher degree of reliability.
For the node servers, fault tolerance is implemented with erasure code redundancy: data is stored with redundancy distributed across different data nodes, so the loss of any single data node causes no data loss and does not interrupt any read or write in progress. Afterwards, a flexible data recovery mechanism rebuilds the lost data, and the larger the cluster, the faster the recovery.
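As a minimal illustration of this kind of redundancy, the sketch below uses simple XOR parity in a 4+1 layout: four data stripes plus one parity stripe, and any single lost stripe can be rebuilt from the survivors. Production erasure codes are typically Reed-Solomon based and support richer layouts; this only shows the space/recovery trade-off.

```python
# Sketch: 4+1 single-parity redundancy and single-stripe reconstruction via XOR.
def xor_bytes(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]      # 4 data stripes
parity = xor_bytes(data)                         # 1 parity stripe -> 25% overhead

# Simulate losing data stripe 2 and rebuilding it from the survivors + parity.
survivors = [data[0], data[1], data[3], parity]
rebuilt = xor_bytes(survivors)
assert rebuilt == data[2]
print("recovered stripe:", rebuilt)
```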
High availability technology for all service nodes
All nodes of the system are networked together and use highly reliable fault-tolerance mechanisms, so node changes do not require stopping the service: storage nodes can be added or removed online.
The metadata server uses active/standby hot-backup technology: if the active machine fails, the standby automatically takes over its work and the external service never stops. The storage nodes can use the erasure-code redundancy mechanism, for example 4+1 node redundancy, so that the loss of any one node causes no data loss, the service does not stop, and clients notice nothing.
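A simplified sketch of the active/standby takeover: the standby metadata server watches the active node's heartbeat and promotes itself when heartbeats stop, so clients keep talking to "the metadata service" without interruption. The timeout value and the promotion hook are illustrative assumptions.

```python
# Sketch: heartbeat-based failover from standby to active metadata server.
import time

HEARTBEAT_TIMEOUT = 3.0   # seconds without a heartbeat before failover

class StandbyMetadataServer:
    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.role = "standby"

    def on_heartbeat(self):
        self.last_heartbeat = time.monotonic()

    def check_failover(self):
        if self.role == "standby" and \
           time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            self.role = "active"          # take over the service address here
        return self.role

standby = StandbyMetadataServer()
standby.on_heartbeat()
print(standby.check_failover())           # "standby" -- active is healthy
standby.last_heartbeat -= 10              # simulate missed heartbeats after a crash
print(standby.check_failover())           # "active" -- standby has taken over
```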