Home

General Information


COPCAMS (COgnitive & Perceptive CAMeraS) is an ARTEMIS project funded under grant agreement No. 332913. The project started on April 1, 2013 and ended on September 30, 2016. The project consortium involved 22 partners from seven European countries.

Project Summary


Vision systems analysing images from multiple cameras are becoming the norm, be it in large-scale surveillance, advanced manufacturing or traffic monitoring. COPCAMS leverages recent advances in embedded computing platforms to develop large-scale, integrated Cognitive & Perceptive Video Systems (CPVS). It aims to exploit new programmable accelerators, such as manycores, embedded Graphics Processing Units (GPUs), General-Purpose computing on GPUs (GPGPU), Digital Signal Processors (DSPs) and Field-Programmable Gate Arrays (FPGAs), to power a new generation of greener, low-power smart cameras and gateways.

A Paradigm Change Fuelled by Embedded Computing Advances

COPCAMS contributed to a paradigm change: whereas previous generations of systems connected simple cameras to powerful centralised computing servers through high-bandwidth networks, the COPCAMS vision pushes low-power, high-performance computing to the edge of the system and into distributed aggregators. These “smart cameras” and “smart aggregators” process video streams and extract meaningful semantic information. They can decide locally whether a stream’s content is of interest and worth propagating, annotate a compressed video stream with meta-information computed on the higher-resolution raw stream, or use information from other sensors (e.g. acoustic or Radio Frequency Identification (RFID) sensors) to steer the camera towards actual activity. This decentralised, distributed analysis and decision-making saves both energy and bandwidth while opening opportunities for new distributed applications.
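The edge-side filtering idea can be sketched as a small decision loop: a hypothetical smart camera forwards a frame to the aggregator only when the scene has changed enough to be worth the bandwidth. The frame format, threshold and difference metric below are illustrative assumptions, not the COPCAMS implementation.

```python
# Toy sketch: a smart camera decides locally whether a frame is "of interest".
# Frames are flat lists of pixel intensities; real systems would use actual
# image buffers and more sophisticated change detection.

def frame_diff(prev, curr):
    """Mean absolute pixel difference between two equally sized frames."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def filter_stream(frames, threshold=10.0):
    """Yield only frames whose change versus the previous one exceeds the threshold."""
    prev = None
    for frame in frames:
        if prev is None or frame_diff(prev, frame) > threshold:
            yield frame   # propagate: the scene changed
        prev = frame      # static frames are dropped at the edge

# A static scene followed by a sudden change: only 2 of 4 frames are forwarded.
static = [100] * 16
changed = [180] * 16
forwarded = list(filter_stream([static, static, static, changed]))
```

Only the first frame and the changed frame leave the camera; the two static repeats are discarded locally, which is exactly where the bandwidth and energy savings come from.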

More Flexibility for Smarter Applications

Due to both algorithmic and computational complexity, previous embedded vision systems were conceived as special-purpose devices dedicated to narrow application domains. The COPCAMS solution represents a significant step towards wider adoption of distributed, flexible embedded vision systems in surveillance and advanced manufacturing.

Advancing the State of the Art Thanks to a Software Ecosystem

COPCAMS proposes a set of flexible programming models, tools and standard libraries for smart cameras on embedded platforms. This ecosystem addresses embedded GPU programming techniques and code generation for image and video analysis, codecs and multi-sensor analysis. Parallelisation tools, data-flow programming languages, directed optimisations, and system and scalar modelling have been tried and tested on several aspects of the cognitive video software stack: pre-processing steps that improve the quality and usefulness of still images; image and video understanding, object classification and recognition; highly parallel video coding schemes; sophisticated data fusion; and detection and multi-camera tracking with object re-identification. On all these fronts, COPCAMS advances the state of the art for embedded perception and vision algorithms in two main ways: by adapting these techniques to embedded, low-power computing platforms, and by using open standards such as OpenCL and OpenMP to enable efficient design and cost reductions.
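The data-flow style mentioned above can be illustrated with a minimal sketch: each processing stage is an independent actor that consumes and produces a stream, so stages can be mapped onto different cores or accelerators. The stage names and toy "frames" below are assumptions for illustration only, not COPCAMS tooling.

```python
# Data-flow sketch: stages communicate only through streams, so a runtime is
# free to place each stage on a different core, GPU or accelerator.

def preprocess(frames):
    """Pre-processing stage: clamp pixel values to a valid 0..255 range."""
    for f in frames:
        yield [min(255, max(0, p)) for p in f]

def detect(frames, threshold=128):
    """Toy 'detection' stage: count bright pixels per frame."""
    for f in frames:
        yield sum(1 for p in f if p > threshold)

def pipeline(frames):
    """Compose stages like a data-flow graph: frames -> preprocess -> detect."""
    return list(detect(preprocess(frames)))

counts = pipeline([[300, 10, 200], [-5, 50, 60]])
```

Because each stage only sees a stream, swapping in an accelerated implementation of `detect` would not change the rest of the graph; that decoupling is the point of the data-flow programming model.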

This kick-started a mixed hardware and software ecosystem that reduces costs and shortens development cycles by building on low-power, high-performance solutions drawing on the latest advances in microelectronics and computing architectures, such as embedded ARM architectures, manycores, GPUs and GPGPU.

Smart Cameras Bring an Overhaul of the Value Chain

COPCAMS’ impact covered the complete value chain: academia and small and medium-sized enterprises (SMEs) gained advanced embedded platforms on which to test and optimise innovative vision applications, coding and cognitive algorithms. Platform providers gained a growing ecosystem and the possibility to explore new market opportunities. System integrators benefit from the powerful components and tools developed in COPCAMS and can offer a new generation of vision-related products. Finally, service providers can capitalise on the COPCAMS developments to provide value-added services to end users, well beyond what can be offered today.

Prototypes and Actual COPCAMS Deployments

COPCAMS prototyped and field-tested full large-scale vision systems. It exploited advanced platforms, such as GPU/GPGPU-based embedded architectures, to power a new generation of vision-related devices able to extract relevant information from captured images and autonomously react to the sensed environment by interacting at large scale in a distributed manner.

COPCAMS has a significant impact on all addressed applications: product quality control was field-tested in the large-scale production of copper-graphite commutators, and included three machine-vision-based tasks: (1) dimensional measurements of the copper base, (2) quality inspection of the copper-graphite soldering, and (3) measurement of the roughness of the mounting holes. The design of the quality control procedures consisted of three stages: (1) image processing using machine vision algorithms, (2) construction of predictive models using machine learning algorithms, and (3) parameter tuning using optimisation algorithms. These procedures were installed on the embedded computer vision platform and deployed on the targeted commutator production line, which operates 24/7 and produces about 15,000 pieces per day. The resulting automated quality control replaced the previous manual product inspection and enabled innovative on-line, non-contact measurements.
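The three-stage design described above can be sketched end to end: a machine-vision feature is extracted from the image, a simple predictive model classifies the part, and the model's parameter is tuned against labelled samples. The feature (mean intensity), the threshold model and the toy data below are hypothetical stand-ins for the deployed algorithms.

```python
# Hedged sketch of the three-stage quality-control design (assumed toy versions
# of each stage, not the production algorithms).

def extract_feature(image):
    """Stage 1, image processing: reduce an image to a scalar feature."""
    return sum(image) / len(image)

def classify(feature, threshold):
    """Stage 2, predictive model: flag a part as defective above the threshold."""
    return feature > threshold

def tune_threshold(samples, candidates):
    """Stage 3, parameter tuning: pick the threshold with fewest misclassifications."""
    def errors(t):
        return sum(classify(extract_feature(img), t) != defect
                   for img, defect in samples)
    return min(candidates, key=errors)

# Labelled toy samples: (image, is_defective)
samples = [([10, 20, 30], False), ([200, 210, 220], True), ([15, 25, 35], False)]
best = tune_threshold(samples, candidates=[50, 100, 150, 250])
```

In a real deployment the feature extractor would be a full vision pipeline and the model a trained classifier, but the division of labour between the three stages is the same.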

COPCAMS non-contact roughness measurement in commutator production


In addition, the manufacturing process can be monitored in real time via smart cameras with Radio Frequency (RF) sensing capabilities. This system provides information about the position of assets used in the production process (trolleys, trays, tools, etc.), so the process can easily be optimised to ensure maximum productivity within a factory site. Moreover, thanks to the COPCAMS approach, one can easily and inexpensively scale up the installation by simply installing new smart cameras in areas where tracking is needed, while keeping energy and bandwidth consumption low.
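A minimal sketch of the RF-assisted tracking idea: each smart camera reports the received signal strength (RSSI) of an asset's RFID tag, and an aggregator places the asset at the camera with the strongest reading. The camera names, asset IDs and RSSI values are illustrative assumptions; adding a camera simply adds one more entry per asset.

```python
# Nearest-reader localisation sketch (assumed data model, not the COPCAMS API):
# each asset's tag is heard by several cameras with different signal strengths.

def locate_assets(readings):
    """readings: {asset_id: {camera_id: rssi_dBm}} -> {asset_id: nearest camera_id}."""
    # RSSI is negative dBm; the largest (least negative) value is the strongest.
    return {asset: max(cams, key=cams.get) for asset, cams in readings.items()}

readings = {
    "trolley-7": {"cam-hall-A": -62, "cam-hall-B": -48, "cam-dock": -80},
    "tray-12":   {"cam-hall-A": -55, "cam-hall-B": -71},
}
positions = locate_assets(readings)
```

Nearest-reader localisation is coarse but cheap; finer positioning would combine readings from several cameras, which the same data model supports.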

COPCAMS asset tracking for advanced manufacturing


Summary of Impact

COPCAMS facilitates the transition from the highly vertically structured embedded vision systems market toward a more horizontal market, thereby creating new opportunities that SMEs and start-ups can address more easily. The previous lack of flexibility was due to both algorithmic and computational complexity, as embedded vision systems were conceived as special-purpose devices dedicated to narrow application domains. The COPCAMS solution represents a significant step towards wider adoption of distributed, flexible embedded vision systems. COPCAMS provides key enabling technologies to build smart environments, with first applications in surveillance and advanced manufacturing.
