Neuromorphic vision is a field that draws on the workings of the human visual system to develop electronic systems that process visual information efficiently and in real time. This approach uses sensors and algorithms designed to mimic the biological properties of the eye and brain.
Instead of capturing data in fixed frames like traditional cameras, neuromorphic sensors record individual events (changes in light intensity) at each pixel. This makes them highly efficient in terms of energy consumption and processing speed.
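To make the frame-versus-event distinction concrete, the sketch below simulates DVS-style event generation from a sequence of ordinary frames: a pixel emits an event only when its log-intensity changes by more than a threshold. The function name, threshold value and the synthetic moving-square input are illustrative assumptions, not any specific sensor's interface; real event cameras perform this detection asynchronously in hardware.

```python
import numpy as np

def frames_to_events(frames, timestamps, threshold=0.2):
    """Emit DVS-style events (t, x, y, polarity) whenever the log-intensity
    change at a pixel exceeds `threshold` between consecutive frames.

    This only simulates the event-generation principle; real neuromorphic
    sensors detect these changes asynchronously, per pixel, in hardware.
    """
    events = []
    log_prev = np.log(frames[0].astype(np.float64) + 1e-6)
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_curr = np.log(frame.astype(np.float64) + 1e-6)
        delta = log_curr - log_prev
        ys, xs = np.nonzero(np.abs(delta) >= threshold)
        for x, y in zip(xs, ys):
            polarity = 1 if delta[y, x] > 0 else -1   # brighter or darker
            events.append((t, x, y, polarity))
        # Reset the reference only where an event fired, like a real DVS pixel.
        fired = np.abs(delta) >= threshold
        log_prev[fired] = log_curr[fired]
    return events

# Example: a bright square moving one pixel per frame triggers events only at
# its leading and trailing edges; the static background generates no data.
frames = np.zeros((3, 32, 32), dtype=np.uint8)
for i in range(3):
    frames[i, 10:20, 10 + i:20 + i] = 255
print(len(frames_to_events(frames, timestamps=[0, 1000, 2000])))
```

Because only changing pixels produce data, the sensor output stays sparse, which is where the gains in energy consumption and processing speed come from.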
The term ‘neuromorphic’ was coined by Carver Mead in the 1980s. Mead, a pioneer in microelectronics, proposed designing electronic systems inspired by the structure and function of the human brain. Since then, research on neuromorphic sensors has advanced, with key milestones such as the development of event cameras (e.g. the Dynamic Vision Sensor, DVS) that mimic the behaviour of the human eye.
Neuromorphic vision is closely linked to AI: its sensors provide highly optimised, relevant data for the training and execution of deep learning and machine learning algorithms.
Neuromorphic vision is particularly well suited to settings where low latency and energy efficiency are essential and robust real-time event processing is required. It therefore has promising applications in sectors such as:
– Robotics: improves robots' visual perception for navigation and manipulation in complex environments.
– Autonomous driving: enables fast, efficient detection of objects and obstacles.
– Medical devices: supports technologies such as visual prostheses and biomedical analysis.
– Security and surveillance: provides highly accurate real-time detection of suspicious movements and critical events.
– Industry and automation: aids quality inspection systems, object tracking on assembly lines, and industrial IoT systems.
The combination of neuromorphic vision and AI will transform the way machines perceive and understand the environment, bringing them closer to human biological processing.
Implementing neuromorphic vision in artificial intelligence presents several challenges, whether technical, economic or practical:
– Development of specialised hardware: neuromorphic sensors require advanced chips that mimic the neural activity of the brain, which are expensive and technically complex to manufacture.
– Unconventional data processing: instead of conventional images, neuromorphic sensors generate data as streams of events, which require specific algorithms and new paradigms to interpret (see the event-representation sketch after this list).
– Specialised learning algorithms: new algorithms, such as spiking neural networks (SNNs), are needed that are compatible with the asynchronous, event-driven nature of the data (see the SNN sketch after this list).
– Scalability: algorithms and systems need to scale to large applications, which has not yet been fully achieved.
– Lack of expertise and training: experts in neuromorphic vision are scarce, and building technical teams takes time and resources.
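On the data-processing challenge, one common workaround, sketched below under the assumption of the simple (t, x, y, polarity) event tuples used earlier, is to accumulate events from a short time window into a two-channel count image that conventional deep-learning models can consume; the function name and window size are illustrative, not a standard API.

```python
import numpy as np

def events_to_frame(events, height, width, t_start, t_end):
    """Accumulate events (t, x, y, polarity) from one time window into a
    2-channel count image (channel 0: positive, channel 1: negative polarity).

    A representation like this lets conventional deep-learning models consume
    event data, at the cost of giving up some of its temporal precision.
    """
    frame = np.zeros((2, height, width), dtype=np.float32)
    for t, x, y, polarity in events:
        if t_start <= t < t_end:
            channel = 0 if polarity > 0 else 1
            frame[channel, y, x] += 1.0
    return frame

# Usage: bin the first millisecond of an event stream into one tensor.
events = [(120, 5, 7, 1), (340, 5, 8, -1), (1500, 6, 7, 1)]
tensor = events_to_frame(events, height=32, width=32, t_start=0, t_end=1000)
print(tensor.sum())   # 2.0 -> only the two events inside the window counted
```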
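On the learning-algorithm challenge, the following minimal sketch shows the basic unit of a spiking neural network, a leaky integrate-and-fire (LIF) neuron, processing a binary spike train. The time constant, threshold and weight are illustrative values chosen for demonstration, not a tuned model.

```python
def lif_neuron(input_spikes, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0, weight=0.5):
    """Simulate a single leaky integrate-and-fire (LIF) neuron, the basic
    building block of spiking neural networks (SNNs).

    The membrane potential leaks toward rest and is incremented by weighted
    input spikes; when it crosses `v_thresh`, the neuron emits an output
    spike and resets.
    """
    v = v_reset
    output_spikes = []
    for spike_in in input_spikes:
        v += dt / tau * (v_reset - v)   # leak toward the resting potential
        v += weight * spike_in          # integrate the incoming spike
        if v >= v_thresh:               # fire and reset
            output_spikes.append(1)
            v = v_reset
        else:
            output_spikes.append(0)
    return output_spikes

# A burst of input spikes drives the neuron above threshold after a few steps.
print(lif_neuron([1, 1, 1, 0, 0, 1, 1, 1]))   # -> [0, 0, 1, 0, 0, 0, 0, 1]
```

Because the neuron only does work when a spike arrives, computation follows the same sparse, event-driven logic as the sensor itself.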
In summary, neuromorphic vision has transformative potential in multiple industries. Its implementation is strategic for companies seeking technological advantage in artificial intelligence applications.
From its research center in the Canary Islands, ARQUIMEA runs one research orbital dedicated to robotics and another to Artificial Intelligence, both of which develop projects that explore the potential of neuromorphic vision.
In addition, all ARQUIMEA Research Center projects belong to the QCIRCLE project, co-funded by the European Union, which aims to create a center of scientific excellence in Spain.
“Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.”