It’s got praying mantis eyes

A photograph of the artificial compound eye prototype developed at the University of Virginia School of Engineering and Applied Science by associate professor Kyusang Lee.
Credit: University of Virginia School of Engineering and Applied Science / Kyusang Lee

UVA researchers sharpen machine vision by mimicking nature and taking advanced computing to the edge.

Self-driving cars occasionally crash because their visual systems can’t always process static or slow-moving objects in 3D space. In that regard, they resemble many insects, whose monocular compound eyes provide great motion tracking and a wide field of view but poor depth perception.

Except for the praying mantis.

The fields of view of a praying mantis’ left and right eyes overlap, creating binocular vision with depth perception in 3D space.

Combining this insight with some nifty optoelectrical engineering and innovative “edge” computing — processing data in or near the sensors that capture it — researchers at the University of Virginia School of Engineering and Applied Science have developed artificial compound eyes that overcome vexing limitations in the way machines currently collect and process real-world visual data. These limitations include accuracy issues, data processing lag times and the need for substantial computational power.

“After studying how praying mantis eyes work, we realized a biomimetic system that replicates their biological capabilities required developing new technologies,” said Byungjoon Bae, a Ph.D. candidate in the Charles L. Brown Department of Electrical and Computer Engineering.

About Those Biomimetic Peepers

The team’s meticulously designed “eyes” mimic nature by integrating microlenses and multiple photodiodes, which produce an electrical current when exposed to light. The team used flexible semiconductor materials to emulate the convex shapes and faceted positions within mantis eyes.

“Making the sensor in hemispherical geometry while maintaining its functionality is a state-of-the-art achievement, providing a wide field of view and superior depth perception,” Bae said.

“The system delivers precise spatial awareness in real time, which is essential for applications that interact with dynamic surroundings.”

Such uses include self-driving vehicles, low-power drones, robotic assembly, surveillance and security systems, and smart home devices.

Bae, whose adviser is Kyusang Lee, an associate professor in the department with a secondary appointment in materials science and engineering, is first author of the team’s recent paper in Science Robotics.

Among the team’s important findings: the lab’s prototype system could reduce power consumption by a factor of more than 400 compared with traditional visual systems.

Benefits of Computing on the Edge

Rather than using cloud computing, Lee’s system can process visual information in real time, nearly eliminating the time and resource costs of data transfer and external computation, while minimizing energy usage.

“The technological breakthrough of this work lies in the integration of flexible semiconductor materials, conformal devices that preserve the exact angles within the device, an in-sensor memory component, and unique post-processing algorithms,” Bae said.

The key is that the sensor array continuously monitors changes in the scene, identifying which pixels have changed and encoding this information into smaller data sets for processing.
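
In spirit, this is a form of differential, event-style sensing: transmit only what changed. Here is a minimal Python sketch of that idea, with an illustrative change threshold and a simple (row, column, value) event format; both are assumptions for clarity, not the in-sensor encoding the paper describes.

```python
import numpy as np

def encode_changes(prev_frame, curr_frame, threshold=10):
    # Keep only pixels whose intensity changed by more than `threshold`
    # between consecutive frames; return them as a sparse event list of
    # (row, col, new_value) triples. When the scene is mostly static,
    # this is far less data than a full frame.
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    rows, cols = np.nonzero(diff > threshold)
    return np.stack([rows, cols, curr_frame[rows, cols]], axis=1)

# Example: a mostly static 8-bit scene in which one small region changes
prev = np.zeros((480, 640), dtype=np.uint8)
curr = prev.copy()
curr[100:110, 200:210] = 255          # a bright object appears
events = encode_changes(prev, curr)
print(f"{len(events)} events instead of {curr.size} pixels")  # 100 vs 307200
```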

The approach mirrors how insects perceive the world through visual cues, differentiating pixels between scenes to understand motion and spatial data. For example, like other insects — and humans, too — the praying mantis can process visual data rapidly by using the phenomenon of motion parallax, in which nearer objects appear to move faster than distant objects. Only one eye is needed to achieve the effect, but motion parallax alone isn’t sufficient for accurate depth perception.
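
The parallax geometry can be written down in one line: an observer translating sideways at speed v sees a stationary point at depth Z drift past at angular velocity w = v / Z, so depth follows as Z = v / w. A short Python sketch of this textbook relation, in the simplest fronto-parallel case (an illustration of the principle, not the team’s algorithm):

```python
def depth_from_parallax(observer_speed, angular_velocity):
    # Fronto-parallel motion parallax: a stationary point at depth Z
    # drifts across the image at angular velocity w = v / Z when the
    # observer translates sideways at speed v, so Z = v / w.
    # Units: meters per second and radians per second in, meters out.
    return observer_speed / angular_velocity

# Nearer objects appear to move faster: at an observer speed of 1 m/s,
# a point drifting at 1 rad/s is ~1 m away; one at 0.1 rad/s is ~10 m away.
print(depth_from_parallax(1.0, 1.0))   # 1.0
print(depth_from_parallax(1.0, 0.1))   # 10.0
```

The same formula shows the limitation noted above: with one eye and no relative motion, the drift rate goes to zero and the depth estimate vanishes.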

Praying mantis eyes are special because, like us, they use stereopsis — seeing with both eyes to perceive depth — in addition to their hemispherical compound eye geometries and motion parallax to understand their surroundings.
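
Stereopsis supplies the depth cue that parallax alone cannot: for two parallel viewpoints a fixed baseline apart, a point’s depth follows from how far its image shifts between the two views. A sketch of this standard triangulation in Python (again the textbook principle, not the team’s post-processing algorithm; the focal length, baseline and disparity below are made-up numbers):

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    # Classic stereo triangulation for two parallel cameras (or eyes)
    # separated by baseline B: a point imaged with horizontal disparity
    # of d pixels lies at depth Z = f * B / d, with f in pixels.
    return focal_length_px * baseline_m / disparity_px

# Hypothetical numbers: sensors 6 cm apart, f = 800 px, 16 px disparity
print(depth_from_disparity(800, 0.06, 16))  # 3.0 meters
```

Because disparity works even when nothing in the scene is moving, pairing it with motion parallax covers exactly the static-object case that trips up monocular systems.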

“The seamless fusion of these advanced materials and algorithms enables real-time, efficient and accurate 3D spatiotemporal perception,” said Lee, a prolific early-career researcher in thin-film semiconductors and smart sensors.

“Our team’s work represents a significant scientific insight that could inspire other engineers and scientists by demonstrating a clever, biomimetic solution to complex visual processing challenges,” he said.

Publication

The paper, “Stereoscopic artificial compound eyes for spatiotemporal perception in three-dimensional space,” appeared in the May 15 edition of Science Robotics. UVA electrical and computer engineering graduate students Doeon Lee, Minseong Park, Yujia Mu, Yongmin Baek, Inbo Sim and Cong Shen also contributed to the research.

This work was supported by the National Science Foundation and the U.S. Air Force Office of Scientific Research.

Journal: Science Robotics
DOI: 10.1126/scirobotics.adl3606
Article Title: Stereoscopic artificial compound eyes for spatiotemporal perception in three-dimensional space
Article Publication Date: 15-May-2024

Media Contact

Jennifer McManamay
University of Virginia School of Engineering and Applied Science
jmcmanamay@virginia.edu
Office: 540-241-4002

Expert Contact

Kyusang Lee
University of Virginia School of Engineering and Applied Science
kl6ut@virginia.edu
