Researchers teach computers how to name images by 'thinking'

The new system, which can automatically annotate entire online collections of photographs as they are uploaded, means significant time-savings for the millions of Internet users who now manually tag or identify their images. It also facilitates retrieval of images through the use of search terms, said James Wang, associate professor in the Penn State College of Information Sciences and Technology, and one of the technology's two inventors.

The system is described in a paper, “Real-Time Computerized Annotation of Pictures,” presented at the recent ACM Multimedia 2006 conference in Santa Barbara, Calif., and authored by Jia Li, associate professor in the Department of Statistics, and Wang. Penn State has filed a provisional patent application on the invention.

Major search engines currently rely on uploaded text tags to describe images. While many collections are annotated, many are not, and images without text tags are not accessible to Web searchers. Because it provides text tags, the ALIPR (Automatic Linguistic Indexing of Pictures - Real Time) system makes those images visible to Web users.

ALIPR does this by analyzing the pixel content of an image and comparing it against a stored knowledge base built from the pixel content of tens of thousands of example images. The computer then suggests a list of 15 possible annotations, or words, for the image.
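The paper itself describes a statistical modeling approach; purely as an illustration of the idea, the sketch below shows one simple way a pixel-content signature could be matched against a stored knowledge base of tagged examples to produce a ranked list of 15 candidate words. The feature representation and the distance-based weighting are assumptions made for this sketch, not details of the authors' method.

```python
# Illustrative sketch only: rank candidate tags for a new image by comparing
# its feature signature against a knowledge base of tagged example signatures.
# The feature choice and weighting scheme are assumptions, not ALIPR's method.
from collections import defaultdict
import numpy as np

def suggest_tags(query_features, knowledge_base, top_k=15):
    """knowledge_base: list of (feature_vector, [tags]) pairs."""
    scores = defaultdict(float)
    for features, tags in knowledge_base:
        # Closer examples contribute more weight to their tags.
        distance = np.linalg.norm(query_features - features)
        weight = 1.0 / (1.0 + distance)
        for tag in tags:
            scores[tag] += weight
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:top_k]

# Toy usage: three stored examples, one query image signature.
kb = [
    (np.array([0.9, 0.1, 0.2]), ["beach", "sand", "ocean"]),
    (np.array([0.8, 0.2, 0.3]), ["ocean", "sky", "water"]),
    (np.array([0.1, 0.9, 0.8]), ["forest", "tree", "green"]),
]
print(suggest_tags(np.array([0.85, 0.15, 0.25]), kb))
```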

“By inputting tens of thousands of images, we have trained computers to recognize certain objects and concepts and automatically annotate those new or unseen images,” Wang said. “More than half the time, the computer's first tag out of the top 15 tags is correct.”

In addition, for 98 percent of images tested, the system provided at least one correct annotation among the top 15 selected words. The system, which completes the annotation in about 1.4 seconds, can also be applied to other domains such as art collections, satellite imaging and pathology slides, Wang said. The new system builds on the authors' previous invention, ALIP, which also analyzes image content. But unlike ALIP, which characterized images with computationally intensive spatial modeling, ALIPR characterizes images by modeling distributions of color and texture.
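The article states only that ALIPR summarizes an image through distributions of color and texture rather than the heavier spatial modeling used by ALIP. As a rough illustration of that description, the sketch below builds a simple color-and-texture histogram signature for an image; the specific color representation, texture measure and bin counts are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch: summarize an image as distributions (histograms) of
# color and texture, in the spirit of the description above. The color
# representation, texture measure and bin counts are assumptions.
import numpy as np

def color_texture_signature(image, color_bins=8, texture_bins=8):
    """image: H x W x 3 array of RGB values in [0, 1]."""
    # Color distribution: a joint histogram over the three color channels.
    color_hist, _ = np.histogramdd(
        image.reshape(-1, 3), bins=(color_bins,) * 3, range=[(0, 1)] * 3
    )
    color_hist = color_hist.ravel() / color_hist.sum()

    # Texture distribution: histogram of local gradient magnitudes on luminance.
    gray = image.mean(axis=2)
    gy, gx = np.gradient(gray)
    magnitude = np.hypot(gx, gy)
    texture_hist, _ = np.histogram(
        magnitude, bins=texture_bins, range=(0.0, magnitude.max() + 1e-9)
    )
    texture_hist = texture_hist / texture_hist.sum()

    return np.concatenate([color_hist, texture_hist])

# Toy usage on a random 64x64 RGB image.
signature = color_texture_signature(np.random.rand(64, 64, 3))
print(signature.shape)  # (8*8*8 + 8,) = (520,)
```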

The researchers acknowledge that computers trained with their algorithms have difficulty when photos are fuzzy or have low contrast or resolution; when objects are shown only partially; and when the photographer's angle presents an object differently from how the computer was trained to recognize it. Adding more training images and improving the training process, both areas of future research, may reduce these limitations.

Media Contact

Margaret Hopkins, EurekAlert!
