Computers learn how to create drugs of the future

The key role of computer technology in the fine-tuning of drug development and design will be considered by Professor Stephen Muggleton of Imperial College London in his inaugural lecture, Models of Mind and Models of Body, today.

The new Professor of Bioinformatics in the Department of Computing will focus on how machine learning and logic programming can reduce the high costs of drug development in the pharmaceutical industry.

The pharmaceutical industry is increasingly overwhelmed by large volumes of data generated both internally, as a result of screening tests and combinatorial chemistry, and externally, from sources such as the Human Genome Project. The majority of drug development is dependent on sifting through this information and using it to identify slight improvements in variants of patented active drugs.

Applying inductive logic programming (ILP), a research area formed at the intersection of machine learning and logic programming, Professor Muggleton and his team have shown that it is possible to construct rules that accurately predict the activity of untried drugs.

“Research and development in the pharmaceutical industry involves laboratories of chemists synthesising and testing hundreds of compounds, often at great expense,” said Professor Muggleton.

“It is now possible to construct rules that predict whether drugs will work from examples of drugs with known medicinal activity. The accuracy of the rules has been shown to be slightly higher than traditional statistical methods used in drug development.”
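The rules Professor Muggleton describes can be given a flavour with a toy sketch. The descriptors and threshold below are entirely hypothetical, hand-written in the style of a rule an ILP system might induce from examples of compounds with known activity; they are not from the actual research.

```python
# Illustrative sketch only: a hand-written rule of the kind an ILP system
# might induce from positive and negative examples of drug activity.
# The descriptors ("hydrophobic_group", "mol_weight") and the learned
# thresholds are hypothetical, chosen purely for illustration.

def predict_active(compound):
    """Toy induced rule: predict a compound is active if it carries a
    hydrophobic group and its molecular weight lies in a learned range."""
    return (compound["hydrophobic_group"]
            and 200 <= compound["mol_weight"] <= 500)

# Applying the rule to two untried compounds:
candidates = [
    {"name": "compound_a", "hydrophobic_group": True,  "mol_weight": 320},
    {"name": "compound_b", "hydrophobic_group": False, "mol_weight": 310},
]

for c in candidates:
    print(c["name"], "predicted active:", predict_active(c))
```

The appeal of such rules over purely statistical models is that they are human-readable: a chemist can inspect the induced conditions directly and relate them to known structure–activity principles.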

Recent research successes in the Computational Bioinformatics Laboratory led by Professor Muggleton include a collaboration with the pharmaceutical company SmithKline Beecham (now GlaxoSmithKline) that has yielded a machine-learning model that can identify novel neuropeptides at over 100 times the rate of the GSK in-house model. Working with the Universities of Manchester and Aberystwyth, researchers led by Professor Muggleton have developed a system that automatically suggests experiments for determining the function of genes in yeast, an important model organism in biological research.

“It is already clear that during the 21st century computers will play an increasingly central role in supporting the fundamental formulation and testing of scientific hypotheses. The automatic construction and testing of hypotheses, and their eventual incorporation into accepted knowledge bases, will require an ability to handle incomplete, incorrect and imprecise information,” he added.

Media Contact

Judith H Moore
