Hermes Solenzol

Computer Models Are Not Replacing Research on Animals, and They Never Will

The number of papers using computer models in biomedical research is insignificant compared with those using animals

The claim: computer models are replacing animals in scientific research

Whenever animal research is mentioned, the same buzz appears: experiments on animals are outdated because computer models are replacing them.

You may have read statements like these:

“Researchers have developed a wide range of sophisticated computer models that simulate human biology and the progression of developing diseases. Studies show that these models can accurately predict the ways that new drugs will react in the human body and replace the use of animals in exploratory research and many standard drug tests.” PETA.
“Sophisticated computer models use existing information (instead of carrying out more animal tests) to predict how a medicine or chemical, such as drain cleaner or lawn fertilizer, might affect a human.” The Humane Society of the United States.
“But research shows computer simulations of the heart have the potential to improve drug development for patients and reduce the need for animal testing.” Scientific American.
“Beginning in 2012, the Endocrine Disruptor Screening Program began a multi-year transition to validate and more efficiently use computational toxicology methods and high-throughput approaches that allow the EPA to more quickly and cost-effectively screen for potential endocrine effects. In 2017 and 2018, ORD and OCSPP worked with other federal partners to compile a large body of legacy toxicity studies that was used to develop computer-based models to predict acute toxicity without the use of animals.” Directive to Prioritize Efforts to Reduce Animal Testing memorandum by Andrew R. Wheeler, Administrator of the Environmental Protection Agency, September 10, 2019.

Papers using animals, mice, rats and non-mammals

There is a way to check if these claims are true.

The final product of scientific research is scientific articles. Therefore, we can evaluate the actual productivity of the two approaches by comparing the number of papers generated with computer models with the number generated using animals.

An additional problem is that no tally of the number of animals used in research is kept for many species, including mice, rats, birds and fish. However, every published paper must report the animal species that was used.

A good place to start is to look at the number of papers using all animals, mice, rats, rodents (rats & mice) and non-mammals (birds, fish, insects, worms, etc.). 
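The article does not list its exact search strings, but given the MeSH-based method described at the end of this article, the categories above plausibly map to queries along the following lines. This is a minimal sketch; the specific term choices are my assumption, not the published queries.

```python
# Hypothetical PubMed queries for the categories in Figure 1.
# The MeSH term choices are illustrative assumptions, not the
# article's published search strings.
figure1_queries = {
    "all animals": '"animals"[MeSH Terms] NOT "humans"[MeSH Terms]',
    "mice": '"mice"[MeSH Terms]',
    "rats": '"rats"[MeSH Terms]',
    "rodents (rats & mice)": '"mice"[MeSH Terms] OR "rats"[MeSH Terms]',
    "non-mammals": '"animals"[MeSH Terms] NOT "mammals"[MeSH Terms]',
}

for label, query in figure1_queries.items():
    print(f"{label}: {query}")
```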

Figure 1 - Number of papers per year using all animals, rodents (rats & mice) and non-mammals, 1975 to 2017.

Figure 1 shows that the number of papers using any kind of animal increased linearly from 1975 to 2017, reaching well over 100,000 papers per year.

A large fraction of these papers report research on rodents, and their number increased in parallel with the number of papers using all animals. However, the number of studies using rats has remained roughly constant since 1990, while the number of papers using mice grew exponentially over the same period. The blue line in the graph is an exponential curve, which fits the mouse data extremely well. Scientists have therefore been dropping rats in favor of mice, likely because of the increasing availability of transgenic mice, which enable more sophisticated experiments.
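As an aside for readers who want to check such a fit themselves, here is a minimal sketch of fitting an exponential curve to yearly paper counts with SciPy. The numbers are made up for illustration; they are not the actual PubMed counts.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative yearly paper counts -- NOT the actual PubMed data.
years = np.array([1990, 1995, 2000, 2005, 2010, 2015])
counts = np.array([12000, 16000, 22000, 31000, 43000, 60000])

def exponential(t, a, b):
    # a * exp(b * (t - 1990)); shifting the time origin keeps the fit stable
    return a * np.exp(b * (t - 1990))

(a, b), _ = curve_fit(exponential, years, counts, p0=(counts[0], 0.05))
print(f"growth rate: {b:.3f} per year; doubling time: {np.log(2) / b:.1f} years")
```

A curve like this fitting the mouse data well, while failing for the flat rat trend, is what justifies calling the growth exponential.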

The number of papers using non-mammals (mostly birds, fish, insects and worms) has also been increasing exponentially, and recently surpassed the number of studies using rats.

Papers on humans and clinical trials

A search with the MeSH Term ‘animal’ without excluding humans yields a very high number of papers, because there are numerous papers on humans. These are shown in Figure 2 together with the results for non-human animals and for mice or rats. Note the change in the scale of the Y-axis from Figure 1.

Clearly, there are many more papers on humans than on animals. Papers on humans increase exponentially, while studies on animals increase linearly, so the gap between the two widens over time. In 1975 there were roughly twice as many papers on humans as on animals; today there are six studies on humans for every study on animals.

Figure 2 - Number of papers per year on humans, on non-human animals, and on mice or rats, 1975 to 2017.

However, this does not mean that animal research is being replaced with research on humans.

Strictly speaking, research on humans is conducted in clinical trials, so let us see what happens when we do a PubMed search on clinical trials.

Looking at the Y-axis scale of Figure 3, we can see that papers reporting clinical trials are far less numerous than papers on humans in Figure 2. In 2017, there was one clinical trial for every 50 papers on humans. This is because most papers on humans are medical case reports, epidemiological studies and other medical observations. These could be considered research, but certainly not the kind of research on physiological and biochemical mechanisms that can replace animal research.

Figure 3 - Number of papers per year reporting clinical trials.

The number of clinical trials has increased over time, but does not follow a clear trend, either linear or exponential (Figure 3). There was a steep drop around 1990 followed by a rapid increase up to 2003. Since then, the number of clinical trials has remained constant.

The result of my search is consistent with the reports in ClinicalTrials.gov, which listed 268,786 completed clinical trials as of 2024. Since each clinical trial takes several years to run and that count is cumulative, it is consistent with the annual number of clinical trials shown in Figure 3.

Papers using computer models

Now we have enough background information to compare the number of papers using computer models with those reporting research on humans and on animals.

Figure 4 shows the evolution in the number of papers using computer models over time. Again, note the big difference in the Y-axis scale compared with Figure 1.

Figure 4 - Number of papers per year using computer models: all papers, papers without animals, and the molecular dynamics, molecular docking and patient-specific modeling categories.

As we might expect, barely any papers using computer models were published before 1985. After that, the number of studies increased slowly until 2001 and rapidly from 2001 to 2008, when it seems to have stopped growing. At that point, the number of studies using computer models was one-fortieth the number of animal studies.

Overall, the number of papers using computer models fits an exponential curve quite well, but this is largely due to their initial growth.

However, many of these papers use computer models in combination with animal experiments, not instead of them.

As shown in Figure 4, excluding the papers that used animals in addition to computer models reduced the number of papers using computer models in 2017 by almost two-thirds. Moreover, the stagnation in the number of computer model studies after 2008 becomes more apparent. There is even a decrease after 2011.

If computer models were replacing animal studies, we would see an increase in the papers exclusively using computer models. Instead, what we see is that numerous papers use both computer models and animals. This is probably because the models are used to analyze results obtained with animals. Alternatively, animal experiments could have been used to validate the computer models.
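This kind of with-and-without comparison can be reproduced programmatically through NCBI's E-utilities API. Below is a minimal sketch; the MeSH queries are my reconstruction of the approach described in the methods section at the end of this article, not the article's published search strings.

```python
import requests
import xml.etree.ElementTree as ET

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(query: str) -> int:
    # rettype=count makes PubMed return only the total number of hits
    r = requests.get(ESEARCH, params={"db": "pubmed", "term": query, "rettype": "count"})
    r.raise_for_status()
    return int(ET.fromstring(r.text).findtext("Count"))

year = 2017
all_cm = pubmed_count(f'"computer simulation"[MeSH Terms] AND {year}[pdat]')
cm_only = pubmed_count(
    f'("computer simulation"[MeSH Terms] NOT "animals"[MeSH Terms]) AND {year}[pdat]'
)
print(f"{year}: {all_cm} computer-model papers, {cm_only} without animals")
```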

Computer modeling categories

The MeSH Term ‘computer simulation’ has six different subcategories:

  • Augmented Reality,

  • Cellular Automata,

  • Molecular Docking Simulation,

  • Molecular Dynamics Simulation,

  • Patient-Specific Modeling,

  • Virtual Reality.

Searches with ‘augmented reality’ and ‘virtual reality’ as MeSH Terms in PubMed yielded very few hits.

“A cellular automaton (pl. cellular automata, abbrev. CA) is a discrete model of computation studied in automata theory. […] Cellular automata have found application in various areas, including physics, theoretical biology and microstructure modeling.” Cellular automaton, Wikipedia.

Therefore, cellular automata do not seem to be a replacement for research on animals.

According to Wikipedia, molecular dynamics “is a computer simulation method for analyzing the physical movements of atoms and molecules.” It is used in biomedical research to study the 3-dimensional structures of proteins and other biomolecules.

Molecular docking is used to study the interaction of small molecules with their ‘docking pockets’ or ‘binding sites’ in proteins like enzymes or neurotransmitter receptors. This is a great tool for designing new drugs that interact with these proteins. However, new drugs designed this way have to be tested in vitro, then on animals and then in clinical trials to be considered useful as medication. The computer model is just the first step in a long process that has to include research on animals.

Patient-specific modeling is “the development and application of computational models of human pathophysiology that are individualized to patient-specific data.” It is used to plan surgeries and to model organ function.

Clearly, none of these techniques can be used to replace animal research. Rather, they complement it.

As we can see in Figure 4, molecular dynamics and molecular docking comprise a good fraction of the recent papers using computer models. Patient-specific modeling generates a very small number of papers.

Number of papers with computer models and using different animal species

Figure 5 - Number of papers published in 2015 using computer models, reporting clinical trials, and using different animal species.

Figure 5 shows a comparison between the number of papers generated in 2015 with computer models, those from clinical trials, and those using different animal species. It shows separate bars for all papers using computer models (CM) and for papers using computer models without animals (CM - animals).

Most of the papers that year used mice or rats. Computer models produced many fewer papers, and this number was similar to the number of papers on clinical trials.

When we consider papers using exclusively computer models, their number was much smaller, comparable to the numbers using dogs, cats and primates.

Interestingly, papers using non-human primates are similar in number to those using zebrafish, the fruit fly Drosophila or the worm C. elegans, showing the relative importance of studies in non-mammals and invertebrates.

If we add the number of papers using these species, they vastly outnumber the papers using exclusively computer models. Figure 1 shows that the number of papers using any kind of animal in 2015 was 120,000.

‘Replace, reduce and refine’ research on animals — is it working?

‘Replace, reduce and refine’ (the ‘3Rs’) is the approach that scientific institutions have adopted since the 1960s to address criticism from animal rights activists. This policy assured the public that the number of animals used in research would be reduced, that animals would be replaced by other methods or by less sentient animals, and that the way animals were used would be refined to decrease animal suffering.

While it is true that the ways animals are used in research have improved substantially, the analysis I present in this article shows that the ‘reduce’ and the ‘replace’ objectives are far from being accomplished. It is true that research on charismatic species like monkeys, dogs and cats has been replaced by research on mice. However, it has not been replaced by experiments in vitro or by computer models.

Figure 1 shows that, overall, the use of animals in research has been increasing since 1975 and will likely continue to grow in the future. A major part of this increase is due to the exponential growth in the use of mice and non-mammal species.

The 3Rs were a foolish promise because the only way we can reduce the use of animals in research is by doing less science. This would imply decreasing scientific progress, which would have a tremendous negative effect on society. We would miss cures for old diseases like cardiovascular problems and new ones like Covid-19.

Computer models are not replacing research with animals

It is clear that computer models are not replacing animals in research.

The number of studies using computer models is relatively small and is not increasing. When we count only studies that use computer models without animals, their number is even smaller and did not increase from 2008 to 2017.

At present, many of the papers using computer models deal with molecular dynamics and molecular docking, methods that complement but do not replace animal experiments. These types of papers have been increasing and many include the use of animals.

Of course, the number of papers using animals does not reflect the actual number of animals used in research. Studies using monkeys use just a few of them, while papers on mice and rats typically use hundreds of animals. Fruit flies are used by the tens of thousands. However, the number of papers does tell us the relative contribution of each species to the scientific endeavor. Also, given that the number of animals per paper for a given species is not likely to change much over time, an increase in the number of papers for that species is likely to reflect an increase in the number of animals used.

The use of animals in research is not being reduced. It continues to increase.

Regarding replacement, charismatic species like dogs, cats and monkeys are being replaced by mice and non-mammals. However, animal research overall is clearly not being replaced by computer models.

Why computer models will not replace animal research in the future

Predictions about the future are risky. Why do I dare to forecast that computers and artificial intelligence (AI) will never replace research on animals?

Surely, the rapid growth of computer power means that sooner or later biomedical research will move from animals to computer models. Right?

Well, no. There are fundamental reasons why, for the foreseeable future, we will need the actual bodies of animals or humans to extract information from them. Even though future computers will help enormously to accelerate biomedical research, they will not be able to tell us what happens inside our bodies or the bodies of animals. We will have to look inside those bodies and tell the computers.

The reason for this lies in the nature of life itself.

Living beings have been created by evolution, which is a contingent process. The word ‘contingent’ means that there is an element of randomness that makes it impossible to predict the outcome of the process. In the words of evolutionary biologist Stephen Jay Gould, if we went back in time and ran evolution again, we would end up with a completely different set of living beings. While natural selection funnels the direction of evolution through the survival of the fittest, it sits on top of random mutations and genetic drift, which are non-deterministic processes. It is impossible to know what the animal species on planet Earth will look like a million years from today.

All the enzymes, intracellular signaling pathways, ion channels, neurotransmitter receptors, hormone receptors, membrane transporters, etc., responsible for the functioning of our bodies were created by contingent processes. Not entirely random, but still impossible to predict.

For example, imagine that you were to design a new car. You would be constrained by physics if you wanted the car to work, but the car could still take countless different forms. It might have four wheels, or three, or six. It could ride high like a truck or low like a sports car. An external observer could not predict how it would look or how it would work.

Likewise, if you told a computer ‘find out how neurons in the spinal cord process pain’, the computer would not be able to tell you. Somebody would have to look at those neurons and find out. You have to feed that information to a computer before it can do anything with it.

The amount of information in our bodies, in each of our cells, is staggering. We have barely scratched its surface. The human genome contains 20,000 to 25,000 genes, and we still don’t know what most of them do. A computer, no matter how powerful, is not going to tell us.

And knowing what each of those genes does is only a small part of the story. We need to know how the proteins encoded by those genes interact with each other to generate metabolism. The only way to do that is to take the body of an animal and look inside.

A computer cannot guess what goes on inside the body, just like it cannot guess the content of a book that it has not read.

The mirage of the computer revolution

The advancement of computer technology in the information revolution has been so amazing that we have become convinced that there is nothing an advanced computer can’t do. That is why it is so easy for animal rights organizations to convince the public that we can eliminate animal research and replace it with computer models.

Even organizations that supposedly defend animal research have helped this misconception by promoting the idea that eventually it will be replaced (one of the three Rs) by computer models, in vitro research or clinical trials. That is simply not true.

As I have shown here, as scientific productivity increases, so does the use of animals. What has happened is that we are using fewer animals of some species (dogs, cats, rabbits and primates) while using more animals of other species, like mice and zebrafish.

Research using computer models is relatively small and is not growing fast enough to ever catch up with animal research.

Computers can do amazing things, but they cannot guess information that they do not have.

That is why computer models will never replace animal research.

There are limits to what is possible, and this is one of them.

How I made the graphs: data mining in PubMed

There is a freely accessible repository of the biomedical papers published anywhere in the world: PubMed. It is run by the US National Library of Medicine, part of the National Institutes of Health (NIH).

In PubMed, you can do keyword searches to find articles on any topic, so I used it for data mining to compare the number of papers using animal research and computer models. In the “Search results” page, there is a nifty graphic at the top left, with bars representing the number of papers per year containing the keyword used in the search. Below it is a “Download CSV” link that lets you get those numbers in a spreadsheet. I imported the numbers into a graphing program (Prism 8, by GraphPad) to create the figures shown above.
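For readers who want to reproduce the plots without Prism, the downloaded CSV can be graphed with matplotlib. This is a minimal sketch, assuming a simple two-column Year,Count layout; the real file may include extra header lines to skip, and the filename is a placeholder.

```python
import csv
import matplotlib.pyplot as plt

years, counts = [], []
# Placeholder filename; use whatever PubMed's "Download CSV" link saved.
with open("pubmed_results_by_year.csv") as f:
    for row in csv.DictReader(f):
        years.append(int(row["Year"]))
        counts.append(int(row["Count"]))

plt.plot(years, counts, marker="o")
plt.xlabel("Year")
plt.ylabel("Papers per year")
plt.title("PubMed papers per year for one search")
plt.show()
```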

There are several ways to enter a keyword in a PubMed search. You can search for the keyword anywhere in the article (“All Fields”). However, this was not useful for my goal, because an article that mentions “computer model” did not necessarily use one as its primary method. My favorite way to restrict a search is to look for the keyword only in the title or the abstract of the paper (“Title/Abstract”). Still, this is not optimal, because different authors may use different words for the same concept; for example, “computer model” and “computer simulation” are synonyms.

To deal with the problem of synonyms, PubMed uses Medical Subject Headings (MeSH), a sort of thesaurus that facilitates searching by linking synonymous terms, so that entering one of them retrieves all the related terms. This is called doing an “extended search”. PubMed can perform MeSH searches by MeSH Major Topic, MeSH Subheading or MeSH Terms. These MeSH record types are explained in the NLM documentation.

A Descriptor, Main Heading or Major Topic is a term used to describe the subject of each article. Qualifiers or Subheadings are used together with descriptors to provide more specificity. Entry Terms are “synonyms or closely related terms that are cross-referenced to descriptors”.

Therefore, I performed my searches using MeSH Terms to avoid having to find the exact wording of a MeSH Major Topic. When you enter a keyword as a MeSH Term, for example ‘mice’, PubMed searches for that word and all its synonyms, in this case ‘mouse’, ‘Mus’ and ‘Mus musculus’.
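As an alternative to downloading one CSV per search by hand, the same per-year counts can be pulled programmatically from NCBI's E-utilities, as in the earlier sketch. Again, the query is only an example; the pause between requests reflects NCBI's documented courtesy limit of about three requests per second without an API key.

```python
import time
import requests
import xml.etree.ElementTree as ET

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(query: str) -> int:
    # rettype=count makes PubMed return only the total number of hits
    r = requests.get(ESEARCH, params={"db": "pubmed", "term": query, "rettype": "count"})
    r.raise_for_status()
    return int(ET.fromstring(r.text).findtext("Count"))

# Example: yearly counts for the MeSH Term 'mice' over the article's window.
for year in range(1975, 2018):
    query = f'"mice"[MeSH Terms] AND {year}[pdat]'
    print(f"{year},{pubmed_count(query)}")
    time.sleep(0.4)  # stay under the rate limit for keyless requests
```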

The figures that I show represent the number of papers in the period 1975-2017, because 1975 seems to be the year when PubMed starts capturing most of the papers published in the world; records appear incomplete before that date.

It seems to take up to two years for PubMed to complete its collection of citations, since the number of papers in every search drops substantially for the most recent two years. Hence, I excluded data from 2018 and 2019.

This article was originally published in 2019 on the website Speaking of Research and then deleted. I am re-publishing the original version, so my research does not extend to 2024. Also, because the Covid-19 pandemic closed labs for a couple of years, scientific productivity was anomalous after 2020.
