17 / 05 / 2022
On the 21st and 22nd of April FRAME took part in an online, two-day workshop looking to address the issue of animal methods bias in scientific publishing. The workshop was organised by a steering committee involving colleagues from the Physicians Committee for Responsible Medicine, FRAME, Animal-Free Research UK, Humane Society International, PETA, and the European Commission Joint Research Centre.
Why do we need to discuss animal bias in scientific publishing?
There are many pressures and factors that may influence researchers’ decisions on which field to study and which methods to use for their investigations. These begin with their own research background and interests but also depend on knowledge obtained from other (published) research seeking to answer similar questions. In biomedical research this existing evidence base is dominated by animal research, as this has been the prevailing paradigm throughout the 20th century and into the start of the 21st. Whilst researchers have a legal and ethical responsibility to prioritise approaches that do not require animals, external pressures may influence their choice to use, or not use, animals. Some of these pressures come from the academic funding and publication systems.
The way in which scientific knowledge moves forward is through the dissemination of individual studies in academic journals; ideally this should happen whether the research has found something new, interesting and reproducible or not. These journals, also referred to as scientific or peer-reviewed journals, share articles written by experts in one field of study, and are often ranked by ‘impact factor.’ The impact factor is primarily based on the number of times the articles are cited in other research articles, across all journals. Broadly speaking, the more citations the articles in the journal have, the higher the impact factor and the more prestigious the journal is considered to be. Publication in high impact journals is an expectation from funders and universities employing researchers, and publications and citations are used as a measure of the quality of individual academics, departments, and institutions. Originally designed to assess the impact of a journal, impact factor is at best an indication of engagement and usage of articles within a particular journal, and is not a reflection of the scientific quality or contribution of an individual study or person. Yet many researchers feel beholden to this flawed metric.
When applying for funding and submitting articles to journals for publication, the project design and methods are reviewed and considered. Peer-reviewers are experts in the same or a similar field who judge the validity, quality, significance and novelty of the project and the submitted paper, and feed back to the journal editors. Reviewers accustomed to the dominant animal-based paradigm may, consciously or not, prefer animal methods. This preference might mean that papers describing animal models are more easily accepted for publication, or conversely that papers describing in vitro or other non-animal methods are judged more harshly or are less likely to be accepted for publication at all. In some cases, there is evidence of authors seeking to publish studies conducted without the use of animals being asked to validate their findings in vivo (using live animals) in order for the research paper to be published. It is worth noting that the background knowledge and expertise of the peer-reviewers may contribute to this: in areas where animal studies are more common, a reviewer may not feel they have the experience to comment on a novel, non-animal approach in the same area. Rather than asking for animal data, there is an argument here for diversifying the skill set of the reviewer pool so that authors receive more informed feedback on the specific methods used.
This bias can in turn influence researchers’ decisions to use animal studies: they anticipate and adapt their choices to known or perceived journal expectations, or conduct animal work to pre-empt or meet requests to add animal data to non-animal studies so that their papers are accepted for publication.
This is a concerning issue because, whilst there may be circumstances where the approach used by the authors is genuinely not robust enough to answer the research question, often the animal data requested to ‘back up’ or ‘validate’ an outcome is completely inappropriate. As the evidence grows it is clear that results from animal studies often do not translate to humans, and that new approach methodologies such as organ-on-a-chip match clinical data from human subjects far more accurately. Ironically, animal studies and tests have themselves never been through a ‘validation’ process to be assessed and accepted or rejected. There is, however, a large body of published animal data from historical studies that can be reviewed and used to ‘check’ that a novel, non-animal approach to a question is giving the same results as historical animal studies. Viewing animal studies as the ‘benchmark’ in this way makes no sense when you consider the lack of translation reported from animal to human studies and the poor reproducibility of much animal research. The reliance on such a flawed standard reflects an endemic conservatism and a lack of confidence to embrace new approaches and give them the same opportunity to be further evidenced and adopted.
The continued existence of this bias will hamper the publication and dissemination of novel, non-animal approaches to research, and increase the number of unjustifiable animal research studies. The existing evidence base is the foundation of future scientific knowledge and direction. If that foundation is poorly built, or partly missing, the whole body of knowledge in an area can be called into question. This cannot be an acceptable state of affairs given the critical importance of science and innovation.
The workshop follows on from a 2021 survey conducted by the Physicians Committee for Responsible Medicine and Humane Society International investigating animal methods bias in publishing and how this is potentially a barrier to scientific progress. (1)
The survey was completed by 90 respondents working across biological and biomedical fields and provides preliminary evidence that animal methods bias does exist in publishing. The responses clearly show that researchers often carry out animal tests alongside studies using more modern non-animal approaches, either in anticipation of reviewer requests or because journal reviewers specifically requested animal data after submission as a condition of publication.
Analysis of the survey results also highlights some of the consequences of this bias for researchers, including the conduct of unjustified animal experiments, delayed time to publication, and manuscript rejection.
The workshop aimed to explore perspectives from different angles: to describe animal methods bias and related biases in publishing and peer review, consider the state of animal and non-animal experimental systems, and identify potential causes, consequences and mitigation strategies. The workshop was well attended with lively debate, and a report of the themes, discussions and conclusions will be available shortly.
What can researchers do to reduce animal bias?
Whilst journals and peer-reviewers are at the forefront of championing a reduction of this persistent preference for animal studies, researchers who are trying to publish their novel, non-animal research can take steps to help ensure their submitted manuscript avoids this publishing pitfall.
- Ensure your methods are described meticulously – do not make any assumptions of the reviewer’s prior knowledge
- Explain your choice of method and robustly defend why it is the most appropriate method for the research question
- Push back on requests for in vivo validation, citing evidence of the lack of translation and validation of animal models where relevant.
- Consider submitting papers to another journal and make sure to support and promote journals that are taking active steps to prevent animal bias.
- Get in touch with FRAME or another organisation working to address this issue to let them know about your experiences.
References
1. Can animal data translate to innovations necessary for a new era of patient-centred and individualised healthcare? Bias in preclinical animal research. BMC Medical Ethics (biomedcentral.com)
2. Counting on citations: A flawed way to measure quality (researchgate.net)
3. Journal Impact Factor: a usefully flawed metric. Tuan V. Nguyen (nguyenvantuan.info)
4. The coming of age of organoids (BioMed21.org)
You can help create a world where human-relevant research is the norm.
We want to create a better, brighter future for humans and animals. We receive no government funding so our work is only possible thanks to our generous supporters, like you. Every donation brings us one step closer to a world without animal testing.