Global Governance Centre
12 May 2020

The politics of methods in the controversy over how to treat coronavirus

The quest to find a COVID-19 treatment has incited a highly publicised debate that touches on longstanding questions about scientific methods and public health interventions. It calls for greater reflection on the assumptions and limitations of knowledge and its underlying political and social facets.

This article is part of the series Governance, in Crisis.


Controversies have peppered the history of medicine, and of science more broadly. The politicisation of some of these controversies has prompted the questioning of scientific and expert authority. In this context, scientists, experts and researchers, along with domestic and global governors in the domain of health, have tried to reassert the legitimacy of science by delineating ‘good’ from ‘bad’ science in various ways.


The COVID-19 crisis has prompted a new and highly mediatised scientific controversy about how to cure people infected with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). At the heart of this controversy was the proposal from French doctor Didier Raoult to treat people with COVID-19 by means of a protocol combining hydroxychloroquine (an old anti-malaria drug) and azithromycin (an antibiotic). The ‘politics’ of science came to the fore in this conflict, including links between the production of science and the private sector, personal struggles among researchers who strive for recognition and authority, and the intermingling of science with policy. Of course, the stakes are high not only for public health, but also in terms of potential economic gains and scientific visibility for those who discover a treatment or produce work that becomes highly visible.


Debates over scientific research methods have been at the core of this controversy, with associated attempts from scientists and public authorities to credit or discredit studies and their respective results. For scientists, methods have been the vector through which the boundary between good and bad science has been managed. For public authorities, methods have been invoked to assert their own competence and neutrality, what governments themselves call ‘evidence-based’ policy. The World Health Organization (WHO) has been at the forefront of such claims, positioning itself as a scientific and apolitical organisation in the midst of this controversy.


When Dr Didier Raoult published his first study (followed since by a second and a third), the results were attacked by some scientists on two grounds: the samples were too small (in the first and second studies), and his work did not rely on randomised controlled trials (RCTs), i.e. his research design did not include a control group. Some of these critiques were amplified in the media, with the claim that one could not know the effectiveness of the medicine if the group observed was not compared with a group that had received no treatment. The WHO also criticised the lack of ‘conclusive evidence’ in support of hydroxychloroquine, warning against the use of ‘untested drugs’ to treat patients, with WHO Director-General Tedros Adhanom Ghebreyesus adding that ‘Small, observational and non-randomised studies will not give us the answer we need.’


But when one looks at history, the idea that RCTs offer more reliable evidence than any other method is a relatively recent one. Although RCTs came into common use in the 1930s, it was not until the early 1990s, with the emergence of the ‘Evidence-Based Medicine’ (EBM) movement, led by a group of Canadian epidemiologists, that this method started to be considered the gold standard in medical practice. Early EBM proponents called for the replacement of the ‘old paradigm’, in which the practitioner’s intuition, clinical experience and observations acted as sufficient grounds for clinical decisions. Central to the new approach was an understanding of ‘evidence’ as hierarchical, with systematic reviews and meta-analyses of RCTs at the top and observational studies at the bottom. Clinicians were instructed, henceforth, to base their decisions on the best available evidence.


The controversy around hydroxychloroquine and what Raoult has called the ‘moral dictatorship of methodologists’ relates to the more fundamental question of which types of knowledge come to be validated as ‘truth’. With the advent of EBM, RCTs started to be considered a self-evident and superior solution to all questions in medical care. In that sense, it is not surprising that when Dr Raoult first published his study, it was discredited by some scientists, who argued that the results were based only on ‘anecdotal evidence’.


The French National Institute of Health and Medical Research recently announced the launch of ‘Discovery’, a large European clinical trial of experimental drugs that includes testing hydroxychloroquine. A few days later, the European Medicines Agency welcomed the initiative, while cautioning that hydroxychloroquine should only be used in clinical trials or emergency-use programmes, as such trials ‘will enable authorities to give reliable advice based on solid evidence’.


However, RCTs, like any other type of knowledge, are rooted in specific theoretical assumptions about nature and the ways in which it can best be understood. While RCTs can help measure the outcomes of a particular intervention, they are limited when it comes to understanding multi-causal and context-dependent phenomena. Like other research methods, RCTs embody certain biases. An obvious form of bias may be present when trials, which then tend to report positive outcomes, are funded by biopharmaceutical companies. But beyond this, and even when trials are publicly funded, researchers take a number of decisions at every stage of the research design. Formulating the research question, selecting the variables, assembling the sample before it is randomised, analysing the data and interpreting the results all involve human decisions that reflect certain assumptions and theoretical presuppositions. Even the processes of randomising, blinding and controlling involve decisions at every step. To take one example, choosing when to end a trial, and thus when to collect endline data, directly affects the nature of the results and therefore the kind of claims that can be made about the effects of a treatment.


This is an excerpt. To read the full article, visit The Global.
Interested in contributing to our blog? Here is how.


Photo by Science in HD on Unsplash