Data journeys in the sciences
Authors
Leonelli, Sabina
Tempini, Niccolò
Institution
Abstract
What is the point of data in research? Philosophers and methodologists have long
discussed the use of data as empirical fodder for knowledge claims, highlighting,
for instance, the role of inductive reasoning in uncovering what data reveal about
the world and the different ways in which data can be modelled and interpreted
through statistical tools. This view of data as a fixed, context-independent body of
evidence, ready to be deployed within models and explanations, also accompanies
contemporary discourse on Big Data – and particularly the expectation that the dramatic increase in the volume of available data brings about the opportunity to
develop more and better knowledge. When taking data as ready-made sources of
evidence, however, what constitutes data in the first place is not questioned, nor is
the capacity of data to generate insight. The spotlight falls on the sophisticated algorithms and machine learning tools used to interpret a given dataset, not on the efforts
and complex conditions necessary to make the data amenable to such treatment.
This becomes problematic particularly in situations of controversy and disagreement over supposedly “indisputable” data, as in the case of current debates over the
significance of climate change, the usefulness of vaccines and the safety of genetic
engineering. Without a critical framework to understand how data come to serve as
evidence and the conditions under which this does or does not work, it is hard to
confront the challenges posed by disputes over the reliability, relevance and validity
of data as empirical grounds for knowledge claims.