EXPERIMENTS · MACHINE LEARNING · STATISTICS
I have been rereading Kuhn’s Structure of Scientific Revolutions recently, and so I wanted to write a quick note about his concept of incommensurability, especially in the context of specialization. I will start by briefly explaining what I think Kuhn means by incommensurability, and then explore two objections to its substantive implications. I hope never to write on the topic again, because I do genuinely think it is (a) “true” and (b) not that interesting. But I think Kuhn’s insight that specialization in science matters a lot is correct, and I use incommensurability to make a few broader points about that.
Kuhnian incommensurability is basically the idea that two scientific paradigms, two different models for understanding the same underlying phenomena, often imply incompatible ways of understanding the world. This underlying incompatibility leads to a practical impossibility of mutual critique: if we cannot agree on the basics of what we see in the world, we cannot have a productive argument about what is true of that world. An argument needs to start from some mutual understanding. The most central and overarching notion of incommensurability in Structure is most clearly articulated in the postscript, where Kuhn argues that:
We posit the existence of stimuli to explain our perceptions of the world, and we posit their immutability to avoid both individual and social solipsism … But our world is populated in the first instance not by stimuli but by the objects of our sensations, and these need not be the same … But where the differentiation and specialization of groups begins, we have no similar evidence for the immutability of sensation. (Structure, 193)
Unpacking this perceptual incommensurability seems to reveal three different notions: taxonomic incommensurability, methodological incommensurability, and priority incommensurability (though this last is clearly less a focus for Kuhn, it is one of my preoccupations). Taxonomic incommensurability is what Kuhn most clearly develops in his comparison of Newtonian mechanics and Einsteinian relativity in Structure (98-106), and what he is most preoccupied with in The Road Since Structure (4-5). In Structure he gestures at methods and priority, arguing that a new paradigm produces “a consequent shift in the problems available for scientific scrutiny and in the standards by which the profession determined what should count as an admissible problem or as a legitimate problem-solution” (Structure, 6), but these dimensions are less explicit, and less developed in Kuhn’s later work (at least that I am aware of). Methodological incommensurability has to do with different standards for what is considered evidence, and different norms about what kinds of experiments are appropriate for investigating the central problems of the field. Priority incommensurability focuses on which problems are actually seen as central to the field. Ludwik Fleck (whom Kuhn all but plagiarized in writing Structure) develops both of these ideas much more sharply: first, he argues that the broader notion of directed perception structures the design and analysis of experiments and the interpretation of those analyses (89-92); second, he constantly raises the issue of what is investigated. If there is any value in this post, it is that everyone should read Genesis and Development of a Scientific Fact.
But back to Kuhn: the practical limitations that incommensurability implies are difficult to pin down, which makes the theory hard to work with. Incommensurability feels like a big problem, but when we consider scientists’ actual practice it seems like less of a practical concern. I am interested in two possible challenges to incommensurability’s “bigness.”
The gist of my argument is that if incommensurability is empirically plausible then it must have very limited implications.
Modern science is characterized by an ever-elaborating fabric of specialized sub-communities. As Kuhn himself says:
With much reluctance I have increasingly come to feel that this process of specialization, with its consequent limitation on communication and community, is inescapable, a consequence of first principles. Specialization and the narrowing of the range of expertise now look to me like the necessary price of increasingly powerful cognitive tools. What’s involved is the same sort of development of special tools for special functions that’s apparent also in technological practice. (Road Since Structure, 8)
As more and more specialties arise, it seems plausible that the diffusion of scientific facts between specialties will become both easier and more common. Easier, because specialties that sit very close together face less daunting translational challenges; more common, because neighboring paradigms produce facts relevant to each other, facts that each must assimilate to remain competitive.
An ontological image might be helpful: the questions we might ask about the world are a vast sea, and science a few paradigmatic rafts scattered across this ocean. Drawing from Fleck, one of science’s key attributes is a drive to connect concepts. To this end, enterprising scientists build their own little rafts just off of the original rafts they started on. Over time a tenuous patchwork bridge emerges connecting very different sciences through a gradation of sub-specialties, each paradigmatic in its own right, with all the particularities of taxonomy, method, and priority that entails, yet each connected to similarly oriented neighbors.
Here we must acknowledge that incommensurability cannot be binary. Consider the contemporary paradigms embodied by string theory, physical modeling in biology, and bioinformatics. Two of these consider biological problems, and are often in conflict. There are certainly different evidentiary standards in these two schools, but their rafts drift in similar waters, while string theory, which differs radically in its taxonomy, its methods, and its priorities, seems almost to sail a different sea. Surely the modern bioinformatician and the physical biologist can communicate more easily with each other than either can with the string theorist. Two specializations might have more or less overlap in their taxonomies, methods, or priorities, and these degrees of overlap correspond to different levels of friction in translation. This opens the first objection: the more specialized the science, the more connecting rafts there will be between any two concepts, and the easier translation will become, mediated by a gradation of intermediate languages. To see the second objection, and to clarify the first, we can zoom in further.
Necessarily there will be a point where two rafts built out from different starting positions must meet. This should be the easy case for incommensurability, the hard case for communication. These two rafts, our two paradigms, are descended from very different progenitors. They should have very different methods, and possibly different taxonomies, even if their priorities, their central questions, have come together. One problem is that paradigms themselves are not really fixed. Rather, they shift and are shifted by the competition that animates Kuhn’s evolutionary process and by the drive for conceptual connection so central to Fleck. Two paradigms that are in competition, or in the process of connection, will co-evolve, developing taxonomies with enough overlap to make discussion meaningful. This is all but guaranteed by Kuhn’s formulation of the taxonomic game in The Road Since Structure (see Brandom, Articulating Reasons). These two paradigms will each produce facts that necessarily affect the other. Kuhn accepts this as well in his many discussions of assimilation. Take the case of transcription factor binding, a process in regulatory genomics: the communities that study this problem from the physical and bioinformatic paradigms often produce facts (TTTTT is a promoter in yeast) that affect each other’s practice (the poly-T promoter has a physical binding constant on the order of 1 nM). This near-constant interchange of facts makes any strong view of the constraints incommensurability places on interchange untenable. The same will be true of any sufficiently specialized bridges in science. A clear case unfolding now is neuroscience and psychology.
I hope the two objections I have sketched illuminate the tension among Kuhn’s concept of incommensurability, his view of specialization, and his views on assimilation. The tension is not unresolvable by any means, but the empirical realities of specialization and assimilation severely limit the practical implications of his theory of incommensurability.