Making the Invisible Visible

In this post, Jessica Villiger examines the process of coding and how it can be used in meta-analytical research projects. The reflection draws from the article, “Making the Invisible Visible: Guidelines for the Coding Process in Meta-analyses,” by Villiger, Simone A. Schweiger and Artur Baldauf, all of the University of Bern, published in Organizational Research Methods.

Meta-analyses are often seen as lighthouses for entire research communities, shedding light on a plethora of future academic research avenues. Yet despite their importance for academia, we found that there was little methodological guidance on one of the key and most time-intensive steps in meta-analytic research projects: coding.

With our article, “Making the Invisible Visible: Guidelines for the Coding Process in Meta-analyses,” published in a special issue of Organizational Research Methods, we address this gap by developing guidelines for the coding process. Our article equips novice and experienced meta-analysts alike with practical advice on how to plan, conduct, and report their coding.

We believe that this void in the methodological literature is also partially responsible for the common misconception that coding is a purely technical task (i.e., the extraction of numeric data from studies to run meta-analytic calculations). Yet, as we have learned, coding encompasses much more. In fact, coding is the process of translating independent studies into a common language. Researchers performing the coding task aim to make sense of the variables represented in these studies by examining and extracting their labels, definitions, and measurements. This, in turn, allows a meaningful comparison and synthesis of empirical findings.

Intrigued by the question of how authors report their own coding process, we analyzed 124 meta-analyses published in top-tier journals. Our analysis revealed that only a small number of authors transparently described their coding process. This is problematic because it prevents readers from fully comprehending the authors’ decision-making process (for instance, the basis on which they developed their coding categories) and, consequently, from assessing a meta-analysis’s quality. As such, the coding process remains the invisible architecture behind meta-analyses.

Armed with these insights, we contacted the authors of the 124 meta-analyses to find out whether coding had been addressed by editors and reviewers during the review process at these top-tier journals. Our survey findings indicated that authors had indeed been asked to describe their coding, but mostly only at a superficial level, by reporting the number of coders involved and their intercoder reliability value (i.e., the extent to which coders agreed on their coding). Their decision-making process on how to build coding categories, or the depth of their data extraction, was seldom assessed.
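To make the notion of intercoder reliability concrete, one commonly reported chance-corrected agreement statistic is Cohen’s kappa. The sketch below computes it for two coders who classified the same set of studies; the category labels and data are made up for illustration, and the meta-analyses we surveyed may of course report other reliability statistics.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders (Cohen's kappa)."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: share of items both coders labeled identically.
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical example: two coders assign ten studies to two categories.
coder_1 = ["perf", "perf", "innov", "perf", "innov",
           "perf", "innov", "innov", "perf", "perf"]
coder_2 = ["perf", "perf", "innov", "innov", "innov",
           "perf", "innov", "perf", "perf", "perf"]
print(round(cohens_kappa(coder_1, coder_2), 2))  # prints 0.58
```

Here the coders agree on 8 of 10 studies (0.80 raw agreement), but because both use the “perf” label often, chance alone would produce 0.52 agreement, so kappa drops to 0.58. This gap between raw agreement and chance-corrected agreement is exactly why reporting only a single reliability number gives readers so little insight into the coding process.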

Moreover, the majority of authors participating in our survey stated that they were not aware of any guidelines for coding and would therefore be very grateful if such a coding manual existed. This encouraged us to pursue our research idea of thoroughly developing guidelines for the coding process.


Besides the content analysis and the survey, our article also offers a state-of-the-art assessment of prior advice on the topic of coding. To this extant knowledge we added our four-step guidelines, enriched with practical examples from our own meta-analytic experience. By referring to our guidelines, authors of meta-analyses and their readers should be able to answer questions such as: What do meaningful coding practices entail? What are the main pitfalls? To what extent should the coding process be reported?

Finally, our research also aims to help editors and reviewers support authors in improving the quality of their coding and its subsequent reporting. In this way, our article seeks to contribute to the ongoing shift toward more transparent reporting of coding practices. We believe that making the invisible architecture of meta-analyses visible is important because it helps researchers make better use of insights from prior meta-analyses in their own work, which in turn enables them to contribute more pertinently to the advancement of science overall.

Curious to find out more? Then check out “Making the Invisible Visible: Guidelines for the Coding Process in Meta-analyses.”


Jessica Villiger

Jessica Villiger is a postdoctoral senior researcher in the Department of Management & Entrepreneurship at the University of Bern, a public university in Switzerland.
