An approach for selecting and using a method of inter-coder reliability in information management research
Journal contribution, posted on 07.07.2020 by A Nili, M Tate, A Barros, David Johnstone
© 2020 Elsevier Ltd

Qualitative researchers in information management often need to evaluate inter-coder reliability (ICR) to test the trustworthiness of their content analysis. A suitable method of evaluating ICR enables researchers to rigorously assess the degree of agreement among two or more independent qualitative coders. This allows researchers to identify mistakes in the content analysis before the codes are used to develop and test a theory or a measurement model, avoiding the associated time, effort and financial cost. Different methods have been proposed, but little guidance is available on which approach to evaluating ICR should be used. In this paper, we review and compare leading ICR methods that are suitable for qualitative information management research. We propose an approach for selecting and using an ICR method, supported by an illustrative example. The five steps in our proposed approach are: selecting an ICR method based on its characteristics and the requirements of the project; developing a coding scheme; selecting and training independent coders; calculating the ICR coefficient and resolving discrepancies; and reporting the process of evaluating ICR and its results.
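To make the "calculating the ICR coefficient" step concrete, the sketch below computes Cohen's kappa, one widely used ICR coefficient for two independent coders assigning nominal codes to the same segments. This is an illustrative example only, not the specific method the paper recommends; the code labels ("usability", "trust", "cost") and the segment data are hypothetical.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' nominal codes on the same segments.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion
    of agreement and p_e is the agreement expected by chance, estimated
    from each coder's marginal code frequencies.
    """
    if len(coder_a) != len(coder_b) or not coder_a:
        raise ValueError("coders must rate the same non-empty set of segments")
    n = len(coder_a)
    # Observed proportion of segments where both coders agree.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement from each coder's marginal frequencies.
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    p_e = sum(freq_a[code] * freq_b[code] for code in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two coders code the same 10 interview segments.
a = ["usability", "trust", "trust", "cost", "usability",
     "trust", "cost", "usability", "trust", "cost"]
b = ["usability", "trust", "cost", "cost", "usability",
     "trust", "cost", "trust", "trust", "cost"]
print(round(cohens_kappa(a, b), 3))  # kappa corrects raw agreement (0.8) for chance
```

Because kappa discounts chance agreement, it is lower than the raw 80% agreement rate here; segments where the coders disagree would then be discussed and resolved in the discrepancy-resolution step.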