
The sheer number of research outputs published every year makes systematic reviewing increasingly time- and resource-intensive. This paper explores the use of machine learning techniques to help navigate the systematic review process. Machine learning has previously been used to reliably “screen” articles for review – that is, to identify relevant articles based on reviewers’ inclusion criteria. The application of machine learning techniques to subsequent stages of a review, however, such as data extraction and evidence mapping, is in its infancy. We therefore set out to develop a series of tools to assist in the profiling and analysis of 1,952 publications on the theme of “outcomes-based contracting.” Tools were developed for the following tasks: assigning publications to “policy area” categories; identifying and extracting key information for evidence mapping, such as organisations, laws, and geographical information; connecting the evidence base to an existing dataset on the same topic; and identifying subgroups of articles that may share thematic content.
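To make two of these tasks concrete, the sketch below is a minimal, illustrative example rather than the authors' actual pipeline. It assumes spaCy with the pretrained "en_core_web_sm" model (installed via `python -m spacy download en_core_web_sm`) and scikit-learn, and uses invented example abstracts: entity extraction recovers organisations, laws, and places for evidence mapping, and TF-IDF plus k-means groups abstracts that may share thematic content.

```python
# Illustrative sketch only: assumes spaCy ("en_core_web_sm") and scikit-learn.
import spacy
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical example abstracts standing in for the reviewed publications.
abstracts = [
    "The social impact bond was launched by the UK Ministry of Justice in Peterborough.",
    "Outcomes-based contracts under the Every Student Succeeds Act were piloted in Colorado.",
    "A development impact bond funded by the World Bank targeted education outcomes in India.",
]

# --- Extract organisations, laws, and places for evidence mapping ---
nlp = spacy.load("en_core_web_sm")
for text in abstracts:
    doc = nlp(text)
    # ORG = organisations, LAW = named legislation, GPE = countries/cities/states
    entities = [(ent.text, ent.label_) for ent in doc.ents
                if ent.label_ in {"ORG", "LAW", "GPE"}]
    print(entities)

# --- Group abstracts that may share thematic content ---
vectors = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels)  # cluster assignment for each abstract
```

In a full review pipeline, the extracted entities would feed an evidence map and the cluster labels would be reviewed by hand before being treated as thematic subgroups.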

Our results demonstrate the utility of machine learning techniques for enhancing evidence accessibility and analysis within the systematic review process. These efforts show promise for yielding substantial efficiencies in future systematic reviewing and for broadening its analytical scope. Beyond this, our work suggests that there may be implications for the ease with which policymakers and practitioners can access evidence. While machine learning techniques seem poised to play a significant role in bridging the gap between research and policy by offering innovative ways of gathering, accessing, and analyzing data from systematic reviews, we also highlight their current limitations and the need to exercise caution in their application, particularly given the potential for errors and biases.