Background

Systematic reviews have become a fundamental part of evidence-based medicine; they are considered the highest form of evidence because they synthesise all available evidence on a given topic [1]. Many also combine data in a meta-analysis to give an overall effect estimate. However, the quality and standard of reviews vary considerably, and if this variation is not recognised or formally assessed, the results of many reviews may be overstated. Quality assessment tools have been developed to assess such variation in standards.

One heavily cited tool is the Assessment of Multiple Systematic Reviews (AMSTAR) scale [2], which has been widely used since its development in 2007 and was shown to be both reliable and valid [3]. However, its design attracted criticism. Burda et al. [4] argued that AMSTAR lacked some key constructs, in particular confidence in the estimates of effect, and that it had no item to assess subgroup and sensitivity analyses. Further criticisms included the classification of foreign-language papers as “grey literature” and the observation that items could often be partially, but not fully, met. In addition, the items were not weighted evenly and there was no overall score, which made comparisons between reviews problematic. An upgraded version (AMSTAR-2) was therefore developed in 2017. The new version promised to simplify the response categories; align the definition of research questions with the PICO (population, intervention, control group, outcome) framework; seek justification for the review authors’ selection of study designs (randomised and non-randomised) for inclusion in systematic reviews; seek reasons for exclusion of studies from the review; and determine whether the review authors made a sufficiently detailed assessment of risk of bias for the included studies and whether risk of bias was considered adequately during statistical pooling and when interpreting the results [5].

A second novel assessment tool that has undergone rigorous development, the Risk of Bias in Systematic reviews (ROBIS) tool [6], was published in 2016. It aims to provide a thorough and robust assessment of the level of bias within a systematic review.

Description of the assessment tools

Assessment of multiple systematic reviews (AMSTAR-2)

AMSTAR-2 is a tool whose main aim is to assess the methodological quality of a review. It comprises 16 items in total and has simpler response categories than the original AMSTAR. The authors consider some items to be critical domains, which can be used to determine an overall rating (see Appendix, Table 12 for more information on the critical domains). AMSTAR-2 is intended for assessing reviews of effectiveness and can be applied to reviews of both randomised and non-randomised studies.

ROBIS tool

The main aim of the ROBIS tool is to evaluate the level of bias present within a systematic review. The tool comprises three distinct phases. The first, optional, phase assesses the applicability of the review to the research question of interest. The second phase consists of 20 items within four domains: study eligibility criteria; identification and selection of studies; data collection and study appraisal; and synthesis and findings. This phase identifies concerns about the conduct of the review. Each domain has signalling questions and ends with a judgement of concern for that domain (low, high or unclear). The third phase consists of three signalling questions leading to an overall risk-of-bias rating. ROBIS has wide application and is intended for assessing reviews of effectiveness, diagnostic test accuracy, prognosis and aetiology [6].

Previous research

Due to the novelty of both tools, there is limited available literature comparing them; however, some work has been recently published.

One review team [7, 8] compared all three tools (AMSTAR, AMSTAR-2 and ROBIS), applying them to reviews that reported both randomised and non-randomised trials. The inter-rater reliability (IRR) between four raters across 30 systematic reviews was analysed. Minor differences were found between AMSTAR-2 and ROBIS in the assessment of systematic reviews including a mix of study types. On average, the IRR was higher for AMSTAR-2 than for ROBIS. The authors assumed that scoring ROBIS would take more time in general, and it was always applied after AMSTAR-2, but in fact the mean time for scoring AMSTAR-2 was slightly higher than for ROBIS (18 vs. 16 min), with large variation between the reviewers. They also reported that some signalling questions in ROBIS were judged very difficult to assess.

Aim

The overarching aim of our work is to add to the literature and make a further comparison of both assessment tools in two overviews of reviews. Our team had previously completed two overviews on complementary and alternative medicine (CAM) therapies for two hard-to-treat conditions. One overview evaluated systematic reviews of various CAM therapies for fibromyalgia (FM) [9], and the other evaluated systematic reviews of CAM therapies for infantile colic [10].

Objectives

Due to some of the challenges we had using both tools in our overview of reviews work, we planned a formal assessment of both tools by completing the following comparisons and evaluations:

  1. To compare the content of the tools

  2. To compare the percentage agreement (IRR)

  3. To assess the useability/user experience of both tools.

Methods

Two overviews of reviews were conducted by our team [9, 10]. The first reviewed CAM for fibromyalgia and assessed the included reviews using both the original AMSTAR tool [2] and ROBIS [6]. This overview was published in 2016, prior to the development and publication of AMSTAR-2 [5]. Here, we reported on 15 systematic reviews of CAM for fibromyalgia, published between 2003 and 2014, which assessed several CAM therapies. Eight of the reviews included a quantitative synthesis.

We subsequently completed a second overview of reviews of CAM for infantile colic, published in 2019 [10]. Here, we used the new AMSTAR-2 tool alongside ROBIS. We reported on 16 systematic reviews of CAM for colic, published between 2011 and 2018. The reviews investigated several CAM therapies; 12 of the reviews included a quantitative synthesis.

We later returned to the fibromyalgia review papers and reassessed them all using the AMSTAR-2 scale, for consistency. This resulted in a total of 31 reviews for comparison. The reviewers were not strict about the order of ratings.

Assessment of methodological quality/bias of the included reviews

Three reviewers (RP, VL, PD) independently assessed each systematic review using both tools. Any reported meta-analyses were checked by a statistician experienced in meta-analyses (CP). The final score was agreed after discussion between the authors.

Data-analysis

Gwet’s AC statistic was used to calculate inter-rater reliability (IRR) [11]. Gwet’s AC2 is a weighted statistic which allows for “partial agreement” between ordinal categories. Therefore, Gwet’s AC2 was used to calculate IRR (using linear weights) for AMSTAR-2 questions with options “no”, “partial yes” and “yes” (questions 2, 4, 7, 8, 9). Gwet’s AC1 is an unweighted statistic which measures full agreement only. Gwet’s AC1 was used for all other AMSTAR-2 questions.

All signalling questions for ROBIS were analysed using Gwet’s AC2 with linear weights where “no”, “probably no”, “probably yes” and “yes” were recoded as 1–4. As mentioned above, Gwet’s AC2 is a weighted statistic which allows for “partial agreement” between ordinal categories. Ratings of “no information” were treated as missing. Gwet’s AC1 was used for ROBIS domains. Agreement for AMSTAR-2 and ROBIS was classified as “poor” (≤ 0.00), “slight” (0.01–0.20), “fair” (0.21–0.40), “moderate” (0.41–0.60), “substantial” (0.61–0.80), and “almost perfect” (0.81–1.00), following accepted criteria [12]. All analyses were completed using Stata 16 (StataCorp. 2019; Stata Statistical Software).
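To make the distinction between the two variants concrete, the following is a minimal two-rater sketch of Gwet's AC1/AC2 and of the agreement bands described above. This is purely illustrative: the study used three raters and Stata 16, and the function name `gwet_ac`, the category coding, and the two-rater simplification are our own assumptions, not the study's analysis code.

```python
def gwet_ac(r1, r2, categories, weighted=False):
    """Two-rater Gwet's AC1 (identity weights) or AC2 (linear weights).

    r1, r2     : lists of ratings, one per item, equal length
    categories : full ordered list of possible categories (order matters for AC2)
    weighted   : False -> AC1 (full agreement only); True -> AC2 (partial credit)
    """
    q = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(r1)

    def w(a, b):
        # Linear weights give "partial agreement" credit to adjacent ordinal
        # categories; identity weights count exact matches only (AC1).
        if weighted:
            return 1 - abs(idx[a] - idx[b]) / (q - 1)
        return 1.0 if a == b else 0.0

    # Observed (weighted) agreement across items
    pa = sum(w(a, b) for a, b in zip(r1, r2)) / n
    # pi_k: average prevalence of category k over both raters
    pi = [(r1.count(c) + r2.count(c)) / (2 * n) for c in categories]
    # Chance agreement (Gwet, 2008); with identity weights t_w = q, recovering AC1
    t_w = sum(w(a, b) for a in categories for b in categories)
    pe = (t_w / (q * (q - 1))) * sum(p * (1 - p) for p in pi)
    return (pa - pe) / (1 - pe)


def classify(ac):
    """Map a coefficient onto the agreement bands used in the paper [12]."""
    bands = [(0.00, "poor"), (0.20, "slight"), (0.40, "fair"),
             (0.60, "moderate"), (0.80, "substantial"), (1.00, "almost perfect")]
    for upper, label in bands:
        if ac <= upper:
            return label
```

For example, two raters scoring four ROBIS signalling questions recoded 1–4 ("no" to "yes") would be compared with `gwet_ac(r1, r2, [1, 2, 3, 4], weighted=True)`, while a binary AMSTAR-2 item would use the unweighted call.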

Results

Our first objective was to compare the content of the tools (see Table 1), identifying any overlaps and discrepancies between the two scales. Overall, we found considerable overlap in the signalling questions. However, ROBIS does not assess whether a comprehensive list of studies (both included and excluded) is provided, or whether any conflicts of interest were declared (both at the individual trial level and for the review), as these are considered issues of methodological quality rather than bias; AMSTAR-2 does assess possible conflicts of interest, which arguably remain a potential source of bias. Conversely, the section on synthesis is considered in more depth in the ROBIS tool.

Table 1 A comparison of the content of the two tools (AMSTAR-2 and ROBIS)

Section 2: Comparison of the inter-rater reliability of the tools

AMSTAR-2

The consensus results for AMSTAR-2 for both the fibromyalgia and colic overviews can be found in Table 2. Of the 15 systematic reviews of CAM for fibromyalgia, all but one [13] were rated as having critically low confidence in the results (see Appendix, Table 15 for scoring information); the exception was the only Cochrane review included in the FM overview. Of the 16 systematic reviews of CAM for colic, most were rated as having critically low confidence in the results; 4 were rated as low, and 1 (a Cochrane review) was considered to have high confidence in the results. The comparison of the ratings for each review can be found in the Appendix (see Tables 9, 10, 13, and 14). There were more discrepancies between the overall risk of bias and quality ratings in the fibromyalgia reviews; the overall risk of bias/quality ratings were more consistent in the colic reviews.

Table 2 Agreed results of AMSTAR-2 for fibromyalgia

Results of inter-rater reliability analysis for AMSTAR-2

A summary of the inter-rater reliability (IRR) for AMSTAR-2 can be found in Table 3. Seven questions relating to critical domains were identified by Shea et al. [5]; more information about these domains can be found in the Appendix (Table 15).

Table 3 The inter-rater agreement between the three raters for AMSTAR-2

Summary of the findings on inter-rater reliability

In total, 460 comparisons were included in the analysis for AMSTAR-2. The median agreement for all questions was 0.61. Eight of the 16 AMSTAR-2 questions had substantial agreement or higher. There was almost perfect agreement for questions 2 (did the report of the review contain an explicit statement that the review methods were established prior to the conduct of the review and did the report justify any significant deviations from the protocol?), 7 (did the review authors provide a list of excluded studies and justify the exclusions?) and 10 (did the review authors report on the sources of funding for the studies included in the review?). The lowest agreement was for question 14 (did the review authors provide a satisfactory explanation for, and discussion of, any heterogeneity observed in the results of the review?). Ratings were missing in 35 cases. The results are displayed in Fig. 1.

Fig. 1
figure 1

Gwet’s statistic for the inter-rater agreement for AMSTAR-2 questions

Gwet’s AC2 statistic was used for questions 2, 4, 7, 8 and 9; Gwet’s AC1 statistic was used for all other questions. The markers represent the Gwet’s statistic and the error bars represent the 95% confidence intervals. The italicised data represent the median value for all questions.

The AMSTAR-2 critical questions, in particular, seemed to have good agreement compared to the other questions. There was at least substantial agreement for all critical questions except question 13, which had moderate agreement. Questions 2 and 7 both had almost perfect agreement, the highest of all AMSTAR-2 questions.

Further information on the separate reviews can be found in the Appendix (Tables 7 and 11). The overall median IRR agreement for AMSTAR-2 questions for fibromyalgia is 0.65 and for colic is 0.60.

ROBIS

Summary of the ROBIS results

The consensus results for ROBIS for both the fibromyalgia and colic overviews can be found in Table 4. In domain 1 (which assessed any concerns regarding specification of study eligibility criteria), 9 fibromyalgia reviews and 6 colic reviews achieved a low risk of bias rating overall. In domain 2 (which assessed concerns regarding methods used to identify and/or select studies), 7 fibromyalgia reviews and 6 colic reviews achieved a low risk of bias rating overall.

Table 4 Tabular presentation for agreement of ROBIS results

Domain 3 assessed concerns regarding methods used to collect data and appraise studies; 7 fibromyalgia studies and 10 colic reviews achieved a low risk of bias rating overall.

With regard to domain 4 (which assessed concerns regarding the synthesis and findings), more variation in the fibromyalgia scores was found, whereas most colic reviews were rated as high risk of bias in this domain. The reviews that did not conduct a meta-analysis were hard to assess using ROBIS.

The final section provides a rating for the overall risk of bias of the reviews. Of the fibromyalgia reviews, 7 achieved a low rating, 6 a high rating, and 2 an unclear rating. Of the colic reviews, 4 achieved a low rating, 4 an unclear rating, and 8 a high rating.

Results of inter-rater reliability analysis for ROBIS

A summary of the inter-rater reliability for ROBIS can be found in Table 5.

Table 5 Inter-rater agreement

Summary of the findings on inter-rater reliability

For ROBIS, there were 734 comparisons considered across the 24 questions. The median agreement for all questions was 0.61. Eleven of the 24 ROBIS questions had substantial agreement or higher. Ratings were missing in 9 cases. At least one rater said “no information” in 159 comparisons: rater 1 used “no information” 73 times; rater 2, 50 times; and rater 3, 93 times. In 107 comparisons, only one rater said “no information”, and all three raters agreed on this rating in only 10 comparisons. “No information” was used most frequently for question 1.1 (did the review adhere to pre-defined objectives and eligibility criteria? 23 reviews), question 4.2 (were all pre-defined analyses reported or departures explained? 22 reviews) and question 4.5 (were the findings robust, e.g., as demonstrated through funnel plot or sensitivity analyses? 16 reviews). The agreement was “moderate” for domain 1 (0.45) and for the overall risk of bias (0.45), “fair” for domains 2 (0.36) and 3 (0.36), and “slight” for domain 4 (0.17). The results are summarised in Fig. 2.

Fig. 2
figure 2

Gwet’s statistic for the inter-rater agreement for ROBIS questions and domains

Gwet’s AC2 statistic was used for the ROBIS questions (filled markers) and Gwet’s AC1 statistic was used for the ROBIS domains (hollow markers). The error bars represent the 95% confidence intervals. The italicised data represent the median value for all ROBIS questions.

Further information on the separate reviews can be found in the Appendix (Tables 8 and 12). The median IRR agreement for all ROBIS questions for FM is 0.55 and for colic is 0.63.

Section 3: Usability of the tools

All three raters felt AMSTAR-2 was more straightforward and user-friendly than ROBIS. This might be because it does not require expertise in systematic reviewing to complete this tool, just knowledge of trial design.

Several issues arose from using the ROBIS tool, as it required more consideration to complete. Within each domain, each question had five possible responses (yes, probably yes, probably no, no, no information), although at times it was difficult to distinguish between yes/probably yes and no/probably no. It might also be more helpful to offer a choice of “no concerns/minor concerns/major concerns/considerable concerns” instead of the “low/high/unclear” judgements currently used for the overall judgement of concern at the end of each domain. Although there were perceived differences between reviewers in their answers to individual signalling questions, the overall rating of the domains was more consistent. Overall, domains 1–3 were easier to follow and score.

The most difficult domain to score was domain 4, which covers “synthesis of evidence”. This was reflected in the lowest agreement between raters (0.17). We found that this domain is currently better designed for a review with a meta-analysis than for a narrative synthesis. The guidance document that accompanies the tool is long and difficult to navigate. On the plus side, despite subjective differences (within each domain there was variation between the reviewers’ responses to the signalling questions), the overall result was still moderately consistent (0.45).

The ROBIS tool provides an overall sense of risk of bias of the review. There is better coverage overall than AMSTAR-2 and more precision with the use of a final rating. From our observations only, higher quality reviews were quicker to appraise. In our analysis, the “no information” rating for ROBIS questions was treated as missing. The raters rarely agreed on when to use this rating. In most cases, when one rater reported “no information” for a ROBIS question, the other two raters gave a different rating.

Several issues also arose from using AMSTAR-2. Sometimes, the raters would have opted for a “partial yes” option when only a binary option (yes/no) was available (Q13, Q14, Q16). Also, some questions were ambiguous; in particular, Q3 asks whether authors explained their selection of study design (e.g., use of RCTs/non-RCTs); some reviews merely reported that they included RCTs rather than justifying their selection, which caused discrepancies between raters.

Also, some questions might elicit a different response depending on the outcome; for example, the answer to Q13 (whether risk of bias was discussed/interpreted within the results) may vary, where a review reports multiple outcomes, according to which outcome is being referred to.

The raters also felt it would be helpful to have a formal space to add comments justifying their decisions to aid discussion, as, in the more ambiguous reviews, decisions were more open to interpretation. ROBIS, on the other hand, has a large section where the reviewer is expected to add selected text to support their decision.

Regarding completion timings, we were able to establish how long it took to complete both tools for one of the overviews (colic). There was little difference between raters 1 and 2 in the time taken to complete the two tools; in fact, rater 2 took slightly longer to complete AMSTAR-2 than ROBIS, which is surprising considering the issues reported above. However, rater 3 took considerably longer to complete ROBIS than AMSTAR-2 (see Table 6).

Table 6 Mean (SD) completion time (in minutes) for colic paper

Rater 3 was the most experienced reviewer and helped develop the ROBIS tool. They spent longer on bringing the evidence forward from the individual reviews into the ROBIS extraction form as recommended by the guidance document, whereas the other two raters only wrote cursory notes.

It is important to highlight that the ROBIS guidance document advises that the tool is aimed at experienced systematic reviewers and methodologists. We agree with this recommendation but recognise that this is often not the case in many groups undertaking reviews.

Discussion

Summary of findings

The median inter-rater reliability (IRR) agreement for both AMSTAR-2 and ROBIS questions was substantial: 50% of AMSTAR-2 questions and 46% of ROBIS questions had substantial agreement or higher. For AMSTAR-2, 460 comparisons were included in the analysis; for ROBIS, 734 comparisons were considered across the 24 questions. The median agreement for all questions was 0.61 for both tools, demonstrating a similar level of rating between the two scales.

Results were similar when conducting the analysis for fibromyalgia and colic reviews separately (see appendix for independent overview results). For fibromyalgia, the median IRR value was 0.66 for the AMSTAR-2 questions compared to 0.56 for the ROBIS questions. For the colic studies both AMSTAR-2 and ROBIS had a similar median (0.60 for AMSTAR-2 and 0.63 for ROBIS).

It must also be considered that the ROBIS questions include more response categories than most of the AMSTAR-2 questions, which are largely binary. Inter-rater agreement tends to be lower when there are more categories, as there are more opportunities for disagreement. Similarly, ROBIS includes more questions than AMSTAR-2, which can also result in more disagreement. Despite these differences, however, the median agreement was the same for the AMSTAR-2 and ROBIS questions.

Usability of the tools

Several issues arose when using the ROBIS tool as it required more consideration to complete, which could become problematic in a large review. All three raters felt AMSTAR-2 was more straightforward and user-friendly than ROBIS. This might be because it does not require expertise in systematic reviewing to complete this tool, just knowledge of trial design.

AMSTAR-2 was considered quicker to work through than ROBIS, yet the median timings showed only a slight difference between the two tools for two of the raters, although one rater did take considerably longer on ROBIS than AMSTAR-2. All raters felt domain 4 of ROBIS was particularly difficult to complete when there was no meta-analysis. Domain 4 would benefit from further development to better assess reviews without a meta-analysis, as in some ways it is biased against these types of reviews.

Relationship to background research

Previous research [7, 8] compared four raters’ assessments across 30 systematic reviews. The authors calculated the IRR using Fleiss’ kappa [45]. The IRR for scoring the overall confidence in the SRs with AMSTAR-2 was fair (κ = 0.30; 95% confidence interval [CI], 0.17 to 0.43), as was the overall domain in ROBIS (κ = 0.28; 95% CI, 0.13 to 0.42). Interestingly, for the overall rating, AMSTAR-2 showed a high concordance with ROBIS and a lower concordance with the original AMSTAR.

We were unable to compare our results directly against Pieper’s work, as Fleiss’ kappa ignores the order of the categories (when there are more than two); we used Gwet’s statistic precisely because it takes the order into account and allows for “partial agreement”. Gwet’s scores also tend to be higher than Fleiss’ scores in general, which makes comparison difficult.

In Pieper et al.’s [7] study, ROBIS was always applied after AMSTAR-2, and the mean time for scoring AMSTAR-2 was slightly higher than for ROBIS (18 vs. 16 min), with huge variation between the reviewers, whereas in our study, the overall mean time (calculated for colic reviews only) was slightly higher for ROBIS than for AMSTAR-2 (24.4 min compared to 14.3 min), although the mean ROBIS result was largely influenced by one rater.

Potential bias in the overview process

One author evaluated their own work using AMSTAR-2 and ROBIS (RP: [9, 10]), although this work was also independently assessed by two other reviewers (VL, PD). In addition, one of the developers of ROBIS (PD) applied the ROBIS tool to assess the included reviews.

We had not planned to complete an IRR assessment of the two scales whilst completing these two overviews of reviews; therefore, we did not apply strict criteria to our assessment schedule, i.e., we did not apply the tools in any particular order. We also did not complete timings for some of our assessments in a systematic way.

Another issue is we compared our ratings over time, i.e., a batch of five papers were discussed before the next batch was assessed; this is likely to have led to greater consistency between the raters over time, but our numbers were too small to check this.

Conclusion

In terms of quality assessment, ROBIS is an effective tool for assessing risk of bias in a systematic review but is more difficult to use than AMSTAR-2. It is more complex to work through, which might be problematic in a large review. As suggested by the developers of ROBIS, it is best used by experienced systematic reviewers/methodologists. Reviews that included a meta-analysis were easier to rate; however, further developmental work could improve its use in systematic reviews without a meta-analysis. AMSTAR-2 was more user-friendly and effective at measuring the quality of a review but is a less sophisticated tool. Both tools would benefit from minor changes to improve their useability for people conducting systematic reviews.