
Recent Research Trends in Meta-analysis

Open Access. Published: June 01, 2017. DOI: https://doi.org/10.1016/j.anr.2017.05.004

      Summary

The use of meta-analysis (MA), which sits at the top of the evidence hierarchy, has been increasing exponentially. MA draws on three effect-size families. Using these effect-size families as categories, this paper introduces the important points in the MA process and highlights recent research trends in the field, such as network MA, meta-analytic structural equation modeling, and diagnostic test accuracy MA. Several reporting standards have been established for primary studies and MA. Critical assessment reviews have shown that the current quality of nursing MA reporting is low. The problematic areas of current nursing MA include study search, study selection, risk of bias, publication bias, and additional analysis based on quality assessment. Directions for future research are also presented.


      Introduction

The use of meta-analysis in research has been increasing exponentially since the end of the 1970s. The rapid development of meta-analysis (MA) is also related to the growing use of evidence-based practice. Today, people are overwhelmed by the flood of available information, and these data vary in both direction and quantity. For example, different studies report contradictory and varying effects of vitamin C on human health. Furthermore, most nurses and nursing researchers cannot keep up with every research finding. MA is therefore a valid method of finding evidence so that clinicians and researchers have a sound basis for solving health-related issues.
MA refers to the statistical synthesis of the results of quantitative studies. MA differs from narrative review, vote counting, and other research review methods because it provides information on both the direction and the magnitude of research findings. Effect size is the key concept in MA and an essential part of quantitative research reporting and hypothesis testing [American Psychological Association, 2008]. The effect size is a quantitative index of research findings and is treated as the dependent variable in the MA process, whereas study characteristics serve as independent variables.
The effect size comes in three families, namely d, r, and the odds ratio (OR), and the choice of family is tied to the research design of the primary studies. In experimental designs, the d and OR families are adequate indices for hypothesis testing and interpretation of results, whereas the correlation is a good index of the association and relationship between variables. Nursing MA can be categorized into three groups based on effect size and research design (intervention, measure-of-association, and diagnostic test accuracy meta-analyses [DTA MA]), or into two groups based on research characteristics (intervention and measure-of-association meta-analyses). Evans and Pearson explained the challenges encountered in nursing systematic review (SR) and MA [Evans & Pearson, 2001]. Nursing MA mainly focuses on the effectiveness of interventions. However, appropriateness and feasibility are also important issues in health intervention. Randomized controlled trials (RCTs) provide only a portion of the important evidence; therefore, nursing MA should also answer other vital questions in order to gather all valid and relevant evidence. The three categories above can be used to examine research trends in nursing MA. This paper introduces recent research trends and important issues in nursing MA based on these categories, as summarized in Table 1; compact definitions of the three effect-size families are sketched after the table.
Table 1. Effect Size Family and Meta-analysis Development Trends.

Intervention meta-analysis
  Effect size: d family
    Study designs: unmatched groups, post-data only; one group, pre–post; unmatched groups, pre–post
  Effect size: OR family
    Study designs: unmatched groups, prospective; matched groups, prospective; unmatched groups, retrospective
  Recent development: indirect comparison, MTC, network meta-analysis (transitivity, consistency)

Measure-of-association meta-analysis
  Effect size: r
  Study designs: correlation, regression, path analysis, SEM, HLM
  Recent development: meta-analytic PA, CFA, SEM, HLM

Diagnostic test accuracy meta-analysis
  Effect sizes: sensitivity, specificity
  Study design: test accuracy research
  Recent development: bivariate and HSROC approach
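For quick reference, a compact sketch of the three effect-size families in standard notation (a minimal summary; the symbols are generic statistical notation rather than taken from the article):

\[
d=\frac{\bar{X}_1-\bar{X}_2}{S_{\text{pooled}}},\qquad
r=\frac{\operatorname{cov}(X,Y)}{s_X\,s_Y},\qquad
\mathrm{OR}=\frac{n_{11}\,n_{22}}{n_{12}\,n_{21}},
\]

where \(n_{ij}\) are the cells of the 2 × 2 outcome table.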

      Intervention effect MA: Direct comparison MA vs. network MA

Among the effect size families, intervention effect MA is most closely tied to the mean difference and dichotomous-outcome effect sizes. Accordingly, the Cochrane handbook for intervention reviews mainly deals with the OR, risk ratio, and risk difference for RCT designs, in addition to the mean difference, and does not cover measure-of-association studies. In 1976, Glass coined the term meta-analysis to refer to the synthesis of the results of psychotherapy studies. The mean difference and the OR between experimental and control groups define the two effect-size families used here: the standardized mean difference (SMD) represents continuous outcomes, whereas the OR represents dichotomous and categorical outcomes.
Experimental study designs can be categorized into three groups with respect to the mean difference: unmatched-group post-data-only designs; unmatched-group pre–post designs (standardized mean change difference effect size); and one-group pre–post and matched-group designs. The unmatched-group post-data-only design corresponds to the independent t-test, the one-group pre–post design corresponds to the dependent t-test, and the unmatched-group pre–post design is associated with the mean change difference effect size [Becker, 1988]. If a researcher uses the mean difference effect size, one effect size should be chosen as the main measure of the MA, because synthesizing different research designs into a single MA has its own pros and cons.
Researchers can restrict themselves to one of these study designs at the problem-formulation stage or in the inclusion criteria. Borenstein et al. suggest that there is no technical barrier to synthesizing different research designs [Borenstein et al., 2009]. Therefore, the three research designs can be synthesized together in a nursing research MA [Shin & Kim, 2013]. Although there may be no technical barrier to synthesizing dependent and independent t-test results together, some arguments against doing so exist in educational settings. Hedges' g is simply Cohen's d corrected for small-sample bias, so a meta-analyst can use Hedges' g instead of Cohen's d in all three mean difference study designs.
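To make the small-sample correction concrete, here is a minimal Python sketch of Cohen's d for two independent groups and Hedges' g; the function name and example numbers are illustrative only, not taken from the article.

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d for two independent groups, then Hedges' small-sample correction."""
    # Pooled standard deviation of the two groups
    sp = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp                    # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)         # correction factor J for small samples
    return j * d                                # Hedges' g

# Hypothetical summary data from a small two-group trial
print(round(hedges_g(12.0, 4.0, 10, 9.5, 4.5, 10), 3))
```

With samples this small, g is noticeably smaller than d; as the sample sizes grow, the correction factor approaches 1 and the two indices converge.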
Some required information, such as the pretest–posttest correlation, may be missing when calculating effect sizes for one-group pre–post and unmatched-group pre–post designs. The problem is that most primary studies do not report the correlation between pretest and posttest measures, because it is not usually regarded as a value worth reporting. However, meta-analysts cannot calculate the effect size without this information, so it has to be imputed in some way [Netz et al., 2005].
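One common way to see why the pretest–posttest correlation \(r\) is indispensable is the matched (pre–post) effect size as presented, for example, in Borenstein et al.; this is a sketch of one common formulation, not the only possibility:

\[
S_{\text{within}}=\frac{S_{\text{diff}}}{\sqrt{2(1-r)}},\qquad
d=\frac{\bar{X}_{\text{post}}-\bar{X}_{\text{pre}}}{S_{\text{within}}},\qquad
V_d\approx\left(\frac{1}{n}+\frac{d^{2}}{2n}\right)2(1-r),
\]

so both the effect size and its variance depend on \(r\), and the value must be imputed when it is not reported.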
The OR family covers the study designs most widely used in medical research. For a dichotomous outcome, researchers can choose among the OR, the risk ratio, and the risk difference based on the stability and substantive meaning of the index; in medical research, the OR is the most frequently used index and the risk difference the most substantively interpretable. Like the mean difference, the OR family comprises three research designs: unmatched groups, prospective (controlled trials, cohort studies); matched groups, prospective (crossover trials, pre–post designs); and unmatched groups, retrospective (case–control studies). Researchers can restrict the inclusion criteria to one of these designs or analyze all three designs together. RCTs and non-RCTs are usually analyzed separately in medical research, but researchers can synthesize them together to examine side effects or to answer other important research questions. Medical researchers are paying increasing attention to network MA because direct comparisons have several limitations, such as the limited availability of head-to-head trials and the difficulty of comparing three or more interventions together. Nowadays, MA and network MA sit at the top of the evidence hierarchy [Roever & Biondi-Zoccai, 2016]. Network MA requires special assumptions and analysis methods concerning heterogeneity, transitivity, and consistency. It is also applicable to social science research fields [Grant & Calderbank-Batista, 2013]. Additionally, network MA can provide valuable information to patients, practitioners, and decision makers.
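To illustrate the core idea that network MA builds on, here is a minimal Python sketch of an adjusted indirect comparison (the Bucher approach, which the article does not name explicitly): treatments A and B are each compared with a common comparator C, and the example numbers are hypothetical.

```python
import math

def indirect_comparison(d_ac, se_ac, d_bc, se_bc):
    """Adjusted indirect comparison of A vs. B through a common comparator C.
    Effects must share a scale (e.g., log OR or SMD) and come from independent trials."""
    d_ab = d_ac - d_bc                          # indirect estimate of A vs. B
    se_ab = math.sqrt(se_ac ** 2 + se_bc ** 2)  # variances add for independent estimates
    ci = (d_ab - 1.96 * se_ab, d_ab + 1.96 * se_ab)
    return d_ab, se_ab, ci

# Hypothetical log odds ratios: A vs. C and B vs. C
print(indirect_comparison(-0.50, 0.15, -0.20, 0.18))
```

A full network MA generalizes this building block to many treatments and mixes direct and indirect evidence, which is where the transitivity and consistency assumptions mentioned above become critical.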

      Measure-of-association MA: Correlation MA vs. meta-analytic structural equation modeling (SEM) approach

Measures of association are used in studies of psychological issues and of relationships between health clinicians and patients. The correlation plays the same role in measure-of-association MA that the SMD plays in intervention effect MA, and the covariance is analogous to the unstandardized mean difference: a correlation is a standardized covariance and a direct measure of the relationship between two variables. The correlation extends naturally to simple and multiple regression, path and confirmatory factor analyses, structural equation modeling, and hierarchical linear modeling, so primary studies using these methods are directly related to measure-of-association meta-analyses. Fisher's z-transformation of the correlation is used when conducting measure-of-association MA; this is closely analogous to using the log OR for dichotomous outcomes, for distributional reasons. The signs of the correlation coefficients are another factor researchers should consider when synthesizing correlations between two constructs. Some researchers synthesize positive and negative relationships separately, whereas others synthesize them together after carefully considering the direction of the relationships between variables [Park et al., 2012; Jang & Shin, 2011]. Researchers should also consider the unidimensionality of the main outcome variable. For example, if a researcher wants to analyze the relationships between depression and other psychological variables, only depression should be used as the main dependent variable; depression and anxiety should not be synthesized together as dependent variables because they are distinct constructs. If the two constructs are used simultaneously, the relationship between depression and the other variables cannot be distinguished explicitly.
The most important factor to consider when performing measure-of-association MA is the theoretical model. Measure-of-association MA differs from RCT and intervention effect studies: researchers want to explain the relationships among variables based on a theoretical or research model. Without a theoretical model, it is very difficult to categorize related variables and explain the resulting relationships adequately, just as in SEM research. In SEM research, the theoretical model is the basis of the research model, which is one of many equivalent and alternative models; without a theoretical model, SEM researchers cannot conduct, interpret, or discuss the research results theoretically and logically. This rationale also applies to measure-of-association MA. Therefore, researchers and reviewers should focus on the dependent variable in measure-of-association MA, whereas the independent variable is more important in intervention effect MA. As randomization is the key concept in intervention MA, so the theoretical model is in measure-of-association MA. Becker proposed a model-based MA [Becker & Schram, 1994], Viswesvaran established a meta-analytic path model [Viswesvaran & Ones, 1995], and Cheung developed meta-analytic SEM [Cheung & Chan, 2005]. Behavioral and psychological researchers are paying increasing attention to the meta-analytic SEM approach because bivariate correlations alone have limitations, such as the limited availability of research on indirect relationships and the difficulty of associating three or more variables together.
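As an illustration of the Fisher z route mentioned above, here is a minimal Python sketch of fixed-effect pooling of correlations; a real measure-of-association MA would usually also fit a random effects model, and the function names and data are illustrative only.

```python
import math

def fisher_z(r):
    """Fisher's z-transformation of a correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

def pool_correlations(correlations, sample_sizes):
    """Fixed-effect pooling of correlations on the Fisher z scale, then
    back-transformation to the correlation metric."""
    weights = [n - 3 for n in sample_sizes]      # inverse variance: var(z) = 1 / (n - 3)
    zs = [fisher_z(r) for r in correlations]
    z_bar = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
    return math.tanh(z_bar)                      # inverse Fisher transform

# Hypothetical correlations between two constructs from three primary studies
print(round(pool_correlations([0.30, 0.45, 0.25], [50, 120, 80]), 3))
```

Meta-analytic SEM then goes a step further: it pools an entire correlation matrix across studies and fits the path, factor, or structural model to that pooled matrix rather than to a single bivariate correlation.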

      Diagnostic test accuracy (DTA) meta-analysis

DTA MA is another developing field of MA. Table 2 presents the differences between intervention MA and DTA MA. In terms of problem formulation, the key items are population, intervention, comparison, and outcome (PICO) for intervention MA, and patient, presentation, prior tests, index test, comparator test, purpose, target condition, and reference standard (PPP IC PTR) for DTA MA. Intervention MA mainly searches for RCTs and can use filters such as Medical Subject Headings terms, whereas DTA MA does not use filters and covers various study designs. DTA MA has two effect sizes, sensitivity and specificity, and the threshold effect is crucial because both depend on the cutoff score. Because of the threshold effect, a random effects approach is not an option but the basic approach in DTA MA. Furthermore, the Moses–Littenberg approach implemented in RevMan has limitations in handling the threshold effect, so bivariate and hierarchical summary receiver operating characteristic (HSROC) approaches, implemented in software such as Stata or R, are recommended (a minimal sketch of the study-level sensitivity and specificity pairs these models work with follows Table 2).
Table 2. Comparison between Intervention MA and DTA MA.

Problem formulation
  Intervention MA: PICO
  DTA MA: PPP IC PTR
Searching
  Intervention MA: filters may be used; searches for RCTs and CCTs
  DTA MA: no filters; various study designs
Risk of bias
  Intervention MA: few key items (blinding, randomization, follow-up)
  DTA MA: quality: many items, variation also important
Effect size
  Intervention MA: risk ratio, odds ratio, risk difference; single outcome
  DTA MA: odds ratio, depending on the cutoff scores (threshold effect); pairs of outcomes (sensitivity vs. specificity)
Analysis
  Intervention MA: fixed or random; subgroup analysis; meta-regression
  DTA MA: always random, threshold effect; Moses–Littenberg analysis; bivariate and HSROC approaches
      Abbreviations: CCTs, controlled clinical trials; HSROC, hierarchical summary receiver operating characteristic; MA, meta-analysis; RCTs, randomized clinical trials; PICO, population, intervention, comparison, and outcome; PPP IC PTR, patient, presentation, prior tests, index test, comparator test, purpose, target condition, and reference standard.
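The following minimal Python sketch shows the study-level quantities that bivariate and HSROC models operate on: each study contributes a pair of logit-transformed sensitivity and specificity values that the models treat as correlated random effects. The full random-effects model itself is not implemented here, and the counts are hypothetical.

```python
import math

def logit(p):
    """Log-odds of a proportion."""
    return math.log(p / (1 - p))

def study_accuracy(tp, fp, fn, tn):
    """Sensitivity and specificity (plus their logits) from one study's 2x2 table.
    Bivariate/HSROC models treat each study's logit pair as a correlated random effect."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens, spec, logit(sens), logit(spec)

# Hypothetical 2x2 counts (TP, FP, FN, TN) from three diagnostic accuracy studies
for cells in [(45, 10, 5, 90), (30, 20, 10, 140), (60, 15, 15, 110)]:
    print([round(x, 3) for x in study_accuracy(*cells)])
```

Modeling the two logits jointly, rather than pooling sensitivity and specificity separately, is what allows these approaches to account for the threshold effect.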
Like network MA, DTA MA is applicable not only to medical research but also to social science research fields. For example, Kilgus et al. conducted a DTA MA of curriculum-based measurement to examine how accurately an oral reading measure identified performance on a high-stakes reading achievement test [Kilgus et al., 2014]. The Cochrane Handbook for Diagnostic Test Accuracy Reviews is currently being developed. RevMan 5 includes an analysis function for DTA MA in addition to that for intervention MA; however, it has limitations for more advanced DTA MA approaches such as bivariate and HSROC modeling, which require other software such as Stata or R.

      Reporting standard and evaluation of nursing MA qualities

Compared with the social science fields, medical research has numerous reporting standards for both primary studies and MA. This may be attributed to the dominant research design: in medical research, randomization is the most important feature of experimental design, so internal validity is the main issue and primary studies must report the experimental process in detail. Randomization is not easy to implement in social science research, so these fields rely mainly on theoretical models and relationship research. Zeng et al., for example, surveyed the methodological quality assessment tools for primary clinical studies, SR, and MA [Zeng et al., 2015].
RCTs and nonrandomized studies are the main categories of primary research, as shown in Table 3. The Consolidated Standards of Reporting Trials (CONSORT) statement and the risk of bias tool are used for RCTs, whereas the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement and the Newcastle–Ottawa scale are used for observational studies. The reporting standard for primary diagnostic accuracy studies is the Standards for Reporting Diagnostic Accuracy (STARD) statement. Network MA usually deals with RCTs, so its reporting standard is similar to that for RCT studies. Some meta-analysis reporting standards (MARS) based on these research designs were established in the early 1990s [Oxman & Guyatt, 1991]. The reporting standards for RCT MA are Assessing the Methodological Quality of Systematic Reviews (AMSTAR) and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [Shea et al., 2007; Liberati et al., 2009], whereas those for MA of observational studies include the Meta-Analysis of Observational Studies in Epidemiology (MOOSE) guideline. The MARS for medical research were developed from the 1990s onward; in 2008, the American Psychological Association (APA) MARS was established based on these earlier medical reporting standards [American Psychological Association, 2008]. Network MA was developed for multiple and indirect comparisons of RCT interventions, and a PRISMA extension statement for network MA was published only recently [Hutton et al., 2015]. Primary diagnostic studies included in a DTA MA can be evaluated using the Quality Assessment of Diagnostic Accuracy Studies tool (QUADAS-1 or QUADAS-2). Moreover, DTA MA should be conducted and reported in accordance with the Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy and related didactic guidelines [Deville et al., 2002].
Table 3. Reporting Standard for Primary Study and Meta-analysis.

RCT
  Primary study: CONSORT statement; risk of bias
  Meta-analysis: AMSTAR 2007, PRISMA 2009
Observational studies
  Primary study: STROBE statement; Newcastle–Ottawa scale
  Meta-analysis: MOOSE 2000, APA 2008
Diagnostic meta-analysis
  Primary study: STARD statement; QUADAS 1/QUADAS 2
  Meta-analysis: DTA reporting standard (under development); BMC guideline
Network meta-analysis
  Meta-analysis: PRISMA extension 2015
      Abbreviations: AMSTAR, Assessing the Methodological Quality of Systematic Reviews; BMC, BioMed Central; CONSORT, Consolidated Standards of Reporting Trial; DTA, diagnostic test accuracy; MOOSE, Meta-Analysis of Observational Studies in Epidemiology; PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses; STARD, Standards for Reporting Diagnostic Accuracy; STROBE, Strengthening the Reporting of Observational Studies in Epidemiology; QUADAS, Quality Assessment of Diagnostic Accuracy Studies.
As previously mentioned, MA sits at the top of the evidence hierarchy, but critical evaluations of published MA have commonly indicated that their quality is very low. Most clinicians and nurses are too busy to follow the findings of every study, so the results of MA are a good theoretical basis for implementing evidence-based practice (EBP). Table 4 summarizes several reviews that critically evaluated the quality of nursing MA. Seven studies were directly related to the quality evaluation of nursing MA and mainly used PRISMA and AMSTAR as reporting standards [Tam et al., 2017; Yang et al., 2017; Jin et al., 2016; Song et al., 2015; Jin et al., 2014; Kim & Kim, 2013; Seo & Kim, 2012]. Four authors were from China, and three were from Korea.
Table 4. Critical Evaluation of the Quality of Nursing MA.

Tam et al. (2017)
  Country: Asia, Europe, America
  Major/no. of journals: nursing/107 journals in 2014
  Standard/no. of papers: PRISMA/37 SR, 37 MA
  Median/mean: median rate 64.9% (17.6–92.3), 73.0% (59.5–94.6); 27 items
  Overall evaluation: low adherence of SRs in nursing journals to PRISMA
Yang et al. (2017)
  Country: China
  Major/no. of journals: TCMN
  Standard/no. of papers: PRISMA and AMSTAR/73 SR, 2005–2015
  Median/mean: PRISMA mean 63.2%; AMSTAR mean 45.9%
  Overall evaluation: study search, study selection, risk of bias, publication bias
Jin et al. (2016)
  Country: China
  Major/no. of journals: TCMN/20 MA or SR
  Standard/no. of papers: AMSTAR and GRADE
  Median/mean: 4.5–8
  Overall evaluation: risk of bias, inconsistency
Song et al. (2015)
  Country: Korea
  Major/no. of journals: cancer, pain management
  Standard/no. of papers: AMSTAR/17 SR or MA
  Median/mean: mean 5.47
  Overall evaluation: characteristics of included studies, publication bias, quality assessment
Jin et al. (2014)
  Country: China
  Major/no. of journals: nursing/63 SR or MA
  Standard/no. of papers: PRISMA and AMSTAR
  Median/mean: PRISMA mean 75%; AMSTAR mean 63.6%
  Overall evaluation: literature search, heterogeneity issue, publication bias
Kim & Kim (2013)
  Country: Korea
  Major/no. of journals: nursing/42 SR or MA
  Standard/no. of papers: AMSTAR
  Median/mean: mean 5.61
  Overall evaluation: publication bias, quality, synthesis
Seo & Kim (2012)
  Country: Korea
  Major/no. of journals: nursing intervention/1950–2010
  Standard/no. of papers: AMSTAR/22 SR or MA
  Median/mean: median 5 (2–11); mean 4.7 (3.8–5.7)
  Overall evaluation: literature search, publication bias, risk of bias
      Abbreviations: AMSTAR, Assessing the Methodological Quality of Systematic Reviews; GRADE, Grading of Recommendations Assessment, Development, and Evaluation; PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses; MA, meta-analysis; SRs, systematic reviews; TCMN, Traditional Chinese Medicine Nursing.
The evaluation results revealed that most nursing MA have similar quality levels, ranging from low to moderate. For healthcare intervention decision making, the reporting standards should be followed more carefully to achieve high-quality nursing MA. Four studies used only AMSTAR, one used only PRISMA, and two combined PRISMA and AMSTAR as reporting standards. PRISMA and AMSTAR comprise 27 and 11 items, respectively. Across the critical evaluations, the mean or median AMSTAR score was 4–6 points, and the mean or median PRISMA adherence rate was 56.5%–75%. The weak points were mainly related to study search, study selection, quality assessment or risk of bias, publication bias, and additional analysis. Although these studies are not the entire population of critical evaluations of nursing MA, they give a reasonable picture of the current state of the field. These evaluations are directly related to intervention MA, which is why AMSTAR and PRISMA were used as reporting standards. As Evans and Pearson noted, RCTs alone cannot cover all important healthcare issues without observational, correlational, and test accuracy studies [Evans & Pearson, 2001].

      Directions for future research

      Several issues need to be studied further for better implementation of EBP.
First, intervention MA has usually relied on direct comparisons, but healthcare decision making must weigh several possible interventions. Network MA, which includes indirect and mixed comparisons, is the newer approach for intervention MA; nursing intervention MA can therefore be extended to network MA.
Second, measure-of-association MA is another important field of nursing MA. As with intervention MA, current measure-of-association MA mainly evaluates direct relationships, such as the correlation between two constructs. However, many relationships must be considered in the clinical setting, such as those among patients, nurses, doctors, and other healthcare decision makers, and indirect relationships should be considered as well. The meta-analytic path model, meta-analytic confirmatory factor analysis, and meta-analytic SEM can be applied to these nursing research questions.
Third, intervention and measure-of-association MA are currently the main focus of nursing MA. The use of DTA MA in medical research is increasing, and in the social sciences DTA MA has already been applied to educational testing. Thus, it could be used to address nursing educational measurement issues or to conduct other test accuracy studies in nursing research.
Fourth, the critical evaluation of nursing intervention MA has been based mainly on AMSTAR and PRISMA, yet RCTs cannot encompass every important nursing research issue. Critical evaluations based on MOOSE or the APA MARS could therefore provide better guidance for improving nursing MA. For example, the APA MARS addresses several detailed statistical methodology issues, such as the effect size metric, weighting, dependency, the random effects model, heterogeneity, outliers, and the power of MA. Sutton and Higgins likewise emphasized that recent developments in MA concern heterogeneity, random effects models, meta-regression, statistical power, and multiple outcomes [Sutton & Higgins, 2008]. However, these statistical issues could not be verified in the previous evaluations of nursing MA quality because AMSTAR and PRISMA do not have specific categories for them.
In the future, the APA MARS or MOOSE can be used to evaluate other MA designs in nursing research. AMSTAR is frequently used to evaluate RCT MA, but Burda et al. emphasized its limitations and suggested modifications to improve its usability, reliability, and validity [Burda et al., 2016]. Higgins et al. also reported that existing tools do not pay enough attention to statistical and interpretational issues, so a new quality assessment tool needs to be developed [Higgins et al., 2013]. Further research along these lines will provide more insight and better evidence for healthcare decision making than the present studies.

      Conflict of interest

      The author does not have any conflict of interest to declare.

      References

American Psychological Association. Reporting standards for research in psychology: why do we need them? What might they be? Am Psychol. 2008;63:839-851. https://doi.org/10.1037/0003-066X.63.9.839
Evans D, Pearson A. Systematic reviews: gatekeepers of nursing knowledge. J Clin Nurs. 2001;10:593-599.
Becker BJ. Synthesizing standardized mean-change measures. Br J Math Stat Psychol. 1988;41:257-278.
Borenstein M, Hedges LV, Higgins JPT, Rothstein HR. Introduction to meta-analysis. Chichester (UK): John Wiley & Sons; 2009.
Shin IS, Kim JH. The effect of problem-based learning in nursing education: a meta-analysis. Adv Health Sci Educ Theory Pract. 2013:1-18. https://doi.org/10.1007/s10459-012-9436-2
Netz Y, Wu MJ, Becker BJ, Tenenbaum G. Physical activity and psychological well-being in advanced age: a meta-analysis of intervention studies. Psychol Aging. 2005;20:272-284. https://doi.org/10.1037/0882-7974.20.2.272
Roever L, Biondi-Zoccai G. Network meta-analysis to synthesize evidence for decision making in cardiovascular research. Arq Bras Cardiol. 2016;106:333-337. https://doi.org/10.5935/abc.20160052
Grant ES, Calderbank-Batista T. Network meta-analysis for complex social interventions: problems and potential. J Soc Social Work Res. 2013;4:406-420. https://doi.org/10.5243/jsswr.2013.25
Park EY, Shin IS, Kim JH. A meta-analysis of the variables related to depression in Korean patient with a stroke. J Korean Acad Nurs. 2012;42:537-548. https://doi.org/10.4040/jkan.2012.42.4.537
Jang DH, Shin IS. The relationship between research self-efficacy and other research constructs: synthesizing evidence and developing policy implications through meta-analysis. Korean J Educ Policy. 2011;8:279-311.
Becker BJ, Schram CM. Examining explanatory models through research synthesis. In: Cooper HM, Hedges LV, editors. The handbook of research synthesis. New York: Russell Sage; 1994. p. 357-381.
Viswesvaran C, Ones DS. Theory testing: combining psychometric meta-analysis and structural equations modeling. Pers Psychol. 1995;48:865-885. https://doi.org/10.1111/j.1744-6570.1995.tb01784.x
Cheung MW-L, Chan W. Meta-analytic structural equation modeling: a two-stage approach. Psychol Methods. 2005;10:40-64. https://doi.org/10.1037/1082-989X.10.1.40
Kilgus SP, Methe SA, Maggin DM, Tomasula JL. Curriculum-based measurement of oral reading (R-CBM): a diagnostic test accuracy meta-analysis of evidence supporting use in universal screening. J Sch Psychol. 2014;52:377-405. https://doi.org/10.1016/j.jsp.2014.06.002
Zeng X, Zhang Y, Kwong JSW, Zhang C, Li S, Sun F, et al. The methodological quality assessment tools for preclinical and clinical studies, systematic review and meta-analysis, and clinical practice guideline: a systematic review. J Evid Based Med. 2015;8:2-10. https://doi.org/10.1111/jebm.12141
Oxman AD, Guyatt GH. Validation of an index of the quality of review articles. J Clin Epidemiol. 1991;44:1271-1278.
Shea BJ, Grimshaw JM, Wells GA, Boers M, Andersson N, Hamel C, et al. Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Med Res Methodol. 2007;7:10. https://doi.org/10.1186/1471-2288-7-10
Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ. 2009;339. https://doi.org/10.1136/bmj.b2700
Hutton B, Salanti G, Caldwell DM, Chaimani A, Schmid CH, Cameron C, et al. The PRISMA extension statement for reporting of systematic reviews incorporating network meta-analyses of health care interventions: checklist and explanations. Ann Intern Med. 2015;162:777-784. https://doi.org/10.7326/M14-2385
Deville W, Buntinx F, Bouter LM, Montori VM, De Vet HCW, Van Der Windt DAWM, et al. Conducting systematic reviews of diagnostic studies: didactic guidelines. BMC Med Res Methodol. 2002;2:9. https://doi.org/10.1186/1471-2288-2-9
Tam WWS, Lo KKH, Khalechelvam P. Endorsement of PRISMA statement and quality of systematic reviews and meta-analyses published in nursing journals: a cross-sectional study. BMJ Open. 2017;7:e013905. https://doi.org/10.1136/bmjopen-2016-013905
Yang M, Jiang L, Wang A, Xu G. Epidemiology characteristics, reporting characteristics, and methodological quality of systematic reviews and meta-analyses on traditional Chinese medicine nursing interventions published in Chinese journals. Int J Nurs Pract. 2017;23:e12498. https://doi.org/10.1111/ijn.12498
Jin YH, Wang GH, Sun YR, Li Q, Zhao C, Li G, et al. A critical appraisal of the methodology and quality of evidence of systematic reviews and meta-analyses of traditional Chinese medical nursing interventions: a systematic review of reviews. BMJ Open. 2016;6:e011514. https://doi.org/10.1136/bmjopen-2016-011514
Song Y, Park S, Kim K, Park M. The methodological quality of systematic reviews and meta-analyses on the effectiveness of non-pharmacological cancer pain management. Pain Manag Nurs. 2015;16:781-791. https://doi.org/10.1016/j.pmn.2015.06.004
Jin YH, Ma ET, Gao WJ, Hua W, Dou HY. Reporting and methodological quality of systematic reviews or meta-analyses in nursing field in China. Int J Nurs Pract. 2014;20:70-78. https://doi.org/10.1111/ijn.12123
Kim JH, Kim AK. A quality assessment of meta-analyses of nursing in South Korea. J Korean Acad Nurs. 2012;43:736-745. https://doi.org/10.4040/jkan.2013.43.6.736
Seo HJ, Kim KU. Quality assessment of systematic reviews or meta-analyses of nursing interventions conducted by Korean reviewers. BMC Med Res Methodol. 2012;12:129. https://doi.org/10.1186/1471-2288-12-129
Sutton AJ, Higgins JPT. Recent developments in meta-analysis. Stat Med. 2008;27:625-650. https://doi.org/10.1002/sim.2934
Burda BU, Holmer HK, Norris SL. Limitations of a measurement tool to assess systematic reviews (AMSTAR) and suggestions for improvement. Syst Rev. 2016;5:58. https://doi.org/10.1186/s13643-016-0237-1
Higgins JPT, Lane PW, Anagnostelis B, Anzures-Cabrera J, Baker NF, Cappelleri JC, et al. A tool to assess the quality of meta-analysis. Res Synth Methods. 2013;4:351-366. https://doi.org/10.1002/jrsm.1092