A brief overview:

Evidence-based practice (EBP) is the integration of the best available evidence with clinical expertise and with patients' values and specific circumstances to inform management practices and health policy decisions.

Incorporating EBP into clinical settings is crucial for supporting healthcare professionals in making informed clinical decisions.

However, the adoption of EBP is frequently impeded by a lack of support and by challenges such as limited time and a shortage of critical-appraisal expertise, which obstruct the regular use of up-to-date, high-quality evidence in clinical decision-making.

Numerous respected authors have created valid and reliable instruments to assist users in critically evaluating scholarly literature. However, these tools often employ language tailored more to researchers than to clinicians. Our tool addresses this by making subtle modifications to improve readability and interpretation. It also simplifies select aspects of the original tools to reduce the time needed for completion. Out of respect for the original authors' work, and to serve those who want more in-depth information, we provide links to the original tools.

The purpose of the Elev Tool

The Elev Tool was designed by Dr. Alessandra Narciso Garcia Trepte (Garcia AN) for healthcare professionals, faculty, and students to evaluate articles for clinical applicability and methodological quality, enabling them to determine their confidence level in applying the research findings.

The tool features three evaluative components for assessing study designs: clinical applicability, methodological quality, and the clinician's confidence level in using the evidence. After evaluating the clinical applicability and methodological quality, the tool generates an assessment indicating whether the evidence from the article is of 'low,' 'moderate,' or 'high' quality in these areas. Furthermore, it enables clinicians to gauge their confidence—whether low, moderate, or high—in integrating the evidence into their clinical practice.

Dr. Alexandre Lopes played a significant role in enhancing the clarity and clinical applicability of the questions. Additionally, his suggestions were instrumental in improving the tool's interactive features.

How to use the tool: a step-by-step guide

After registering, please note the five key steps to maximize the tool's effectiveness and contribute to advancing the field.

1. Select the type of study design

Once you've identified an article of interest from databases such as PubMed, PEDro, or the Cochrane Library, select its study design.

2. Evaluate the clinical applicability

Please answer the questions based on the selected article to the best of your ability. The Elev Tool will then generate an assessment indicating whether the article's information has 'low,' 'moderate,' or 'high' clinical applicability.

3. Evaluate the methodological quality

Please answer the questions based on the selected article to the best of your ability. The Elev Tool will then generate an assessment indicating whether the article's information has 'low,' 'moderate,' or 'high' methodological quality.

4. Estimate your confidence level

Using the results of the clinical applicability and methodological quality assessments, you can determine your confidence level—low, moderate, or high—in incorporating the evidence into your clinical practice.

5. Save or share your findings

Upon completing your appraisal, you may choose to keep your findings private or to share them with the community. If you opt to share, you can add comments that could benefit your peers. You will also have access to view comments from other users.

References to Original Tools

1. Browman GP, Burgers JS, Cluzeau F, Feder G, Fervers B, Graham ID, Grimshaw J, Hanna SE, Littlejohns P, Makarski J, Zitzelsberger L, for the AGREE Next Steps Consortium. AGREE II: advancing guideline development, reporting and evaluation in healthcare. CMAJ. 2010;182:E839-842.
2. AGREE Next Steps Consortium (2017). The AGREE II Instrument [Electronic version]. Retrieved from http://www.agreetrust.org.
3. Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, Moher D, Tugwell P, Welch V, Kristjansson E, Henry DA. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ. 2017 Sep 21;358:j4008.
4. Maher CG, Sherrington C, Herbert RD, Moseley AM, Elkins M. Reliability of the PEDro scale for rating quality of randomized controlled trials. Phys Ther. 2003;83(8):713-21. DOI: 10.1093/ptj/83.8.713.
5. Whiting PF, Rutjes AW, Westwood ME, Mallett S, Deeks JJ, Reitsma JB, Leeflang MM, Sterne JA, Bossuyt PM, QUADAS-2 Group. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med. 2011 Oct 18;155(8):529-36.
6. Wells GA, Shea B, O'Connell D, et al. The Newcastle-Ottawa Scale (NOS) for assessing the quality of nonrandomised studies in meta-analyses. http://www.ohri.ca/programs/clinical_epidemiology/oxford.asp.
7. Mokkink LB, De Vet HC, Prinsen CA, Patrick DL, Alonso J, Bouter LM, Terwee CB. COSMIN risk of bias checklist for systematic reviews of patient-reported outcome measures. Qual Life Res. 2018 May;27(5):1171-9.
8. NICE (National Institute for Health and Care Excellence). Quality appraisal checklist - qualitative studies. 2012.