Armacost Library

CDIS 652: Research & Experimental Design: Assess the Evidence

How can I evaluate a source?

Evaluate your sources to determine how appropriate each might be for your research project. The criteria below are adapted from Meriam Library's CRAAP Test.

Currency

• When was the information published or posted?
• Has the information been revised or updated? 

Relevance

• How central (or peripheral) is this work in relation to your research?
• What are the connections between this work and your work?
• Is the intended audience disciplinary scholars, practitioners, the public, or other groups? 
• Is the information at an appropriate level (i.e., neither too elementary nor too advanced for your needs)?

Authority

• Who is the author/editor/publisher/source/sponsor?
• What author credentials or organizational affiliations, if any, are provided?

Accuracy

• Where does the information come from? Can you trace the source?
• Has the information been reviewed or refereed, and by whom?
• Are you given enough information to verify and evaluate for yourself the who, what, when, why, where, and how?   

Purpose

• Is this intended to add new knowledge to the field, explain highly specialized knowledge to non-specialists, persuade or entertain an audience, etc.?
• Is this intent stated clearly anywhere in the text or by the publisher?
• Do other sources suggest an alternative purpose for this text? (e.g. written for novices, but also useful to experts)

Assessing the Evidence

In addition to evaluating sources against the CRAAP Test criteria, consider the following factors, which communicative disorders professionals use when evaluating evidence.

Independent confirmation and converging evidence. How much support does this work receive from other studies? How might disagreement among other studies detract from the overall supporting evidence? How rigorously conducted were those supporting studies?

Experimental control. How were research elements managed and controlled by the research design? Was a control group used for comparison? Were participants randomly assigned to the control and treatment groups?
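The guide itself contains no code, but random assignment can be illustrated with a short sketch. In this hypothetical example, twenty made-up participant IDs are shuffled and split evenly into control and treatment groups; the seed is fixed only so the split is reproducible for demonstration:

```python
import random

# Twenty hypothetical participant IDs (P01 ... P20)
participants = [f"P{i:02d}" for i in range(1, 21)]

rng = random.Random(42)  # fixed seed for a reproducible demonstration
shuffled = participants[:]
rng.shuffle(shuffled)

# Split the shuffled list evenly into two groups
midpoint = len(shuffled) // 2
control_group = shuffled[:midpoint]
treatment_group = shuffled[midpoint:]

print("Control:  ", sorted(control_group))
print("Treatment:", sorted(treatment_group))
```

Because group membership is decided by the shuffle rather than by any characteristic of the participants, pre-existing differences tend to be distributed evenly across the two groups.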

Avoidance of subjectivity and bias. How did the research design remove or reduce subjectivity and bias? Were participants, researchers, and other involved parties kept away from practices that might influence data collection or measurement? Were parties blinded or masked to information that could introduce bias?

Effect sizes and confidence intervals. Did the study report effect sizes that quantify the magnitude of the differences between groups, rather than only whether a difference was statistically significant? Were confidence intervals calculated and reported as part of the study?
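As an illustrative sketch (not part of the guide), the two statistics named above can be computed with Python's standard library alone. Cohen's d expresses the mean difference between groups in pooled standard-deviation units, and a normal-approximation confidence interval brackets the raw mean difference. The score data below are entirely hypothetical:

```python
import math
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: standardized mean difference using the pooled SD."""
    n1, n2 = len(group_a), len(group_b)
    s1, s2 = stdev(group_a), stdev(group_b)
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (mean(group_a) - mean(group_b)) / pooled_sd

def mean_diff_ci(group_a, group_b, z=1.96):
    """Approximate 95% CI for the difference in means (normal approximation)."""
    diff = mean(group_a) - mean(group_b)
    se = math.sqrt(stdev(group_a)**2 / len(group_a)
                   + stdev(group_b)**2 / len(group_b))
    return (diff - z * se, diff + z * se)

# Hypothetical post-treatment scores (illustrative only)
treatment = [78, 82, 85, 90, 88, 84, 79, 91]
control = [70, 75, 72, 80, 74, 77, 73, 76]

print(f"Cohen's d: {cohens_d(treatment, control):.2f}")
low, high = mean_diff_ci(treatment, control)
print(f"95% CI for mean difference: ({low:.1f}, {high:.1f})")
```

A study that reports only a p-value tells you less than one that also reports an effect size and interval like these: the effect size conveys how large the difference is, and the interval conveys how precisely it was estimated.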

Relevance and feasibility. How pertinent is this study to its own stated aims and to the overall purposes of the communicative disorders field?