
CDIS 100: Intro. to Communication Sciences & Disorders: Evaluating: Quality and Suitability of Sources

The Purpose of this Page

In addition to familiarizing yourself with various types of information, you'll need to evaluate the suitability and quality of each source you consider for your research project. Apply the CRAAP Test criteria to evaluate all sources of information, and use the Assessing the Evidence section below to learn about disciplinary considerations for evaluating evidence.

The CRAAP Test: Evaluating Your Sources

Whether you are reading a book, an article, or a website, be an information skeptic: scrutinize, analyze, and evaluate your sources.

Currency: the timeliness of the information
• When was the information published or posted?
• Has the information been revised or updated? 
• Is the information current or out-of-date for your topic?
 
Relevance: the importance of the information for your research needs
• How well does this suit your topic or answer your questions? 
• Who is the intended audience?
• Would you be comfortable using this source for a research paper?
 
Authority: the source of the information
• Who is the author/publisher/source/sponsor?
• Are the author’s credentials or organizational affiliations given?
• What are the author’s qualifications to write on the topic?
• Is there a way to contact the author?
 
Accuracy: the reliability, truthfulness, and correctness of the information
• Where does the information come from?
• Has the information been reviewed or refereed?
• Can you verify any of the information?
• What evidence is provided? 
 
Purpose: the reason the information was produced
• Is the purpose to inform, sell, entertain, or persuade?
• Do the authors/sponsors make their intentions clear?
• From what perspective do the authors approach the subject?

Assessing the Evidence

In addition to evaluating sources against the CRAAP Test criteria, consider the following factors, which professionals in communication sciences and disorders use when evaluating evidence.

Independent confirmation and converging evidence. How much support is evident from other studies? How might areas of disagreement in other studies detract from overall supporting evidence? How rigorously researched were these supporting documents?

Experimental control. How were the conditions of the study managed and controlled by the research design? Was a control group used for comparison? Were participants randomly assigned to the control and treatment groups?

Avoidance of subjectivity and bias. How did the research design remove or reduce subjectivity and bias? Were participants, researchers, and other involved parties kept apart from practices that might influence data collection or measurement? Were parties blinded or masked where bias could otherwise influence the results?

Effect sizes and confidence intervals. Did the study use statistical analysis to measure how large the differences between groups or conditions were (effect sizes)? Were confidence intervals calculated and reported as part of the study? The short example below shows what these two measures can look like.
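Purely as an illustration (the scores below are made-up numbers, not drawn from any real study), this short Python sketch computes a standardized effect size (Cohen's d, using a pooled standard deviation) and an approximate 95% confidence interval for the difference between two group means. When you read a study, look for values like these in its results section.

# Illustrative example only: hypothetical scores, standard textbook formulas.
import math

treatment = [12, 15, 14, 16, 13, 17, 15, 14]  # hypothetical post-test scores
control = [10, 11, 13, 12, 11, 12, 10, 13]

def mean(xs):
    return sum(xs) / len(xs)

def sample_var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

n1, n2 = len(treatment), len(control)
m1, m2 = mean(treatment), mean(control)
v1, v2 = sample_var(treatment), sample_var(control)

# Cohen's d: difference in means divided by the pooled standard deviation
pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
cohens_d = (m1 - m2) / pooled_sd

# Approximate 95% confidence interval for the difference in means
# (uses the normal critical value 1.96 for simplicity)
se_diff = math.sqrt(v1 / n1 + v2 / n2)
ci_low, ci_high = (m1 - m2) - 1.96 * se_diff, (m1 - m2) + 1.96 * se_diff

print(f"Cohen's d = {cohens_d:.2f}")
print(f"Mean difference = {m1 - m2:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")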

Relevance and feasibility. How pertinent is the study to its own stated aims and to the broader questions of communication sciences and disorders? Could its methods or findings feasibly be applied in practice?