Abstract

Blog and podcast use is rising among learners in the health professions. The lack of a standardized method to assess the quality of these resources prompted a research agenda aimed at solving this problem. Through a rigorous research process, a list of 151 quality indicators for blogs and podcasts was formed and subsequently refined to identify the most important quality indicators. These indicators are presented as Quality Checklists to assist with the quality appraisal of medical blogs and podcasts.
Blog and podcast use in the realm of medical education is rapidly increasing (Thoma et al. 2014), especially among emergency medicine and critical care learners (Cadogan et al. 2014; Loeb et al. 2014; Mallin et al. 2014; Purdy et al. 2015). However, there is no standardized method to assess the quality of these resources. This dilemma prompted research aimed at determining which quality indicators are important for medical education blogs and podcasts. The resulting work spanned three studies and produced a list of quality indicators deemed valuable by content producers and general medical educators. This paper describes the research agenda and the development of two quality-appraisal tools from the results of these studies. It is hoped that these tools will facilitate the application of this literature in the evaluation of blogs and podcasts.
A systematic review of the literature was conducted and quality indicators for secondary educational resources were extracted. These quality indicators underwent a qualitative analysis to identify and categorize those that were relevant for blogs and/or podcasts. This process resulted in a list of 151 quality indicators in three main categories (credibility, content, and design) (Paterson, Thoma, et al. 2015). This list, though comprehensive, was too lengthy to be utilized in practice.
A modified Delphi study was conducted to develop consensus on the most important quality indicators. A large, internationally representative panel of expert content producers of online medical education resources completed two iterative surveys. From the list of 151 quality indicators, 14 quality indicators for blogs and 26 quality indicators for podcasts were endorsed by ≥90% of the participants (Thoma et al. 2015).
Recognizing the potential bias of content producers, a second modified Delphi study was conducted with a group of general medical education experts. Expert medical educators attending the Social Media Summit, which preceded the 2014 International Conference on Residency Education in Toronto, participated in two iterative surveys. From the list of 151 quality indicators, 3 quality indicators for blogs, 1 quality indicator for podcasts, and 9 quality indicators for both blogs and podcasts were endorsed by ≥90% of the expert medical educators (Lin et al. 2015).
The results of this research were analyzed and amalgamated to create the Quality Checklist for Blogs (Figure 1) and the Quality Checklist for Podcasts (Figure 2), which were disseminated on the ALiEM blog (Paterson, Colmers, et al. 2015). Quality indicators that were endorsed by ≥90% of either expert content producers (Thoma et al. 2015) or medical educators (Lin et al. 2015) were included within each checklist. The combined list of endorsed quality indicators, accounting for overlapping endorsements and similarly worded quality indicators, contains 19 quality indicators for blogs and 20 quality indicators for podcasts. The resulting platform-specific tools outline the quality indicators that are most important in the appraisal of health professions blogs and podcasts. To facilitate interpretation of overall resource quality by end-users, each checklist stratifies responses into ‘yes’, ‘no’, and ‘unclear’ and leaves space for further subjective comments. Currently, no evidence-based criteria differentiate high-quality from low-quality resources; user gestalt remains the best guide and can be enhanced by consideration of these checklists. Further research is required to establish potential cut-off scores that correlate with the quality of a resource.
Figure 1: Quality Checklist for Blogs
Figure 2: Quality Checklist for Podcasts
The Quality Checklists were designed with medical education resource producers, editors, end-users, and researchers in mind. General principles of good design were employed to make the checklists user-friendly and functional, and the author group piloted both checklists prior to publication. As with other critical appraisal tools and reporting guidelines, these checklists should guide rather than replace one’s clinical judgment and gestalt.
Cadogan, Mike, Brent Thoma, Teresa M Chan, and Michelle Lin. 2014. “Free Open Access Meducation (FOAM): The Rise of Emergency Medicine and Critical Care Blogs and Podcasts (2002–2013).” Emergency Medicine Journal 31 (e1): e76–77. doi:10.1136/emermed-2013-203502.
Lin, Michelle, Brent Thoma, N Seth Trueger, Felix Ankel, Jonathan Sherbino, and Teresa Chan. 2015. “Quality Indicators for Blogs and Podcasts Used in Medical Education: Modified Delphi Consensus Recommendations by an International Cohort of Health Professions Educators.” Postgraduate Medical Journal. doi:10.1136/postgradmedj-2014-133230.
Loeb, Stacy, Christopher E Bayne, Christine Frey, Benjamin J Davies, Timothy D Averch, Henry H Woo, Brian Stork, Matthew R Cooperberg, and Scott E Eggener. 2014. “Use of Social Media in Urology: Data from the American Urological Association (AUA).” British Journal of Urology International 113 (6): 993–98. doi:10.1111/bju.12586.
Mallin, Mike, Sarah Schlein, Shaneen Doctor, Susan Stroud, Matthew Dawson, and Megan Fix. 2014. “A Survey of the Current Utilization of Asynchronous Education among Emergency Medicine Residents in the United States.” Academic Medicine 89 (4): 598–601. doi:10.1097/ACM.0000000000000170.
Paterson, Quinten S, Isabelle N Colmers, Michelle Lin, Brent Thoma, and Teresa Chan. 2015. “The Quality Checklists for Health Professions Blogs and Podcasts.” http://www.aliem.com/quality-checklists-for-health-professions-education-blogs-and-podcasts/.
Paterson, Quinten S, Brent Thoma, W. Kenneth Milne, Michelle Lin, and Teresa M Chan. 2015. “A Systematic Review and Qualitative Analysis to Determine Quality Indicators for Health Professions Education Blogs and Podcasts.” Journal of Graduate Medical Education, December. doi:10.4300/JGME-D-14-00728.1.
Purdy, Eve, Brent Thoma, Joseph Bednarczyk, David Migneault, and Jonathan Sherbino. 2015. “The Use of Free Online Educational Resources by Canadian Emergency Medicine Residents and Program Directors.” Canadian Journal of Emergency Medicine 17 (2): 101–6. doi:10.1017/cem.2014.73.
Thoma, Brent, Teresa Chan, Javier Benitez, and Michelle Lin. 2014. “Educational Scholarship in the Digital Age: A Scoping Review and Analysis of Scholarly Products.” The Winnower 1 (e141827.77297): 1–13. doi:10.15200/winn.141827.77297.
Thoma, Brent, Teresa M Chan, Quinten S Paterson, W. Kenneth Milne, Jason L Sanders, and Michelle Lin. 2015. “Emergency Medicine and Critical Care Blogs and Podcasts: Establishing an International Consensus on Quality.” Annals of Emergency Medicine, March. doi:10.1016/j.annemergmed.2015.03.002.
Reviews
Very good work on this topic. Quality checklists exist for other forms of research, and there is a need for something to evaluate blogs and podcasts.
Here are some suggestions:
The dichotomous response is limiting. Other checklists allow three responses, such as YES, NO, and UNCERTAIN/UNSURE.
Allowing a space/text box to provide additional information could be helpful. This would be especially true for the uncertain/unsure responses.
Great work. As an end-user of evidence, I would like to know if there was a comprehensive search of source material for the podcast, i.e., how does the consumer know that the authors did a comprehensive search (à la systematic review)? This may not have come up in the derivation.
Thank you for the opportunity to review this article. I consider myself both an end-user and a researcher and would like to comment on the quality checklist for blogs.
About the article: Two quality checklists (for rating blogs and podcasts) have been published by Colmers and team. The background work leading to these checklists has included a systematic review of quality indicators for secondary educational resources (in press) and two modified Delphi studies (1, 2). My comments below were made after attempting to use the quality checklist on three physical therapy blogs obtained from Physiopedia (http://www.physio-pedia.com/Blogs). The three blogs were http://alteredhaemodynamics.blogspot.co.uk/, http://noijam.com/ and https://rogerkerry.wordpress.com/.
1. How should the quality checklist be scored?
It appears that the user is offered two options: ticking the box if the criterion is met and not ticking the box if it is not met. What happens if it is unclear – does the user not tick the box? What happens if an item does not apply to a given blog? For example, some blogs do not have an editorial process before publishing an article. Does the user leave the box unticked or fill in “N/A”? It would be beneficial to have a detailed explanation of how to use the quality checklist and examples of when an item meets or does not meet the criterion. Some examples of quality checklists that provide explanations on usage include the PEDro rating scale (http://www.pedro.org.au/english/downloads/pedro-scale/) and the Cochrane risk of bias tool (http://handbook.cochrane.org/, see Chapter 8). Both checklists are used to rate the quality of randomised controlled trials.
2. How should conclusions on the quality of a blog be made based on the results of the checklist?
Do users sum the score (i.e., count the number of ticks), and is there a cut-off score that determines low, moderate, or high quality of a blog? Or do users follow a more complex decision-making process, e.g., placing more or less weight on certain items when considering the credibility, content, or design of the blog? This has certainly been a controversial issue between the PEDro rating scale and the Cochrane risk of bias tool, leading to different results and different conclusions being made on the efficacy of interventions (3) (which sadly makes things very confusing for clinicians and consumers).
3. Subjectivity of items
There were several items that were hard to score as they were very subjective. Some examples include:
• “Is the author well qualified to provide information on the topic?” – What defines well qualified? Does the person need a PhD? Must the PhD be in the area about which he/she is writing? Must they be actively training or actively publishing in that area? Must they have more than a certain number of years of experience?
• “Are the resource’s statements consistent with its references?” “Is the information presented in the resource of a consistent quality?” – What determines consistency? Will this be based on a certain number of statements? What if there are no references?
• “Is the content of the resource presented in a logical, clear and coherent way? Is the topic of the resource well defined and labelled appropriately? Does the content meet generally accepted standards for journalistic professionalism?” – I understand the importance of these quality indicator items, which reflect the quality of writing, but they are very subjective to rate. A paper that reads well to me might not read well to another colleague. Maybe provide some guidelines on what you mean by “journalistic professionalism”? I would be interested in the intra- and inter-rater reliability of this quality checklist.
• “Is the resource stable (i.e. does not crash, links work, etc.)?” – Will this be based on a certain number of blog posts? It might be good to state a sample, e.g. the latest 10 blog entries.
4. Relevance of items
• “Are there comments from other learners/contributors that endorse or refute the information presented in the resources?” – I am slightly doubtful that this item reflects credibility. A blog entry could be well written by a credible author, yet the blog might lack a high public profile/social media presence. Even if a blog does not meet this criterion, that does not mean it is not a credible blog. Depending on how conclusions about quality will be made, I would assign less weight to this item.
It might be worthwhile to consider creating two mock blogs – one of good quality and one of poor quality – and using the checklist to highlight which sections of the blogs meet or do not meet the criteria. Just a suggestion that may help clarify use of the quality checklist.
1. Number of quality indicator items does not tally – the paper notes that the quality checklist for blogs contains 12 quality indicator items, but there seem to be 19 items on the checklist.
2. Uncertain re: blank left-hand column – is the column for numbering the quality indicator items? If so, the column could be removed and a number placed before each item; otherwise, label the column.
On a last note, I would like to commend the authors for forging ahead in this field and for taking appropriate steps to identify the best quality indicator items. I really enjoyed reading about the modified Delphi studies and can see that the authors have put a lot of thought into trying to remove potential biases (e.g. with the second modified Delphi study, which included medical educators who were not bloggers or podcasters). Considering the large number of blogs and podcasts available, it is crucial that educators, clinicians and consumers are offered some guidance on how to separate the wheat from the chaff. Colmers and team are definitely working in the right direction with these two quality checklists. If more guidance can be provided on how to use the quality checklists, I am certain they will be valuable tools.
1. Lin M, Thoma B, Trueger NS, Ankel F, Sherbino J, Chan T. Quality indicators for blogs and podcasts used in medical education: Modified Delphi consensus recommendations by an international cohort of health professions educators. Postgraduate Medical Journal. 2015.
2. Thoma B, Chan TM, Paterson QS, Milne WK, Sanders JL, Lin M. Emergency medicine and critical care blogs and podcasts: Establishing an international consensus on quality. Annals of Emergency Medicine. 2015.
3. Armijo-Olivo S, da Costa BR, Cummings GG, Ha C, Fuentes J, Saltaji H, et al. PEDro or Cochrane to assess the quality of clinical trials? A meta-epidemiological study. PLoS One. 2015;10:e0132634.
I have no conflicts of interest to declare.
"Through a rigorous research process, a list of 151 quality indicators for blogs and podcasts was formed and refined to elicit the most important quality indicators."
Information Quality Assessment Parameters (not necessarily only for medical education!):
Coverage is really the sum total of Authority, Accuracy, Bias, and Currency.
If you Google “ABCs of Web Literacy” and click on the [PDF] link, you can find all of the above in PowerPoint slide format (PDF). This study was done by the University of Pennsylvania’s Penn Libraries for evaluating information on the web.
Information Quality Awareness Resources:
Google “Information Quality Resources” and look for Marcus Zillman’s PDF (whitepapers / virtualprivatelibrary).
I have no conflicts of interest to declare.
I read your article with interest. While you mention that you undertook a systematic review from which you hoped to derive quality indicators, I cannot see that your search is reported as the PRISMA statement requires (Moher et al. 2009).
I declare I have no competing interests.
Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. BMJ 2009;339:b2535. doi:10.1136/bmj.b2535.
This article and its reviews are distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and redistribution in any medium, provided that the original author and source are credited.