The Quality Checklists for Health Professions Blogs and Podcasts

  1. Faculty of Medicine and Dentistry, University of Alberta, Edmonton AB, Canada
  2. College of Medicine, University of Saskatchewan, Saskatoon SK, Canada
  3. MedEdLIFE Research Collaborative
  4. University of California, San Francisco, San Francisco CA, USA
  5. Department of Emergency Medicine, University of Saskatchewan, Saskatoon SK, Canada
  6. Division of Emergency Medicine, Department of Medicine, McMaster University, Hamilton ON, Canada

Abstract

Blog and podcast use is rising among learners in the health professions. The lack of a standardized method for assessing the quality of these resources prompted a research agenda aimed at addressing this gap. Through a rigorous research process, a list of 151 quality indicators for blogs and podcasts was developed and subsequently refined to identify the most important indicators. These indicators are presented as Quality Checklists to assist with the quality appraisal of medical blogs and podcasts.

Introduction

Blog and podcast use in medical education is rapidly increasing (Thoma et al. 2014), especially among emergency medicine and critical care learners (Cadogan et al. 2014; Loeb et al. 2014; Mallin et al. 2014; Purdy et al. 2015). However, there is no standardized method to assess the quality of these resources. This gap prompted research aimed at determining which quality indicators are important for medical education blogs and podcasts. This work spanned three studies and produced a list of quality indicators deemed valuable by both content producers and general medical educators. This paper describes the research agenda and the development of two quality-appraisal tools from the results of these studies. It is hoped that these tools will facilitate the application of this literature to the evaluation of blogs and podcasts.

Methods

Study #1

A systematic review of the literature was conducted and quality indicators for secondary educational resources were extracted. These quality indicators underwent a qualitative analysis to identify and categorize those that were relevant for blogs and/or podcasts. This process resulted in a list of 151 quality indicators in three main categories (credibility, content, and design) (Paterson, Thoma, et al. 2015). This list, though comprehensive, was too lengthy to be utilized in practice.

Study #2

A modified Delphi study was conducted to develop consensus on the most important quality indicators. A large, internationally representative panel of expert content producers of online medical education resources completed two iterative surveys. From the list of 151 quality indicators, 14 quality indicators for blogs and 26 quality indicators for podcasts were endorsed by ≥90% of the participants (Thoma et al. 2015).
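To make the consensus threshold concrete, the sketch below shows, in Python, how endorsement rates for a set of candidate indicators could be tallied against a ≥90% cutoff. This is an illustration only, not the study's analysis code: the indicator names and vote counts are invented and do not reflect the actual survey data.

    # Hypothetical illustration of a >=90% endorsement cutoff after a Delphi
    # survey round. Indicator names and votes are invented, not study data.
    def endorsed_indicators(votes, threshold=0.90):
        """Return {indicator: endorsement rate} for indicators at or above the cutoff."""
        retained = {}
        for indicator, responses in votes.items():
            rate = sum(responses) / len(responses)  # True counts as 1, False as 0
            if rate >= threshold:
                retained[indicator] = rate
        return retained

    example_votes = {
        "Identifies the author and their qualifications": [True] * 19 + [False],      # 95%, retained
        "Cites references for factual claims": [True] * 18 + [False] * 2,             # 90%, retained
        "Uses professional-quality audio production": [True] * 15 + [False] * 5,      # 75%, dropped
    }
    for name, rate in sorted(endorsed_indicators(example_votes).items()):
        print(f"{name}: {rate:.0%}")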

Study #3

To address the potential bias of content producers, a group of general medical education experts was recruited for a second modified Delphi study. Expert medical educators attending the Social Media Summit, which preceded the 2014 International Conference on Residency Education (Toronto), participated in two iterative surveys. From the list of 151 quality indicators, 3 quality indicators for blogs, 1 quality indicator for podcasts, and 9 quality indicators for both blogs and podcasts were endorsed by ≥90% of the expert medical educators (Lin et al. 2015).

Results

The results of this research were analyzed and amalgamated to create the Quality Checklist for Blogs (Figure 1) and the Quality Checklist for Podcasts (Figure 2), which were disseminated on the ALiEM blog (Paterson, Colmers, et al. 2015). Quality indicators endorsed by ≥90% of either expert content producers (Thoma et al. 2015) or medical educators (Lin et al. 2015) were included in each checklist. After accounting for overlapping endorsements and similarly worded items, the combined list contains 19 quality indicators for blogs and 20 quality indicators for podcasts. The resulting platform-specific tools outline the quality indicators that are most important in the appraisal of health professions blogs and podcasts. To help end-users interpret overall resource quality, each checklist stratifies responses into ‘yes’, ‘no’, and ‘unclear’ and leaves space for further subjective comments. No evidence-based criteria currently differentiate high-quality from low-quality resources; user gestalt remains the best guide and can be enhanced by consideration of these checklists. Further research is required to establish potential ‘cut-off’ checklist scores that correlate with the quality of a resource.
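As a rough illustration of how two sets of endorsed indicators can be combined into a single checklist, and how the ‘yes’/‘no’/‘unclear’ responses with free-text comments might be recorded, consider the Python sketch below. The indicator wording is invented for illustration; in the actual checklists, similarly worded indicators from the two panels were reconciled by the authors, a judgement step that a simple set union does not capture.

    # Hypothetical sketch: merge indicators endorsed by the two expert panels,
    # then record a 'yes' / 'no' / 'unclear' appraisal with free-text comments.
    # Indicator text is invented for illustration only.
    producer_endorsed = {
        "Identifies the author and their qualifications",
        "Cites references for factual claims",
        "Discloses conflicts of interest",
    }
    educator_endorsed = {
        "Cites references for factual claims",   # endorsed by both panels
        "Presents content in a logical, coherent way",
    }

    # A set union keeps each indicator once, even when both panels endorsed it.
    checklist_items = sorted(producer_endorsed | educator_endorsed)

    VALID_RESPONSES = {"yes", "no", "unclear"}
    appraisal = {item: {"response": "unclear", "comments": ""} for item in checklist_items}

    def record(item, response, comments=""):
        """Record an appraisal for one checklist item."""
        assert response in VALID_RESPONSES, f"response must be one of {VALID_RESPONSES}"
        appraisal[item] = {"response": response, "comments": comments}

    record("Cites references for factual claims", "yes", "Links to primary studies throughout.")
    for item, entry in appraisal.items():
        print(f"[{entry['response']:^7}] {item}  {entry['comments']}")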

Figure 1: Quality Checklist for Blogs

Figure 2: Quality Checklist for Podcasts

Discussion

The Quality Checklists were designed with medical education resource producers, editors, end-users, and researchers in mind. General principles of good design were employed to make the checklists user-friendly and functional, and the author group piloted both checklists prior to publication. As with other critical appraisal tools and reporting guidelines, these checklists should guide rather than replace one’s clinical judgment and gestalt.


References

Cadogan, Mike, Brent Thoma, Teresa M Chan, and Michelle Lin. 2014. “Free Open Access Meducation (FOAM): The Rise of Emergency Medicine and Critical Care Blogs and Podcasts (2002–2013).” Emergency Medicine Journal 31 (e1): e76–77. doi:10.1136/emermed-2013-203502.

Lin, Michelle, Brent Thoma, N Seth Trueger, Felix Ankel, Jonathan Sherbino, and Teresa Chan. 2015. “Quality Indicators for Blogs and Podcasts Used in Medical Education: Modified Delphi Consensus Recommendations by an International Cohort of Health Professions Educators.” Postgraduate Medical Journal. doi:10.1136/postgradmedj-2014-133230.

Loeb, Stacy, Christopher E Bayne, Christine Frey, Benjamin J Davies, Timothy D Averch, Henry H Woo, Brian Stork, Matthew R Cooperberg, and Scott E Eggener. 2014. “Use of Social Media in Urology: Data from the American Urological Association (AUA).” British Journal of Urology International 113 (6): 993–98. doi:10.1111/bju.12586.

Mallin, Mike, Sarah Schlein, Shaneen Doctor, Susan Stroud, Matthew Dawson, and Megan Fix. 2014. “A Survey of the Current Utilization of Asynchronous Education among Emergency Medicine Residents in the United States.” Academic Medicine 89 (4): 598–601. doi:10.1097/ACM.0000000000000170.

Paterson, Quinten S, Isabelle N Colmers, Michelle Lin, Brent Thoma, and Teresa Chan. 2015. “The Quality Checklists for Health Professions Blogs and Podcasts.” http://www.aliem.com/quality-checklists-for-health-professions-education-blogs-and-podcasts/.

Paterson, Quinten S, Brent Thoma, W. Kenneth Milne, Michelle Lin, and Teresa M Chan. 2015. “A Systematic Review and Qualitative Analysis to Determine Quality Indicators for Health Professions Education Blogs and Podcasts.” Journal of Graduate Medical Education (December). doi:10.4300/JGME-D-14-00728.1.

Purdy, Eve, Brent Thoma, Joseph Bednarczyk, David Migneault, and Jonathan Sherbino. 2015. “The Use of Free Online Educational Resources by Canadian Emergency Medicine Residents and Program Directors.” Canadian Journal of Emergency Medicine 17 (2): 101–6. doi:10.1017/cem.2014.73.

Thoma, Brent, Teresa Chan, Javier Benitez, and Michelle Lin. 2014. “Educational Scholarship in the Digital Age: A Scoping Review and Analysis of Scholarly Products.” The Winnower 1 (e141827.77297): 1–13. doi:10.15200/winn.141827.77297.

Thoma, Brent, Teresa M Chan, Quinten S Paterson, W. Kenneth Milne, Jason L Sanders, and Michelle Lin. 2015. “Emergency Medicine and Critical Care Blogs and Podcasts: Establishing an International Consensus on Quality.” Annals of Emergency Medicine (March). doi:10.1016/j.annemergmed.2015.03.002.

 


Reviews


  • William Milne

    Very good work on this topic. Quality checklists exist for other forms of research, and there is a need for something to evaluate blogs and podcasts.

    Here are some suggestions:
    The dichotomous response is limiting. Other checklists allow three responses, such as YES, NO, and UNCERTAIN/UNSURE.

    Allowing a space/text box to provide additional information could be helpful, especially for the uncertain/unsure responses.

    • Isabelle Colmers

      We appreciate your suggestion to include an intermediate category of the checklist. This did generate discussion among our group! On the one hand, for producers and editors, our scale acts as a guide and reminder of the elements of a high quality resource. For these users, an intermediate element of the scale has little bearing. Further, having a “not applicable” box may discourage consideration of quality indicators. For example, if a blog had no editorial process, an important question to consider is “Why not?”. Even the best writers have editors. A second opinion on the content, layout, etc of the article would increase the quality of the work.
      On the other hand, for end-users this scale may be more difficult to use and interpret if it were dichotomous. Since we do not yet have the research to substantiate cutoffs for high vs medium vs low quality of a resource, interpreting the quality of the resource is guided by the checklist but ultimately done subjectively by the end-user. Including an “unclear” criterion may, as you suggest, be rather helpful in guiding the user’s interpretation of the overall quality of a resource. Having these criteria will be beneficial for future validation of the checklist and development of criteria for high/medium/low quality resources. In the end, we have updated the checklists to include a third “unclear” category.

      Lastly, great suggestion to include a space for subjective interpretation; we have added this to the revised checklists.

  • Sandy Dong

    Great work. As an end-user of evidence, I would like to know whether there was a comprehensive search of source material for the podcast, i.e., how does the consumer know that the authors did a comprehensive search (à la systematic review)? This may not have come up in the derivation.

    • Isabelle Colmers

      Dear Dr. Dong,

      Thank you for this question; it is an important one. The quality checklists combine the quality indicators that achieved ≥90% agreement from expert consensus groups of either blog and podcast producers (Thoma 2015) or medical educators (Lin 2015). The original list of 151 quality indicators was identified through a systematic review of the literature and qualitative analysis. Rest assured, the comprehensive search strategy did include terms around “podcast” and “blog”. The original systematic review article by Paterson et al. can be found here (Epub ahead of print): http://dx.doi.org/10.4300/JGME-D-14-00728.1. A sample search strategy is included with the article as well.

  • Li Khim Kwah

    Thank you for the opportunity to review this article. I consider myself both an end-user and a researcher, and would like to comment on the quality checklist for blogs.

    About the article: Two quality checklists (for rating blogs and podcasts) have been published by Colmers and team. The background work leading to these checklists has included a systematic review of quality indicators for secondary educational resources (in press) and two modified Delphi studies (1, 2). My comments below were made after attempting to use the quality checklist on three physical therapy blogs obtained from Physiopedia (http://www.physio-pedia.com/Blogs). The three blogs were http://alteredhaemodynamics.blogspot.co.uk/, http://noijam.com/ and https://rogerkerry.wordpress.com/.

    Major comments/recommendations

    1. How should the quality checklist be scored?
    It appears that the user is offered two options: ticking the box if the criterion is met and not ticking the box if it is not. What happens if it is unclear? Does the user simply not tick the box? What happens if the items do not apply to all blogs? For example, some blogs do not have an editorial process before publishing an article. Does the user not tick the box, or fill in “N/A”? It would be beneficial to have a detailed explanation of how to use the quality checklist and examples of when an item meets or does not meet the criteria. Some examples of quality checklists that provide explanations on usage include the PEDro rating scale (http://www.pedro.org.au/english/downloads/pedro-scale/) and the Cochrane risk of bias tool (http://handbook.cochrane.org/, see Chapter 8). Both checklists are used to rate the quality of randomised controlled trials.

    2. How should conclusions on the quality of a blog be made based on the results of the checklist?
    Do users sum the score (i.e. count the number of ticks), and is there a cut-off score that determines low, moderate or high quality of a blog? Or do users apply a more complex decision-making process, e.g. placing more or less weight on certain items when considering the credibility, content or design of the blog? This has certainly been a controversial issue between the PEDro rating scale and the Cochrane risk of bias tool, leading to different results and different conclusions being made on the efficacy of interventions (3), which sadly makes things very confusing for clinicians and consumers.

    3. Subjectivity of items
    There were several items that were hard to score as they were very subjective. Some examples include:
    • “Is the author well qualified to provide information on the topic?” – What defines well qualified? Does the person need a PhD? Must the PhD be in the area which he/she is writing about? Must they be actively training or actively publishing in that area? Must they have had more than a certain number of years of experience?
    • “Are the resource’s statements consistent with its references?” “Is the information presented in the resource of a consistent quality?” – What determines consistency? Will this be based on a certain number of statements? What if there are no references?
    • “Is the content of the resource presented in a logical, clear and coherent way? Is the topic of the resource well defined and labelled appropriately? Does the content meet generally accepted standards for journalistic professionalism?” – I understand the importance of these quality indicator items, which reflect the quality of writing, but they are very subjective to rate. A paper that reads well to me might not read well to another colleague. Maybe provide some guidelines on what you mean by “journalistic professionalism”? I would be interested in the intra- and inter-rater reliability of raters using this quality checklist.
    • “Is the resource stable (i.e. does not crash, links work, etc.)?” – Will this be based on a certain number of blog posts? It might be good to state a number, e.g. the latest 10 blog entries.

    4. Relevance of items
    • “Are there comments from other learners/contributors that endorse or refute the information presented in the resources?” – I am slightly doubtful that this item reflects credibility. A blog entry could be well written by a credible author, but the blog might lack a high public profile or social media presence. Even if a blog does not meet this criterion, that does not mean it is not credible. Depending on how conclusions about quality will be made, I would assign less weight to this item.
    It might be worthwhile to consider creating two mock blogs (one of good quality and one of poor quality) and using the checklist to highlight which sections of each blog meet or do not meet the criteria. Just a suggestion that may help clarify use of the quality checklist.

    Minor comments/recommendations

    1. Number of quality indicator items does not tally – It is noted in the paper that the quality checklist for blogs contains 12 quality indicator items, but there seem to be 19 items on the checklist.

    2. Uncertain re: blank left-hand column – Is the column for numbering the quality indicator items? If it is for numbering, the column can be removed and a number entered before each item. Otherwise, label the column.

    On a last note, I would like to commend the authors for forging ahead in this field and for taking appropriate steps to identify the best quality indicator items. I really enjoyed reading about the modified Delphi studies and can see that the authors have put a lot of thought in trying to remove potential biases (e.g. with the second modified Delphi study which included medical educators who were not bloggers or podcasters). Considering the large number of blogs and podcasts available, it is crucial that educators, clinicians and consumers are offered some guidance on how to separate the wheat from the chaff. Colmers and team are definitely working in the right direction with these two quality checklists. If more guidance can be provided on how to use the quality checklists, I am certain these checklists will be valuable tools.

    References

    1. Lin M, Thoma B, Trueger NS, Ankel F, Sherbino J, Chan T. Quality indicators for blogs and podcasts used in medical education: Modified Delphi consensus recommendations by an international cohort of health professions educators. Postgraduate Medical Journal. 2015
    2. Thoma B, Chan TM, Paterson QS, Milne WK, Sanders JL, Lin M. Emergency medicine and critical care blogs and podcasts: Establishing an international consensus on quality. Annals of Emergency Medicine. 2015
    3. Armijo-Olivo S, da Costa BR, Cummings GG, Ha C, Fuentes J, Saltaji H, et al. PEDro or Cochrane to assess the quality of clinical trials? A meta-epidemiological study. PLoS One. 2015;10:e0132634

    I have no conflicts of interest to declare.

    • Isabelle Colmers

      Dear Dr. Kwah,

      Many thanks for taking the time to provide an extensive and thoughtful review of our article! Your interpretation of the article is correct and we especially commend you on trialing the checklists on three different blogs. You brought up some excellent points that we have addressed in our manuscript and in the comments below.

      Major:
      1. Thank you for this question. Our checklist was originally inspired by the PRISMA checklist (Moher et al 2009), a list used by authors and editors to promote good reporting and transparency for systematic reviews. However, in addition to good reporting elements, our scale includes qualitative elements similar to the Cochrane Risk of Bias or PEDro scales that you referenced.
      You are correct in pointing out that other rating scales include explanations for their users. We decided against including this for two reasons: 1) when we conducted the research on these elements, the consensus group did not have any further explanation of the items; and 2) we feel the elements are largely self-explanatory. That said, based on user experience as the checklists are trialed further, we may need to revisit the need for further explanation.
      We appreciate your suggestion to include an intermediate category of the checklist. This did generate discussion among our group! On the one hand, for producers and editors, our scale acts as a guide and reminder of the elements of a high quality resource. For these users, an intermediate element of the scale has little bearing. Further, having a “not applicable” box may discourage consideration of quality indicators. For example, if a blog had no editorial process, an important question to consider is “Why not?”. Even the best writers have editors. A second opinion on the content, layout, etc of the article would increase the quality of the work.
      On the other hand, for end-users this scale may be more difficult to use and interpret if it were dichotomous. Since we do not yet have the research to substantiate cutoffs for high vs medium vs low quality of a resource, interpreting the quality of the resource is guided by the checklist but ultimately done subjectively by the end-user. Including an “unclear” criterion may, as you suggest, be rather helpful in guiding the user’s interpretation of the overall quality of a resource. Having these criteria will be beneficial for future validation of the checklist and development of criteria for high/medium/low quality resources. In the end, we have updated the checklists to include a third “unclear” category.

      2. This is a good question. The intention was to develop a tool that lists the indicators of quality based on our research findings. Unfortunately, we do not yet have evidence to determine cutoffs for high vs. moderate vs. low quality of a blog, and setting arbitrary cutoffs may not be particularly helpful. Until this research is conducted, users of the checklists will need to use their own judgement in interpreting the quality of a blog or podcast; the more indicators present, the more likely it is of high quality. We have added an intermediate category in order to help guide the user’s gestalt. Ultimately, this tool cannot yet be used for evidence-based quality rating, but it can encourage users and producers to consider what makes a quality resource.

      3. It can be difficult to derive an objective score using a subjective tool. As the quality indicators are written, there are inherent assumptions that must be made by the user of the checklist. While it can be useful to thoroughly define all terms used, this was not done in the previous studies; however, the participants in the Delphi studies all seemed to have an inherent understanding of the meaning of each item, as shown by the fact that these quality indicators met the 90% cutoff. While the quality indicators cannot be further defined at this point (doing so would undermine the work previously done), we feel the terms as written are enough to guide users and producers in assessing quality, with their gestalt ultimately clearing up any uncertainty.
      For example, the qualifications of the author should always be listed. There will not be a dichotomous cutoff for what constitutes a “well qualified” author, but the user can consider how much authority the author holds. Regarding “consistency”, we feel the term is self-explanatory. This point encourages users and producers to consider the primary literature and ensure there is no major disconnect. If there are no references, this may be seen as a red flag.
      Regarding “coherence”, you are right that this, again, is subjective. Readability differs among users, but arguably a high-quality resource can tailor its content to suit its intended audience. For stability, this will again depend on the user’s experience: no issues is a pass; constant issues are a fail.
      Ultimately, each item may be scored differently by different users (for the more subjective items, anyway). We are okay with this, though, as certain resources appeal to certain users more than others, and quality resources are produced in a way that reaches their intended audience.

      4. You’re correct: a lack of endorsement does not decrease a resource’s quality, but its presence does strengthen it. Additionally, a refutation may cast doubt on an item’s credibility. I like to think of this item as a specific marker: a positive endorsement is positive for quality, whereas a lack of endorsement does not rule out a quality resource.
      Your suggestion to create mock blogs is a great idea that we will be sure to consider. Ideally, we would love to hear of others’ real experiences (such as your own) in applying and interpreting the checklists with real blogs and podcasts.

      Minor:
      1. You are correct. However, the final list of quality indicators used the combined results from both papers (Lin 2015 and Thoma 2015), as they used different expert groups to identify the quality measures (medical educators and content producers, respectively).

      2. We have numbered the items for clarity.

      Thank you kindly. We appreciate your comprehensive review.

  • L Mohan Arun

    "Through a rigorous research process, a list of 151 quality indicators for blogs and podcasts was formed and refined to elicit the most important quality indicators."

    Information Quality Assessment Parameters: (not necessarily only for medical education!)
    Authority
    Accuracy
    Bias
    Currency
    Coverage

    Coverage is really the sum total of Authority, Accuracy, Bias and Currency

    If you google “ABCs of Web Literacy” and click on the [PDF] link, you can find all of the above in PowerPoint slide format (PDF). This work was done by the University of Pennsylvania’s Penn Libraries for evaluating information on the web.

    Information Quality Awareness Resources:
    Google for "Information Quality Resources" and look for Marcus Zillman's PDF.
    (whitepapers / virtualprivatelibrary)

    I have no conflicts of interest to declare.

    • Isabelle Colmers

      Dear Dr. Arun,

      Thank you for sharing these resources with us. Our themes were created based on a systematic search of the evidence and subsequent thematic analysis. It is great to note that, when we break down the individual items, our tools overlap with most of the key elements. Additionally, the resources you have shared are exactly the types of resources that were analysed in our first study (systematic review and qualitative analysis). The difference between your resources and these checklists is their relevance to blogs and podcasts in medical education. Though the lists are similar, ours have now been shown through a scientific process to be applicable to these newer resources.


  • Tom Roper

    I read your article with interest. While you mention that you undertook a systematic review from which you hoped to derive quality indicators, I cannot see that your search is reported as the PRISMA statement requires.
    I declare I have no competing interests.
    Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group (2009). Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. BMJ 2009;339:b2535, doi: 10.1136/bmj.b2535

    • Teresa Chan

      Hi Tom:
      Thanks for your comments! This is a review paper that synthesizes materials previously published in other journals. The systematic review can be found at the following link:
      http://www.jgme.org/doi/abs/10.4300/JGME-D-14-00728.1

      The citation for the work is here, for your reference.
      Paterson, Quinten S, Brent Thoma, W. Kenneth Milne, Michelle Lin, and Teresa M Chan. 2015. “A Systematic Review and Qualitative Analysis to Determine Quality Indicators for Health Professions Education Blogs and Podcasts.” Journal of Graduate Medical Education, no. December. doi:http://dx.doi.org/10.4300/JGME-D-14-00728.1.

      As it states in the actual paper by Paterson et al.:
      "All efforts were made to adhere to guidelines set by the PRISMA statement and SRQR."

      However, as we did a special kind of synthetic analysis in that review, the reporting was a bit different from PRISMA. Of note, some of the reporting materials you may be looking for are within the online supplement for that article.

      Thanks for reading our work!
      Regards,
      Teresa Chan

License

This article and its reviews are distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and redistribution in any medium, provided that the original author and source are credited.