Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.

View examples of open peer review.

Summary

  • The initial submission of this article was received on July 15th, 2024 and was peer-reviewed by 7 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on November 12th, 2024.
  • The first revision was submitted on December 2nd, 2024 and was reviewed by 3 reviewers and the Academic Editor.
  • A further revision was submitted on January 24th, 2025 and was reviewed by 1 reviewer and the Academic Editor.
  • The article was Accepted by the Academic Editor on February 19th, 2025.

Version 0.3 (accepted)

· Feb 19, 2025 · Academic Editor

Accept

Thank you for addressing the concerns of the reviewers. The reviewers and the editor agree with the revisions made and recommend publication of your manuscript.

Reviewer 5 ·

Basic reporting

The article covers all the basic reporting criteria.

Experimental design

Yes, this is original primary research within the Aims and Scope of the journal.

Validity of the findings

Conclusions are well stated, linked to original research question & limited to supporting results.

Version 0.2

· Jan 9, 2025 · Academic Editor

Minor Revisions

As noted by the reviewer, some final clarifications are needed.

Reviewer 5 ·

Basic reporting

The study evaluates the content of YouTube videos on the topic of postdural puncture headache.
The authors have made significant changes, but I think there are some points that can still be modified.
The hypothesis states that professionally generated content would have higher quality scores and better adherence to clinical guidelines than other sources, and that higher audience engagement metrics would be positively correlated with content quality and reliability.
I think the first part is expected, given that professional videos are made by people who have the knowledge and resources to produce higher-quality videos.
I think a better hypothesis would be that high engagement scores correlate well with quality scores.

Given the low quality scores of the patients' videos, a recommendation could be made that YouTube should list the quality score and label these videos as non-educational, so that the audience is not misled by the patient videos.

Experimental design

Just make a clear distinction between the professional videos and the patient-experience videos.

Validity of the findings

Findings are valid

Reviewer 6 ·

Basic reporting

No comment

Experimental design

no comment

Validity of the findings

no comment

Reviewer 7 ·

Basic reporting

The author meticulously revised the manuscript, addressing my inquiries in a comprehensive and unbiased manner. Currently, no novel issues have arisen.

Experimental design

The experimental design is flawless.

Validity of the findings

The research holds considerable significance.

Additional comments

I have no additional comments.

Version 0.1 (original submission)

· Nov 12, 2024 · Academic Editor

Major Revisions

Several reviewers were required, as the initial reviewers who commented did not provide sufficient input to help the authors improve their study.

Please address all the comments from the reviewers, except that you do not need to respond to the comments from Reviewers 1–3.

·

Basic reporting

An original research article written in fluent English. The article is about the content adequacy, reliability, and quality of YouTube videos on PDPH. Literature references are up to date.

Experimental design

The research article is consistent with the journal's objectives. The materials and methods of the study are well designed.

Validity of the findings

The findings of the study are clearly stated and the discussion section is written in an understandable manner.

Additional comments

The article is worth publishing.

Reviewer 2 ·

Basic reporting

....

Experimental design

...

Validity of the findings

...

Additional comments

....

Reviewer 3 ·

Basic reporting

Clear and unambiguous, professional English used throughout

Experimental design

Research question well defined, relevant & meaningful. Methods described with sufficient detail & information to replicate.

Validity of the findings

All underlying data provided; they are robust, statistically sound, & controlled. Conclusions are well stated, linked to original research question & limited to supporting results.

Additional comments

Dear author,
This article will contribute to the literature with its distinctive subject and fine writing. Best regards.

·

Basic reporting

no comment

Experimental design

Preparing a flow chart showing the selection of the 71 suitable videos from the 150 videos watched would improve your manuscript.

Validity of the findings

no comment

Additional comments

The references are not written according to the journal's rules. Rearrangement is recommended.

Reviewer 5 ·

Basic reporting

Abstract
This is a cross-sectional study evaluating the content adequacy, reliability, and quality of English-language YouTube videos about postdural puncture headache. Two independent reviewers assessed the videos using the DISCERN instrument, the Journal of the American Medical Association (JAMA) benchmark criteria, and the Global Quality Scale (GQS). Correlations between video characteristics and their reliability, content adequacy, and quality scores were examined.

Experimental design

A cross-sectional study.

Validity of the findings

The findings are valid in that the study measured what needed to be measured.

Additional comments

The study found that YouTube videos on PDPH uploaded by health-related websites or physicians are better than those uploaded by patients, which is expected: patients did not study medicine and speak from their own experience.
I think the study can be improved by:
- Clarifying the goal of the study and why it was done.
- Writing a hypothesis.
- Excluding videos made by patients.
- Looking more closely at the content: did the video explain the problem, the course of the disease, and the methods of management?
- Analyzing the videos posted by patients differently, in terms of patient experience, influence on the audience, and recommendations to the general public when watching these videos.

Reviewer 6 ·

Basic reporting

YouTube video views, likes, and dislikes may be subject to several confounding factors; they may also depend on the quality, gender, and qualifications of the healthcare worker.

How did the authors assess whether the YouTube videos presenting the management of PDPH followed evidence-based practice? If a video was based merely on the clinical experience of a healthcare worker, please mention their qualifications.

Please mention whether the details of treatment strategies, and their scientific evidence, are discussed in all videos. Many of the videos were uploaded by patients themselves and may lack scientific evidence.

Experimental design

Mention the aim of the study and the primary and secondary outcomes.

If possible, provide a study flow diagram of the inclusion, exclusion, and enrollment of videos.

Please mention whether the different management strategies (conservative and procedural) are discussed in the videos.

Validity of the findings

Appropriate

Additional comments

.

Reviewer 7 ·

Basic reporting

A very interesting study that may arouse great interest among readers. In terms of research design, it is relatively standardized, but I still have a few questions for you to address.

Experimental design

1. In line 97, 'Video sources were categorized as health-related websites, physicians, and patients.' How was the source identified? What is the identity of the health-related websites? Is there a possibility of crossover in this classification?
2. In line 100, 'the video power index (VPI)': I do not think this is a very objective indicator; as far as I know, it is easily manipulated and cannot explain anything.
3. In line 139, 'Spearman's Rho correlation coefficient was used to examine the relationships between non-normally distributed variables. Correspondingly, correlations with r values between 0.80-1.0, 0.60-0.79, 0.40-0.59, 0.20-0.39, and less than 0.20 were considered very strong, strong, moderate, weak, and very weak, respectively.' I do not think this classification is significant; just provide r in the results.

Validity of the findings

4. Although the results are great, this conclusion is a little obvious. From a research perspective, the work is impeccable, but it lacks some practical significance. It would be more meaningful if more valuable suggestions were added to the discussion: for example, whether an evaluation tool could be developed to require YouTube videos to improve in quality in the future. The author should give this more thought.

Additional comments

Thank you for your invitation and hope to establish good communication.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.