Dear authors, great work!
This paper would help me a lot in communicating the importance of sharing work early and getting early feedback on possible errors. I would add one sub-item to no. 9: "Neither represents the quality of articles." There is a chronic habit in Southeast Asian countries, including Indonesia, of using these indexing services as indicators of quality. The government assigns high scores to articles indexed by both services, and it has even now removed DOAJ from its list of reputable indexing services. So we need to clear up the misunderstanding between getting indexed and assessing scientific results.
You might consider mentioning these two earlier pieces covering similar ground:
Peter Suber, "Open access: six myths to put to rest," The Guardian, October 21, 2013.
Peter Suber, "A field guide to misunderstandings about open access," SPARC Open Access Newsletter, April 2, 2009.
Toward the end of Myth 6, you cite "Suber (2007)". But not even I can tell which work you had in mind. Could you list the work in the bibliography?
If you're referring to rights-retention OA policies in that passage, then I also recommend this more detailed work:
Stuart Shieber and Peter Suber, Good practices for university open-access policies, Harvard Open Access Project, 2012 - present (continuously updated).
Thank you for citing our JLSC article "Institutional Repositories and Academic Social Networks: Competition or Complement? A Study of Open Access Policy Compliance vs. ResearchGate Participation." Note that the version of record is open access at http://doi.org/10.7710/2162-3309.2183, so you might prefer to cite that instead of the repository version. Also, there is no in-text reference to our article; the article appears only in the reference list. Perhaps it belongs under Myth 6, paragraph 4? Thank you.
Thanks for this informative work on the misleading myths about open access publishing that some people use to resist the open access movement. I always encourage researchers to publish their papers on OA platforms: https://figshare.com/articles/Improving_Research_Visibility_Part_4_Open_Access_Repositories/5010749 . Thank you and well done.
I add my thanks for addressing myths around open scholarship with evidence. I have shared this paper continually since it became available, encouraging people to put Figure 1 and Figure 5 (page 18) on their pin boards: Figure 1 for when someone trots out an open access myth, and Figure 5 for how to publish consistently with the mission and funding levels of a US land-grant university. One question we got yesterday, when we shared the full text of this preprint on paper with a visiting scholar during a brief meeting, was about article versions. I believe article versions are covered under Myth 8 in the text; I wanted to let you know that our scholarly communication librarian and a graduate student who worked with her developed this visual to help with article version questions: file:///Users/annviera/Downloads/ArticleVersions_final.pdf
I like the style of writing. Fairly impartial.
In Myth 7, I would remove the words "near final" (the bracketed text explains the usual case, and publishers sometimes do allow the final version to be used).
I (and others) often take "gold" to mean the free-to-access published version on the publisher's site, permitting copying and re-use. We find that too many terms cause confusion: diamond is a form of gold with no cost to the author, and bronze is gold without an explicit licence. I have often wondered why we made up colour terms instead of just saying what something is.
I wonder whether it can be evidenced more clearly that non-APC open access is funded from institutional grants; if this is merely one example of possible funding options, perhaps the wording could be amended to say so.
I support many of the ideas presented in this paper, in particular the critical perspective on journal peer review. However, I believe that the objections to using the journal impact factor as a measure of quality for authors (Myth 2) could be presented more convincingly. The authors currently open their discussion of Myth 2 with the least convincing argument, namely the one based on the skewness of citation distributions. For the reasons discussed in https://arxiv.org/abs/1703.02334, this statistical argument is not convincing. Other objections to using the journal impact factor as a measure of quality for authors are much more important.