PeerJ Preprints: Ethical Issues
https://peerj.com/preprints/index.atom?journal=peerj&subject=7500
Ethical Issues articles published in PeerJ Preprints

A guide to applying the Good Publication Practice 3 Guidelines in the Asia-Pacific region
https://peerj.com/preprints/27892
2019-08-19
Blair Hesp, Katsuhisa Arai, Magdalene Chu, Stefanie Chuah, Jose Miguel Curameng, Sandeep Kamat, Zhigang Ma, Andrew Sakko, Hazel Fernandez
Numerous recommendations and guidelines aim to improve the quality, timeliness and transparency of medical publications. However, these guidelines use ambiguous language that can be challenging to interpret, particularly for speakers of English as a second language. Cultural expectations within the Asia-Pacific region raise additional challenges. Several studies have suggested that awareness and application of ethical publication practices in the Asia-Pacific region is relatively low compared with other regions. However, guidance on applying ethical publication practice guidelines in the Asia-Pacific region is lacking. This review aims to improve publication practices in the Asia-Pacific region by providing guidance on applying the 10 principles of the Good Publication Practice 3 (GPP3) guidelines and the International Committee of Medical Journal Editors (ICMJE) criteria for authorship. Recommendations are provided for encore presentations, applying the ICMJE authorship criteria in the context of regional cultural expectations, and the role of study sponsors and professional medical writers. Ongoing barriers to compliance with guidelines are also highlighted, and additional guidance is provided to support authors submitting manuscripts for publication. The roles of regional journals, regulatory authorities and professional bodies in improving practices are also discussed.

Ten simple rules for a successful remote postdoc
https://peerj.com/preprints/27907
2019-08-18
Kevin R Burgio, Caitlin McDonough MacKenzie, Stephanie B Borrelle, S. K. Morgan Ernest, Jacquelyn L Gill, Kurt E Ingeman, Amy K Teffer, Ethan P White
Postdoctoral positions are temporary full-time positions typically taken between completion of a PhD and the start of a permanent position. Postdocs are expected to move for short-term positions, which can often be problematic for early-career researchers, especially those from under-represented groups in STEM. However, the proliferation of computational research has changed how scientists can conduct science, opening the door to postdoctoral work being conducted remotely. Research activities primarily involving quantitative analysis, modeling, writing, and data collection can take place anywhere and therefore can all be conducted on a remote or semi-remote basis. We offer 10 simple rules for overcoming challenges and leveraging the unique opportunities presented by remote postdoc positions, derived from our experiences as either remote postdocs or the PIs who have mentored them. We believe not only that these suggestions will increase the desirability of remote postdoc positions whenever they are feasible, but also that they contain good practices for facilitating better communication within labs more generally and in other long-distance collaborations.

Twisted tale of the tiger: the case of inappropriate data and deficient science
https://peerj.com/preprints/27349
2019-07-31
Qamar Qureshi, Rajesh Gopal, Yadvendradev V Jhala
Publications in peer-reviewed journals are often looked upon as tenets on which future scientific thought is built. Published information is not always flawless, and errors in published research should be promptly reported, preferably through a peer-review process. We review a recent publication by Gopalaswamy et al. (doi:10.1111/2041-210X.12351) that challenges the use of “double sampling” in large-scale animal surveys. Double sampling is an established, economical and practical approach often used for large-scale surveys because it calibrates abundance indices against absolute abundance, thereby potentially addressing the statistical shortfalls of indices. The empirical data used by Gopalaswamy et al. to test their theoretical model relate to tiger sign and tiger abundance and are referred to as an Index Calibration experiment (IC-Karanth). These data on tiger abundance and signs should be paired in time and space to qualify as a calibration experiment for double sampling, but the original IC-Karanth data show lags of up to several years. Further, data points used in the paper do not match the original sources. We show that, through the use of inappropriate and incorrect data collected under a faulty experimental design, poor parameterization of their theoretical model, and selectively picked estimates of detection probability from the literature, the inferences of this paper are highly questionable. We highlight how the results of Gopalaswamy et al. were further distorted in the popular media. If left unaddressed, the Gopalaswamy et al. paper could have serious implications for the statistical design of large-scale animal surveys by propagating unreliable inferences.

Plan S in Latin America: A precautionary note
https://peerj.com/preprints/27834
2019-07-11
Humberto Debat, Dominique Babini
Latin America has historically led a firm and rising Open Access movement and is the world region with the largest adoption of Open Access practices. Argentina has recently expressed its commitment to join Plan S, an initiative from a European consortium of research funders that aims to mandate Open Access publishing of scientific outputs. Here we suggest that the potential adhesion of Argentina or other Latin American nations to Plan S, even in its recently revised version, ignores the reality and tradition of Latin American Open Access publishing, and that the plan has yet to demonstrate that it will encourage the advancement of non-commercial Open Access initiatives at both the regional and global level.

Public opinion of captive cetacean attractions: A critique of Wassermann et al. (2018)
https://peerj.com/preprints/27852
2019-07-11
Heather M Manitzas Hill, Kelly Jaakkola
Wassermann et al. (2018, https://doi.org/10.7717/peerj.5953) argued that previous public opinion research about marine mammal attractions should be considered unreliable due to possible biases in study design, which may have influenced participants’ responses. As in all scientific endeavors, reducing bias in order to gather more objective, evidence-based information is a worthy and commendable goal. Unfortunately, Wassermann et al. fell short in their efforts to produce an unbiased investigation into the beliefs of the general public about captive marine mammal attractions, due to a number of methodological flaws and biases in their own study. Specific concerns include a non-representative sample, methodological issues with data collection and coding procedures, a lack of reliability between data published and data provided, a failure to demonstrate inter-coder reliability, a failure to control for sequence effects in quantitative data, misrepresentation of data between text and tables, and biased over-interpretation of qualitative responses. These errors undermine the authors’ conclusions and indeed render their findings uninterpretable. To achieve the goal of an unbiased understanding of public opinion about marine mammal attractions, further research on this topic is warranted using rigorous and sound scientific methodology.

Practical considerations for collaborative research between the pharmaceutical industry and external investigators
https://peerj.com/preprints/27785
2019-06-05
Maureen Lloyd, Cynthia K Barbitsch, Mary Voehl Hirsch, Antonia Panayi, Eric Southam
Traditionally, clinical research has been conducted via either industry-sponsored studies or non-industry investigator-sponsored studies. Collaborative Research provides a relatively new mechanism for industry and non-industry partners to work together in the pursuit of effective and safe treatments for patients. The aims of this article are to provide both industry and non-industry investigators with greater insight into the complex processes currently employed by industry when entering into Collaborative Research agreements, and to encourage consistency and transparency of approach across companies.
In Collaborative Research, instead of being limited to providing funding and/or product, the industry partner contributes expertise complementary to that of the non-industry partner, who is the sponsor of the study. Collaborative Research may be conducted before, during or after regulatory approval of a drug or medical device, and may be interventional, observational or preclinical.
A collaboration requires appropriate process and governance frameworks to be established in order to be successful. Important considerations include the routes for submitting a request, the review and approval process, due diligence criteria, budgeting and contracting processes, permissible interactions during the execution of the research, the closing out of the research, and dispute resolution. It is also necessary to have in place an agreed communication strategy and a risk control framework. Clear and specific contract language around roles and responsibilities, intellectual property, rights to data, registration and disclosure of publications, and an understanding of adverse event reporting procedures are other critical facets of Collaborative Research that are essential to avoid delays and disputes.
With no global standards for Collaborative Research, it is important that partners establish practical procedures, good ongoing communication, alignment of goals, and transparent interactions and disclosure to jointly advance the science of new, safe and effective therapies.

UK universities' compliance with the Concordat to Support Research Integrity: findings from cross-sectional time-series
https://peerj.com/preprints/27622
2019-03-30
Elizabeth Wager
Background. The Concordat to Support Research Integrity, published in 2012, recommends that UK research institutions should provide a named point of contact to receive concerns about research integrity (RI). The Concordat also requires institutions to publish annual RI statements.
Objective. To see whether contact information for a staff member responsible for RI was readily available from UK university websites and to see how many universities published annual RI statements.
Methods. UK university websites were searched in mid-2012, mid-2014 and mid-2018. The availability of contact details for RI inquiries, other information about RI and, specifically, an annual RI statement, was recorded.
Results. The proportion of UK universities publishing an email address for RI inquiries rose from 23% (31/134) in 2012 to 55% in 2018. The same proportion (55%) published at least one annual RI statement in 2018, but only three provided statements for every year from 2012/13 onwards. There was great variation in the titles used for the staff member with responsibility for RI, which made searching difficult.
Conclusion. More than six years after the publication of the Concordat to Support Research Integrity, nearly half of UK universities are not complying with all of its recommendations, failing to provide contact details for a staff member with responsibility for RI or an annual statement.

#Pay4Reviews: Academic publishers should pay scientists for peer-review
https://peerj.com/preprints/27573
2019-03-08
Rodolfo Jaffé
The exploitation of scientists by traditional academic publishers is widespread: publishers monopolize the right to distribute scientific papers, strip authors of their own articles’ copyrights, and charge them to read papers from their peers. It is then up to scientists to free themselves (and their papers) from the tyranny of academic publishers by refusing to perform peer review for them for free and by publishing open access when possible. Starved of peer reviewers, academic publishers would have nothing to publish, while subscription fees are doomed to disappear in an age of open science. This system would also create incentives to perform peer review: #Pay4Reviews

"Blacklists" and "whitelists" to tackle predatory publishing: A cross-sectional comparison and thematic analysis
https://peerj.com/preprints/27532
2019-02-13
Michaela Strinzel, Anna Severin, Katrin Milzow, Matthias Egger
Background. Despite growing awareness of predatory publishing and research on its market characteristics, the defining attributes of fraudulent journals remain controversial. We aimed to develop a better understanding of quality criteria for scholarly journals by analysing journals and publishers indexed in blacklists of predatory journals and whitelists of legitimate journals, together with the lists’ inclusion criteria.
Methods. We searched for blacklists and whitelists in early 2018. Lists that included journals across disciplines were eligible. We used a mixed-methods approach, combining quantitative and qualitative analyses. To quantify overlaps between lists in terms of indexed journals and publishers, we employed the Jaro-Winkler string metric and Venn diagrams. To identify topics addressed by the lists’ inclusion criteria and to derive their broader conceptual categories, we used a qualitative coding approach.
Results. Two blacklists (Beall’s and Cabell’s) and two whitelists (DOAJ and Cabell’s) were eligible. The number of journals per list ranged from 1,404 to 12,357 and the number of publishers from 473 to 5,638. Seventy-three journals and 42 publishers were included in both a blacklist and a whitelist. A total of 198 inclusion criteria were examined. Seven themes were identified: (i) peer review, (ii) editorial services, (iii) policy, (iv) business practices, (v) publishing, archiving and access, (vi) website and (vii) indexing and metrics. Business practices accounted for almost half of the blacklists’ criteria, whereas whitelists gave more emphasis to criteria related to policy and guidelines. Criteria were grouped into four broad concepts: (i) transparency, (ii) ethics, (iii) professional standards and (iv) peer review and other services. Whitelists gave more weight to transparency, whereas blacklists focused on ethics and professional standards. The criteria included in whitelists were easier to verify than those used in blacklists. Both types of list gave relatively little emphasis to the quality of peer review.
Conclusions. There is overlap between the journals and publishers included in blacklists and whitelists. Blacklists and whitelists differ in their criteria for quality and in the weight given to different dimensions of quality. Aspects that are central to quality but difficult to verify receive insufficient attention.
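For readers unfamiliar with the string metric named in the Methods, the following is a minimal, self-contained Python sketch of Jaro-Winkler similarity as one might apply it to match journal-name variants across lists. It is an illustration only, not the authors' code; the example titles are invented for the demonstration.

def jaro(s1: str, s2: str) -> float:
    # Jaro similarity: rewards shared characters, penalises transpositions.
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    if len1 == 0 or len2 == 0:
        return 0.0
    # Characters count as matching only within this sliding window.
    window = max(len1, len2) // 2 - 1
    match1, match2 = [False] * len1, [False] * len2
    matches = 0
    for i, c in enumerate(s1):
        lo, hi = max(0, i - window), min(len2, i + window + 1)
        for j in range(lo, hi):
            if not match2[j] and s2[j] == c:
                match1[i] = match2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    # Count matched characters that appear in a different order.
    k = transpositions = 0
    for i in range(len1):
        if match1[i]:
            while not match2[k]:
                k += 1
            if s1[i] != s2[k]:
                transpositions += 1
            k += 1
    t = transpositions / 2
    return (matches / len1 + matches / len2 + (matches - t) / matches) / 3

def jaro_winkler(s1: str, s2: str, p: float = 0.1) -> float:
    # Winkler boost: strings sharing a prefix (up to 4 chars) score higher.
    sim = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1[:4], s2[:4]):
        if a != b:
            break
        prefix += 1
    return sim + prefix * p * (1 - sim)

# Hypothetical journal-name variants; near-identical titles score close to 1.
print(jaro_winkler("journal of clinical oncology", "journal of clinical onclogy"))

In a comparison like the one described, names would typically be normalised (lower-cased, punctuation stripped) and pairs above a chosen similarity threshold treated as the same journal; the threshold itself is an analysis decision not specified in the abstract.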

Phylotocol: Promoting transparency and overcoming bias in phylogenetics
https://peerj.com/preprints/26585
2018-12-23
Melissa B DeBiasse, Joseph F Ryan
The integrity of science requires that the process be based on sound experimental design and objective methodology. Strategies that increase reproducibility and transparency in science protect this integrity by reducing conscious and unconscious biases. Given the large number of analysis options and the constant development of new methodologies in phylogenetics, this field is one that would particularly benefit from more transparent research design. Here, we introduce phylotocol (fī·lō·´tə·kôl), an a priori protocol-driven approach in which all analyses are planned and documented at the start of a project. The phylotocol template is simple and the implementation options are flexible to reduce administrative burdens and allow researchers to adapt it to their needs without restricting scientific creativity. While the primary goal of phylotocol is to increase transparency and accountability, it has a number of auxiliary benefits including improving study design and reproducibility, enhancing collaboration and education, and increasing the likelihood of project completion. Our goal with this Point of View article is to encourage a dialogue about transparency in phylogenetics and the best strategies to bring transparent research practices to our field.