This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ PrePrints) and either DOI or URL of the article must be cited.
Cite this article
Badgett RG, Dylla DP, Megison SD, Harmon EG. 2014. An experimental search strategy retrieves more precise results than PubMed and Google for questions about medical interventions. PeerJ PrePrints 2:e604v1 https://doi.org/10.7287/peerj.preprints.604v1
Objective: To compare the precision of a search strategy designed specifically to retrieve randomized controlled trials (RCTs) and systematic reviews of RCTs with search strategies designed for broader purposes.

Methods: We designed an experimental search strategy that automatically revised searches up to five times by using increasingly restrictive queries as long as at least 50 citations were retrieved. We compared the ability of the experimental and alternative strategies to retrieve studies relevant to 312 test questions. The primary outcome, search precision, was defined for each strategy as the proportion of relevant, high-quality citations among the first 50 citations retrieved.

Results: The experimental strategy had the highest median precision (5.5%; interquartile range [IQR]: 0% - 12%), followed by the narrow strategy of the PubMed Clinical Queries (4.0%; IQR: 0% - 10%). The experimental strategy found the most high-quality citations (median 2; IQR: 0 - 6) and was the strategy most likely to find at least one high-quality citation (73% of searches; 95% confidence interval 68% - 78%). All comparisons were statistically significant.

Conclusions: The experimental strategy performed best on all outcomes, although all strategies had low precision.
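The iterative narrowing described in the Methods can be sketched as a short loop. This is a hypothetical illustration, not the authors' implementation: `run_search` and the ordered list of query tiers are assumed placeholders, and the 50-citation threshold and five-iteration cap come from the abstract.

```python
def iterative_search(question, query_tiers, run_search,
                     max_iterations=5, threshold=50):
    """Revise a search with increasingly restrictive queries while the
    result set stays at or above `threshold` citations.

    question     -- the clinical question text (assumed input)
    query_tiers  -- query filters ordered broad -> narrow (hypothetical)
    run_search   -- callable(question, tier) -> list of citations (hypothetical)
    """
    # Start with the broadest query.
    results = run_search(question, query_tiers[0])
    # Revise up to max_iterations times, one tier at a time.
    for tier in query_tiers[1:max_iterations + 1]:
        if len(results) < threshold:
            break  # the result set is already small; stop narrowing
        narrower = run_search(question, tier)
        if narrower:
            results = narrower
    # Precision is computed over the first 50 citations retrieved.
    return results[:threshold]
```

Under these assumptions, narrowing stops as soon as a query returns fewer than 50 citations or the tiers are exhausted, matching the "up to five times ... as long as at least 50 citations were retrieved" rule.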
This manuscript will be a submission to PeerJ for peer review.
Precision by number of iterations used by the experimental search engine