The positive predictive value of genetic screening tests

NOT PEER-REVIEWED
"PeerJ Preprints" is a venue for early communication or feedback before peer review. Data may be preliminary.

Supplemental Information

Figure 1. Intended Use or Purpose of the Test

The clinical implications of false results must be considered when determining the appropriate sensitivity and specificity for the intended use. Tests intended for higher-risk populations (diagnostic testing) may tolerate more false positives than tests intended for low-risk populations (screening). The false positive rate is reflected in the test's clinical specificity.

DOI: 10.7287/peerj.preprints.27922v1/supp-1

Table 1. Generic test development definitions and examples of their application to genetic testing

DOI: 10.7287/peerj.preprints.27922v1/supp-2

Figure 2. Impact of prevalence on PPV

When the disease prevalence in the test population is low, small changes in specificity can have a large impact on the positive predictive value of the test. PPV as a function of prevalence for two tests: Test A (blue), with a sensitivity of 99% and a specificity of 99%; and Test B (red), with a sensitivity of 99% and a specificity of 96%. a) Full range of possible PPV and prevalence, from 0 to 1. b) Magnified region of prevalence <0.1, with a gray line marking an example prevalence of 0.02. A decrease of only 3% in specificity can mean a 50% decrease in PPV: from 0.66 to 0.33. Adapted with permission from Romero-Brufau, et al. 45 The prevalence of the disease is equal to the a priori probability that a subject selected at random from the test population has the condition.

DOI: 10.7287/peerj.preprints.27922v1/supp-3
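The drop shown in Figure 2 can be checked with a short calculation. This is a minimal sketch: the function name `ppv` is illustrative, and the formula is the standard Bayes' rule form used later in Table 3.

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value: TP / (TP + FP), per Bayes' rule."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    return tp / (tp + fp)

prev = 0.02  # example prevalence marked by the gray line in panel b
ppv_a = ppv(0.99, 0.99, prev)  # Test A: ~0.67
ppv_b = ppv(0.99, 0.96, prev)  # Test B: ~0.34
print(f"Test A PPV: {ppv_a:.2f}, Test B PPV: {ppv_b:.2f}")
```

A 3-point drop in specificity (99% to 96%) roughly halves the PPV at this prevalence, as the figure caption states.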

Table 2. Estimating the Test Specificity

Test specificity is estimated by assuming that ⅓ of the overall positive rate is due to likely pathogenic (LP) variants and then calculating the specificity as described in Supplemental Figure 2: Specificity = TN/(TN + FP) = (1 − Positive Rate)/(1 − (29/30) × Positive Rate). Note how specificity changes with the overall positive rate (orange). *Some conditions on the ACMG59™ list have a prevalence less than 1/10,000.

DOI: 10.7287/peerj.preprints.27922v1/supp-4

Table 3. Computing the PPV from prevalence and specificity

Small decreases in specificity can have a substantial impact on the PPV of likely pathogenic variants over a range of prevalences representative of monogenic inherited diseases. The PPV for CDC Tier 1 (yellow) and ACMG59™ (orange) conditions is calculated as PPV = sensitivity × prevalence / [sensitivity × prevalence + (1 − specificity) × (1 − prevalence)]. 46 The specificity estimates for both sets of conditions are from Table 2. The model is intended to provide estimates and show trends; in practice, each condition should be considered individually.

DOI: 10.7287/peerj.preprints.27922v1/supp-5
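The Table 2 and Table 3 calculations can be combined into one sketch: estimate the specificity from an observed positive rate, then feed it into the PPV formula. This is illustrative only: the function names are my own, sensitivity is fixed at 1.0 following the 100% analytical sensitivity assumption in Supplemental Figure 2, and the prevalence values are example figures, not rows from the actual tables.

```python
def estimated_specificity(positive_rate):
    # Table 2 method: 1/30 of all positives are assumed false positives,
    # so TN = 1 - positive_rate and FP = positive_rate / 30.
    return (1 - positive_rate) / (1 - (29 / 30) * positive_rate)

def ppv(sensitivity, specificity, prevalence):
    # Table 3 formula
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    return tp / (tp + fp)

spec = estimated_specificity(0.05)  # ~0.9982 for a 5% positive rate
for prev in (1 / 10_000, 1 / 2_000, 1 / 500):
    print(f"prevalence {prev:.5f} -> PPV {ppv(1.0, spec, prev):.3f}")
```

Even with specificity above 99.8%, the PPV stays low at the prevalences typical of monogenic diseases, which is the trend Table 3 is meant to show.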

Supplemental Figure 1.

The sensitivity and specificity of a test can be adjusted during the test development phase to allow more or fewer false positives or false negatives, as appropriate for the test's intended use. The graphs show the results obtained when the new test was run on a cohort of known positive samples and known negative samples, as determined by an existing diagnostic test. Ideally, the new test would perfectly discriminate between positive and negative samples; in practice this is rarely the case, so thoughtful trade-offs between desired sensitivity and specificity are necessary. FP = false positive, FN = false negative, PPV = positive predictive value, NPV = negative predictive value

DOI: 10.7287/peerj.preprints.27922v1/supp-6

Supplemental Table 1. Screening test and confirmatory diagnostic test pairs

Examples of screening tests and confirmatory tests. A confirmatory test can serve as the "gold standard" or index test comparator during test validation, as well as a follow-up test for positive screening results in practice.

DOI: 10.7287/peerj.preprints.27922v1/supp-7

Supplemental Figure 2. Relationship between overall positive rate and specificity

In this example, the test has 100% analytical sensitivity and specificity, as determined during test validation. After processing 3,000 samples, the lab determined that the positive rate in its setting is 5%, giving 150 positive results and 2,850 negative results. The positive/negative cutoff has been set to allow 1 in 10 LP positives to be a false positive; because LP variants account for ⅓ of all positives, 1 in 30 of all positives are false positives. Therefore, 5 of the positive results are false positives, and test specificity = 2850/(2850 + 5) = 99.82%. This method is used to calculate specificities for Tables 2 and 3 for varying positive test rates.

DOI: 10.7287/peerj.preprints.27922v1/supp-8
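The arithmetic in the worked example above can be written out directly (a minimal sketch of the same count-based calculation; variable names are illustrative):

```python
samples = 3_000
positive_rate = 0.05

positives = samples * positive_rate       # 150 positive results
negatives = samples - positives           # 2,850 negative results (all true negatives here)
false_positives = positives / 30          # 1 in 30 positives is a false positive -> 5

specificity = negatives / (negatives + false_positives)
print(f"specificity = {specificity:.2%}")  # 99.82%
```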

Additional Information

Competing Interests

Dr. Hagenkord is a shareholder in Color Genomics. Dr. Funke and Ms. Qian are shareholders in Veritas Genetics. Dr. Hegde is a shareholder in Perkin Elmer. Mr. Jacobs is a shareholder in Progenity. The remaining authors have no conflicts of interest to declare.

Author Contributions

Jill Hagenkord analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, approved the final draft.

Birgit Funke analyzed the data, authored or reviewed drafts of the paper, approved the final draft.

Emily Qian analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, approved the final draft.

Madhuri Hegde analyzed the data, authored or reviewed drafts of the paper, approved the final draft.

Kevin B Jacobs analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, approved the final draft.

Matthew Ferber analyzed the data, authored or reviewed drafts of the paper, approved the final draft.

Matthew Lebo analyzed the data, authored or reviewed drafts of the paper, approved the final draft.

Adam H Buchanan analyzed the data, authored or reviewed drafts of the paper, approved the final draft.

David Bick analyzed the data, authored or reviewed drafts of the paper, approved the final draft.

Data Deposition

The following information was supplied regarding data availability:

Not applicable.

Funding

The authors received no funding for this work.

