This page presents algorithms and explanations concerned with the quality of the relationship between a binary Test and a binary Outcome.
The statistics for evaluating the relationship between tests and outcomes arise in the following scenarios
The collection of reference data, to evaluate the quality of the relationship between a test and an outcome.
The development of parameters that can be generalized and used in future clinical situations.
In appropriate future situations, the use of these parameters to influence diagnostic decisions.
Terminology
Defining the parameters
The framework under consideration for this page is presented in the following 2 by 2 table.

                        Outcome Positive (OPos)    Outcome Negative (ONeg)
Test Positive (TPos)    True Positive (TP)         False Positive (FP)
Test Negative (TNeg)    False Negative (FN)        True Negative (TN)

A case is therefore classified as
True Positive (TP) if it is test positive and outcome positive
False Positive (FP) if it is test positive but outcome negative
False Negative (FN) if it is test negative but outcome positive
True Negative (TN) if it is test negative and outcome negative
The Test and Outcome must be defined. For example, we may decide to use the observation of
an unengaged head in early labour as the Test to predict the need for a Caesarean Section as the Outcome. In this scenario, an
observed unengaged head is Test Positive (TPos), and an engaged head is Test Negative (TNeg). A baby delivered by Caesarean
Section is Outcome Positive (OPos) and one delivered vaginally is Outcome Negative (ONeg). The combinations of these are as follows
A baby with unengaged head in early labour and subsequently delivered by Caesarean Section is Test Positive (TPos) and
Outcome Positive (OPos), so it is a case of True Positive (TP)
A baby with unengaged head in early labour and subsequently delivered vaginally is Test Positive (TPos) and
Outcome Negative (ONeg), so it is a case of False Positive (FP)
A baby with engaged head in early labour and subsequently delivered by Caesarean Section is Test Negative (TNeg) and
Outcome Positive (OPos), so it is a case of False Negative (FN)
A baby with engaged head in early labour and subsequently delivered vaginally is Test Negative (TNeg) and
Outcome Negative (ONeg), so it is a case of True Negative (TN)
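To make this classification concrete, the following minimal R sketch (an illustration only; it is not part of the programs presented later on this page) assigns a case to TP, FP, FN or TN from its test result and outcome.
classify <- function(testPositive, outcomePositive)   # both arguments are TRUE or FALSE
{
  if (testPositive && outcomePositive)  return("TP")  # test positive, outcome positive
  if (testPositive && !outcomePositive) return("FP")  # test positive, outcome negative
  if (!testPositive && outcomePositive) return("FN")  # test negative, outcome positive
  return("TN")                                        # test negative, outcome negative
}
classify(TRUE, TRUE)    # unengaged head (TPos) delivered by Caesarean Section (OPos): "TP"
classify(FALSE, TRUE)   # engaged head (TNeg) delivered by Caesarean Section (OPos): "FN"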
Procedures
Procedure 1. Planning
Data is collected to enable the evaluation of the relationship between Test and Outcome. Although prospective collection of data can take place, the common approach is to use data already collected retrospectively, for the following reasons
Easier access to a greater volume of information
The ability to select representative samples
The ability to have similar numbers of Outcome Positives (OPos) and Outcome Negatives (ONeg), so that the statistical power for detecting both is similar
The required sample size is then determined with the help of the tables in the sample size for prediction page. Once the required sample size is calculated, this number of Outcome Positive cases, and a similar number of Outcome Negative cases, are collected for evaluation.
Procedure 2. Evaluating the quality of Prediction using the data collected.
Once the data are collected, the numbers can be arranged according to the 2 by 2 table shown in the Terminology panel.
OPos and ONeg are the numbers of cases which are Outcome Positive and Outcome Negative
TPos and TNeg are the number of cases which are Test Positive and Test Negative
TP is the number of cases that are True Positive (TPos and OPos)
FP is the number of cases that are False Positive (TPos and ONeg)
FN is the number of cases that are False Negative (TNeg and OPos)
TN is the number of cases that are True Negative (TNeg and ONeg)
From these primary numbers, two sets of parameters can be calculated. They are listed in this panel to assist understanding only.
Parameters of quality
The True Positive Rate TPR = TP / OPos is the proportion of Outcome Positives that are Test Positive. TPR is also known as Sensitivity
The True Negative Rate TNR = TN / ONeg is the proportion of Outcome Negatives that are Test Negative. TNR is also known as Specificity
The False Positive Rate FPR = FP / ONeg is the proportion of Outcome Negatives that are Test Positive.
The False Negative Rate FNR = FN / OPos is the proportion of Outcome Positives that are Test Negative.
Alternative calculations are FPR = 1-TNR, and FNR = 1-TPR
The statistical significance of each is calculated as follows
For TPR, the Standard Error SE = sqrt(TPR(1-TPR)/OPos), and the one-tailed 95% confidence interval is >TPR-1.65SE
For TNR, the Standard Error SE = sqrt(TNR(1-TNR)/ONeg), and the one-tailed 95% confidence interval is >TNR-1.65SE
A test parameter is statistically significant if the confidence interval does not overlap the value of 0.5
Parameters for future clinical usage
The Likelihood Ratio for Test Positive LRPos = TPR / FPR is the ratio OPos/ONeg when Test Positive
The Likelihood Ratio for Test Negative LRNeg = FNR / TNR is the ratio OPos/ONeg when Test Negative
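As a quick check of the formulas above, the following minimal R sketch computes these parameters for the first row of the example data used in Program 1 of the R Codes panel (TP=12, FP=3, FN=18, TN=27, the parity data of Study 1 in the Example panel). It is an illustration only; Program 1 below performs the same calculations on a whole data frame.
tp <- 12; fp <- 3; fn <- 18; tn <- 27        # counts from Study 1 in the Example panel
oPos <- tp + fn                              # number of Outcome Positives = 30
oNeg <- fp + tn                              # number of Outcome Negatives = 30
tpr <- tp / oPos                             # True Positive Rate (Sensitivity) = 0.4
tnr <- tn / oNeg                             # True Negative Rate (Specificity) = 0.9
fpr <- 1 - tnr                               # False Positive Rate = 0.1
fnr <- 1 - tpr                               # False Negative Rate = 0.6
seTPR <- sqrt(tpr * (1 - tpr) / oPos)        # Standard Error of TPR = 0.089
seTNR <- sqrt(tnr * (1 - tnr) / oNeg)        # Standard Error of TNR = 0.055
tpr - 1.65 * seTPR                           # one-tailed 95% lower limit for TPR = 0.25 (does not exclude 0.5)
tnr - 1.65 * seTNR                           # one-tailed 95% lower limit for TNR = 0.81 (excludes 0.5)
tpr / fpr                                    # Likelihood Ratio for Test Positive LRPos = 4.0
fnr / tnr                                    # Likelihood Ratio for Test Negative LRNeg = 0.67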
Procedure 3. Use of Likelihood Ratio to make clinical decisions.
The Likelihood Ratio can be used to modify the perception of risk in applicable clinical situations, using a Bayesian Probability algorithm
The risk or probability of an outcome before the result of the Test is known is called the Pre-test Probability
The risk can be converted to Pre-test Odd = Pre-test Probability / (1 - Pre-test Probability)
The Post-test Odd is then obtained Post-test Odd = Pre-test Odd * Likelihood Ratio
The Post-test Probability = Post-test Odd/ (1 + Post-test Odd)
If there is more than one Test for an Outcome, and provided the Tests are not tautological (so strongly correlated that they are repeats of the same test), the Post-test Probability after one Test becomes the Pre-test Probability of the next Test, so that, with increasing information, the perception of risk is modified.
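A minimal R sketch of this updating rule is shown below (illustrative only; Program 3 in the R Codes panel performs the same calculation on a data frame). It uses the overall Caesarean Section rate of 20% and the Likelihood Ratios derived later in the Example panel.
postTestProbability <- function(preProb, lr)
{
  preOdd  <- preProb / (1 - preProb)     # Pre-test Odd
  postOdd <- preOdd * lr                 # Post-test Odd = Pre-test Odd * Likelihood Ratio
  postOdd / (1 + postOdd)                # Post-test Probability
}
postTestProbability(0.20, 4.0)                             # single test (nullipara, LRPos = 4.0): 0.5
postTestProbability(postTestProbability(0.20, 4.0), 1.5)   # chained: post-test probability of the first test becomes pre-test of the next: 0.6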
References
True & False Positives & Negatives (Sensitivity and Specificity) :
Altman D.G. (1994) Practical Statistics for Medical Research (first edition 1991). Chapman & Hall, London. ISBN 0 412 276205. p. 409-417
Likelihood Ratio :
Simel D.L., Samsa G.P., Matchar D.B. (1991) Likelihood ratios with confidence: sample size estimation for diagnostic test studies. J. Clin. Epidemiology 44(8): 763-770
Pre- and post-test probability :
Deeks J.J. and Morris J.M. (1996) Evaluating diagnostic tests. In Baillière's Clinical Obstetrics and Gynaecology 10(4), December 1996. ISBN 0-7020-2260-8. p. 613-631
Fagan T.J. (1975) Nomogram for Bayes' theorem. New England J. Med. 293:257
General :
Sackett D., Haynes R., Guyatt G., Tugwell P. (1991) Clinical Epidemiology: A Basic Science for Clinical Medicine. Second edition. ISBN 0-316-76599-6
Example
This panel provides examples to demonstrate the concepts and formulations described in the previous panel. Please note
that the numbers in these examples are entirely artificial, created to demonstrate the algorithms; they do not reflect any real clinical information. Please also note that the numbers presented on this page are rounded to 2 decimal places, so they may differ slightly from results produced with different rounding precision.
We are midwives wishing to establish a method of assessing the risk or probability of Caesarean Section in women who are admitted to the labour ward in early labour. Outcome Positive (OPos) is Caesarean Section (CS), and Outcome Negative (ONeg) is vaginal delivery (VD).
Study 1. Parity :
We wish to use the parity of the woman as the Test, as we know that women having their first baby are
more likely to require a Caesarean Section. Test Positive (TPos) is nulliparous pregnancy (NP), Test Negative (TNeg) is multiparous pregnancy (MP)
For this modelling exercise we selected 30 women who were delivered by Caesarean Section (OPos) and 30 women who delivered vaginally (ONeg), and obtained the following observations. Please note: this small number is used to make understanding easier. In a proper modelling exercise a much larger number of deliveries is required.

             CS (OPos)   VD (ONeg)
NP (TPos)    TP = 12     FP = 3
MP (TNeg)    FN = 18     TN = 27
True Positive Rate TPR = 12 / 30 = 0.4, SE = sqrt(0.4(0.6)/30) = 0.09. As the TPR itself is less than 0.5, the one-tailed 95% confidence interval cannot exclude 0.5. The conclusion is therefore that nulliparity is not a significant predictor of Caesarean Section
True Negative Rate TNR = 27 / 30 = 0.9, SE = sqrt(0.9(0.1)/30) = 0.06, 95% Confidence Interval = >0.9-1.65(0.06) = >0.81. This is greater than 0.5 and does not overlap 0.5, indicating that multiparity is a significant predictor of vaginal delivery
Likelihood Ratio Test Positive LRPos = TPR/FPR = 0.4/0.1 = 4.0
Likelihood Ratio Test Negative LRNeg = FNR/TNR = 0.6/0.9 = 0.67
We use the two Likelihood Ratios in a public hospital that has an overall Caesarean Section Rate of 20%
Pre-test Probability = 0.2, Pre-test Odd = (0.2/(1-0.2)) = 0.25
For nullipara LRPos = 4.0, Post-Test Odd = 0.25*4 = 1, Post-test Probability = 1/(1+1) = 0.5
For multipara LRNeg = 0.67, Post-Test Odd = 0.25*0.67 = 0.17, Post-test Probability = 0.17/(1+0.17) = 0.14
In this public hospital with overall CS rate of 20%, nullipara CS rate is 50%, multipara CS rate is 14%
We use the two Likelihood Ratios in a private hospital that has an overall Caesarean Section Rate of 35%
Pre-test Probability = 0.35, Pre-test Odd = (0.35/(1-0.35)) = 0.54
For nullipara LRPos = 4.0, Post-Test Odd = 0.54*4 = 2.15, Post-test Probability = 2.15/(1+2.15) = 0.68
For multipara LRNeg = 0.67, Post-Test Odd = 0.54*0.67 = 0.36, Post-test Probability = 0.36/(1+0.36) = 0.27
In this private hospital with overall CS rate of 35%, nullipara CS rate is 68%, multipara CS rate is 27%
Study 2. Head Engagement :
We wish to use whether the head is engaged when admitted in early labour as the Test,
as we know that those with an unengaged head in early labour are more likely to require a Caesarean Section.
Test Positive (TPos) is head unengaged (HU), Test Negative (TNeg) is head engaged (HE)
In this modelling exercise, we use records of 130 cases of Caesarean Section (OPos) and 130 cases of vaginal delivery (ONeg). The observations are as follows

             CS (OPos)   VD (ONeg)
HU (TPos)    TP = 39     FP = 26
HE (TNeg)    FN = 91     TN = 104

Unengaged head and CS, True Positive TP = 39
Unengaged head and VD, False Positive FP = 26
Engaged head and CS, False Negative FN = 91
Engaged head and VD, True Negative TN = 104
True Positive Rate TPR = 39/130 = 0.3, False Positive Rate FPR = 26/130 = 0.2, False Negative Rate FNR = 91/130 = 0.7, True Negative Rate TNR = 104/130 = 0.8
Likelihood Ratio Test Positive LRPos = TPR/FPR = 0.3/0.2 = 1.5
Likelihood Ratio Test Negative LRNeg = FNR/TNR = 0.7/0.8 = 0.88
We use the two Likelihood Ratios in a public hospital that has an overall Caesarean Section Rate of 20%
Pre-test Probability = 0.2, Pre-test Odd = (0.2/(1-0.2)) = 0.25
For unengaged head LRPos = 1.5, Post-Test Odd = 0.25*1.5 = 0.38, Post-test Probability = 0.38/(1+0.38) = 0.27
For engaged head LRNeg = 0.88, Post-Test Odd = 0.25*0.88 = 0.22, Post-test Probability = 0.22/(1+0.22) = 0.18
In this public hospital with overall CS rate of 20%, those with unengaged head in early labour have CS rate of 27%,
those with engaged head 18%
We use the two Likelihood Ratios in a private hospital that has an overall Caesarean Section Rate of 35%
Pre-test Probability = 0.35, Pre-test Odd = (0.35/(1-0.35)) = 0.54
For unengaged head LRPos = 1.5, Post-Test Odd = 0.54*1.5 = 0.81, Post-test Probability = 0.81/(1+0.81) = 0.45
For engaged head LRNeg = 0.88, Post-Test Odd = 0.54*0.88 = 0.47, Post-test Probability = 0.47/(1+0.47) = 0.32
In this private hospital with overall CS rate of 35%, those with unengaged head in early labour have CS rate of 45%,
those with engaged head 32%
Study 3. Combining the two : From the previous two studies we know the following
For parity, LRPos (nullipara) = 4.0, LRNeg (multipara) = 0.67
For head engagement, LRPos (head unengaged) = 1.5, LRNeg (head engaged) = 0.88
In the public hospital with an overall Caesarean Section Rate of 20%
Pre-test Probability = 0.2, Pre-test Odd = (0.2/(1-0.2)) = 0.25
For nullipara Post-Test Odd = 1, Post-test Probability = 0.5. This can be used as the Pre-test Probability and Odd for the next stage
For unengaged head LRPos = 1.5, Post-Test Odd = 1*1.5 = 1.5, Post-test Probability = 1.5/(1+1.5) = 0.60
For engaged head LRNeg = 0.88, Post-Test Odd = 1*0.88 = 0.88, Post-test Probability = 0.88/(1+0.88) = 0.47
For multipara Post-Test Odd = 0.17, Post-test Probability = 0.14
For unengaged head LRPos = 1.5, Post-Test Odd = 0.17*1.5 = 0.24, Post-test Probability = 0.24/(1+0.24) = 0.20
For engaged head LRNeg = 0.88, Post-Test Odd = 0.17*0.88 = 0.14, Post-test Probability = 0.14/(1+0.14) = 0.13
In the private hospital with an overall Caesarean Section Rate of 35%
Pre-test Probability = 0.35, Pre-test Odd = (0.35/(1-0.35)) = 0.54
For nullipara Post-Test Odd = 2.15, Post-test Probability = 0.68. This can be used as the Pre-test Probability and Odd for the next stage
For unengaged head LRPos = 1.5, Post-Test Odd = 2.15*1.5 = 3.23, Post-test Probability = 3.23/(1+3.23) = 0.76
For engaged head LRNeg = 0.88, Post-Test Odd = 2.15*0.88 = 1.89, Post-test Probability = 1.89/(1+1.89) = 0.65
For multipara Post-Test Odd = 0.36, Post-test Probability = 0.27
For unengaged head LRPos = 1.5, Post-Test Odd = 0.36*1.5 = 0.54, Post-test Probability = 0.54/(1+0.54) = 0.35
For engaged head LRNeg = 0.88, Post-Test Odd = 0.36*0.88 = 0.32, Post-test Probability = 0.32/(1+0.32) = 0.24
                              Pre-test Probability   Likelihood Ratio   Post-test Probability
Public Hospital
  Nullipara                           0.20                 4.00                 0.50
  Multipara                           0.20                 0.67                 0.14
  Unengaged Head                      0.20                 1.50                 0.27
  Engaged Head                        0.20                 0.88                 0.18
  Nullipara+Unengaged Head            0.50                 1.50                 0.60
  Nullipara+Engaged Head              0.50                 0.88                 0.47
  Multipara+Unengaged Head            0.14                 1.50                 0.20
  Multipara+Engaged Head              0.14                 0.88                 0.13
Private Hospital
  Nullipara                           0.35                 4.00                 0.68
  Multipara                           0.35                 0.67                 0.27
  Unengaged Head                      0.35                 1.50                 0.45
  Engaged Head                        0.35                 0.88                 0.32
  Nullipara+Unengaged Head            0.68                 1.50                 0.76
  Nullipara+Engaged Head              0.68                 0.88                 0.65
  Multipara+Unengaged Head            0.27                 1.50                 0.35
  Multipara+Engaged Head              0.27                 0.88                 0.24
Summary
The table above shows how the Likelihood Ratios can be used to modify diagnosis, in terms of probability,
and how multiple tests can be sequentially integrated, so that clinical decisions can be made and modified as additional
information becomes available.
The end result is independent of how the sequence is arranged, so that using the unengaged head to modify decisions made
with parity, or using parity to modify decisions made with unengaged head, will produce the same results. The only thing users
need to be careful about is that, when multiple Tests are used, each should represent an independent predictor. Tests so
closely correlated that they represent multiple versions of the same thing lead to inappropriate weighting of some predictors,
and will produce misleading results.
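The order independence described above can be verified with a short R check (a minimal sketch, not part of the programs on this page): because the pre-test odd is simply multiplied by each Likelihood Ratio, applying the two Tests in either order gives the same post-test probability, provided the unrounded odds are carried through.
postProb <- function(preProb, lrs)                 # lrs: vector of Likelihood Ratios to apply
{
  odd <- preProb / (1 - preProb) * prod(lrs)       # multiply the pre-test odd by all Likelihood Ratios
  odd / (1 + odd)                                  # convert back to a probability
}
postProb(0.20, c(4.0, 1.5))    # parity first, then head engagement: 0.6
postProb(0.20, c(1.5, 4.0))    # head engagement first, then parity: 0.6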
Javascript Programs
Prediction Statistics : The data is in 4 columns.
- Each row a separate study
- Col 1 = Number of True Positives (TP), Test Positive and Outcome Positive
- Col 2 = Number of False Positives (FP), Test Positive and Outcome Negative
- Col 3 = Number of False Negatives (FN), Test Negative and Outcome Positive
- Col 4 = Number of True Negatives (TN), Test Negative and Outcome Negative
Likelihood Ratio : The data is in 2 columns.
- Each row a separate study
- Col 1 is True Positive Rate (TPR, Sensitivity)
- Col 2 is False Positive Rate (FPR, 1-specificity, 1-TNR)
Post-test Probability : The data is in 2 columns.
- Each row a separate study
- Col 1 is Pre-test Probability
- Col 2 is Likelihood Ratio
R Codes
This panel shows the three programs in R code
Program 1: Prediction
# Pgm 1: Prediction
# common subroutines
StandardError <- function(n, p)      # Standard Error of a probability p, given sample size n
{
  return (sprintf("%1.3f", sqrt(p * (1.0 - p) / n)))    # Standard Error
}
# Main Program
myDat = ("
TPos FPos FNeg TNeg
12 3 18 27
39 26 91 104
")
df <- read.table(textConnection(myDat),header=TRUE)     # conversion to data frame
df                   # display input data (true positive, false positive, false negative, and true negative)
TPR <- vector()      # true positive rate
TPRSE <- vector()    # SE of true positive rate
FPR <- vector()      # false positive rate
FPRSE <- vector()    # SE of false positive rate
# FNR (false negative rate) = 1 - TPR and has the same SE value
# TNR (true negative rate) = 1 - FPR and has the same SE value
LRPos <- vector()    # likelihood ratio test positive
LRNeg <- vector()    # likelihood ratio test negative
for(i in 1:nrow(df))
{
  tp = df$TPos[i]    # true positive
  fp = df$FPos[i]    # false positive
  fn = df$FNeg[i]    # false negative
  tn = df$TNeg[i]    # true negative
  oPos = tp + fn     # number of Outcome Positives (OPos)
  oNeg = fp + tn     # number of Outcome Negatives (ONeg)
  tpr = tp / oPos                                      # TPR = TP / OPos
  TPR <- append(TPR, sprintf("%1.3f", tpr))
  TPRSE <- append(TPRSE, StandardError(oPos, tpr))
  fpr = fp / oNeg                                      # FPR = FP / ONeg
  FPR <- append(FPR, sprintf("%1.3f", fpr))
  FPRSE <- append(FPRSE, StandardError(oNeg, fpr))
  LRPos <- append(LRPos, sprintf("%1.3f", tpr / fpr))              # LRPos = TPR / FPR
  LRNeg <- append(LRNeg, sprintf("%1.3f", (1 - tpr) / (1 - fpr)))  # LRNeg = FNR / TNR
}
df$TPR <- TPR
df$TPRSE <- TPRSE
df$FPR <- FPR
df$FPRSE <- FPRSE
df$LRPos <- LRPos
df$LRNeg <- LRNeg
df                   # display input data + true and false positive rates, their Standard Errors, and Likelihood Ratios
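Program 2 (Likelihood Ratio) is not reproduced here. The following is a minimal sketch, assuming the two-column input described in the Javascript Programs panel (Col 1 = True Positive Rate, Col 2 = False Positive Rate); it is an illustration rather than the original program.
Program 2: Likelihood Ratio from True and False Positive Rates (sketch)
# Pgm 2 (sketch): Likelihood Ratios from True and False Positive Rates
myDat = ("
TPR FPR
0.4 0.1
0.3 0.2
")
df <- read.table(textConnection(myDat),header=TRUE)   # conversion to data frame
df                   # display input data (true positive rate and false positive rate)
LRPos <- vector()    # likelihood ratio test positive
LRNeg <- vector()    # likelihood ratio test negative
for(i in 1:nrow(df))
{
  LRPos <- append(LRPos, sprintf("%1.3f", df$TPR[i] / df$FPR[i]))               # LRPos = TPR / FPR
  LRNeg <- append(LRNeg, sprintf("%1.3f", (1 - df$TPR[i]) / (1 - df$FPR[i])))   # LRNeg = FNR / TNR
}
df$LRPos <- LRPos
df$LRNeg <- LRNeg
df                   # display input data + Likelihood Ratios (row 1: LRPos = 4.000, LRNeg = 0.667)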
Program 3: Post-test Probability from Pre-test Probability and Likelihood Ratio
# Pgm 3: Post-test Probability from Pre-test Probability and Likelihood Ratio
myDat = ("
PreProb LR
0.5 1.5
0.5 0.88
0.14 1.5
0.14 0.88
")
df <- read.table(textConnection(myDat),header=TRUE) # conversion to data frame
df # display pre test probability and Likelihood Ratio
PostProb <- vector() # Post test probability
for(i in 1:nrow(df))
{
  odds = df$PreProb[i] / (1.0 - df$PreProb[i]) * df$LR[i]               # Post-test Odd = Pre-test Odd * Likelihood Ratio
  PostProb <- append(PostProb, sprintf("%1.3f", odds / (1.0 + odds)))   # Post-test Probability = Odd / (1 + Odd)
}
df$PostProb <- PostProb
df # display input data + post test probability
The outputs are
> df # display pre test probability and Likelihood Ratio
PreProb LR
1 0.50 1.50
2 0.50 0.88
3 0.14 1.50
4 0.14 0.88
> df # display input data + post test probability
PreProb LR PostProb
1 0.50 1.50 0.600
2 0.50 0.88 0.468
3 0.14 1.50 0.196
4 0.14 0.88 0.125