Kappa for nominal data was first described by Fleiss in 1971. It measures concordance, or agreement, between two or more judges in the way they classify or categorise subjects into different groups or categories.
Data entry and interpretation are best described using the default example provided in the Javascript program panel. In this example, a school provides 5 counsellors who assess and then advise students on future careers, and we wish to evaluate how well the assessments from these 5 counsellors agree with each other. We use a class of 10 students in their final school year; each of the 5 counsellors interviews every student and classifies him or her into one of 3 categories (coded 1 to 3 in the data).
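The data-entry step described above (raw ratings in, table of counts out) can be sketched in R. This is a minimal illustration using the default example's 10x5 ratings; the function name buildCounts is illustrative, not part of the program:

```r
# Each row is one student, each column one counsellor's category code (1 to 3).
ratings <- matrix(c(1,2,2,2,2,
                    1,1,3,3,3,
                    3,3,3,3,3,
                    1,1,1,1,3,
                    1,1,1,3,3,
                    1,2,2,2,2,
                    1,1,1,1,1,
                    2,2,2,2,3,
                    1,3,3,3,3,
                    1,1,1,3,3), nrow = 10, byrow = TRUE)

# buildCounts (illustrative name): tally how many of the counsellors
# placed each student in each category.
buildCounts <- function(ratings, ncat = max(ratings)) {
  t(apply(ratings, 1, function(row) tabulate(row, nbins = ncat)))
}

counts <- buildCounts(ratings)
counts   # 10 x 3 table of counts; each row sums to 5, one entry per counsellor
```

Each row of the resulting table sums to the number of raters (here 5), which is the layout the Kappa calculation expects.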
If the raw data have already been collated into a table of counts, this table can also be used as entry data, via the buttons on the right side of the program panel.

Results of Analysis

Kappa in this example is 0.41, with a Standard Error of 0.08 and a 95% confidence interval of 0.27 to 0.57. This is the level of agreement between the 5 counsellors. Conventionally, a Kappa of less than 0.2 is considered poor agreement, 0.21 to 0.4 fair, 0.41 to 0.6 moderate, 0.61 to 0.8 strong, and more than 0.8 near complete agreement. As Kappa is an estimate from a sample, its Standard Error (se) and 95% confidence interval can also be estimated. In this example, the 95% confidence interval does not traverse the null (0) value, allowing the conclusion that there is significant agreement between the counsellors.

References

Fleiss J L (1971) Measuring nominal scale agreement among many raters. Psychological Bulletin 76:378-382

Siegel S and Castellan N J Jr (1988) Nonparametric Statistics for the Behavioral Sciences. International Edition. McGraw-Hill Book Company, New York. ISBN 0070573573. p. 284-291
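Before the full program listing below, the quoted result can be checked by applying the Fleiss formulas directly to the example's table of counts. This is a minimal sketch; the names Pa (observed agreement) and Pe (chance agreement) follow the variables used in the R code:

```r
# Table of counts from the default example: 10 students x 3 categories.
counts <- matrix(c(1,4,0, 2,0,3, 0,0,5, 4,0,1, 3,0,2,
                   1,4,0, 5,0,0, 0,4,1, 1,0,4, 3,0,2),
                 nrow = 10, byrow = TRUE)
n  <- nrow(counts)       # 10 subjects
r  <- sum(counts[1, ])   # 5 raters per subject
Pj <- colSums(counts) / (n * r)   # proportion of all ratings in each category
Pe <- sum(Pj^2)                   # agreement expected by chance
Pa <- mean(rowSums(counts * (counts - 1)) / (r * (r - 1)))  # observed agreement
Kappa <- (Pa - Pe) / (1 - Pe)
round(c(Pa = Pa, Pe = Pe, Kappa = Kappa), 4)  # Kappa rounds to 0.4179
```

Here Pa = 0.62 and Pe = 0.3472, giving Kappa = (0.62 - 0.3472) / (1 - 0.3472), which agrees with the program's output of 0.4179.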
# Kappa.R
# Kappa for nominal data
# Kappa Algorithm
CalKappa <- function(dfCount)
{
    print("Matrix of counts")
    print(dfCount)
    n = nrow(dfCount)        # number of subjects (rows)
    g = ncol(dfCount)        # number of categories (columns)
    r = sum(dfCount[1,])     # number of raters per subject
    print(c(n, g, r))
    Nk = n * r
    Pe = 0; EP3 = 0
    Cj = rep(0, g)
    Pj = rep(0, g)
    for(j in 1:g)
    {
        for(i in 1:n) Cj[j] = Cj[j] + dfCount[i,j]
        Pj[j] = Cj[j] / Nk                 # proportion of ratings in category j
        Pe = Pe + Pj[j] * Pj[j]            # chance agreement
        EP3 = EP3 + Pj[j] * Pj[j] * Pj[j]  # sum of cubes, used for the SE
    }
    Nk = r * (r - 1)
    Pa = 0
    Si = rep(0, n)
    for(i in 1:n)
    {
        for(j in 1:g)
        {
            k = dfCount[i,j]
            Si[i] = Si[i] + k * (k - 1)
        }
        Si[i] = Si[i] / Nk                 # agreement for subject i
        Pa = Pa + Si[i]
    }
    Pa = Pa / n                            # observed agreement
    Kappa = (Pa - Pe) / (1.0 - Pe)
    f1 = 2.0 / (1.0 * n * r * (r - 1))
    f2 = Pe - (2 * r - 3) * Pe * Pe + 2 * (r - 2) * EP3
    f3 = (1 - Pe) * (1 - Pe)
    se = sqrt(f1 * f2 / f3)
    z = Kappa / se
    p = 1 - pnorm(z)
    print(paste("Kappa=", Kappa, " SE=", se))
    print(paste("z=", z, " p=", p))
    print(paste0("95% CI = ", (Kappa - 1.96 * se), " to ", (Kappa + 1.96 * se)))
}

# Program 1: Converting raw data to a table of counts and calculating Kappa
datRaw = ("
1 2 2 2 2
1 1 3 3 3
3 3 3 3 3
1 1 1 1 3
1 1 1 3 3
1 2 2 2 2
1 1 1 1 1
2 2 2 2 3
1 3 3 3 3
1 1 1 3 3
")
dfRaw <- read.table(textConnection(datRaw), header=FALSE)   # conversion to data frame
#dfRaw                     # uncomment to check the data
minv = min(dfRaw)          # lowest category code
maxv = max(dfRaw)          # highest category code
nc = maxv - minv + 1       # number of categories
c(minv, maxv, nc)
dfCount <- matrix(0, ncol = nc, nrow = nrow(dfRaw))
for(i in 1:nrow(dfRaw)) for(j in 1:ncol(dfRaw))
{
    v = dfRaw[i,j]
    dfCount[i,v] = dfCount[i,v] + 1
}
CalKappa(dfCount)

Program 1 produces the following output:

[1] "Matrix of counts"
   V1 V2 V3
1   1  4  0
2   2  0  3
3   0  0  5
4   4  0  1
5   3  0  2
6   1  4  0
7   5  0  0
8   0  4  1
9   1  0  4
10  3  0  2
[1] 10  3  5
[1] "Kappa= 0.417892156862745  SE= 0.0766306770750035"
[1] "z= 5.45332721585803  p= 2.47179898771321e-08"
[1] "95% CI = 0.267696029795738 to 0.568088283929752"

# Program 2: Input a table of counts and calculate Kappa
datCount = ("
1 4 0
2 0 3
0 0 5
4 0 1
3 0 2
1 4 0
5 0 0
0 4 1
1 0 4
3 0 2
")
dfCount <- read.table(textConnection(datCount), header=FALSE)   # conversion to data frame
CalKappa(dfCount)

The output of Program 2 is identical to that of Program 1.
