Explanations
This page provides 4 programs used in the comparison of two Cronbach Alphas.
The pilot study and the sample size estimation are used in the planning stage of a research project. The sample size estimate is used when the researcher has some background information on which to base the estimate. In a completely new project, where the environment of the project is unknown, a pilot study is required to compare the outcomes of different sample sizes, and to determine when a further increase in sample size no longer improves the confidence interval in a meaningful way.

The power estimate and the confidence interval are carried out at the end of a project, using the data already collected. The power estimate provides an index of whether the sample size was adequate for the requirements of the model. The confidence interval provides an estimate of the precision of the effect size obtained.

One and two tail models

The programs provide results for both the one and the two tail model. The two tail model is the one commonly presented, and it describes the range of the effect size for precision estimates and for comparison with other data. In many cases, when Cronbach Alpha is used as a tool during the development of a multivariate measurement (such as when Factor Analysis or Multiple Regression is used), the researcher may have a minimum effect size in mind and wishes to know whether the two Alphas differ enough to satisfy this requirement. Under these considerations the one tail model is more appropriate, as it requires a smaller sample size.

References

Bonett DG (2003) Sample Size Requirements for Comparing Two Alpha Coefficients. Applied Psychological Measurement, Vol. 27, No. 1, 72-74.

Duhachek A and Iacobucci D (2004) Alpha's Standard Error (ASE): An Accurate and Precise Confidence Interval Estimate. Journal of Applied Psychology, Vol. 89, No. 5, 792-808.

Johanson GA and Brooks GP (2010) Initial Scale Development: Sample Size for Pilot Studies. Educational and Psychological Measurement, Vol. 70, Iss. 3, 394-400.
R Codes

The four programs, presented in R, are:

Sample Size
Power
Confidence Interval
Pilot Studies
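All four programs are built around the same effect size, the ratio of the two (1 - Alpha) values, and Programs 1 and 2 share the sample size approximation coded in the subroutine ssAlpha of Program 1 (cf. the Bonett reference above). Written out, this is:

\[ \delta = \frac{1-\hat\alpha_1}{1-\hat\alpha_2}, \qquad n = \left\lceil\, 2\left(\frac{k_1}{k_1-1}+\frac{k_2}{k_2-1}\right)\frac{(z_\alpha+z_\beta)^2}{(\ln\delta)^2} + 2 \,\right\rceil \]

where \(\hat\alpha_1\) and \(\hat\alpha_2\) are the two Cronbach Alphas, \(k_1\) and \(k_2\) their numbers of items, and \(z_\alpha\), \(z_\beta\) the normal quantiles of the Type I and Type II error rates (\(\alpha/2\) replaces \(\alpha\) in the two tail model).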
# Program 1: Sample Size (per Cronbach Alpha)

# Subroutine
ssAlpha <- function(alpha, beta, k1, k2, delta, tail)  # alpha and beta = Type I & II error
{                                                      # k1 and k2 = number of items; delta calculated from ca1 and ca2
  za = qnorm(alpha / tail)
  zb = qnorm(beta)
  n = ceiling(2 * (k1 / (k1 - 1) + k2 / (k2 - 1)) * (za + zb)**2 / log(delta)**2 + 2)
  return(n)
}

# Main program
# Input Data
#   Sig   = Probability of Type I Error
#   Power = 1 - Probability of Type II Error
#   K1    = number of items in Cronbach Alpha 1
#   Ca1   = Value of Cronbach Alpha 1
#   K2    = number of items in Cronbach Alpha 2
#   Ca2   = Value of Cronbach Alpha 2
dat = ("
Sig  Power K1 Ca1 K2 Ca2
0.05 0.80  5  0.7 7  0.3
0.01 0.95  5  0.7 7  0.3
0.05 0.80  7  0.8 9  0.2
0.01 0.95  7  0.8 9  0.2
")
df <- read.table(textConnection(dat), header=TRUE)  # conversion to data frame
df                                                  # optional display of input data

# vectors for results
Delta <- vector()
SSiz_1 <- vector()
SSiz_2 <- vector()

# Calculations
for(i in 1 : nrow(df))
{
  sig = df$Sig[i]
  beta = 1 - df$Power[i]
  k1 = df$K1[i]
  ca1 = df$Ca1[i]
  k2 = df$K2[i]
  ca2 = df$Ca2[i]
  delta = (1 - ca1) / (1 - ca2)
  Delta <- append(Delta, delta)
  # 1 tail
  SSiz_1 <- append(SSiz_1, ssAlpha(sig, beta, k1, k2, delta, 1))
  # 2 tail
  SSiz_2 <- append(SSiz_2, ssAlpha(sig, beta, k1, k2, delta, 2))
}

# include into data frame for display
df$Delta <- Delta
df$SSiz_1 <- SSiz_1
df$SSiz_2 <- SSiz_2
df    # Input data + results

The results are: Delta = effect size; SSiz_1 and SSiz_2 are the required sample sizes (per Cronbach Alpha) for the 1 tail and 2 tail models.

> df    # Input data + results
   Sig Power K1 Ca1 K2 Ca2     Delta SSiz_1 SSiz_2
1 0.05  0.80  5 0.7  7 0.3 0.4285714     44     55
2 0.01  0.95  5 0.7  7 0.3 0.4285714    109    122
3 0.05  0.80  7 0.8  9 0.2 0.2500000     17     21
4 0.01  0.95  7 0.8  9 0.2 0.2500000     40     45
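For a quick check outside the batch loop, the subroutine can also be called directly. The following minimal sketch reuses the values of input row 1 above (Sig = 0.05, Power = 0.80, two tail):

# Direct call with the values of input row 1
delta <- (1 - 0.7) / (1 - 0.3)            # effect size = 0.4285714
ssAlpha(0.05, 1 - 0.80, 5, 7, delta, 2)   # returns 55, the SSiz_2 value in row 1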
# Program 2: Power Estimates comparing 2 Cronbach Alphas

# Subroutines
# estimate sample size
ssAlpha <- function(alpha, beta, k1, k2, delta, tail)  # alpha and beta = Type I & II error
{                                                      # k1 and k2 = number of items; delta calculated from ca1 and ca2
  za = qnorm(alpha / tail)
  zb = qnorm(beta)
  n = ceiling(2 * (k1 / (k1 - 1) + k2 / (k2 - 1)) * (za + zb)**2 / log(delta)**2 + 2)
  return(n)
}

# iterate beta until sample size compatible
betaAlpha <- function(sig, n, k1, k2, delta, tail)
{
  bL = 0
  bR = 1
  bM = (bL + bR) / 2
  nM = 1
  for(i in 1 : 100)    # iteration should not need more than 100 cycles
  {
    if(bM < 0.00001 | bM > 0.99999) return(bM)    # beta at the boundary: power effectively 1 or 0
    nM = ssAlpha(sig, bM, k1, k2, delta, tail) - n
    if(abs(nM) < 0.0001) return(bM)
    if(nM > 0)
    {
      bL = bM
    }
    else
    {
      bR = bM
    }
    bM = (bL + bR) / 2
  }
  return(bM)
}

# Main Program
# Input Data
#   Sig = Probability of Type I Error
#   n   = sample size for each Cronbach Alpha (assuming the same sample size for both)
#   K1  = number of items in Cronbach Alpha 1
#   Ca1 = Value of Cronbach Alpha 1
#   K2  = number of items in Cronbach Alpha 2
#   Ca2 = Value of Cronbach Alpha 2
dat = ("
Sig  n  K1 Ca1 K2 Ca2
0.05 55 5  0.7 7  0.3
0.01 55 5  0.7 7  0.3
0.05 10 7  0.8 9  0.5
0.01 10 7  0.8 9  0.5
")
df <- read.table(textConnection(dat), header=TRUE)  # conversion to data frame
df                                                  # optional display of input data

# vectors for results
Delta <- vector()
Power_1 <- vector()
Power_2 <- vector()

# Calculations
for(i in 1 : nrow(df))
{
  sig = df$Sig[i]
  n = df$n[i]
  k1 = df$K1[i]
  ca1 = df$Ca1[i]
  k2 = df$K2[i]
  ca2 = df$Ca2[i]
  delta = (1 - ca1) / (1 - ca2)
  Delta <- append(Delta, delta)
  Power_1 <- append(Power_1, 1 - betaAlpha(sig, n, k1, k2, delta, 1))
  Power_2 <- append(Power_2, 1 - betaAlpha(sig, n, k1, k2, delta, 2))
}
df$Delta <- Delta
df$Power_1 <- Power_1
df$Power_2 <- Power_2
df    # display input data and results

The results are: Delta = effect size; Power_1 and Power_2 are the powers (1 - β) of the 1 tail and 2 tail models.

> df    # display input data and results
   Sig  n K1 Ca1 K2 Ca2     Delta   Power_1   Power_2
1 0.05 55  5 0.7  7 0.3 0.4285714 0.8750000 0.7968750
2 0.01 55  5 0.7  7 0.3 0.4285714 0.6796875 0.5859375
3 0.05 10  7 0.8  9 0.5 0.4000000 0.3125000 0.2187500
4 0.01 10  7 0.8  9 0.5 0.4000000 0.1250000 0.0781250
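The betaAlpha subroutine can be checked with a single call in the same way. The sketch below reuses input row 1; because the function bisects beta until the implied sample size matches n, the result is a step approximation rather than an exact power:

# Single check against row 1 of the output above (two tail)
delta <- (1 - 0.7) / (1 - 0.3)            # effect size = 0.4285714
1 - betaAlpha(0.05, 55, 5, 7, delta, 2)   # about 0.80 (0.7968750 in the table above)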
# Program 3: Confidence Intervals

# Subroutine
CiAlpha <- function(pc, n, k1, k2, delta, tail)  # pc = percent CI, n = sample size
{                                                # k1, k2 = number of items, delta = effect size
  logDelta = log(delta)
  prob = (1 - pc / 100) / tail
  za = qnorm(1 - prob)
  logci = abs(logDelta) * za * sqrt(2 * k1 / ((k1 - 1) * (n - 2)) + 2 * k2 / ((k2 - 1) * (n - 2)))
  LL = exp(logDelta - logci)
  UL = exp(logDelta + logci)
  print(c(LL, UL, UL - LL))
  return(c(LL, UL, UL - LL))
}

# Main program: confidence interval
# Input Data
#   Pc  = percent confidence, usually 90, 95 or 99
#   n   = sample size for each Cronbach Alpha (assuming the same sample size for both)
#   K1  = number of items in Cronbach Alpha 1
#   Ca1 = Value of Cronbach Alpha 1
#   K2  = number of items in Cronbach Alpha 2
#   Ca2 = Value of Cronbach Alpha 2
dat = ("
Pc n  K1 Ca1 K2 Ca2
95 55 5  0.7 7  0.3
99 55 5  0.7 7  0.3
95 20 7  0.8 8  0.5
99 20 7  0.8 8  0.5
")
df <- read.table(textConnection(dat), header=TRUE)  # conversion to data frame
df                                                  # optional display of input data

# vectors for results: Lower and Upper limits of the confidence interval, 1 and 2 tail
Delta <- vector()
Lower_1 <- vector()
Upper_1 <- vector()
Lower_2 <- vector()
Upper_2 <- vector()

# Calculations
for(i in 1 : nrow(df))
{
  pc = df$Pc[i]
  n = df$n[i]
  k1 = df$K1[i]
  ca1 = df$Ca1[i]
  k2 = df$K2[i]
  ca2 = df$Ca2[i]
  delta = (1 - ca1) / (1 - ca2)
  Delta <- append(Delta, delta)
  # one tail
  resAr = CiAlpha(pc, n, k1, k2, delta, 1)
  Lower_1 <- append(Lower_1, resAr[1])
  Upper_1 <- append(Upper_1, resAr[2])
  # two tail
  resAr = CiAlpha(pc, n, k1, k2, delta, 2)
  Lower_2 <- append(Lower_2, resAr[1])
  Upper_2 <- append(Upper_2, resAr[2])
}

# Pool results in data frame for display
df$Delta <- Delta
df$Lower_1 <- Lower_1
df$Upper_1 <- Upper_1
df$Lower_2 <- Lower_2
df$Upper_2 <- Upper_2
df    # show input data and results

The results are as follows.

> df    # show input data and results
  Pc  n K1 Ca1 K2 Ca2     Delta   Lower_1   Upper_1   Lower_2   Upper_2
1 95 55  5 0.7  7 0.3 0.4285714 0.2813464 0.6528375 0.2595525 0.7076545
2 99 55  5 0.7  7 0.3 0.4285714 0.2363259 0.7772041 0.2217114 0.8284348
3 95 20  7 0.8  8 0.5 0.4000000 0.1864158 0.8582965 0.1610502 0.9934791
4 99 20  7 0.8  8 0.5 0.4000000 0.1358638 1.1776497 0.1210075 1.3222323
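Again, a single direct call to CiAlpha reproduces one line of the table. This minimal sketch uses input row 1 (95% confidence, two tail):

# Direct call with the values of input row 1
delta <- (1 - 0.7) / (1 - 0.3)    # effect size = 0.4285714
CiAlpha(95, 55, 5, 7, delta, 2)   # roughly 0.2596 0.7077 0.4481 (lower limit, upper limit, width)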
# Program 4: Pilot Studies comparing 2 Cronbach Alphas

# Subroutine
CiAlpha <- function(pc, n, k1, k2, delta, tail)  # pc = percent CI, n = sample size
{                                                # k1, k2 = number of items, delta = effect size
  logDelta = log(delta)
  prob = (1 - pc / 100) / tail
  za = qnorm(1 - prob)
  logci = abs(logDelta) * za * sqrt(2 * k1 / ((k1 - 1) * (n - 2)) + 2 * k2 / ((k2 - 1) * (n - 2)))
  LL = exp(logDelta - logci)
  UL = exp(logDelta + logci)
  print(c(LL, UL, UL - LL))
  return(c(LL, UL, UL - LL))
}

# Main Program
# Input Data
pc = 95      # percent confidence
k1 = 5       # number of items in alpha 1
ca1 = 0.7    # value of Cronbach Alpha 1
k2 = 7       # number of items in alpha 2
ca2 = 0.3    # value of Cronbach Alpha 2
intv = 5     # sample size interval for testing
maxN = 100   # maximum sample size
delta = (1 - ca1) / (1 - ca2)   # delta = effect size

# Vectors of results
SSiz <- vector()       # sample size
# 1 tail
CI1 <- vector()        # confidence interval 1 tail
Diff1 <- vector()      # difference from previous row 1 tail
CaseDec1 <- vector()   # decrease per case 1 tail
PcDec1 <- vector()     # % decrease per case 1 tail
# 2 tail
CI2 <- vector()        # confidence interval 2 tail
Diff2 <- vector()      # difference from previous row 2 tail
CaseDec2 <- vector()   # decrease per case 2 tail
PcDec2 <- vector()     # % decrease per case 2 tail

# First row
n = intv
SSiz <- append(SSiz, n)
# 1 tail
resAr = CiAlpha(pc, n, k1, k2, delta, 1)
print(resAr)
ci1 = resAr[3]                                  # ci 1 tail
CI1 <- append(CI1, sprintf(ci1, fmt="%#.4f"))   # confidence interval 1 tail
Diff1 <- append(Diff1, "")                      # difference from previous row 1 tail
CaseDec1 <- append(CaseDec1, "")                # decrease per case 1 tail
PcDec1 <- append(PcDec1, "")                    # % decrease per case 1 tail
# 2 tail
resAr = CiAlpha(pc, n, k1, k2, delta, 2)
print(resAr)
ci2 = resAr[3]                                  # ci 2 tail
CI2 <- append(CI2, sprintf(ci2, fmt="%#.4f"))   # confidence interval 2 tail
Diff2 <- append(Diff2, "")                      # difference from previous row 2 tail
CaseDec2 <- append(CaseDec2, "")                # decrease per case 2 tail
PcDec2 <- append(PcDec2, "")                    # % decrease per case 2 tail

# Subsequent rows
while(n < maxN)
{
  n = n + intv
  SSiz <- append(SSiz, n)
  # 1 tail
  oldci1 = ci1
  resAr = CiAlpha(pc, n, k1, k2, delta, 1)
  ci1 = resAr[3]                                                  # ci 1 tail
  CI1 <- append(CI1, sprintf(ci1, fmt="%#.4f"))                   # confidence interval 1 tail
  diff = oldci1 - ci1
  Diff1 <- append(Diff1, sprintf(diff, fmt="%#.4f"))              # difference from previous row 1 tail
  decpercase = diff / intv
  CaseDec1 <- append(CaseDec1, sprintf(decpercase, fmt="%#.4f"))  # decrease per case 1 tail
  pcdec = decpercase / oldci1 * 100
  PcDec1 <- append(PcDec1, sprintf(pcdec, fmt="%#.1f"))           # % decrease per case 1 tail
  # 2 tail
  oldci2 = ci2
  resAr = CiAlpha(pc, n, k1, k2, delta, 2)
  ci2 = resAr[3]                                                  # ci 2 tail
  CI2 <- append(CI2, sprintf(ci2, fmt="%#.4f"))                   # confidence interval 2 tail
  diff = oldci2 - ci2
  Diff2 <- append(Diff2, sprintf(diff, fmt="%#.4f"))              # difference from previous row 2 tail
  decpercase = diff / intv
  CaseDec2 <- append(CaseDec2, sprintf(decpercase, fmt="%#.4f"))  # decrease per case 2 tail
  pcdec = decpercase / oldci2 * 100
  PcDec2 <- append(PcDec2, sprintf(pcdec, fmt="%#.1f"))           # % decrease per case 2 tail
}

# Combine all result vectors into a single data frame for display
df <- data.frame(SSiz, CI1, Diff1, CaseDec1, PcDec1, CI2, Diff2, CaseDec2, PcDec2)
df    # Results table

The results are as follows. SSiz = sample size; CI = width of the confidence interval; Diff = decrease in width from the previous row; CaseDec = decrease per case; PcDec = percentage decrease per case; the suffixes 1 and 2 denote the 1 tail and 2 tail models.
> df    # Results table
   SSiz    CI1  Diff1 CaseDec1 PcDec1    CI2  Diff2 CaseDec2 PcDec2
1     5 2.4405                          3.4754
2    10 1.1211 1.3194   0.2639   10.8   1.4403 2.0351   0.4070   11.7
3    15 0.8193 0.3018   0.0604    5.4   1.0241 0.4162   0.0832    5.8
4    20 0.6742 0.1450   0.0290    3.5   0.8321 0.1920   0.0384    3.7
5    25 0.5856 0.0886   0.0177    2.6   0.7174 0.1147   0.0229    2.8
6    30 0.5245 0.0611   0.0122    2.1   0.6395 0.0780   0.0156    2.2
7    35 0.4792 0.0454   0.0091    1.7   0.5822 0.0573   0.0115    1.8
8    40 0.4438 0.0354   0.0071    1.5   0.5379 0.0443   0.0089    1.5
9    45 0.4152 0.0286   0.0057    1.3   0.5023 0.0356   0.0071    1.3
10   50 0.3915 0.0237   0.0047    1.1   0.4729 0.0294   0.0059    1.2
11   55 0.3715 0.0201   0.0040    1.0   0.4481 0.0248   0.0050    1.0
12   60 0.3542 0.0173   0.0035    0.9   0.4268 0.0213   0.0043    0.9
13   65 0.3392 0.0151   0.0030    0.9   0.4083 0.0185   0.0037    0.9
14   70 0.3259 0.0133   0.0027    0.8   0.3920 0.0163   0.0033    0.8
15   75 0.3140 0.0118   0.0024    0.7   0.3775 0.0145   0.0029    0.7
16   80 0.3034 0.0106   0.0021    0.7   0.3645 0.0130   0.0026    0.7
17   85 0.2937 0.0096   0.0019    0.6   0.3528 0.0117   0.0023    0.6
18   90 0.2850 0.0088   0.0018    0.6   0.3421 0.0107   0.0021    0.6
19   95 0.2769 0.0080   0.0016    0.6   0.3323 0.0098   0.0020    0.6
20  100 0.2695 0.0074   0.0015    0.5   0.3233 0.0090   0.0018    0.5
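The pilot table is read by looking for the sample size at which additional cases stop reducing the width of the confidence interval in a meaningful way. The following is a minimal sketch of how the data frame df produced above can be scanned for that point, using an illustrative cut-off of a 1% decrease per case:

# Smallest sample size where the 2 tail % decrease per case falls below the cut-off
threshold <- 1                                                    # illustrative cut-off
pcdec2 <- suppressWarnings(as.numeric(as.character(df$PcDec2)))   # first row ("") becomes NA
df$SSiz[which(!is.na(pcdec2) & pcdec2 < threshold)[1]]            # n = 60 with the table above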