Explanations and References
Friedman's Two Way Analysis of Variance is a method for comparing matched samples across multiple groups when the data are nonparametric. It is the nonparametric equivalent of the Two Way Analysis of Variance, and is similar to Wilcoxon's Matched Pair Ranks Test, except that there are usually more than two matched groups.
Javascript Program
The advantage of Friedman's Test over an unpaired comparison such as the Kruskal-Wallis One Way Analysis of Variance is that the comparisons between groups are carried out within each case, so that the variance between cases is bypassed.

The data to be entered consist of a multi-column table, where the columns represent groups and the rows represent individual cases. The number in each cell is the ranking given by the row's subject to the column's group. Actual measurements can also be used, provided the units measure the same thing across the groups; the data in each row are then ranked before the statistics are calculated.

Example

The use of the method and the interpretation of the results are best demonstrated by the example provided in the Javascript panel.
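The within-row ranking described above can be sketched in base R: `rank()` with its default `ties.method = "average"` assigns tied values their averaged rank, as the program does. The small `scores` matrix below is a hypothetical illustration, not the study data.

```r
# Each row = one subject's ratings of the k treatments.
# Hypothetical example data, not the arthritis study.
scores <- rbind(c(2, 4, 1, 3),
                c(1, 1, 2, 3),   # contains a tie
                c(3, 1, 4, 2))

# Rank within each row; tied values receive the average of the
# ranks they would otherwise occupy (default ties.method = "average").
row_ranks <- t(apply(scores, 1, rank))

row_ranks
# row 1: 2 4 1 3      (no ties, ranks equal the scores' order)
# row 2: 1.5 1.5 3 4  (the two 1s share ranks 1 and 2, averaged to 1.5)
# row 3: 3 1 4 2
```

Note that `apply(..., 1, rank)` returns each row's ranks as a column, so `t()` is needed to restore the original row-per-subject layout.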
We want to compare pain relief medications for chronic arthritis sufferers. We have 4 analgesics (A1, A2, A3, and A4). We recruited 21 arthritis sufferers, treated each of them with the 4 medications in random order, then asked each sufferer to comparatively rate the effectiveness of pain relief, from 1 for the poorest to 4 for the best. If a sufferer could not distinguish between medications, the same rating could be given to more than one medication. The table to the left shows the ratings provided by the 21 sufferers for the 4 analgesics.
The program first re-ranks the data, as shown in the table to the right. Each row is ranked from 1 to 4 across the 4 groups; where there are ties, the ranks are averaged. The table is then summarised as a table of counts, showing the number of times each rank occurs in each group. This provides a better visual interpretation of how the preferences are distributed, and the counts can be used to produce graphics such as bar charts.

The first statistic tests the data against the null hypothesis that there is no difference between the groups. From this set of data, F = 12.7136, df = 3, p = 0.0053, so the differences between the groups are statistically significant.

The second analysis should be carried out only if a significant difference is found. It is a post hoc analysis to see which pairs of groups are significantly different. The calculation produces the least significant difference in the sum of ranks between the groups. From this set of data, this is 22.1 at the p=0.05 level and 26.3 at the p=0.01 level. A difference in the sum of ranks between any two groups exceeding these values is statistically significant. From our data, the only significant difference is between analgesics 1 and 4: the sum of ranks for A1 is 38 and for A4 is 67.5, a difference of 29.5, which is larger than the least significant difference at the p=0.01 level of 26.3. We can therefore conclude that arthritis sufferers prefer analgesic 4 to analgesic 1, but no conclusions can be drawn from the other comparisons.

References

Siegel S and Castellan Jr. N.J. Nonparametric Statistics for the Behavioral Sciences (1988) International Edition. McGraw-Hill Book Company, New York. ISBN 0-07-057357-3. p. 174-183; Table M, critical values for Friedman's F for small samples, p. 353.
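The least significant difference quoted above follows the multiple-comparison procedure given in Siegel and Castellan (1988): two rank sums differ significantly at level alpha when their absolute difference reaches z(alpha/(k(k-1))) * sqrt(n*k*(k+1)/6). A sketch of that calculation, with n = 21 subjects and k = 4 treatments as in the data table entered into the program (the function name `lsd` is mine):

```r
# Least significant difference between Friedman rank sums,
# per the multiple-comparison procedure in Siegel & Castellan (1988).
n <- 21   # number of subjects (rows of the data table)
k <- 4    # number of treatments (columns)

lsd <- function(alpha, n, k) {
  # two-sided z critical value, Bonferroni-style split over
  # all k*(k-1)/2 pairwise comparisons
  z <- qnorm(1 - alpha / (k * (k - 1)))
  z * sqrt(n * k * (k + 1) / 6)
}

round(lsd(0.05, n, k), 1)  # ~22.1, as reported above
round(lsd(0.01, n, k), 1)  # ~26.3
```

With these values, the A1 vs A4 difference of 29.5 exceeds 26.3 and is therefore significant at the p=0.01 level, while all other pairwise differences fall below 22.1.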
R uses a different data-entry format for Friedman's Two Way Analysis of Variance than that used in the Javascript program. The data entry requires a 3-column table in which each row holds a single observation: the first column identifies the case (subject), the second the test (group), and the third the score.
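To illustrate this long format before the full walkthrough below, a wide table of scores can be reshaped into the 3-column layout with base R's `rep()` and a transpose; this is a sketch with a tiny 2-subject stand-in table, and the variable names are mine:

```r
# Reshape a wide score table (rows = subjects, columns = tests)
# into the long Case / Tests / Score format that R's Friedman
# test functions expect. Tiny hypothetical stand-in data.
wide <- data.frame(V1 = c(2, 1), V2 = c(4, 3), V3 = c(1, 2), V4 = c(3, 4))

n <- nrow(wide)   # subjects
k <- ncol(wide)   # tests

long <- data.frame(
  Case  = rep(seq_len(n), each = k),            # 1 1 1 1 2 2 2 2
  Tests = rep(seq_len(k), times = n),           # 1 2 3 4 1 2 3 4
  Score = as.vector(t(as.matrix(wide)))         # scores taken row by row
)

long   # one observation per row: which subject, which test, what score
```

The `t()` is what makes the scores come out subject-by-subject (row by row) rather than column by column, so they line up with the repeated Case and Tests indices.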
Friedman's Two Way Analysis of Variance for non-parametric data

```r
# Step 1. Data table to data frame dataFrame_1 (same as Javascript Program)
# Original data: rows represent subjects or cases, the 4 columns represent
# the 4 tests, cells = scores
txtTable = ("
2 4 1 3
1 3 2 4
3 1 2 4
1 1 2 3
1 2 3 4
2 3 1 4
1 2 3 4
1 2 4 3
2 3 1 4
3 1 4 2
1 1 2 3
3 1 2 4
3 4 1 2
3 4 1 2
2 1 3 4
1 1 3 2
1 2 4 3
2 3 1 2
1 4 3 2
1 2 4 3
1 4 3 2")
dataFrame_1 <- read.table(textConnection(txtTable), header=FALSE)   # make data frame
dataFrame_1   # Optional: show dataFrame_1 in console
```

If the printing command is activated, the data, as entered, will be printed:

```
> dataFrame_1   # Optional: show dataFrame_1 in console
  V1 V2 V3 V4
1  2  4  1  3
2  1  3  2  4
3  3  1  2  4
4  1  1  2  3
5  1  2  3  4
6  2  3  1  4
7  1  2  3  4
....
```

Having checked that the data are correctly entered and converted to a data frame, the second step is to convert this into a data frame with the format required by Friedman's Test in R. This is done as follows:

```r
# Step 2. Convert data frame to the format used by the R library for Friedman's Test
# Format copied from https://www.statology.org/friedman-test-r/
Case = c()    # vector to identify case number
Tests = c()   # vector to identify test number
Score = c()   # vector to identify test scores
for(i in 1 : length(dataFrame_1[,1])){
  for(j in 1 : length(dataFrame_1[1,])){
    Case <- append(Case, i)
    Tests <- append(Tests, j)
    Score <- append(Score, dataFrame_1[i,j])
  }
}
dataFrame_2 <- data.frame(Case, Tests, Score)
dataFrame_2   # Optional: show dataFrame_2 in console
```

If dataFrame_2 is printed, it should show the correct format for R, as follows:

```
   Case Tests Score
1     1     1     2
2     1     2     4
3     1     3     1
4     1     4     3
5     2     1     1
6     2     2     3
7     2     3     2
8     2     4     4
9     3     1     3
10    3     2     1
11    3     3     2
...
```

Once the correct format is assured, statistical testing using Friedman's Two Way Analysis can be carried out, as follows:
```r
# Step 3. Statistical testing
# Friedman's Two Way Analysis of Variance
friedman.test(y=dataFrame_2$Score, groups=dataFrame_2$Tests, blocks=dataFrame_2$Case)
# Post hoc pairwise comparison using Wilcoxon's Test, with Bonferroni's adjustment
pairwise.wilcox.test(dataFrame_2$Score, dataFrame_2$Tests, p.adj = "bonf")
```

The results are as follows:

```
        Friedman rank sum test

data:  dataFrame_2$Score, dataFrame_2$Tests and dataFrame_2$Case
Friedman chi-squared = 12.714, df = 3, p-value = 0.005299

> # Post hoc pairwise comparison using Wilcoxon's Test, with Bonferroni's adjustment
> pairwise.wilcox.test(dataFrame_2$Score, dataFrame_2$Tests, p.adj = "bonf")

        Pairwise comparisons using Wilcoxon rank sum test

data:  dataFrame_2$Score and dataFrame_2$Tests

  1       2       3
2 0.53466 -       -
3 0.29068 1.00000 -
4 0.00043 0.25891 0.31005
```

Please note that, in the post hoc comparisons between individual tests, the Javascript program uses the ranks of the scores in its calculations, while the R program uses the original scores. The numerical results are therefore different, although the conclusions are the same: other than the significant difference between groups 1 and 4, all comparisons showed no significant difference.

Please also note that R issues a warning that exact p-values cannot be computed when tied scores are present. This is not caused by any flaw in the design or calculations: with ties, the Wilcoxon test's p-values are normal approximations rather than exact values, so the results are best estimates only.
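It is worth noting that `friedman.test` also accepts a matrix directly, with rows as blocks (subjects) and columns as groups (treatments), so the Step 2 reshaping can be skipped when the data already sit in a wide table. A sketch with a small hypothetical 5-subject matrix (not the study data):

```r
# friedman.test on a matrix: rows = subjects (blocks),
# columns = treatments (groups). Hypothetical 5-subject example.
m <- rbind(c(2, 4, 1, 3),
           c(1, 3, 2, 4),
           c(3, 1, 2, 4),
           c(1, 2, 3, 4),
           c(2, 3, 1, 4))

res <- friedman.test(as.matrix(m))
res$statistic   # Friedman chi-squared (~8.04 for this small example)
res$parameter   # df = k - 1 = 3
res$p.value
```

Running the full 21-row data table through this interface reproduces the chi-squared = 12.714, df = 3, p = 0.005299 result shown above.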