Non-parametric tests have lower power partly because they lose precision, working with ranks or signs rather than the raw measurements, and can give a false sense of security. A further reason is that they test only distributions and cannot handle highly ordered interactions.
Taking the null hypothesis to be the researcher's negative speculation about the problem under study (that no real effect exists), non-parametric tests have less power to reject it when a meaningful relationship is actually present than their parametric counterparts do.
Non-parametric tests tend to be less sensitive, and may therefore fail to detect an effect of the independent variable on the dependent variable. Their power efficiency consequently tends to be lower than that of the corresponding parametric tests.
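This difference in sensitivity can be illustrated with a small Monte Carlo sketch (my own illustration, not from the source): on the same normally distributed samples with a real shift between groups, the parametric two-sample t-test and the non-parametric Mann-Whitney U test are run repeatedly, and the proportion of significant results estimates each test's power.

```python
# Monte Carlo comparison of empirical power: parametric t-test vs.
# non-parametric Mann-Whitney U test on normal data with a true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, shift, n_sims, alpha = 30, 0.8, 2000, 0.05

t_hits = mw_hits = 0
for _ in range(n_sims):
    a = rng.normal(0.0, 1.0, n)    # control group
    b = rng.normal(shift, 1.0, n)  # treatment group, true effect present
    if stats.ttest_ind(a, b).pvalue < alpha:
        t_hits += 1
    if stats.mannwhitneyu(a, b).pvalue < alpha:
        mw_hits += 1

power_t, power_mw = t_hits / n_sims, mw_hits / n_sims
print(f"t-test power: {power_t:.3f}, Mann-Whitney power: {power_mw:.3f}")
```

With normally distributed data, both tests detect the shift most of the time, but the t-test typically attains slightly higher power, which is the pattern the passage describes.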
A larger sample size is therefore needed for a non-parametric test than for the corresponding parametric test to detect an effect that may be present at a given significance level. The power efficiency of one test relative to another can be expressed as follows:
For two tests A and B, the power efficiency of test A relative to test B = N(B) / N(A) × 100
Where N(A) is the sample size required for test A to show a statistically significant effect, typically at the five percent level, and N(B) is the sample size required for test B to show a statistically significant effect at the same level.
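The ratio above is simple enough to compute directly. A minimal sketch, with a helper name of my own choosing and illustrative sample sizes that are assumptions rather than figures from the text:

```python
def power_efficiency(n_a, n_b):
    """Power efficiency of test A relative to test B, in percent:
    N(B) / N(A) * 100, where each N is the sample size that test
    needs to show a significant effect at the same level."""
    return n_b / n_a * 100

# Hypothetical example: if parametric test B needs 80 subjects and
# non-parametric test A needs 100 to detect the same effect at the
# 5% level, test A's power efficiency is 80%.
print(power_efficiency(100, 80))  # -> 80.0
```

A power efficiency below 100% means test A needs more data than test B to achieve the same power, which is the usual situation for a non-parametric test compared with its parametric counterpart.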