Monday, February 8, 2016

Nonparametric Tests

Why NP Tests?
When the sample distribution is unknown.
When the population distribution is non-normal.
NP tests make minimal assumptions about the underlying distribution of the data.
Broad categories of NP Tests
The NP tests can be grouped into three broad categories based on how the data are organized:
A one-sample test - analyzes one field.
A test for related samples - compares two or more fields for the same set of cases.
An independent-samples test - analyzes one field that is grouped by categories of another field.
Various NP tests
There are a number of nonparametric tests. The important ones are the Chi-Square Test, the One-Sample Kolmogorov-Smirnov Test, the Runs Test, and the Mann-Whitney and Wilcoxon Tests.
Chi-Square Test
Tests the hypothesis that the observed frequencies do not differ from their expected values.
Example
A large hospital schedules discharge support staff assuming that patients leave the hospital at a fairly constant rate throughout the week. However, because of increasing complaints of staff shortages, the hospital administration wants to determine whether the number of discharges varies by the day of the week. This example uses the file dischargedata.sav (C:\Program Files\SPSSInc\PASWStatistics18\Samples\English\dischargedata.sav).
Data
Hypothesis
Use the Chi-Square Test to test the assumption that patients leave the hospital at a constant rate.
Computations
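The computation can be sketched in Python. The daily counts below are hypothetical placeholders (the actual dischargedata.sav values are not reproduced here); scipy.stats.chisquare carries out the same calculation that SPSS reports.

```python
# A minimal sketch of the one-sample chi-square computation, assuming
# hypothetical daily discharge counts (not the actual dischargedata.sav data).
import numpy as np
from scipy import stats

observed = np.array([44, 78, 84, 80, 84, 90, 70])   # discharges Sun..Sat (hypothetical)
expected = np.full(7, observed.sum() / 7)           # constant-rate null: equal counts

# Square each day's residual, divide by its expected value, sum across days.
chi2 = ((observed - expected) ** 2 / expected).sum()
df = len(observed) - 1                              # rows minus 1

# scipy reproduces the statistic and its asymptotic significance.
stat, p = stats.chisquare(observed)
print(chi2, df, stat, p)
```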
Test Results
Discussion
The chi-square statistic equals 29.389. It is computed by squaring the residual (observed minus expected count) for each day, dividing by its expected value, and summing across all days.
The degrees of freedom (df) is the number of expected values that can vary before the rest are completely determined. For a one-sample chi-square test, df is equal to the number of rows minus 1.
Asymp. Sig. is the estimated probability of obtaining a chi-square value greater than or equal to 29.389 if patients are discharged evenly across the week.
The low significance value (.000) suggests that the average rate of patient discharges really does differ by day of the week.
One-Sample Kolmogorov-Smirnov
The One-Sample Kolmogorov-Smirnov procedure is used to test the null hypothesis that a sample comes from a particular distribution; it is a diagnostic (goodness-of-fit) test.
Computational Procedure
It involves finding the largest difference (in absolute value) between two cumulative distribution functions (CDFs)--one computed directly from the data; the other, from mathematical theory.
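As a sketch of this procedure against a Poisson null (anticipating the example below), with hypothetical counts rather than the real data:

```python
# A minimal sketch of the most-extreme-differences computation for a
# one-sample K-S test against a Poisson null. Counts are hypothetical.
import numpy as np
from scipy import stats

data = np.array([0, 1, 1, 2, 0, 3, 1, 2, 4, 1, 0, 2])  # hypothetical counts
n = len(data)
mu = data.mean()                           # the Poisson model's single parameter

x = np.sort(data)
ecdf = np.arange(1, n + 1) / n             # CDF computed directly from the data
tcdf = stats.poisson(mu).cdf(x)            # CDF from mathematical theory

d_plus = (ecdf - tcdf).max()               # empirical exceeds theoretical the most
d_minus = (tcdf - (ecdf - 1 / n)).max()    # theoretical exceeds empirical the most
d = max(d_plus, d_minus)                   # largest absolute difference
print(d_plus, d_minus, d)
```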
Example
An insurance analyst wants to model the number of automobile accidents per driver. She has randomly sampled data on drivers in a certain region and wants to test whether the number of accidents follows a Poisson distribution.
This example uses the file autoaccidents.sav.
Results and discussion
The Poisson distribution is indexed by only one parameter--the mean. This sample of drivers averaged about 1.72 accidents over the past five years.
The next three rows fall under the general category Most Extreme Differences. The differences referred to are the largest positive and negative points of divergence between the empirical and theoretical CDFs.
The first difference value, labeled Absolute, is the absolute value of the larger of the two difference values; this value is required to calculate the test statistic.
The Positive difference is the point at which the empirical CDF exceeds the theoretical CDF by the greatest amount.
At the opposite end of the continuum, the Negative difference is the point at which the theoretical CDF exceeds the empirical CDF by the greatest amount.
The Z test statistic is the product of the square root of the sample size and the largest absolute difference between the empirical and theoretical CDFs.
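In symbols, Z = sqrt(n) * D. A short sketch of turning Z into the reported asymptotic significance; the sample size and difference below are hypothetical stand-ins:

```python
# The K-S Z statistic and its asymptotic significance. The values of n and d
# are hypothetical stand-ins for the sample size and largest |difference|.
import numpy as np
from scipy.special import kolmogorov

n, d = 98, 0.193
z = np.sqrt(n) * d        # sqrt(sample size) times largest absolute difference
p = kolmogorov(z)         # P(K >= z) under the Kolmogorov distribution (Asymp. Sig.)
print(z, p)
```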
Unlike much statistical testing, a significant result here is bad news. The probability of the Z statistic is below 0.05, meaning that the Poisson distribution with a parameter of 1.72 is not a good fit for the number of accidents within the past five years in this sample of drivers.
Generally, a significant Kolmogorov-Smirnov test means one of two things--either the theoretical distribution is not appropriate, or an incorrect parameter was used to generate that distribution.
Looking at the previous results, it is hard for the analyst to believe that the Poisson distribution is not the appropriate one to use for modeling automobile accidents.
Poisson is often used to model rare events and, fortunately, automobile accidents are relatively rare.
The analyst wonders if gender may be confounding the test. The total sample average assumes that males and females have equal numbers of accidents, but this is probably not true. She will split the sample by gender, using each gender's average as the Poisson parameter in separate tests.
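A sketch of that split analysis (all group data here are hypothetical; scipy.stats.kstest accepts a callable CDF, though its p-values for discrete distributions are only approximate):

```python
# A minimal sketch of re-running the K-S test separately by gender, with each
# group's own mean as the Poisson parameter. All values are hypothetical.
import numpy as np
from scipy import stats

groups = {
    "male": np.array([2, 1, 3, 2, 0, 4, 2, 1]),     # hypothetical counts
    "female": np.array([1, 0, 2, 1, 1, 0, 3, 1]),   # hypothetical counts
}
for label, values in groups.items():
    mu = values.mean()                               # group-specific parameter
    stat, p = stats.kstest(values, stats.poisson(mu).cdf)
    print(label, round(mu, 2), round(stat, 3), round(p, 3))
```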
The statistics table provides evidence that a single Poisson parameter for both genders may not be correct.
Males in this sample averaged about two accidents over the past five years, while females tended to have fewer accidents.
When assessing goodness of fit, remember that a statistically significant Z statistic means that the chosen distribution does not fit the data well.
Unlike the previous test, however, we see a much better fit when splitting the file by gender.
Increasing the Poisson parameter from 1.72 to 1.98 clearly provides a better fit to the accident data for men.
Similarly, decreasing the Poisson parameter from 1.72 to 1.47 provides a better fit to the accident data for women.
Summary
Using the One-Sample Kolmogorov-Smirnov Test procedure, we found that, overall, the number of automobile accidents per driver does not follow a Poisson distribution.
However, once we split the file on gender, the distributions of accidents for males and females can individually be considered Poisson.
Conclusion
These results demonstrate that the one-sample Kolmogorov-Smirnov test requires not only that we choose the appropriate distribution but also the appropriate parameter(s) for it.
If we want to compare the distributions of two variables, we should use the two-sample Kolmogorov-Smirnov test in the Two-Independent-Samples Tests procedure.
The Runs Test Procedure
Many statistical tests assume that the observations in a sample are independent; in other words, that the order in which the data were collected is irrelevant.
If the order does matter, then the sample is not random, and we cannot draw accurate conclusions about the population from which the sample was drawn.
Therefore, it is prudent to check the data for a violation of this important assumption.
We can use the Runs Test procedure to test whether the order of values of a variable is random.
The procedure first classifies each value of the variable as falling above or below a cut point and then tests whether the resulting sequence shows any systematic order. The cut point is based either on a measure of central tendency (mean, median, or mode) or a custom value.
We can obtain descriptive statistics and/or quartiles of the test variable.
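A minimal sketch of the procedure, using the median as the cut point and the large-sample normal approximation for significance (the input sequence below is hypothetical):

```python
# A minimal sketch of the runs test for randomness. Values at or above the cut
# point form one class, values below it the other; the test counts runs of
# consecutive same-class values. The input sequence is hypothetical.
import numpy as np
from scipy import stats

def runs_test(values, cut=None):
    values = np.asarray(values, dtype=float)
    if cut is None:
        cut = np.median(values)                    # default cut point
    above = values >= cut                          # classify each value
    n1, n2 = above.sum(), (~above).sum()
    runs = 1 + (above[1:] != above[:-1]).sum()     # class changes, plus one
    mean = 2 * n1 * n2 / (n1 + n2) + 1             # expected runs if random
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
    z = (runs - mean) / np.sqrt(var)
    return z, 2 * stats.norm.sf(abs(z))            # two-sided asymptotic sig.

z, p = runs_test([3, 4, 2, 5, 5, 1, 2, 4, 3, 5, 1, 2])
print(z, p)
```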
Example
An e-commerce firm enlisted beta testers to browse and then rate their new Web site. Ratings were recorded as soon as each tester finished browsing. The team is concerned that ratings may be related to the amount of time spent browsing.
The ratings are collected in the file siteratings.sav. Use the Runs Test to check whether the order of the ratings is random; a nonrandom pattern would suggest that the ratings are related to the time spent browsing.

Nonparametric Tests for Two Independent Samples
The nonparametric tests for two independent samples are useful for determining whether or not the values of a particular variable differ between two groups.
This is especially useful when the assumptions of the t test are not met.
When we want to test for differences between two groups, the independent-samples t test comes naturally to mind.
However, despite its simplicity, power, and robustness, the independent-samples t test is invalid when certain critical assumptions are not met.
These assumptions center around the parameters of the test variable (in this case, the mean and variance) and the distribution of the variable itself.
Most important, the t test assumes that the sample mean is a valid measure of center. While the mean is valid when the distance between all scale values is equal, it's a problem when our test variable is ordinal because in ordinal scales the distances between the values are arbitrary.
Furthermore, because the variance is calculated using squared distances from the mean, it too is invalid if those distances are arbitrary.
Finally, even if the mean is a valid measure of center, the distribution of the test variable may be so non-normal that it makes us suspicious of any test that assumes normality.
If any of these circumstances is true for our analysis, we should consider using the nonparametric procedures designed to test for the significance of the difference between two groups.
They are called nonparametric because they make no assumptions about the parameters of a distribution, nor do they assume that any particular distribution is being used.
Here we consider two popular nonparametric tests of location (or central tendency)--the Mann-Whitney and Wilcoxon tests--and a test of location and shape--the two-sample Kolmogorov-Smirnov test. Of these, the Mann-Whitney and Wilcoxon tests are the most commonly used.
Mann-Whitney and Wilcoxon tests
We can use the Mann-Whitney and Wilcoxon statistics to test the null hypothesis that two independent samples come from the same population.
Their advantage over the independent-samples t test is that Mann-Whitney and Wilcoxon do not assume normality and can be used to test ordinal variables.
Example
Physicians randomly assigned female stroke patients to receive only physical therapy or physical therapy combined with emotional therapy. Three months after the treatments, the Mann-Whitney test is used to compare each group's ability to perform common activities of daily life.
Data File
The results are in the file adl.sav. Test whether the two groups' abilities differ.
The U statistic is simple (but tedious) to calculate. For each case in group 1, the number of cases in group 2 with higher ranks is counted. Tied ranks count as 1/2. This process is repeated for group 2. The Mann-Whitney U statistic displayed in the table is the smaller of these two values.
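A sketch of that counting procedure, checked against scipy's implementation (the scores below are hypothetical placeholders, not the adl.sav values):

```python
# A minimal sketch of the Mann-Whitney U computation. The scores are
# hypothetical placeholders, not the adl.sav data.
import numpy as np
from scipy import stats

group1 = np.array([12, 15, 9, 20, 17])    # e.g., physical therapy only (hypothetical)
group2 = np.array([18, 22, 16, 25, 19])   # e.g., combined therapy (hypothetical)

def count_higher(a, b):
    # For each case in a, count cases in b with higher values; ties count 1/2.
    return sum(float((b > x).sum()) + 0.5 * (b == x).sum() for x in a)

u1 = count_higher(group1, group2)
u2 = count_higher(group2, group1)
u = min(u1, u2)                            # the U displayed in the table

# scipy reports U for the first sample; the table value is min(u1, u2).
stat, p = stats.mannwhitneyu(group1, group2, alternative="two-sided")
print(u, stat, p)
```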