ijsmr


Comparison of Some Methods of Testing Statistical Hypotheses: (Part I. Parallel Methods)
Pages 174-197
K.J. Kachiashvili
DOI: http://dx.doi.org/10.6000/1929-6029.2014.03.02.11
Published: 14 May 2014


Abstract: The article discusses the basic approaches to hypothesis testing, namely the Fisher, Jeffreys, Neyman and Berger approaches, together with a new one proposed by the author of this paper and called the constrained Bayesian method (CBM). The Wald and Berger sequential tests and a test based on CBM are also presented. The positive and negative aspects of these approaches are considered on the basis of computed examples. In particular, it is shown that CBM has all the positive characteristics of the above-listed methods: like Fisher's test, it uses a data-dependent measure for making a decision; like the Jeffreys test, it uses posterior probabilities; and like the Neyman-Pearson approach, it computes Type I and Type II error probabilities. The combination of these properties gives new properties to the decision regions of the offered method. In CBM the observation space contains regions for making a decision and regions for not making a decision. The latter are divided into regions where making any decision is impossible and regions where making a unique decision is impossible. These properties bring the statistical hypothesis testing rule in CBM much closer to the everyday decision-making rule, where, given a shortage of necessary information, accepting one of the stated suppositions is not compulsory. Computed practical examples clearly demonstrate the high quality and reliability of CBM. In critical situations, when other tests give opposite decisions, it gives the most logical decision. Moreover, for any information on the basis of which the decision is made, the set of error probabilities for which a decision with the given reliability is possible is defined.
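The three classical ingredients the abstract says CBM combines can be illustrated numerically for a toy one-sided test of two simple hypotheses about a normal mean with known variance. The Python sketch below is an illustrative assumption, not the paper's CBM construction: all function names and parameter choices are hypothetical.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def compare_tests(xbar, n, sigma=1.0, mu0=0.0, mu1=1.0, prior0=0.5):
    """One-sided test of H0: mu = mu0 vs H1: mu = mu1 with known sigma.
    Returns a Fisher-style p-value, a Jeffreys-style posterior probability
    of H0, and the Neyman-Pearson Type II error at the 5% critical value."""
    se = sigma / math.sqrt(n)
    z = (xbar - mu0) / se
    p_value = 1.0 - norm_cdf(z)                      # Fisher: data-dependent measure
    # Jeffreys: posterior probability of H0 from the two point likelihoods
    lik0 = math.exp(-0.5 * ((xbar - mu0) / se) ** 2)
    lik1 = math.exp(-0.5 * ((xbar - mu1) / se) ** 2)
    post0 = prior0 * lik0 / (prior0 * lik0 + (1.0 - prior0) * lik1)
    # Neyman-Pearson: reject when z > 1.645, fixing the Type I error at 0.05
    z_crit = 1.6448536269514722
    type_ii = norm_cdf(z_crit - (mu1 - mu0) / se)    # probability of missing H1
    return p_value, post0, type_ii
```

For instance, with a sample mean of 0.5 from 9 observations the data sit exactly between the two hypotheses, so the posterior probability of H0 is 0.5 even though the p-value and Type II error differ, which is the kind of tension between the measures that the abstract's comparison explores.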

Keywords: Hypotheses testing, p-value, likelihood ratio, frequentist approaches, Bayesian approach, constrained Bayesian method, decision regions.



Conditional Two Level Mixture with Known Mixing Proportions: Applications to School and Student Level Overweight and Obesity Data from Birmingham, England
Pages 298-308
Shakir Hussain, Mehdi AL-Alak and Ghazi Shukur
DOI: http://dx.doi.org/10.6000/1929-6029.2014.03.03.9
Published: 05 August 2014


Abstract: Two Level (TL) models allow the total variation in the outcome to be decomposed into level-one and level-two, or 'individual and group', variance components. Two Level Mixture (TLM) models can be used to explore unobserved heterogeneity that represents different qualitative relationships in the outcome.

In this paper, we extend the standard TL model by introducing constraints that guide the TLM algorithm towards a more appropriate data partitioning. Our constraints-based methods combine the mixing proportions estimated by parametric Expectation Maximization (EM) of the outcome with the random component from the TL model. This forms a new two-level mixing conditional (TLMc) approach by means of prior information. The advantages of the new framework are: 1. avoiding the trial-and-error tactic used by TLM for choosing the best BIC (Bayesian Information Criterion), 2. permitting meaningful parameter estimates for distinct classes in the coefficient space, and 3. allowing smaller residual variances. We show the benefit of our method using overweight and obesity data derived from the Body Mass Index (BMI) of students in Year 6. We apply these methods to hierarchical BMI data to estimate student multiple-deprivation and school Club effects.
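The parametric EM step that supplies the mixing proportions can be sketched for a two-component univariate Gaussian mixture as follows. This is a minimal illustrative implementation under assumed names, not the authors' TLMc code; in their scheme, proportions estimated this way from the outcome are passed to the two-level mixture as prior information.

```python
import math

def em_two_gaussians(data, iters=200):
    """Minimal parametric EM for a two-component 1-D Gaussian mixture.
    Returns estimated mixing proportions, means and variances."""
    mu = [min(data), max(data)]      # crude but effective initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each observation
        resp = []
        for x in data:
            dens = [pi[k] / math.sqrt(2.0 * math.pi * var[k])
                    * math.exp(-0.5 * (x - mu[k]) ** 2 / var[k]) for k in (0, 1)]
            s = dens[0] + dens[1]
            resp.append([d / s for d in dens])
        # M-step: update proportions, means and variances in closed form
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)   # guard against variance collapse
    return pi, mu, var
```

On well-separated synthetic data (e.g. equal-sized samples from N(0, 1) and N(6, 1)) the estimated proportions come out near 0.5 each, which is the kind of mixing-proportion estimate the TLMc approach conditions on.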

Keywords: Parametric Expectation Maximization, Multilevel Mixture, Conditional Multilevel Mixture Known Mix, Overweight and Obesity Data.


Constrained Bayesian Method of Composite Hypotheses Testing: Singularities and Capabilities
Pages 135-167
K.J. Kachiashvili
DOI: http://dx.doi.org/10.6000/1929-6029.2016.05.03.1
Published: 16 July 2016


Abstract: The paper deals with the constrained Bayesian method (CBM) for testing composite hypotheses. It is shown that, just as CBM is optimal for testing simple and multiple hypotheses in parallel and sequential experiments, it keeps its optimal properties when testing composite hypotheses. In particular, it easily, without special effort, overcomes Lindley's paradox, which arises when testing a simple hypothesis versus a composite one. CBM is compared with the Bayesian test both in the classical case and when the a priori probabilities are chosen in a special manner to overcome Lindley's paradox. The superiority of CBM over these tests is demonstrated by simulation. The validity of the theoretical judgment is supported by extensive computational results for different characteristics of the considered methods.
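Lindley's paradox mentioned in the abstract can be reproduced numerically: when testing a simple H0: mu = 0 against a composite H1 with a N(0, tau^2) prior on mu, a large sample can yield a p-value below 0.05 while the posterior probability of H0 stays high. The Python sketch below is only an illustrative demonstration of the paradox under assumed names and priors; it does not reproduce the paper's CBM construction.

```python
import math

def lindley_demo(xbar, n, sigma=1.0, tau=1.0):
    """Simple H0: mu = 0 vs composite H1 with mu ~ N(0, tau^2).
    Returns the two-sided p-value and the posterior probability of H0
    under equal prior odds."""
    se2 = sigma ** 2 / n
    z = xbar / math.sqrt(se2)
    p_value = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    # Marginal likelihoods of xbar: N(0, se2) under H0, N(0, se2 + tau^2) under H1
    m0 = math.exp(-0.5 * xbar ** 2 / se2) / math.sqrt(2.0 * math.pi * se2)
    m1 = math.exp(-0.5 * xbar ** 2 / (se2 + tau ** 2)) \
        / math.sqrt(2.0 * math.pi * (se2 + tau ** 2))
    bf01 = m0 / m1                    # Bayes factor in favour of H0
    post0 = bf01 / (1.0 + bf01)       # posterior P(H0) with equal prior odds
    return p_value, post0
```

With n = 10000 and a sample mean of 0.025 (z = 2.5), the p-value rejects H0 at the 5% level while the posterior probability of H0 exceeds 0.7: the conflict between the frequentist and Bayesian verdicts that the paper's CBM is designed to handle.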

Keywords: CBM, Bayesian test, Composite hypotheses, Lindley’s paradox, Hypotheses testing.


Confidence Intervals for the Population Correlation Coefficient ρ
Pages 99-111
Shipra Banik and B.M. Golam Kibria
DOI: http://dx.doi.org/10.6000/1929-6029.2016.05.02.4
Published: 02 June 2016


Abstract: Computing a confidence interval for a population correlation coefficient is very important for researchers, as it gives an estimated range of values that is likely to include the unknown population correlation coefficient. This paper studies several confidence intervals for estimating the population correlation coefficient ρ by means of a Monte Carlo simulation study. Data are randomly generated from several bivariate distributions with various sample sizes. Assessment measures such as coverage probability, mean width and standard deviation of the width are selected for performance evaluation. Two real-life data sets are analyzed to demonstrate the application of the proposed confidence intervals. Based on our findings, some good confidence intervals for a population correlation coefficient are suggested for practitioners and applied researchers.
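One of the standard intervals a simulation study of this kind would typically include is the classical Fisher z-transform interval. The sketch below computes the sample Pearson correlation and its Fisher z confidence interval; it is an illustrative baseline, not the specific set of intervals compared in the paper.

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def fisher_z_ci(r, n, z_crit=1.959963984540054):
    """95% Fisher z-transform confidence interval for rho:
    transform r with atanh, add +/- z/sqrt(n-3), transform back with tanh."""
    z = math.atanh(r)
    half = z_crit / math.sqrt(n - 3)
    return math.tanh(z - half), math.tanh(z + half)
```

With only five paired observations the interval is very wide, illustrating why coverage probability and mean width, the assessment measures named in the abstract, are the natural criteria for comparing such intervals.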

Keywords: Bivariate distribution, Bootstrapping, Correlation coefficient, Confidence interval, Simulation study.


Control Charts for Skewed Distributions: Johnson’s Distributions
Pages 217-223
Bachioua Lahcene
DOI: http://dx.doi.org/10.6000/1929-6029.2015.04.02.8
Published: 21 May 2015


Abstract: In this study, some important issues regarding process capability and performance are highlighted, particularly in the case when the distribution of a process characteristic is non-normal. Process capability and performance analysis has become an inevitable step in the quality management of modern industrial processes. Determining the performance capability of a stable process using the standard process capability indices (Cp, Cpk) requires that the quality characteristics of the underlying process data follow a normal distribution. The Statistical Process Control charts widely used in industry and services by quality professionals likewise require that the quality characteristic being monitored be normally distributed. If, in contrast, the distribution of this characteristic is not normal, any conclusion drawn from control charts about the stability of the process may be misleading and erroneous. In this paper, an alternative approach is suggested, based on identifying the best distribution that fits the data. Specifically, the Johnson distribution was used as a model to normalize real field data that showed departure from normality. Real field data from the construction industry were used as a case study to illustrate the proposed analysis.
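The normalization idea can be sketched as follows: a Johnson S_U transform, z = gamma + delta * asinh((x - xi) / lambda), maps a suitably modeled skewed characteristic to an approximately standard normal one, after which ordinary 3-sigma Shewhart limits apply on the transformed scale. The Python sketch below uses illustrative parameter values and function names; fitting the Johnson parameters to real data, as the paper does, is a separate estimation step not shown here.

```python
import math

def johnson_su_transform(x, gamma, delta, xi, lam):
    """Johnson S_U transform z = gamma + delta * asinh((x - xi) / lam).
    With well-chosen parameters, z is approximately standard normal."""
    return gamma + delta * math.asinh((x - xi) / lam)

def shewhart_limits(samples):
    """3-sigma control limits for individual observations
    (valid once the monitored characteristic is roughly normal)."""
    n = len(samples)
    mean = sum(samples) / n
    sd = math.sqrt(sum((s - mean) ** 2 for s in samples) / (n - 1))
    return mean - 3.0 * sd, mean + 3.0 * sd
```

Because the S_U transform is the exact inverse of x = xi + lam * sinh((z - gamma) / delta), data simulated that way are mapped back to a standard normal sample, whose Shewhart limits land near plus and minus three, as expected.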

Keywords: Statistical Process Control, Shewhart control charts, non-normal data, Johnson System of distributions.
