ijsmr
Abstract: Bioequivalence data often do not follow the normality assumption on the linear (original) scale; in that situation, use of the logarithmic transformation is recommended. In bioequivalence analysis, confusion arises about the use of the geometric mean ratio when the logarithmic transformation is recommended by the regulatory authorities. The purpose of this paper is to clear up this confusion. Different average bioequivalence criteria are also reviewed. Keywords: Geometric mean, Average bioequivalence, Logarithmic transformation, Bioequivalence ranges, Normal and log-normal distribution.
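The connection between the log transformation and the geometric mean ratio can be shown numerically: the antilog of the difference in mean log values equals the ratio of the two geometric means, so analysis on the log scale and reporting a GMR are one and the same. A minimal sketch with synthetic (hypothetical) AUC data drawn from log-normal distributions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical AUC values for test and reference formulations, drawn from
# log-normal distributions as is typical for exposure measures.
auc_test = rng.lognormal(mean=5.00, sigma=0.3, size=24)
auc_ref = rng.lognormal(mean=5.05, sigma=0.3, size=24)

# Geometric mean = exp(mean of the log-transformed values)
gm_test = np.exp(np.log(auc_test).mean())
gm_ref = np.exp(np.log(auc_ref).mean())
gmr = gm_test / gm_ref  # geometric mean ratio on the original scale

# Equivalent computation: back-transform the difference of mean logs
gmr_from_logs = np.exp(np.log(auc_test).mean() - np.log(auc_ref).mean())

print(round(gmr, 6) == round(gmr_from_logs, 6))  # the two are identical
```

This identity is why regulatory guidance that mandates log-transformed analysis leads naturally to conclusions stated in terms of the geometric mean ratio.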
Abstract: The exposure-crossover design offers a non-experimental option to control for stable baseline confounding through self-matching while examining the causal effect of an exposure on an acute outcome. This study extends the approach to longitudinal data with repeated measures of exposure and outcome, using data from a cohort of 340 older medical patients in an intensive care unit (ICU). The analytic sample included 92 patients who received ≥1 dose of haloperidol, an antipsychotic medication often used for patients with delirium. The exposure-crossover design was implemented by sampling the 3-day time segments prior (Induction) and posterior (Subsequent) to each treatment episode of receiving haloperidol. In the full cohort, there was a trend of increasing delirium severity scores (mean ± SD: 4.4 ± 1.7) over the course of the ICU stay. After exposure-crossover sampling, the delirium severity score decreased from the Induction (4.9) to the Subsequent (4.1) intervals, with the treatment episode falling in between (4.5). Based on a GEE Poisson model accounting for self-matching and within-subject correlation, the unadjusted mean delirium severity score was lower for the Subsequent than for the Induction intervals (difference: -0.55; 95% CI: -1.10, -0.01). The association diminished by 32% (-0.38; 95% CI: -0.99, 0.24) after adjusting only for ICU confounding, while increasing slightly, by 7% (-0.60; 95% CI: -1.15, -0.04), when adjusting only for baseline characteristics. These results suggest that the longitudinal exposure-crossover design is feasible and capable of partially removing stable baseline confounding through self-matching. Loss of power due to eliminating treatment-irrelevant person-time and uncertainty around allocating person-time to comparison intervals remain methodological challenges. Keywords: Exposure-crossover design, self-matching, confounding, causal effects, generalized estimating equation.
Abstract: Propensity score matching is a useful tool for analyzing observational data in clinical investigations, but it is often executed in an overly simplistic manner that fails to use the data in the best possible way. This review discusses current best practices in propensity score matching, outlining the method's essential steps, including appropriate post-matching balance assessments and sensitivity analyses. These steps are summarized as eight key traits of a propensity-matched study. Further, this review illustrates these traits through a case study examining the impact of access site on bleeding complications in percutaneous coronary intervention (PCI) procedures. Through propensity score matching, we find that bleeding occurs significantly less often with radial access procedures, though many other outcomes show no significant difference by access site, a finding that mirrors the results of randomized controlled trials. Lack of attention to methodological principles can yield results that are not biologically plausible. Keywords: Propensity score matching, Observational data, Clinical investigations.
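The core workflow the review describes — estimate propensity scores, match, then check balance — can be sketched in a few steps. This is a generic illustration on simulated data, not the review's case study: the confounders, caliper rule, and greedy 1:1 nearest-neighbor algorithm are common conventions, not details taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical observational data: two confounders drive treatment assignment.
n = 2000
x = rng.normal(size=(n, 2))
p_treat = 1 / (1 + np.exp(-(0.5 * x[:, 0] - 0.5 * x[:, 1])))
treated = rng.binomial(1, p_treat)

# Step 1: estimate propensity scores with logistic regression.
ps = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]

# Step 2: greedy 1:1 nearest-neighbor matching on the logit of the PS,
# with a caliper of 0.2 SD of the logit (a common rule of thumb).
logit = np.log(ps / (1 - ps))
caliper = 0.2 * logit.std()
t_idx = np.where(treated == 1)[0]
c_idx = list(np.where(treated == 0)[0])
pairs = []
for i in t_idx:
    if not c_idx:
        break
    dists = np.abs(logit[c_idx] - logit[i])
    j = dists.argmin()
    if dists[j] <= caliper:
        pairs.append((i, c_idx.pop(j)))

# Step 3: post-matching balance check via standardized mean differences.
ti = [p[0] for p in pairs]
ci = [p[1] for p in pairs]
def smd(a, b):
    return abs(a.mean() - b.mean()) / np.sqrt((a.var() + b.var()) / 2)
smds = [smd(x[ti, k], x[ci, k]) for k in range(2)]
print(smds)  # SMD < 0.1 is a common threshold for adequate balance
```

Reporting the post-matching SMDs (step 3) rather than stopping after the match is exactly the kind of practice the review's "key traits" emphasize.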
Abstract: Melanoma of the skin is the fifth and seventh most commonly diagnosed carcinoma in men and women, respectively, in the USA. So far, gene signatures prognostic for overall and distant metastasis-free survival, for example, have been promising in the identification of therapeutic targets for primary and metastatic melanoma. However, most of these gene signatures have been selected using statistics that depend entirely on the parametric distributions of the data (e.g., t-statistics). In this study, we assessed the impact of relaxing the parametric assumptions on the power of the models used for gene selection. We developed a semi-parametric model for feature selection that does not depend on the distributions of the covariates. This copula-based model assumes only that the marginal distributions of the covariates are continuous. Simulations indicated that the copula-based model had reasonable power at various levels of the false discovery rate (FDR). These results were validated in a publicly available melanoma dataset. Relaxing parametric assumptions on microarray data may yield procedures that have good power for differential gene expression analysis. Keywords: Copula, False discovery rate, Melanoma, Microarray, Power.
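The key copula idea — using only the ranks of each covariate, so that only continuity of the marginals is assumed — can be illustrated with a normal-scores (Gaussian-copula-style) transform followed by FDR control. This is an illustrative stand-in, not the authors' model: the data, group sizes, and the t-test on normal scores are all invented for the sketch.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical expression matrix: 200 genes x 30 samples, two groups of 15.
# The first 20 genes are differentially expressed; noise is heavy-tailed,
# so parametric assumptions on the raw scale would be dubious.
n_genes, n_per = 200, 15
expr = rng.standard_t(df=3, size=(n_genes, 2 * n_per))
expr[:20, n_per:] += 1.5

# Rank-based step: replace each gene's values with pseudo-observations
# rank/(n+1), then map through the normal quantile function. This uses
# only the ordering of the data, not its parametric distribution.
ranks = stats.rankdata(expr, axis=1)
z = stats.norm.ppf(ranks / (expr.shape[1] + 1))

# Two-sample test on the normal scores, then Benjamini-Hochberg FDR.
_, pvals = stats.ttest_ind(z[:, :n_per], z[:, n_per:], axis=1)
order = np.argsort(pvals)
bh = pvals[order] * n_genes / np.arange(1, n_genes + 1)
adjusted = np.minimum.accumulate(bh[::-1])[::-1]  # step-up adjustment
rejected = int(np.sum(adjusted <= 0.10))
print(rejected)  # genes called significant at FDR 0.10
```

Because the transform is invariant to any monotone change of the marginal scale, the procedure behaves identically whether the raw intensities are logged, normalized, or left untouched.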
Abstract: In this study, we validate the smooth test of goodness-of-fit for proportionality of the hazard function in the two-sample problem in cancer survival studies. The smooth test considered here is an extension of Neyman's smooth test to proportional hazard functions. Simulations are conducted to compare the performance, in terms of power, of the smooth test, the data-driven smooth test, the Kolmogorov-Smirnov proportional hazards test, and the global test. For validation, eight real cancer datasets from different settings are assessed for the proportional hazards assumption in Cox proportional hazards models. The smooth test performed best and is independent of the number of covariates in the Cox proportional hazards model. Keywords: Cancer, Cox proportional hazards model, Global test, Neyman's smooth test, Two-sample problem.
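The two-sample survival setting the abstract studies can be made concrete with a standard (unweighted) logrank test — note this is the classical baseline for comparing two hazard functions, not the smooth test the paper develops. The data below are simulated under exact proportional hazards (a constant hazard ratio of 1.5) with random censoring.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical two-sample survival data with proportional hazards:
# group 1 has hazard ratio 1.5 relative to group 0; ~20% random censoring.
n = 150
times = np.concatenate([rng.exponential(1.0, n),
                        rng.exponential(1.0 / 1.5, n)])
group = np.concatenate([np.zeros(n), np.ones(n)])
event = rng.binomial(1, 0.8, 2 * n).astype(bool)  # True = observed death

# Logrank test: at each distinct event time, compare observed deaths in
# group 1 with the number expected under equal hazards.
obs_minus_exp, var = 0.0, 0.0
for t in np.unique(times[event]):
    at_risk = times >= t
    d = ((times == t) & event).sum()                 # total deaths at t
    d1 = ((times == t) & event & (group == 1)).sum() # deaths in group 1
    n_r, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
    obs_minus_exp += d1 - d * n1 / n_r
    if n_r > 1:
        var += d * (n1 / n_r) * (1 - n1 / n_r) * (n_r - d) / (n_r - 1)
chi2 = obs_minus_exp**2 / var
print(chi2)  # compare with the chi-square(1) critical value, 3.84
```

The logrank test has optimal power under exactly this proportional-hazards alternative; the smooth and data-driven tests the paper compares are designed for the harder problem of detecting departures from proportionality.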


