No Need for a Discrete P-Value Cutoff in Research?

Relying on non-continuous variables to measure medical outcomes may call into question the statistical significance of results, an AAOS 2016 study shows.

Relying on non-continuous variables to measure medical outcomes may call into question the statistical significance of results, new research suggests.

In a study presented on March 4 at the annual meeting of the American Academy of Orthopaedic Surgeons in Orlando, orthopaedic trauma surgeon Paul Tornetta III, M.D., of Boston University School of Medicine, reported that in trials relying on counts of events, a change of a median of just four events could shift a study from statistically significant to insignificant, and a median change of five events could make an insignificant result significant.

"This highlights the need for readers to understand how p values relate to study findings and that using a discreet cut-off for p-value in determining importance is likely not appropriate," Dr. Tornetta and his colleagues concluded.

The typical statistical cutoff for significance is p < 0.05. But for categorical outcomes, significance depends directly on how many events of a particular outcome are observed, which means that a small, incremental change in event counts can easily push a result across that threshold.
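
To see why, consider a minimal sketch in Python (hypothetical counts, not data from the study), assuming SciPy is available: shifting a handful of events between two 50-patient arms is enough to move a Fisher's exact p value back and forth across 0.05.

    # Hypothetical 2x2 outcome table (illustration only, not study data):
    # infection counts in two arms of 50 patients each.
    from scipy.stats import fisher_exact

    events_a, events_b, n_per_arm = 12, 3, 50

    # Move events from arm A to arm B one at a time and watch the p value.
    for shift in range(4):
        a, b = events_a - shift, events_b + shift
        table = [[a, n_per_arm - a], [b, n_per_arm - b]]
        _, p = fisher_exact(table)  # two-sided Fisher's exact test
        print(f"{a}/{n_per_arm} vs {b}/{n_per_arm} events -> p = {p:.3f}")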

The researchers analyzed 198 fracture care studies published in The Journal of Bone & Joint Surgery and the Journal of Orthopaedic Trauma over a 20-year period. For each study, they recorded the p value of each outcome and the number of events that would be needed to change the finding from significant to insignificant or vice versa. The studies included 230 outcomes originally reported as statistically significant and 539 reported as statistically insignificant.
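
The report does not detail the exact search procedure, but the quantity being recorded, the smallest number of event changes that moves a finding across p = 0.05, is essentially what is often called a fragility index. One common formulation, sketched below as an assumption rather than the authors' own code, converts non-events to events in the arm with fewer events until a two-sided Fisher's exact p value reaches 0.05.

    from scipy.stats import fisher_exact

    def events_to_flip(events_a, n_a, events_b, n_b, alpha=0.05):
        # Assumes the initial comparison is significant; returns the smallest
        # number of non-events converted to events (in the arm with fewer
        # events) that pushes the two-sided Fisher p value to alpha or above.
        if events_a > events_b:  # always modify the arm with fewer events
            events_a, n_a, events_b, n_b = events_b, n_b, events_a, n_a
        for flips in range(n_a - events_a + 1):
            a = events_a + flips
            _, p = fisher_exact([[a, n_a - a], [events_b, n_b - events_b]])
            if p >= alpha:
                return flips  # the result is no longer significant
        return None  # significance never flips within this arm

    print(events_to_flip(3, 50, 12, 50))  # hypothetical arms, not study data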

The median number of patients included in the studies was 95 (range, 12 to 6,000); the median p value for significant findings was 0.003 and the median p value for insignificant findings was 0.6. Across the 769 outcomes evaluated, the researchers found that a median of only four events would flip studies with reported p values below 0.05 to above 0.05, and a median of five events would flip trials initially reporting p ≥ 0.05 to significance.

The median of five events needed to flip the significance of a trial represented 8.9 percent of the events in one arm and a mere 3.8 percent of the events in the total study population, while the average loss to follow-up in the studies was nearly as large, at 3 percent. Randomized trials were no different from non-randomized trials in the analysis, the researchers reported.

"The statistical outcomes of comparison trials that rely on non-continuous variables such as infection, nonunion, secondary procedures, etc. may not be as stable as previously thought," the researchers concluded.

References:

Tornetta P III. Statistical Significance in Trauma Research: Too Unstable to Trust? Presented at: 2016 Annual Meeting of the American Academy of Orthopaedic Surgeons; March 4, 2016; Orlando, FL.
