Special Feature: When Survival is Not the Same as Mortality
By Gordon D. Rubenfeld, MD, MSc
In common discourse we use survival and mortality interchangeably to refer to death. In fact, survival and mortality have very specific, and very different, definitions to a clinical investigator. They require different analyses and different methods to express a treatment effect. Failing to understand the difference between survival and mortality can lead to misinterpreting clinical studies.
Mortality is a probability. The observed data are the number of deaths divided by the total number of patients. For example, in the study by Luhr and colleagues of mortality in acute respiratory failure in 3 Scandinavian countries, 91 of 221 ARDS patients died, for a 90-day mortality of 41.2%.1 Whether or not it is explicitly reported, mortality is always measured at some specified time. Survival is a rate. The observed data include whether a patient is alive or dead and when the patient died (or was last seen alive). It is expressed as the number of deaths divided by the amount of time over which all of the study patients were observed. For example, 1.3 deaths per 100 patient-days is a survival rate. The same data can also be expressed by examining the distribution of survival times, for example, as the mean or median survival time in hours, months, or years. Survival is not measured at a specific time but is truncated by the length of the study. A 5-year study of congestive heart failure cannot show survival time beyond 5 years. A 28-day study of ARDS, at best, can show that a treatment prolongs survival by 28 days.
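As a sketch of the two definitions, the Luhr figures from the text can be computed side by side with a rate. The total of 7,000 observed patient-days below is an assumption, chosen only to reproduce a 1.3-per-100-patient-days example; it is not from the study.

```python
# Mortality is a probability: deaths / patients, at a fixed time point.
deaths, patients = 91, 221          # Luhr et al., 90-day ARDS data
mortality_90d = deaths / patients   # ~0.412, i.e., 41.2%

# Survival (as used here) is a rate: deaths / total observation time.
patient_days = 7000                 # assumed person-time, for illustration only
death_rate = deaths / patient_days * 100  # deaths per 100 patient-days

print(f"90-day mortality: {mortality_90d:.1%}")
print(f"Death rate: {death_rate:.2f} per 100 patient-days")
```

The same deaths yield a probability when divided by patients and a rate when divided by person-time; only the denominator changes.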
Often we are interested in the effect of a treatment on outcome. The effect of treatment on mortality is usually expressed as the ratio of mortality probabilities (relative risk or risk ratio) or the difference in the mortality probabilities (risk difference or attributable risk). Sometimes it is mathematically advantageous to present the ratio of the odds of death. The odds ratio is usually, but not always, close to the risk ratio.
The inverse of the mortality risk difference is the number needed to treat in order to save 1 life. Mortality differences are very persuasive and convenient ways to express the results of a trial. The treatment and control arms in the low tidal volume ARDS Network study2 had mortality risks of 31% and 39.8%, respectively. The risk ratio was 0.78, which means that treated patients had 78% of the chance of dying that control patients had, or a 22% (1-risk ratio) reduction in their mortality. This means that 1 life is saved for every 11 patients treated.2
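The arithmetic from the ARDS Network example can be laid out directly, using the mortality risks quoted in the text:

```python
# ARDS Network low tidal volume trial: mortality risks from the text.
risk_treated, risk_control = 0.31, 0.398

risk_ratio = risk_treated / risk_control        # ~0.78
relative_reduction = 1 - risk_ratio             # ~22% reduction in mortality
risk_difference = risk_control - risk_treated   # ~0.088 (attributable risk)
nnt = 1 / risk_difference                       # ~11 patients treated per life saved

print(f"Risk ratio: {risk_ratio:.2f}")
print(f"Relative risk reduction: {relative_reduction:.0%}")
print(f"Number needed to treat: {nnt:.0f}")
```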
The effect of treatment on survival is expressed as the difference in median survival. For example, a recent systematic review concluded that cisplatin-based chemotherapy prolongs median survival by 1.5-3.0 months in stage IIIB-IV non-small-cell lung carcinoma.3 The effect of treatment on survival can also be expressed as a hazard ratio. Hazard ratios look like relative risks, with ratios greater than 1 indicating that the treatment is associated with an increased rate of death and ratios less than 1 indicating that the treatment is associated with a decreased rate of death. But hazard ratios are not the same as relative risks. A treatment that reduces the death rate from 10 deaths per 100 patient-days to 5 deaths per 100 patient-days will yield a hazard ratio of 0.5, but it may only prolong life by a few hours, with no difference in mortality at the end of the study. When authors present mortality and survival information along with P values in the same sentence, it can be very confusing.
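The gap between a hazard ratio and end-of-study mortality can be sketched with a simple exponential survival model. The hazards below are hypothetical, chosen only to make the point vivid, and are not taken from any study.

```python
import math

# Illustrative exponential survival model; hazards are assumed.
# A hazard ratio of 0.5 means the treated group dies at half the control rate.
hazard_control = 0.5    # deaths per patient-day (assumed, deliberately extreme)
hazard_treated = 0.25   # hazard ratio = 0.25 / 0.5 = 0.5

# Under an exponential model, median survival = ln(2) / hazard.
median_control = math.log(2) / hazard_control   # ~1.4 days
median_treated = math.log(2) / hazard_treated   # ~2.8 days

def mortality_at(day, hazard):
    """Cumulative mortality by a given day under the exponential model."""
    return 1 - math.exp(-hazard * day)

# Survival time doubles, yet 28-day mortality is essentially 100% in both arms.
print(f"Median survival: {median_control:.1f} vs {median_treated:.1f} days")
print(f"28-day mortality: {mortality_at(28, hazard_control):.3f} "
      f"vs {mortality_at(28, hazard_treated):.3f}")
```

A hazard ratio of 0.5 here buys roughly a day and a half of median survival while leaving 28-day mortality indistinguishable between the arms.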
A randomized, double-blind, placebo-controlled clinical trial of adrenal hormone replacement in septic shock was recently published.4 In this study, 299 mechanically ventilated patients in vasopressor-dependent septic shock with elevated lactate levels were randomized, within 3 hours of shock onset, to receive hydrocortisone 50 mg IV Q 6 hours and fludrocortisone 50 µg per nasogastric tube Q day for 7 days. Relative adrenal insufficiency (nonresponders) was defined as a cortisol increase ≤ 9 µg/dL after administration of ACTH 250 µg IV. The article’s abstract states, "In nonresponders, there were 73 deaths (63%) in the placebo group and 60 deaths (53%) in the corticosteroid group (hazard ratio, 0.67; 95% confidence interval, 0.47-0.95; P = .02)." With the mortality and survival data placed together in the same sentence, readers may be tempted to infer that the statistically significant P value applies to everything in the sentence.
Do corticosteroids reduce mortality by 33% (1-0.67)? Is 1 life saved for every 10 patients treated, as suggested by the 63% and 53% mortality data? Neither of these statements is supported by the data. The mortality in the treated group (53%) is not statistically significantly different from the mortality in the placebo group (63%). As the Table shows, corticosteroids improve survival but have no statistically significant effect on mortality at 28 days in any of the subgroups. Instead of presenting the simple, statistically negative comparison of mortality at 28 days, the authors present a sophisticated regression analysis of mortality. The adjusted odds ratio controls for unlucky randomization when there are chance differences between the treatment and control groups.5
In addition to making the analysis less transparent, expressing the results of the study as an odds ratio carries another price. In all patients, corticosteroids reduced mortality from 61% to 55%, a (not statistically significant) risk ratio of 0.89, or an 11% reduction in mortality. How can the adjusted odds ratio make the treatment look like it reduces mortality by 35%, with an odds ratio of 0.65? The answer is that the odds ratio cannot be interpreted as a "percent reduction in mortality" or as a risk ratio when the mortality rates are above 10% or 15%. In this study the mortality rates are over 50%, causing the odds ratio to greatly overestimate the benefit of therapy compared to the risk ratio.
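The divergence between odds ratio and risk ratio at high mortality can be shown with the rounded percentages quoted above (the published analyses used raw counts, and the published 0.65 reflects further covariate adjustment, so the numbers below are illustrative):

```python
# All-patients mortality in the corticosteroid study, as rounded percentages.
p_placebo, p_steroid = 0.61, 0.55

risk_ratio = p_steroid / p_placebo                                  # ~0.90
odds_ratio = (p_steroid / (1 - p_steroid)) / (p_placebo / (1 - p_placebo))

# At mortality rates over 50%, the odds ratio sits well below the risk
# ratio and cannot be read as a "percent reduction in mortality."
print(f"Risk ratio: {risk_ratio:.2f}")
print(f"Odds ratio: {odds_ratio:.2f}")
```

Even before any adjustment, the odds ratio (about 0.78 here) exaggerates the apparent benefit relative to the risk ratio (about 0.90).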
Ideally, survival, mortality, and adjusted analyses should all tell the same story. When they do not, as in this study, readers are left in a quandary. Which analysis tells the truth? At best, critical readers can conclude that corticosteroids prolong time until death in the study patients with septic shock, with no statistically significant effect on mortality. In the subset of patients with limited adrenal reserve, therapy prolongs time until death and appears to reduce mortality, but the mortality reduction is statistically significant only after adjustment in a regression model. The truth is that corticosteroids may or may not save lives in septic shock, but this particular study does not provide particularly compelling evidence of efficacy.
This confusion between survival and mortality is common. One of the randomized trials evaluating lung-protective ventilation in ARDS states in the abstract, "After 28 days, 11 of 29 patients (38%) in the protective-ventilation group had died, as compared with 17 of 24 (71%) in the conventional-ventilation group (P < 0.001)."6 Readers may be tempted to think that this P value means that the treatment reduces 28-day mortality from 71% to 38%. In fact, this highly significant P value comes from the survival analysis and tells us nothing about the comparison of mortality. Comparing the 71% to the 38% mortality in these 53 patients yields a P value of 0.03. This is not nearly as persuasive, particularly since this study required P < 0.001 for significance based on the number of interim analyses. Again, readers are tempted to apply the compelling P values from a survival analysis to the weak statistical evidence from the mortality data.
Why do the survival analyses yield results that conflict with, and are often more persuasive than, the mortality analyses in these studies? Survival analysis techniques are designed to detect differences in survival time. Imagine a study in which everyone is dead at the end. The risk ratio measured at the end of the study is 1.0 (no effect), since the mortality is 100% divided by 100%. Survival analysis can take data from this "negative" study and tell us which treatment prolonged life longer, even though everyone is dead at the end of the study. This is extremely useful information if the study is a 5-year study of severe congestive heart failure and the treatment prolongs median survival by 13 months. Survival analyses can also be run on ICU studies that stop observing patients after a relatively short period. In these studies, a statistically significant result with a hazard ratio below 1.0 may mean that the treatment only prolongs survival by a few hours or days. In the corticosteroid study mentioned above, patients who received the treatment, at best, lived 12 days longer than the controls, without any statistically significant effect on mortality at 28 days. In fact, the problem with survival analysis in critical care studies is that it is too sensitive: it finds statistically significant differences in time until death that have no clinical significance.7
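The thought experiment above can be made concrete with hypothetical survival times in which every patient is dead by day 28 (all numbers invented for illustration):

```python
from statistics import median

# Hypothetical survival times in days; every patient is dead by day 28.
control_days = [2, 3, 4, 5, 6, 7, 8, 9]
treated_days = [8, 10, 12, 14, 16, 18, 20, 22]

# Mortality at study end is identical in both arms: 100% / 100%.
mortality_control = sum(d <= 28 for d in control_days) / len(control_days)
mortality_treated = sum(d <= 28 for d in treated_days) / len(treated_days)
risk_ratio = mortality_treated / mortality_control   # 1.0, a "negative" study

# Survival analysis still detects a difference in time until death.
print(f"Risk ratio at day 28: {risk_ratio:.1f}")
print(f"Median survival: {median(control_days)} vs {median(treated_days)} days")
```

The mortality comparison is flat (risk ratio 1.0), yet median survival differs by more than a week, which is exactly the kind of difference a survival analysis is built to find.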
These statistical issues raise an important conceptual question: what is the right time point at which to measure mortality differences in critical care studies? There is nothing magical about 28 days, and the correct time point to measure mortality is often debated. There is always a tradeoff in selecting study end points between sensitivity to treatment effect and clinical significance. End points close to the therapy and disease (7-day mortality, for example) are most likely to detect a specific effect of the treatment but are less clinically significant.8 Measuring mortality 5 years after critical illness would arguably be more clinically relevant, but would be expensive and might miss important clinical effects that would be washed out by 5 years.
In relying on mortality at some fixed time point we make an implicit, and potentially flawed, assumption. While patients in both groups will continue to die after the end of the study, we assume that the mortality difference created by the therapy will remain fixed—that the treated patients won’t "catch up" to the controls. Survival analysis doesn’t fix this problem, because it, too, is truncated at the end of the short-term observation period. Some readers try to surmount this problem by looking at the shape of the survival curves to see whether they are "coming together" or not. This is problematic. Although survival curves are rarely drawn with confidence intervals, rest assured that there are few enough deaths toward the tails of the survival curves to make any "eyeball" inference about the curves coming together or moving apart dangerous. Although the optimal time point for assessing mortality differences in the ICU is unknown, survival analysis is no substitute for better data on the long-term effects of critical illness and its therapies.
Should survival analysis be banished from critical care studies? Not at all. Sometimes investigators are interested in very sensitive outcome measures. Phase II studies of new therapies and studies that identify risk factors for poor outcome are examples of studies in which a very sensitive outcome measure is useful. In these situations a sensitive outcome measure is more important than a clinically significant one, because the data will be used to generate and test new hypotheses. Survival analysis can also be used to measure "time until" a variety of events. Time until extubation, time until vasopressor withdrawal, or time until developing renal failure can all be studied using survival analysis techniques.
Obviously, decisions to use any treatment in medicine are driven by individual patient, clinician, and hospital factors. However, one of the most important factors is, or should be, the evidence that the treatment is beneficial to patients. In trying to understand this evidence it is extremely important for readers to understand that improving survival may not reduce mortality.
References
1. Luhr OR, et al. Incidence and mortality after acute respiratory failure and acute respiratory distress syndrome in Sweden, Denmark, and Iceland. The ARF Study Group. Am J Respir Crit Care Med. 1999; 159(6):1849-1861.
2. ARDS Network. Ventilation with lower tidal volumes as compared with traditional tidal volumes for acute lung injury and the acute respiratory distress syndrome. The Acute Respiratory Distress Syndrome Network. N Engl J Med. 2000;342(18):1301-1308.
3. Sorenson S, Glimelius B, Nygren P. A systematic overview of chemotherapy effects in non-small cell lung cancer. Acta Oncol. 2001;40(2-3):327-339.
4. Annane D, et al. Effect of treatment with low doses of hydrocortisone and fludrocortisone on mortality in patients with septic shock. JAMA. 2002;288(7): 862-871.
5. Enas GG, et al. Baseline comparability in clinical trials: Prevention of "poststudy anxiety." Drug Information Journal. 1990;24:541-548.
6. Amato MBP, et al. Effect of a protective-ventilation strategy on mortality in the acute respiratory distress syndrome. N Engl J Med. 1998;338(6):347-354.
7. Knaus WA, et al. Use of predicted risk of mortality to evaluate the efficacy of anticytokine therapy in sepsis. The rhIL-1ra Phase III Sepsis Syndrome Study Group. Crit Care Med. 1996;24(1):46-56.
8. Rubenfeld GD, et al. Outcomes research in critical care: Results of the American Thoracic Society Critical Care Assembly Workshop on Outcomes Research. The Members of the Outcomes Research Workshop. Am J Respir Crit Care Med. 1999;160(1):358-367.