FAQs for Quality Reports
- How did we decide when to color-code performance on a numeric indicator red or green?
- Why is Hospital A "average," and Hospital B "better than average," when Hospital B has a worse percentage than Hospital A?
- How does risk adjustment work?
- Where did these indicators come from?
- Where are the data sources for these numbers?
- What are some of the known limitations of our report on these indicators and safe practices?
1. How did we decide when to color-code performance on a numeric indicator red or green?
We use an objective statistical test. We apply the red and green coloring only if the difference from the national average is large enough to be "statistically significant" - that is, not just random variation. We use standard statistical techniques to construct 99% confidence limits around our performance. If the national average is within the confidence interval, we consider our results "near the national average." Otherwise, we color-code our performance better (green) or worse (red) than the national average. In cases where an indicator does not have a predetermined desired direction (high or low), we still color-code statistically significant results, but we use blue and orange rather than red and green.
We use the same approach to color-code the average Kentucky hospital red or green, with one twist. To avoid applying a noticeably more sensitive test to the Kentucky average than we apply to ourselves, we substitute Norton Healthcare's number of cases in testing the Kentucky average. This approach means that Kentucky actually differs significantly from the national average on more indicators than those shown.
Because patient satisfaction surveys can be based on very large samples, standard statistical tests can be too sensitive, color-coding essentially average performance as red or green. To avoid this problem, we have not allowed the sample size used for significance testing to be larger than that required for 80% power. This refinement rarely changes the color coding, but it keeps the statistical test from labeling a 0.4% difference red or green.
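The report does not publish the exact power calculation behind this cap, but the idea can be sketched with a standard textbook sample-size formula for a one-sample proportion test. The rates, the detectable difference, and the illustrative numbers below are assumptions chosen for the sketch, not the report's actual parameters:

```python
from math import sqrt

def sample_size_for_power(p0, delta, z_alpha=2.5758, z_beta=0.8416):
    """Sample size giving 80% power (z_beta) to detect a difference of
    `delta` from the national rate p0 in a two-sided 99% test (z_alpha).
    Standard textbook formula; the report's exact inputs are assumed here."""
    p1 = p0 + delta
    num = (z_alpha * sqrt(p0 * (1 - p0)) + z_beta * sqrt(p1 * (1 - p1))) ** 2
    return num / delta ** 2

# Illustration (assumed numbers): to detect a 3-point difference from an
# 85% national satisfaction score, roughly 1,600 responses are enough.
cap = sample_size_for_power(0.85, 0.03)
```

If a survey actually collected, say, 12,000 responses, significance testing would proceed as if it had only the capped sample, so a tiny difference such as 0.4% can no longer reach significance.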
In addition, a score is color-coded red or green only if it is significantly different based on Wilson score 99% confidence limits and also falls outside the 25th-to-75th percentile range of the national distribution of scores.
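The confidence-interval test described above can be sketched in a few lines. This is an illustrative reimplementation with made-up numbers, not the report's production code, and it omits the 25th-to-75th percentile check:

```python
from math import sqrt

def wilson_ci(events, n, z=2.5758):
    """Wilson score interval for a proportion; z = 2.5758 gives 99% limits."""
    p = events / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

def color_code(events, n, national_rate, higher_is_better=True):
    """'Near average' unless the national rate falls outside the 99% CI.
    (The report additionally requires the score to fall outside the national
    25th-75th percentile band; that check is omitted in this sketch.)"""
    lo, hi = wilson_ci(events, n)
    if lo <= national_rate <= hi:
        return "near the national average"
    better = (events / n > national_rate) == higher_is_better
    return "green (better)" if better else "red (worse)"
```

For example, a score of 92% on 100 cases against an 80% national average tests green, while 82% on the same 100 cases stays "near the national average" because 80% lies inside the 99% interval.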
2. Why is Hospital A "average," and Hospital B "better than average," when Hospital B has a worse percentage than Hospital A?
Hospital A didn't have as many cases as Hospital B.
Standard statistical techniques don't look just at how much a hospital's performance differs from the nation's. These techniques ensure that the difference isn't just random variation. Statistical techniques become more sensitive (have more "power") when they're based on more cases. A hospital with more cases is therefore more likely to be shown in red or green than a hospital with fewer cases.
Example: If the national complication rate for some indicator is 8%, a hospital with 50 cases and a 2% complication rate will be shown as average. Meanwhile, a hospital with 500 cases and a 4% complication rate will be shown as better than average, even though its rate is higher than the first hospital's 2%. Standard statistical techniques compare each hospital to the national average - not to another hospital - and the question they ask is, "Is this difference more than the luck of the draw?"
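The example can be checked numerically. The sketch below uses an exact one-sided binomial test as a simple stand-in for the report's Wilson-interval method, with 0.005 as the one-sided tail of a two-sided 99% test:

```python
from math import comb

def lower_tail_p(complications, n, national_rate):
    """Exact binomial probability of seeing this many complications
    or fewer, if the hospital truly matched the national rate."""
    p = national_rate
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(complications + 1))

# Hospital A: 50 cases at 2% -> 1 complication.
p_a = lower_tail_p(1, 50, 0.08)    # roughly 0.08: could easily be luck
# Hospital B: 500 cases at 4% -> 20 complications.
p_b = lower_tail_p(20, 500, 0.08)  # well below 0.005: hard to call luck
```

Hospital A's result clears the 0.005 bar and stays "average"; Hospital B's does not, so B is shown as better than average despite its higher percentage.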
While situations such as the one described in the question seem odd at first, they make sense. It may help to consider an extreme example.
Imagine a hospital that had only one case, a case that did not have a complication. Even though the hospital's complication rate is 0%, you probably aren't impressed that the hospital's rate is better than the national average of 8%. What if the hospital had two cases and no complications - would it truly be better than the national average? You probably still think there are too few cases to make a judgment. The math behind the statistical comparison agrees with you, and it determines how many cases it takes to ensure the results aren't just random variability. The more cases behind a statistic, the more likely the statistic is to be colored better than (green) or worse than (red) the national average.
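The "how many cases is enough" intuition can be made concrete. Assuming the 8% national rate from the earlier example and a one-sided cutoff of 0.005 (the tail of a two-sided 99% test), a short calculation shows how many complication-free cases a hospital needs before a 0% rate stops looking like luck:

```python
from math import ceil, log

national = 0.08   # national complication rate from the example
cutoff = 0.005    # one-sided tail of a two-sided 99% test

# The chance of zero complications in n cases by luck alone is (1 - national)**n.
# Find the smallest n that pushes that probability below the cutoff.
n = ceil(log(cutoff) / log(1 - national))  # n == 64
```

So with one or two cases the 0% rate proves nothing, but around 64 complication-free cases the result would finally be colored green under these assumptions.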
Patient satisfaction note: Patient satisfaction results are not always compared to the same U.S. average. Kosair Children's Hospital is compared to a pediatric average for the inpatient and emergency department surveys.
3. How does risk adjustment work?
Risk adjustment is a mathematical calculation that takes into account differences in patients and procedures. We use either the analysis provided by the national organization that supplies the comparative data, or - if that is unavailable - we use standardization by the indirect method, which is the usual approach.
Example: Imagine results like those shown in the table below. The "All patients" row shows that the U.S. complication rate is 8%, while the hospital's complication rate is 16% - twice the national average.

                Hospital patients   Hospital rate   U.S. rate
Low risk        200                 3.5%            4%
High risk       300                 24.3%           25%
All patients    500                 16%             8%
Now imagine classifying patients as high or low risk. For example, low-risk patients might be patients under age 75 who have no serious medical conditions, while high-risk patients would include everyone else. Looking at the table above, suppose that the hospital's low-risk patients do slightly better than the national average (3.5% vs. 4%), and so do its high-risk patients (24.3% vs. 25%). The hospital's overall complication rate is high because it sees a larger proportion of high-risk patients than the average U.S. hospital does. Instead of showing the hospital's complication rate as twice as high as the national average, it should be shown as slightly better than the national average.
Calculation. Indirect standardization predicts what each hospital's rate would be if it had the same complication rates as the nation has in each risk group. So, looking at the national percentages, we predict that 4% of the hospital's 200 low-risk patients and 25% of its 300 high-risk patients will have a complication. That means the hospital's predicted number of complications for all patients is 4% of 200 + 25% of 300 = 8 + 75 = 83 complications. A standardized ratio is calculated by dividing the hospital's actual number of complications (16% of its 500 patients = 80) by the predicted number: 80/83 = 0.964. The hospital does not have 100% of the complications predicted from national averages; it has only 96.4% of the predicted number. Multiplying the 0.964 ratio by the national rate of 8% gives a risk-adjusted rate of 7.7%.
So, if we don't risk-adjust the hospital's rate, it has a complication rate equal to twice the national average. If we do risk-adjust the hospital's rate, we give it credit for its tougher cases. We say the hospital's risk-adjusted complication rate is 7.7%, compared to the U.S. rate of 8% - which gives a much more accurate representation of the hospital's performance.
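The arithmetic of the worked example above can be verified in a few lines (the patient counts and rates are exactly those from the example):

```python
# Indirect standardization, reproducing the worked example:
# 200 low-risk and 300 high-risk patients, 80 actual complications,
# national complication rates of 4% (low risk) and 25% (high risk).
groups = [
    {"patients": 200, "national_rate": 0.04},  # low risk
    {"patients": 300, "national_rate": 0.25},  # high risk
]
actual_complications = 80
national_rate_overall = 0.08

# Predicted complications if the hospital matched national rates per group.
predicted = sum(g["patients"] * g["national_rate"] for g in groups)  # 8 + 75 = 83
ratio = actual_complications / predicted                             # 80/83, about 0.964
adjusted_rate = ratio * national_rate_overall                        # about 7.7%
```

The standardized ratio below 1.0 is what lets the report call the hospital slightly better than average despite its higher raw rate.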
4. Where did these indicators come from?
Norton Healthcare is responding to lists of indicators and safe practices endorsed by national healthcare organizations. Click on an indicator to find the national organization endorsing the particular indicator. Table A gives the items on each national list and shows where each item is in the Norton Healthcare Quality Report. We show our data for the entire list of indicators. This comprehensiveness is part of our assurance to the public that we give a complete picture of our quality.
Here are the organizations and the lists included in this report. Click the links for background information on the organizations, as well as detailed definitions and supporting research for the indicators and safe practices:
National Quality Forum (NQF)
Note: Norton Healthcare is a member of NQF.
Joint Commission on Accreditation of Healthcare Organizations (JCAHO)
JCAHO and the Centers for Medicare and Medicaid Services (CMS)
- Indicators (these are included in the NQF Hospital Care indicators)
The Agency for Healthcare Research and Quality (AHRQ)
Note: For consistency with other indicators in the Norton Healthcare Quality Report, we show the AHRQ indicators as percentages rather than as rates per 1,000.
Additional pediatric indicators come from
JCAHO's ORYX indicators, which we collect through:
The BENCHmarking Effort for Networking Children's Hospitals (MMP)
- Asthma readmissions and returns to the Emergency Department
The Vermont Oxford Network
- Neonatal mortality by birth weight
5. Where are the data sources for these numbers?
AHRQ Patient Safety Indicators (PSIs) and Inpatient Quality Indicators (IQIs).
We display these statistics as percentages, not as rates per 1,000.
We use COMPdata, excluding psychiatric and rehabilitation hospitals, and the most recent software programs available from The Agency for Healthcare Research and Quality (AHRQ) to calculate risk-adjusted (but not "smoothed") rates for the hospital PSIs and IQIs. The PSIs and IQIs include indicators for geographic areas. We show a Kentucky average for all PSIs and IQIs, not just the ones that AHRQ defines as geographic area rates.
COMPdata contains data for inpatients and outpatient surgery patients discharged from Kentucky hospitals. The data available to hospitals participating in COMPdata include: patient age (but not name or identifier), some diagnosis and procedure codes, diagnosis-related group (DRG), and admission and discharge dates. To ensure a direct comparison, whenever we use COMPdata for the Kentucky average, we use COMPdata for Norton Healthcare data, even though we have more diagnosis and procedure codes available about our patients.
As outlined by AHRQ, IQIs are adjusted with the most current version of APR-DRGs from 3M. For those indicators that use special diagnosis codes called E-codes, we compute the Kentucky average using only hospitals that report E-codes to COMPdata. (Norton Healthcare hospitals report E-codes to COMPdata. A few Kentucky hospitals do not.)
JCAHO/CMS data, including the Surgical Infection Prevention (SIP) program. These include several of the NQF Hospital Care indicators.
These data come from nurses reviewing paper and electronic medical records. After various audits and reliability checks, we enter the data into a computerized tool (CART) and send the results to national databases for edits and risk adjustment. We display data from JCAHO feedback reports, and we use JCAHO or CMS public sites to calculate the Kentucky median performance.
Additional cardiovascular procedure data.
Norton Audubon Hospital and Norton Hospital are participants in databases maintained by The Society for Thoracic Surgeons (STS). Nurses review medical records and enter the data into a computerized database. We then submit the data to STS. STS maintains the risk-adjustment models for these statistics.
Additional infection control data.
Infection control nurses review medical records according to National Nosocomial Infections Surveillance (NNIS) guidelines from the CDC (Centers for Disease Control and Prevention).
Nursing skill mix and hours per patient day.
We calculated these statistics using data from Norton Healthcare's time and attendance and clinical information systems. As part of our work to provide high-quality nursing data, Norton Healthcare recently became a participant in the National Database of Nursing Quality Indicators (NDNQI).
Nursing work environment survey.
Using the nationally endorsed questionnaire, we surveyed a simple random sample of nurses at each hospital.
Nurse turnover rates.
Norton Healthcare Human Resources Department statistics were used to calculate turnover rates of nursing staff. The national comparative data are from the National Association of Healthcare Recruiters (NAHCR). The national data combine Licensed Practical Nurses (LPNs) and unlicensed assistive personnel (UAPs) into a single number.
Patient falls and our use of restraints.
Nurses report patient falls and enter them into a computerized database.
Use of restraints is monitored in a one-day prevalence study, which is a count of the number of patients in restraints at a specific point in time. The survey is conducted at a randomly designated date and time within each quarter on all reporting units.
We submit our data on patient falls and restraints to the National Database of Nursing Quality Indicators. National comparison data on patient falls and restraints are a product of the American Nurses Association's National Database of Nursing Quality Indicators (NDNQI)®.
Patients developing in-hospital pressure ulcers.
Data about patients developing in-hospital pressure ulcers are collected in a one-day prevalence study, which is a count of the number of patients with hospital-associated pressure ulcers at a specific point in time. "Hospital associated" refers to any pressure ulcer that newly develops after admission to a facility. A team of qualified nurses performs the survey assessments. Individuals conducting the clinical assessment must be trained and skilled in pressure ulcer identification and staging and must be able to distinguish pressure wounds from other types of wounds. This information is submitted to the National Database of Nursing Quality Indicators. National comparison data are a product of the American Nurses Association's National Database of Nursing Quality Indicators (NDNQI)®.
Asthma readmissions and returns to the Emergency Department.
The displayed data are from feedback reports from the BENCHmarking Effort for Networking Children's Hospitals (MMP).
Neonatal mortality by birth weight.
This is our calculation based on birth-weight category results from the Vermont-Oxford Network. The displayed U.S. average has been adjusted to Norton Suburban Hospital and Kosair Children's Hospital's mix of birth weights, and is not exactly equal to the U.S. average.
Patient satisfaction surveys.
Norton Healthcare uses an external company, Press Ganey Associates, Inc., to conduct and analyze mail surveys of a statistically valid random sample of our patients. Because the questions and methods differ from one survey to the next, it is not valid to compare the results shown here to results from patient satisfaction surveys other than those conducted by Press Ganey Associates, Inc. Norton Healthcare hospitals are participating in the national Patient Perspectives on Care project known as HCAHPS, so future data will allow for better patient experience comparisons across different hospitals.
6. What are some of the known limitations of our report on these indicators and safe practices?
Perhaps the most important limitation is that the nationally endorsed lists cover so little of what prospective patients might want to know about a hospital's performance. Much more extensive information is needed to evaluate hospital care at the level of specific procedures and conditions - rather than trying to capture hospital-wide complication rates, for example. There are almost no indicators that address outpatient care or events that occur after the patient's hospital stay. The current lists of indicators are essentially silent about the patient's long-term survival and condition.
Current medical records codes do not capture important factors that should be used - but can't be used - to adjust the statistics. For example, the data used by these indicators do a poor job of distinguishing an infection the patient already had from an infection the patient developed in the hospital. The data do not distinguish an emergency case from one where more time was available to react. The data do not indicate if the patient had "do not resuscitate" orders, which would indicate that the patient's death was expected and not a result of the care provided. Hospitals also differ in their documentation and coding practices.
COMPdata does not store all diagnosis and procedure codes. We are probably not risk-adjusting the PSIs and IQIs as much as they should be. This limitation may be trivial for some of the indicators, but may lead to greater inaccuracy for high-risk patients and procedures.
The number of procedures performed is at best a proxy for other quality indicators. Some authorities suggest not using these volume-based indicators at all; others suggest using them only in conjunction with other indicators of quality of care.
We cannot be certain about the comparability of the U.S. and Kentucky averages. The U.S. average may be based on a biased sample of states or hospitals. For example, the average on a particular indicator may be too high, because it is based only upon hospitals proud or interested enough to submit their data to a national group. Presumably, the comparative average could also be too low, if - for example - states with high-risk or older populations are over-represented in the data.
Patient satisfaction. The only U.S. comparison available for our patient satisfaction results is the average of the relevant Press Ganey database. Although more than 1,300 hospitals participate in the Press Ganey surveys, those hospitals may not be representative of all hospitals in the U.S. In other words, we may be comparing ourselves to an average that is easier or tougher than it would be if it included every U.S. hospital. Also, patient satisfaction results are not risk-adjusted, so they do not take into account the different services that different hospitals provide. Patients who are in the hospital to deliver a baby may tend to rate their hospital experience differently from patients who are in the hospital because they had a heart attack. Patient satisfaction averages do not take these expected differences into account.
Data from one-day prevalence studies and limited staff questionnaires are subject to time-of-year and low-volume variability and may not accurately represent what more complete data would show.
Although we follow national definitions, we still have countless judgment calls to make about how to display data, how to classify data for some indicators, etc. We hope that the Norton Healthcare Quality Report helps contribute to the growing national and state interest in quantifying hospital quality performance and helps hasten the day when hospitals will have agreed-upon standard approaches to these decisions.
We show a quality ribbon if a hospital has the best score possible.
For more information about Norton Healthcare's Quality Report please email us.
View the Quality Report Disclaimer.