Why OIG RAT-STATS and Sampling are Hot

Cornelia Dorfschmid | May 2010

The Best Strategy for Health Care Entities Is One of Proactive Preparedness

The Centers for Medicare & Medicaid Services (CMS) is now combating provider fraud, waste, and abuse through nationally coordinated strategies. New or newly aligned contractors are armed with new data analytics, pattern-recognition methods, and analysis tools.

They are also mandated to apply statistical methods and practices. Health care organizations have no choice but to fight back: by gaining knowledge of and using similar statistical methods, they can minimize their exposure and risk in government audits and effectively contest recovery demands based on sampling. In this climate of enforcement, OIG RAT-STATS and sampling are hot topics.

Contractors that are claims- and payment-focused, especially the MACs, ZPICs, and RACs in Medicare and the MICs in Medicaid, will use statistical procedures to extrapolate overpayments in their audit projects once they have identified and reviewed payment errors. For example, RAC auditors are motivated to apply extrapolation to maximize their realized contingency amounts.

They are allowed to extrapolate from their review samples to a universe of claims. While sampling and extrapolation of overpayments are not likely in automated reviews, complex reviews, especially of inpatient or high-dollar claims, are prime candidates for sampling and extrapolation.

RACs can generate large recovery amounts based on the review of relatively small statistical samples. On January 28, 2010 CMS amended and increased limits to record requests that RACs can make to institutional providers.

This increases the RACs’ leverage. They will be able to request records based on larger statistical samples, which can ultimately achieve a greater level of statistical confidence and better recovery results through extrapolation. [1] Small samples of reviewed records can quickly turn into large recovery amounts!

Given these developments on the government enforcement and audit front, the best strategy for health care entities is one of proactive preparedness and knowing the contractor’s statistical analysis tools, methods, and rules of the game.

With regard to enforcement and oversight of Medicare Advantage and Prescription Drug plans, CMS appears to be moving away from a “paper exercise” [2] and toward onsite audits and demonstrations of effectiveness, i.e., internal controls and the auditing and monitoring of operational processes.

Therefore, to demonstrate effectiveness of their internal controls and auditing and monitoring efforts, managed care plans may want to rely on sampling mechanisms for their Compliance Programs and operational units to cover a broad spectrum of issues.


Statistical Sophistication

In general, providers, suppliers, and plans need to step up their internal auditing and monitoring efforts as part of an effective Compliance Program. They need to be ready to execute a response strategy if they are faced with a government audit. In the past couple of years, recovery audits related to payment errors have driven much of the industry’s compliance efforts.

Much thought and effort has gone into developing workflow procedures, assembling response teams, and planning coordinated responses to government audits, including handling document requests, demand letters, and appeals.

However, it is this author’s contention that many providers, suppliers, and plans, especially smaller ones, lack awareness of the statistical sophistication needed to maximize the effectiveness of compliance auditing and monitoring and to effectively face government enforcement activity that has been raised to a new level of sophistication.


A solid risk management strategy for facing government auditors relies on understanding the basic tools and statistical methods that they are using. One of the prominent tools is RAT-STATS, a statistical package developed and recommended by the Department of Health and Human Services (HHS) Office of Inspector General (OIG) and available free of charge from the OIG website. [3]

The goal behind RAT-STATS was to develop valuable data-analysis tools that auditors could use easily. The most recent version, RAT-STATS 2007 Version 2, runs on Windows.

RAT-STATS can be used for sampling and includes a random number generation module. It also contains estimation modules that allow for various kinds of extrapolation, such as “variable” and “attribute” appraisal.  Variable appraisal is typically used for estimation of amounts, such as dollars.

Therefore, variable appraisal is used for overpayment estimation. “Attribute” appraisal is the method of choice when occurrences or percentages, such as accuracy rates, participation rates, or error rates, need to be estimated. Audits that rely on attribute sampling typically involve some sort of counting and a simple Yes/No check of each sampling unit against the audit protocol.
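The distinction between the two appraisal types can be sketched in a few lines of Python. Everything here is hypothetical (claim amounts, universe size, variable names); RAT-STATS itself is a standalone Windows application, so this only illustrates the underlying arithmetic:

```python
import statistics

# Hypothetical per-claim overpayments found in a reviewed sample (dollars)
overpayments = [0.0, 120.0, 0.0, 45.5, 300.0, 0.0, 80.0, 0.0, 0.0, 62.5]
universe_size = 5000  # assumed number of claims in the universe
n = len(overpayments)

# Variable appraisal: estimate a dollar amount (total overpayment)
mean_overpayment = statistics.mean(overpayments)
point_estimate = mean_overpayment * universe_size

# Attribute appraisal: estimate a rate via Yes/No checks per sampling unit
errors = [1 if amount > 0 else 0 for amount in overpayments]
error_rate = sum(errors) / n

print(f"Variable appraisal point estimate: ${point_estimate:,.2f}")
print(f"Attribute appraisal error rate: {error_rate:.0%}")
```

The same reviewed sample can thus feed either appraisal; what differs is whether a dollar amount or a rate is projected to the universe.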

Government auditors often rely on sampling protocols that include the use of RAT-STATS. CMS encourages its contractors to use OIG RAT-STATS and requires them to follow certain standards when using sampling and estimation for overpayment purposes. Some of these standards are described in the CMS Medicare Program Integrity Manual, Chapter 3. [4] CMS mandates that its contractors’ sampling methodology for projecting overpayments be reviewed by a statistician or a person with equivalent expertise in probability sampling and estimation.

Furthermore, a “probability sample” is required by CMS, and statistical expertise is needed to ensure that a statistically valid sample is drawn. [5] A valid probability sample allows one to make a “fair guess” and draw conclusions from the sample to the universe.

It would behoove a health care organization to understand and use the relevant terminology, rely on valid probability sampling, and include statistical expertise in its compliance strategies. Understanding and properly documenting certain statistical concepts not only makes the appeals process more efficient; it also supports disclosure and internal auditing efforts.

Providers and suppliers should therefore add individuals with statistical expertise to their government audit response teams and seek their help in incorporating statistics and sampling into their monitoring programs. Statistical expertise is definitely needed in appeal strategies and is of benefit in the design of effective internal auditing programs.

Basic Statistical Principles, Requirements, and Best Practices

When using statistical sampling and projections from samples, the CMS contractors are to use probability samples that are statistically valid and follow a series of prescribed steps. [6]

These steps can be taken as best practices in any audit and are as follows.

  • Selecting the Review Period;
  • Defining the Universe, the Sampling Unit, and the Sampling Frame;
  • Designing the Sampling Plan and selecting the Sample;
  • Reviewing each Sampling Unit and determining whether there was an overpayment or underpayment (variable) or an occurrence (attribute);
  • Estimating the overpayment or occurrence.
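Under assumed data, the first three steps above can be sketched in Python. The claim-ID format, review period, seed, and sample size are illustrative only; a government auditor would use RAT-STATS’ own random number module:

```python
import random

# Step 1: select the Review Period (assumed dates)
review_period = ("2009-01-01", "2009-03-31")

# Step 2: define the Universe and the Sampling Frame, i.e., the listing
# of all sampling units (here, hypothetical claim IDs)
sampling_frame = [f"CLM{num:05d}" for num in range(1, 1001)]

# Step 3: design the Sampling Plan and select the sample; documenting a
# fixed seed makes the sample exactly replicable for later verification
rng = random.Random(20100501)
sample = rng.sample(sampling_frame, k=30)  # a 30-unit probe-size sample

# Steps 4 and 5 follow the manual review of each sampled claim
print(len(sample), sample[:3])
```

Documenting the frame, the seed, and the software version used is what later allows a reviewer to reproduce the exact same sample.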

The Sampling Frame is the “listing” of all the possible sampling units from which the sample is selected. Sample designs can vary: simple, stratified, cluster, etc. A typical stratification is to separate high-dollar claims from low-dollar ones to reduce variability and thereby improve the precision of the estimate.

Samples can also be classified by size: Probe Samples (roughly 20-40 units), Discovery Samples (50 units), and Full Samples. The size of the latter depends on the confidence and precision levels required for the audit and on the probe or discovery sample results.

In terms of size, the OIG’s Self-Disclosure Protocol [7] for providers requires 30 samples for a Probe Sample, while CMS considers 20-40 adequate in a probe review. [8]

Probe and Discovery Samples serve a dual purpose: providing an initial glimpse of a problem and determining the size of a Full Sample, if one is needed. For example, in Corporate Integrity Agreements (CIAs) the OIG requires a Full Sample if the overpayment error rate, or financial error rate, in a Discovery Sample is at or above 5%.

A Full Sample is one capable of generating an estimate that meets specific confidence and precision levels, i.e., an accepted level of uncertainty. For instance, the OIG’s self-disclosure protocol gives such specifications and requires a two-sided confidence interval at a 90% confidence level and 25% precision for overpayment estimates. That simply means the sample must be large enough to achieve a tolerable range around the estimate.

If one has estimated the total overpayment amount, i.e., the Point Estimate, then one must be 90% confident that the true amount is within +/- 25% of that estimate. These requirements can be taken as a best-practice standard in any claims review. In conclusion, the confidence and precision requirements dictate the Full Sample size; there is no single “good” number or quick answer to what a Full Sample size should be.
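Checking whether a sample meets the 90% confidence / 25% precision requirement can be roughly illustrated in Python. This sketch uses a normal approximation for the interval (RAT-STATS uses Student’s t, which widens the interval slightly), and all dollar figures are hypothetical:

```python
import statistics

# Hypothetical per-claim overpayments from a reviewed sample (dollars)
sample = [0, 150, 0, 80, 210, 0, 95, 0, 40, 0, 125, 0, 60, 0, 180,
          0, 0, 70, 0, 110, 0, 55, 0, 90, 0, 160, 0, 35, 0, 140]
universe_size = 10_000
n = len(sample)

mean = statistics.mean(sample)
se_mean = statistics.stdev(sample) / n ** 0.5  # standard error of the mean

point_estimate = mean * universe_size
z90 = statistics.NormalDist().inv_cdf(0.95)  # two-sided 90%: 5% in each tail
half_width = z90 * se_mean * universe_size

relative_precision = half_width / point_estimate
print(f"Point estimate: ${point_estimate:,.0f} +/- ${half_width:,.0f}")
print(f"Relative precision: {relative_precision:.1%} (target: <= 25%)")
if relative_precision > 0.25:
    print("Target not met: amend to a larger sample")
```

If the relative precision exceeds the target, the remedy is a larger (amended) sample, not a different formula.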

The Sampling Plan should be formally documented, and it should always be requested for review in any appeal situation. One of the most important aspects of audits that rely on sampling is that a “probability” sample, i.e., a statistically valid random sample (SVRS), is selected. This should be described in the Sampling Plan.

The Estimate is described in a variety of ways as part of the extrapolation. In overpayment estimations and variable appraisals, in addition to two-sided confidence intervals with Lower and Upper Limits, the difference between the Point Estimate and the Lower Limit of a one-sided confidence interval must be understood. Both are used by government auditors.

The most commonly generated overpayment estimate in RAT-STATS, and the one required in the SDP, is the Point Estimate. The Point Estimate is the unbiased extrapolation of the overpayment dollars in the sample to the Universe; in other words, it is the best guess of the true overpayment in the universe.

The Lower Limit of the one-sided 90% confidence interval is the amount for which one can say with 90% certainty that the true overpayment does not fall below it; hence, “lower limit.”

Although government contractors can ask for the Point Estimate to be repaid, in most situations the Lower Limit of the one-sided 90% confidence interval is used as the recovery amount. This works to the advantage of the provider or supplier, [9] as it is less than the Point Estimate at a given sample size.
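The gap between the two figures is easy to see numerically. Here is a minimal Python sketch with hypothetical data, again using a normal approximation where RAT-STATS applies Student’s t:

```python
import statistics

# Hypothetical per-claim overpayments from a reviewed sample (dollars)
sample = [0, 200, 0, 120, 0, 340, 80, 0, 0, 150, 60, 0, 220, 0, 90,
          0, 0, 130, 40, 0, 0, 180, 0, 70, 0, 0, 260, 0, 110, 0]
universe_size = 8_000
n = len(sample)

mean = statistics.mean(sample)
se_mean = statistics.stdev(sample) / n ** 0.5

# Point Estimate: best guess of the true overpayment in the universe
point_estimate = mean * universe_size

# Lower Limit of the one-sided 90% confidence interval: always below the
# Point Estimate, which is why it favors the provider as a recovery amount
z = statistics.NormalDist().inv_cdf(0.90)  # about 1.2816
lower_limit = (mean - z * se_mean) * universe_size

print(f"Point estimate:            ${point_estimate:,.0f}")
print(f"One-sided 90% lower limit: ${lower_limit:,.0f}")
```

The more variable the sample results, the larger the gap between the Lower Limit and the Point Estimate, and the more there is to negotiate over.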

A provider’s or supplier’s strategy should incorporate these distinctions in any settlement discussions. Furthermore, if a self-disclosure to a MAC or the federal OIG is considered, the Lower Limit should also be part of the disclosure strategy and settlement negotiations, as it is to the advantage of the disclosing provider or supplier.

Myths in Sampling

  • Myth 1: Small samples are invalid.

In my many years of working with hospitals, health care providers, and their attorneys in appeal and disclosure situations, one of the most common points of confusion has been between “validity” and “size.” The typical question is “How large does the sample have to be?”

There seems to be a common misconception that small samples are not valid and hence “safe,” as they cannot be used for projection. Equally false is the idea that large samples are always valid and that estimates from them are “good.”

A feature of probability sampling is that the level of uncertainty can be incorporated into the estimate. Naturally, smaller samples imply greater uncertainty than larger ones, but they are nevertheless valid.
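The size/uncertainty trade-off follows from the fact that the standard error shrinks with the square root of the sample size. A short Python illustration with an assumed per-claim standard deviation (the figure is hypothetical):

```python
import statistics

z = statistics.NormalDist().inv_cdf(0.95)  # two-sided 90% confidence
claim_sd = 75.0  # assumed standard deviation of per-claim overpayments

half_widths = {}
for n in (30, 100, 500):
    # Half-width of the 90% confidence interval for the mean per claim
    half_widths[n] = z * claim_sd / n ** 0.5
    print(f"n={n:4d}: +/- ${half_widths[n]:.2f} per claim")
```

Every sample here is “valid” in the same sense; only the width of the resulting interval changes as the sample grows.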

Importantly, only a valid small sample can be amended into a larger sample and the estimate thereby improved (i.e., “cured”) to the confidence and precision levels needed. An invalid sample, no matter what size, cannot be fixed, and any estimate based on it is also invalid.

If a particular probability sample design is properly executed, i.e., defining the universe, the frame, and the sampling units, using proper randomization, accurately measuring the variables of interest, and using the correct formulas for estimation, then assertions that the sample and its resulting estimates are “not statistically valid” cannot legitimately be made, no matter what the sample size.

On the other hand, if the validity of the sample can be successfully challenged, the estimate and the conclusions drawn for the universe are not sustainable. RAT-STATS, when applied correctly, helps ensure validity, facilitates replicability of samples, and allows proper amendment of smaller samples into larger ones to reduce the uncertainty of the estimate. This is important because the work done on the smaller sample is not lost and can be incorporated into the amended sample.
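Replicability and amendment can be sketched with a seeded random number generator (the frame, seeds, and sizes below are hypothetical; RAT-STATS provides its own random number generation module for this purpose):

```python
import random

frame = list(range(1, 2001))  # hypothetical sampling frame of claim IDs

# A documented seed makes the sample exactly replicable by any reviewer
small_sample = random.Random(42).sample(frame, 50)
assert random.Random(42).sample(frame, 50) == small_sample  # replication

# Amending to a larger sample: draw additional units only from the part
# of the frame not yet sampled, so the 50 already-reviewed claims are
# reused rather than discarded
remaining = [claim for claim in frame if claim not in set(small_sample)]
extra = random.Random(43).sample(remaining, 100)
full_sample = small_sample + extra  # 150 units; prior review work preserved
```

Without the documented seed and frame, neither the government auditor’s sample nor one’s own internal sample can be verified by replication later.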

Another side of this myth deals with the power, but also the inherent risk, of small valid statistical samples. Unless the audit scope is controlled, one runs the risk of becoming unintentionally aware of potentially large overpayment amounts implied by sample results. For example, a Discovery Sample of 50 claims taken over several years could unexpectedly render a very high overpayment error rate.

The valid sample may now allow overpayment extrapolation at reasonable levels of certainty and hence imply knowledge of substantial overpayments in the universe. Retaining the claims payments while reasonably certain of a systemic problem can turn into a potential False Claims Act (FCA) violation.

Therefore, it is advisable to reduce disclosure risk by properly limiting the Review Period and Universe, at least in routine audits. One can start small, for example by narrowing the scope to a month or a quarter.

The larger the Universe, the larger the extrapolated amount that may be implied by the results of a valid small sample that uncovers a systemic pattern!

  • Myth 2: The statistical portion of government audits can be assumed to be correct.

Another misconception is that government audits that rely on probability sampling always have valid samples and estimates.  This is not always the case. The statistics may be flawed.

Challenging Validity

Once these two myths are overcome, it becomes apparent that one of the first, and often fastest, steps to a successful rebuttal or redetermination, and to stopping recoupment, is challenging the statistical validity of the sample and the estimate.

Verifying the validity of the sample can be performed independently from the criteria (medical necessity, coding, insufficient documentation, etc.) tested in the audit.

It is not uncommon for an audited health care organization to focus on the individual claims and the alleged coverage rule violations, coding errors, billing rules, etc.

Taking on the statistics and estimation methods is typically outside the provider’s or attorney’s comfort zone and expertise and is hence given lower priority. This can be a strategic mistake, especially in overpayment audits.

If a sample is not valid, any extrapolation is invalid and the extrapolated repayment can be challenged in its totality. If the sample or projection is flawed and cannot be confirmed through replication, it may be successfully challenged.

All that remains with respect to recovery amounts, if anything, would be the payments of the sample itself, hence “bringing the total dollars at issue to the ‘actual’ alleged overpayment, and not the extrapolated alleged overpayment.” [10]

The statistical expert can, early in the process, assess whether the documentation provided by the government auditor supports the overpayment estimate (i.e., whether he or she can replicate the estimation steps). A professionally documented report by the government auditor should make this straightforward.

However, if the documentation is incomplete or inconsistent, as sometimes happens, it provides immediate grounds for rebuttal and challenge (i.e., appeal). Similarly, if overpayment estimates for purposes of disclosure to a MAC or the OIG are not supported by a valid, well-documented sampling and estimation method that can be verified by replication, the disclosure process can turn into a lengthy and costly nightmare.

Practical Steps for Using Statistical Concepts

Having the statistical facts and strategy in order will go a long way toward more favorable and cost effective outcomes of disclosures, recovery audits, internal audits, and any application of statistics used in auditing and monitoring.

The successful incorporation of statistics into the hospital or health care entity’s proactive strategies requires the following:

  1. Add statistical expertise to your auditing, disclosure, and appeals strategies early. Have a statistical expert on call.
  2. Don’t assume the government auditor’s statistics are always right. CMS contractors have to follow CMS Program Integrity rules for statistical extrapolations; hold them to it.
  3. If audited by a RAC, MAC, ZPIC, or MIC, always attempt to verify the statistical portion as one of the initial steps. Remember, an invalid sample or extrapolation is the quickest way to stop recoupment and win an appeal.
  4. Communicate with government auditors formally and state clearly your intent to verify and replicate the extrapolation and therefore request, if needed through your counsel: Sampling Plan, Universe, Sampling Frame, Random Numbers, Sample, and the exact software and version used to generate Random Numbers and estimates.
  5. Check the accuracy of, and aim for, the Lower Limit of a one-sided 90% confidence interval in your defense or disclosure strategy related to overpayments. But be aware that you need to know, prepare for, and possibly pay the Point Estimate, too.
  6. If a government auditor’s claims sample is found statistically valid and confirmed, then examine the audit criteria claim by claim. Reanalyze claim by claim using an independent auditor to raise credibility. If you refute the government auditor’s claim-by-claim results, re-estimate the Point Estimate using RAT-STATS variable appraisal and the appropriate statistical formula for the Lower Limit.
  7. Don’t confuse sample size and sample validity.  Validity is critical.
  8. In internal auditing, start with small but valid samples, such as a Probe Sample. Don’t create disclosure risk by too large a scope or universe.
  9. Choose the sampling and estimation procedure that is appropriate for the review objective: dollar sampling and overpayment estimation require variable appraisal; for occurrences, percentages, and rates, use attribute appraisal.
  10. OIG RAT-STATS is the tool and protocol of choice; use it. Have your statistical expert help you integrate it into your own routine auditing and monitoring program to achieve cost-effective, representative measures that evidence the effectiveness of your compliance program. Small, valid, and well-crafted samples in internal routine monitoring go a long way.
  11. Keep it simple unless you have the statistical expertise. Everyone can understand a simple random sample; start with that.


[1] CMS, Additional Documentation Limits for FY 2010 for institutional providers, January 28, 2010. CMS increased caps to 300 records for large providers and can permit RACs to exceed the cap.

[2] Brenda Tranchida, Medicare Advantage and Prescription Drug, Audit and Enforcement Overview, February 22, 2010

[3] Department of Health and Human Services (HHS), Office of Inspector General (OIG), Publication of the OIG’s Provider Self-Disclosure Protocol (1998). RAT-STATS is available from the OIG website. The OIG strongly recommends the RAT-STATS statistical sampling software as part of providers’ self-assessment of overpayments. RAT-STATS is also recommended by CMS; see CMS Medicare Program Integrity Manual, Chapter 3. SPSS and SAS are also reputable packages that may be used; more information is available on their websites.

[4] Centers for Medicare & Medicaid Services, Medicare Program Integrity Manual, Chapter 3, “Verifying Potential Errors and Taking Corrective Actions,” Section 3.10, “Use of Statistical Sampling for Overpayment Estimation.”

[5] See footnote 4.

[6] See CMS Medicare Program Integrity Manual, Chapter 3, “Steps for Conducting Statistical Sampling.”

[7] See footnote 3.

[8] See CMS Medicare Program Integrity Manual, Chapter 3, Section 3.11.12, “‘Probe’ Reviews.”

[9] CMS Medicare Program Integrity Manual, Chapter 3, Section 3.10.5, “Calculating the Estimated Overpayment.”

[10] AHLA Member Briefing, “Recovery Audit Contractors and Medicare Audits: What Can Hospitals and Health Systems Expect as the RAC Program Expands Nationwide?” January 2009.

About the Author

Dr. Cornelia M. Dorfschmid has over 30 years of private- and government-sector experience in health care compliance consulting, the majority of it in management and executive capacities. She is a recognized expert in claims auditing, overpayment analysis, risk management, and corporate health care compliance.