Matched Random Assignment Definition Statistics

Matching is a statistical technique used to evaluate the effect of a treatment by comparing treated and non-treated units in an observational study or quasi-experiment (i.e. when the treatment is not randomly assigned). The goal of matching is to find, for every treated unit, one or more non-treated units with similar observable characteristics against which the effect of the treatment can be assessed. By pairing treated units with similar non-treated units, matching enables a comparison of outcomes among treated and non-treated units to estimate the effect of the treatment while reducing bias due to confounding.[1][2][3] Propensity score matching, an early matching technique, was developed as part of the Rubin causal model.[4]

Matching has been promoted by Donald Rubin.[4] It was prominently criticized in economics by LaLonde (1986),[5] who compared estimates of treatment effects from an experiment to comparable estimates produced with matching methods and showed that the matching estimates were biased. Dehejia and Wahba (1999) re-evaluated LaLonde's critique and showed that matching can perform well in that setting.[6] Similar critiques have been raised in political science[7] and sociology[8] journals.

Analysis

When the outcome of interest is binary, the most general tool for the analysis of matched data is conditional logistic regression, as it handles strata of arbitrary size and continuous or binary treatments (predictors) and can control for covariates. In particular cases, simpler tests such as the paired difference test, McNemar's test, and the Cochran–Mantel–Haenszel test are available.
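For the simplest case of 1:1 matched pairs with a binary outcome, McNemar's test uses only the discordant pairs. Below is a minimal sketch on hypothetical pair counts, using the mcnemar function from statsmodels; for larger strata or when covariates must be controlled for, conditional logistic regression would be used instead.

    # McNemar's test on hypothetical counts of 1:1 matched pairs with a binary outcome.
    import numpy as np
    from statsmodels.stats.contingency_tables import mcnemar

    # Rows: outcome of the treated unit (0/1); columns: outcome of its matched control (0/1).
    table = np.array([[30, 10],
                      [25, 35]])

    # Only the discordant cells (10 and 25) carry information about the treatment effect.
    result = mcnemar(table, exact=True)
    print(result.statistic, result.pvalue)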

When the outcome of interest is continuous, the average treatment effect is estimated instead, for example as the mean within-pair difference in outcomes between matched treated and non-treated units.
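As a minimal sketch with hypothetical data, the average treatment effect for 1:1 matched pairs can be estimated as the mean within-pair difference in outcomes:

    import numpy as np

    y_treated = np.array([12.1, 9.8, 14.3, 11.0, 10.5])  # outcomes of treated units
    y_control = np.array([10.4, 9.9, 12.0, 10.1, 10.8])  # outcomes of their matches

    diff = y_treated - y_control
    effect = diff.mean()                          # estimated average treatment effect
    se = diff.std(ddof=1) / np.sqrt(len(diff))    # standard error of the estimate
    print(effect, se)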

Matching can also be used to "pre-process" a sample before analysis via another technique, such as regression analysis.[9]
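The sketch below illustrates this pre-processing idea under simplifying assumptions: nearest-neighbour matching on an estimated propensity score, followed by an ordinary regression on the matched sample only. The data are simulated, and one-to-one matching with replacement is just one of many possible matching rules.

    # Matching as pre-processing (simulated data): propensity-score nearest-neighbour
    # matching, then a regression restricted to the matched sample.
    import numpy as np
    import statsmodels.api as sm
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    n = 500
    x = rng.normal(size=(n, 2))                              # observed covariates
    treated = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))    # treatment depends on x
    y = 2.0 * treated + x @ np.array([1.0, 0.5]) + rng.normal(size=n)

    # 1. Estimate propensity scores.
    ps = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]

    # 2. For each treated unit, find the nearest control on the propensity score.
    t_idx, c_idx = np.flatnonzero(treated == 1), np.flatnonzero(treated == 0)
    nn = NearestNeighbors(n_neighbors=1).fit(ps[c_idx].reshape(-1, 1))
    _, match = nn.kneighbors(ps[t_idx].reshape(-1, 1))
    matched = np.concatenate([t_idx, c_idx[match.ravel()]])

    # 3. Run the outcome regression on the matched sample only.
    X = sm.add_constant(np.column_stack([treated[matched], x[matched]]))
    print(sm.OLS(y[matched], X).fit().params[1])             # coefficient on treatment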

Overmatching

Overmatching is matching for an apparent confounder that actually is a result of the exposure. True confounders are associated with both the exposure and the disease, but if the exposure itself leads to the confounder, or has equal status with it, then stratifying by that confounder will also partly stratify by the exposure, resulting in an obscured relation of the exposure to the disease.[10] Overmatching thus causes statistical bias.[10]

For example, matching the control group by gestation length and/or the number of multiple births when estimating perinatal mortality and birthweight after in vitro fertilization (IVF) is overmatching, since IVF itself increases the risk of premature birth and multiple birth.[11]

Overmatching may also be regarded as a form of sampling bias that decreases the external validity of a study, because the controls become more similar to the cases with respect to exposure than the general population is.

References

  1. Rubin, Donald B. (1973). "Matching to Remove Bias in Observational Studies". Biometrics. 29 (1): 159–183. doi:10.2307/2529684. JSTOR 2529684.
  2. Anderson, Dallas W.; Kish, Leslie; Cornell, Richard G. (1980). "On Stratification, Grouping and Matching". Scandinavian Journal of Statistics. 7 (2): 61–66. JSTOR 4615774.
  3. Kupper, Lawrence L.; Karon, John M.; Kleinbaum, David G.; Morgenstern, Hal; Lewis, Donald K. (1981). "Matching in Epidemiologic Studies: Validity and Efficiency Considerations". Biometrics. 37 (2): 271–291. doi:10.2307/2530417. JSTOR 2530417. PMID 7272415.
  4. Rosenbaum, Paul R.; Rubin, Donald B. (1983). "The Central Role of the Propensity Score in Observational Studies for Causal Effects". Biometrika. 70 (1): 41–55. doi:10.1093/biomet/70.1.41.
  5. LaLonde, Robert J. (1986). "Evaluating the Econometric Evaluations of Training Programs with Experimental Data". American Economic Review. 76 (4): 604–620. JSTOR 1806062.
  6. Dehejia, R. H.; Wahba, S. (1999). "Causal Effects in Nonexperimental Studies: Reevaluating the Evaluation of Training Programs". Journal of the American Statistical Association. 94 (448): 1053–1062. doi:10.1080/01621459.1999.10473858.
  7. Arceneaux, Kevin; Gerber, Alan S.; Green, Donald P. (2006). "Comparing Experimental and Matching Methods Using a Large-Scale Field Experiment on Voter Mobilization". Political Analysis. 14 (1): 37–62. doi:10.1093/pan/mpj001.
  8. Arceneaux, Kevin; Gerber, Alan S.; Green, Donald P. (2010). "A Cautionary Note on the Use of Matching to Estimate Causal Effects: An Empirical Example Comparing Matching Estimates to an Experimental Benchmark". Sociological Methods & Research. 39 (2): 256–282. doi:10.1177/0049124110378098.
  9. Ho, Daniel E.; Imai, Kosuke; King, Gary; Stuart, Elizabeth A. (2007). "Matching as Nonparametric Preprocessing for Reducing Model Dependence in Parametric Causal Inference". Political Analysis. 15 (3): 199–236. doi:10.1093/pan/mpl013.
  10. Marsh, J. L.; Hutton, J. L.; Binks, K. (2002). "Removal of radiation dose response effects: an example of over-matching". British Medical Journal. 325 (7359): 327–330. doi:10.1136/bmj.325.7359.327. PMC 1123834. PMID 12169512.
  11. Gissler, M.; Hemminki, E. (1996). "The danger of overmatching in studies of the perinatal mortality and birthweight of infants born after assisted conception". Eur J Obstet Gynecol Reprod Biol. 69 (2): 73–75. doi:10.1016/0301-2115(95)02517-0. PMID 8902436.

Further reading

  • Angrist, Joshua D.; Pischke, Jörn-Steffen (2009). "Regression Meets Matching". Mostly Harmless Econometrics: An Empiricist's Companion. Princeton University Press. pp. 69–80. ISBN 978-0-691-12034-8. 

Applied Statistics Lesson of the Day – The Matched Pairs Experimental Design

The matched pairs design is a special case of the randomized block design in experimental design. It has only 2 treatment levels (i.e. there is 1 factor, and this factor is binary), and a blocking variable divides the experimental units into pairs. Within each pair (i.e. each block), the experimental units are randomly assigned to the 2 treatment groups (e.g. by a coin flip). The experimental units are divided into pairs so that homogeneity is maximized within each pair.
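A minimal sketch of this randomization step, with hypothetical unit labels: within each pair, a coin flip decides which unit receives which of the two treatment levels.

    import random

    # Hypothetical pairs of experimental units (each tuple is one block).
    pairs = [("unit_1a", "unit_1b"), ("unit_2a", "unit_2b"), ("unit_3a", "unit_3b")]

    assignment = {}
    for a, b in pairs:
        if random.random() < 0.5:      # coin flip within the pair
            assignment[a], assignment[b] = "treatment", "control"
        else:
            assignment[a], assignment[b] = "control", "treatment"

    print(assignment)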

For example, a lab safety officer wants to compare the durability of nitrile and latex gloves for chemical experiments. She wants to conduct an experiment with 30 nitrile gloves and 30 latex gloves to test her hypothesis. She does her best to draw a random sample of 30 students at her university for her experiment, and they all perform the same organic synthesis using the same procedures to see which type of glove lasts longer.

She could use a completely randomized design so that a random sample of 30 hands gets the 30 nitrile gloves and the other 30 hands get the 30 latex gloves. However, since lab habits are unique to each person, they are a confounding variable: durability can be affected by both the glove material and a student's lab habits, and the lab safety officer only wants to study the effect of the material. Thus, a randomized block design should be used instead, with each student acting as a block: one hand gets a nitrile glove, and one hand gets a latex glove. Once the gloves have been given to a student, the type of glove is randomly assigned to each hand; some students may get the nitrile glove on their left hand, and some may get it on their right hand. Since this design involves one binary factor and blocks that divide the experimental units into pairs, it is a matched pairs design.
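A minimal simulation of this glove experiment, with made-up durability numbers: a coin flip per student decides which hand gets the nitrile glove, and the two glove types are then compared within students with a paired t-test.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_students = 30

    # Coin flip per student: True means the nitrile glove goes on the left hand.
    nitrile_on_left = rng.random(n_students) < 0.5
    print("nitrile on the left hand for", nitrile_on_left.sum(), "of 30 students")

    # Hypothetical durability scores (e.g. minutes until the glove fails).
    nitrile = rng.normal(loc=55, scale=5, size=n_students)
    latex = rng.normal(loc=50, scale=5, size=n_students)

    # Each student wears one glove of each type, so the comparison is paired.
    t_stat, p_value = stats.ttest_rel(nitrile, latex)
    print(t_stat, p_value)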
