When using observational data, assignment to a treatment group is non-random and causal inference may be difficult. One common approach to addressing this is propensity score weighting, where the propensity score is the probability that an individual is assigned to the treatment arm given their observable characteristics. This propensity is often estimated using a logistic regression of a binary treatment indicator on individual characteristics. Propensity scores are then used to obtain treatment effects adjusted for known confounders by applying inverse probability of treatment weighting (IPTW) estimators.
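As a sketch of that estimation step, here is a minimal numpy-only logistic regression fit by Newton-Raphson; the function name, toy data, and covariate structure are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def estimate_propensity(X, z, n_iter=25):
    """Fit a logistic regression of treatment z on covariates X by
    Newton-Raphson and return the fitted propensities e(X) = P(z=1 | X)."""
    Xd = np.column_stack([np.ones(len(z)), X])   # add an intercept column
    beta = np.zeros(Xd.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xd @ beta))
        # Newton step: beta += (X'WX)^{-1} X'(z - p), with W = diag(p(1-p))
        hessian = Xd.T @ (Xd * (p * (1 - p))[:, None])
        beta += np.linalg.solve(hessian, Xd.T @ (z - p))
    return 1.0 / (1.0 + np.exp(-Xd @ beta))

# Toy data: treatment is more likely when the single covariate x is high.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
z = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 + x))))
e = estimate_propensity(x[:, None], z)
```

In practice one would use a packaged estimator (e.g., statsmodels or scikit-learn), but the fitted probabilities are the same object either way.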
A paper by Xu et al. (2010) shows that the IPTW approach may overestimate the pseudo-sample size and increase the risk of a type I error (i.e., rejecting the null hypothesis when it is actually true). The authors note that robust variance estimators can address this problem but only work well with large sample sizes. Instead, Xu and co-authors propose using standardized weights in the IPTW as a simple, easy-to-implement alternative. Here is how this works.
The IPTW approach simply examines the difference between the treated and untreated groups after applying the IPTW weighting. Let the frequency with which someone is treated be:

$$\hat{p} = \frac{n_1}{N}$$
where $n_1$ is the number of people treated and $N$ is the total sample size. Let $z=1$ if the person is treated in the data and $z=0$ if the person is not treated. Assume that each person has a vector of patient characteristics, $X$, that affects the probability of receiving treatment. Then one can calculate the probability of treatment as:

$$e(X) = P(z = 1 \mid X)$$
Under standard IPTW, the weights used would be:

$$w = \frac{z}{e(X)} + \frac{1-z}{1-e(X)}$$
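Assuming a fitted propensity score `e` is already in hand, the unstandardized IPTW estimator can be sketched in numpy as follows; the variable names and the toy data-generating process (a true treatment effect of zero, confounded by a single covariate) are illustrative assumptions:

```python
import numpy as np

def iptw_effect(y, z, e):
    """Difference in weighted outcome means under unstandardized IPTW."""
    w = z / e + (1 - z) / (1 - e)            # 1/e for treated, 1/(1-e) for untreated
    mu1 = np.sum(w * z * y) / np.sum(w * z)              # weighted mean, treated
    mu0 = np.sum(w * (1 - z) * y) / np.sum(w * (1 - z))  # weighted mean, untreated
    return mu1 - mu0

# Confounded toy data with a true treatment effect of zero:
rng = np.random.default_rng(1)
N = 5000
x = rng.normal(size=N)
e_true = 1.0 / (1.0 + np.exp(-x))     # treatment more likely when x is high
z = rng.binomial(1, e_true)
y = x + rng.normal(size=N)            # outcome depends on x, not on treatment

naive = y[z == 1].mean() - y[z == 0].mean()   # biased by confounding
adjusted = iptw_effect(y, z, e_true)          # close to the true effect of zero
```

Note that the total weight `w.sum()` is roughly $2N$ rather than $N$ (each arm is reweighted up to the full sample), which is the inflated pseudo-sample size behind the type I error problem the paper identifies.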
Xu and co-authors run a simulation to show that the type I error rate under this approach is too high, often 15% to 40%. To correct this, one can use standardized weights (SW) as follows:

$$SW_1 = \frac{\hat{p}}{e(X)}, \qquad SW_0 = \frac{1-\hat{p}}{1-e(X)}$$
The former is used for the treated population (i.e., $z=1$) and the latter for the untreated population ($z=0$). The authors show that under the standardized weights, the type I error rate is roughly 5%, as intended. In fact, the authors also show that standardized weighting often outperforms robust variance estimators for estimating main effects.
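A minimal numpy sketch of the standardized weights (often also called stabilized weights), again on illustrative toy data and using the true propensity for simplicity:

```python
import numpy as np

def stabilized_weights(z, e):
    """Standardized weights: p_hat/e for the treated and
    (1 - p_hat)/(1 - e) for the untreated, with p_hat = n1/N."""
    p_hat = z.mean()
    return np.where(z == 1, p_hat / e, (1 - p_hat) / (1 - e))

def weighted_diff(y, z, wt):
    """Difference in weighted outcome means between arms."""
    mu1 = np.sum(wt * z * y) / np.sum(wt * z)
    mu0 = np.sum(wt * (1 - z) * y) / np.sum(wt * (1 - z))
    return mu1 - mu0

# Toy data with a true treatment effect of zero, confounded by x:
rng = np.random.default_rng(2)
N = 2000
x = rng.normal(size=N)
e = 1.0 / (1.0 + np.exp(-x))
z = rng.binomial(1, e)
y = x + rng.normal(size=N)

w = z / e + (1 - z) / (1 - e)     # unstandardized IPTW weights
sw = stabilized_weights(z, e)     # standardized weights
```

Within each arm the standardized weight is a constant multiple of the unstandardized one, so the weighted-difference point estimate is unchanged; what changes is the total weight, which is now roughly $N$ instead of $2N$, so the implied pseudo-sample size (and hence the variance estimate) is no longer inflated.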
For more information, you can read the full article here.