Social data is often generated by heterogeneous subgroups, each with its own traits and behaviors. Correlations between these traits, behaviors, and even the way the data is collected can create subtle biases. Models trained on biased data will make invalid inferences about individuals, a mistake known as the ecological fallacy. Such inferences can also be unfair, discriminating against individuals based on their membership in protected groups. I describe common sources of bias in heterogeneous data, including Simpson's paradox, survivor bias, and the longitudinal data fallacy. I then describe a mathematical framework for de-biasing data that addresses these threats to the validity of predictive models. The framework creates covariates that do not depend on sensitive features, such as gender or race, and can be used with any model to produce fairer, unbiased predictions. The framework promises to learn unbiased models even in analytically challenging data environments.
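The abstract does not spell out how the framework constructs such covariates. As a minimal sketch only, one standard way to obtain a covariate that does not depend on a sensitive feature is residualization: regress the covariate on the sensitive feature and keep the residuals, which are uncorrelated with it. All variable names below are illustrative assumptions, not taken from the source.

```python
# Hedged sketch of residualization, not the paper's actual framework.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
s = rng.integers(0, 2, size=n).astype(float)   # hypothetical sensitive feature
x = 2.0 * s + rng.normal(size=n)               # covariate correlated with s

# Least-squares fit of x on [1, s], then subtract the fitted component.
S = np.column_stack([np.ones(n), s])
beta, *_ = np.linalg.lstsq(S, x, rcond=None)
x_debiased = x - S @ beta

print(np.corrcoef(s, x)[0, 1])           # substantial correlation before
print(np.corrcoef(s, x_debiased)[0, 1])  # essentially zero after
```

The residual `x_debiased` can then be fed to any downstream model in place of `x`, which matches the abstract's claim that the de-biased covariates "can be used with any model"; note that linear residualization removes only linear dependence on the sensitive feature.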