This article explains a new class of automated methods, based on computer vision and deep learning, for analyzing visual content data. Images matter because they help individuals evaluate policies, primarily through emotional resonance, and can help researchers from a variety of fields measure otherwise difficult-to-estimate quantities. The lack of scalable analytic methods, however, has prevented researchers from incorporating large-scale image data into their studies. This article offers an in-depth overview of automated methods for image analysis and explains their usage and implementation. It elaborates on how these methods and their results can be validated and interpreted, and it discusses ethical concerns. Three examples then highlight distinct approaches to generating data from images. The first, and most technically advanced, studies protest framing on Twitter by training a classifier to label images and identify duplicates. The second, using hundreds of thousands of images of the 2017 Unite the Right rally in Charlottesville, VA, shows how more intensive human involvement can reveal differences between international and domestic coverage of contentious politics. The third, submitting Facebook photographs of U.S. politicians to the Google Vision API, requires the least programming.
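To make the duplicate-identification step concrete, the sketch below shows one common technique for finding near-duplicate images: perceptual (average) hashing followed by Hamming-distance comparison. This is an illustrative assumption, not the article's actual pipeline; for clarity it operates on small grayscale pixel grids, whereas a real workflow would decode image files with a library such as Pillow before hashing.

```python
# Hypothetical sketch: near-duplicate detection via average hashing.
# Each "image" is a grayscale pixel grid (list of rows of 0-255 values);
# a production pipeline would decode real files first (an assumption here).

def average_hash(pixels):
    """Hash a grayscale grid: 1 where a pixel is >= the grid mean, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p >= mean else 0 for p in flat)

def hamming(h1, h2):
    """Count the bit positions where two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

def near_duplicates(images, threshold=2):
    """Return index pairs whose hashes differ by at most `threshold` bits."""
    hashes = [average_hash(img) for img in images]
    pairs = []
    for i in range(len(hashes)):
        for j in range(i + 1, len(hashes)):
            if hamming(hashes[i], hashes[j]) <= threshold:
                pairs.append((i, j))
    return pairs

# Toy 2x2 "images": the first two are nearly identical, the third distinct.
imgs = [
    [[10, 200], [10, 200]],
    [[12, 198], [11, 201]],
    [[200, 10], [200, 10]],
]
print(near_duplicates(imgs))  # → [(0, 1)]
```

Because the hash thresholds against each image's own mean, small brightness shifts between copies of the same photograph leave the hash unchanged, which is why hashing scales to hundreds of thousands of images where pairwise pixel comparison would not.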