In this presentation, I will discuss three ongoing projects based on my dissertation. Recent innovations in the modeling of language have given new meaning to the phrase “language as a window into the mind.” These methods allow researchers to move beyond the use of linguistic data for descriptive analyses (a valuable endeavor in and of itself) to quantitative models that can identify latent patterns in large collections of text. Of particular value, I argue, is the application of these methods to the study of how “meaning” is represented in memory. Combined with advances in the modeling of memory retrieval, we are increasingly equipped to study how the processing of meaning informs political behavior. Leveraging these innovations, the present projects jointly highlight the value to political science of studying “meaning” and, more broadly, memory organization.

Together, the three papers showcase a series of novel methods and applications using different types of linguistic data. The first paper explores differences in how Democrats and Republicans represent political concepts in memory, and the role these representations play in attitude judgments. The proposed methodological framework can be used to explore group-level differences in semantic network representations. The second paper argues for a memory-centered approach to the study of ideology. Using dyadic data, we find evidence of shared, ideology-like constraints on how voters organize representations of political concepts in memory. The first and second papers use the same raw data but highlight very different approaches: whereas the first follows a supervised approach, splitting subjects ex ante by party ID, the second employs unsupervised methods to identify clusters latent in the data. The third paper turns to corpus-based methods in the study of meaning.
In that paper, the reader will find a series of tools, including a Turing-style validation task, designed to facilitate model comparison and validation for word embedding models. The paper concludes with a series of illustrative use cases and main takeaways for practitioners looking to implement word embeddings in their own research.