Talks

Revisiting Representation Learning in Complex Networks with Applications to Recommender Systems

Dissertation defense
Hybrid
Past Talk
David Liu
Jul 11, 2025
12:00 pm
In-person
Network Science Institute
11th floor
177 Huntington Ave
Boston, MA 02115

Talk recording

Increasingly, training machine learning models requires compressing vast amounts of data and perspectives. For instance, when learning on social networks, the interactions between people are compressed into a low-dimensional space. Effective machine learning models require efficient, stable, and fair representations; this dissertation identifies challenges and algorithms for achieving such representations for complex networks. First, I present work tackling the technical challenge of learning embeddings efficiently and stably. I demonstrate how we can reduce the memory footprint of graph representation learning by considering more efficient alternatives to negative sampling based on dimension regularization. I also identify the instability of current graph embedding algorithms to perturbations in the periphery of the network and present a meta-algorithm for mitigating such instability. Second, I show that graph representation learning broadens our approach to, and understanding of, algorithmic fairness. Graph representation learning enables us to measure group fairness without discrete class labels, and analyzing embeddings reveals mechanisms of unfairness in collaborative filtering. Third, I analyze the behaviors of niche-preferring users in recommendation datasets and identify the consistent presence of high-activity niche users. Motivated by this observation, I present a reweighting framework that simultaneously upweights users for exhibiting niche preferences and high activity levels. I show that upweighting along both dimensions improves recommendation performance and mitigates popularity bias, whereas prior reweighting approaches overlook user activity levels and reduce bias at the cost of performance.
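As a rough illustration of the reweighting idea described in the abstract, the sketch below shows one way a per-user training weight could grow with both niche preference and activity level. It is an assumption for exposition, not the dissertation's actual algorithm: the function user_weights, the inverse-popularity niche score, the log-damped activity score, and the exponents alpha and beta are all hypothetical stand-ins.

import numpy as np

def user_weights(interactions, alpha=1.0, beta=1.0):
    """Illustrative per-user weights combining niche preference and activity.

    interactions: list of lists; interactions[u] holds the item ids user u
    interacted with. Returns one weight per user, normalized to average 1.
    """
    num_items = 1 + max(i for items in interactions for i in items)

    # Item popularity = number of distinct users who interacted with each item.
    popularity = np.zeros(num_items)
    for items in interactions:
        for i in set(items):
            popularity[i] += 1

    weights = np.zeros(len(interactions))
    for u, items in enumerate(interactions):
        if not items:
            continue
        # Niche preference: users who favor unpopular items score higher.
        niche = np.mean(1.0 / popularity[list(items)])
        # Activity level: more interactions yield a higher (log-damped) score.
        activity = np.log1p(len(items))
        weights[u] = (niche ** alpha) * (activity ** beta)

    # Normalize so weights average to 1 over users with at least one interaction.
    nonzero = weights > 0
    weights[nonzero] *= nonzero.sum() / weights[nonzero].sum()
    return weights

# Example: user 2 is both niche-preferring and highly active, so it gets
# the largest weight; a niche-only or activity-only scheme would rank differently.
interactions = [[0, 1], [0, 1, 2], [3, 4, 5, 6]]
print(user_weights(interactions))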
About the speaker
David Liu is a final-year computer science Ph.D. candidate at Northeastern University, advised by Professor Tina Eliassi-Rad. His research lies at the intersection of graph machine learning, algorithmic fairness, and the societal impacts of AI, with publications at KDD, FAccT, and AIES. He has interned twice at Meta (Central Applied Science and FAIR AI) and has consulted as a sociotechnical researcher at the non-profit Taraaz. David obtained a bachelor's degree in computer science from Princeton University. His work is supported by the NSF GRFP. This fall he will begin as an Assistant Research Professor at Cornell University.