Low-dimensional graph embeddings are a fundamental and popular tool for machine learning on graphs. Given a graph, the basic idea is to produce a low-dimensional vector for each vertex such that "similarity" in geometric space corresponds to "proximity" in the graph. These vectors can then be used as features in a plethora of machine learning tasks, such as link prediction, community labeling, and recommendation. Despite the many results that have emerged in this area over the past few years, there has been comparatively little study of the core premise of these embeddings: can such low-dimensional embeddings effectively capture the structure of real-world (such as social) networks? Contrary to common wisdom, we mathematically prove and empirically demonstrate that popular low-dimensional graph embeddings do not capture salient properties of real-world networks. Specifically, we prove that common low-dimensional embeddings cannot generate graphs with both low average degree and large clustering coefficients, two properties that have been widely established to hold empirically in real-world networks. We also observe that the embeddings generated by popular methods fail to recreate the triangle structure of real-world networks, and do not perform well on certain community labeling tasks. This is joint work with Ashish Goel, Caleb Levy, Aneesh Sharma, and Andrew Stolman.
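To make the two quantities in the abstract concrete, here is a toy sketch (not the authors' construction or proof): it samples a graph from a random low-dimensional dot-product embedding, where an edge is placed whenever the dot product of two vertex vectors exceeds a threshold, and then measures the average degree and the global clustering coefficient. All parameters (`n`, `d`, the threshold `0.5`) are hypothetical choices for illustration only.

```python
# Toy sketch: graph sampled from a random low-dimensional dot-product
# embedding; measure average degree and global clustering coefficient.
# Parameters are illustrative, not taken from the talk.
import numpy as np

rng = np.random.default_rng(0)
n, d = 300, 8                              # vertices, embedding dimension
X = rng.normal(size=(n, d)) / np.sqrt(d)   # random unit-scale embeddings

S = X @ X.T                                # pairwise dot products ("similarity")
np.fill_diagonal(S, -np.inf)               # no self-loops
A = (S > 0.5).astype(int)                  # edge iff similarity exceeds threshold

deg = A.sum(axis=1)
avg_degree = deg.mean()

# Global clustering coefficient: 3 * (#triangles) / (#wedges).
triangles = np.trace(A @ A @ A) / 6        # each triangle counted 6 times
wedges = (deg * (deg - 1) / 2).sum()       # paths of length 2 centered anywhere
clustering = 3 * triangles / wedges if wedges > 0 else 0.0

print(f"average degree: {avg_degree:.2f}")
print(f"global clustering coefficient: {clustering:.3f}")
```

Varying `d` and the threshold traces the tension the talk addresses: thresholded dot-product graphs in low dimension can achieve high clustering only at the cost of high average degree, whereas real social networks exhibit both sparsity and abundant triangles.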