This talk will be held in a hybrid format, both in person and remote.
Artificial intelligence is being adopted widely across society and has the potential to greatly improve human productivity. The methods, however, are far from perfect, and there have been some high-profile failures. In an adversarial context, where a malicious actor actively attempts to mislead the user, the risk of such failures increases. In high-stakes application areas, such as defense, making these methods robust to attack is critical to achieving their full potential.
In this work, I consider the vulnerability and robustness to attack of three particular AI tasks on graphs: vertex classification, shortest path computation, and community detection. Recent work in the machine learning literature has focused on attacking image classifiers, and in recent years similar techniques have been formulated to force misclassification of vertices. My first study focuses on exploiting features of complex networks to make the classifier more costly to attack. Using a variety of real network datasets as well as synthetic ones, I demonstrate the contexts in which these features are useful, exploring the effects of homophily, degree heterogeneity, and clustering. My second investigation considers an attack designed to obtain a desired outcome when computing the shortest path between two nodes. I show that this problem cannot be solved efficiently to arbitrary precision, but that some existing algorithms can be leveraged to find an approximate solution. Using these algorithms as a foundation, I present the PATHATTACK algorithm and show that it identifies solutions that substantially outperform baseline methods, often finding the optimal solution. In addition, I show that PATHATTACK can be used in a heuristic method when the attacker targets a node or edge rather than a specific path. Taking inspiration from the differential privacy literature, I also propose a weight-fuzzing method to increase the attacker's required budget, with encouraging initial results. Finally, I consider attacks against community detection, in which the attacker's goal is to prevent a node from being grouped together with other nodes that may arouse suspicion. Each of these projects contributes to the robust AI literature with a firm grounding in network science.
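To make the shortest-path attack setting concrete, the sketch below is a minimal toy illustration (my own, not the PATHATTACK algorithm described above): given an attacker's desired path, greedily cut the cheapest edge on the current shortest path that the desired path does not use, until the desired path is (tied-)shortest. The graph representation (a dict of neighbor-weight dicts) and the greedy strategy are assumptions for illustration; they stand in for a naive baseline of the kind such attacks are measured against.

```python
import heapq

def dijkstra_path(graph, s, t):
    """Shortest s-t path in an undirected weighted graph given as
    {node: {neighbor: weight}}; returns (path as node list, length)."""
    dist, prev, heap = {s: 0}, {}, [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == t:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [t]
    while path[-1] != s:
        path.append(prev[path[-1]])
    return path[::-1], dist[t]

def force_shortest_path(graph, target_path):
    """Greedily remove edges until target_path is (tied-)shortest.
    Mutates graph in place; returns the list of removed edges."""
    s, t = target_path[0], target_path[-1]
    target_edges = {frozenset(e) for e in zip(target_path, target_path[1:])}
    target_len = sum(graph[u][v] for u, v in zip(target_path, target_path[1:]))
    removed = []
    while True:
        path, length = dijkstra_path(graph, s, t)
        if length >= target_len:
            return removed
        # cut the cheapest edge on the current shortest path that the
        # attacker's desired path does not itself use
        u, v = min((e for e in zip(path, path[1:])
                    if frozenset(e) not in target_edges),
                   key=lambda e: graph[e[0]][e[1]])
        del graph[u][v]
        del graph[v][u]
        removed.append((u, v))

# Toy example: two cheap two-hop routes from 0 to 3 compete with the
# attacker's desired direct edge of weight 3.
g = {0: {1: 1, 2: 1, 3: 3}, 1: {0: 1, 3: 1},
     2: {0: 1, 3: 1}, 3: {0: 3, 1: 1, 2: 1}}
cuts = force_shortest_path(g, [0, 3])  # cuts one edge from each cheap route
```

A greedy cut like this yields a feasible but generally suboptimal attack; the point of more principled formulations is to minimize the total perturbation cost rather than cut myopically.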
- Tina Eliassi-Rad (Chair, Northeastern University)
- Christoph Riedl (NetSI, Northeastern University)
- Alina Oprea (Khoury College of Computer Sciences, Northeastern University)
- Scott Alfeld (Amherst College)
Ben is a fourth-year Ph.D. candidate working with Professor Tina Eliassi-Rad. His research focuses on improving the robustness of machine learning and artificial intelligence applications against adversarial attacks, with a specific interest in cybersecurity. Prior to joining Northeastern, Ben was a technical staff member of the Cyber Analytics and Decision Systems Group at MIT Lincoln Laboratory, where he developed novel algorithms and architectures for nonlinear digital signal processing and large graph analysis. Ben received his B.S. (with highest honors) and M.S. degrees in Computer Science from the University of Illinois at Urbana-Champaign in 2005.