Vulnerability and Robustness in Artificial Intelligence for Complex Networks
Dissertation proposal
Benjamin Miller
Hybrid talk
Wednesday, May 4, 2022
10:00 am
177 Huntington Ave., 11th floor, and online

This talk will be held in a hybrid format, both in person and online.

Artificial intelligence is being adopted widely across society and has the potential to greatly improve human productivity. The methods, however, are far from perfect, and there have been some high-profile failures. In an adversarial context, where a malicious actor actively attempts to mislead the user, the risk of such failures increases. In high-stakes application areas, such as defense, making these methods robust to attack is critical to achieving their full potential.

In this work, I consider the vulnerability and robustness to attack of three AI tasks on graphs: vertex classification, shortest path computation, and community detection. Work in the machine learning literature has focused on attacking image classifiers, and in recent years similar techniques have been formulated to force misclassification of vertices. My first study focuses on exploiting features of complex networks to make the classifier more costly to attack. Using a variety of real network datasets as well as synthetic ones, I demonstrate the contexts in which these features are useful, exploring the effects of homophily, degree heterogeneity, and clustering. My second investigation considers an attack designed to obtain a desired outcome when computing the shortest path between two nodes. I show that this problem cannot be solved efficiently to arbitrary precision, but that some existing algorithms can be leveraged to find an approximate solution. Using these algorithms as a foundation, I present the PATHATTACK algorithm and show that it identifies solutions that substantially outperform baseline methods and are often optimal. In addition, I show that PATHATTACK can be used as a heuristic when the attacker targets a node or edge rather than a specific path. Taking inspiration from the differential privacy literature, I also propose a weight-fuzzing method to increase the attacker's required budget, with encouraging initial results. Finally, I consider attacks against community detection, in which the attacker's goal is to prevent a node from being grouped with other nodes that may arouse suspicion. Each of these projects contributes to the robust AI literature with a firm grounding in network science.
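The network properties explored in the first study can be measured directly on a labeled graph. The sketch below is illustrative only, not the speaker's code; it assumes networkx, and the function name `structural_features` and the node attribute name `label` are hypothetical choices for this example.

```python
# Sketch: measuring the structural features explored in the first study
# (homophily, degree heterogeneity, clustering) on a labeled graph.
# Assumes networkx and a categorical node attribute; illustrative only.
import networkx as nx
import numpy as np

def structural_features(G, label="label"):
    degrees = np.array([d for _, d in G.degree()], dtype=float)
    return {
        # Homophily: do edges tend to connect same-label nodes?
        "homophily": nx.attribute_assortativity_coefficient(G, label),
        # Degree heterogeneity: second moment of degree over mean squared.
        "degree_heterogeneity": (degrees ** 2).mean() / degrees.mean() ** 2,
        # Clustering: average local clustering coefficient.
        "clustering": nx.average_clustering(G),
    }

# Example on a built-in labeled graph (nodes carry a "club" attribute):
# >>> structural_features(nx.karate_club_graph(), label="club")
```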
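To make the shortest-path attack setting concrete, here is a minimal greedy sketch of the underlying problem: force a desired path to become shortest by cutting edges from competing paths. This is a simplified baseline for illustration, not the PATHATTACK algorithm itself; `greedy_force_path` is a hypothetical name, and the edge attribute used as removal cost is an assumption.

```python
# Illustrative greedy baseline for the "force this path to be shortest"
# problem -- NOT the PATHATTACK algorithm. While some path other than the
# desired one is shortest, cut the cheapest competing edge on it.
import networkx as nx

def greedy_force_path(G, source, target, desired_path, cost="weight"):
    """Cut edges (modifying G in place) until desired_path is shortest.

    G            : undirected nx.Graph with "weight" attributes on edges
    desired_path : list of nodes, assumed a valid source-target path
    cost         : edge attribute treated as removal cost (assumption)
    """
    desired_edges = set(zip(desired_path, desired_path[1:]))
    desired_edges |= {(v, u) for (u, v) in desired_edges}  # undirected
    removed = []
    while True:
        current = nx.shortest_path(G, source, target, weight="weight")
        if current == desired_path:
            return removed
        # Edges on the competing shortest path that we are allowed to cut
        # (edges of the desired path itself are never removed).
        candidates = [e for e in zip(current, current[1:])
                      if e not in desired_edges]
        # Greedily cut the cheapest competing edge.
        u, v = min(candidates, key=lambda e: G.edges[e].get(cost, 1.0))
        G.remove_edge(u, v)
        removed.append((u, v))

# Usage: make the longer route s-b-t the shortest path.
G = nx.Graph()
G.add_weighted_edges_from([("s", "a", 1), ("a", "t", 1),
                           ("s", "b", 2), ("b", "t", 2)])
cuts = greedy_force_path(G, "s", "t", ["s", "b", "t"])
# cuts now holds one edge from the originally shorter path s-a-t
```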
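The weight-fuzzing idea can likewise be sketched. The snippet below is a minimal illustration under the assumption that, as in the classic differential-privacy mechanism, the defender publishes Laplace-perturbed edge weights; the function name and noise scale are hypothetical, not the method or parameters from the talk.

```python
# Minimal sketch of a weight-fuzzing defense in the spirit of differential
# privacy: publish Laplace-perturbed edge weights so an attacker planning a
# shortest-path attack works from noisy data. Illustrative assumptions only.
import copy
import numpy as np
import networkx as nx

def fuzz_weights(G, scale=1.0, rng=None):
    """Return a copy of G with Laplace(0, scale) noise added to weights."""
    rng = rng or np.random.default_rng()
    H = copy.deepcopy(G)
    for u, v, data in H.edges(data=True):
        noisy = data.get("weight", 1.0) + rng.laplace(0.0, scale)
        data["weight"] = max(noisy, 1e-9)  # keep weights positive
    return H
```

An attacker who optimizes an attack against the fuzzed weights may overpay or fail on the true graph, which is the intuition behind raising the attacker's required budget.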

Committee:

  • Tina Eliassi-Rad (Chair, Northeastern University)
  • Christoph Riedl (NetSI, Northeastern University)
  • Alina Oprea (Khoury College of Computer Sciences, Northeastern University)
  • Scott Alfeld (Amherst College)
About the speaker