|Talks|

Efficient learning and optimization algorithms from local entropy maximization

Visiting speaker
Past Talk
Riccardo Zecchina
Politecnico di Torino & Microsoft Research
Dec 9, 2015
3:00 pm
In-person

Talk recording

We will discuss the role that subdominant states play in the design of algorithms for large-scale optimization problems. As a representative case, we take the problem of learning random patterns with binary synapses in single-layer networks. Standard statistical physics results show that this problem is exponentially dominated by isolated solutions that are extremely hard to find algorithmically. Using a novel large deviation method, we find unexpected analytical evidence for the existence of subdominant and extremely dense regions of solutions. Numerical experiments confirm these findings. We also show that the dense regions are surprisingly accessible by simple learning protocols, and that these synaptic configurations are robust to perturbations and generalize better than typical solutions. These outcomes extend to synapses with multiple states and to deeper neural architectures.
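To make the setting concrete, here is a minimal sketch of the binary-synapse learning problem described above, on a toy instance small enough to enumerate exhaustively. All sizes, seeds, and helper names (`is_solution`, `local_count`) are illustrative choices, not taken from the talk; the "local count" of nearby solutions is a brute-force proxy for the local entropy that distinguishes dense regions from isolated solutions.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# Tiny instance: store P random patterns xi with random labels sigma
# using a single-layer network with binary weights w in {-1, +1}^N.
N, P = 15, 5          # N odd, so pre-activations are never exactly zero
xi = rng.choice([-1, 1], size=(P, N))
sigma = rng.choice([-1, 1], size=P)

def is_solution(w):
    """True if sign(xi . w) reproduces every label."""
    return bool(np.all(np.sign(xi @ w) == sigma))

# Brute-force all 2^N binary weight vectors and collect the solutions.
solutions = []
for bits in product([-1, 1], repeat=N):
    w = np.array(bits)
    if is_solution(w):
        solutions.append(w)

def local_count(w, d):
    """Number of solutions within Hamming distance d of w: a crude
    stand-in for local entropy. Dense regions give large counts,
    isolated solutions have almost no close neighbours."""
    return sum(1 for s in solutions if int(np.sum(s != w)) <= d)
```

At these sizes one can compare `local_count` across solutions directly; the analytical point of the talk is that as N grows, the dominant (typical) solutions become isolated while rare dense clusters survive.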

The large deviation measure we introduced for the analytic study also suggests how to design general optimization algorithms based on local entropy maximization.
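One common way to turn this idea into an algorithm is to couple several replicas of the system so that the search is biased toward wide, dense regions rather than isolated minima. The sketch below is an assumption-laden illustration, not the speaker's actual method: it runs a Metropolis search over replicated binary weight vectors, with the pairwise overlap term (weight `gamma`) acting as a proxy for local entropy; all parameter values are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary perceptron: N synapses, P random patterns (illustrative sizes).
N, P = 101, 40
xi = rng.choice([-1, 1], size=(P, N))
sigma = rng.choice([-1, 1], size=P)

def errors(w):
    """Number of misclassified patterns under binary weights w."""
    return int(np.sum(np.sign(xi @ w) != sigma))

# y interacting replicas; rewarding their mutual overlap biases the
# search toward weights surrounded by many other low-error configurations.
y, beta, gamma = 3, 2.0, 0.5
W = rng.choice([-1, 1], size=(y, N))

def energy(W):
    err = sum(errors(W[a]) for a in range(y))
    overlap = sum((W[a] @ W[b]) / N for a in range(y) for b in range(a + 1, y))
    return err - gamma * overlap

E = energy(W)
for _ in range(20000):
    a, i = rng.integers(y), rng.integers(N)  # propose one synapse flip
    W[a, i] *= -1
    E_new = energy(W)
    if E_new <= E or rng.random() < np.exp(-beta * (E_new - E)):
        E = E_new          # accept the move
    else:
        W[a, i] *= -1      # reject: undo the flip

total_err = sum(errors(W[a]) for a in range(y))
print("training errors per replica:", [errors(W[a]) for a in range(y)])
```

The design choice worth noting is that the replicas share no data-dependent coupling, only the overlap term: removing it recovers independent Metropolis chains, while increasing `gamma` drives the replicas into a common dense region.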
