|Talks|

Are Computers Becoming Less General-Purpose? Deep Learning, Hardware Specialization, and the Fragmentation of Computing

Visiting speaker
Neil C. Thompson
Computer Science and Artificial Intelligence Lab, MIT
Oct 9, 2018
12:00 pm
In-person
Network Science Institute
177 Huntington Ave
Boston, MA 02115

Talk recording

By Neil C. Thompson & Svenja Spanuth

It is a triumph of technology and of economics that our computer chips are so universal: the staggering variety of calculations they can perform makes countless applications possible. But this was not always the case. Computers used to be specialized, doing only narrow sets of calculations. Their rise as a 'general purpose technology' (GPT) happened only because of ground-breaking technical advances by computer scientists like von Neumann and Turing, and because of the virtuous economics common to general purpose technologies, in which product improvement and market growth fuel each other in a mutually reinforcing cycle.

This paper argues that technological and economic forces are now pushing computing in the opposite direction, making computer processors less general-purpose and more specialized. This process has already begun, driven by the slowdown of Moore's Law and the algorithmic success of deep learning. It threatens to fragment computing into those applications that get to be in the 'fast lane' because specialized, customized chips are developed for them, and those stuck in the 'slow lane', running on general-purpose chips whose progress is fading.

The rise of general-purpose computer chips has been remarkable. So, too, could be their fall. This paper outlines the forces already starting to fragment this general purpose technology.

About the speaker
I am a Research Scientist at MIT's Computer Science and Artificial Intelligence Lab and a Visiting Professor at the Lab for Innovation Science at Harvard. I am also an Associate Member of the Broad Institute, and was previously an Assistant Professor of Innovation and Strategy at the MIT Sloan School of Management, where I co-directed the Experimental Innovation Lab (X-Lab). I have advised businesses and government on the future of Moore's Law and have served on National Academies panels on transformational technologies and scientific reliability. I did my PhD in Business and Public Policy at Berkeley, where I also earned master's degrees in Computer Science and Statistics. I hold a master's in Economics from the London School of Economics, and undergraduate degrees in Physics and International Development. Prior to academia, I worked at organizations such as Lawrence Livermore National Laboratory, Bain and Company, the United Nations, the World Bank, and the Canadian Parliament.