Sparse representation, dictionary learning, and deep neural networks: their connections and new algorithms

Seminar by Mostafa Sadeghi, Sharif University of Technology, Tehran

Tuesday 19 June 2018, 14:30 – 15:30, room F107

INRIA Montbonnot Saint-Martin

Abstract. Over the last decade, sparse representation, dictionary learning, and deep artificial neural networks have dramatically impacted signal processing and machine learning, yielding state-of-the-art results in a variety of tasks, including image enhancement and reconstruction, pattern recognition and classification, and automatic speech recognition. In this talk, we give a brief introduction to these subjects and present new algorithms and perspectives. Specifically, we introduce efficient algorithms for sparse recovery and dictionary learning, mostly based on proximal methods in optimization. Furthermore, we present a new algorithm for systematically designing large artificial neural networks using a progression property: a greedy algorithm that progressively adds nodes and layers to the network. We also discuss an effective method, inspired by existing dictionary learning techniques, for reducing the number of training parameters in neural networks, thereby facilitating their use in applications with limited memory and computational resources. Further connections among sparse representation, dictionary learning, and deep neural networks will also be discussed.
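As a concrete illustration of the kind of proximal method the abstract refers to (a generic sketch, not the speaker's specific algorithms), the classical Iterative Shrinkage-Thresholding Algorithm (ISTA) solves the l1-regularized sparse recovery problem min_x 0.5*||y - Dx||^2 + lam*||x||_1 by alternating a gradient step on the smooth term with the proximal operator of the l1 norm, which is simple soft-thresholding:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t*||.||_1: shrink each entry toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(D, y, lam=0.1, n_iter=500):
    # ISTA for: min_x 0.5*||y - D x||^2 + lam*||x||_1
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)         # gradient of the smooth (quadratic) term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy example: recover a 3-sparse code over a random dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
y = D @ x_true
x_hat = ista(D, y, lam=0.05)
```

In dictionary learning, a step like this typically serves as the sparse-coding stage, alternated with an update of D itself; accelerated variants (e.g. FISTA) follow the same proximal template with a momentum term.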

Biography. Mostafa Sadeghi received the B.Sc. degree from Ferdowsi University of Mashhad, Mashhad, Iran, in 2010, and the M.Sc. and Ph.D. degrees from Sharif University of Technology, Tehran, Iran, in 2012 and 2018, respectively, all in electrical engineering. His main research interests include dictionary learning for sparse representation, machine learning, local/global optimization, and deep neural networks. From October 2016 to June 2017, he was a visiting scholar at the Information Science and Engineering Department, Royal Institute of Technology (KTH), Stockholm, Sweden, where he worked on sparse representation, dictionary learning, and deep neural networks. He was also a short-term visitor at the Automatic Control Department at KTH from September 2017 to January 2018, where his research focused on global optimization techniques applied to linear system identification.
