dc.contributor.advisor Baraniuk, Richard G
dc.creator Nguyen, Minh Tan
dc.date.accessioned 2020-09-14T20:25:18Z
dc.date.available 2020-09-14T20:25:18Z
dc.date.created 2020-08
dc.date.issued 2020-09-14
dc.date.submitted August 2020
dc.identifier.citation Nguyen, Minh Tan. "On the Momentum-based Methods for Training and Designing Deep Neural Networks." (2020) Diss., Rice University. https://hdl.handle.net/1911/109343.
dc.identifier.uri https://hdl.handle.net/1911/109343
dc.description.abstract Training and designing deep neural networks (DNNs) is an art that often involves expensive search over candidate architectures and optimization algorithms. In this thesis, we develop novel momentum-based methods to speed up the training of DNNs and to facilitate the process of designing them. For training DNNs, stochastic gradient descent (SGD) with constant momentum and its variants, such as Adam, are the optimization methods of choice. There is great interest in speeding up the convergence of these methods due to their high computational expense. Nesterov accelerated gradient (NAG) improves the convergence rate of gradient descent (GD) for convex optimization using a specially designed momentum; however, it accumulates error when an inexact gradient is used (as in SGD), slowing convergence at best and diverging at worst. We propose scheduled restart SGD (SRSGD), a new NAG-style scheme for training DNNs. SRSGD replaces the constant momentum in SGD with the increasing momentum in NAG but stabilizes the iterations by resetting the momentum to zero according to a schedule. Using a variety of models and benchmarks for image classification, we demonstrate that SRSGD significantly improves convergence and generalization; for instance, in training ResNet-200 for ImageNet classification, SRSGD achieves an error rate of 20.93% versus the 22.13% of the SGD baseline. These improvements become more significant as the network grows deeper. Furthermore, on both CIFAR and ImageNet, SRSGD reaches similar or even better error rates with significantly fewer training epochs than the SGD baseline. For designing DNNs, we focus on recurrent neural networks (RNNs) and establish a connection between the hidden-state dynamics in an RNN and GD. We then integrate momentum into this framework and propose a new family of RNNs, called MomentumRNNs. We theoretically prove and numerically demonstrate that MomentumRNNs alleviate the vanishing-gradient issue in training RNNs. We also demonstrate that MomentumRNN is applicable to many types of recurrent cells, including those in state-of-the-art orthogonal RNNs. Finally, we show that other advanced momentum-based optimization methods, such as Adam and NAG with restart, can be easily incorporated into the MomentumRNN framework to design new recurrent cells with even better performance.
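
The SRSGD scheme described in the abstract admits a short summary in code. Below is a minimal sketch, assuming a plain-NumPy setting and the standard Nesterov schedule mu_t = (t-1)/(t+2), which is zero immediately after a restart; the function name srsgd_train and the parameter restart_every are illustrative, not taken from the thesis.

import numpy as np

def srsgd_train(x, grad_fn, lr=0.1, restart_every=40, n_steps=200):
    # NAG-style iterates whose momentum grows with t until a scheduled reset.
    v_prev = x.copy()
    t = 1
    for _ in range(n_steps):
        v = x - lr * grad_fn(x)        # gradient step (a stochastic gradient in SGD)
        mu = (t - 1) / (t + 2.0)       # increasing Nesterov momentum, zero right after a restart
        x = v + mu * (v - v_prev)      # momentum extrapolation
        v_prev = v
        # Scheduled restart: reset the iteration counter, zeroing the momentum.
        t = 1 if t % restart_every == 0 else t + 1
    return x

For a convex quadratic, e.g. grad_fn = lambda x: A @ x - b, these iterates coincide with plain NAG between restarts; the scheduled reset is what keeps the accumulated error from inexact gradients under control.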
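The MomentumRNN idea, viewing the recurrent update as a GD-like step and attaching a velocity to it, admits an equally short sketch. The recurrence below, a velocity v_t = mu*v_{t-1} + s*(W x_t) feeding a tanh cell, is one plausible instantiation under that reading of the abstract; the names momentum_rnn_forward, mu, and s are illustrative.

import numpy as np

def momentum_rnn_forward(xs, U, W, b, mu=0.6, s=1.0):
    # Run a momentum-augmented recurrent cell over a sequence of input vectors.
    h = np.zeros(U.shape[0])   # hidden state
    v = np.zeros(U.shape[0])   # velocity accumulating the input-driven term
    hs = []
    for x in xs:
        v = mu * v + s * (W @ x)      # momentum update on the input drive
        h = np.tanh(U @ h + v + b)    # standard recurrent nonlinearity
        hs.append(h)
    return hs

Swapping the constant mu for an Adam- or NAG-with-restart-style schedule, as the abstract suggests, only changes the velocity line.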
dc.format.mimetype application/pdf
dc.language.iso eng
dc.subject momentum methods
dc.subject scheduled restart SGD
dc.subject recurrent neural networks
dc.title On the Momentum-based Methods for Training and Designing Deep Neural Networks
dc.type Thesis
dc.date.updated 2020-09-14T20:25:19Z
dc.type.material Text
thesis.degree.department Electrical and Computer Engineering
thesis.degree.discipline Engineering
thesis.degree.grantor Rice University
thesis.degree.level Doctoral
thesis.degree.name Doctor of Philosophy

