NIPS 2016: A Survey of Tutorials, Papers, and Workshops

By Two Sigma on April 4, 2017

Since its launch in 1987, the annual Conference on Neural Information Processing Systems (NIPS) has brought together researchers working on neural networks and related fields, but it later diversified to become one of the largest conferences in machine learning. In recent years, the trend towards deep learning has brought the conference closer to its roots. The 2016 program spanned six days (Dec 5 to 10) and included tutorials, oral and poster presentations, workshops, and invited talks on a broad range of research topics.

Following their previous Insights post on ICML 2016, Two Sigma researchers Vinod Valsalam and Firdaus Janoos discuss below the notable advances in deep learning, optimization algorithms, Bayesian techniques, and time-series analysis presented at NIPS 2016.

Overview

With 550+ accepted papers and 50+ workshops, NIPS 2016 drew more than double the attendance of two years earlier (over 5,000 attendees, up from more than 2,500 in 2014), demonstrating rapidly growing interest in machine learning and artificial intelligence. That growth included strong industry participation (Two Sigma was among the more than 60 sponsors), both for recruiting talent and for presenting advances in the field.

Several interesting invited talks were given by researchers who are established in both academia and industry. Yann LeCun (NYU & Facebook) made a case for why unsupervised learning is the future, Drew Purves (DeepMind) talked about why AI and nature need each other, and Marc Raibert (Boston Dynamics) showed how it is possible to engineer sophisticated legged robots without using learning. Although not yet mainstream, the ultimate goal of achieving Artificial General Intelligence (AGI) is also gathering industry momentum. The major players in this area are attracting the research community to their software platforms with OpenAI announcing their Universe platform, DeepMind open-sourcing their Lab platform, and GoodAI releasing an update to their Brain Simulator.

The dominating theme at the conference was deep learning, sometimes combined with other machine learning methods such as reinforcement learning and Bayesian techniques. The core area of deep learning seems to be maturing, with topics shifting from architectures and layers to better learning algorithms and analysis. A notable area that remains outside the strong influence of deep learning is time-series analysis, which classical and Bayesian approaches still dominate. There was also an interesting track on optimization. Below is a highly pared-down list of the myriad presentations that Vinod and Firdaus thought were promising, relevant, or generally interesting.

Tutorials

The tutorials this year were very good, touching upon many of the exciting and salient technologies in machine learning.

Deep Reinforcement Learning Through Policy Optimization

Reinforcement Learning (RL) has been grabbing headlines recently by applying deep learning techniques. One way to build RL agents is to optimize their policies directly, which is often more convenient than working with state value functions, and this approach was the focus of this tutorial by Pieter Abbeel (Berkeley and OpenAI) and John Schulman (OpenAI) [slides]. The key to building such agents using deep learning is computing policy gradients effectively. However, gradient evaluations are often noisy, requiring variance reduction techniques that add complexity to the algorithms. For example, the vanilla policy gradient algorithm (Reinforce) uses an advantage estimate that subtracts a fitted baseline from returns. More advanced algorithms, such as Asynchronous Advantage Actor Critic (A3C) and Generalized Advantage Estimation (GAE), use value functions for variance reduction. Other important considerations for RL algorithms are step size and sample complexity, which Trust Region Policy Optimization (TRPO) and Guided Policy Search (GPS) are designed to address.
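
To make the baseline idea concrete, here is a minimal sketch (ours, not the tutorial's code) of the Reinforce gradient estimate for a single trajectory; the function name and array conventions are illustrative:

```python
def reinforce_gradient(logprob_grads, returns, baselines):
    """Vanilla policy gradient (Reinforce) estimate for one trajectory.

    logprob_grads: per-step gradients of log pi(a_t | s_t) (NumPy arrays)
    returns:       per-step returns R_t (sums of future rewards)
    baselines:     fitted baseline values b(s_t)
    """
    # Subtracting a baseline leaves the estimator unbiased while
    # reducing its variance.
    advantages = [r - b for r, b in zip(returns, baselines)]
    # grad J ~= sum_t grad log pi(a_t | s_t) * A_t
    return sum(g * a for g, a in zip(logprob_grads, advantages))
```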

They concluded the tutorial with an announcement of the OpenAI Universe, a platform for training and measuring AI agents. It is based on the previous OpenAI Gym and includes a lot of games and other applications to train agents with the goal of artificial general intelligence (AGI). With the same goal, DeepMind announced in another talk that they are open-sourcing their Lab platform, a 3D learning environment based on Quake. It seems like both companies are placing their bets on deep RL to make inroads into AGI.

Theory and Algorithms for Forecasting Non-Stationary Time Series

In this interesting—but theory-heavy—tutorial, Vitaly Kuznetsov (Google) and Mehryar Mohri (NYU) presented a relatively new way of analyzing time-series data based on a learning-focused concept called discrepancy [slides]. They started with a clear introduction to the classical autoregressive family of models but pointed out that the model and data assumptions often don’t hold in practice. On the other hand, the new approach is based on a discrepancy measure that is estimated from data, taking into account the loss function and hypothesis set used for learning. Essentially, discrepancy measures the degree of non-stationarity and can be used to guide the design of new learning algorithms. They showed results of such an algorithm that outperformed ARIMA on financial and weather data in most cases. A further extension of their method—combining batch and online learning for time-series prediction—also seems promising.
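
As a rough sketch of the central quantity (our paraphrase, not the authors' exact notation): for a loss $L$, hypothesis set $H$, and sample weights $q_1, \dots, q_T$, the discrepancy compares the loss at the target time with the weighted in-sample losses,

$$\mathrm{disc}(\mathbf{q}) = \sup_{h \in H}\left(\mathbb{E}\left[L(h, Z_{T+1}) \mid \mathbf{Z}_1^T\right] - \sum_{t=1}^{T} q_t\, \mathbb{E}\left[L(h, Z_t) \mid \mathbf{Z}_1^{t-1}\right]\right),$$

which is small when the process is close to stationary and, importantly, can be estimated from data.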

Large-Scale Optimization: Beyond Stochastic Gradient Descent and Convexity

This useful tutorial by Suvrit Sra (MIT) and Francis Bach (INRIA) summarized recent progress in optimization research with a focus on algorithms and their convergence guarantees [slides 1, slides 2]. The optimization problem in supervised machine learning is to find the parameters of a prediction function such that the average loss on a given set of data is minimized. Since computing the average loss entails summing over the data points, the tutorial focused on methods for finite sums with an optional regularization term. Stochastic gradient descent (SGD) and its many variants are often used as the optimization algorithm to solve such problems. But designing such algorithms and analyzing their convergence behavior is hard for the non-convex problems that arise in methods such as deep learning and matrix factorization. Nevertheless, recent work has extended the convergence results for convex finite sums to non-convex sums as well. Moreover, similar results are now available for the new breed of variance reduction methods, such as stochastic variance reduced gradient (SVRG), that converge much faster [Reddi et al., 2016a]. The large-scale aspect of the tutorial was relegated to the end with a brief discussion of asynchronous and distributed versions of the aforementioned stochastic algorithms.
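
For intuition about how variance reduction works, below is a minimal SVRG sketch (ours, under the usual finite-sum setup; names and signatures are illustrative): a full gradient at a periodic snapshot corrects the noisy per-sample gradient.

```python
import numpy as np

def svrg(grad_i, x0, n, step, epochs, inner_steps, seed=0):
    """Minimal SVRG sketch for minimizing f(x) = (1/n) * sum_i f_i(x).

    grad_i(x, i) must return the gradient of the i-th component at x.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(epochs):
        x_snap = x.copy()
        # Full gradient at the snapshot, computed once per epoch.
        full_grad = sum(grad_i(x_snap, i) for i in range(n)) / n
        for _ in range(inner_steps):
            i = rng.integers(n)
            # Unbiased, variance-reduced gradient estimate: its variance
            # shrinks as x approaches the snapshot x_snap.
            g = grad_i(x, i) - grad_i(x_snap, i) + full_grad
            x = x - step * g
    return x
```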

Generative Adversarial Networks (GANs)

In this tutorial, Ian Goodfellow (OpenAI) presented GANs, a really neat idea for using neural networks to generate samples from a distribution learned from data, with the twist that sampling is learned by playing an adversarial game rather than by MCMC. The learning process consists of a game between two adversaries: a generator network that attempts to produce realistic samples, and a discriminator network that attempts to identify whether samples originated from the training data or from the generative model. At the Nash equilibrium of this game, the generator network reproduces the data distribution exactly, and the discriminator network cannot distinguish generated samples from the training data. Both networks can be trained using stochastic gradient descent, with exact gradients computed by backpropagation. While most of the use cases presented involved generating samples of images, with a little more thought and experience it should be possible to find many more relevant applications of this cool technology.
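
The two-player game is captured by the minimax objective from the original paper [Goodfellow et al., 2014], where D is the discriminator and G the generator:

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))].$$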

Variational Inference: Foundations and Modern Methods

This tutorial by David Blei (Columbia), Shakir Mohamed (DeepMind), and Rajesh Ranganath (Princeton) covered variational inference (VI) methods for approximating probability distributions through optimization. These methods tend to be faster than alternatives such as Monte Carlo sampling, and they are making inroads beyond their stronghold of Bayesian networks into neural networks. Towards the end of the tutorial, the presenters described some of the newer advances in VI, such as Monte Carlo gradient estimation, black-box variational inference, stochastic approximation, and variational auto-encoders.
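
The optimization view fits in one line: VI posits a family of distributions q(z) and maximizes the evidence lower bound (ELBO), so approximate inference becomes an optimization problem:

$$\log p(x) \;\ge\; \mathbb{E}_{q(z)}\left[\log p(x, z)\right] - \mathbb{E}_{q(z)}\left[\log q(z)\right] = \mathrm{ELBO}(q).$$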

Deep Learning Papers

The following are a few selected papers on deep learning, covering topics in reinforcement learning, training techniques, generative modeling, and recurrent networks.

Value Iteration Networks for Deep Reinforcement Learning

This award-winning talk [Tamar et al., 2016] was based on a key observation: deep reinforcement learning networks are very similar to image recognition networks, i.e., they have convolution layers for feature extraction followed by fully connected layers that map features to action probabilities. Specifically, the expectation in the value iteration RL algorithm corresponds to convolution, the max to max-pooling, and the number of iterations to the number of layers. This means that value iteration can be implemented as a convolutional network, which can then be trained to plan actions for new tasks, such as navigating a new map. Learning to plan in this way leads to better generalization across similar tasks, but the network does require some careful engineering.
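
A simplified sketch of that correspondence (ours, not the authors' architecture; the grid-world setup and kernels are illustrative) shows how one value-iteration step becomes a convolution followed by a channel-wise max:

```python
import numpy as np
from scipy.signal import convolve2d

def value_iteration_as_conv(reward, kernels, iterations):
    """Simplified sketch: value iteration on a 2-D grid world as a
    convolution (the expectation over neighboring states) followed by
    a channel-wise max (the max over actions).

    reward:  2-D array of rewards over the grid
    kernels: one 2-D transition kernel per action (illustrative)
    """
    value = np.zeros_like(reward, dtype=float)
    for _ in range(iterations):          # iterations <-> network depth
        q = np.stack([
            convolve2d(reward + value, k, mode="same")  # expectation <-> conv
            for k in kernels
        ])
        value = q.max(axis=0)            # max over actions <-> max-pooling
    return value
```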

Deep Learning Without Poor Local Minima

As one of the few purely theoretical talks at the conference, this talk nevertheless tackled the important problem of characterizing the nature of local minima when optimizing deep neural networks [Kawaguchi, 2016]. In particular, the author proved the following conjecture about deep linear networks, first published in the late 1980s [Baldi and Hornik, 1989]: every local minimum is a global minimum, and every critical point that is not a global minimum is a saddle point. Note that this result is not obvious, since these networks have non-convex and non-concave loss functions despite having linear activation functions. Furthermore, under two unrealistic assumptions, he extended this result to deep nonlinear networks. Although the unrealistic assumptions keep this work from applying directly to practical deep nonlinear models, it does suggest that any bad local minima that arise are due to the nonlinear activations.

Alternatives to Batch Normalization

Two promising alternatives to the commonly used batch normalization (BN) technique [Ioffe and Szegedy, 2015] for speeding up the training of deep networks were presented at the conference. The first, called weight normalization (WN), reparameterizes each weight vector by decoupling its length and direction and then performs SGD on the length and direction parameters directly [Salimans and Kingma, 2016]. Unlike BN, WN doesn't use noisy estimates of mini-batch statistics; in fact, it avoids dependence on the mini-batch altogether. Therefore, it works well with noise-sensitive applications such as recurrent neural networks, deep reinforcement learning networks, and generative networks. WN is also simpler and less computationally expensive while providing much of BN's speed-up. However, it does require more careful parameter initialization.
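
The reparameterization itself is a one-liner: each weight vector w is expressed in terms of a direction vector v and a scalar length g,

$$\mathbf{w} = \frac{g}{\lVert \mathbf{v} \rVert}\, \mathbf{v},$$

and SGD is performed on g and v directly [Salimans and Kingma, 2016].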

The second alternative is called layer normalization (LN) [Ba et al., 2016b]. For normalizing the input to each neuron, BN uses the summed activations to that neuron for all the examples in a mini-batch, while LN uses the summed activations to all the same-layer neurons for a single training example. In this way, LN avoids the undesirable dependence of BN on mini-batches. Moreover, applying LN to RNNs becomes straightforward, since each layer is normalized separately at each time step. The paper also contains a nice table showing whether or not BN, WN, and LN are invariant under various input and parameter transformations. For example, they showed that LN is robust to both input and weight matrix scaling, just like BN. Experimental results on a number of benchmark tasks showed that LN outperforms both BN and WN significantly, especially in RNN models. Assuming that these results hold more broadly, LN may work as a good default replacement for BN.
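
A minimal sketch of the LN computation (our NumPy paraphrase of the paper's formula): the statistics are computed across the hidden units of one layer for a single example, so nothing depends on the mini-batch.

```python
import numpy as np

def layer_norm(a, gain, bias, eps=1e-5):
    """Layer normalization: normalize the summed inputs `a` using the
    mean and standard deviation over the hidden units of one layer for
    a single example (the last axis), then rescale and shift.
    """
    mu = a.mean(axis=-1, keepdims=True)
    sigma = a.std(axis=-1, keepdims=True)
    return gain * (a - mu) / (sigma + eps) + bias
```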

Learning Interpretable and Disentangled Representations using InfoGANs

The feature representations learned by regular GANs [Goodfellow et al., 2014] are not interpretable since the noise inputs to the generator do not correspond to any semantic features of the data. InfoGANs were designed to remedy this issue by decomposing the generator noise vector into an incompressible noise part and a latent code part that targets salient semantic features of the data [Chen et al., 2016]. During training, they use an additional objective to maximize the mutual information between the latent code and the generator distribution.
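
In the paper's formulation, the intractable mutual information term is replaced by a variational lower bound L_I(G, Q) computed with an auxiliary network Q, giving (roughly)

$$\min_{G, Q} \max_{D} \; V_{\text{GAN}}(D, G) - \lambda\, L_I(G, Q),$$

where V_GAN is the usual GAN objective and λ trades off the two terms [Chen et al., 2016].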

The authors showed some pretty impressive results on digits and faces datasets, obtaining interpretable and disentangled representations that are competitive with those produced by supervised methods. For example, the latent code for MNIST digits can be designed to comprise one ten-valued discrete random variable to represent the digit class and two continuous variables to represent the angle and stroke thickness of the digit. Varying these variables on a trained network varies the digit, its angle, and its stroke thickness, respectively, in the generated output. Besides being simply cool, these networks are likely to have applications in many unsupervised learning tasks.

Stochastic Depth for Training Very Deep Networks

Recently, very deep networks with hundreds of layers have become possible using techniques such as Highway Network [Srivastava et al., 2015] and Residual Network (ResNet) [He et al., 2016]. These techniques introduce skip connections between layers to improve the flow of activations and gradients that would otherwise diminish across a large number of layers. However, training gets slower with depth. Stochastic depth is a neat idea that makes it possible to train with shallow networks and test with deep networks, resulting in substantial reductions in training time [Huang et al., 2016]. For each training mini-batch, a random subset of layers is bypassed with the identity function. During testing, all layers are used with weights based on how often they were used during training. This method acts as a regularizer similar to dropout [Srivastava et al., 2014], but doesn’t lose effectiveness when used with batch normalization [Ioffe and Szegedy, 2015] on ResNets. On the CIFAR-10 image dataset, applying stochastic depth to a ResNet with more than 1200 layers successfully avoided previously observed overfitting problems and improved test error to produce a new record!
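The core mechanism is simple enough to sketch in a few lines. Below is our hedged paraphrase of one residual block under stochastic depth (`residual_fn` and the scaling convention are illustrative):

```python
def stochastic_depth_block(x, residual_fn, survival_prob, training, rng):
    """One residual block with stochastic depth.

    Training: the residual branch is dropped (identity only) with
    probability 1 - survival_prob. Testing: every block is kept, with
    the branch scaled by how often it survived during training.
    rng: any object with a .random() method, e.g., a NumPy Generator.
    """
    if training:
        if rng.random() < survival_prob:
            return x + residual_fn(x)    # block active this mini-batch
        return x                         # block bypassed via identity
    return x + survival_prob * residual_fn(x)
```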

Using Fast Weights to Attend to the Recent Past

This paper augments the standard neural network model, with its slowly changing weights and rapidly changing activations, with "fast weights" that change more slowly than activations but faster than the standard weights [Ba et al., 2016a], based on the observation that biological synapses have dynamics at many different time-scales. These fast weights can be used to store temporary memories of the recent past, and they provide a neurally plausible way of implementing the type of attention to the past that has recently proved very helpful in sequence-to-sequence models. By using fast weights, the authors claim that networks can avoid the need to store copies of neural activity patterns, as is done in LSTMs and other memory networks.
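
As we understood it, the fast weight matrix A is updated by a Hebbian-style outer-product rule with decay λ and learning rate η,

$$A(t) = \lambda\, A(t-1) + \eta\, \mathbf{h}(t)\, \mathbf{h}(t)^{\top},$$

so that A acts as a decaying associative memory of recent hidden-state vectors h(t), which then influences a brief inner loop that settles the next hidden state.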

Phased LSTM: Accelerating Recurrent Network Training for Long or Event-based Sequences

This thought-provoking paper extends the LSTM unit by adding a new time gate, controlled by a parametrized clock, that allows updates of the memory cell only during a small fraction of each oscillation cycle [Neil et al., 2016]. The advantage of such gating is that it allows LSTMs to work with multi-frequency and asynchronously sampled data. Furthermore, it makes it possible to train with very long roll-outs without experiencing the vanishing/exploding gradients problem.
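
Roughly (our paraphrase of the paper's equations), the gate is driven by the phase φ_t = ((t − s) mod τ)/τ of an oscillation with period τ and shift s, and it is open only for a fraction r_on of each cycle:

$$k_t = \begin{cases} \dfrac{2\phi_t}{r_{\text{on}}} & \text{if } \phi_t < \tfrac{1}{2} r_{\text{on}}, \\[4pt] 2 - \dfrac{2\phi_t}{r_{\text{on}}} & \text{if } \tfrac{1}{2} r_{\text{on}} \le \phi_t < r_{\text{on}}, \\[4pt] \alpha\, \phi_t & \text{otherwise,} \end{cases}$$

where the small leak α lets gradients flow even while the gate is closed.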

Sequential Neural Models with Stochastic Layers

This paper glues together a deterministic recurrent neural network and a Bayesian state-space model to form a stochastic, sequential neural generative model, which enables tracking the posterior distribution of the model's states [Fraccaro et al., 2016]. They then add another neural network to do structured variational inference on the Bayesian state-space model, the parameters of which they learn—you guessed it—through deep learning. By retaining both the nonlinear recursive structure of a recurrent neural network and averaging over the uncertainty of the state space, they showed improvements by a large margin over competing results on the Blizzard and TIMIT speech modeling data sets, while achieving comparable performance on polyphonic music modeling.

Optimization Papers

This year, in addition to work on faster and larger-scale convex optimization, there was a lot of work on large-scale, distributed, non-convex, and non-smooth optimization. Below is a very small subset of the many interesting papers presented.

A Multi-Batch L-BFGS Method for Machine Learning

This paper presents a batch method that uses a sizeable fraction of the training set at each iteration to facilitate parallelism, and that employs second-order information [Berahas et al., 2016]. In order to improve the learning process, they follow a multi-batch approach in which the batch changes at each iteration. This can cause difficulties because L-BFGS employs gradient differences to update the Hessian approximations, and when these gradients are computed using different data points the process can be unstable. This paper shows how to perform stable quasi-Newton updating in the multi-batch setting, illustrates the behavior of the algorithm on a distributed computing platform, and studies its convergence properties for both the convex and nonconvex cases.
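
The stabilizing trick, as we understood it, is to compute the curvature pairs on the overlap O_k = S_k ∩ S_{k+1} of consecutive batches rather than on either batch alone:

$$\mathbf{s}_k = \mathbf{x}_{k+1} - \mathbf{x}_k, \qquad \mathbf{y}_k = \nabla f_{O_k}(\mathbf{x}_{k+1}) - \nabla f_{O_k}(\mathbf{x}_k),$$

so the gradient difference y_k is evaluated on the same data points at both iterates, keeping the Hessian approximation consistent even as batches change.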

Proximal Stochastic Methods for Nonsmooth, Nonconvex Finite-Sum Optimization

This paper deals with stochastic algorithms for optimizing nonconvex, nonsmooth finite-sum problems, where the nonsmooth part is convex [Reddi et al., 2016b]. Unlike in the smooth case, it is not known whether proximal SGD with a constant minibatch converges to a stationary point. The paper develops stochastic algorithms that provably converge to a stationary point for constant minibatches and that converge faster than batch proximal gradient descent. This paper is highly recommended—it brings together concepts from variance reduction for convex optimization with a novel way of handling nonconvex and nonsmooth functions.

A Simple Practical Accelerated Method for Finite Sums

This paper presents a simple fast incremental gradient (FIG) method for optimizing finite sums (such as empirical risk), building on the recently introduced SAGA method [Defazio, 2016]. The method exhibits an accelerated convergence rate on strongly convex, smooth problems while having only one tuning parameter (a step size), and it is much simpler than other acceleration techniques (such as Nesterov's acceleration). Although the author did not analyze the non-smooth case, they claimed that the method exhibited good empirical speedup when applied to non-smooth problems. They provided no results for non-convex problems, either, but claimed that the method should work in that case, too.

NESTT: A Nonconvex Primal-Dual Splitting Method for Distributed and Stochastic Optimization

This paper presents a primal-dual algorithm with augmented Lagrangian-based splitting for solving, in a distributed fashion, large nonconvex problems whose objective consists of a sum of nonconvex smooth functions plus a nonsmooth regularizer [Hajinezhad et al., 2016]. To the best of our knowledge, nobody had previously tackled nonconvex plus nonsmooth optimization in the distributed setting, and their results seem quite good. Furthermore, they exposed a nice connection between their method and primal-only methods such as IAG, SAG, and SAGA.

Regularized Nonlinear Acceleration

This is a very enjoyable paper in which the authors introduce some cool concepts from control theory (minimal polynomial extrapolation) to not only accelerate but also stabilize generic optimization algorithms [Scieur et al., 2016]. The scheme computes estimates of the optimum via a nonlinear average of the iterates, where the weights are obtained from a simple linear system in an online fashion. It can be plugged into any standard optimization algorithm and, if their results are to be believed, provides much better convergence than most other acceleration schemes. Most promising, though, is its potential to reduce the instabilities that improperly selected learning rates and other hyper-parameters cause in the underlying algorithm.

Workshops

Below are highlights from the time series and Bayesian deep learning workshops, which were just two of the more than 50 all-day workshops at the conference.

Time-Series Workshop

Although deep learning was the prevailing theme at the conference, the time-series workshop stood out for its lack of emphasis on deep learning. This discord was also at the center of the first question to the discussion panel consisting of Yan Liu (USC), Andrew Nobel (UNC), Stephen Roberts (Oxford), and Mehryar Mohri (NYU). They seemed to agree that while deep learning is great for classification, other techniques like regressions, curves, and probabilities are better suited for time-series analysis. For example, we know how to encode invariances in time series using Bayesian non-parametrics, which we don’t know how to do with deep networks. Another relevant question to the panel was on strategies to deal with noisy time-series data. Again, they suggested using techniques that are simple and dumb, instead of deep and fancy.

However, the panel seemed unbalanced without any deep learning experts to argue the case for using deep learning. While deep learning doesn't have any killer applications yet in time-series analysis, a few successful results were still presented at the conference, especially using recurrent neural networks. For example, Shengdong Zhang (Bosch Research) presented deep learning approaches for predicting rare events from multivariate, heterogeneous time-series data [Zhang et al., 2016]. His LSTM-based architecture worked well on real-world applications, such as hard-disk failure prediction, without using the hand-engineered features typically required by other state-of-the-art methods. Another poster, from Kaspersky Lab, showed how an LSTM network can detect faults in industrial multivariate time-series data [Filonov et al., 2016].

Most of the work presented at the workshop used other methods. For instance, Inderjit Dhillon (UT Austin and Voleon) presented an interesting matrix factorization approach that uses temporal regularization to predict high-dimensional time series with a lot of missing and correlated data [Yu et al., 2016]. The temporal regularizer used an autoregressive time-series model to incorporate the structure of temporal dependencies. This approach was two orders of magnitude faster and generated better forecasts than traditional approaches on a Walmart e-commerce dataset. In another interesting work, Muhammad Amjad (MIT) learned classifiers for trading bitcoins using a simple buy-sell-hold strategy based on price deltas and hand-designed features. Although the first differences of prices were stationary and mixing, his classification algorithms outperformed classical ARIMA models.

Bayesian Deep Learning Workshop

Although Bayesian and probabilistic methods have been used with neural networks since the 1990s, they are more heavily influencing the way deep learning is currently evolving. As Zoubin Ghahramani (Cambridge) noted in his informative history talk, Bayesian neural networks (BNNs) handle both parameter and structure uncertainties effectively, addressing a fundamental limitation of traditional deep learning. Being able to represent uncertainty is particularly useful in applications such as forecasting and decision-making. The disadvantage of BNNs is often their higher computational cost.

Finale Doshi-Velez (Harvard) presented model-based reinforcement learning as a new application for BNNs because they can represent both parameter and environment uncertainties [Depeweg et al., 2016]. The BNNs were trained by minimizing the α-divergence with α = 0.5, which, according to another nice presentation at the workshop by Jose Miguel Hernandez-Lobato (Cambridge) [Hernandez-Lobato et al., 2016], performs better than variational Bayes (VB) and expectation propagation (EP) in regression and classification problems. He also pointed out that α-divergence minimization can be implemented more efficiently, with better convergence guarantees, than EP, and that it allows one to explore the value of α best suited to the problem.
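
For reference, one common parameterization of the α-divergence (Amari's; we state it for intuition, not necessarily in the authors' exact form) is

$$D_\alpha(p \,\|\, q) = \frac{1}{\alpha(1-\alpha)}\left(1 - \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx\right),$$

which recovers the KL divergences used by VB and EP in the limits of α, so α = 0.5 sits between the two.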

The workshop also included two independent papers that arrived at the same idea for back-propagating gradients through stochastic categorical variables in neural networks [Jang et al., 2016; Maddison et al., 2016]. This was previously possible for stochastic continuous variables through the reparameterization trick, which decomposes each stochastic variable into a differentiable deterministic function and random noise. In order to apply the reparameterization trick to categorical latent variables, the authors introduced the Concrete distribution, a.k.a. the Gumbel-Softmax distribution, which is continuous over the simplex but can approximate samples from a categorical distribution. Using this technique, they reported an impressive 2x speedup when training on MNIST, without compromising performance compared to previous methods.
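
Drawing one such sample is short enough to sketch (our NumPy paraphrase; variable names are illustrative):

```python
import numpy as np

def gumbel_softmax_sample(logits, temperature, seed=0):
    """Draw one Concrete / Gumbel-Softmax sample from 1-D logits.

    The sample lies on the probability simplex, approaches a one-hot
    categorical sample as temperature -> 0, and is differentiable with
    respect to the logits at any positive temperature.
    """
    rng = np.random.default_rng(seed)
    u = rng.uniform(low=1e-9, high=1.0, size=logits.shape)
    gumbel = -np.log(-np.log(u))             # Gumbel(0, 1) noise
    y = (logits + gumbel) / temperature
    e = np.exp(y - y.max())                  # numerically stable softmax
    return e / e.sum()
```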

Conclusions

The conference reaffirmed the current focus on deep learning, which is driving research in a number of related fields and has created active interdisciplinary areas such as deep reinforcement learning and Bayesian deep learning. Improvements are also being made to core deep learning architectures, algorithms, and techniques, resulting in incremental advances to the state of the art in a number of applications. At the same time, advances in optimization methods, such as those based on variance reduction, are making it possible to apply sophisticated machine learning algorithms to more complex problems and at a larger scale. Progress in these areas is not only making a wider range of applications possible, but it is also generating interest in tackling the grand challenge of Artificial General Intelligence.

References

[Ba et al., 2016a]   Ba, J., Hinton, G. E., Mnih, V., Leibo, J. Z., and Ionescu, C. (2016a). Using Fast Weights to Attend to the Recent Past. In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R., editors, Advances in Neural Information Processing Systems 29, pages 4331–4339. Curran Associates, Inc.

[Ba et al., 2016b]   Ba, J. L., Kiros, J. R., and Hinton, G. E. (2016b). Layer Normalization. In NIPS 2016 Deep Learning Symposium, Barcelona, Spain.

[Baldi and Hornik, 1989]   Baldi, P. and Hornik, K. (1989). Neural Networks and Principal Component Analysis: Learning from Examples Without Local Minima. Neural Netw., 2(1):53–58.

[Berahas et al., 2016]   Berahas, A. S., Nocedal, J., and Takac, M. (2016). A Multi-Batch L-BFGS Method for Machine Learning. In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R., editors, Advances in Neural Information Processing Systems 29, pages 1055–1063. Curran Associates, Inc.

[Chen et al., 2016]   Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I., and Abbeel, P. (2016). InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets. In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R., editors, Advances in Neural Information Processing Systems 29, pages 2172–2180. Curran Associates, Inc.

[Defazio, 2016]   Defazio, A. (2016). A Simple Practical Accelerated Method for Finite Sums. In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R., editors, Advances in Neural Information Processing Systems 29, pages 676–684. Curran Associates, Inc.

[Depeweg et al., 2016]    Depeweg, S., Hernández-Lobato, J. M., Doshi-Velez, F., and Udluft, S. (2016). Learning and Policy Search in Stochastic Dynamical Systems with Bayesian Neural Networks. arXiv:1605.07127 [cs, stat].

[Filonov et al., 2016]   Filonov, P., Lavrentyev, A., and Vorontsov, A. (2016). Multivariate Industrial Time Series with Cyber-Attack Simulation: Fault Detection Using an LSTM-based Predictive Data Model. arXiv:1612.06676 [cs, stat].

[Fraccaro et al., 2016]   Fraccaro, M., Sønderby, S. K., Paquet, U., and Winther, O. (2016). Sequential Neural Models with Stochastic Layers. In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R., editors, Advances in Neural Information Processing Systems 29, pages 2199–2207. Curran Associates, Inc.

[Goodfellow et al., 2014]   Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative Adversarial Nets. In Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N. D., and Weinberger, K. Q., editors, Advances in Neural Information Processing Systems 27, pages 2672–2680. Curran Associates, Inc.

[Hajinezhad et al., 2016]   Hajinezhad, D., Hong, M., Zhao, T., and Wang, Z. (2016). NESTT: A Nonconvex Primal-Dual Splitting Method for Distributed and Stochastic Optimization. In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R., editors, Advances in Neural Information Processing Systems 29, pages 3207–3215. Curran Associates, Inc.

[He et al., 2016]   He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep Residual Learning for Image Recognition. arXiv:1512.03385 [cs].

[Hernandez-Lobato et al., 2016]   Hernandez-Lobato, J. M., Li, Y., Rowland, M., Bui, T., Hernandez-Lobato, D., and Turner, R. (2016). Black-Box Alpha Divergence Minimization. In Proceedings of The 33rd International Conference on Machine Learning, pages 1511–1520.

[Huang et al., 2016]   Huang, G., Sun, Y., Liu, Z., Sedra, D., and Weinberger, K. Q. (2016). Deep Networks with Stochastic Depth. In Leibe, B., Matas, J., Sebe, N., and Welling, M., editors, Computer Vision – ECCV 2016, Lecture Notes in Computer Science, pages 646–661. Springer International Publishing.

[Ioffe and Szegedy, 2015]   Ioffe, S. and Szegedy, C. (2015). Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In Proceedings of The 32nd International Conference on Machine Learning, pages 448–456.

[Jang et al., 2016]   Jang, E., Gu, S., and Poole, B. (2016). Categorical Reparameterization with Gumbel-Softmax. arXiv:1611.01144 [cs, stat].

[Kawaguchi, 2016]   Kawaguchi, K. (2016). Deep Learning without Poor Local Minima. In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R., editors, Advances in Neural Information Processing Systems 29, pages 586–594. Curran Associates, Inc.

[Maddison et al., 2016]   Maddison, C. J., Mnih, A., and Teh, Y. W. (2016). The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables. arXiv:1611.00712 [cs, stat].

[Neil et al., 2016]   Neil, D., Pfeiffer, M., and Liu, S.-C. (2016). Phased LSTM: Accelerating Recurrent Network Training for Long or Event-based Sequences. In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R., editors, Advances in Neural Information Processing Systems 29, pages 3882–3890. Curran Associates, Inc.

[Reddi et al., 2016a]   Reddi, S. J., Hefny, A., Sra, S., Poczos, B., and Smola, A. (2016a). Stochastic Variance Reduction for Nonconvex Optimization. In Proceedings of The 33rd International Conference on Machine Learning, pages 314–323.

[Reddi et al., 2016b]   Reddi, S. J., Sra, S., Poczos, B., and Smola, A. J. (2016b). Proximal Stochastic Methods for Nonsmooth Nonconvex Finite-Sum Optimization. In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R., editors, Advances in Neural Information Processing Systems 29, pages 1145–1153. Curran Associates, Inc.

[Salimans and Kingma, 2016]   Salimans, T. and Kingma, D. P. (2016). Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks. In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R., editors, Advances in Neural Information Processing Systems 29, pages 901–909. Curran Associates, Inc.

[Scieur et al., 2016]   Scieur, D., d'Aspremont, A., and Bach, F. (2016). Regularized Nonlinear Acceleration. In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R., editors, Advances in Neural Information Processing Systems 29, pages 712–720. Curran Associates, Inc.

[Srivastava et al., 2014]   Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. (2014). Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research, 15:1929–1958.

[Srivastava et al., 2015]   Srivastava, R. K., Greff, K., and Schmidhuber, J. (2015). Training Very Deep Networks. In Cortes, C., Lawrence, N. D., Lee, D. D., Sugiyama, M., and Garnett, R., editors, Advances in Neural Information Processing Systems 28, pages 2377–2385. Curran Associates, Inc.

[Tamar et al., 2016]   Tamar, A., Levine, S., Abbeel, P., Wu, Y., and Thomas, G. (2016). Value Iteration Networks. In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R., editors, Advances in Neural Information Processing Systems 29, pages 2146–2154. Curran Associates, Inc.

[Yu et al., 2016]   Yu, H.-F., Rao, N., and Dhillon, I. S. (2016). Temporal Regularized Matrix Factorization for High-dimensional Time Series Prediction. In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R., editors, Advances in Neural Information Processing Systems 29, pages 847–855. Curran Associates, Inc.

[Zhang et al., 2016]   Zhang, S., Bahrampour, S., Ramakrishnan, N., and Shah, M. (2016). Deep Symbolic Representation Learning for Heterogeneous Time-series Classification. ResearchGate.

This article is not an endorsement by Two Sigma of the papers discussed, their viewpoints or the companies discussed. The views expressed above reflect those of the authors and are not necessarily the views of Two Sigma Investments, LP or any of its affiliates (collectively, “Two Sigma”). The information presented above is only for informational and educational purposes and is not an offer to sell or the solicitation of an offer to buy any securities or other instruments. Additionally, the above information is not intended to provide, and should not be relied upon for investment, accounting, legal or tax advice. Two Sigma makes no representations, express or implied, regarding the accuracy or completeness of this information, and the reader accepts all risks in relying on the above information for any purpose whatsoever.