By organizing multiple artificial neurons into networks, I'll describe how to construct and train neural networks using the most fundamental and important algorithm in all of deep learning: backpropagation of errors.
Perceptrons are a useful pedagogical tool but have a number of limitations, particularly in how they are trained. In this post, I'll address several shortcomings of perceptrons and promote them into more modern artificial neurons.
Over the past decade or so, neural networks have shown amazing success across a wide variety of tasks. In this post, I'll introduce the grandfather of modern neural networks: the perceptron.
I'll introduce the concept of Lie groups and show how they can be useful for working with constrained quantities like rotations; we'll also apply them to the problem of accurate robotic state estimation.
I give an overview of the fundamental deep reinforcement learning algorithms that serve as the basis for many of the more advanced techniques used in practice and research.