Posts

  • Quantum Computing - Part 1: Basic Quantum Circuits

    Quantum computing is a growing field of study that marries quantum physics with computer science, and it's already showing promising results in speeding up specific kinds of computational problems. In this post, we'll begin our journey into the quantum realm!
  • Language Modeling - Part 4: Transformers

    Going beyond recurrent neural networks, the transformer revolutionized the language modeling field and became the core of many state-of-the-art large language models (LLMs) like ChatGPT, Claude, and Llama. In this post, we'll demystify the transformer at the core of these models.
  • Language Modeling - Part 3: Recurrent Neural Networks

    The advent of deep neural networks and GPUs changed the language modeling landscape. In this post, we'll apply recurrent neural networks to the task of language modeling!
  • Anatomy of a Good-enough Modern CMake Project for C++ Libraries

    CMake is the most common meta-build system used for building C/C++ libraries and applications. In this post, I'll describe the anatomy of a good-enough C++ library project structure and a good-enough way to build it using CMake.
  • Neural Nets - Part 3: Artificial Neural Networks and Backpropagation

    I'll describe how to organize multiple artificial neurons into neural networks and train them using the most fundamental and important algorithm in all of deep learning: backpropagation of errors.
  • Neural Nets - Part 2: From Perceptrons to Modern Artificial Neurons

    Perceptrons are a useful pedagogical tool but have a number of limitations, particularly in training them. In this post, I'll address several issues with perceptrons and promote them into more modern artificial neurons.
  • Neural Nets - Part 1: Perceptrons

    Over the past decade or so, neural networks have shown amazing success across a wide variety of tasks. In this post, I'll introduce the grandfather of modern neural networks: the perceptron.
  • Language Modeling - Part 2: Embeddings

    Moving beyond n-grams, embeddings let us better represent the meaning of words and quantify their relationships to other words.
  • Language Modeling - Part 1: n-gram Models

    We'll begin our language modeling journey with classical n-gram language models.
  • Lie Groups - Part 2

    I'll continue the discussion of Lie Groups into the realm of calculus on Lie Groups; we'll finish by applying them to robotic state estimation.
  • Lie Groups - Part 1

    I'll introduce the concept of Lie Groups and how they can be useful for working with constrained spaces like rotations; we'll also apply them to the problem of accurate robotic state estimation.
  • Manifolds - Part 3

    In the last part, I'll show how we can define curvature on a manifold by extending calculus to work on manifolds with the covariant derivative!
  • Manifolds - Part 2

    In the second part, I'll construct a manifold from scratch and redefine vectors, dual vectors, and tensors on a manifold.
  • Manifolds - Part 1

    As the first in a multi-part series, I'll introduce manifolds and discuss how vectors, dual vectors, and tensors work in a flat, Euclidean space.
  • Particle Filters for Robotic State Estimation

    Going beyond EKFs, I'll motivate particle filters as a more advanced state estimator that can compensate for their limitations.
  • Extended Kalman Filtering for Robotic State Estimation

    I discuss a fundamental building block for state estimation for a robot: the extended Kalman filter (EKF).
  • Deep Reinforcement Learning: Policy-based Methods

    I discuss state-of-the-art deep RL techniques that use policy-based methods.
  • Deep Reinforcement Learning: Value-based Methods

    I overview some of the fundamental deep reinforcement learning algorithms that form the basis of many of the more advanced techniques used in practice and research.
  • Reinforcement Learning

    I describe the fundamental algorithms and techniques used in reinforcement learning.
  • Undergraduate Research

    I chronicle some lessons learned from my 2.5 years as an undergrad student researcher.
  • Sequence-to-Sequence Models

    I discuss the magic behind attention-based sequence-to-sequence models, the very same models used in tasks such as machine translation.
  • Undergraduate Tips and Tricks

    After finishing my undergrad, I'll share some pointers (pun absolutely intended) that I've learned over the past few years.
  • Restricted Boltzmann Machines

    I'll explain one of the more difficult unsupervised neural networks in detail using examples and intuition.
  • Understanding Backpropagation

    I'll discuss the backpropagation algorithm on various levels using concrete examples.
  • Depth Perception: The Next Big Thing in Computer Vision

    Devices like the HoloLens and Tango have depth-sensing capabilities, allowing for a completely different level of augmented reality.