Activation Functions
Why we need non-linearity and a deep dive into Sigmoid, Tanh, ReLU, and Softmax.
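A minimal NumPy sketch of the four activations named above (the example values are assumptions for illustration); the softmax subtracts the max logit before exponentiating, a standard numerical-stability trick:

```python
import numpy as np

def sigmoid(x):
    # Squashes inputs into (0, 1); common for binary outputs
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Zero-centered squashing into (-1, 1)
    return np.tanh(x)

def relu(x):
    # Passes positives through unchanged, zeroes out negatives
    return np.maximum(0.0, x)

def softmax(x):
    # Turns a vector of logits into a probability distribution;
    # subtracting the max keeps the exponentials numerically stable
    exps = np.exp(x - np.max(x))
    return exps / np.sum(exps)

z = np.array([-2.0, 0.0, 3.0])
print(sigmoid(z), relu(z), softmax(z))
```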
Demystifying the heart of neural network training: The Chain Rule, Gradients, and Error Attribution.
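As a toy illustration of the idea (an assumed example, not from the lesson): for the composite function f(x) = (3x + 1)², the chain rule attributes the output's sensitivity through each nested step, and a finite-difference check confirms the result:

```python
def f(x):
    # Composite function: outer u**2, inner u = 3x + 1
    return (3 * x + 1) ** 2

def f_prime(x):
    # Chain rule: d/dx u**2 = 2u * du/dx, with du/dx = 3
    return 2 * (3 * x + 1) * 3

x, h = 2.0, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)
print(f_prime(x), numeric)  # both approximately 42.0
```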
Mastering the Chain Rule, the fundamental calculus tool for differentiating composite functions, and its direct application in the Backpropagation algorithm for training neural networks.
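A hypothetical end-to-end sketch of the chain rule driving one training step on a one-hidden-layer network; the shapes and learning rate are assumptions, and each backward line is annotated with the chain-rule factor it applies:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))            # 4 samples, 3 features
y = rng.normal(size=(4, 1))            # regression targets
W1, b1 = rng.normal(size=(3, 5)), np.zeros(5)
W2, b2 = rng.normal(size=(5, 1)), np.zeros(1)

# Forward pass
z1 = x @ W1 + b1
h = np.tanh(z1)
y_hat = h @ W2 + b2
loss = np.mean((y_hat - y) ** 2)
print(f"loss before step: {loss:.3f}")

# Backward pass: each line is one application of the chain rule
d_yhat = 2 * (y_hat - y) / len(y)      # dL/dy_hat for the MSE loss
dW2 = h.T @ d_yhat                     # dL/dW2 = dy_hat/dW2 * dL/dy_hat
db2 = d_yhat.sum(axis=0)
d_h = d_yhat @ W2.T                    # propagate the error to the hidden layer
d_z1 = d_h * (1 - np.tanh(z1) ** 2)    # through tanh: dh/dz1 = 1 - tanh(z1)^2
dW1 = x.T @ d_z1
db1 = d_z1.sum(axis=0)

# One gradient-descent update
lr = 0.1
W1 -= lr * dW1; b1 -= lr * db1
W2 -= lr * dW2; b2 -= lr * db2
```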
Scaling Reinforcement Learning with Deep Learning using Experience Replay and Target Networks.
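A framework-agnostic sketch of the two mechanisms named above (the pairing popularized by DQN); the buffer capacity, batch size, and sync interval are illustrative assumptions:

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores past transitions so training can sample them at random,
    breaking the correlation between consecutive experiences."""
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        return random.sample(self.buffer, batch_size)

# Target network: a lagged copy of the online network's weights,
# synced every N steps so the bootstrapped Q-targets stay stable.
SYNC_EVERY = 1_000

def maybe_sync(step, online_weights, target_weights):
    # Weights here are plain dicts for illustration
    if step % SYNC_EVERY == 0:
        target_weights.clear()
        target_weights.update(online_weights)  # hard copy
```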
Understanding how data flows from the input layer to the output layer to generate a prediction.
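A minimal sketch of that flow, with data passing layer by layer from input to prediction; the layer sizes are assumed for illustration:

```python
import numpy as np

def forward(x, layers):
    """Propagate an input through a list of (W, b, activation) layers."""
    a = x
    for W, b, act in layers:
        a = act(a @ W + b)   # affine transform, then non-linearity
    return a

relu = lambda z: np.maximum(0.0, z)
identity = lambda z: z

rng = np.random.default_rng(1)
layers = [
    (rng.normal(size=(4, 8)), np.zeros(8), relu),      # input -> hidden
    (rng.normal(size=(8, 1)), np.zeros(1), identity),  # hidden -> output
]
prediction = forward(rng.normal(size=(1, 4)), layers)
print(prediction)
```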
Understanding the high-level API that makes building neural networks as easy as stacking LEGO blocks.
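Assuming the "LEGO blocks" API refers to Keras's Sequential model (the blurb does not name it), a minimal sketch of the stacking style:

```python
from tensorflow import keras

# Each layer stacks directly on the previous one, like a LEGO brick
model = keras.Sequential([
    keras.layers.Input(shape=(784,)),               # flattened 28x28 image
    keras.layers.Dense(64, activation="relu"),      # hidden layer
    keras.layers.Dense(10, activation="softmax"),   # 10-class output
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```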
Understanding how models quantify mistakes using MSE, Binary Cross-Entropy, and Categorical Cross-Entropy.
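Minimal NumPy versions of the three losses named above; the epsilon clip is a standard guard against log(0) and an implementation choice here, not part of the definitions:

```python
import numpy as np

EPS = 1e-12  # guard against log(0)

def mse(y_true, y_pred):
    # Mean Squared Error: average squared distance, for regression
    return np.mean((y_true - y_pred) ** 2)

def binary_cross_entropy(y_true, y_pred):
    # For two-class problems with sigmoid outputs in (0, 1)
    p = np.clip(y_pred, EPS, 1 - EPS)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

def categorical_cross_entropy(y_true, y_pred):
    # For one-hot targets with softmax outputs summing to 1 per row
    p = np.clip(y_pred, EPS, 1.0)
    return -np.mean(np.sum(y_true * np.log(p), axis=-1))

print(mse(np.array([1.0, 2.0]), np.array([1.5, 1.5])))            # 0.25
print(binary_cross_entropy(np.array([1, 0]), np.array([0.9, 0.2])))
```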
Exploring Feedforward Neural Networks, Hidden Layers, and how stacking neurons solves non-linear problems.
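XOR is the classic non-linear problem a single neuron cannot solve; a hand-weighted sketch (the weights are assumed, chosen so one hidden neuron computes OR, the other NAND, and the output neuron ANDs them):

```python
import numpy as np

step = lambda z: (z >= 0).astype(int)  # hard threshold activation

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# Hidden layer: column 0 computes OR, column 1 computes NAND
W1 = np.array([[1, -1],
               [1, -1]])
b1 = np.array([-0.5, 1.5])
h = step(X @ W1 + b1)

# Output neuron ANDs the two hidden units, yielding XOR
W2 = np.array([1, 1])
b2 = -1.5
y = step(h @ W2 + b2)
print(y)  # [0 1 1 0]
```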
Exploring Meta's (formerly Facebook's) PyTorch library, dynamic computational graphs, and its Pythonic approach to deep learning.
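A minimal sketch of PyTorch's define-by-run behavior: the graph is built as ordinary Python executes, so even data-dependent control flow is differentiable:

```python
import torch

x = torch.tensor(2.0, requires_grad=True)

# The graph is built on the fly; branching is just Python
if x > 0:
    y = x ** 3
else:
    y = -x

y.backward()   # backpropagate through whichever path actually executed
print(x.grad)  # dy/dx = 3 * x**2 = 12.0
```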
An introduction to Google's TensorFlow ecosystem, Keras API, and the dataflow graph architecture.
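A short sketch of the graph side of that architecture: tf.function traces the Python body once into a dataflow graph that subsequent calls reuse (the function itself is an assumed example):

```python
import tensorflow as tf

@tf.function  # traces the Python body into a reusable dataflow graph
def affine(x, w, b):
    return tf.matmul(x, w) + b

x = tf.constant([[1.0, 2.0]])
w = tf.constant([[3.0], [4.0]])
b = tf.constant([0.5])
print(affine(x, w, b))  # tf.Tensor([[11.5]], ...)
```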
Understanding the Jacobian matrix, its role in vector-valued functions, and its vital importance in backpropagation and modern deep learning frameworks.
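A sketch computing a Jacobian with PyTorch's autograd for a small vector-valued function; the function is an assumed example, and the analytic result is given in the comment for comparison:

```python
import torch
from torch.autograd.functional import jacobian

def f(v):
    # Vector-valued function from R^2 to R^2
    x, y = v[0], v[1]
    return torch.stack([x ** 2 * y, 5 * x + torch.sin(y)])

J = jacobian(f, torch.tensor([1.0, 2.0]))
print(J)
# Analytic Jacobian: [[2xy, x^2], [5, cos(y)]] -> [[4., 1.], [5., cos(2)]]
```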
Understanding the Perceptron, the building block of Deep Learning: Weights, Bias, and Step Functions.
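A minimal sketch of that neuron: a weighted sum of inputs plus a bias, passed through a step function; the weights here are hand-set to implement a logical AND gate as an assumed example:

```python
import numpy as np

def perceptron(x, w, b):
    # Weighted sum plus bias, then a hard threshold (step function)
    return 1 if np.dot(w, x) + b >= 0 else 0

# Hand-set weights implementing logical AND
w, b = np.array([1.0, 1.0]), -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", perceptron(np.array(x), w, b))  # fires only on (1, 1)
```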