Activation Functions
Why we need non-linearity and a deep dive into Sigmoid, Tanh, ReLU, and Softmax.
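
A minimal NumPy sketch of the four activations named above (illustrative only; the sample input z is arbitrary):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))      # squashes to (0, 1)

def tanh(z):
    return np.tanh(z)                    # zero-centered, squashes to (-1, 1)

def relu(z):
    return np.maximum(0.0, z)            # passes positives, zeroes out negatives

def softmax(z):
    e = np.exp(z - np.max(z))            # subtract max for numerical stability
    return e / e.sum()                   # outputs sum to 1: a probability distribution

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z), tanh(z), relu(z), softmax(z))
```
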
Autoencoders
Neural network-based dimensionality reduction: the Encoder-Decoder architecture and bottleneck representations.
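
An illustrative PyTorch sketch of the bottleneck idea (the 784/128/32 layer sizes are assumptions, e.g. flattened 28x28 images):

```python
import torch.nn as nn

# Compress 784-d inputs through a 32-unit bottleneck, then reconstruct.
encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32))
decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 784))
autoencoder = nn.Sequential(encoder, decoder)

# Trained with a reconstruction loss, e.g. nn.MSELoss()(autoencoder(x), x),
# the 32-d bottleneck is forced to keep only the most informative structure.
```
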
Backpropagation
Demystifying the heart of neural network training: the Chain Rule, Gradients, and Error Attribution.
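
A worked single-neuron example of the chain rule (all values are arbitrary; sigmoid and squared error are chosen for simplicity):

```python
import numpy as np

# One neuron, one sample: z = w*x + b, y_hat = sigmoid(z), L = (y_hat - y)^2
x, y = 1.5, 0.0
w, b = 0.8, 0.1

z = w * x + b
y_hat = 1.0 / (1.0 + np.exp(-z))
loss = (y_hat - y) ** 2

# Chain rule: dL/dw = dL/dy_hat * dy_hat/dz * dz/dw
dL_dyhat = 2 * (y_hat - y)
dyhat_dz = y_hat * (1 - y_hat)        # derivative of the sigmoid
dz_dw, dz_db = x, 1.0

dL_dw = dL_dyhat * dyhat_dz * dz_dw   # error attributed to the weight
dL_db = dL_dyhat * dyhat_dz * dz_db   # error attributed to the bias
print(dL_dw, dL_db)
```
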
Deep Q-Networks
Scaling Reinforcement Learning with Deep Learning using Experience Replay and Target Networks.
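
A minimal sketch of an experience replay buffer (the capacity and batch size are placeholder values):

```python
import random
from collections import deque

class ReplayBuffer:
    """Store past transitions and sample them uniformly at random,
    breaking the correlation between consecutive experiences."""
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)   # oldest transitions fall off the end

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        return random.sample(list(self.buffer), batch_size)

# The target network is a periodically-synced copy of the online network
# (e.g. target_net.load_state_dict(online_net.state_dict()) every N steps),
# so the bootstrapped Q-targets stay stable between syncs.
```
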
Forward Propagation
Understanding how data flows from the input layer to the output layer to generate a prediction.
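
A bare-bones forward pass in NumPy (the layer sizes are arbitrary):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# Each layer multiplies by its weights, adds its bias, and applies an
# activation; the last layer's output is the prediction.
x = np.random.randn(4)                    # input layer: 4 features
W1, b1 = np.random.randn(8, 4), np.zeros(8)
W2, b2 = np.random.randn(1, 8), np.zeros(1)

h = relu(W1 @ x + b1)                     # hidden layer activations
y_hat = W2 @ h + b2                       # output layer: the prediction
print(y_hat)
```
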
Keras
Understanding the high-level API that makes building neural networks as easy as stacking LEGO blocks.
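
An illustrative Keras snippet of that "stacking" style (the layer sizes and the 20-feature input are assumptions):

```python
import tensorflow as tf

# Stack layers like blocks: each list entry adds one layer to the model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),                      # 20 input features
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10-class output
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```
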
Loss Functions
Understanding how models quantify mistakes using MSE, Binary Cross-Entropy, and Categorical Cross-Entropy.
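
Reference NumPy implementations of the three losses (the eps clipping simply guards against log(0)):

```python
import numpy as np

def mse(y, y_hat):
    return np.mean((y - y_hat) ** 2)                 # regression

def binary_cross_entropy(y, p, eps=1e-12):
    p = np.clip(p, eps, 1 - eps)                     # avoid log(0)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def categorical_cross_entropy(y_onehot, p, eps=1e-12):
    p = np.clip(p, eps, 1.0)
    return -np.mean(np.sum(y_onehot * np.log(p), axis=-1))

print(mse(np.array([1.0, 2.0]), np.array([1.1, 1.9])))
print(binary_cross_entropy(np.array([1, 0]), np.array([0.9, 0.2])))
```
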
Multi-Layer Perceptrons
Exploring Feedforward Neural Networks, Hidden Layers, and how stacking neurons solves non-linear problems.
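
A hand-wired illustration of why a hidden layer matters: this tiny network computes XOR, which no single neuron can (the weights are chosen by hand, not learned):

```python
def step(z):
    return 1 if z >= 0 else 0

def xor_mlp(x1, x2):
    """One hidden layer suffices: h1 fires for OR, h2 fires for AND,
    and the output fires for OR-but-not-AND, i.e. XOR."""
    h1 = step(x1 + x2 - 0.5)      # OR
    h2 = step(x1 + x2 - 1.5)      # AND
    return step(h1 - h2 - 0.5)    # XOR

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor_mlp(a, b))    # prints 0, 1, 1, 0
```
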
PyTorch
Exploring Meta's (formerly Facebook's) PyTorch library, dynamic computational graphs, and its Pythonic approach to deep learning.
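
A minimal autograd example showing the graph being built dynamically as operations run:

```python
import torch

x = torch.tensor(2.0, requires_grad=True)
y = x ** 3 + 4 * x          # the graph is built on the fly as ops execute
y.backward()                # dy/dx = 3x^2 + 4 = 16 at x = 2
print(x.grad)               # tensor(16.)
```
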
Self-Supervised Learning
How AI learns by predicting missing parts of its own input, powering Large Language Models and Computer Vision.
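
An illustrative sketch of the masked-prediction setup (mask_tokens is a hypothetical helper; the 15% mask rate follows common practice):

```python
import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]"):
    """Build a self-supervised (input, target) pair: the data labels itself."""
    inputs, targets = [], []
    for tok in tokens:
        if random.random() < mask_rate:
            inputs.append(mask_token)   # hide the token from the model...
            targets.append(tok)         # ...and make it the prediction target
        else:
            inputs.append(tok)
            targets.append(None)        # no loss at unmasked positions
    return inputs, targets

x, y = mask_tokens("the cat sat on the mat".split())
print(x, y)
```
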
TensorFlow
An introduction to Google's TensorFlow ecosystem, Keras API, and the dataflow graph architecture.
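
A small illustration of dataflow graph tracing with tf.function (the shapes are arbitrary):

```python
import tensorflow as tf

@tf.function                      # traces the Python function into a dataflow graph
def affine(x, w, b):
    return tf.matmul(x, w) + b

x = tf.random.normal((1, 3))
w = tf.random.normal((3, 2))
b = tf.zeros((2,))
print(affine(x, w, b))            # first call traces the graph, then runs it
```
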
Tensors
Defining tensors as generalized matrices, their rank (order), and their role in representing complex data such as images and video in Deep Learning frameworks like PyTorch and TensorFlow.
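
A quick NumPy illustration of rank, from scalars up to video (the shapes are conventional examples):

```python
import numpy as np

scalar = np.array(5.0)                  # rank 0: a single number
vector = np.array([1.0, 2.0, 3.0])      # rank 1: a 1-D array
matrix = np.ones((3, 3))                # rank 2: rows x columns
image  = np.zeros((224, 224, 3))        # rank 3: height x width x RGB channels
video  = np.zeros((16, 224, 224, 3))    # rank 4: frames x height x width x channels

for t in (scalar, vector, matrix, image, video):
    print(t.ndim, t.shape)              # ndim is the tensor's rank (order)
```
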
The Perceptron
Understanding the building block of Deep Learning: Weights, Bias, and Step Functions.
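
A minimal perceptron sketch (the AND weights are hand-picked for illustration, not learned):

```python
import numpy as np

def step(z):
    """Heaviside step function: fire (1) if the weighted sum crosses 0."""
    return 1 if z >= 0 else 0

def perceptron(x, w, b):
    return step(np.dot(x, w) + b)       # weighted sum + bias, then threshold

# Hand-set weights implementing logical AND as a sanity check.
w, b = np.array([1.0, 1.0]), -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, perceptron(np.array(x), w, b))   # fires only for (1, 1)
```
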