📄️ Perceptron
Understanding the building block of Deep Learning: Weights, Bias, and the Step Function.
📄️ MLP
Exploring Feedforward Neural Networks, Hidden Layers, and how stacking layers of neurons solves non-linear problems.
📄️ Forward Propagation
Understanding how data flows from the input layer to the output layer to generate a prediction.
📄️ Backpropagation
Demystifying the heart of neural network training: The Chain Rule, Gradients, and Error Attribution.
📄️ Activation Functions
Why we need non-linearity, with a deep dive into Sigmoid, Tanh, ReLU, and Softmax.
📄️ Loss Functions
Understanding how models quantify mistakes using MSE, Binary Cross-Entropy, and Categorical Cross-Entropy.