Activation Functions
Why we need non-linearity and a deep dive into Sigmoid, Tanh, ReLU, and Softmax.
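A minimal NumPy sketch of the four functions named above (not taken from the article; input values are illustrative):

```python
import numpy as np

def sigmoid(x):
    # Squashes inputs into (0, 1); common for binary outputs
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Zero-centered squashing into (-1, 1)
    return np.tanh(x)

def relu(x):
    # Passes positives through, zeros out negatives
    return np.maximum(0.0, x)

def softmax(x):
    # Turns a vector of scores into a probability distribution
    e = np.exp(x - np.max(x))   # subtract max for numerical stability
    return e / e.sum()

z = np.array([-2.0, 0.0, 3.0])
print(sigmoid(z), tanh(z), relu(z), softmax(z))
```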
Neural network-based dimensionality reduction: Encoder-Decoder architecture and bottleneck representations.
Understanding the Encoder-Decoder architecture used for dimensionality reduction and feature learning.
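A minimal sketch of such a bottleneck model, assuming PyTorch; the 784-dimensional input and layer widths are illustrative assumptions:

```python
import torch.nn as nn

# Encoder compresses a 784-dim input to a 32-dim bottleneck; decoder reconstructs it
autoencoder = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),    # encoder
    nn.Linear(128, 32),  nn.ReLU(),    # bottleneck representation
    nn.Linear(32, 128),  nn.ReLU(),    # decoder
    nn.Linear(128, 784), nn.Sigmoid()  # reconstruction in [0, 1]
)
# Trained by minimizing reconstruction error, e.g. nn.MSELoss()(autoencoder(x), x)
```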
Demystifying the heart of neural network training: The Chain Rule, Gradients, and Error Attribution.
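A tiny worked example of the chain rule on a single neuron, with hand-picked numbers purely for illustration:

```python
import numpy as np

# One neuron: y_hat = sigmoid(w*x + b), loss L = (y_hat - y)^2
x, y, w, b = 2.0, 1.0, 0.5, 0.1
z = w * x + b
y_hat = 1.0 / (1.0 + np.exp(-z))
L = (y_hat - y) ** 2

# Chain rule: dL/dw = dL/dy_hat * dy_hat/dz * dz/dw
dL_dyhat = 2.0 * (y_hat - y)
dyhat_dz = y_hat * (1.0 - y_hat)      # derivative of the sigmoid
dz_dw = x
dL_dw = dL_dyhat * dyhat_dz * dz_dw   # the gradient used by gradient descent
print(L, dL_dw)
```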
How CNNs and deep neural networks power modern discovery engines like Netflix, YouTube, and Pinterest.
Scaling Reinforcement Learning with Deep Learning using Experience Replay and Target Networks.
Understanding how data flows from the input layer to the output layer to generate a prediction.
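A NumPy sketch of one forward pass through a single hidden layer; the layer sizes and random weights are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4,))                        # input layer: 4 features

W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)    # hidden layer: 3 neurons
W2, b2 = rng.normal(size=(2, 3)), np.zeros(2)    # output layer: 2 classes

h = np.maximum(0.0, W1 @ x + b1)                 # affine transform + ReLU
logits = W2 @ h + b2                             # raw output scores
probs = np.exp(logits) / np.exp(logits).sum()    # softmax turns scores into a prediction
print(probs)
```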
Understanding the competitive framework between Generators and Discriminators to create realistic synthetic data.
A deep dive into the GRU architecture, its update and reset gates, and how it compares to LSTM.
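A NumPy sketch of a single GRU step showing the update gate z and reset gate r (PyTorch's gating convention; weight shapes are illustrative):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_cell(x, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    z = sigmoid(Wz @ x + Uz @ h_prev)              # update gate: how much old state to keep
    r = sigmoid(Wr @ x + Ur @ h_prev)              # reset gate: how much past to forget
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))  # candidate state
    return z * h_prev + (1 - z) * h_tilde          # blend previous and candidate states

d_in, d_h = 4, 3
rng = np.random.default_rng(0)
W = [rng.normal(size=s) for s in [(d_h, d_in), (d_h, d_h)] * 3]
print(gru_cell(rng.normal(size=d_in), np.zeros(d_h), *W))
```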
How to train neural networks to categorize images into predefined classes using CNNs.
Going beyond bounding boxes: How to classify every single pixel in an image.
Intermediate-level ML projects focusing on NLP, Computer Vision, and Time-Series forecasting.
Understanding the high-level API that makes building neural networks as easy as stacking LEGO blocks.
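A minimal Keras Sequential sketch; the MNIST-style input shape and layer widths are illustrative assumptions:

```python
import tensorflow as tf

# Stack layers like blocks: input -> flatten -> hidden -> output
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),                   # e.g. 28x28 grayscale images
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),  # 10-class output
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, epochs=5) once data is loaded
```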
Understanding how models quantify mistakes using MSE, Binary Cross-Entropy, and Categorical Cross-Entropy.
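A NumPy sketch of the three losses named above (clipping constants and sample values are illustrative):

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean Squared Error: average squared distance (regression)
    return np.mean((y_true - y_pred) ** 2)

def binary_cross_entropy(y_true, p):
    # Penalizes confident wrong predictions for 0/1 labels
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

def categorical_cross_entropy(y_onehot, probs):
    # Multi-class version: negative log-probability of the true class
    return -np.mean(np.sum(y_onehot * np.log(probs + 1e-12), axis=1))

print(mse(np.array([1.0, 2.0]), np.array([1.5, 1.5])))
print(binary_cross_entropy(np.array([1, 0]), np.array([0.9, 0.2])))
```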
A deep dive into the LSTM architecture, cell states, and the gating mechanisms that prevent vanishing gradients.
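A short PyTorch sketch showing the hidden and cell states an LSTM layer produces; sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
x = torch.randn(2, 5, 8)          # batch of 2 sequences, 5 timesteps, 8 features
out, (h_n, c_n) = lstm(x)
print(out.shape)                  # (2, 5, 16): hidden state at every timestep
print(h_n.shape)                  # (1, 2, 16): final hidden state
print(c_n.shape)                  # (1, 2, 16): final cell state carried through the gates
```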
Understanding how multiple attention 'heads' allow Transformers to capture diverse linguistic and spatial relationships simultaneously.
Exploring Feedforward Neural Networks, Hidden Layers, and how stacking neurons solves non-linear problems.
How padding prevents data loss at the edges and controls the output size of convolutional layers.
Understanding Max Pooling, Average Pooling, and how they provide spatial invariance.
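A NumPy sketch of non-overlapping 2x2 max and average pooling (a simplified case for illustration):

```python
import numpy as np

def pool2d(x, size=2, mode="max"):
    # Downsample by taking the max (or mean) over non-overlapping size x size windows
    h, w = x.shape[0] // size, x.shape[1] // size
    windows = x[:h * size, :w * size].reshape(h, size, w, size)
    return windows.max(axis=(1, 3)) if mode == "max" else windows.mean(axis=(1, 3))

img = np.arange(16).reshape(4, 4)
print(pool2d(img, 2, "max"))    # [[ 5  7] [13 15]]
print(pool2d(img, 2, "mean"))   # [[ 2.5  4.5] [10.5 12.5]]
```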
Exploring Meta's PyTorch library, dynamic computational graphs, and its Pythonic approach to deep learning.
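A tiny autograd example illustrating the dynamic graph: the graph is recorded as ordinary Python executes (the values are arbitrary):

```python
import torch

x = torch.tensor(2.0, requires_grad=True)
y = x ** 2 + 3 * x          # operations are recorded on the fly
if y > 5:                   # plain Python control flow participates in the graph
    y = y * 2
y.backward()                # backpropagate through whatever actually executed
print(x.grad)               # dy/dx = 2*(2x + 3) = 14 when x = 2
```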
An introduction to Recurrent Neural Networks, hidden states, and processing sequential data.
How AI learns by predicting missing parts of its own input, powering Large Language Models and Computer Vision.
Understanding how the step size of a filter influences spatial dimensions and computational efficiency.
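The standard output-size relationship, evaluated for a few stride and padding choices (the 32x32 input and 3x3 filter are illustrative):

```python
def conv_output_size(n, k, stride=1, padding=0):
    # Output width/height = floor((n + 2p - k) / s) + 1
    return (n + 2 * padding - k) // stride + 1

# A 3x3 filter over a 32x32 input:
print(conv_output_size(32, 3, stride=1, padding=0))  # 30 -- slight shrinkage
print(conv_output_size(32, 3, stride=2, padding=1))  # 16 -- stride 2 halves the feature map
print(conv_output_size(32, 3, stride=1, padding=1))  # 32 -- 'same' padding preserves size
```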
An introduction to Google's TensorFlow ecosystem, Keras API, and the dataflow graph architecture.
Defining tensors as generalized matrices, their ranks (order), and their crucial role in representing complex data types like images and video in Deep Learning frameworks (PyTorch, TensorFlow).
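A quick sketch of tensor ranks, assuming PyTorch; the shapes are illustrative:

```python
import torch

scalar = torch.tensor(3.14)                   # rank 0: a single number
vector = torch.tensor([1.0, 2.0, 3.0])        # rank 1: a 1-D array
matrix = torch.ones(28, 28)                   # rank 2: e.g. a grayscale image
image_batch = torch.zeros(32, 3, 224, 224)    # rank 4: batch x channels x height x width
video_clip = torch.zeros(16, 3, 8, 112, 112)  # rank 5: batch x channels x frames x H x W

for t in (scalar, vector, matrix, image_batch, video_clip):
    print(t.dim(), tuple(t.shape))            # rank (order) and shape
```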
Understanding kernels, filters, and how feature maps are created in Convolutional Neural Networks.
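A NumPy sketch of how a feature map is produced by sliding a filter over an image (the 4x4 image and hand-made edge filter are illustrative):

```python
import numpy as np

def convolve2d(image, kernel):
    # Slide the kernel over the image; each dot product becomes one feature-map value
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.array([[1, 1, 0, 0],
                [1, 1, 0, 0],
                [0, 0, 1, 1],
                [0, 0, 1, 1]], dtype=float)
vertical_edge = np.array([[1, -1], [1, -1]], dtype=float)
print(convolve2d(img, vertical_edge))   # strong responses where intensity changes
```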
Understanding how models weigh the importance of different parts of an input sequence using Queries, Keys, and Values.
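A NumPy sketch of scaled dot-product attention over Queries, Keys, and Values; the token count and dimensions are illustrative assumptions:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                            # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # softmax over the keys
    return weights @ V, weights                                # weighted sum of values

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))          # 4 tokens, dimension 8
output, attn = scaled_dot_product_attention(Q, K, V)
print(output.shape, attn.shape)   # (4, 8) (4, 4)
```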
Understanding the building block of Deep Learning: Weights, Bias, and Step Functions.
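A minimal perceptron sketch with weights, bias, and a step function; the hand-picked AND-gate weights are for illustration only:

```python
import numpy as np

def perceptron(x, weights, bias):
    # Weighted sum of inputs plus bias, passed through a step function
    z = np.dot(weights, x) + bias
    return 1 if z >= 0 else 0

# Hand-picked weights that implement a logical AND gate
w, b = np.array([1.0, 1.0]), -1.5
for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, perceptron(np.array(x), w, b))   # fires only when both inputs are 1
```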
A comprehensive deep dive into the Transformer architecture, including Encoder-Decoder stacks and Positional Encoding.
Exploring 3D CNNs, Optical Flow, and Temporal Modeling for analyzing moving images.