
Purpose

This repository follows along with the playlist created by Vizuara on YouTube (https://www.youtube.com/playlist?list=PLPTV0NXA_ZSj6tNyn_UadmUeU3Q3oR-hu).

The primary objective is to gain a foundational understanding of simple neural networks, including forward propagation, activation layers, backpropagation, and gradient descent.

Lectures

Lectures 1-2

Neurons and layers with matrices and numpy.
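A minimal numpy sketch of a dense layer's forward pass (shapes and values here are illustrative, not taken from the repository):

```python
import numpy as np

# A dense layer: each output neuron is a weighted sum of the inputs plus a bias.
inputs = np.array([[1.0, 2.0, 3.0],
                   [2.0, 5.0, -1.0]])   # batch of 2 samples, 3 features each
weights = np.random.randn(3, 4) * 0.01  # 3 inputs -> 4 neurons
biases = np.zeros((1, 4))

outputs = inputs @ weights + biases     # shape (2, 4)
print(outputs.shape)
```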

Lectures 3-6

Multiple layer neural networks with matrices and numpy.

ReLU and softmax activation functions.
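A small sketch of the two activation functions with numpy (function and variable names are illustrative):

```python
import numpy as np

def relu(x):
    # ReLU zeroes out negative activations
    return np.maximum(0, x)

def softmax(x):
    # Subtract the row-wise max for numerical stability before exponentiating
    exps = np.exp(x - np.max(x, axis=1, keepdims=True))
    return exps / np.sum(exps, axis=1, keepdims=True)

print(relu(np.array([[-1.0, 0.5]])))        # [[0.  0.5]]
print(softmax(np.array([[2.0, 1.0, 0.1]]))) # each row sums to 1
```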

Lectures 7-11

Cross entropy loss and optimizing a single layer with gradient descent.
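A sketch of categorical cross entropy for integer class labels (assuming softmax probabilities as input; names are illustrative):

```python
import numpy as np

def cross_entropy(predictions, targets):
    # predictions: softmax outputs, shape (batch, classes)
    # targets: integer class labels, shape (batch,)
    clipped = np.clip(predictions, 1e-7, 1 - 1e-7)  # avoid log(0)
    correct_confidences = clipped[np.arange(len(targets)), targets]
    return np.mean(-np.log(correct_confidences))

preds = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
labels = np.array([0, 1])
print(cross_entropy(preds, labels))  # low loss: true classes have high probability
```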

Lecture 12

Backpropagation for a single neuron.
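A toy sketch of the chain rule for one neuron with a ReLU, treating the neuron's output as the loss (values are illustrative):

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0])
w = np.array([-3.0, -1.0, 2.0])
b = 1.0

z = np.dot(x, w) + b           # pre-activation
y = max(z, 0.0)                # ReLU

# Backward pass: dL/dw = dL/dy * dy/dz * dz/dw
dL_dy = 1.0                    # treat the neuron output as the loss
dy_dz = 1.0 if z > 0 else 0.0  # ReLU derivative
dL_dw = dL_dy * dy_dz * x      # dz/dw = x
dL_db = dL_dy * dy_dz          # dz/db = 1
dL_dx = dL_dy * dy_dz * w      # dz/dx = w
print(dL_dw, dL_db, dL_dx)
```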

Lectures 13-17

Backpropagation for neuron layers and activation layers.
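A sketch of the backward pass through one dense layer and a ReLU, given the gradient flowing back from the next layer (all names and values are illustrative):

```python
import numpy as np

inputs = np.array([[1.0, 2.0, 3.0]])
weights = np.array([[0.2, 0.8],
                    [-0.5, 0.9],
                    [0.3, -0.1]])
biases = np.zeros((1, 2))

z = inputs @ weights + biases
a = np.maximum(0, z)                        # ReLU forward

dvalues = np.ones_like(a)                   # stand-in gradient from the next layer
drelu = dvalues * (z > 0)                   # ReLU backward: pass gradient only where z > 0
dweights = inputs.T @ drelu                 # gradient w.r.t. weights
dbiases = drelu.sum(axis=0, keepdims=True)  # gradient w.r.t. biases
dinputs = drelu @ weights.T                 # gradient passed to earlier layers
print(dweights.shape, dbiases.shape, dinputs.shape)
```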

Lecture 18

Backpropagation on the cross entropy loss function.
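A sketch of the gradient of cross entropy with respect to the predicted probabilities, for one-hot targets (illustrative values):

```python
import numpy as np

y_pred = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.8, 0.1]])
y_true = np.array([[1, 0, 0],
                   [0, 1, 0]])

samples = len(y_pred)
# dL/dp = -y_true / p, averaged over the batch
dinputs = -y_true / np.clip(y_pred, 1e-7, 1 - 1e-7) / samples
print(dinputs)
```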

Lectures 19-21

Combined backpropagation for softmax and cross entropy loss.

Entire backpropagation pipeline for neural networks.

Entire forward pass pipeline for neural networks.
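A sketch of the combined softmax + cross entropy backward step: differentiated together, the gradient with respect to the logits simplifies to the predicted probabilities minus the one-hot targets (names are illustrative):

```python
import numpy as np

softmax_out = np.array([[0.7, 0.2, 0.1],
                        [0.1, 0.8, 0.1]])
class_labels = np.array([0, 1])

samples = len(softmax_out)
dinputs = softmax_out.copy()
dinputs[np.arange(samples), class_labels] -= 1  # subtract 1 at the true class
dinputs /= samples                              # average over the batch
print(dinputs)
```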

Lecture 22

Gradient descent for entire neural network.
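A minimal sketch of the vanilla gradient descent update applied to one layer's parameters (gradients here are stand-ins for the real backpropagated values):

```python
import numpy as np

learning_rate = 0.1
weights = np.random.randn(3, 4) * 0.01
biases = np.zeros((1, 4))

dweights = np.random.randn(3, 4)  # stand-in for gradients from backpropagation
dbiases = np.random.randn(1, 4)

weights -= learning_rate * dweights
biases -= learning_rate * dbiases
```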

Lectures 23-24

Learning rate decay in optimization.

Momentum in training neural networks.
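A sketch of both ideas, assuming a simple 1/(1 + decay·step) schedule and a classic momentum update (hyperparameter values are illustrative):

```python
import numpy as np

# Learning rate decay: shrink the step size as training progresses
initial_lr, decay = 1.0, 1e-3
for step in (0, 1000, 2000):
    current_lr = initial_lr / (1.0 + decay * step)
    print(step, current_lr)

# Momentum: blend the previous update direction into the current one
momentum, lr = 0.9, 0.1
weights = np.zeros(3)
velocity = np.zeros_like(weights)
dweights = np.array([0.5, -1.0, 2.0])          # stand-in gradient from backprop
velocity = momentum * velocity - lr * dweights
weights += velocity
print(weights)
```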

Lectures 25-26

Coding the AdaGrad optimizer for neural networks.

Coding the RMSprop optimizer for neural networks.
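A sketch of a single update step for each optimizer; AdaGrad keeps a running sum of squared gradients, RMSprop replaces it with an exponentially decaying average (hyperparameters and gradients are illustrative):

```python
import numpy as np

lr, eps = 0.1, 1e-7
weights = np.zeros(3)
dweights = np.array([0.5, -1.0, 2.0])  # stand-in gradient

# AdaGrad: accumulate squared gradients and scale each step by their root
adagrad_cache = np.zeros_like(weights)
adagrad_cache += dweights ** 2
weights -= lr * dweights / (np.sqrt(adagrad_cache) + eps)

# RMSprop: decaying average of squared gradients instead of a running sum
rho = 0.9
rms_cache = np.zeros_like(weights)
rms_cache = rho * rms_cache + (1 - rho) * dweights ** 2
weights -= lr * dweights / (np.sqrt(rms_cache) + eps)
```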

Lecture 27

Coding the Adam optimizer for neural networks.
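A sketch of one Adam update step, combining momentum-style first moments with RMSprop-style second moments plus bias correction (hyperparameters and gradients are illustrative):

```python
import numpy as np

lr, beta1, beta2, eps = 0.001, 0.9, 0.999, 1e-7
weights = np.zeros(3)
dweights = np.array([0.5, -1.0, 2.0])  # stand-in gradient
m = np.zeros_like(weights)             # first moment (momentum-like)
v = np.zeros_like(weights)             # second moment (RMSprop-like)

t = 1                                  # update step, starting at 1
m = beta1 * m + (1 - beta1) * dweights
v = beta2 * v + (1 - beta2) * dweights ** 2
m_hat = m / (1 - beta1 ** t)           # bias correction
v_hat = v / (1 - beta2 ** t)
weights -= lr * m_hat / (np.sqrt(v_hat) + eps)
```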

Lectures 28-31

Neural network testing, generalization, and overfitting.

K-Fold cross validation.

L1/L2 regularization to avoid overfitting.

Dropout layers in neural networks.
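A sketch of two of these regularization ideas, L2 weight penalties and inverted dropout (the lambda, dropout rate, and array shapes are illustrative):

```python
import numpy as np

# L2 regularization: add lambda * sum(w^2) to the loss to penalize large weights
weights = np.random.randn(3, 4)
l2_lambda = 5e-4
l2_penalty = l2_lambda * np.sum(weights ** 2)
dweights_reg = 2 * l2_lambda * weights  # its contribution to the weight gradient

# Inverted dropout: randomly zero activations during training and rescale the rest
dropout_rate = 0.3
activations = np.random.rand(1, 10)
mask = np.random.binomial(1, 1 - dropout_rate, activations.shape) / (1 - dropout_rate)
dropped = activations * mask            # at test time, no mask is applied
```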