# Neural-Networks-From-Scratch

## Purpose

This repository follows along with the playlist created by Vizuara on YouTube (https://www.youtube.com/playlist?list=PLPTV0NXA_ZSj6tNyn_UadmUeU3Q3oR-hu).

The primary objective is to gain a foundational understanding of simple neural networks including forward propagation, activation layers, backward propagation, and gradient descent.

## Lectures

### Lectures 1-2

Neurons and layers implemented with matrices and NumPy.
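A minimal sketch of the layer-as-matrix idea: a batch of inputs times a weight matrix, plus a bias vector, computes every neuron's output at once. All shapes and values here are illustrative, not taken from the lectures.

```python
import numpy as np

# Batch of 2 samples, each with 3 input features.
inputs = np.array([[1.0, 2.0, 3.0],
                   [0.5, -1.0, 2.0]])            # shape (2, 3)

# One dense layer with 4 neurons: one weight column per neuron.
weights = np.array([[0.2, 0.8, -0.5, 1.0],
                    [0.5, -0.91, 0.26, -0.5],
                    [-0.26, -0.27, 0.17, 0.87]])  # shape (3, 4)
biases = np.array([2.0, 3.0, 0.5, 1.0])           # shape (4,)

# One matrix multiply covers the whole batch; biases broadcast across rows.
outputs = inputs @ weights + biases               # shape (2, 4)
```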

### Lectures 3-6

Multi-layer neural networks with matrices and NumPy.

ReLU and softmax activation functions.
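The two activations can be sketched in a few lines: ReLU clips negatives to zero for hidden layers, while softmax turns the final layer's raw scores into a probability distribution. The stability trick of subtracting the row maximum before exponentiating is standard practice; the example logits are made up.

```python
import numpy as np

def relu(x):
    # Zero out negative pre-activations, pass positives through unchanged.
    return np.maximum(0, x)

def softmax(x):
    # Subtract the row-wise max before exp() so large logits don't overflow;
    # the subtraction cancels out in the ratio.
    e = np.exp(x - np.max(x, axis=1, keepdims=True))
    return e / np.sum(e, axis=1, keepdims=True)

logits = np.array([[1.0, 2.0, -1.0]])  # illustrative raw scores
probs = softmax(relu(logits))          # each row sums to 1
```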

### Lectures 7-11

Cross entropy loss and optimizing a single layer with gradient descent.
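A sketch of categorical cross-entropy for integer class labels: take the negative log of the probability the network assigned to the correct class, averaged over the batch. Clipping before the log is a common guard against `log(0)`; the probabilities below are made up.

```python
import numpy as np

def cross_entropy(probs, y_true):
    # probs: (batch, classes) predicted probabilities.
    # y_true: integer class label per sample.
    probs = np.clip(probs, 1e-7, 1 - 1e-7)  # avoid log(0)
    correct = probs[np.arange(len(probs)), y_true]
    return -np.mean(np.log(correct))

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
loss = cross_entropy(probs, np.array([0, 1]))
```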

### Lecture 12

Backpropagation for a single neuron.
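The single-neuron case reduces to the chain rule written out by hand. A sketch, using the neuron's own output as a toy "loss" so the upstream gradient is simply 1 (the values are illustrative):

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0])   # inputs
w = np.array([-3.0, -1.0, 2.0])  # weights
b = 1.0                          # bias

# Forward pass.
z = np.dot(w, x) + b             # pre-activation
a = max(z, 0.0)                  # ReLU

# Backward pass via the chain rule, treating the output itself as the loss.
dL_da = 1.0
da_dz = 1.0 if z > 0 else 0.0    # ReLU derivative
dL_dz = dL_da * da_dz
dL_dw = dL_dz * x                # gradient for each weight
dL_db = dL_dz                    # gradient for the bias
```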

### Lectures 13-17

Backpropagation for neuron layers and activation layers.
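At the layer level, the same chain rule becomes three matrix products: given the upstream gradient `dvalues`, the weight gradient pairs inputs with upstream gradients, the bias gradient sums over the batch, and the input gradient is passed back to the previous layer. Shapes and values are illustrative.

```python
import numpy as np

inputs = np.array([[1.0, 2.0],
                   [3.0, 4.0]])          # (batch=2, features=2)
weights = np.array([[0.1, -0.2, 0.3],
                    [0.4, 0.5, -0.6]])   # (2, 3): layer with 3 neurons
dvalues = np.ones((2, 3))                # upstream gradient dL/doutputs

dweights = inputs.T @ dvalues            # (2, 3), same shape as weights
dbiases = dvalues.sum(axis=0)            # (3,), summed over the batch
dinputs = dvalues @ weights.T            # (2, 2), passed to previous layer
```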

### Lecture 18

Backpropagation on the cross entropy loss function.
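Taken on its own, the cross-entropy derivative with respect to each predicted probability is `-y_i / p_i` with the one-hot target `y`, averaged over the batch. A sketch with a single made-up sample:

```python
import numpy as np

probs = np.array([[0.7, 0.2, 0.1]])      # predicted probabilities
y_onehot = np.array([[1.0, 0.0, 0.0]])   # one-hot target

# dL/dp_i = -y_i / p_i, divided by the batch size to average the loss gradient.
dinputs = -y_onehot / probs / len(probs)
```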

### Lectures 19-21

Combined backpropagation for softmax and cross entropy loss.

Entire backpropagation pipeline for neural networks.

Entire forward pass pipeline for neural networks.
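The combined softmax + cross-entropy backward step collapses to a strikingly simple expression: the gradient with respect to the logits is just `probabilities - one_hot_targets`, averaged over the batch. A sketch with made-up logits:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x, axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

logits = np.array([[2.0, 1.0, 0.1]])
y = np.array([0])  # true class index

probs = softmax(logits)

# Combined gradient w.r.t. the logits: subtract 1 at the true class,
# then average over the batch.
dlogits = probs.copy()
dlogits[np.arange(len(logits)), y] -= 1.0
dlogits /= len(logits)
```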

### Lecture 22

Gradient descent for the entire neural network.
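Once every layer has its gradients, the update rule itself is one line per parameter: step against the gradient, scaled by the learning rate. A sketch with made-up values:

```python
import numpy as np

learning_rate = 0.1
weights = np.array([0.5, -0.3])
dweights = np.array([0.2, -0.4])  # gradients from backpropagation

# Vanilla gradient descent: move each parameter opposite its gradient.
weights -= learning_rate * dweights
```

In a full network the same update is applied to every layer's weights and biases each iteration.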

### Lectures 23-24

Learning rate decay in optimization.

Momentum in training neural networks.
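Both ideas fit in a few lines. Decay shrinks the step size as iterations accumulate (the `1/(1 + decay * step)` form is one common choice); momentum keeps a running velocity so updates build up along consistent gradient directions. The constants are illustrative.

```python
import numpy as np

# Learning rate decay: step size shrinks as training progresses.
initial_lr, decay = 1.0, 0.1

def lr_at(step):
    return initial_lr / (1.0 + decay * step)

# Momentum: blend the previous velocity into each update.
w = np.array([0.0])
velocity = np.zeros_like(w)
grad = np.array([1.0])  # pretend the gradient stays constant
beta = 0.9              # momentum coefficient

for step in range(3):
    velocity = beta * velocity - lr_at(step) * grad
    w += velocity
```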

### Lectures 25-26

Coding the AdaGrad optimizer for neural networks.

Coding the RMSprop optimizer for neural networks.
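The two optimizers differ only in how they track squared gradients: AdaGrad accumulates the full history (so the effective rate only shrinks), while RMSprop keeps an exponentially decaying average. A single-step sketch with made-up gradients:

```python
import numpy as np

grad = np.array([0.5, -1.0])
eps = 1e-7   # avoids division by zero
lr = 0.1

# AdaGrad: cache is the running sum of squared gradients.
cache_ada = np.zeros_like(grad)
cache_ada += grad ** 2
update_ada = -lr * grad / (np.sqrt(cache_ada) + eps)

# RMSprop: cache is an exponential moving average of squared gradients.
rho = 0.9
cache_rms = np.zeros_like(grad)
cache_rms = rho * cache_rms + (1 - rho) * grad ** 2
update_rms = -lr * grad / (np.sqrt(cache_rms) + eps)
```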

### Lecture 27

Coding the Adam optimizer for neural networks.
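Adam combines the previous two ideas: a momentum-style average of the gradient (`m`) plus an RMSprop-style average of its square (`v`), each bias-corrected because both start at zero. A single-step sketch with the commonly used default hyperparameters:

```python
import numpy as np

lr, beta1, beta2, eps = 0.001, 0.9, 0.999, 1e-8
grad = np.array([0.5, -1.0])
m = np.zeros_like(grad)  # first moment (momentum term)
v = np.zeros_like(grad)  # second moment (RMSprop term)
t = 1                    # step counter, starting at 1

m = beta1 * m + (1 - beta1) * grad
v = beta2 * v + (1 - beta2) * grad ** 2
m_hat = m / (1 - beta1 ** t)  # bias correction for zero initialization
v_hat = v / (1 - beta2 ** t)
update = -lr * m_hat / (np.sqrt(v_hat) + eps)
```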

### Lectures 28-31

Neural network testing, generalization, and overfitting.

K-fold cross-validation.
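The splitting logic behind k-fold can be sketched without any framework: partition the sample indices into k folds, and in each round hold one fold out for validation while training on the rest. The fold count and sample count below are arbitrary.

```python
import numpy as np

def kfold_indices(n_samples, k):
    # Partition indices into k nearly equal folds.
    folds = np.array_split(np.arange(n_samples), k)
    for i in range(k):
        val_idx = folds[i]  # held-out fold for this round
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train_idx, val_idx

splits = list(kfold_indices(10, 5))
```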

L1/L2 regularization to avoid overfitting.
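Both penalties are added to the data loss to discourage large weights, and each contributes its own term to the weight gradient. A sketch with made-up weights and regularization strengths:

```python
import numpy as np

weights = np.array([0.5, -1.0, 2.0])
lambda_l1, lambda_l2 = 0.01, 0.01  # illustrative regularization strengths

# Penalty terms added to the data loss.
l1_penalty = lambda_l1 * np.sum(np.abs(weights))
l2_penalty = lambda_l2 * np.sum(weights ** 2)

# Their gradients, added to the backpropagated weight gradient.
dl1 = lambda_l1 * np.sign(weights)
dl2 = 2 * lambda_l2 * weights
```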

Dropout layers in neural networks.
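A sketch of inverted dropout, the common formulation: randomly zero activations during training and scale the survivors by `1/keep_prob`, so the expected activation is unchanged and no rescaling is needed at inference time. The keep probability and activation values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
keep_prob = 0.8
activations = np.ones((4, 5))  # pretend layer outputs

# Bernoulli mask: 1 with probability keep_prob, scaled so E[dropped] == activations.
mask = rng.binomial(1, keep_prob, activations.shape) / keep_prob
dropped = activations * mask
```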