Added lecture contents in readme
parent 0d4ea9b80f
commit e09b22a2a1
README.md (54 changed lines)
@@ -3,27 +3,55 @@ Following along with the playlist created by Vizuara on Youtube (https://www.you
The primary objective is to gain a foundational understanding of simple neural networks including forward propagation, activation layers, backward propagation, and gradient descent.

-## Lecture Contents
-Lectures 1-2 use same handout.
+# Lectures
+## Lectures 1-2
+Neurons and layers with matrices and numpy.

-Lectures 3-6 use same handout.
+## Lectures 3-6
+Multiple layer neural networks with matrices and numpy.
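For illustration, a minimal numpy sketch of the forward pass these lectures build up: a layer of neurons is a matrix product plus a bias row, and a multi-layer network just chains those products. Shapes, values, and names are illustrative, not taken from the course code:

```python
import numpy as np

# A batch of 3 samples with 4 input features each.
X = np.array([[1.0, 2.0, 3.0, 2.5],
              [2.0, 5.0, -1.0, 2.0],
              [-1.5, 2.7, 3.3, -0.8]])

# Layer 1: 4 inputs -> 5 neurons (one weight column per neuron).
W1 = 0.01 * np.random.randn(4, 5)
b1 = np.zeros((1, 5))

# Layer 2: 5 inputs -> 2 neurons.
W2 = 0.01 * np.random.randn(5, 2)
b2 = np.zeros((1, 2))

# Forward pass: each layer is a matrix product plus a broadcast bias.
layer1_out = X @ W1 + b1            # shape (3, 5)
layer2_out = layer1_out @ W2 + b2   # shape (3, 2)
print(layer2_out.shape)
```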

-Lectures 7-11 use same handout.
+ReLU and softmax activation functions.
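A possible numpy version of the two activations named here; subtracting the row-wise maximum inside softmax is the usual numerical-stability trick:

```python
import numpy as np

def relu(z):
    # Element-wise max(0, z).
    return np.maximum(0.0, z)

def softmax(z):
    # Shift by the row max before exponentiating to avoid overflow,
    # then normalise each row so it sums to 1.
    shifted = z - np.max(z, axis=1, keepdims=True)
    exp = np.exp(shifted)
    return exp / np.sum(exp, axis=1, keepdims=True)

z = np.array([[1.0, 2.0, -1.0],
              [4.0, 0.5, 0.5]])
print(relu(z))
print(softmax(z))  # each row sums to 1
```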

-Lecture 12 uses same handout.
+## Lectures 7-11
+Cross entropy loss and optimizing a single layer with gradient descent.
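A rough sketch of categorical cross-entropy on integer labels, assuming the predictions are already softmax probabilities; the clipping constant is just a common choice. The gradient-descent part of this lecture range is sketched after the Lecture 22 entry below:

```python
import numpy as np

def categorical_cross_entropy(probs, y_true):
    # probs: (N, C) predicted class probabilities, y_true: (N,) integer labels.
    probs = np.clip(probs, 1e-7, 1 - 1e-7)   # avoid log(0)
    correct_class_probs = probs[np.arange(len(probs)), y_true]
    return -np.mean(np.log(correct_class_probs))

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
y_true = np.array([0, 1])
print(categorical_cross_entropy(probs, y_true))  # small loss: both mostly correct
```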

-Lectures 13-17 use same handout.
+## Lecture 12
+Backpropagation for a single neuron.
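One way to picture backpropagation for a single ReLU neuron, written out as explicit chain-rule steps; the numbers and the unit upstream gradient are illustrative only:

```python
import numpy as np

# Single neuron: y = relu(w . x + b), with an upstream gradient dL/dy.
x = np.array([1.0, -2.0, 3.0])
w = np.array([-3.0, -1.0, 2.0])
b = 1.0

z = np.dot(w, x) + b           # pre-activation
y = max(0.0, z)                # ReLU output
dL_dy = 1.0                    # pretend upstream gradient from the loss

# Chain rule, one step at a time.
dy_dz = 1.0 if z > 0 else 0.0  # ReLU derivative
dL_dz = dL_dy * dy_dz
dL_dw = dL_dz * x              # dz/dw = x
dL_db = dL_dz * 1.0            # dz/db = 1
dL_dx = dL_dz * w              # dz/dx = w
print(dL_dw, dL_db, dL_dx)
```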

-Lecture 18 uses same handout.
+## Lectures 13-17
+Backpropagation for neuron layers and activation layers.
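A compact sketch of what the layer-level backward passes might look like: a dense layer turns the upstream gradient into gradients for its weights, biases, and inputs, and a ReLU layer masks the gradient wherever its input was non-positive. Function names are illustrative, not from the repository:

```python
import numpy as np

def dense_backward(dZ, X, W):
    # dZ: (N, out) gradient flowing back into the layer's output.
    dW = X.T @ dZ                            # gradient w.r.t. weights
    db = np.sum(dZ, axis=0, keepdims=True)   # gradient w.r.t. biases
    dX = dZ @ W.T                            # gradient passed to the previous layer
    return dW, db, dX

def relu_backward(dA, Z):
    # Zero the gradient wherever the forward input was <= 0.
    dZ = dA.copy()
    dZ[Z <= 0] = 0.0
    return dZ

X = np.random.randn(3, 4)
W = np.random.randn(4, 5)
Z = X @ W
dW, db, dX = dense_backward(relu_backward(np.ones_like(Z), Z), X, W)
print(dW.shape, db.shape, dX.shape)  # (4, 5) (1, 5) (3, 4)
```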

-Lectures 19-21 use same handout.
+## Lecture 18
+Backpropagation on the cross entropy loss function.

-Lecture 22 uses same handout.
+## Lectures 19-21
+Combined backpropagation for softmax and cross entropy loss.
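The well-known shortcut this lecture range builds toward: when softmax and cross-entropy are differentiated together, the gradient at the pre-softmax scores collapses to the predicted probabilities minus the one-hot targets, divided by the batch size. A small sketch with illustrative numbers:

```python
import numpy as np

probs = np.array([[0.7, 0.2, 0.1],   # softmax outputs
                  [0.1, 0.8, 0.1]])
y_true = np.array([0, 1])            # integer class labels
N = len(probs)

# Combined softmax + cross-entropy backward pass:
# dL/dz = (softmax(z) - one_hot(y)) / N
dZ = probs.copy()
dZ[np.arange(N), y_true] -= 1.0
dZ /= N
print(dZ)
```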

-Lectures 23-24 use same handout.
+Entire backpropagation pipeline for neural networks.

-Lectures 25-26 use same handout.
+Entire forward pass pipeline for neural networks.

-Lecture 27 uses same handout.
+## Lecture 22
+Gradient descent for entire neural network.
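Pulling the pieces above together, a sketch of one training step for a tiny two-layer network: forward pass, combined softmax + cross-entropy backward pass, layer backward passes, then a vanilla gradient-descent update. The shapes, learning rate, and variable names are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))        # 8 samples, 4 features
y = rng.integers(0, 3, size=8)     # 3 classes

W1, b1 = 0.01 * rng.normal(size=(4, 16)), np.zeros((1, 16))
W2, b2 = 0.01 * rng.normal(size=(16, 3)), np.zeros((1, 3))
lr = 0.1

# Forward pass.
Z1 = X @ W1 + b1
A1 = np.maximum(0.0, Z1)                        # ReLU
Z2 = A1 @ W2 + b2
P = np.exp(Z2 - Z2.max(axis=1, keepdims=True))
P /= P.sum(axis=1, keepdims=True)               # softmax probabilities

# Backward pass: softmax + cross-entropy shortcut, then layer by layer.
dZ2 = P.copy()
dZ2[np.arange(len(X)), y] -= 1.0
dZ2 /= len(X)
dW2, db2 = A1.T @ dZ2, dZ2.sum(axis=0, keepdims=True)
dA1 = dZ2 @ W2.T
dZ1 = dA1 * (Z1 > 0)
dW1, db1 = X.T @ dZ1, dZ1.sum(axis=0, keepdims=True)

# Vanilla gradient descent update.
W1 -= lr * dW1
b1 -= lr * db1
W2 -= lr * dW2
b2 -= lr * db2
```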

-Lectures 28-31 use same handout.
+## Lectures 23-24
+Learning rate decay in optimization.
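One common form of learning rate decay (a 1/t-style schedule); whether the course uses exactly this schedule is an assumption:

```python
initial_lr = 1.0
decay = 0.1

for step in range(5):
    # Learning rate shrinks as training progresses.
    lr = initial_lr / (1.0 + decay * step)
    print(step, round(lr, 4))
```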

+Momentum in training neural networks.
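A minimal sketch of SGD with momentum: keep a running velocity per parameter and step along it instead of the raw gradient. The momentum coefficient here is just a typical value, and the gradient is a stand-in:

```python
import numpy as np

W = np.array([1.0, -2.0])
velocity = np.zeros_like(W)
lr, momentum = 0.1, 0.9

for _ in range(3):
    grad = 2.0 * W                          # pretend gradient (of ||W||^2)
    velocity = momentum * velocity - lr * grad
    W += velocity
print(W)
```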

+## Lectures 25-26
+Coding the ADAGRAD optimizer for neural networks.
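A sketch of the AdaGrad idea: accumulate squared gradients per parameter and scale each step by the inverse square root of that running sum; epsilon guards against division by zero. Hyperparameters are illustrative:

```python
import numpy as np

W = np.array([1.0, -2.0])
cache = np.zeros_like(W)
lr, eps = 0.1, 1e-7

for _ in range(3):
    grad = 2.0 * W                           # pretend gradient
    cache += grad ** 2                       # per-parameter history of squared grads
    W -= lr * grad / (np.sqrt(cache) + eps)
print(W)
```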

+Coding the RMSprop optimizer for neural networks.
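RMSprop differs from AdaGrad mainly in that the squared-gradient history is an exponential moving average rather than a running sum; a sketch with a typical decay rate:

```python
import numpy as np

W = np.array([1.0, -2.0])
cache = np.zeros_like(W)
lr, rho, eps = 0.01, 0.9, 1e-7

for _ in range(3):
    grad = 2.0 * W
    cache = rho * cache + (1.0 - rho) * grad ** 2   # moving average of squared grads
    W -= lr * grad / (np.sqrt(cache) + eps)
print(W)
```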

+## Lecture 27
+Coding the ADAM optimizer for neural networks.
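A sketch of the Adam update: a momentum-style first moment plus an RMSprop-style second moment, each bias-corrected before the step. The hyperparameters shown are the commonly used defaults, not necessarily the course's choices:

```python
import numpy as np

W = np.array([1.0, -2.0])
m = np.zeros_like(W)                         # first moment (mean of grads)
v = np.zeros_like(W)                         # second moment (mean of squared grads)
lr, beta1, beta2, eps = 0.001, 0.9, 0.999, 1e-7

for t in range(1, 4):
    grad = 2.0 * W
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)             # bias correction
    v_hat = v / (1 - beta2 ** t)
    W -= lr * m_hat / (np.sqrt(v_hat) + eps)
print(W)
```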

+## Lectures 28-31
+Neural network testing, generalization, and overfitting.

+K-Fold cross validation.
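K-fold cross validation can be sketched with nothing but index slicing: shuffle the sample indices, split them into k folds, and rotate which fold is held out for validation. Five folds here is just an example:

```python
import numpy as np

n_samples, k = 20, 5
indices = np.random.permutation(n_samples)
folds = np.array_split(indices, k)

for i in range(k):
    val_idx = folds[i]
    train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
    # train on train_idx, evaluate on val_idx
    print(f"fold {i}: train={len(train_idx)} val={len(val_idx)}")
```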

+L1/L2 regularization to avoid overfitting.
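The penalties themselves are one-liners: L1 adds lambda times the sum of absolute weights to the loss (contributing sign(W) to the gradient), L2 adds lambda times the sum of squared weights (contributing 2*lambda*W). A sketch with an illustrative lambda:

```python
import numpy as np

W = np.array([[0.5, -1.2], [2.0, 0.1]])
lam = 1e-3

l1_penalty = lam * np.sum(np.abs(W))   # added to the data loss
l2_penalty = lam * np.sum(W ** 2)

# Extra gradient contributed by each penalty.
dW_l1 = lam * np.sign(W)
dW_l2 = 2 * lam * W

print(l1_penalty, l2_penalty)
```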

+Dropout layers in neural networks.
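A sketch of an inverted dropout layer at training time: zero each activation with probability p and scale the survivors so the expected activation is unchanged, which leaves inference untouched. The rate and shapes are illustrative:

```python
import numpy as np

def dropout_forward(A, p=0.3):
    # Keep each unit with probability (1 - p); scale to preserve the expectation.
    mask = (np.random.rand(*A.shape) > p) / (1.0 - p)
    return A * mask, mask

A = np.ones((2, 5))
out, mask = dropout_forward(A, p=0.3)
print(out)   # zeros where dropped, 1/(1-p) elsewhere
```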