From e09b22a2a1d174a3b63c3ff09ae6ddddf8fa1d21 Mon Sep 17 00:00:00 2001
From: judsonupchurch
Date: Sun, 5 Jan 2025 18:59:24 -0600
Subject: [PATCH] Added lecture contents in readme

---
 README.md | 54 +++++++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 41 insertions(+), 13 deletions(-)

diff --git a/README.md b/README.md
index 70a9fa0..1f99385 100644
--- a/README.md
+++ b/README.md
@@ -3,27 +3,55 @@ Following along with the playlist created by Vizuara on Youtube (https://www.you
 
 The primary objective is to gain a foundational understanding of simple neural networks including forward propagation, activation layers, backward propagation, and gradient descent.
 
-## Lecture Contents
-Lectures 1-2 use same handout.
+# Lectures
+## Lectures 1-2
+Neurons and layers with matrices and numpy.
 
-Lectures 3-6 use same handout.
+## Lectures 3-6
+Multiple layer neural networks with matrices and numpy.
+
+ReLU and softmax activation functions.
 
-Lectures 7-11 use same handout.
+## Lectures 7-11
+Cross entropy loss and optimizing a single layer with gradient descent.
 
-Lecture 12 uses same handout.
+## Lecture 12
+Backpropagation for a single neuron.
 
-Lectures 13-17 use same handout.
+## Lectures 13-17
+Backpropagation for neuron layers and activation layers.
 
-Lecture 18 uses same handout.
+## Lecture 18
+Backpropagation on the cross entropy loss function.
 
-Lectures 19-21 use same handout.
+## Lectures 19-21
+Combined backpropagation for softmax and cross entropy loss.
+
+Entire backpropagation pipeline for neural networks.
+
+Entire forward pass pipeline for neural networks.
 
-Lecture 22 uses same handout.
+## Lecture 22
+Gradient descent for the entire neural network.
 
-Lectures 23-24 use same handout.
+## Lectures 23-24
+Learning rate decay in optimization.
+
+Momentum in training neural networks.
 
-Lectures 25-26 use same handout.
+## Lectures 25-26
+Coding the ADAGRAD optimizer for neural networks.
+
+Coding the RMSprop optimizer for neural networks.
 
-Lecture 27 uses same handout.
+## Lecture 27
+Coding the ADAM optimizer for neural networks.
 
-Lectures 28-31 use same handout.
\ No newline at end of file
+## Lectures 28-31
+Neural network testing, generalization, and overfitting.
+
+K-fold cross validation.
+
+L1/L2 regularization to avoid overfitting.
+
+Dropout layers in neural networks.
\ No newline at end of file