Neural Networks
Neural networks are everywhere. Capable of learning complex relationships between features and applicable to many different kinds of data, they're responsible for much of the recent excitement surrounding machine learning.
The key to understanding neural networks is understanding the iterative training process: how do we attribute portions of the error to individual weights in order to improve the model's predictive accuracy? You can implement this process, known as backpropagation, in the notebooks below; a brief sketch of the idea follows.
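To make the idea concrete before you open the notebooks, here is a minimal NumPy sketch of backpropagation for a one-hidden-layer network, assuming sigmoid hidden units, a linear output, and a squared-error loss. The variable names (`W1`, `W2`, `delta1`, `delta2`, and so on) are illustrative and do not follow the notation used in the notebooks.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy regression problem: learn y = sin(x) on [-pi, pi].
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
y = np.sin(X)

n_hidden = 16
W1 = rng.normal(scale=0.5, size=(1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
b2 = np.zeros(1)
lr = 0.1

for epoch in range(2000):
    # Forward pass: compute activations layer by layer.
    a1 = sigmoid(X @ W1 + b1)   # hidden activations
    y_hat = a1 @ W2 + b2        # linear output

    # Backward pass: propagate the error signal (delta) from the
    # output back towards the input via the chain rule.
    delta2 = (y_hat - y) / len(X)             # error at the output layer
    grad_W2 = a1.T @ delta2
    grad_b2 = delta2.sum(axis=0)

    delta1 = (delta2 @ W2.T) * a1 * (1 - a1)  # chain through the sigmoid
    grad_W1 = X.T @ delta1
    grad_b1 = delta1.sum(axis=0)

    # Gradient descent step: each weight moves against its share of the error.
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

loss = 0.5 * np.mean((y_hat - y) ** 2)
print(f"final loss: {loss:.4f}")
```

The two `delta` terms are the "portions of the error" attributed to each layer; everything else is the chain rule applied layer by layer.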
Online resources
- The Gentle Introduction to Neural Networks series, by David Fumo
- A YouTube series that covers the intuition and some of the maths behind backpropagation
- Sections 5.1 and 5.3 of Bishop's Pattern Recognition and Machine Learning
Note: If you don't mind wading through some algebra, I'd recommend working through the explanation in Bishop; we use the same notation in our implementation of backpropagation.
Click the links below to access the Jupyter notebooks for Neural Networks:
- Neural Network - Empty [Online notebook | .ipynb file]
- Neural Network - Redacted [Online notebook | .ipynb file]
- Neural Network - Complete [Online notebook | .ipynb file | HTML file]