Neural Networks

Neural networks are everywhere: capable of learning complex relationships between features and applicable to many different kinds of data, they're responsible for much of the recent excitement surrounding machine learning.

The key to understanding neural networks is understanding the iterative training process: how do we attribute portions of the error to individual weights in order to improve the model's predictive accuracy? You can implement this process, known as backpropagation, in the notebooks below.
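To make the idea concrete, here is a minimal sketch of backpropagation for a one-hidden-layer network with sigmoid activations and squared-error loss. The toy XOR data, layer sizes, learning rate, and variable names are illustrative assumptions, not the setup used in the notebooks.

```python
# A minimal backpropagation sketch (illustrative, not the notebooks' code).
# Forward pass computes activations; backward pass uses the chain rule to
# attribute a share of the error to each weight, then takes a gradient step.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: XOR, a classic problem a linear model cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights and biases for a 2-3-1 architecture.
W1 = rng.normal(size=(2, 3))
b1 = np.zeros(3)
W2 = rng.normal(size=(3, 1))
b2 = np.zeros(1)

lr = 0.5
for epoch in range(10_000):
    # Forward pass: compute activations layer by layer.
    a1 = sigmoid(X @ W1 + b1)       # hidden activations, shape (4, 3)
    y_hat = sigmoid(a1 @ W2 + b2)   # predictions, shape (4, 1)

    # Backward pass: propagate the error back through the network.
    # delta2 is dLoss/dz at the output; delta1 at the hidden layer.
    delta2 = (y_hat - y) * y_hat * (1 - y_hat)
    delta1 = (delta2 @ W2.T) * a1 * (1 - a1)

    # Gradient step on every weight and bias.
    W2 -= lr * a1.T @ delta2
    b2 -= lr * delta2.sum(axis=0)
    W1 -= lr * X.T @ delta1
    b1 -= lr * delta1.sum(axis=0)

print(np.round(y_hat, 2))  # typically approaches [0, 1, 1, 0]
```

The two `delta` lines are where the error is attributed to each layer: the output-layer error is scaled by the derivative of the activation, then passed backwards through the weights to compute the hidden-layer error.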

Online resources

Note: If you don't mind wading through some algebra, I'd recommend working through the explanation in Bishop; we use the same notation in our implementation of backpropagation.

Click the links below to access the Jupyter Notebooks for Neural Networks.