Kernel Regression

When we fit a linear regression model, we make the parametric assumption that the label can be expressed as a weighted sum of the features, plus some noise. Whilst linear regression tends to work well when this assumption holds, it tends to perform substantially worse when it doesn't. As the name suggests, non-parametric regression methods don't require us to make explicit parametric assumptions; instead, they're generally based on the assumption that examples with similar features will have similar labels.
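The parametric assumption above can be written as y = Xw + noise, and least squares recovers the weights w when it holds. A minimal sketch (the synthetic data and weight values are illustrative, not from the notebooks):

```python
import numpy as np

# Parametric assumption: the label is a weighted sum of the features plus noise,
# i.e. y = X @ true_w + noise. Least squares recovers the weights.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))          # 100 examples, 2 features
true_w = np.array([2.0, -1.0])         # illustrative "ground truth" weights
y = X @ true_w + rng.normal(scale=0.1, size=100)

w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
# w_hat is close to true_w when the linear assumption holds
```

When the true relationship is nonlinear, no choice of w fits well, which is where non-parametric methods come in.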

You can explore how the widely used kernel regression method (also known as the Nadaraya-Watson estimator) uses this assumption to give us a flexible, non-parametric tool by tackling the notebooks below.
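As a preview of the idea, the Nadaraya-Watson estimator predicts at a query point by taking a weighted average of the training labels, where each weight comes from a kernel measuring how close that training example is to the query. A minimal sketch with a Gaussian kernel (the bandwidth value and test function are illustrative assumptions):

```python
import numpy as np

def gaussian_kernel(u):
    # Gaussian kernel; the normalising constant cancels in the estimator
    return np.exp(-0.5 * u ** 2)

def nadaraya_watson(x_query, x_train, y_train, bandwidth=0.5):
    # Weight each training label by the kernel similarity of its input to the
    # query point, then return the weighted average: similar features are
    # assumed to imply similar labels.
    weights = gaussian_kernel((x_query - x_train) / bandwidth)
    return np.sum(weights * y_train) / np.sum(weights)

# Noisy samples from a nonlinear function that linear regression would fit poorly
rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 2.0 * np.pi, 50)
y_train = np.sin(x_train) + rng.normal(scale=0.1, size=50)

y_hat = nadaraya_watson(np.pi / 2, x_train, y_train)
```

The bandwidth controls how local the averaging is: a small bandwidth tracks the data closely but is noisy, while a large one smooths heavily.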

Online resources

Click the links below to access the Jupyter notebooks for kernel regression.