This is a blog about machine learning and deep learning fundamentals, built by the authors of the textbook Machine Learning Refined, published by Cambridge University Press. The posts, organized into short series, use careful writing and interactive coding widgets to provide an intuitive, playful way to learn core concepts in AI - from the most basic to the most advanced. Every post here is a Python Jupyter notebook, prettied up for the web, that you can download and run on your own machine by cloning our GitHub repo.
3.1. What are derivatives?
3.2. Derivatives at a point and the numerical differentiator
3.3. Derivative equations and hand computations
3.4. Automatic differentiation - the forward mode
3.5. Higher-order derivatives
3.6. Taylor series
3.7. Derivatives of multi-input functions
3.8. Effective gradient computation
3.9. The Hessian and higher-order derivatives
3.10. Multi-input Taylor series
3.11. Getting to know autograd: your professional-grade automatic differentiator
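Series 3 above builds from numerical differentiation up to automatic differentiation. Its starting point, the numerical differentiator of post 3.2, can be previewed in a few lines - a minimal sketch in which the function name, step size, and test case are ours, not the blog's:

```python
import math

def numerical_derivative(f, x, h=1e-5):
    """Central-difference approximation of f'(x) - the numerical
    differentiator idea from post 3.2 (our own minimal version)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Sanity check against a derivative we know exactly: d/dx sin(x) = cos(x).
approx = numerical_derivative(math.sin, 1.0)
exact = math.cos(1.0)
```

The symmetric (central) difference cancels the leading error term of the one-sided quotient, which is why it is a common default despite costing two function evaluations per point.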
7.1. Quadratic functions
7.2. Second-order derivatives and curvature
7.3. Newton's method
7.4. Regularization, Newton's method, and non-convex functions
7.5. The first-order derivation of Newton's method
7.6. Quasi-Newton methods
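Series 7 centers on Newton's method, which repeatedly jumps to the stationary point of a local quadratic model of the cost. A minimal single-input sketch (the function names, the regularizing `eps`, and the test problem are our own illustration, not the blog's code):

```python
def newtons_method(g_prime, g_double_prime, w, steps=10, eps=1e-7):
    """Minimize a 1-D function via Newton's method.
    Each step solves the local quadratic model:
        w <- w - g'(w) / (g''(w) + eps)
    The small eps regularizes steps where curvature is near zero
    (the issue post 7.4 addresses for non-convex functions)."""
    for _ in range(steps):
        w = w - g_prime(w) / (g_double_prime(w) + eps)
    return w

# Minimize g(w) = w**4 + w**2, whose unique minimum sits at w = 0.
w_star = newtons_method(lambda w: 4 * w**3 + 2 * w,   # g'(w)
                        lambda w: 12 * w**2 + 2,      # g''(w)
                        w=1.0)
```

Because each step uses curvature information, convergence near the minimum is dramatically faster than plain gradient descent - here ten steps from `w = 1.0` land essentially on the minimizer.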
12.1. Features, functions, and nonlinear regression
12.2. Features, functions, and nonlinear classification
12.3. Features, functions, and nonlinear unsupervised learning
12.4. Automating nonlinear learning
12.5. Universal approximation
12.6. Validation error
12.7. Model search via boosting
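The unifying idea of series 12 is that nonlinear learning is linear learning on transformed inputs: pick a feature transform f, then fit a linear model to f(x). A tiny sketch of that idea for regression with a single hand-chosen feature (the helper name and toy data are ours, for illustration only):

```python
def fit_single_feature(xs, ys, feature):
    """Least-squares fit of y ~ w * feature(x), no bias term.
    With one feature the normal equations collapse to a single ratio."""
    fs = [feature(x) for x in xs]
    return sum(f * y for f, y in zip(fs, ys)) / sum(f * f for f in fs)

# Noiseless data generated from y = 3 * x**2: a linear fit in x would
# fail, but a linear fit in the feature f(x) = x**2 recovers w = 3.
xs = [-2.0, -1.0, 0.5, 1.0, 2.0]
ys = [3.0 * x**2 for x in xs]
w = fit_single_feature(xs, ys, lambda x: x**2)
```

Posts 12.4-12.7 then ask the harder question the sketch dodges: how to choose the features automatically rather than by hand.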
13.1. Introduction to multi-layer perceptrons
13.2. Batch normalization
13.3. Normalized gradient descent
13.4. Momentum methods
13.5. Regularization
13.6. Stochastic and mini-batch gradient descent
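Several posts in series 13 concern first-order optimizers for training networks. The momentum idea of post 13.4 - descend along an exponential moving average of past gradients rather than the raw gradient - fits in a few lines. This is our own minimal single-variable sketch; the function name, hyperparameter values, and test problem are illustrative assumptions:

```python
def momentum_descent(grad, w, lr=0.1, beta=0.9, steps=200):
    """Gradient descent with momentum on a 1-D cost.
    z accumulates an exponential moving average of past gradients,
    smoothing the zig-zag of plain gradient descent."""
    z = 0.0
    for _ in range(steps):
        z = beta * z + grad(w)   # update the momentum term
        w = w - lr * z           # step along the averaged direction
    return w

# Minimize g(w) = (w - 2)**2, whose gradient is 2 * (w - 2).
w_star = momentum_descent(lambda w: 2.0 * (w - 2.0), w=0.0)
```

On long narrow valleys momentum damps oscillation across the valley while accelerating along it, which is why it remains a standard ingredient in neural-network training.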
14.1. The convolution operation
14.2. Edge-histogram-based features
14.3. Single-layer convolutional neural networks
14.4. Deep convolutional neural networks
14.5. Transfer learning
14.6. Adversarial examples and the fragility of convolutional networks
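The convolution operation that opens series 14 is just a kernel slid across an input, producing a weighted sum at each position. A one-dimensional sketch (function name and example kernel are ours; deep-learning libraries typically implement this cross-correlation variant under the name "convolution"):

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D cross-correlation: slide the kernel across the
    signal, taking a weighted sum at each position."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A difference kernel [-1, 1] responds only where the signal jumps -
# the 1-D analogue of the edge detectors discussed in post 14.2.
edges = conv1d([0, 0, 1, 1, 0], [-1, 1])  # [0, 1, 0, -1]
```

Stacking many learned kernels, with nonlinearities between them, yields the single- and multi-layer convolutional networks of posts 14.3 and 14.4.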
15.1. Introduction
15.2. Fixed-order dynamic systems
15.3. Recurrence relations
15.4. Variable-order dynamic systems
15.5. Autoregressive modeling
15.6. Recurrent networks
15.7. Optimization tricks for recurrent networks
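The thread running through series 15 is the recurrence relation: a hidden state updated step by step as a sequence streams past, which is also the skeleton of a recurrent network. A minimal sketch of an order-1 recurrence driver (the function name and the moving-average example are our own illustration):

```python
def run_recurrence(f, h0, xs):
    """Drive an order-1 recurrence h_t = f(h_{t-1}, x_t) over an input
    sequence, returning the state after each step."""
    h = h0
    history = []
    for x in xs:
        h = f(h, x)
        history.append(h)
    return history

# Example: an exponential moving average, h_t = 0.5*h_{t-1} + 0.5*x_t,
# is a fixed-order dynamic system in the sense of post 15.2.
states = run_recurrence(lambda h, x: 0.5 * h + 0.5 * x, 0.0, [1, 1, 1, 1])
```

A recurrent network replaces the hand-picked update `f` with a parameterized function learned from data - the subject of posts 15.6 and 15.7.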