Rick Wierenga

A blog about whatever interests me.


14 Nov 2019 · 1 min read

Benchmarking deep learning activation functions on MNIST

Over the years, many activation functions have been introduced by various machine learning researchers. And with so many different activation functions to choose from, aspiring machine learning practitioners might not be able to see the forest for the trees. Although this range of options allows practitioners to train more accurate...
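A few of the activation functions such a benchmark would compare, written as plain NumPy functions (a minimal sketch, not the post's actual benchmark code):

```python
import numpy as np

# Three common activation functions, applied elementwise to a layer's outputs
def relu(x):
    return np.maximum(0, x)        # zero for negative inputs, identity otherwise

def sigmoid(x):
    return 1 / (1 + np.exp(-x))    # squashes any input into (0, 1)

def tanh(x):
    return np.tanh(x)              # squashes any input into (-1, 1)

x = np.array([-2.0, 0.0, 2.0])
activated = relu(x)
```

Swapping one of these in for another is a one-line change in most frameworks, which is what makes this kind of benchmark cheap to run.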

Read

13 Oct 2019 · 1 min read

Building a Barcode Scanner in Swift

Barcodes are everywhere. They provide a uniform interface for machines to identify real world items. Thanks to the decimal code underneath the bars, barcodes are also interpretable by humans, which makes them widely used virtually everywhere.

Read

14 Aug 2019 · 4 mins read

Lesson 14 - An end at the beginning

For the final lesson of fast.ai I redid the first project of the course. In Swift. (Read the first post so you know what we’re building) After using Swift and TensorFlow for the first time yesterday (post), I want to use Swift for everything. So I figured it might be...

Read

13 Aug 2019 · 7 mins read

Lesson 13 - Linear Regression in Swift with TensorFlow

Swift is a new programming language introduced by Apple in 2014. Despite being so young compared to other languages, it is already widely used in industry, mostly to develop apps for Apple’s platforms. Recently, however, after Apple made Swift open source, it was ported to Linux and...

Read

12 Aug 2019 · 5 mins read

Lesson 12 - Label Smoothing

Working in machine learning often means dealing with poorly labelled datasets. Very few companies can afford to hire people to label the huge amounts of data required for large scale projects. Luckily, high quality datasets are available for practice projects. In production, however, one will most likely need a custom dataset. With...
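The technique the post is named after can be sketched in a couple of lines of NumPy (a minimal version; `eps=0.1` is just an illustrative value, not necessarily the one used in the post):

```python
import numpy as np

def smooth_labels(one_hot, eps=0.1):
    """Replace hard 0/1 targets with slightly softened ones, so the model is
    never pushed toward infinite confidence on a possibly-wrong label."""
    k = one_hot.shape[-1]                      # number of classes
    return one_hot * (1 - eps) + eps / k       # still sums to 1

target = np.array([0.0, 0.0, 1.0, 0.0])        # hard one-hot target for class 2
smoothed = smooth_labels(target, eps=0.1)      # [0.025, 0.025, 0.925, 0.025]
```

The smoothed targets still form a valid probability distribution, which is why they drop straight into a cross-entropy loss.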

Read

11 Aug 2019 · 4 mins read

Lesson 11 - Adam Optimizer in Python

Optimizers are one of the most important components of neural networks. They control the learning rate and update the weights and biases of neural networks using the gradients computed by backpropagation. There are many different types of optimizers. Stochastic gradient descent, or SGD, and Adam are by far the most...
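The update rule a post like this builds up to can be sketched in a few lines of NumPy (a minimal single-parameter version, not necessarily the post's exact code; the hyperparameter defaults are the commonly used ones):

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: momentum plus a per-parameter adaptive learning rate."""
    m = beta1 * m + (1 - beta1) * grad        # first moment: running mean of gradients
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment: running mean of squared gradients
    m_hat = m / (1 - beta1 ** t)              # bias correction for the early steps
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Toy usage: minimise f(w) = w^2, whose gradient is 2w
w, m, v = 1.0, 0.0, 0.0
for t in range(1, 501):
    w, m, v = adam_step(w, 2 * w, m, v, t, lr=0.01)
```

In a real network the same update runs elementwise over every weight tensor, with `grad` supplied by backpropagation.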

Read

10 Aug 2019 · 2 mins read

Lesson 10 - Correlation vs Covariance

Machine learning is, to a large extent, about the relationships between numbers. This topic is not limited to machine learning, though; it is studied in mathematics as well, in statistics to be precise. In this post I want to give you a quick introduction to statistics by explaining the...
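The two statistics the post compares can be computed directly from their definitions in NumPy (population versions, dividing by n rather than n − 1):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.0, 6.0, 8.0, 10.0])   # y = 2x: perfectly linearly related

# Covariance: average product of deviations from the mean (scale-dependent)
cov = np.mean((x - x.mean()) * (y - y.mean()))

# Correlation: covariance normalised by both standard deviations (always in [-1, 1])
corr = cov / (x.std() * y.std())
```

Rescaling `y` changes the covariance but leaves the correlation at 1.0, which is exactly the distinction between the two.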

Read

09 Aug 2019 · 15 mins read

Lesson 9 - Looking inside a neural network

While sometimes neural networks seem like black boxes that happen to be good at certain tasks, they really are not. After a quick setup of a fully connected neural network with one hidden layer, I’ll demonstrate how to look inside a neural network by visualising the hidden layer and get...

Read

08 Aug 2019 · 6 mins read

Lesson 8 - Numpy vs PyTorch for Linear Algebra

Numpy is one of the most popular linear algebra libraries right now. There’s also PyTorch - an open source deep learning framework developed by Facebook Research. While the latter is best known for its machine learning capabilities, it can also be used for linear algebra, just like Numpy.

Read

07 Aug 2019 · 3 mins read

Fast.ai Part 1 - Conclusion

For the past week I did one lesson of fast.ai a day including a machine learning project for each lesson. I kept track of the things I learned on my blog. Here’s a summary of what I learned so far.

Read

07 Aug 2019 · 10 mins read

Lesson 7 - GANs for Watermark Removal

Generative Adversarial Networks, or GANs, are a new machine learning technique developed by Goodfellow et al. (2014). GANs are generally known as networks that generate new things like images, videos, text, music or nearly any other form of media. This is not the only application of GANs, however. GANs can...

Read

07 Aug 2019 · 6 mins read

Lesson 6 - Convolutions and CNNs

Convolutions are ways to transform images. There are two main applications of this technique: image editing and convolutional neural networks. Convolutional neural networks, or CNNs, use convolutions to transform images, making them perform much better at computer vision tasks.
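The core operation can be sketched in a few lines of NumPy (a minimal "valid"-padding convolution on a toy image, not the post's exact code):

```python
import numpy as np

def conv2d(image, kernel):
    """Slide the kernel over the image, taking a weighted sum at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1          # output shrinks with "valid" padding
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy image: left half dark, right half bright
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

kernel = np.array([[-1.0, 1.0]])          # responds where brightness jumps left-to-right
edges = conv2d(image, kernel)             # nonzero only along the vertical edge
```

In a CNN the kernel weights are not hand-picked like this edge detector; they are learned by backpropagation.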

Read

06 Aug 2019 · 7 mins read

Lesson 5 - Neural Network Fundamentals

To really understand deep learning, you need to understand how neural networks, the mechanism that makes deep learning deep, work. This tutorial explains how to write a neural network from scratch using PyTorch.

Read

05 Aug 2019 · 5 mins read

Lesson 4 - Autocompletion while typing

Virtually all mobile phones support autocompletion nowadays. It might look like a trivial feature, but once you start thinking about how to implement it (something I’ve wanted to understand for a very long time), it turns out to be quite complex. If you know a little bit about deep learning, it’s...

Read

04 Aug 2019 · 9 mins read

Lesson 3 - Self Driving Cars

Deep learning enables us to do things that could never be done before. It is able to do things that were science fiction only 20 or even 10 years ago. One of those things is having cars drive themselves. If I were old enough to drive a car, I would...

Read

03 Aug 2019 · 1 min read

Lesson 2 part 3 - Common Issues with Neural Networks

This post outlines 4 common issues that cause your neural network to perform poorly along with symptoms and explanations for the behaviour.

Read

03 Aug 2019 · 8 mins read

Lesson 2 part 2 - Gradient Descent from Scratch in Pytorch

Understanding gradient descent is a critical part of learning machine learning. Although gradient descent is not the preferred method for optimization, it is similar to the more advanced methods so it’s still valuable to learn how it works.
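The idea can be sketched framework-free in NumPy (the post itself works in PyTorch; this is just the same loop with hand-written gradients):

```python
import numpy as np

# Fit y = w*x + b to data generated from y = 3x + 1 by gradient descent on the MSE loss.
x = np.linspace(0, 1, 50)
y = 3 * x + 1

w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    pred = w * x + b
    err = pred - y
    # Gradients of the mean squared error with respect to w and b
    dw = 2 * np.mean(err * x)
    db = 2 * np.mean(err)
    # Step against the gradient
    w -= lr * dw
    b -= lr * db
```

After enough steps `w` and `b` recover the true slope and intercept; frameworks like PyTorch replace the two hand-derived gradient lines with autograd.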

Read

03 Aug 2019 · 10 mins read

Lesson 2 Part 1 - ML in Production

After my blog post on the first lesson of fast.ai yesterday (which got recognised by the founder of fast.ai!), today it is time for the second lesson. Because this lesson consisted of three distinct parts, I decided to write three separate blog posts. The first one, the one you’re reading now...

Read

02 Aug 2019 · 5 mins read

Lesson 1 - Training a fruit classifier

After finishing CS229 Machine Learning by Stanford yesterday I decided to continue this fascinating journey with the fast.ai course. And wow! The first impressions are extremely good.

Read

28 Jul 2019 · 7 mins read

Introduction to natural language processing in Swift

As computers get smarter, the communication between machines and humans becomes more of a bottleneck. While humans are socially smarter, computers have surpassed us in areas like math and science. Perhaps the most important side of this bottleneck is communicating emotions. Although emotions are a fundamental part...

Read

23 Jul 2019 · 9 mins read

Compressing images using Python

Compressing images is a neat way to shrink the size of an image while maintaining the resolution. In this tutorial we’re building an image compressor using Python, Numpy and Pillow. We’ll be using machine learning, the unsupervised K-means algorithm to be precise.
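The compression idea can be sketched as follows; the pixel array here is a tiny stand-in for an image loaded with Pillow, and the simple spread-out initialisation is an illustrative choice, not necessarily the post's:

```python
import numpy as np

def kmeans(pixels, k, iters=20):
    """Cluster pixel colours into k groups; the image can then be stored as
    a k-colour palette plus one small index per pixel."""
    idx = np.linspace(0, len(pixels) - 1, k).astype(int)   # deterministic spread-out init
    centroids = pixels[idx].copy()
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        # Assign every pixel to its nearest centroid
        dists = np.linalg.norm(pixels[:, None] - centroids[None, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean colour of its assigned pixels
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = pixels[labels == c].mean(axis=0)
    return centroids, labels

# Toy "image": 6 RGB pixels drawn from two obvious colour groups
pixels = np.array([[250, 0, 0], [255, 5, 5], [245, 2, 0],
                   [0, 0, 250], [5, 5, 255], [2, 0, 245]], dtype=float)

palette, labels = kmeans(pixels, k=2)
compressed = palette[labels]   # reconstruct every pixel from the 2-colour palette
```

Storage drops because each pixel needs only a palette index (a few bits for small `k`) instead of three full colour channels.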

Read

10 Jul 2019 · 6 mins read

Training Drawings.mlmodel

For my WWDC Scholarship submission I used a custom machine learning model that I trained using CreateML. In this blog post I will explain how I went from binary data to a state-of-the-art machine learning model.

Read

10 Jul 2019 · 13 mins read

Creating PictionARy

On April 16th, 2019 an email landed in my inbox. The subject line read: “You’ve been awarded a WWDC19 Scholarship.” What?! Because this was my first time taking part in a programming competition, I didn’t expect to win at all.

Read