Part 1 - Conclusion

August 07, 2019 · 3 mins read

For the past week I did one lesson a day, including a machine learning project for each lesson. I kept track of the things I learned on my blog. Here’s a summary of what I’ve learned so far.

1. Fruit Classifier

Coming from Stanford’s CS229 Machine Learning course, this course started out differently than I expected. Where CS229 was very math heavy (which I really liked), this one focused on programming, with very little math but a whole lot of code. I’m still not sure which approach I liked better.

The first task was to build an image classifier. In CS229 this would be a challenging exercise, but the fastai library (which amazes me every single day) made it super easy.

2. ML in Production

Although this lesson couldn’t be more different from CS229, it was super exciting. The skills we learned are very practical.

3. Self Driving Cars

I hadn’t expected to be able to do this project when I started the lesson. This project made me realize just how much information we learn in each lesson. Learning about U-Nets, image segmentation and more transfer learning was fascinating.

4. Autocomplete While Typing

This post answered a question I’ve had for a long time: how do keyboards predict the next word you’re about to type? This was my second project on NLP (after the first one), so it was fun to compare the APIs and see how they relate and where the differences are.
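The core idea behind next-word prediction can be sketched with a tiny n-gram model: count which words tend to follow which, then suggest the most frequent follower. This is a minimal illustration of the concept, not the course's actual model (which used a neural language model); the function names and corpus here are made up for the example.

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count how often each word follows another."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Suggest the most frequent follower of `word`, if any."""
    counts = model.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often
```

A neural language model replaces the raw counts with learned representations, which lets it generalize to word sequences it has never seen, but the prediction interface is essentially the same.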

5. Neural Networks from Scratch

This project was interesting because I had already implemented a neural network using only numpy. This sat right in between the high-level API of fastai and the low-level projects in CS229.

I also learned about activation functions other than the sigmoid. Seeing how some of them (like ReLUs) relate to nature was interesting and really reflects the origin of neural networks.
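The two activation functions mentioned above are simple enough to write down directly in numpy; this is a generic sketch, not code from the course:

```python
import numpy as np

def sigmoid(x):
    # Squashes any input into (0, 1); saturates for large |x|
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # Passes positive inputs through unchanged, zeroes out negatives --
    # loosely like a neuron that only fires above a threshold
    return np.maximum(0.0, x)

x = np.array([-2.0, 0.0, 3.0])
print(sigmoid(x))
print(relu(x))  # [0. 0. 3.]
```

The piecewise-linear shape of the ReLU also makes its gradient trivially cheap to compute, which is part of why it displaced the sigmoid in deep networks.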

6. Convolutions and CNNs

Wow! It was fascinating to learn about these image editing techniques and how they help neural networks. It made me question whether humans handle these tasks in a similar manner, and if not, how we do it.

Computer vision was easily one of the most fascinating things in this course.
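The link between image editing and CNNs can be shown with a hand-written 2-D convolution: the same sliding-window operation behind sharpen and edge-detect filters is what a CNN learns kernels for. This is a minimal sketch (a naive loop, no padding or stride options) with a made-up test image, not the course's implementation:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (strictly cross-correlation, as in most DL libraries)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Multiply the kernel against each window of the image and sum
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel, like the hand-crafted filters image editors use
edge = np.array([[1.0, 0.0, -1.0],
                 [1.0, 0.0, -1.0],
                 [1.0, 0.0, -1.0]])

image = np.zeros((5, 5))
image[:, 2:] = 1.0  # left half dark, right half bright
print(conv2d(image, edge))  # large magnitudes only at the dark-to-bright edge
```

A CNN simply treats the kernel values as trainable weights instead of hand-picking them, so the network discovers which filters are useful for the task.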

7. GANs for Watermark Removal

I always believed GANs were very hard and not something I could build, let alone learn in one day. That’s why this lesson was so exciting: I actually managed to create a working GAN.

What I missed

There’s one thing I missed in part 1: how do things work internally? I think this is something part 2 will cover, but right now some things feel like magic. As a person who wants to understand how things work in detail, I found this a little frustrating. However, given the amount of information already in the course, I can fully understand that some things aren’t covered.

I’m super excited to continue with part 2 tomorrow and hopefully get some more insight into the internal workings. I’ll continue writing the posts as well - it’s great fun. Make sure to follow me on Twitter to get notified when I release new posts.

A huge thanks to Sam Miserendino for proofreading this post!