FastML

Machine learning made easy

Tuning hyperparams fast with Hyperband

Hyperband is a method for tuning iterative algorithms. It uses random sampling and attempts to gain an edge by spending the available optimization time on the most promising configurations. We explain a few things that were not clear to us right away, and try the algorithm in practice.
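For a sense of how it works, here is a minimal sketch of the algorithm from the Hyperband paper. The search space and the train_and_score function below are made up; in practice you would plug in your own training loop and return a validation score.

```python
import math
import random

def get_random_config():
    # Hypothetical search space: a single learning rate.
    return {"lr": 10 ** random.uniform(-4, -1)}

def train_and_score(config, budget):
    # Stand-in for "train this config for `budget` epochs and
    # return a validation score" -- replace with real training.
    return random.random() * math.log(budget + 1)

def hyperband(max_budget=81, eta=3):
    s_max = int(math.log(max_budget, eta))
    best_score, best_config = -float("inf"), None
    # Each bracket s trades off many configs on small budgets
    # against few configs on large budgets.
    for s in range(s_max, -1, -1):
        n = int(math.ceil((s_max + 1) / (s + 1) * eta ** s))
        configs = [get_random_config() for _ in range(n)]
        # Successive halving within the bracket.
        for i in range(s + 1):
            budget = max_budget * eta ** (i - s)
            scores = [train_and_score(c, budget) for c in configs]
            ranked = sorted(zip(scores, configs),
                            key=lambda p: p[0], reverse=True)
            if ranked[0][0] > best_score:
                best_score, best_config = ranked[0]
            # Keep the top 1/eta, give the survivors more budget.
            k = max(1, len(configs) // eta)
            configs = [c for _, c in ranked[:k]]
    return best_config, best_score

print(hyperband())
```

The outer loop runs several such brackets, so configurations that only shine after many iterations still get a chance.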

Data in, predictions out

With many implementations of machine learning algorithms it is entirely unclear how to train them on one's own data, and then how to get predictions out. This is an area where AI researchers have a lot of catching up to do with lessons learned long ago in computer science.
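For contrast, here is what a sane interface looks like. The snippet uses scikit-learn's fit/predict convention on a made-up dataset; the point is that data goes in and predictions come out, with no ceremony.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# A made-up dataset standing in for one's own data.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 5))
y_train = rng.integers(0, 2, size=100)
X_test = rng.normal(size=(10, 5))

model = RandomForestClassifier(n_estimators=100)
model.fit(X_train, y_train)           # data in
predictions = model.predict(X_test)   # predictions out
print(predictions)
```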

On chatbots

Chatbots seem to be all the rage these days, so why don't we take a look at this fascinating topic? A warning, though: this article contains strong opinions.

Piping in R and in Pandas

In the R community, there's this one guy, Hadley Wickham, who pretty much single-handedly made R great again. One of the many, many things he came up with - so many that people call it the hadleyverse - is the dplyr package, which aims to make data analysis easy and fast. It works by letting the user take a data frame and apply a pipeline of operations to it, resulting in the desired outcome (an example in just a minute). The approach turned out to be so successful that people have ported key pieces of it to Pandas.
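The promised example: Pandas method chaining as the analogue of a dplyr pipeline. The data frame below is made up; the chained calls roughly mirror dplyr's filter, mutate and summarise.

```python
import pandas as pd

df = pd.DataFrame({
    "species": ["setosa", "setosa", "virginica", "virginica"],
    "sepal_length": [5.1, 4.9, 6.3, 5.8],
})

# A pipeline: each step takes a data frame and returns a new one.
result = (
    df
    .query("sepal_length > 5.0")                      # dplyr: filter()
    .assign(length_mm=lambda d: d.sepal_length * 10)  # dplyr: mutate()
    .groupby("species", as_index=False)
    .agg(mean_mm=("length_mm", "mean"))               # dplyr: summarise()
)
print(result)
```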

Deep learning architecture diagrams

Just as a wild stream after a wet season in the African savanna diverges into many smaller streams, forming lakes and puddles, so deep learning has diverged into a myriad of specialized architectures. Each architecture has a diagram. Here are some of them.

^one weird trick for training char-^r^n^ns

Character-level recurrent neural networks are attractive for modelling text specifically because of their low input and output dimensionality. You have only so many chars to represent - lowercase letters, uppercase letters, digits and various auxiliary characters - so you end up with 50-100 dimensions (each char is represented as a one-hot vector).

Still, it’s a drag to model upper and lower case separately. It adds to dimensionality, and perhaps more importantly, a network gets no clue that ‘a’ and ‘A’ actually represent pretty much the same thing.
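To make the dimensionality point concrete, here is a minimal sketch of one-hot encoding characters; the text is made up.

```python
import numpy as np

text = "Ask a question"

# One dimension per distinct character in the text.
vocab = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(vocab)}

def one_hot(char):
    # A vector of zeros with a single 1 at the char's index.
    v = np.zeros(len(vocab))
    v[char_to_idx[char]] = 1.0
    return v

encoded = np.stack([one_hot(c) for c in text])
print(encoded.shape)  # (14, 12): 14 chars, 12 distinct symbols

# Note that 'A' and 'a' land on unrelated dimensions -- the
# network sees no connection between them.
```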