FastML

Machine learning made easy

Predicting sales: Pandas vs SQL

Pandas is a Python library for data manipulation. We show that some rather simple analytics allow us to attain a reasonable score in an interesting Kaggle competition. Along the way, we look at analogies between Pandas and SQL, the standard query language for relational databases.
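To make the analogy concrete, here is a minimal sketch (ours, not taken from the post): a toy sales table aggregated with a SQL-style GROUP BY and its pandas equivalent. The table and column names are made up for illustration.

```python
import pandas as pd

# A toy sales table; the columns are made up, just for illustration.
sales = pd.DataFrame({
    'store': [1, 1, 2, 2, 2],
    'item':  [10, 20, 10, 20, 30],
    'units': [5, 3, 7, 1, 4],
})

# SQL: SELECT store, SUM(units) AS total_units FROM sales GROUP BY store;
totals = sales.groupby('store', as_index=False)['units'].sum()
totals = totals.rename(columns={'units': 'total_units'})
print(totals)
```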

An excerpt from The Master Algorithm

Pedro Domingos’ new book, The Master Algorithm, is a readable overview of machine learning. The author discerns and describes five main schools of thought in the field: symbolists, connectionists, evolutionaries, Bayesians and analogizers. Here’s a piece about how Bayesians fit their models, that is, infer parameter values. Even though the context is Bayes nets, the described method is applicable to almost any model.
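As a one-line refresher (ours, not a quote from the book): the Bayesian way of fitting a model is to compute a posterior over its parameters, given the data.

```latex
% Posterior over parameters \theta given data D:
% likelihood times prior, normalized by the evidence.
p(\theta \mid D) = \frac{p(D \mid \theta)\, p(\theta)}{p(D)}
```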

Evaluating recommender systems

If you dig a little, there’s no shortage of recommendation methods. The question is which one to choose. One of the primary decision factors is the quality of recommendations. You estimate it through validation, and validation for recommender systems can be tricky. There are a few things to consider, including the formulation of the task, the form of available feedback, and the metric to optimize for. We address these issues and present an example.
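To make one of these choices concrete, here is a sketch of precision-at-k, a common metric for top-N recommendation with implicit feedback. The function and data layout are ours, for illustration only.

```python
def precision_at_k(recommended, relevant, k=10):
    """Fraction of the top-k recommended items the user actually interacted with."""
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / k

# Toy usage: items 3 and 7 were relevant; two of the top five recommendations hit.
print(precision_at_k([3, 1, 7, 8, 2], {3, 7, 9}, k=5))  # 0.4
```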

Deep nets generating stuff

The last few weeks have been a time of deep nets generating stuff. By deep nets we mean recurrent and convolutional neural networks; the stuff is text, music, images and even video.

Classifying text with bag-of-words: a tutorial

There is a Kaggle training competition where you attempt to classify text, specifically movie reviews. No other data - this is a perfect opportunity to do some experiments with text classification.

Kaggle has a tutorial for this contest that takes you through the popular bag-of-words approach and a take on word2vec. The tutorial hardly represents best practices, most likely on purpose, so that competitors can easily improve on it. And that’s what we’ll do.
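For a rough idea of what a bag-of-words baseline looks like, here’s a sketch with toy data (not the tutorial’s actual code): scikit-learn’s CountVectorizer feeding a logistic regression.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for the competition's movie reviews and sentiment labels;
# in the contest you'd load these from the provided files instead.
reviews = [
    "a wonderful, moving film with great acting",
    "dull, overlong and a complete waste of time",
    "great story, I loved it",
    "boring plot and terrible dialogue",
]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# Bag of words: each review becomes a sparse vector of word counts.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(reviews)

# A linear classifier on top of the counts.
clf = LogisticRegression()
clf.fit(X, labels)

# Score an unseen review.
new = vectorizer.transform(["a great film, wonderful acting"])
print(clf.predict_proba(new)[:, 1])  # probability of positive sentiment
```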

The emperor’s new clothes: distributed machine learning

We can think of two reasons for using distributed machine learning: because you have to (the data won’t fit on one machine), or because you want to (hoping it will be faster). Only the first reason is good.

Distributed computation is generally hard, because it adds a layer of complexity and communication overhead. The ideal case is linear scaling with the number of nodes; it rarely happens. Emerging evidence shows that very often one big machine, or even a laptop, outperforms a cluster.