Recently we’ve been browsing papers about out-of-core linear learning on a single machine. While for us this task is basically synonymous with Vowpal Wabbit, it turns out that there are other options.

On September 10th Michael Jordan, a renowned statistician from Berkeley, did an *Ask Me Anything* on Reddit. Here are his thoughts on deep learning.

The Avito competition was about predicting illicit content in classified ads. It amounted to classifying text in Russian. We offer a review of what worked for the top-ranked participants and some opinions on how Kaggle competitions differ from industry reality.

Sometimes people ask what math they need for machine learning. The answer depends on what you want to do, but in short our opinion is that it is good to have some familiarity with linear algebra and multivariate differentiation.

Calibration applies when a classifier outputs probabilities. Some classifiers have typical quirks: boosted trees and SVMs, for example, are said to predict probabilities conservatively, meaning closer to mid-range than to the extremes. If your metric cares about exact probabilities, as logarithmic loss does, you can calibrate the classifier, that is, post-process the predictions to get better estimates.
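As a minimal sketch of the idea, here is calibration with scikit-learn's `CalibratedClassifierCV`: a linear SVM, which outputs margins rather than probabilities, is wrapped so its decision values are mapped to calibrated probability estimates. The synthetic dataset is ours, just for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC
from sklearn.calibration import CalibratedClassifierCV
from sklearn.model_selection import train_test_split

# A toy dataset standing in for real labeled examples.
X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# LinearSVC has no predict_proba; isotonic calibration via cross-validation
# turns its decision values into probability estimates.
clf = CalibratedClassifierCV(LinearSVC(), method='isotonic', cv=3)
clf.fit(X_train, y_train)
proba = clf.predict_proba(X_test)
```

Isotonic regression is the non-parametric option here; with few examples, the parametric `method='sigmoid'` (Platt scaling) is less prone to overfitting.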

This article was inspired by Andrew Tulloch’s post on Speeding up isotonic regression in scikit-learn by 5,000x.

The Criteo competition is about ad click prediction. The unpacked training set is 11 GB and has 45 million examples. While we’re not sure if it qualifies as the mythical *big data*, it’s quite big by Kaggle standards.

Unless you have an adequate machine, it will be difficult to process it in memory. Our solution is to use online or mini-batch learning, which deals with one example or a small batch of examples at a time. Vowpal Wabbit is especially well suited for the contest for a number of reasons.
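To illustrate the mini-batch idea outside of Vowpal Wabbit, here is a sketch using scikit-learn's `SGDClassifier` with `partial_fit`: the model is updated one chunk at a time, so the full dataset never has to fit in memory. The synthetic chunks below stand in for reading the Criteo file in pieces.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(0)
clf = SGDClassifier(random_state=0)
classes = np.array([0, 1])  # partial_fit needs all classes up front

# Pretend each chunk was just read from disk; only one chunk
# is ever held in memory at a time.
for _ in range(10):
    X_chunk = rng.randn(1000, 20)
    y_chunk = (X_chunk[:, 0] + 0.1 * rng.randn(1000) > 0).astype(int)
    clf.partial_fit(X_chunk, y_chunk, classes=classes)
```

With a real file you would replace the synthetic chunks with, say, `pandas.read_csv(..., chunksize=...)`, hashing or encoding the features per chunk.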

Very often the performance of your model depends on its parameter settings. It makes sense to search for optimal values automatically, especially when there are more than one or two hyperparameters, as is the case with extreme learning machines. Tuning an ELM will serve as an example of using *hyperopt*, a convenient Python package by James Bergstra.

What do you get when you take backpropagation out of a multilayer perceptron? You get an extreme learning machine, a non-linear model with the speed of a linear one.
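A minimal sketch of the mechanism, in NumPy: the hidden-layer weights are random and frozen, so the only "learning" is a linear least-squares fit of the output weights. The toy regression target is ours.

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.uniform(-1, 1, size=(500, 1))
y = np.sin(3 * X[:, 0])  # a toy non-linear target

n_hidden = 100
W = rng.randn(X.shape[1], n_hidden)  # random input weights, never trained
b = rng.randn(n_hidden)
H = np.tanh(X @ W + b)               # hidden activations

# Fit only the output layer, by ordinary least squares.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)
pred = H @ beta
mse = np.mean((pred - y) ** 2)
```

Because the expensive part is a single linear solve rather than iterative backpropagation, training is about as fast as fitting a linear model, yet the random hidden layer lets it capture the non-linearity.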

Here’s an excerpt from Love and Math, a book by Edward Frenkel. The author writes about mathematics and his career. One of the stories is about how, during his studies in the 80s, he built a decision tree to help with kidney transplants. There was no machine to learn from data, so humans had to do the work.

On May 15th Yann LeCun answered “ask me anything” questions on Reddit. We hand-picked some of his thoughts and grouped them by topic for your enjoyment.