
The MillionSongs Data Part 1: Bells and Whistles of GLM in H2O


By H2O.ai Team | July 09, 2013


Using the Million Songs Data Set, I want to go from beginning to end through H2O's GLM tool. Note that the original data are large, so downloading and fiddling with the full data set from your desktop can be quite painful; that said, you can find it here. It's a good opportunity to take a really detailed look at H2O so that you can get the most bump from the trunk (so to speak).

To start, let's assume you've decided that GLM is the method for you. You've launched H2O, parsed your data, and chosen GLM from the drop-down menu under "Model".
Destination Key –  this is an automatically generated key for your model; it allows you to recall this specific model and all of its details later in your analysis. While H2O will generate a key for you, you can also specify a model name so that you can later identify which of many models you are interested in revisiting.
Key –  this is the .hex key generated when you parsed your data into H2O. If you didn't save it at the time, it's no biggie. The .hex key is named after your original data file, save for the change in extension. If you begin typing the name of your original file, you will be given the option to tab auto-complete. If you want to find the key yourself, go to the drop-down menu "Admin", select "Jobs", and under description find "Parse". The key for your data of interest is given in the "Destination key" field, and is a clickable link that allows you to inspect your data.
Y –  Your dependent variable.
X –  Once you identify your dependent variable (the value you would like to predict) in the Y field, the X field will auto-populate with all possible options (all of your other variables). Select the subset of variables you would like to use as predictors.
Family –  Under family you will see a drop-down menu with four choices. Each option differs in the assumptions you make about your dependent (Y) variable – the variable you would like to predict. They are explained in some detail below.
Link –  Each family is associated with a default link function, which defines the transformation relating the combination of the X variables chosen to the predicted value of Y.

Gaussian (default link: Identity) –  Your dependent variable (Y) is quantitative and continuous (or continuous predicted values can be meaningfully interpreted), and expected to be normally distributed. Examples: the average length of a song in seconds, or the average purchase price of a product.
Binomial (default link: Logit) –  Your dependent variable takes on two values, traditionally coded as 0 and 1, and follows a binomial distribution. Choose this if you have a categorical Y with two possible outcomes. Examples: a customer decides to purchase or not; a song is played or not played.
Poisson (default link: Log) –  Your dependent variable is a count – a quantitative, discrete value that expresses the number of times some event occurred. Examples: the number of customers visiting a website over time; the number of customers visiting a store over distance.
Gamma (default link: Inverse) –  Your dependent variable is a survival measure – that is, you have some measure of the duration of a process for which the outcome is variable. Examples: the length of time an individual remains a customer; the length of time before a particular product feature fails.
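To make the link column above concrete, here is a small sketch (plain Python, not H2O code) of the inverse of each family's default link function – that is, the mapping that takes the model's linear predictor back to a predicted mean of Y on its natural scale:

```python
import math

# Each function maps the linear predictor eta = b0 + b1*x1 + ... back to
# the predicted mean of Y for that family's default link.

def identity_link(eta):   # Gaussian: mean = eta, any real value
    return eta

def logit_link(eta):      # Binomial: mean = a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-eta))

def log_link(eta):        # Poisson: mean = exp(eta), always positive
    return math.exp(eta)

def inverse_link(eta):    # Gamma: mean = 1 / eta, positive for eta > 0
    return 1.0 / eta

eta = 2.0
print(identity_link(eta))          # 2.0
print(round(logit_link(eta), 3))   # 0.881
print(round(log_link(eta), 3))     # 7.389
print(inverse_link(eta))           # 0.5
```

Notice how the logit keeps predictions inside (0, 1) and the log keeps counts positive – this is why each family pairs with its particular default link.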


Lambda:  H2O provides a default value, but this can also be user defined. Lambda is a regularization parameter that controls the overall strength of the penalty and is designed to prevent overfitting. The best value of lambda is data dependent, and is often chosen by comparing cross-validated results across a range of candidate values.
Alpha:  A user-defined regularization tuning parameter that H2O sets to 0.5 by default, but which can take any value between 0 and 1, inclusive. It adds a penalty against the estimated fit of the model as coefficients grow, and controls the form of that penalty: an alpha of 1 is the lasso penalty, an alpha of 0 is the ridge penalty, and intermediate values blend the two.
Lambda and alpha are distinct in purpose: lambda determines how strongly the penalty is applied overall, and thus how much coefficients are shrunk to improve generalizability, while alpha determines what kind of penalty is applied.
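The interplay between the two parameters can be sketched in a few lines. This assumes the common glmnet-style elastic net parameterization (H2O's exact scaling constants may differ); it is illustrative, not H2O source:

```python
# Elastic net penalty added to the model's loss: lambda (lam) sets the
# overall strength, alpha mixes the lasso (L1) and ridge (L2) terms.

def elastic_net_penalty(betas, lam, alpha):
    l1 = sum(abs(b) for b in betas)        # lasso term: shrinks and zeroes
    l2 = sum(b * b for b in betas) / 2.0   # ridge term: shrinks smoothly
    return lam * (alpha * l1 + (1.0 - alpha) * l2)

betas = [0.5, -1.5, 2.0]                   # hypothetical coefficients
print(round(elastic_net_penalty(betas, 0.1, 1.0), 3))  # pure lasso: 0.4
print(round(elastic_net_penalty(betas, 0.1, 0.0), 3))  # pure ridge: 0.325
```

Doubling lam doubles the whole penalty regardless of alpha, which is the "how strongly" versus "what kind" distinction described above.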
N-Folds:  The number of cross-validation folds you would like H2O to generate. Choosing 10 means that your original data will be split into ten subsets; for each fold, a model is fit on the other nine subsets and evaluated on the held-out one. It's important to note that the smaller your original data are, the larger the variation you can expect to see in the parameter estimates across the cross-validation models; for sufficiently small data sets you may want to choose a different evaluative criterion.
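The fold assignment behind n-fold cross validation can be sketched as follows (plain Python, not H2O internals): every row lands in exactly one fold, and each fold takes a turn as the holdout set.

```python
import random

def kfold_indices(n_rows, n_folds, seed=42):
    # Shuffle the row indices, then deal them round-robin into n_folds
    # roughly equal, non-overlapping folds.
    idx = list(range(n_rows))
    random.Random(seed).shuffle(idx)
    return [idx[i::n_folds] for i in range(n_folds)]

folds = kfold_indices(1000, 10)
print([len(f) for f in folds])      # ten folds of 100 rows each
print(sum(len(f) for f in folds))   # 1000: every row used exactly once
```

With only a little data, each training split is small, which is why parameter estimates vary more from fold to fold on small data sets.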
Expert Settings:  For the moment I would like to leave the expert settings aside, except to note that this is where you choose to standardize your data. Where there are substantial differences in the scale of your input variables, standardizing can greatly improve the interpretability of your results.
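Standardization itself is a simple transformation. A minimal sketch (the expert setting does this for you; the song-length column here is hypothetical):

```python
import statistics

def standardize(column):
    # Rescale to mean 0 and standard deviation 1, so coefficients on
    # differently scaled predictors become directly comparable.
    mu = statistics.mean(column)
    sd = statistics.pstdev(column)  # population sd; sample sd is also common
    return [(x - mu) / sd for x in column]

song_lengths = [180.0, 240.0, 300.0, 360.0]   # seconds, hypothetical
z = standardize(song_lengths)
print([round(v, 3) for v in z])   # [-1.342, -0.447, 0.447, 1.342]
```

After standardizing, a one-unit change in any predictor means "one standard deviation", so the relative sizes of the fitted coefficients reflect relative influence rather than differences in measurement scale.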


H2O.ai Team

At H2O.ai, democratizing AI isn’t just an idea. It’s a movement. And that means that it requires action. We started out as a group of like-minded individuals in the open source community, collectively driven by the idea that there should be freedom around the creation and use of AI.

Today we have evolved into a global company built by people from a variety of different backgrounds and skill sets, all driven to be part of something greater than ourselves. Our partnerships now extend beyond the open-source community to include business customers, academia, and non-profit organizations.