October 31st, 2013

0xdata and Yelp – Machine Learning for Relevance and Serendipity/Distributed Gradient Boosting


Join us and Yelp for a chat on Machine Learning, and make sure not to miss Sri's lightning talk on Distributed Gradient Boosting!

Main Talk: Machine Learning for Relevance and Serendipity
Speaker: Aria Haghighi (Prismatic)
Abstract: 
Careful use of well-designed machine learning systems can transform products by providing highly personalized user experiences. Unlike hand-tuned or heuristic-based personalization systems, machine learning allows for the use of millions of different potential indicators when making a decision, and is robust to many types of noise. In this talk, I will discuss our deeply integrated use of machine learning and natural language processing for content discovery at Prismatic. Our real-time personalization engine is designed to give our users not just the content they expect, but also a healthy dose of targeted serendipity, all based on relevance models learned from users’ interactions with the site. We use sophisticated machine learning techniques to classify stories by topic, determine story similarity, make topic suggestions, rate the value of different social connections, and ultimately determine the relevance of a particular story for a particular user. I will go into detail describing our personalized relevance model, starting with a description of our problem formulation, then discussing feature design, model design, evaluation metrics, and our experimental setup, which allows quick offline prototyping without forcing users into the role of guinea pig. Our model’s combination of social cues, topical classification, publisher information, and analysis of the user’s prior interactions produces highly relevant and often delightfully serendipitous content for our users to consume.
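
For illustration only, here is a minimal sketch of the kind of relevance model the abstract describes: a logistic scorer that combines several feature groups (topical match, social signals, publisher affinity, prior interactions) into a single probability of engagement. The feature names and weights below are hypothetical and are not Prismatic's actual model.

```python
import math

def relevance_score(features, weights):
    """Return P(user engages with story) under a simple logistic model."""
    z = sum(weights.get(name, 0.0) * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical feature values for one (user, story) pair.
example_features = {
    "topic_match": 0.8,         # topical classification vs. the user's interests
    "social_signal": 0.6,       # value of the social connections sharing the story
    "publisher_affinity": 0.4,  # user's history with this publisher
    "prior_interactions": 0.7,  # engagement with similar past stories
}
# Hypothetical learned weights; in practice these would come from training
# on users' interactions with the site.
example_weights = {
    "topic_match": 1.5,
    "social_signal": 1.0,
    "publisher_affinity": 0.5,
    "prior_interactions": 1.2,
}
print(relevance_score(example_features, example_weights))
```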
Lightning Talk: Distributed Gradient Boosting
Speaker: SriSatish Ambati (0xdata)
Abstract: 
Boosting is a simple yet powerful ensemble learning technique. We present a distributed gradient boosting algorithm accessible from R, along with a simple API for rolling your own distributed machine learning algorithms for big data.
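
As background for the talk, here is a minimal single-machine sketch of gradient boosting with squared-error loss, using shallow regression trees as weak learners. It illustrates the boosting idea only; it is not the 0xdata distributed implementation or its R API, and the function names are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_gbm(X, y, n_rounds=100, learning_rate=0.1, max_depth=3):
    """Stage-wise additive model: each round fits a shallow tree to the
    residuals, i.e. the negative gradient of squared-error loss."""
    f0 = float(np.mean(y))                 # initial constant prediction
    pred = np.full(len(y), f0)
    trees = []
    for _ in range(n_rounds):
        residuals = np.asarray(y) - pred   # what the current ensemble misses
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residuals)
        pred += learning_rate * tree.predict(X)
        trees.append(tree)
    return {"f0": f0, "trees": trees, "lr": learning_rate}

def predict_gbm(model, X):
    """Sum the initial prediction and the shrunken contribution of every tree."""
    pred = np.full(np.asarray(X).shape[0], model["f0"])
    for tree in model["trees"]:
        pred += model["lr"] * tree.predict(X)
    return pred

# Example usage on toy data:
# X = np.random.rand(1000, 5); y = 3 * X[:, 0] + np.random.randn(1000)
# model = fit_gbm(X, y); y_hat = predict_gbm(model, X)
```

Each round fits a new tree to the current residuals, so the ensemble is built stage-wise; a distributed variant would parallelize the tree-construction step over partitions of the data.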
Tentative Schedule:
6:30-7:00 – socializing
7:00-7:20 – lightning talk
7:20-8:30 – main presentation
8:30-9:00 – socializing
 
Learn more and sign up at http://www.meetup.com/SF-Bayarea-Machine-Learning/events/146775042/?joinFrom=event
