Electrical Engineering and Computer Science


Faculty Candidate Seminar

Relaxing Bottlenecks for Fast Machine Learning

Christopher De Sa


PhD Candidate
Stanford
Wednesday, February 08, 2017
4:00pm - 5:00pm
1690 Beyster


About the Event

As machine learning problems become larger and more complicated, there is an increasing need for efficient solutions. Performance is critical not only quantitatively, by saving time, money, and energy, but also qualitatively, by enabling new types of methods (such as interactive human-in-the-loop systems) that were not previously possible. To address the performance bottlenecks that exist in ML pipelines, I have used a general recipe called the relaxed-consistency approach. The approach starts by identifying an underutilized resource in the system, then alters the algorithm's semantics to best exploit that resource. It proceeds by identifying structural conditions that let us prove guarantees that the altered algorithm will still work. Finally, it applies this structural knowledge to improve the performance and accuracy of whole systems.

In this talk, I will describe the relaxed-consistency approach and demonstrate how it can be applied to a specific bottleneck (parallel overheads), problem (inference), and algorithm (asynchronous Gibbs sampling). I will demonstrate the effectiveness of this approach on a range of problems, including CNNs, and finish with a discussion of the future of relaxed-consistency methods for fast machine learning.
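To give a flavor of the algorithm the talk centers on, here is a minimal sketch (not the speaker's implementation) of asynchronous Gibbs sampling on a toy Ising chain. The model size, coupling strength, and thread counts are illustrative assumptions. Multiple worker threads resample individual spins conditioned on the current shared state without any locking, so reads of neighboring spins may be stale; this relaxed consistency trades exactness for parallelism.

```python
import math
import random
import threading

N = 4           # number of spins in the chain (illustrative toy size)
COUPLING = 1.0  # positive coupling encourages neighboring spins to agree
state = [1] * N  # shared state, read and written by all workers without locks
samples = []     # per-step record of whether spins 0 and 1 agree

def resample(i):
    # Gibbs update for spin i: sample from its conditional distribution
    # given its chain neighbors (which may be stale under asynchrony).
    field = sum(COUPLING * state[j] for j in (i - 1, i + 1) if 0 <= j < N)
    p_up = 1.0 / (1.0 + math.exp(-2.0 * field))  # P(spin_i = +1 | neighbors)
    state[i] = 1 if random.random() < p_up else -1

def worker(steps):
    for _ in range(steps):
        resample(random.randrange(N))   # pick a spin uniformly and update it
        samples.append(state[0] == state[1])

# Run several lock-free workers concurrently over the same shared state.
threads = [threading.Thread(target=worker, args=(2000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

agreement = sum(samples) / len(samples)
print(f"neighbor agreement rate: {agreement:.2f}")
```

With a positive coupling, adjacent spins should agree most of the time; the point of the relaxed-consistency analysis is to show that such guarantees survive even when the conditional distributions are computed from stale reads.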

Biography

Christopher De Sa is a PhD candidate in Electrical Engineering at Stanford University advised by Christopher Ré and Kunle Olukotun. His research interests include algorithmic, software, and hardware techniques for high-performance machine learning, with a focus on relaxed-consistency variants of stochastic algorithms such as asynchronous stochastic gradient descent (SGD). He is also interested in using these techniques to construct data analytics and machine learning frameworks that are efficient, parallel, and distributed. Chris's work on studying the behavior of asynchronous Gibbs sampling received the Best Paper Award at ICML 2016.

Additional Information

Sponsor(s): CSE

Open to: Public