
Cache-it – Episode #1 – Applying lessons from caching to ML feature stores with Yao Yue

Join host Khawaja Shams for the first episode of the Cache-it podcast!

Khawaja Shams
Author


Summary 

Welcome to Cache-it, a podcast delivering valuable insights you can cache in your memory. Host Khawaja Shams interviews Yao Yue, CEO and co-founder of IOP Systems, about applying lessons learned from caching to build machine learning feature stores. Yao previously worked at Twitter, where she focused on distributed caching and performance engineering. She shares her experience handling high-profile incidents at Twitter, where caching played a crucial role in optimizing the platform's performance.

Yao explains the concept of an ML feature store: a place to discover, store, and serve features for machine learning training and inference. The conversation also dives into the challenges of performance engineering at scale and the importance of controlling tail latencies.
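To make the caching connection concrete, here is a minimal sketch of a feature store serving features from an in-memory cache in front of a slower backing table. All names (`FeatureStore`, `get_features`) are hypothetical illustrations, not APIs from Pelikan or any real feature-store product.

```python
from typing import Dict, List

class FeatureStore:
    """Illustrative feature store: a cache in front of a backing table."""

    def __init__(self, backing_store: Dict[str, Dict[str, float]]):
        self._backing = backing_store  # e.g. features materialized offline
        self._cache: Dict[str, Dict[str, float]] = {}  # hot-path cache

    def get_features(self, entity_id: str, names: List[str]) -> Dict[str, float]:
        # Serve from the cache when possible; on a miss, fall back to the
        # backing store and populate the cache for subsequent lookups.
        row = self._cache.get(entity_id)
        if row is None:
            row = self._backing.get(entity_id, {})
            self._cache[entity_id] = row
        return {n: row[n] for n in names if n in row}

store = FeatureStore({"user:42": {"clicks_7d": 12.0, "age": 31.0}})
print(store.get_features("user:42", ["clicks_7d"]))  # {'clicks_7d': 12.0}
```

The cache-on-read pattern here is exactly where caching lessons apply: a cold lookup pays the backing-store latency, while repeat lookups stay in memory, which is what keeps inference-time tail latencies under control.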

Make sure to subscribe to the Cache-it channel on YouTube so you never miss an episode.

About Yao Yue

Yao Yue is the creator and a maintainer of Pelikan, a unified cache framework for building servers, proxies, and clients used in distributed caches. The project is the culmination of the caching expertise she built while working on, and later leading, the Cache Team at Twitter for the better part of the 2010s. Besides Pelikan, Yao has coauthored several academic papers on caching published at OSDI, NSDI, and FAST, and has given many talks. In recent years, she has ventured into distributed systems performance and optimization, co-founding IOP Systems, a company aimed at improving software efficiency and reducing cost, with her teammates from Twitter.
