Yu-Hsiang Lin

Email: yuhsianl@alumni.cmu.edu
LinkedIn: https://www.linkedin.com/in/yhl2997925/
GitHub: https://github.com/yuhsianglin

I am an applied scientist at Amazon, building generative neural language models that help customers discover relevant new products. I also built the data pipeline that generates worldwide customer search data on new products daily, and developed the metrics used to measure customer engagement with new products in online A/B testing.

I was at A9 as a Software Development Intern in Summer 2018, developing a two-phase ranking model for Amazon Business. Online and offline evaluation showed that the new model improved the ranking quality of product search and reduced tail latency. I also added customer-engagement weighting to the machine learning infrastructure, supporting both model training and evaluation.

Before joining Amazon, I was a Master of Computational Data Science student in the School of Computer Science at Carnegie Mellon University, with an emphasis on natural language processing, machine learning, and distributed systems.

I worked on topics in natural language processing, including cross-lingual transfer for low-resource languages, advised by Graham Neubig (ACL 2019). I also worked on search algorithms for structured prediction: I formulated adaptive beam search in sequence-to-sequence models as a reinforcement learning (RL) problem to control the search space, implemented the RL and adaptive beam search environment, and trained the RL agent with the actor-critic method.
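The idea behind adaptive beam search can be sketched as follows: at each decoding step, an agent chooses the beam width (the action) before candidates are expanded and pruned. This is a minimal illustration, not the published formulation; `vocab_scores` and `policy` are hypothetical stand-ins for the model's token scorer and the trained actor.

```python
import heapq

def adaptive_beam_search(vocab_scores, max_len, policy):
    """Beam search whose width is chosen per step by a policy.

    vocab_scores(seq) returns {token: log_prob} for the next step;
    policy(beam) returns the beam width to use (the RL agent's action).
    """
    beam = [(0.0, ("<s>",))]  # list of (cumulative log-prob, sequence)
    for _ in range(max_len):
        width = policy(beam)  # adaptive action; a trained actor would decide here
        candidates = []
        for score, seq in beam:
            for tok, logp in vocab_scores(seq).items():
                candidates.append((score + logp, seq + (tok,)))
        # keep only the top-`width` hypotheses
        beam = heapq.nlargest(width, candidates, key=lambda c: c[0])
    return beam
```

A policy might, for example, widen the beam when the top hypotheses have nearly equal scores (high uncertainty) and narrow it otherwise, trading search space against decoding cost.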

I am familiar with MapReduce frameworks, including Apache Hadoop and Spark. I have experience optimizing Spark RDD operations and implementing machine learning algorithms (e.g., belief propagation and linear regression) efficiently in the MapReduce framework. I am also familiar with cloud computing and AWS.
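As one example of how such an algorithm fits the MapReduce pattern: simple linear regression reduces to summing per-partition sufficient statistics. The sketch below uses plain Python in place of a cluster; in Spark the same shape would be roughly `rdd.mapPartitions(...)` followed by `reduce(...)` (an assumption about the mapping, not code from the original work).

```python
from functools import reduce

def map_stats(partition):
    """Map: a partition of (x, y) pairs emits its sufficient statistics
    (n, sum_x, sum_y, sum_xx, sum_xy) for simple linear regression."""
    n = sx = sy = sxx = sxy = 0.0
    for x, y in partition:
        n += 1; sx += x; sy += y; sxx += x * x; sxy += x * y
    return (n, sx, sy, sxx, sxy)

def reduce_stats(a, b):
    """Reduce: sufficient statistics add component-wise across partitions."""
    return tuple(u + v for u, v in zip(a, b))

def fit(partitions):
    """Solve the normal equations from the globally reduced statistics."""
    n, sx, sy, sxx, sxy = reduce(reduce_stats, map(map_stats, partitions))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept
```

Only a handful of numbers cross partition boundaries, which is what makes the computation efficient in a distributed setting.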

Before joining CMU, I conducted research in the Machine Learning Group at National Taiwan University, led by Chih-Jen Lin. My research focused on second-order optimization methods for deep neural networks. I designed and implemented an optimization algorithm that greatly reduces memory usage by exploiting the structure of the model, so that the method can be applied to large-scale datasets. The result was published in Neural Computation.

During my work at SLAC National Accelerator Laboratory, advised by Pisin Chen, I analyzed astrophysical data from the European Space Agency and built a model to categorize the behavior shown in the data. The result was published in Physical Review D. I also developed a numerical code for solving a nonlinear system in astrophysics, and helped improve the compatibility of the open-source code CAMB with non-parametrized data.