Machine learning engineer and data scientist based in the New York metropolitan area, with extensive experience building natural language processing tools and working with financial market data. Former quantitative researcher and developer passionate about machine learning and AI, financial markets, and blockchain technologies.
Currently pursuing a Master's in Computer Science at Columbia University.
Contact: kevinwu103@gmail.com
Master of Science in Computer Science (Machine Learning specialization)
Coursework: Reinforcement Learning, Deep Learning, Operating Systems, Algorithms, ...
Bachelor of Arts in Applied Mathematics (Economics Specialization)
Coursework: Probability, Linear Algebra, Graph Theory, Optimization, Quantitative Finance, Game Theory, Econometrics, ...
- Coursera — Machine Learning. Completed (April 2016)
- Udacity — Self-Driving Car Nanodegree Program
- Term 1: Computer Vision and Deep Learning. Completed (April 2017)
- Term 2: Sensor Fusion, Localization, and Control. Completed (December 2017)
- Term 3: Path Planning, Concentrations, and Systems. In Progress (January 2018 - present)
Prattle
Data Scientist/Quant Developer (2016 - 2017)
Prattle is a financial technology and machine learning startup founded in 2013 by Evan Schnidman and William MacMillan. During my time there, I led initial research and development on a new dataset for quantifying the sentiment of corporate earnings calls, which was officially launched in September 2017 as the Equities Analytics platform. My day-to-day responsibilities included developing and testing NLP models, writing ETL pipelines for processing third-party data, and debugging software and data quality issues as needed.
See press release and Bloomberg article for more information on Prattle's Equities Analytics platform.
Belvedere Trading
Quantitative Analyst (2014 - 2016)
Belvedere Trading is a proprietary trading firm specializing in options market-making. As a trader trainee and then as a quantitative analyst, I completed an internal options theory and market-making course, refactored legacy code for derivatives pricing and data visualization, and worked with senior traders and quants to backtest and improve the firm's "delta one" trading strategies.
Resume available for download.
IEOR 8100, “Reinforcement Learning” Final Project (May 2018) | Full Text | Slides
I explore the feasibility of using Q-learning to train a high-frequency trading (HFT) agent that can capitalize on short-term price fluctuations, using only order book features as inputs. I propose a specific functional form for the agent's value function based on domain intuition and evaluate the performance of Q-learning trading algorithms by replaying historical Bitcoin-USD exchange rate data through a naive market simulator. Overall, I find that Q-learning may be used to generate modest returns from an aggressive scalping strategy with short holding periods, while providing a high degree of human interpretability in the learned parameters.
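A minimal sketch of the core idea, assuming a linear value function over a handful of hypothetical order-book features; the feature set, reward definition, and hyperparameters below are illustrative, not the exact formulation from the paper:

```python
import numpy as np

# Minimal sketch of linear Q-learning on order-book features. The feature
# set, reward, and hyperparameters are illustrative assumptions.
N_FEATURES = 4             # e.g. spread, order-book imbalance, depth, momentum
ACTIONS = (-1, 0, 1)       # sell, hold, buy

rng = np.random.default_rng(0)
weights = {a: np.zeros(N_FEATURES) for a in ACTIONS}   # one linear Q per action
alpha, gamma, epsilon = 0.01, 0.99, 0.1

def q_value(features, action):
    return weights[action] @ features

def choose_action(features):
    # epsilon-greedy policy over the linear Q-values
    if rng.random() < epsilon:
        return int(rng.choice(ACTIONS))
    return max(ACTIONS, key=lambda a: q_value(features, a))

def q_update(features, action, reward, next_features):
    # one-step Q-learning update with a linear function approximator
    target = reward + gamma * max(q_value(next_features, a) for a in ACTIONS)
    td_error = target - q_value(features, action)
    weights[action] += alpha * td_error * features
```

The linear form keeps the learned weights directly interpretable, which is the property highlighted above.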
Online Learning Techniques for Portfolio Allocation
COMS 6998.001, “Bandits and Reinforcement Learning” Final Project (December 2017) | Full Text | Slides | Code
An empirical study of regret-minimizing online learning algorithms, in particular the widely cited Exponentiated Gradient (EG) algorithm (Helmbold et al., 1998). We reimplement EG and run a new set of experiments on modern-day US stock market data, finding that the algorithm fails to significantly outperform benchmark portfolios from 2000 to the present. However, we present a novel application of EG in which wealth is allocated among trading strategies rather than directly to stocks, and we find that an algorithm allocating wealth to mean reversion-based strategies not only significantly outperforms benchmark and EG portfolios, but also comes close to achieving the optimal portfolio allocation in hindsight.
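For reference, the core EG update from Helmbold et al. can be sketched in a few lines; the learning rate and price data below are placeholders rather than the values used in the experiments:

```python
import numpy as np

def eg_update(weights, price_relatives, eta=0.05):
    """One step of the Exponentiated Gradient portfolio update
    (Helmbold et al., 1998). `price_relatives` are today's
    close/previous-close ratios for each asset; eta is a placeholder
    learning rate."""
    portfolio_return = weights @ price_relatives
    new_weights = weights * np.exp(eta * price_relatives / portfolio_return)
    return new_weights / new_weights.sum()

# Toy usage on three assets, starting from the uniform portfolio.
w = np.ones(3) / 3
x = np.array([1.01, 0.98, 1.03])   # hypothetical daily price relatives
w = eg_update(w, x)
```

Each day every asset's weight is scaled by an exponential of its price relative divided by the portfolio's return and then renormalized, so capital drifts toward recently well-performing assets.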
Understanding “Fedspeak”: Identifying the Sources of Market Sentiment in Central Bank Communications
Senior thesis, presented to the Harvard University Department of Applied Mathematics (April 2014) | Full Text
I examined the effect of the Federal Open Market Committee's post-meeting announcements, among the most highly anticipated economic releases in the financial community, on short-term stock market returns. Whereas the relationship between Twitter or financial news and the polarity of Fed statements has been the subject of extensive research, I used returns on stock market indices directly to categorize statements as positive or negative. The other novelty of this approach was the use of L1-regularized regression to identify a small number of influential phrases in central bank announcements from among all possible n-grams.
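A rough sketch of the phrase-selection step, assuming scikit-learn's Lasso and count-based n-gram features; the n-gram range, regularization strength, and toy data are illustrative, not those used in the thesis:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Lasso

# Regress post-announcement index returns on n-gram counts with an L1
# penalty so that only a handful of phrases receive nonzero coefficients.
statements = [
    "the committee decided to maintain the target range ...",
    "labor market conditions have continued to strengthen ...",
]
returns = [0.002, -0.001]    # hypothetical post-announcement index returns

vectorizer = CountVectorizer(ngram_range=(1, 3))
X = vectorizer.fit_transform(statements)

model = Lasso(alpha=0.01)
model.fit(X.toarray(), returns)

# Phrases with nonzero weight are the "influential" n-grams.
influential = [
    (phrase, coef)
    for phrase, coef in zip(vectorizer.get_feature_names_out(), model.coef_)
    if coef != 0
]
```

The L1 penalty drives most coefficients to exactly zero, leaving only a small set of influential phrases.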
Neural Machine Comprehension with BiLSTMs and Handcrafted Features
COMS 4995, “Deep Learning” Final Project (May 2018) | Full Text
We build a deep learning system for extractive question answering on the Stanford Question Answering Dataset (SQuAD) from the ground up, following Weissenborn et al.'s FastQA paper. Using word embeddings and hand-crafted context features as the inputs to a simple bidirectional LSTM, we are able to achieve reasonably high accuracy with remarkably few parameters and model components.
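A condensed Keras sketch of the model shape described above; the sequence length, feature count, and layer sizes are illustrative assumptions rather than the project's settings:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Word embeddings concatenated with a few hand-crafted per-token context
# features, fed through a single BiLSTM, with softmax heads predicting the
# start and end positions of the answer span.
MAX_LEN, VOCAB, EMB_DIM, N_FEATS = 300, 50_000, 100, 3

token_ids = layers.Input(shape=(MAX_LEN,), dtype="int32", name="token_ids")
features = layers.Input(shape=(MAX_LEN, N_FEATS), name="handcrafted_features")

embedded = layers.Embedding(VOCAB, EMB_DIM)(token_ids)
encoder_in = layers.Concatenate()([embedded, features])
encoded = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(encoder_in)

start_probs = layers.Softmax(name="start")(layers.Flatten()(layers.Dense(1)(encoded)))
end_probs = layers.Softmax(name="end")(layers.Flatten()(layers.Dense(1)(encoded)))

model = tf.keras.Model([token_ids, features], [start_probs, end_probs])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```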
COMS 6998, “Advanced Machine Learning for Personalization” Final Project (May 2018) | Full Text | Slides
We use denoising autoencoder methods for the playlist continuation/track recommendation task outlined in the 2018 RecSys Challenge and the newly released Spotify Million Playlist Dataset. Following the example of Liang et al. (paper), we train autoencoders with multinomial log-likelihood functions and compare our results against a matrix factorization (SVD) baseline.
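The multinomial log-likelihood objective at the heart of this setup can be sketched as follows, with shapes and names chosen for illustration:

```python
import numpy as np

def multinomial_nll(logits, playlist):
    """Negative multinomial log-likelihood for one playlist.
    logits: decoder outputs, one score per track in the catalogue;
    playlist: binary vector marking the tracks the playlist contains."""
    z = logits - logits.max()                     # subtract max for stability
    log_probs = z - np.log(np.exp(z).sum())       # log-softmax over all tracks
    return -(playlist * log_probs).sum()

# Toy usage with a five-track catalogue and a two-track playlist.
logits = np.array([2.0, -1.0, 0.5, 0.0, 1.5])
playlist = np.array([1, 0, 0, 0, 1])
loss = multinomial_nll(logits, playlist)
```

Minimizing this loss concentrates the decoder's probability mass on the tracks that actually appear in the playlist.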
Udacity Self-Driving Car Nanodegree Program, Term 1 (Jan 2017) | Code
I used Keras to implement and train a convolutional neural network for autonomous driving, as part of the final project in Term 1 of Udacity's Self-Driving Car Nanodegree. The model architecture was based on the following 2016 NVIDIA paper.
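A sketch of an NVIDIA-style network in Keras: the convolutional and dense layer sizes follow the paper, while the input shape, cropping, and normalization are assumptions about the setup rather than the project's exact preprocessing:

```python
from tensorflow.keras import layers, models

# NVIDIA-style behavioral-cloning network: five convolutional layers
# followed by fully connected layers regressing the steering angle.
model = models.Sequential([
    layers.Input(shape=(160, 320, 3)),                  # simulator camera frames
    layers.Cropping2D(cropping=((60, 25), (0, 0))),     # drop sky and hood pixels
    layers.Lambda(lambda x: x / 255.0 - 0.5),           # normalize to [-0.5, 0.5]
    layers.Conv2D(24, (5, 5), strides=(2, 2), activation="relu"),
    layers.Conv2D(36, (5, 5), strides=(2, 2), activation="relu"),
    layers.Conv2D(48, (5, 5), strides=(2, 2), activation="relu"),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.Flatten(),
    layers.Dense(100, activation="relu"),
    layers.Dense(50, activation="relu"),
    layers.Dense(10, activation="relu"),
    layers.Dense(1),                                     # steering angle output
])
model.compile(optimizer="adam", loss="mse")
```

Since the task is a regression on the steering angle, mean squared error is the natural training loss.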
Numerai
Numerai is a crowd-sourced hedge fund powered by an ongoing data science/machine learning tournament. You can track my progress here.
Fisherman at dawn, Inle Lake, Myanmar.
Sunset, Ngapali Beach, Myanmar.
Hot air balloons over Bagan, Myanmar.
U Bein Bridge, Mandalay, Myanmar.
El Malecón, Havana, Cuba.
Horseplay. Havana, Cuba.
Jump. Havana, Cuba.
El almendrón. Havana, Cuba.
Vermilion Lakes. Alberta, Canada.
Banff National Park, Canada.
Daybreak, Banff National Park, Canada.
Yoho River. British Columbia, Canada.