klin1234 [at] gmail [dot] com

I am working toward my Ph.D. in the Department of Statistics & Data Science at Carnegie Mellon University, under the supervision of Professor Kathryn Roeder and Professor Jing Lei. My research focuses mainly on developing statistical methods to understand the complex data dependencies found in epigenetics applications. Recently, I have been researching changepoint detection and network models. I also have a growing interest in topics related to statistical practice that formalize how data scientists interact with their datasets. The code for my research is available on GitHub.

I also have an M.S. in Machine Learning from Carnegie Mellon University. Before coming to CMU, I received my B.S.E. in Operations Research and Financial Engineering from Princeton University in 2014, along with a Certificate in Statistics and Machine Learning as well as a Certificate in Applications of Computing.

When taking a break from work, I am an avid fan of Zumba, cooking, anime, and poetry.


Research
Preprints/In Preparation
Exponential-family embedding for cell developmental trajectories

Common pipelines for estimating cell developmental trajectories from single-cell data typically first embed each cell into a lower-dimensional space, but these embeddings often assume statistical models that do not describe single-cell data well. In this paper, we develop an embedding based on a hierarchical model in which the inner product between two latent low-dimensional vectors is the natural parameter of an exponential-family random variable, and we prove identifiability and convergence results. When studying oligodendrocytes in fetal mouse brains, we find that oligodendrocytes mature into various cell types.

K. Lin, J. Lei, and K. Roeder
Exponential-family embedding with application to cell developmental trajectories for single-cell RNA-seq data
In preparation
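
To make the hierarchical model above concrete, here is a minimal simulation sketch in R for the Poisson case; the variable names are illustrative and not taken from the paper's code.

  # Simulate a count matrix whose natural parameter (here, the log-mean) is the
  # inner product of a latent cell vector and a latent gene vector.
  set.seed(1)
  n <- 100; p <- 50; k <- 2                        # cells, genes, latent dimension
  X <- matrix(rnorm(n * k, sd = 0.3), n, k)        # latent cell embedding
  Y <- matrix(rnorm(p * k, sd = 0.3), p, k)        # latent gene embedding
  nat_param <- X %*% t(Y)                          # matrix of natural parameters
  A <- matrix(rpois(n * p, lambda = exp(nat_param)), n, p)   # observed counts

Estimating the embedding then amounts to recovering X and Y from the observed matrix A.
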
Dependency diagnostic via scatter plots

Dependency graphs encode complex pairwise patterns that are typically estimated statistically, but they are hard to diagnose with visualizations because the number of scatter plots grows quadratically in the number of variables. In this paper, we develop an interactive system in R that learns how the data scientist visually assesses dependency from scatter plots. The system then applies the learned classifier to infer a dependency graph that can be compared against the statistically estimated graph. This paper won honorable mention for the Student Paper Award from the ASA Section on Statistical Computing and Statistical Graphics.

K. Lin, A. Tian and H. Liu
Dependency diagnostic: Visually understanding pairwise variable relationships
In preparation
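
As a rough illustration of the labeling step described above (this is not the system from the paper, and all names below are made up), one can enumerate the quadratically many variable pairs and record the analyst's visual judgment for a small subset of the corresponding scatter plots.

  # Enumerate all choose(p, 2) scatter plots and interactively label a few of them.
  set.seed(1)
  dat <- as.data.frame(matrix(rnorm(200 * 8), 200, 8))    # placeholder data
  p <- ncol(dat)
  pairs <- t(combn(p, 2))                                 # all variable pairs
  shown <- pairs[sample(nrow(pairs), 5), , drop = FALSE]  # subset shown to the analyst
  labels <- logical(nrow(shown))
  for (b in seq_len(nrow(shown))) {
    plot(dat[, shown[b, 1]], dat[, shown[b, 2]])
    labels[b] <- identical(readline("Dependent? (y/n): "), "y")
  }

A classifier trained on the labeled plots could then be applied to the remaining pairs to produce a visually inferred dependency graph.
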
Post-selection inference for changepoint problems

Changepoint detection methods such as binary segmentation are often used in CGH analyses to detect copy number variations, but these methods lack valid downstream statistical inference. In this paper, we develop post-selection hypothesis tests for various changepoint detection methods and provide practical guidelines based on extensive simulations.

S. Hyun, K. Lin, M. G'Sell, and R. J. Tibshirani
Valid post-selection inference for segmentation methods with application to copy number variation data
Submitted to Biometrics (2018) (arxiv)
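
The toy simulation below illustrates the problem these tests address (it is not the inference procedure from the paper): on pure noise, selecting the split that maximizes the mean difference and then running an ordinary t-test at that data-chosen location rejects far more often than the nominal level.

  # Naive inference after changepoint selection is anti-conservative.
  set.seed(1)
  naive_pval <- replicate(2000, {
    y <- rnorm(100)                                        # no true changepoint
    gap <- sapply(10:90, function(b) abs(mean(y[1:b]) - mean(y[(b + 1):100])))
    b <- (10:90)[which.max(gap)]                           # selected changepoint
    t.test(y[1:b], y[(b + 1):100])$p.value                 # test that ignores the selection
  })
  mean(naive_pval < 0.05)                                  # well above the nominal 0.05
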
Covariance-based sample selection for heterogeneous data

Microarray samples from brain tissue are hard to collect, and they vary substantially with the tissue's brain region and the subject's developmental age, making it difficult to gather enough samples for statistical analysis. In this paper, we develop a sample selection method that finds additional microarray samples statistically similar to those from the desired spatio-temporal brain tissue. We demonstrate that, after applying an existing analysis pipeline to the selected samples, we detect a higher percentage of autism risk genes.

K. Lin, H. Liu, and K. Roeder
Covariance-based sample selection for heterogeneous data: Applications to gene expression and autism risk gene detection
Submitted to Journal of the American Statistical Association (JASA) Applications and Case Studies (2018) (arxiv)
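
One crude way to convey the idea (shown only for intuition; it is not the selection procedure from the paper) is to score a candidate batch of samples by how close its sample covariance matrix is to that of a reference batch.

  # Frobenius distance between sample covariance matrices as a similarity score.
  cov_distance <- function(ref, candidate) {
    norm(cov(ref) - cov(candidate), type = "F")
  }
  set.seed(1)
  ref  <- matrix(rnorm(40 * 10), 40, 10)    # reference brain-tissue samples (10 genes)
  cand <- matrix(rnorm(35 * 10), 35, 10)    # candidate samples to screen
  cov_distance(ref, cand)                   # smaller values suggest more similar covariance
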
Theory and Methods (Published)
Fused lasso and changepoint screening

Statistical theory for changepoint estimators addresses both how well they estimate the underlying mean function and how well they localize the changepoints, but existing work typically analyzes these two properties separately. In this paper, we prove a near-optimal estimation rate for the fused lasso, which in turn directly yields a changepoint detection rate that is near the detection limit. We extend this argument to other estimators and settings.

K. Lin, J. Sharpnack, A. Rinaldo, and R. J. Tibshirani
A Sharp Error Analysis for the Fused Lasso, with Application to Approximate Changepoint Screening
Advances in Neural Information Processing Systems (NIPS) (2017) (link) (arxiv)
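
For readers unfamiliar with the estimator, the snippet below shows a generic 1D fused lasso fit using the genlasso R package (a standard usage example, not code from the paper); changepoints are read off wherever adjacent fitted values differ.

  library(genlasso)
  set.seed(1)
  theta <- rep(c(0, 2, 0), each = 40)                    # piecewise-constant mean, two changepoints
  y <- theta + rnorm(length(theta))                      # noisy observations
  fit <- fusedlasso1d(y)                                 # full solution path
  theta_hat <- as.numeric(coef(fit, lambda = 2)$beta)    # fitted mean at one lambda
  which(abs(diff(theta_hat)) > 1e-8)                     # estimated changepoint locations
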
Compressed sensing

Many compressed sensing algorithms are designed to be as generic as possible, but they fall short in specialized settings where modern optimization theory can deliver a substantial boost in computational efficiency. In this paper, we develop two compressed sensing algorithms, one specialized for extremely sparse signals and another for Kronecker-structured sensing matrices. We numerically demonstrate a nearly 10-fold reduction in computation time compared to other state-of-the-art methods.

R. Vanderbei, K. Lin, H. Liu, and L. Wang
Revisiting compressed sensing: Exploiting the efficiency of simplex and sparsification methods
Mathematical Programming Computation 8.3 (2016): 253-269. (link) (pdf)
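
As background on the generic problem these methods speed up (a textbook formulation, not the paper's specialized algorithms), basis pursuit recovers a sparse signal by solving min ||x||_1 subject to Ax = b, which becomes a linear program after splitting x = u - v with u, v >= 0. A small sketch using the lpSolve R package:

  library(lpSolve)
  set.seed(1)
  n <- 40; m <- 20
  x0 <- c(3, -2, rep(0, n - 2))[sample(n)]         # sparse true signal
  A <- matrix(rnorm(m * n), m, n)
  b <- as.vector(A %*% x0)
  sol <- lp(direction = "min",
            objective.in = rep(1, 2 * n),          # sum(u) + sum(v) equals ||x||_1
            const.mat = cbind(A, -A),              # A (u - v) = b
            const.dir = rep("=", m),
            const.rhs = b)
  x_hat <- sol$solution[1:n] - sol$solution[(n + 1):(2 * n)]
  max(abs(x_hat - x0))                             # near zero when recovery succeeds
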
Application (Published)
Autism risk factors within the noncoding genome

While de novo mutations within the protein-coding portion of the genome have been thoroughly studied, such mutations in the noncoding portion, which comprises 98.5% of the genome, are less well understood. In this paper, we use a bioinformatics framework to analyze whole-genome sequencing data from 1,902 autism quartets and find that the strongest signals arise from promoters, the noncoding regions that control gene transcription.

J. An*, K. Lin*, L. Zhu*, D. M. Werling*, S. Dong, H. Brand, H. Z. Wang, X. Zhao, G. B. Schwartz, R. L. Collins, B. B. Currall, C. Dastmalchi, J. Dea, C. Duhn, M. C. Gilson, L. Klei, L. Liang, E. Markenscoff-Papadimitriou, S. Pochareddy, N. Ahituv, J. D. Buxbaum, H. Coon, M. J. Daly, Y. S. Kim, G. T. Marth, B. M. Neale, A. R. Quinlan, J. L. Rubenstein, N. Sestan, M. W. State, A. J. Willsey, M. E. Talkowski, B. Devlin, K. Roeder, S. J. Sanders. (*Equal contributions)
Analysis of 7,608 genomes highlights a role for promoter regions in Autism Spectrum Disorder
Science 362.6420 (2018). (link) (pdf)

Articles
We, the millennials: The statistical significance of political significance
Significance (2017) (link)

Talks
  • Dependency diagnostic: Visually understanding pairwise variable relationships (talk).
    2018 Joint Statistical Meetings (JSM), Vancouver, Canada.
  • A sharp error analysis for the fused lasso, with application to approximate changepoint screening (poster).
    2017 Conference on Neural Information Processing Systems (NIPS), Long Beach, CA.
  • Hypothesis testing for simultaneous variable clustering and correlation network estimation, with applications to gene coexpression networks (talk).
    2017 Joint Statistical Meetings (JSM), Baltimore, MD.
  • Longitudinal Gaussian graphical model for autism risk gene detection (talk).
    2016 Joint Statistical Meetings (JSM), Chicago, IL.
  • Longitudinal Gaussian graphical model integrating gene expression and sequencing data for autism risk gene detection (talk).
    2015 American Society of Human Genetics (ASHG), Baltimore, MD.
  • Optimization for compressed sensing: New insights and alternatives (talk).
    2014 Modeling and Optimization: Theory and Applications, Bethlehem, PA.

Experience
Honors and Awards
  • Honorable mention in the student paper competition
    (For the article "Dependency diagnostic: Visually understanding pairwise variable relationships")
    ASA Section on Statistical Computing and Statistical Graphics, January 2018
  • Winner of the award for statistical excellence in early-career writing
    (For the article "We, the millennials: The statistical significance of political significance")
    Significance magazine, in partnership with the Young Statisticians Section of the Royal Statistical Society, June 2017
  • Teaching assistant award recipient
    (For "Statistical Computing" in Fall 2016)
    Carnegie Mellon University, May 2017
  • Recipient of the Kenneth H. Condit Prize
    (For excellence in service to the department)
    Princeton University, May 2014
Teaching Experience
  • (2018 Summer) 36-350 Statistical Computing (Instructor)
  • (2018 Spring) 36-350 Statistical Computing (Assistant Instructor with R. J. Tibshirani)
  • (2017 Fall) 36-350 Statistical Computing (TA under P. Freeman)
  • (2015 Fall, 2016 Fall) 36-350 Statistical Computing (TA under R. J. Tibshirani)
  • (2015 Spring) 36-217 Probability Theory and Random Processes (TA under A. Rinaldo)
  • (2014 Fall) 46-921 Financial Data Analysis I and 46-923 Financial Data Analysis II (TA under C. Schafer)
  • (2014 Spring, 2013 Spring, 2012 Spring) ORF 350 Analysis of Big Data (Course designer under H. Liu)
Professional Service
Copyright © 2015 - 2018, Kevin Lin. All rights reserved.
Last Updated: August 10, 2018.