Projects

Analysis of genomic data from single cells

Single-cell RNA-Sequencing (scRNA-seq) has become the most widely used high-throughput method for transcriptional profiling of individual cells. This technology has created an unprecedented opportunity to investigate important biological questions that can only be answered at the single-cell level. However, it also brings new statistical, computational and methodological challenges.

Methods to address technical variability

In contrast to bulk RNA-seq experiments, the majority of reported expression levels in scRNA-seq data are zeros. These zeros can be biologically driven (a gene is not expressing RNA at the time of measurement) or technically driven (a gene is expressing RNA, but not at a level sufficient to be detected by the sequencing technology). In addition, systematic errors, including batch effects, have been widely reported as a major challenge in high-throughput technologies; surprisingly, however, these issues have received minimal attention in published studies based on scRNA-seq technology. To investigate this, I examined data from fifteen published scRNA-seq studies and demonstrated that systematic errors can explain a substantial percentage of observed cell-to-cell expression variability, which in turn can lead to false discoveries, for example, when using unsupervised learning methods (1). More recently, we developed a dimensionality reduction method, Varying-Censoring Aware Matrix Factorization (VAMF), which identifies low-dimensional representations of cells in the presence of cell-specific censoring. This allows batch effects to be corrected when they are mediated through a varying censoring mechanism, in either confounded or unconfounded study designs, which is not possible with standard batch-correction methods (2).

  1. Hicks SC, Townes FW, Teng M, Irizarry RA (2018). Missing data and technical variability in single-cell RNA-sequencing experiments. Biostatistics.
  2. Townes FW, Hicks SC, Aryee MJ, Irizarry RA (2017). Varying-Censoring Aware Matrix Factorization for Single Cell RNA-Sequencing. bioRxiv.
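As an illustration of the kind of confounding described in (1), the sketch below simulates two batches of cells with identical underlying expression but different detection (dropout) rates, and shows that the first principal component tracks the proportion of detected genes per cell. It assumes Python with numpy and scikit-learn and is purely illustrative; it is not the analysis from the paper, nor an implementation of VAMF.

    # Illustrative simulation: identical expression in two batches, but
    # batch-specific detection probabilities introduce cell-specific
    # censoring. PC1 then correlates strongly with the per-cell detection
    # rate, a technical rather than biological signal.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    n_genes, n_cells = 2000, 200
    mean_expr = rng.gamma(shape=2.0, scale=1.0, size=(n_genes, 1))
    counts = rng.poisson(np.tile(mean_expr, (1, n_cells)))       # same expression for every cell

    batch = np.repeat([0, 1], n_cells // 2)
    detect_prob = np.where(batch == 0, 0.9, 0.5)                 # batch-specific detection rates
    detected = rng.random((n_genes, n_cells)) < detect_prob      # cell-specific censoring
    Y = np.log1p(counts * detected)

    detection_rate = (Y > 0).mean(axis=0)                        # proportion of detected genes per cell
    pc1 = PCA(n_components=2).fit_transform(Y.T)[:, 0]
    print("|corr(PC1, detection rate)| =",
          abs(np.corrcoef(pc1, detection_rate)[0, 1]))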

Fast, scalable, memory-efficient methods to analyze single-cell data

k-means is a classic clustering algorithm used in the analysis of scRNA-seq data. However, as single-cell datasets grow in size, new implementations are needed that are fast, scalable and memory-efficient. To address this, we are implementing the mini-batch optimization for k-means clustering proposed by Sculley (2010) for large single-cell sequencing data (1). The mini-batch k-means algorithm can be run with data stored in memory or on disk (e.g. in the HDF5 file format).

  1. mbkmeans. Mini-batch k-means clustering for large single-cell datasets.
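The mbkmeans implementation itself is an R/Bioconductor package; as a minimal, hedged sketch of the mini-batch idea (each iteration updates the cluster centroids using only a small random batch of cells rather than the full matrix), the example below assumes Python with numpy and scikit-learn, whose MiniBatchKMeans class implements Sculley's (2010) algorithm. The simulated matrix and parameter values are illustrative, not part of the package.

    # Illustrative sketch of mini-batch k-means on a cells-by-genes matrix.
    # Assumes Python with numpy and scikit-learn; the mbkmeans package is an
    # R/Bioconductor implementation that additionally supports HDF5-backed
    # (on-disk) matrices.
    import numpy as np
    from sklearn.cluster import MiniBatchKMeans

    rng = np.random.default_rng(0)
    counts = rng.poisson(lam=2.0, size=(10_000, 500))    # 10,000 cells x 500 genes (simulated)
    X = np.log1p(counts.astype(float))                   # simple log transformation

    # Each iteration updates centroids from a random batch of `batch_size`
    # cells, keeping memory use and per-iteration cost small.
    mbk = MiniBatchKMeans(n_clusters=10, batch_size=500, random_state=0)
    labels = mbk.fit_predict(X)
    print(np.bincount(labels))                           # cluster sizes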

High-grade serous ovarian cancer subtypes with single-cell profiling

The goal of this project is to identify the biological basis of subtypes of high-grade serous ovarian cancer (HGSOC) using bulk and single-cell gene expression data. This is highly relevant to public health because HGSOC is a particularly deadly cancer that is often identified only at a late stage, when treatment options are limited. The long-term impact of this project will be a key step towards developing targeted treatments for HGSOCs.

If you are interested in this project, there is an open postdoctoral scientist position (see the Join us page for more information)!

Development and neurogenesis of the enteric nervous system with single-cell profiling

This is a collaboration with Subhash Kulkarni at Johns Hopkins School of Medicine to broadly study cells of the enteric nervous system, both at steady state and in terms of the transcriptomic changes induced by stimuli. For example, one project investigates the remodeling and cellular changes in the gastrointestinal tract caused by inflammation. This work is key to understanding the biology of persistent inflammation and will help identify novel drug targets and develop treatments for curbing inflammation and the associated pathological changes in diseases such as colitis.


Data Science Education

An increase in demand for statistics and data science education has led to changes in the curriculum, specifically an increase in computing. While this has led to more applied courses, students still struggle with effectively deriving knowledge from data and solving real-world problems. In 1999, Deborah Nolan and Terry Speed argued that the solution was to teach courses through in-depth case studies derived from interesting scientific questions with nontrivial solutions that leave room for different analyses of the data. This innovative framework teaches students to make the important connections between the scientific question, the data and the statistical concepts, connections that only come from hands-on experience analyzing data (1, 2). To address this, I am building openDataCases, a community resource of case studies that educators can use in the classroom to teach students how to effectively derive knowledge from data.

  1. Hicks SC, Irizarry RA (2018). A Guide to Teaching Data Science. The American Statistician.
  2. Hicks SC (2017). Greater Data Science Ahead. Journal of Computational Graphical Statistics.

Statistical methods to control for false discoveries

In high-throughput studies, hundreds to millions of hypotheses are typically tested. Statistical methods that control the false discovery rate (FDR) have emerged as popular and powerful tools for error rate control. While classic FDR methods use only p-values as input, more modern FDR methods have been shown to increase power by incorporating complementary information as “informative covariates” to prioritize, weight, and group hypotheses. To help researchers choose among these methods, we investigated the accuracy, applicability, and ease of use of two classic and six modern FDR-controlling methods in a systematic benchmark comparison based on simulation studies as well as six case studies in computational biology (1).

  1. Korthauer K, Kimes PK, Duvallet C, Reyes A, Subramanian A, Teng M, Shukla C, Alm EJ, Hicks SC (2018). A practical guide to methods controlling false discoveries in computational biology. bioRxiv.
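As background for the classic, p-value-only setting that the modern covariate-aware methods extend, here is a minimal sketch of the Benjamini-Hochberg (BH) step-up procedure, written in Python with numpy purely for illustration; it is not one of the benchmarked implementations, and in practice the same adjustment is available as, e.g., statsmodels' multipletests(pvals, method='fdr_bh').

    # Minimal sketch of the Benjamini-Hochberg (BH) step-up procedure, a
    # classic FDR-controlling method that uses only p-values as input.
    # Modern methods extend this by weighting or grouping hypotheses with
    # an informative covariate.
    import numpy as np

    def bh_adjust(pvals):
        """Return BH-adjusted p-values for a 1-D array of p-values."""
        p = np.asarray(pvals, dtype=float)
        m = p.size
        order = np.argsort(p)                                  # ascending p-values
        scaled = p[order] * m / np.arange(1, m + 1)            # p_(i) * m / i
        adjusted = np.minimum.accumulate(scaled[::-1])[::-1]   # enforce monotonicity
        adjusted = np.clip(adjusted, 0.0, 1.0)
        out = np.empty(m)
        out[order] = adjusted                                  # back to original order
        return out

    # Reject hypotheses whose adjusted p-value falls below the target FDR (e.g. 0.05)
    pvals = np.array([0.001, 0.009, 0.04, 0.20, 0.65])
    print(bh_adjust(pvals) < 0.05)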

Statistical methods for normalization of high-throughput data

Normalization is an essential step in the analysis of high-throughput genomics data. Quantile normalization is one of the most widely used multi-sample normalization tools, with applications including genotyping arrays, RNA-Sequencing (RNA-Seq), DNA methylation, ChIP-Sequencing (ChIP-Seq) and brain imaging. However, quantile normalization relies on assumptions about the data-generating process that are not appropriate in some contexts. I developed a data-driven method to test these assumptions and guide the choice of an appropriate normalization method (1). The freely available software has been downloaded over 7,500 times (distinct IPs) from Bioconductor since 2014 and has helped researchers test the assumptions of global normalization methods in the analysis of their own data. To address scenarios in which the assumptions of quantile normalization are not appropriate, I developed a generalization of quantile normalization, referred to as smooth quantile normalization, which allows for global differences between biological groups (2). More recently, I collaborated with researchers at the University of Maryland to correct for compositional biases found in sparse metagenomic sequencing data (3).

  1. Hicks SC, Irizarry RA (2015). quantro: a data-driven approach to guide the choice of an appropriate normalization method. Genome Biology.
  2. Hicks SC, Okrah K, Paulson JN, Quackenbush J, Irizarry RA, Bravo HC (2018). Smooth quantile normalization. Biostatistics.
  3. Kumar MS, Slud EV, Okrah K, Hicks SC, Hannenhalli S, Corrada Bravo H (2018). Analysis and correction of compositional bias in sparse sequencing count data. BMC Genomics.
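To make concrete the transformation whose assumptions quantro tests and smooth quantile normalization relaxes, here is a minimal sketch of standard quantile normalization, written in Python with numpy purely for illustration (the methods referenced above are R/Bioconductor packages; ties are handled naively here, whereas production implementations average them).

    # Minimal sketch of standard quantile normalization on a
    # features-by-samples matrix: every sample (column) is forced to share
    # the same empirical distribution, namely the row-wise mean of the
    # column-sorted values.
    import numpy as np

    def quantile_normalize(X):
        """Quantile-normalize the columns of a features x samples matrix."""
        X = np.asarray(X, dtype=float)
        ranks = np.argsort(np.argsort(X, axis=0), axis=0)  # within-column ranks (ties broken arbitrarily)
        reference = np.sort(X, axis=0).mean(axis=1)        # reference distribution
        return reference[ranks]                            # map each value to its reference quantile

    # After normalization, all three samples share the same set of values
    X = np.array([[5.0, 4.0, 3.0],
                  [2.0, 1.0, 4.0],
                  [3.0, 4.0, 6.0],
                  [4.0, 2.0, 8.0]])
    print(quantile_normalize(X))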