Last updated: 2018-12-05

This page describes how to download the data and code used in this analysis, set up the project directory and rerun the analysis. We have used the workflowr package to organise the analysis and insert reproducibility information into the output documents. The packrat package has also been used to manage R package versions and dependencies.

Getting the code

All of the code and output of the analysis is available from GitHub at https://github.com/Oshlack/combes-organoid-paper. If you want to replicate the analysis you can either fork and clone the repository or download it as a zipped directory.

Once you have a local copy of the repository you should see the following directory structure:
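(The tree below is an illustrative sketch of a typical workflowr project that uses packrat, not an exact listing from the repository; check your local copy for the actual contents.)

combes-organoid-paper/
├── analysis/   # R Markdown files for each stage of the analysis
├── data/       # input datasets (see "Getting the data" below)
├── docs/       # HTML pages published by workflowr
├── output/     # files produced by the analysis
└── packrat/    # packrat lockfile and private package library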

Installing packages

Packages and dependencies for this project are managed using packrat. This should allow you to install and use the same package versions as we have used for the analysis. packrat should automatically take care of this process for you the first time that you open R in the project directory. If for some reason this does not happen you may need to run the following commands:

install.packages("packrat")  # install packrat itself
packrat::restore()           # restore the package versions recorded in the packrat lockfile

Note that a clean install of all the required packages can take a significant amount of time when the project is first opened.
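If you are unsure whether the restore completed successfully, packrat can report any packages that are out of sync with the project's lockfile:

packrat::status()  # list packages that differ from the packrat lockfile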

Getting the data

In this project we have used three scRNA-seq datasets: two batches of kidney organoids (the first containing three samples and the second a single organoid) and a human fetal kidney dataset published by Lindstrom et al. The organoid datasets can be downloaded from GEO accession GSE114802 and the fetal kidney dataset from GEO accession GSE102596.
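If you prefer to download these from within R, the Bioconductor package GEOquery can fetch the supplementary files for a GEO accession. This is a convenience suggestion rather than part of the original workflow:

library(GEOquery)

getGEOSuppFiles("GSE114802")  # kidney organoid batches
getGEOSuppFiles("GSE102596")  # human fetal kidney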

Once the datasets have been downloaded they need to be extracted, placed in the correct directories and renamed. The analysis code assumes the following directory structure inside the data/ directory:
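(The sub-directory names below are placeholders used purely for illustration; the exact names expected by the analysis code are defined in the repository.)

data/
├── organoid-batch-1/   # placeholder; first organoid batch, three samples (GSE114802)
├── organoid-batch-2/   # placeholder; second organoid batch, single organoid (GSE114802)
├── fetal-kidney/       # placeholder; Lindstrom et al. fetal kidney (GSE102596)
└── processed/          # intermediate files created by the analysis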

Additional data files used during the analysis are provided as part of the repository. Intermediate data files created during the analysis will be placed in data/processed. These are used by later stages of the analysis so should not be moved, altered or deleted.

Running the analysis

The analysis directory contains the following R Markdown files:

As indicated by the numbering, they should be run in this order. If you want to rerun the entire analysis, this can easily be done using workflowr:

workflowr::wflow_build(republish = TRUE)  # rebuild all previously published analysis files

It is important to consider the computer and environment you are using before doing this. Running this analysis from scratch requires a considerable amount of time, disk space and memory. Some stages of the analysis also assume that multiple (10) cores are available. If you have fewer cores available, you will need to change the following line in the relevant files to the number of cores you have:

bpparam <- MulticoreParam(workers = 10)  # MulticoreParam() comes from the BiocParallel package
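If you want to set the number of workers based on your own machine, one option (not part of the original code) is to use parallel::detectCores() and leave a core free:

library(BiocParallel)

# Use one fewer than the number of detected cores, with at least one worker
n_cores <- parallel::detectCores()
bpparam <- MulticoreParam(workers = max(1, n_cores - 1, na.rm = TRUE))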

It is also possible to run individual stages of the analysis, either by providing the name of the file you want to run to workflowr::wflow_build() or by manually knitting the document (for example using the ‘Knit’ button in RStudio).
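For example, to rebuild a single document (the file name here is a placeholder for whichever numbered analysis file you want to rerun):

workflowr::wflow_build("analysis/01-preprocessing.Rmd")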

Caching

To avoid having to repeatedly rerun long-running sections of the analysis, we have turned on caching in the analysis documents. However, this comes with trade-offs in disk space, usability and (potentially, although unlikely if you are careful) reproducibility. In most cases this should not be a problem, but it is something to be aware of. In particular, there is an incompatibility between caching and workflowr that can cause images to not appear in the resulting HTML files (see this GitHub issue for more details). If you have already run part of the analysis (and therefore have a cache) and want to rerun a document, the safest option is to use the RStudio ‘Knit’ button.
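If the cache does get into a bad state, deleting the cached chunks forces a clean rerun of a document. A minimal sketch, assuming the knitr default of a <file>_cache directory alongside each R Markdown file:

unlink("analysis/*_cache", recursive = TRUE)  # remove all cached chunks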

Session information

devtools::session_info()

This reproducible R Markdown analysis was created with workflowr 1.1.1