diff --git a/docs/docs/favicon.ico b/docs/docs/favicon.ico
new file mode 100644
index 0000000..eb9c2a1
Binary files /dev/null and b/docs/docs/favicon.ico differ
diff --git a/docs/docs/index.md b/docs/docs/index.md
index e053a9c..0af838b 100644
--- a/docs/docs/index.md
+++ b/docs/docs/index.md
@@ -1,17 +1,162 @@
-# Welcome to Peter
+# Cookiecutter Data Science
-For full documentation visit [mkdocs.org](http://mkdocs.org).
+## Why?
-## Commands
+We often think of data analysis as a report, some visualizations, or some insights. While these end products are generated by code, it's easy to focus on making the products look _real good_ and ignore the quality of the code that generates them.
-* `mkdocs new [dir-name]` - Create a new project.
-* `mkdocs serve` - Start the live-reloading docs server.
-* `mkdocs build` - Build the documentation site.
-* `mkdocs help` - Print this help message.
+On top of that, it's no secret that good analyses are often the result of exploration, experimentation, and digging into the data to see what works. This is not a process that lends itself to thinking carefully about the structure of your code or your project beforehand. So, let someone else do that thinking and the setup for you. Here's why:
-## Project layout
- mkdocs.yml # The configuration file.
- docs/
- index.md # The documentation homepage.
- ... # Other markdown pages, images and other files.
+### Other people will thank you
+
+A well-defined project structure means that a newcomer can begin to understand an analysis without digging into extensive documentation. Well-organized code is self-documenting and provides a lot of context without much overhead. People will thank you for this because they can:
+
+ - Collaborate easily with you on this analysis
+ - Easily learn from your analysis about the process and the domain
+ - Feel confident in the conclusions the analysis presents
+
+A consistent project structure also means that anyone familiar with the convention can orient themselves in a new project immediately, because files live where they expect them to.
+
+### You will thank you
+
+Ever tried to reproduce an analysis that you did a few months ago or even a few years ago? You may have written the code, but it's now impossible to decipher whether you should use `make_figures.py.old`, `make_figures_working.py` or `new_make_figures01.py` to get things done. A good project structure encourages practices that make it easier to come back to old work, for example separation of concerns, abstracting analysis as a [DAG](https://en.wikipedia.org/wiki/Directed_acyclic_graph), and engineering best practices like version control.
+
+## Getting started
+
+With this in mind, we've created a Cookiecutter Data Science template for projects in Python. Your analysis doesn't have to be in Python, but the template does provide some Python boilerplate that you'd want to remove (exclusively in the `src` folder).
+
+### Requirements
+
+ - Python 2.7 or 3.5
+ - [cookiecutter Python package](http://cookiecutter.readthedocs.org/en/latest/installation.html) >= 1.4.0: `pip install cookiecutter`
+
+
+### Starting a new project
+
+Starting a new project is as easy as running this command at the command line. No need to create a directory first; the cookiecutter will do it for you.
+
+ cookiecutter https://github.com/drivendata/cookiecutter-data-science
+
+### Example
+
+
+
+## Directory structure
+
+```none
+├── LICENSE
+├── Makefile           <- Makefile with commands like `make data` or `make train`
+├── README.md          <- The top-level README for developers using this project.
+├── data
+│   ├── external       <- Data from third party sources.
+│   ├── interim        <- Intermediate data that has been transformed.
+│   ├── processed      <- The final, canonical data sets for modeling.
+│   └── raw            <- The original, immutable data dump.
+│
+├── docs               <- A default Sphinx project; see sphinx-doc.org for details
+│
+├── models             <- Trained and serialized models, model predictions, or model summaries
+│
+├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
+│                         the creator's initials, and a short `-` delimited description, e.g.
+│                         `1.0-jqp-initial-data-exploration`.
+│
+├── references         <- Data dictionaries, manuals, and all other explanatory materials.
+│
+├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
+│   └── figures        <- Generated graphics and figures to be used in reporting
+│
+├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
+│                         generated with `pip freeze > requirements.txt`
+│
+├── src                <- Source code for use in this project.
+│   ├── __init__.py    <- Makes src a Python module
+│   │
+│   ├── data           <- Scripts to download or generate data
+│   │   └── make_dataset.py
+│   │
+│   ├── features       <- Scripts to turn raw data into features for modeling
+│   │   └── build_features.py
+│   │
+│   └── models         <- Scripts to train models and then use trained models to make
+│       │                 predictions
+│       ├── predict_model.py
+│       └── train_model.py
+│
+└── tox.ini            <- tox file with settings for running tox; see tox.testrun.org
+```
+
+## Opinions
+
+There are some opinions implicit in the project structure that have grown out of learning what works and what doesn't when collaborating on data science projects. Some of the opinions are about workflows, and some are about tools that make life easier. Here are some of the beliefs on which this project is built; if you've got thoughts, please [contribute or share them](#contributing).
+
+### Data is immutable
+
+Don't edit your raw data in Excel. Don't overwrite your raw data. Don't save multiple versions of the raw data. Treat the data (and its format) as immutable. The code you write should move the raw data through a pipeline to your final analysis. You shouldn't have to run all of the steps every time you want to make a new figure (q.v. [Analysis is a DAG](#analysis-is-a-dag)), but anyone should be able to reproduce the final products with only the code in `src` and the data in `data/raw`.
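+
+As a hypothetical illustration (the file and column names below are made up), a pipeline step in `src/data/make_dataset.py` might read from `data/raw` and write its cleaned output to `data/processed`, leaving the raw files untouched:
+
+```python
+# Sketch of a pipeline step: read the immutable raw data, clean it,
+# and write the result to a separate folder; data/raw is never modified.
+import pandas as pd
+
+RAW_PATH = "data/raw/survey_responses.csv"           # immutable input (made-up name)
+PROCESSED_PATH = "data/processed/survey_clean.csv"   # derived, reproducible output
+
+
+def make_dataset():
+    df = pd.read_csv(RAW_PATH)
+    df = df.dropna(subset=["response"])  # example cleaning step
+    df.to_csv(PROCESSED_PATH, index=False)
+
+
+if __name__ == "__main__":
+    make_dataset()
+```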
+
+Also, if data is immutable, it doesn't need source control in the same way that code does. Therefore, ***by default, the data folder is included in the .gitignore file.*** If you have a small amount of data that rarely changes, you may want to include the data in the repository. GitHub currently warns if files are over 50MB and rejects files over 100MB. Some other options for storing/syncing large data include [AWS S3](https://aws.amazon.com/s3/) with a syncing tool (e.g., [`s3cmd`](http://s3tools.org/s3cmd)), [Git Large File Storage](https://git-lfs.github.com/), [Git Annex](https://git-annex.branchable.com/), and [dat](http://dat-data.com/). Currently by default, we ask for an S3 bucket and use `s3cmd` to sync data in the `data` folder with the server.
+
+### Notebooks are for exploration
+
+Notebooks such as the [Jupyter notebook](http://jupyter.org/) and other literate programming tools are very effective for exploratory data analysis. However, these tools can be less effective for reproducing an analysis. When we use notebooks in our work, we often subdivide the `notebooks` folder. For example, `notebooks/exploratory` contains initial explorations, whereas `notebooks/reports` is more polished work that can be exported as HTML to the `reports` directory.
+
+Since notebooks are challenging objects for source control (e.g., diffs of the `json` are often not human-readable and merging is near impossible), we recommend not collaborating directly with others on Jupyter notebooks. There are two steps we recommend for using notebooks effectively:
+
+ 1. Follow a naming convention that shows the owner and the order the analysis was done in. We use the format `<step>-<ghuser>-<description>.ipynb` (e.g., `0.3-bull-visualize-distributions.ipynb`).
+
+ 2. Refactor the good parts. Don't write code to do the same task in multiple notebooks. If it's a data preprocessing task, put it in the pipeline at `src/data/make_dataset.py` and load data from `data/interim`. If it's useful utility code, refactor it to `src` and import it into notebooks with a cell like the following. If updating the system path is icky to you, we'd recommend making a Python package (there is a [cookiecutter for that](https://github.com/audreyr/cookiecutter-pypackage) as well) and installing that as an editable package with `pip install -e`.
+
+```python
+# Load the "autoreload" extension
+%load_ext autoreload
+
+# always reload modules marked with "%aimport"
+%autoreload 1
+
+import os
+import sys
+
+# add the 'src' directory as one where we can import modules
+src_dir = os.path.join(os.getcwd(), os.pardir, 'src')
+sys.path.append(src_dir)
+
+# import my method from the source code
+%aimport features.build_features
+from features.build_features import remove_invalid_data
+```
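+
+If you do take the package route, the packaging boilerplate is small. A minimal `setup.py` sketch (the name, version, and description here are placeholders) might look like this:
+
+```python
+# Minimal setup.py so the project's code can be installed with `pip install -e .`
+from setuptools import find_packages, setup
+
+setup(
+    name="src",  # placeholder package name
+    version="0.1.0",  # placeholder version
+    packages=find_packages(),
+    description="Source code for this analysis",
+)
+```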
+
+### Analysis is a DAG
+
+Often in an analysis you have long-running steps that preprocess data or train models. If these steps have been run already (and you have stored the output somewhere like the `data/interim` directory), you don't want to wait to rerun them every time. We prefer [`make`](https://www.gnu.org/software/make/) for managing steps that depend on each other, especially the long-running ones. Make is a common tool on Unix platforms (and [is available for Windows]()). Following the [`make` documentation](https://www.gnu.org/software/make/), [Makefile conventions](https://www.gnu.org/prep/standards/html_node/Makefile-Conventions.html#Makefile-Conventions), and [portability guide](http://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/autoconf-2.69/html_node/Portable-Make.html#Portable-Make) will help ensure your Makefiles work effectively across systems. Here are [some](http://zmjones.com/make/) [examples](http://blog.kaggle.com/2012/10/15/make-for-data-scientists/) to [get started](https://web.archive.org/web/20150206054212/http://www.bioinformaticszen.com/post/decomplected-workflows-makefiles/).
+
+There are other tools for managing DAGs that are written in Python instead of a DSL (e.g., [Paver](http://paver.github.io/paver/#), [Luigi](http://luigi.readthedocs.org/en/stable/index.html), [Airflow](http://pythonhosted.org/airflow/cli.html), [Snakemake](https://bitbucket.org/snakemake/snakemake/wiki/Home), [Ruffus](http://www.ruffus.org.uk/), or [Joblib](https://pythonhosted.org/joblib/memory.html)). Feel free to use these if they are more appropriate for your analysis.
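+
+At the lighter-weight end of that spectrum, even simple on-disk caching goes a long way. Here is a minimal sketch (not part of the template; the paths and file name are made up) using joblib's `Memory` to cache a long-running preprocessing step under `data/interim`, so reruns skip work that has already been done:
+
+```python
+# Caching sketch with joblib: the decorated function only recomputes when its
+# arguments change; otherwise the cached result on disk is reused.
+from joblib import Memory
+
+memory = Memory("data/interim/cache", verbose=0)
+
+
+@memory.cache
+def preprocess(path):
+    # Stand-in for an expensive step; the real work would live in src/.
+    with open(path) as f:
+        return [line.strip().lower() for line in f]
+
+
+features = preprocess("data/raw/example.txt")  # made-up file name
+```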
+
+### Build from the environment up
+
+The first step in reproducing an analysis is always reproducing the computational environment it was run in. You need the same tools, the same libraries, and the same versions to make everything play nicely together.
+
+One effective approach to this is to use [virtualenv](https://virtualenv.pypa.io/en/latest/) (we recommend [virtualenvwrapper](https://virtualenvwrapper.readthedocs.org/en/latest/) for managing virtualenvs). By listing all of your requirements in the repository (we include a `requirements.txt` file) you can easily track the packages needed to recreate the analysis. Here is a good workflow:
+
+ 1. Run `mkvirtualenv` when creating a new project
+ 2. `pip install` the packages that your analysis needs
+ 3. Run `pip freeze > requirements.txt` to pin the exact package versions used to recreate the analysis
+ 4. If you find you need to install another package, run `pip freeze > requirements.txt` again and commit the changes to version control.
+
+If you have more complex requirements for recreating your environment, consider a container- or virtual machine-based approach such as [Docker](https://www.docker.com/) or [Vagrant](https://www.vagrantup.com/). Both of these tools use text-based formats (Dockerfile and Vagrantfile, respectively) that you can easily add to source control to describe how to create an environment with the requirements you need.
+
+## Contributing
+
+The Cookiecutter Data Science project is opinionated, but not afraid to be wrong. Best practices change, tools evolve, and lessons are learned. **The goal of this project is to make it easier to start, structure, and share an analysis.** [Pull requests](https://github.com/drivendata/cookiecutter-data-science/pulls) and [filing issues](https://github.com/drivendata/cookiecutter-data-science/issues) are encouraged. We'd love to hear what works for you, and what doesn't.
+
+If you use the Cookiecutter Data Science project, link back to this page or [give us a holler](https://twitter.com/drivendataorg) and [let us know](mailto:info@drivendata.org)!
+
+## Links to related projects and references
+
+Project structure and reproducibility are talked about more in the R research community. Here are some projects and blog posts that may help you out if you're working in R.
+
+ - [Project Template](http://projecttemplate.net/index.html) - An R data analysis template
+ - "[Designing projects](http://nicercode.github.io/blog/2013-04-05-projects/)" on Nice R Code
+ - "[My research workflow](http://www.carlboettiger.info/2012/05/06/research-workflow.html)" on Carlboettifer.info
+ - "[A Quick Guide to Organizing Computational Biology Projects](http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1000424)" in PLOS Computational Biology
+
+Finally, a huge thanks to the [Cookiecutter](https://cookiecutter.readthedocs.org/en/latest/) project ([github](https://github.com/audreyr/cookiecutter)), which is helping us all spend less time thinking about and writing boilerplate and more time getting things done.
diff --git a/docs/mkdocs.yml b/docs/mkdocs.yml
index 8d311bf..6ecf669 100644
--- a/docs/mkdocs.yml
+++ b/docs/mkdocs.yml
@@ -1,4 +1,9 @@
-site_name: My Docs
+site_name: Cookiecutter Data Science
+site_description: A project template and directory structure for Python data science projects.
+site_favicon: favicon.ico
+repo_url: https://github.com/drivendata/cookiecutter-data-science
+copyright: Project maintained by the friendly folks at DrivenData.
+google_analytics: ['UA-54096005-4', 'drivendata.github.io/cookiecutter-data-science/']
theme: cinder
pages:
- Home: index.md