
Start to flesh out READMEs

main · isms · 9 years ago
commit ba99332b2d
  1. .gitattributes (1 line changed)
  2. README.md (4 lines changed)
  3. {{ cookiecutter.repo_name }}/.pylintrc (11 lines changed)
  4. {{ cookiecutter.repo_name }}/README.md (29 lines changed)

.gitattributes

@@ -0,0 +1 @@
* text=auto
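With `text=auto`, Git decides which files are text and normalizes their line endings to LF when they are committed. In an existing checkout, files can be renormalized afterwards with a standard Git command (Git 2.16+):

    git add --renormalize .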

README.md

@@ -1,2 +1,6 @@
cookiecutter-data-science
-------------------------
To start a new project:
cookiecutter git@github.com:drivendata/cookiecutter-data-science.git

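Running that assumes the cookiecutter CLI is already available; if not, it can be installed from PyPI first:

    pip install cookiecutter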
{{ cookiecutter.repo_name }}/.pylintrc

@@ -0,0 +1,11 @@
[MASTER]
load-plugins=pylint_common
[FORMAT]
max-line-length=120
[MESSAGES CONTROL]
disable=missing-docstring,invalid-name
[DESIGN]
max-parents=13
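To check code against this configuration explicitly (for example in CI), pylint can be pointed at the rcfile from the generated project root. Here pylint-common is assumed to be the package that provides the `pylint_common` plugin loaded above, and `src/your_module.py` is only a placeholder path:

    pip install pylint pylint-common
    pylint --rcfile=.pylintrc src/your_module.py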

{{ cookiecutter.repo_name }}/README.md

@@ -1 +1,28 @@
{{cookiecutter.project_name}}
==============================

{{cookiecutter.description}}
Organization
------------
├── data
│   ├── external       <- Data from third party sources.
│   ├── interim        <- Intermediate data that has been transformed.
│   ├── processed      <- The final, canonical data sets for modeling.
│   └── raw            <- The original, immutable data dump.
├── notebooks          <- Jupyter or Beaker notebooks. Naming convention is a number (for ordering),
│                         the creator's initials, and a short `-` delimited description, e.g.
│                         `1.0-jqp-initial-data-exploration`.
├── references         <- Reports, data dictionaries, manuals, and all other explanatory materials.
└── src                <- Source code. Possible subdirectories might be `scripts` or `API` for
                          projects with larger codebases.
Basic Commands
--------------
### Syncing data to S3
* `make sync_data_to_s3` will use `s3cmd` to recursively sync files in `data/` up to `s3://{{ cookiecutter.s3_bucket }}/data/`.
* `make sync_data_from_s3` will use `s3cmd` to recursively sync files from `s3://{{ cookiecutter.s3_bucket }}/data/` to `data/`.
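The Makefile defining these targets is not part of this commit; they presumably reduce to `s3cmd sync` calls roughly like the following (exact options may differ):

    s3cmd sync data/ s3://{{ cookiecutter.s3_bucket }}/data/
    s3cmd sync s3://{{ cookiecutter.s3_bucket }}/data/ data/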