
Update to structure

Branch: main
Peter Bull, 9 years ago
Commit 74d36bd24c
Changed files:

  1. {{ cookiecutter.repo_name }}/README.md (44 lines changed)
  2. {{ cookiecutter.repo_name }}/reports/figures/.gitkeep (0 lines changed)
  3. {{ cookiecutter.repo_name }}/src/features/.gitkeep (0 lines changed)
  4. {{ cookiecutter.repo_name }}/src/features/build_features.py (0 lines changed)
  5. {{ cookiecutter.repo_name }}/src/make_dataset.py (23 lines changed)
  6. {{ cookiecutter.repo_name }}/src/model/.gitkeep (0 lines changed)
  7. {{ cookiecutter.repo_name }}/src/model/predict_model.py (0 lines changed)
  8. {{ cookiecutter.repo_name }}/src/model/train_model.py (0 lines changed)

{{ cookiecutter.repo_name }}/README.md (44 lines changed)

@@ -3,18 +3,50 @@
 {{cookiecutter.description}}
 
-Organization
+Project Organization
 ------------
 
+├── LICENSE
+├── Makefile           <- Makefile with commands like `make data` or `make train`
+├── README.md          <- The top-level README for developers using this project.
 ├── data
 │   ├── external       <- Data from third party sources.
-│   ├── interim        <- Intermediate data that has been transformed goes.
+│   ├── interim        <- Intermediate data that has been transformed.
 │   ├── processed      <- The final, canonical data sets for modeling.
 │   └── raw            <- The original, immutable data dump.
-├── figures            <- Graphic output from modeling to be used in reports
-├── notebooks          <- Jupyter or Beaker notebooks. Naming convention is a number (for ordering),
+│
+├── docs               <- A default Sphinx project; see sphinx-doc.org for details
+│
+├── models             <- trained and serialized models, model predictions, or model summaries
+│
+├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
 │                         the creator's initials, and a short `-` delimited description, e.g.
 │                         `1.0-jqp-initial-data-exploration`.
-├── references         <- Reports, data dictionaries, manuals, and all other explanatory materials.
-└── src                <- Source code. Possible subdirectories might be `scripts` or `API` for
-                          projects with larger codebases.
+│
+├── references         <- Data dictionaries, manuals, and all other explanatory materials.
+│
+├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
+│   └── figures        <- Generated graphics and figures to be used in reporting
+│
+├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
+│                         generated with `pip freeze > requirements.txt`
+│
+├── src                <- Source code for use in this project.
+│   ├── __init__.py    <- Makes src a Python module
+│   │
+│   ├── data           <- Scripts to download or generate data
+│   │   └── make_dataset.py
+│   │
+│   ├── features       <- Scripts to turn raw data into features for modeling
+│   │   └── build_features.py
+│   │
+│   └── models         <- scripts to train models and then use trained models to make
+│       │                 predictions
+│       ├── predict_model.py
+│       └── train_model.py
+│
+└── tox.ini            <- tox file with settings for running tox; see tox.testrun.org
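
The build_features.py, predict_model.py, and train_model.py files introduced here are empty placeholders. Below is a minimal sketch, an assumption rather than part of this commit, of how one of them might later be filled in, reusing the click-plus-logging pattern from the deleted src/make_dataset.py shown further down; the filename matches the commit, but the body is illustrative only.

# -*- coding: utf-8 -*-
# Illustrative sketch only (not part of commit 74d36bd24c): how
# src/features/build_features.py could mirror make_dataset.py's CLI style.
import logging

import click


@click.command()
@click.argument('input_filepath', type=click.Path(exists=True))
@click.argument('output_filepath', type=click.Path())
def main(input_filepath, output_filepath):
    """Turn cleaned data at input_filepath into model-ready features at output_filepath."""
    logger = logging.getLogger(__name__)
    logger.info('building features from %s', input_filepath)
    # feature engineering code would go here


if __name__ == '__main__':
    log_fmt = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
    logging.basicConfig(level=logging.INFO, format=log_fmt)
    main()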

{{ cookiecutter.repo_name }}/figures/.gitkeep → {{ cookiecutter.repo_name }}/reports/figures/.gitkeep (renamed, 0 lines changed)

{{ cookiecutter.repo_name }}/src/features/.gitkeep (0 lines changed)

{{ cookiecutter.repo_name }}/src/features/build_features.py (0 lines changed)

{{ cookiecutter.repo_name }}/src/make_dataset.py (deleted, 23 lines changed)

@@ -1,23 +0,0 @@
# -*- coding: utf-8 -*-
import os
import logging

import click
import dotenv


@click.command()
@click.argument('input_filepath', type=click.Path(exists=True))
@click.argument('output_filepath', type=click.Path())
def main(input_filepath, output_filepath):
    """Run data processing scripts to turn raw data into the final data set."""
    logger = logging.getLogger(__name__)
    logger.info('making final data set from raw data')


if __name__ == '__main__':
    log_fmt = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
    logging.basicConfig(level=logging.INFO, format=log_fmt)

    # locate the project root and load environment variables from its .env file
    project_dir = os.path.join(os.path.dirname(__file__), os.pardir)
    dotenv_path = os.path.join(project_dir, '.env')
    dotenv.load_dotenv(dotenv_path)

    main()
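
For reference, the deleted script exposed its entry point as a click command, so it could be run from the project root (e.g. `python src/make_dataset.py data/raw data/processed`) or exercised in-process with click's test runner. A hedged sketch of the latter follows, assuming `main` has been imported from the module above; the file names are placeholders, not part of the commit.

# Hypothetical usage sketch, not part of the commit: drive the click command
# above in-process with click's test runner (assumes `main` is imported).
from click.testing import CliRunner

runner = CliRunner()
with runner.isolated_filesystem():
    open('raw.csv', 'w').close()  # input_filepath must exist for click.Path(exists=True)
    result = runner.invoke(main, ['raw.csv', 'processed.csv'])
    assert result.exit_code == 0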

{{ cookiecutter.repo_name }}/src/model/.gitkeep (0 lines changed)

{{ cookiecutter.repo_name }}/src/model/predict_model.py (0 lines changed)

{{ cookiecutter.repo_name }}/src/model/train_model.py (0 lines changed)
