Project author: nerevu

Project description:
UNDP Climate Change Data Collector
Primary language: Python
Repository: git://github.com/nerevu/hdxscraper-undp-climate.git
Created: 2015-12-04T15:29:11Z
Project page: https://github.com/nerevu/hdxscraper-undp-climate

License: MIT License



UNDP Climate Change Data Collector

HDX collector for UNDP Climate Change Data.

Introduction

This collector operates in the following way:

  • Downloads a fixed-width text file of climate data for each country
  • Parses and enriches the data with the country name and a unique row identifier (see the sketch after this list)
  • Adds new records to the database climate table
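
For illustration, here is a minimal sketch of the parse-and-enrich step. The column names and widths are assumptions made for this example only, not the actual UNDP fixed-width layout, and the row identifier is derived here as an MD5 hash of the country plus the raw line.

    from hashlib import md5

    # Hypothetical column layout; the real UNDP fixed-width format differs.
    WIDTHS = (8, 8, 12)
    NAMES = ('year', 'month', 'value')

    def parse_fixed_width(path, country):
        """Yield one enriched record per line of a fixed-width text file."""
        with open(path) as f:
            for line in f:
                record, start = {}, 0
                for name, width in zip(NAMES, WIDTHS):
                    record[name] = line[start:start + width].strip()
                    start += width
                # Enrich with the country name and a deterministic row id
                record['country'] = country
                record['rid'] = md5((country + line).encode('utf-8')).hexdigest()
                yield record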

With hdxscraper-undp-climate, you can

  • Save UNDP Climate Change Data to an external database
  • Create CKAN datasets/packages for each database table
  • Upload ScraperWiki generated CSV files into a CKAN instance
  • Update resources previously uploaded to CKAN with new metadata

View the live data

Requirements

hdxscraper-undp-climate has been tested on the following configuration:

  • Mac OS X 10.9.5
  • Python 2.7.10

hdxscraper-undp-climate requires the following in order to run properly:

Setup

local

(You are using a virtualenv, right?)

  1. git clone https://github.com/reubano/hdxscraper-undp-climate.git
  2. pip install -r requirements.txt
  3. manage setup

ScraperWiki Box

  1. rm -rf tool
  2. git clone https://github.com/reubano/hdxscraper-undp-climate.git tool
  3. cd tool
  4. make setup

Usage

local

  1. manage run

ScraperWiki Box

  1. source venv/bin/activate
  2. screen manage -m Scraper run
  3. Now press `Ctrl-a d`

The results will be stored in a SQLite database, scraperwiki.sqlite.
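
Since the records go into the climate table (see the Introduction), you can inspect the scraped results with a short snippet like the following:

    import sqlite3

    # Print the column names and the first few rows of the climate table.
    conn = sqlite3.connect('scraperwiki.sqlite')
    cur = conn.execute('SELECT * FROM climate LIMIT 5')
    print([col[0] for col in cur.description])  # column names
    for row in cur:
        print(row)
    conn.close()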

Upload tables to HDX/CKAN

upload to production site

  1. manage upload

upload to staging site

  1. manage upload -s
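
Under the hood, an upload boils down to CKAN API calls. A rough sketch using the ckanapi client is shown below; the dataset id and file name are placeholders, not the project's actual values.

    import os
    from ckanapi import RemoteCKAN

    ckan = RemoteCKAN(
        os.environ['CKAN_PROD_URL'],
        apikey=os.environ['CKAN_API_KEY'],
        user_agent=os.environ.get('CKAN_USER_AGENT', 'hdxscraper-undp-climate'))

    # Upload a ScraperWiki generated CSV as a resource on an existing dataset.
    # 'undp-climate-change-data' and 'climate.csv' are placeholders.
    with open('climate.csv', 'rb') as f:
        ckan.action.resource_create(
            package_id='undp-climate-change-data',
            name='climate',
            format='CSV',
            upload=f)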

Update tables on HDX/CKAN with new metadata

update dataset on production site

  1. manage update

update dataset on staging site

  1. manage update -s
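
A metadata-only update is a similar CKAN API call. The sketch below assumes the ckanapi client and uses placeholder values for the dataset id and metadata fields; targeting CKAN_REMOTE_URL corresponds to the -s (staging) flag above.

    import os
    from ckanapi import RemoteCKAN

    # Target the staging instance, mirroring the -s flag.
    ckan = RemoteCKAN(os.environ['CKAN_REMOTE_URL'],
                      apikey=os.environ['CKAN_API_KEY'])

    # Patch only the metadata fields you pass; the id and notes text
    # are placeholders.
    ckan.action.package_patch(
        id='undp-climate-change-data',
        notes='UNDP Climate Change Data, updated metadata')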

Update ScraperWiki box with new code

  1. cd tool
  2. make update
  3. source venv/bin/activate
  4. screen manage -m Scraper run
  5. Now press `Ctrl-a d`

Configuration

hdxscraper-undp-climate will use the following environment variables if set:

Environment Variable    Description
CKAN_API_KEY            Your CKAN API key
CKAN_PROD_URL           The remote URL of your production CKAN instance
CKAN_REMOTE_URL         The remote URL of your staging CKAN instance
CKAN_USER_AGENT         Your user agent
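
These variables are read from the environment at run time. The snippet below shows the usual pattern; the fallback values are illustrative placeholders, not the project's real defaults.

    import os

    # Read the configuration from the environment; fallbacks are placeholders.
    CKAN_API_KEY = os.getenv('CKAN_API_KEY')
    CKAN_PROD_URL = os.getenv('CKAN_PROD_URL', 'https://ckan.example.org')
    CKAN_REMOTE_URL = os.getenv('CKAN_REMOTE_URL', 'https://staging.ckan.example.org')
    CKAN_USER_AGENT = os.getenv('CKAN_USER_AGENT', 'hdxscraper-undp-climate')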

Creating a new collector

If you would like to create a collector or scraper from scratch, check out cookiecutter-collector.

  1. pip install cookiecutter
  2. cookiecutter https://github.com/reubano/cookiecutter-collector.git

Contributing

Code

  1. fork
  2. commit
  3. submit PR
  4. ???
  5. PROFIT!!!

Document

  • improve this readme
  • add comments to confusing parts of the code
  • write a “Getting Started” guide
  • write additional deployment instructions (Heroku, AWS, Digital Ocean, GAE)

QA

  1. follow this guide and see if everything works as expected
  2. if something doesn’t work, please submit an issue

License

hdxscraper-undp-climate is distributed under the MIT License.