Project author: flokkr

Description:
Examples to run Hadoop/Spark clusters locally with docker-compose.
Language: RobotFramework
Repository: git://github.com/flokkr/runtime-compose.git
Created: 2017-07-29T22:40:21Z
Project page: https://github.com/flokkr/runtime-compose

License:



Docker Compose based Hadoop/Spark cluster

This repository contains docker-compose files that demonstrate usage of the flokkr docker images in a simple, single-host setup.

To start a cluster, go to a subdirectory and bring up the containers with

  docker-compose up -d
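As a rough sketch, a compose file in one of the subdirectories might look like the following. The image name, command arguments, and port below are illustrative assumptions about the flokkr convention, not copied from this repository:

```yaml
# Hypothetical sketch of a docker-compose.yml for an HDFS cluster;
# the actual files in the subdirectories may differ.
version: "2"
services:
  namenode:
    image: flokkr/hadoop          # assumed image name
    command: ["hdfs", "namenode"]
    ports:
      - 9870:9870                 # assumed NameNode web UI port
    env_file:
      - ./config                  # external environment variable file
  datanode:
    image: flokkr/hadoop          # assumed image name
    command: ["hdfs", "datanode"]
    env_file:
      - ./config
```

Because the datanode service publishes no fixed host ports, it can be scaled to multiple instances on the same machine.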

To scale a service:

  docker-compose scale datanode=1

Attributes

| Topic | Solution |
| --- | --- |
| **Configuration management** | |
| Source of config files | docker-compose external environment variable file |
| Configuration preprocessing | envtoconf (converts environment variables to configuration file formats) |
| Automatic restart on config change | Not supported, `docker-compose up` is required |
| **Provisioning and scheduling** | |
| Multihost support | NO |
| Requirements on the hosts | docker daemon and docker-compose |
| Definition of the containers per host | N/A, one docker-compose file for the local host |
| Scheduling (find hosts with available resources) | NO, localhost only |
| Failover on host crash | NO |
| Scale up/down | Easy with `docker-compose scale datanode=3` |
| Multi-tenancy (multiple clusters) | Partial (from multiple checkout directories, after port adjustment) |
| **Network** | |
| Network between containers | dedicated network per docker-compose file |
| DNS | YES, handled by the docker network |
| Service discovery | NO (DNS based) |
| Data locality | NO |
| Availability of the ports | Published according to the docker-compose files |
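To illustrate the envtoconf preprocessing step: in the flokkr images, environment variables whose names encode a target file and a key are rendered into the corresponding configuration file at startup. The specific variable names and values below are an assumed example of that pattern, not entries copied from this repository:

```
# Hypothetical lines from the external environment variable file.
# envtoconf would render each FILENAME_key=value pair into the named
# file, e.g. the first line becomes an fs.defaultFS property entry
# in core-site.xml inside the container.
CORE-SITE.XML_fs.defaultFS=hdfs://namenode:9000
HDFS-SITE.XML_dfs.replication=1
```

This keeps all cluster configuration in one flat env file that docker-compose injects into every service, instead of mounting per-format config files into the containers.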