Deploy services docker-compose style to AWS Elastic Container Service
docker-compose-like deployments for AWS Elastic Container Service
Deploy a complete ECS cluster with a single command:
$ ecs-compose cluster deploy my-cluster -f my-services.yml
You just need to describe the services that will be deployed within the cluster in a YAML stackfile like the one below.
You can also use the same YAML configuration file when creating or updating a cluster definition through the API (e.g. POST / PUT /api/clusters).
The YAML stackfile is similar in form to a docker-compose file and has the following structure:
vpc:
  id: vpc-xxx
  subnets:
    public: ["subnet-xxx", "subnet-xxx"]
    private: ["subnet-xxx", "subnet-xxx"]
  security_groups:
    public: ["sg-xxx"]
    private: ["sg-xxx"]

logging:
  log_driver: awslogs
  options:
    awslogs-group: /ecs/staging
    awslogs-region: us-east-1
    awslogs-stream-prefix: svc

service_discovery:
  namespace: example.sd

defaults:
  memory: 930
  environment:
    - JAVA_OPTS: -Duser.timezone="UTC" -Xms256m -Xmx640m -XX:MaxMetaspaceSize=256m
    - SPRING_PROFILES_ACTIVE: staging
    - LOGGING_LEVEL_ROOT: INFO
    - SERVER_PORT: 80

services:
  - edge-service:
      image: xxx.dkr.ecr.us-east-1.amazonaws.com/edge-service:latest
      ports:
        - "8080:8080"
      environment:
        - SERVER_PORT: 8080
      desired_count: 1
      elb:
        type: public
        protocol: HTTPS
        ports:
          public: 443
          container: 8080
        certificates:
          - arn:aws:acm:us-east-1:xxx:certificate/xxx-xxx-xxx-xxx-xxx
        dns:
          hosted_zone_id: xxxx
          record_name: staging.example.com
        healthcheck:
          protocol: HTTP
          port: 8080
          path: /health
      deployment_configuration:
        maximum_percent: 100
        minimum_healthy_percent: 0
      placement_constraints:
        - expression: "attribute:ecs.instance-type =~ p3.*"
          type: "memberOf"
        - expression: "attribute:ecs.availability-zone == us-east-1a"
          type: "memberOf"

  - user-service:
      image: xxx.dkr.ecr.us-east-1.amazonaws.com/user-service:latest
      ports:
        - "80:80"
      desired_count: 1
      dns_discovery:
        name: user.staging

  - deeplearning:
      image: xxx.dkr.ecr.us-east-1.amazonaws.com/deep-learning:latest
      ports:
        - "8080:8080"
      desired_count: 1
      gpus: 1
      dns_discovery:
        name: deeplearning.staging

  # WORKERS NO PORTS EXPOSED
  - email-worker:
      image: xxx.dkr.ecr.us-east-1.amazonaws.com/email-worker:xxx
vpc:
  id: vpc-xxx
  subnets:
    public: ["subnet-xxx", "subnet-xxx"]
    private: ["subnet-xxx", "subnet-xxx"]
  security_groups:
    public: ["sg-xxx"]
    private: ["sg-xxx"]
In this section (vpc) you specify the id of the VPC the cluster is deployed into, the public and private subnets where the services will be placed (vpc.subnets), and the public and private security groups attached to them (vpc.security_groups).
logging:
  log_driver: awslogs
  options:
    awslogs-group: /ecs/staging
    awslogs-region: us-east-1
    awslogs-stream-prefix: svc
In this section you specify the logging driver and, optionally, the options passed to that specific driver. You can use gelf, awslogs, syslog, etc.
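For example, a sketch of the same section using the gelf driver instead; the Logstash endpoint below is only a placeholder, not part of the original example:

logging:
  log_driver: gelf
  options:
    gelf-address: udp://logstash.example.sd:12201   # placeholder endpoint
    tag: staging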
service_discovery:
  namespace: example.sd
In this section you specify the private namespace that will be used for registering the services with DNS-based service discovery. You can't assign multilevel domains here; only top-level domains are allowed, e.g. example.local.
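To illustrate, assuming the namespace above, a service registered with dns_discovery (see the user-service example later in this file) becomes resolvable inside the VPC under that namespace:

service_discovery:
  namespace: example.sd

# a service declaring `dns_discovery: { name: user.staging }`
# is then resolvable inside the VPC as user.staging.example.sd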
defaults:
  memory: 930
  environment:
    - JAVA_OPTS: -Duser.timezone="UTC" -Xms256m -Xmx640m -XX:MaxMetaspaceSize=256m
    - SPRING_PROFILES_ACTIVE: staging
    - LOGGING_LEVEL_ROOT: INFO
    - SERVER_PORT: 80
In this section you specify the hard memory limit (in MiB) presented to each container and the default environment variables applied to every service in the stackfile. The global environment section exists so you don't have to copy the same variables into every service; if you need to overwrite a specific environment variable, declare it within the service definition and that value takes precedence over the global one.
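For example, a minimal sketch based on the stackfile above, where the global SERVER_PORT: 80 is overridden for a single service:

defaults:
  memory: 930
  environment:
    - SERVER_PORT: 80

services:
  - edge-service:
      image: xxx.dkr.ecr.us-east-1.amazonaws.com/edge-service:latest
      environment:
        - SERVER_PORT: 8080   # overrides the global SERVER_PORT for this service only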
services:
  - edge-service:
      image: xxx.dkr.ecr.us-east-1.amazonaws.com/edge-service:latest
      ports:
        - "8080:8080"
      environment:
        - SERVER_PORT: 8080
      desired_count: 1
      elb:
        type: public
        protocol: HTTPS
        ports:
          public: 443
          container: 8080
        certificates:
          - arn:aws:acm:us-east-1:xxx:certificate/xxx-xxx-xxx-xxx-xxx
        dns:
          hosted_zone_id: xxxx
          record_name: staging.example.com
        healthcheck:
          protocol: HTTP
          port: 8080
          path: /health
In this section you define the array of services that will be deployed in the cluster.
Each array item corresponds to a different service: you specify the name of the service and then its properties.
services:
  - user-service:
      image: xxx.dkr.ecr.us-east-1.amazonaws.com/user-service:latest
      ports:
        - "80:80"
      desired_count: 1
      dns_discovery:
        name: user.staging
With dns_discovery the service is registered in the private namespace using DNS-based service discovery; the resulting record follows the pattern <service_name>.<cluster_name/environment>.<namespace> (with the namespace above, user.staging is registered as user.staging.example.sd).
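As an illustration, another service in the same stackfile could then reach user-service through that record; the service, image and variable names below are placeholders, not part of the original example:

  - billing-service:
      image: xxx.dkr.ecr.us-east-1.amazonaws.com/billing-service:latest
      environment:
        - USER_SERVICE_URL: http://user.staging.example.sd   # resolved through the private namespace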
  - email-worker:
      image: xxx.dkr.ecr.us-east-1.amazonaws.com/email-worker:xxx
There are no exposed ports and no load balancer or service discovery configuration; only the container image definition is needed and, if required, a desired_count.
The project is available on PyPI. Simply run::
$ pip install ecs-compose
ecs-compose looks for AWS credentials by searching through a list of possible locations and stops as soon as it finds them.
Please read the boto3 documentation for more details
(http://boto3.readthedocs.org/en/latest/guide/configuration.html#configuration).
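For example, a minimal shared credentials setup that boto3 (and therefore ecs-compose) can pick up might look like the following; the key values are placeholders:

# ~/.aws/credentials
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# ~/.aws/config
[default]
region = us-east-1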
Or just run::
$ aws configure
Currently the following actions are supported:

Cluster related operations:

- deploy / redeploy a single service or multiple services at once, as defined in the YAML stackfile
- destroy the entire AWS ECS cluster, with all its services and the load balancers attached to them
- list all deployed services within the specified cluster as a YAML stackfile

Individual service related operations:

- destroy an individual service within the specified cluster, together with the load balancer associated with it
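A hypothetical usage sketch follows; apart from cluster deploy, which is shown above, the subcommand names and argument order are assumptions, so verify them with the --help output below::

$ ecs-compose cluster deploy my-cluster -f my-services.yml    # deploy / redeploy the services in the stackfile
$ ecs-compose cluster describe my-cluster                     # assumption: list deployed services as a YAML stackfile
$ ecs-compose cluster destroy my-cluster                      # assumption: destroy the cluster, its services and load balancers
$ ecs-compose service destroy my-cluster my-service           # assumption: destroy a single service and its load balancer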
For detailed information about the available actions, arguments and options, run::
$ ecs-compose --help
$ ecs-compose cluster --help
$ ecs-compose service --help