Distributed System for large scale data management - Ensimag 2018/2019
automation/ : Bash scripts to automate the deployment of the infrastructure and the application
config/ : Kubernetes and AWS configuration files
src/ : Source code of our application
To deploy and run the solution automatically, follow these steps:
Follow this guide to set up at least the ~/.aws/credentials and ~/.aws/config files (the config file must define at least the region): https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
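For reference, a minimal setup of those two files looks like this (the key values are placeholders and the region eu-west-3 is only an example; use your own credentials and target region):

~/.aws/credentials :
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY

~/.aws/config :
[default]
region = eu-west-3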
pip3 install -r requirements.txt
(You can use virtualenv to isolate this project from your other projects: https://virtualenv.pypa.io/en/latest/)
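For example, a minimal virtualenv workflow (the environment name venv below is arbitrary):

virtualenv -p python3 venv
source venv/bin/activate
pip3 install -r requirements.txt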
You can configure the deployment through the config/instances.ini file.
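Purely as an illustration of the INI format, a hypothetical layout could look like the following; the real section and key names are the ones already present in config/instances.ini:

[instances]
; hypothetical keys, check config/instances.ini for the actual ones
region = eu-west-3
worker_count = 3
instance_type = t2.medium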
Go to the automation folder and launch the deployment script:
cd automation
./deploy.sh
Once the deployment is finished, you can interact with the cluster as follows:
cd ..
Retrieve the public IP of the master node:
./manage.py read type get-master-public-ip
Then connect to it over SSH (replace MASTER_PUBLIC_IP with the value returned above):
ssh -i ssh/Smackey ubuntu@MASTER_PUBLIC_IP
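If the command prints only the raw IP address, the two steps can also be combined in one go (a sketch, assuming the script outputs a single IP and nothing else):

MASTER_PUBLIC_IP=$(./manage.py read type get-master-public-ip)
ssh -i ssh/Smackey ubuntu@$MASTER_PUBLIC_IP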
On the master node, you can use kubectl commands to interact with the cluster, for example:
kubectl get pods
At startup, some pods may show the status 'Error': the cluster just needs some time to reach a globally consistent state.
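A few other standard kubectl commands can help while the cluster stabilizes (generic Kubernetes CLI usage, not specific to this project; POD_NAME is a placeholder):

kubectl get nodes               # list the nodes and their status
kubectl get pods -o wide        # show on which node each pod is scheduled
kubectl describe pod POD_NAME   # inspect the events of a pod stuck in 'Error'
kubectl logs POD_NAME           # read the logs of a pod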
Retrieve the public IPs of the worker nodes:
./manage.py read type get-workers-public-ip
The application can then be accessed at (replace ONE_WORKER_PUBLIC_IP with any of the returned addresses):
http://ONE_WORKER_PUBLIC_IP:32222
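To quickly check from the command line that the service answers, you can also query a worker directly (any worker IP returned above should work):

curl -I http://ONE_WORKER_PUBLIC_IP:32222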
All contributions are welcome.
Please read CONTRIBUTING.md before contributing to this project.