An example builder to build a container with Travis CI and push it to a Singularity Registry Server (or other endpoint).
This is a simple example of how you can achieve a reproducible build workflow.
The recipe on master is intended to build Singularity 3.x (with GoLang). If you
are looking for legacy builds of Singularity, see the release/2.6 branch.
Why should this be managed via GitHub?
GitHub, by way of its integration with continuous integration services, makes it easy
to set up a workflow where multiple people can collaborate on a container recipe:
the recipe can be tested (with whatever testing you need), discussed in pull requests,
and then finally pushed to the registry. Importantly, you don’t need to give your
entire team manager permissions to the registry. An encrypted credential that is
only accessible to administrators can do the push upon merge of a discussed change.
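As a rough sketch of how that can work, the push step can be gated on Travis’s built-in environment variables so that only merged changes on master trigger a push; the uri and the registry client below are placeholders.

# only push when building the master branch itself, not a pull request
# (collection/container and the registry client are placeholders)
if [ "${TRAVIS_BRANCH}" == "master" ] && [ "${TRAVIS_PULL_REQUEST}" == "false" ]; then
    /bin/bash .travis/build.sh --uri collection/container --cli registry Singularity
fi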
Why should I use this instead of a service?
You could use a remote builder, but if you do the build in a continuous integration
service you get complete control over it: everything from the version of Singularity
to use, to the tests that you run for your container. You also have a lot more
freedom in the rate of building and the organization of your repository, because
you write the configuration yourself.
Add your Singularity recipes to this repository, and edit the build commands in
the build.sh file. This is where you can specify endpoints
(Singularity Registry, Dropbox, Google Storage, AWS) along with container names
(the uri) and tags. You can build as many recipes as you like; just add another line!
# recipe relative to repository base
- /bin/bash .travis/build.sh Singularity
- /bin/bash .travis/build.sh --uri collection/container --tag tacos --cli google-storage Singularity
- /bin/bash .travis/build.sh --uri collection/container --cli google-drive Singularity
- /bin/bash .travis/build.sh --uri collection/container --cli globus Singularity
- /bin/bash .travis/build.sh --uri collection/container --cli registry Singularity
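For context, build lines like these typically live in the script section of the repository’s .travis.yml. The sketch below shows roughly how that file could be laid out; the install step (a .travis/setup.sh helper that installs Singularity 3.x and the sregistry client) is an assumption here, so check the example repository’s actual file.

language: python
sudo: required

install:
  # hypothetical helper that installs Singularity 3.x and the sregistry client
  - /bin/bash .travis/setup.sh

script:
  # recipe relative to repository base
  - /bin/bash .travis/build.sh Singularity
  # push successful builds to a Singularity Registry Server
  - /bin/bash .travis/build.sh --uri collection/container --cli registry Singularity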
For each client that you use, required environment variables (e.g., credentials to push
or interact with the API) must be defined in the (encrypted) Travis environment. To
know which variables to define, along with usage for the various clients, see
the client-specific pages.
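As a sketch, you can define these variables either in the repository settings on the Travis web interface (Settings, then Environment Variables) or as encrypted values in .travis.yml with the Travis CLI. The variable name below is a placeholder; the real names you need are listed on the client-specific pages.

# add an encrypted variable to .travis.yml with the Travis CLI
# (MY_CLIENT_SECRET is a placeholder for whatever your client requires)
gem install travis
travis login
travis encrypt MY_CLIENT_SECRET=xxxxxxxx --add env.global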
You can clone and tweak, but it’s likely easiest to get started with our example
files and edit them as you need.
We will be working with Travis CI. You can see
example builds for this repository here.
For the example here, we have a single recipe named “Singularity” that is provided
as an input argument to the build script. You could add another
recipe, and then of course call the build more than once.
The build script will name the image based on the recipe, and you can
of course change this.
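For example, a second recipe could be built by calling the build script again with a different recipe file; the recipe name Singularity.dev, the uri, and the tag below are placeholders.

# build the default recipe, then a second (hypothetical) recipe with its own tag
- /bin/bash .travis/build.sh Singularity
- /bin/bash .travis/build.sh --uri collection/container --tag dev --cli registry Singularity.dev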
The basic steps to set up the build are the following:
--uri collection/container: the name to give your container. If you don’t define one, a robot name will be generated. The uri can include the tag, OR you can specify --tag to go along with a name without one. It depends on which is easier for you (see the example after this list).
--cli: the storage client to push with. Providing this tells the build script that you have defined the needed environment variables for your client of choice, and that you want successful builds to be pushed to your storage endpoint. Valid clients include those shown in the example commands above (google-storage, google-drive, globus, registry).
See the .travis.yml for examples of this build.sh command (commented out). If there is some cloud service that you’d like that is not provided, please open an issue.
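To make the --uri versus --tag distinction concrete, here is a sketch of two ways to name a tagged container, assuming the sregistry convention of embedding the tag as collection/container:tag; the names themselves are placeholders.

# tag embedded in the uri
- /bin/bash .travis/build.sh --uri collection/container:tacos --cli registry Singularity
# tag given separately with --tag
- /bin/bash .travis/build.sh --uri collection/container --tag tacos --cli registry Singularity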
If you go to your Travis Profile you can usually select a GitHub organization (or user) and then the repository, and click the toggle button to activate it to build when you push commits.
That’s it for the basic setup! At this point, you will have a continuous integration service that will build your container from a recipe each time that you push. The next step is figuring out where you want to put the finished image(s), and we will walk through this in more detail.
Once the image is built, where can you put it? An easy answer is to use the
Singularity Global Client and
choose one of the many clients
to add a final step to push the image. You then use the same client to pull the
container from your host. Once you’ve decided which endpoints you want to push to,
you will need to define the required (encrypted) environment variables for each client in your Travis settings and add the corresponding --cli flag to your build command.
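As a rough sketch of the pull side, assuming the registry client and the placeholder name collection/container with tag tacos from the examples above, retrieving the image from your host could look like this (credentials for your endpoint must already be configured; see the client-specific pages).

# install the Singularity Global Client
pip install sregistry
# select the client, then pull the pushed container
export SREGISTRY_CLIENT=registry
sregistry pull collection/container:tacos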
You don’t even need to use sregistry to upload a container (or an artifact / result produced from running one via a cron job, maybe?) to an endpoint of choice! There are many places you can deploy to. If you can think of it, it’s on this list. Here is a sampling of some that I’ve tried (and generally like):
Guess what: this setup is totally changeable by you; it’s your build! This means you can do any of the following “advanced” options: