© 2020 Strange Loop
You have installed Docker, you know how to run containers, and you have written Dockerfiles to build container images for your applications (or parts of your applications). Now what?
Now comes the next part: connecting multiple containers together, and deploying them on a cluster of Docker hosts, instead of your single development machine.
This is what we want to cover here, along with some production-related questions: how do you manage logs? Backups? Remote access? Security upgrades?
In this workshop, we will present the networking model of containers, and show how to run containerized applications spanning multiple Docker hosts. We will give recipes to cleanly implement ops tasks like logging, backups, and more. Finally, we will discuss orchestration and scheduling. We will present solutions like Swarm and Mesos, and see some of them in action on an actual microservices architecture.
You won't need to pre-install Docker before the workshop: each student will be given credentials to connect to an individual virtual machine. So all you need is a computer with an SSH client!
Jerome is a senior engineer at Docker, where he helps others to containerize all the things. In another life he built and operated Xen clouds when EC2 was just the name of a plane, developed a GIS to deploy fiber interconnects through the French subway, managed commando deployments of large-scale video streaming systems in bandwidth-constrained environments such as conference centers, operated and scaled the dotCloud PaaS, and performed various other feats of technical wizardry. When annoyed, he threatens to replace things with a very small shell script.
AJ does technical community outreach as chief of counter-bullshit operations at Gandi.net, a domain name registrar, web host, and SSL provider that sets respect for customer rights, internet freedom, and open source as its top priorities. Their life mission is to do away with unnecessary hierarchy, yak-shaving bikeshedders, and gendered pronouns.