The course starts with the fundamentals of Docker: how it works, how to set it up, and how to get started on leveraging the benefits of this technology. The course goes on to cover more advanced features and shows you how to create and share your own Docker images. Then you will explore Compose by writing a docker-compose.yml file for a social network app, and look at top-down approaches to building network topologies for the social network's containers.
You will then be familiarized with the Swarm workflow and with Kubernetes, Google's tool for setting up a managed cluster. You will learn how to set up Docker's plugin infrastructure and use the customization options. By the end of this course, you will be able to successfully manage your Docker containers, even from inside a Minecraft server with Dockercraft.
What am I going to get from this course?
- Learn the fundamentals of Docker: how it works, how to set it up, and how to get started.
- Learn more advanced features and how to create and share your own Docker images.
- Explore compose by writing a docker-compose.yml file for a social network app.
- Get familiarized with the Swarm workflow and with Kubernetes, Google's tool for setting up a managed cluster.
- Learn how to set up Docker's plugin infrastructure and use the customization options, and manage your Docker containers from inside a Minecraft server with Dockercraft.
Prerequisites and Target Audience
What will students need to know or do before starting this course?
The course assumes basic knowledge of Linux, but supplies everything you need to know to get your own Docker environment up and running.
Who should take this course? Who should not?
If you recognize Docker's importance for innovation in everything from system administration to web development, but aren't sure how to use it to its full potential, this course is for you.
Module 1: Beginning Docker
The Course Overview
This video will offer an overview of the course.
Getting Docker Inside a Vagrant VM
See how to get your own Docker running using a local virtual machine that is controlled by Vagrant.
You want to use Docker but you've never installed it before. We'll get a VM set up with Docker.
Install Vagrant from the website.
Set up a Vagrantfile with "vagrant init 3scale/docker" and then run the "vagrant up" command.
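In the terminal, those steps might look as follows. This is a sketch: it assumes Vagrant and a provider such as VirtualBox are installed, and that the 3scale/docker box from the video is still published.

```shell
# Create a Vagrantfile based on a box that ships with Docker preinstalled
vagrant init 3scale/docker

# Boot the VM and get a shell inside it
vagrant up
vagrant ssh

# Inside the VM, verify that the Docker daemon is reachable
docker version
```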
Containers Versus Virtual Machines
Learn the difference between Docker containers and traditional virtual machines.
You don't know what a Docker container is, but you know what a virtual machine is. So we'll compare containers with virtual machines.
See how fast you can "get into" a VM as compared to a container, first with Bash and then with cat /etc/os-release. Then show the different distros.
VMs run a whole "system" of processes, whereas Docker is made to run just one process (with subprocesses) that you want using ps auxf.
How Docker Works
Get a high-level overview of the objects Docker works with and how they are used.
Understand what Docker containers and images are, and see what the other major concepts that are used with Docker are.
Overview of the concepts of Docker: images, containers, Dockerfiles, and registries.
Understand the details of the layered filesystem and the relationships between images and containers.
Running the Containerized Commands
We have Docker and want to run a command in an isolated container, so we use "docker run" to work with commands in containers.
Run simple foreground commands, such as ls and ping. Attempt an interactive command with apt-get. Also, we introduce the --rm flag.
Run interactive commands, including 'bash', 'apt-get install', and 'vim'
Run a detached SSH server daemon. See how to kill a container process with 'docker kill', and see its output with 'docker logs'.
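Taken together, the commands in this video follow roughly this shape (the sshd image name below is a placeholder for whatever image is used):

```shell
# Foreground commands run in a container, print, and exit;
# --rm removes the container afterwards
docker run --rm ubuntu ls /

# Interactive commands need -i (keep stdin open) and -t (allocate a tty)
docker run -it ubuntu bash

# Detached daemons run in the background with -d
docker run -d --name sshd-demo my-sshd-image   # my-sshd-image is hypothetical

# Inspect a detached container's output, then kill it
docker logs sshd-demo
docker kill sshd-demo
```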
Managing Your Containers
You want to manage containers once you've started creating them. You can list, inspect, view the logs of, stop, and delete them with basic Docker commands.
List the active containers and then all containers. Inspect any container by ID or name.
Show the logs and learn the shortcut for the last container. The docker attach command is also mentioned.
Stop a running container and show the kill option. Remove old containers, and mention committing changes, which is covered next.
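The management commands covered here can be summarized as follows (a sketch, where <container> stands for a container ID or name):

```shell
docker ps                    # list active containers
docker ps -a                 # list all containers, including stopped ones
docker ps -l                 # shortcut: just the last container
docker inspect <container>   # full JSON metadata, by ID or name
docker logs <container>      # show the container's output
docker attach <container>    # attach to a running container's streams
docker stop <container>      # graceful stop (SIGTERM, then SIGKILL)
docker kill <container>      # immediate SIGKILL
docker rm <container>        # remove an old, stopped container
```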
Committing Changes to a Container Image
We want to make an image that has a package (sshd) already installed and configured. So, we'll use Docker commands to make a new image from an existing image.
Interactively install the package in a container started from the Ubuntu image.
Inspect the changes that were made to the container file system.
Commit the changes to a new image and test it out.
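A sketch of that commit workflow (the container and image names are illustrative):

```shell
# Start an interactive container from the Ubuntu image and install sshd
docker run -it --name sshd-setup ubuntu bash
#   (inside the container)
#   apt-get update && apt-get install -y openssh-server
#   exit

# Inspect the filesystem changes made in the container
docker diff sshd-setup

# Commit those changes to a new image, then test it
docker commit sshd-setup myuser/ubuntu-sshd
docker run -it myuser/ubuntu-sshd bash
```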
Sharing a Container on the Index
Once you've made a container you like, you may want to share it with others or make it easy to install on your other machines. To do this, you push the container image to the Docker Index.
Make a docker.io account and then log in with Docker.
Make sure that it's named properly and then push it into the Docker Index.
See the container in the index. Delete the local image, pull it down from the index, and try it out.
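The push-and-verify loop might look like this (myuser/mysshd is a placeholder repository name):

```shell
# Authenticate with your docker.io account
docker login

# The image must be named <username>/<repository> before pushing
docker tag <image-id> myuser/mysshd
docker push myuser/mysshd

# Verify the round trip: remove the local copy, pull it back, run it
docker rmi myuser/mysshd
docker pull myuser/mysshd
docker run -it myuser/mysshd bash
```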
Finding and Using Third-party Containers
Building container images for everything we want to run is difficult, so we can use ready-made containers instead. Find and use container images using the Docker Index and the Docker ecosystem.
Search with the help of commands, check the index, and go to Dockerfile project.
Pull the Redis server image and run it
Run the Redis client in the container from the Redis image. Try it out and stop the container.
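A sketch of finding and running the Redis image:

```shell
# Search the index from the command line, then pull the image
docker search redis
docker pull redis

# Run the server detached, then run the client from the same image,
# linked to the server container
docker run -d --name redis-server redis
docker run -it --link redis-server:redis redis redis-cli -h redis

# Stop the server when done
docker stop redis-server
```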
Writing and Building a Dockerfile
We want to quickly and consistently produce and reproduce a container image, so we use a Dockerfile to define a container.
Create a directory with a file named Dockerfile
Add minimal Dockerfile contents: FROM, MAINTAINER, and RUN
Use 'docker build' to create an image from the Dockerfile
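A minimal Dockerfile along those lines might look as follows. (The author directive at the time of this course was MAINTAINER; it has since been deprecated in favor of a LABEL. The package installed is illustrative.)

```dockerfile
FROM ubuntu
MAINTAINER Your Name <you@example.com>
RUN apt-get update && apt-get install -y openssh-server
```

Build it from the directory containing the Dockerfile with docker build -t myuser/sshd . (the tag is a placeholder).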
Adding Files to Your Container
We are making an image that uses a versioned configuration file, so we can add it to the container in the build process.
Put the configuration files inside the project directory
Put the ADD directive inside the Dockerfile
Build the Dockerfile and test it
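A sketch of the build-context workflow, assuming a configuration file named sshd_config sits next to the Dockerfile:

```dockerfile
FROM ubuntu
RUN apt-get update && apt-get install -y openssh-server
# ADD copies a file from the build context into the image
ADD sshd_config /etc/ssh/sshd_config
```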
Setting Default Container Properties
We want to simplify the running process of our container, so we add metadata and defaults to the Dockerfile.
Use the CMD directive for the default command and use ENTRYPOINT for a forced, hidden command.
Change the process environment with the USER, WORKDIR, and ENV directives.
Declare network ports with EXPOSE.
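Those directives together might look like this (all values are illustrative):

```dockerfile
FROM ubuntu
# Default environment variable (override with docker run -e)
ENV MYAPP_ENV production
# Run the process as this user by default (override with -u)
USER daemon
# Default working directory (override with -w)
WORKDIR /srv
# Declare the port the service listens on (published with -p or -P)
EXPOSE 22
# ENTRYPOINT is the forced, hidden command; CMD supplies default
# arguments that docker run can replace
ENTRYPOINT ["/usr/sbin/sshd"]
CMD ["-D"]
```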
Building on Existing Containers
You have a container that you want to use, but with some variations, so you will build a new container based on the existing image.
Create a Dockerfile from an existing image by using the FROM instruction.
Add directives to customize or override existing containers.
Use ONBUILD instruction for future images based on this previous image.
Setting Up Trusted Builds
We want others to trust our image in the Docker index, so we will set up trusted builds to automatically build from our GitHub repository.
Make a GitHub repository and upload your project directory.
Create a Trusted Build from the Docker Index and wait for it to build.
See it in the Index and pull to try it out.
Constraining Container Resources
Using the Docker run command to improve performance, and exploring some of the features it provides us when running containerized commands.
Examine ways in which we can use Docker commands to prioritize the CPU for certain container processes.
Use memory allocation tuning to improve the build’s performance.
Run tests to demonstrate that the performance changes have had the desired effect.
Overriding the Dockerfile Defaults
Discover how to use docker run to override some of the defaults that come with containers, which are usually specified from their Dockerfiles.
Once you start using third-party containers, you may find that the way they’ve set up their containers is more limiting than helpful to you. Luckily, many of the settings that you can specify in Dockerfiles can be overridden with docker run arguments.
How can we get a shell inside a container? See how we can do this easily by overriding the entrypoint to use /bin/sh instead.
We don’t always want to run as the default user; this can be incorrect or even dangerous at times. We show how we can specify the user that we would like to run as.
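The two overrides might be sketched as (some/image is a placeholder):

```shell
# Get a shell inside a container by overriding its entrypoint
docker run -it --entrypoint /bin/sh some/image

# Run the container's command as a different user
docker run -u nobody some/image id
```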
Using Volumes and Mounts
Learn about Docker volumes and mounts and how we can use docker run to configure them for our containers.
A process's job is to use and manage lots of data, and we don't want that data to be part of the container. Instead, we use mounts and volumes.
Note that by default, these mounts are read-write. We will see how we can make them read-only.
Introduces and details the concept of volumes, which are basically directories of the host system managed by Docker.
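In docker run terms (the host paths are illustrative):

```shell
# Bind-mount a host directory into the container; read-write by default
docker run -v /data/on/host:/data ubuntu ls /data

# The same mount, made read-only
docker run -v /data/on/host:/data:ro ubuntu ls /data

# A volume: a Docker-managed directory on the host
docker run -v /data ubuntu ls /data
```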
Ports and Networking
Learn about ports and networking with Docker and Docker Containers.
Take a look at the various ways in which you can publish ports. Then we will also talk about disabling the network altogether for extreme isolation.
Glance over the docker port command, which takes a container name or ID and then an internal port that has been exposed. Docker will then tell us the host interface and port that it's actually listening on.
Learn how to customize the DNS settings used by your container with working examples.
Similar to how we can "share" data volumes across containers, we can also "share" ports across containers so that they can communicate with each other.
If we want to restrict which containers can connect to us, we can use a Docker feature called Linking.
See how Linking will take any exposed ports of one container and make them directly accessible to another. It will also populate a number of environment variables for that container to discover how to connect to it.
Examine the working environment and demonstrate how we can connect to the linked container.
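A sketch of the port and linking commands from this video (mywebimage is a placeholder):

```shell
# Publish an internal port on a chosen host port, or let Docker pick one
docker run -d -p 8080:80 mywebimage
docker run -d -P --name web mywebimage
docker port web 80          # which host interface and port is it on?

# Extreme isolation: disable networking altogether
docker run --net=none ubuntu ip addr

# Linking: expose one container's ports directly to another, along with
# environment variables describing how to connect
docker run -d --name redis-server redis
docker run --link redis-server:redis ubuntu env
```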
Writing a Simple Application
Although Docker is great for running backing services and infrastructure, its real value comes from shipping your application. Unlike the supporting infrastructure, your application will be updated and shipped quite frequently. Docker can make this process much easier. In order to demonstrate this, we’re going to build a simple web application to deploy with Docker.
Start with setting up the environment. We will install Redis, Python, and pip, a Python package manager. Remember that we’re installing these on our VM, not in a Docker container.
We start our app in another terminal session in our VM. We can curl this URL.
We make changes to our application and illustrate how it works prior to publishing it.
Containerizing the Application
Previously, we made a simple "Hello World" web application that uses Redis to increment a counter with every request. Now, we’re going to take this application and "Dockerize" it.
Create and configure our new application environment.
Set up the Dockerfile to expose the required ports.
Learn how our application is now containerized.
Setting Up an Application Server
Work through the process that is required to set up an application server to act as our production server to deploy on.
Understand the point about Docker that it can run containers consistently across all of these development environments.
We make a simple cloud server on DigitalOcean our production environment.
Set up the DigitalOcean server and link it to our Redis container.
Shipping the Container to Production
Demonstrate how to effectively ship a container to another machine.
Show how to pull the Docker registry container.
Details on how to tag our application and container and push them to the registry.
Understand how to pull our container on the other server and demonstrate it by running it there.
Creating a Simple Deployment Workflow
We show how we can streamline the process and create an easy deployment workflow.
Learn how to create and tune a deploy-app script.
Discuss the setup process: start a new container and expose the required ports.
We show how we can run the "make" command to create and deploy our application for us.
Using the Docker Remote API
We will explore the idea of automation in Docker.
Take a look at the Docker API and set up the environment.
Demonstrate hitting the endpoint to retrieve a list of containers.
Set up Python and pip and demonstrate the working of the interactive Python shell.
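Hitting the containers endpoint over the daemon's Unix socket might look like this (curl 7.40 or newer supports --unix-socket; some setups instead bind the API to a TCP port):

```shell
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```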
Container Inside a Container
Demonstrate how we can expose control of containers to other containers.
Expose the Docker Socket to a container.
Use the Docker binary and the Unix socket to control containers from inside a container.
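A sketch of exposing the socket. Note that this gives the container root-equivalent control over the host's Docker daemon, which is exactly why it works and why it should be used with care:

```shell
docker run -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v $(which docker):/usr/bin/docker \
  ubuntu docker ps
```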
Managing Docker Logs with logspout
Explain the limitations of the built-in logging functionality.
Introduce logspout and explain how it works.
Demonstrate installing logspout in a container and show it working.
Review the logging options and how we can extend syslog using logspout.
Creating Your Own PaaS with Dokku
Building our own Docker-powered mini-Heroku platform-as-a-service using Dokku.
Publish changes to Git.
Automatic deployment of those changes.
Using Ambassador Containers
Explore the Docker pattern called the ambassador.
Set up an ambassador locally on our Vagrant VM.
See the ambassador in action with Redis.
Module 2: Mastering Docker
The Course Overview
This video gives an overview of the entire course.
Recollecting Docker Concepts
The aim of this video is to talk about the underlying concepts of Docker. It is critical for us to know how the internals of Docker are laid out so that if we encounter problems whilst using Docker, we will be able to figure out exactly what went wrong and where.
Talk about Docker and compare it to something we are already familiar with—virtual machines.
Next, talk about the container engine. Then take a look at AUFS and the role of copy-on-write filesystems.
Finally, learn about volumes, and end the video with an abstract overview of networking in Docker.
Docker CLI Commands
The aim of this video is to revisit some of the more useful Docker CLI commands.
Pull a couple of images from the Docker Hub repositories and start some containers.
Add content to a running container by running a diaspora setup script.
Commit the container into an image and push the image to Docker Hub.
Running setup commands in a running container and then committing it, although possible, is not an efficient solution. It also doesn’t lend itself very well to automation. So, we will look at automating the image creation process using a Dockerfile and the docker build command.
Write a Dockerfile containing the commands that would be needed for setup.
Run docker build. Verify the repeatability of the builds and caching.
Use docker exec to debug running containers.
In this section and video, we will learn about Docker Compose. Compose is a tool for orchestrating multi-container Docker applications.
Discuss orchestration. Then we learn about compose and its use cases.
Next, we write a Docker Compose YAML file for the diaspora application we have been working on.
Build and test this new method of setting up diaspora.
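Such a file might be sketched as follows, in the v1 Compose format current at the time. The service names, images, and ports are illustrative rather than the exact ones from the video:

```yaml
# docker-compose.yml
web:
  build: .
  ports:
    - "3000:3000"
  links:
    - db
    - redis
db:
  image: postgres
redis:
  image: redis
```

Running docker-compose up -d then builds and starts the whole stack in one step.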
Deploying Composed Services
We have set up diaspora several times in different ways over the last few videos. Let us apply this learning to deploy diaspora onto an AWS instance.
A very brief introduction to Docker Machine. Create a local machine and deploy diaspora there.
Create a Docker machine on AWS.
Build and deploy diaspora on the AWS instance, controlled by Docker Machine. Verify that it works.
Single Host Scaling
The aim of this video is to scale application services across multiple containers in a single host.
Use Compose's scale command to increase the number of containers of the web application service.
Run and verify load being balanced in the logs.
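The scaling step might look like this, assuming a service named web in the docker-compose.yml:

```shell
# Run three containers for the web service on this host
docker-compose scale web=3

# Watch the logs to verify requests being balanced across them
docker-compose logs web
```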
The aim of this video is to discuss the default networking drivers available in Docker, and specifically the bridge network.
Discover the networking model in Docker, the default drivers available in Docker, and their features.
Discover the default docker0 network, --link, /etc/hosts, and the embedded DNS server.
End the video with user-defined networks.
Discuss and get thoroughly familiar with multi-host networks.
The challenges of multi-host networking.
The architecture of overlay networks.
Components of the overlay network.
The aim of this video is to explore solutions to service discovery.
Challenges of always available, reliable service discovery.
Key-value Stores as a service discovery mechanism.
DNS as a service discovery mechanism.
Designing Infrastructure of the Social Network
In this video, we will be designing infrastructure for the next phase of our diaspora deployment.
First discuss the current architecture and issues with it.
Then make successive attempts to design alternate architectures, fixing problems in the older architecture in each iteration.
Come up with an architecture that is reasonable to implement and fulfills our requirements. Also discuss advanced architectures that the viewer can implement next.
Use Swarm to deploy diaspora on a cluster of Docker hosts.
Firstly, create a bunch of hosts with Docker Machine.
Then, deploy service discovery on one of the hosts. We then create an overlay network across the other hosts.
Finally, we create a Swarm and deploy our containers on the cluster.
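A sketch of that setup with the tooling of the era (classic standalone Swarm with a Consul key-value store; box names, network names, and flags are illustrative):

```shell
# One host runs the key-value store used for service discovery
docker-machine create -d virtualbox kv
docker-machine ssh kv "docker run -d -p 8500:8500 progrium/consul -server -bootstrap"

# Each cluster host joins the Swarm and points at the key-value store
docker-machine create -d virtualbox \
    --swarm --swarm-master \
    --swarm-discovery consul://$(docker-machine ip kv):8500 \
    --engine-opt cluster-store=consul://$(docker-machine ip kv):8500 \
    --engine-opt cluster-advertise=eth1:2376 \
    swarm-master

# Create an overlay network spanning the cluster, then deploy onto it
eval $(docker-machine env --swarm swarm-master)
docker network create -d overlay diaspora-net
```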
Deploying a Swarm cluster on AWS.
Set up security groups.
Start and set up a key-value store.
Run the Swarm setup script.
Introduction to Managed Cluster
Discover the tools that give more power to operations, with a better ability to scale out. These tools are production ready, are battle tested, and are being used in production today at some of the biggest companies.
Discuss managed clusters and what they provide in addition to the tools we have already seen.
Talk about the tools: Kubernetes and Marathon/Mesos.
Compare Kubernetes and Marathon to Swarm to find out what they provide in addition to what we have already seen.
Explore Kubernetes, Google’s cluster management tool that they use to back their container engine.
Briefly discuss the k8s architecture.
Use the k8s Docker container to set it up locally.
Start and scale an nginx service, set up DNS, and explore the web dashboard.
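The nginx exercise might be sketched with kubectl commands of the Kubernetes 1.x era this course covers:

```shell
# Start an nginx deployment, expose it, and scale it
kubectl run nginx --image=nginx --replicas=3
kubectl expose deployment nginx --port=80
kubectl get pods
kubectl scale deployment nginx --replicas=5
```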
Marathon / Mesos
We will be setting up Marathon and Mesos locally in a VM.
Briefly discuss Mesos and its architecture.
Set up Marathon and Mesos locally using Vagrant.
Set up diaspora in Mesos using Marathon.
Discuss security considerations and possible attack vectors in a Docker deployment.
Discuss attack vectors in the Docker daemon.
Discuss the root privileges given to the user in the container.
Discuss security profiles for the daemon on the host, and content trust.
Docker Bench for Security
Explore the Docker Bench for Security tool and use it on our Docker environment.
Run the docker-bench-security image while the diaspora application is running.
Analyze the audit logs.
Look at best practices to fix warnings.
Notary and Content Security
Deals with the issue of content security when transferring objects over an untrusted medium—the Internet.
Discuss content security.
Install notary—a security tool to sign images.
Sign the docker image for diaspora and push it to Docker Hub.
Discuss the options available to route logs—logging drivers.
Discover drivers and plugins and why they are needed.
Describe and use the default json-file logging driver and the options to customize it.
Discuss and use the syslog logging driver.
Learn how to use volume plugins.
Know what volume plugins are useful for.
Set up the diaspora deployment to use the rexray volume plugin for the EBS mount.
List other volume plugins.
Discover how to extend Docker with the Network Plugins.
Discuss network plugins.
Set up weave.
Demonstrate the weave network between Docker containers running in two hosts.
Keeping the Garden Pruned
Discuss the best practices in a Docker environment.
Best practices in handling images.
Best practices in handling volumes and storage.
Best security and maintenance practices.
Discover the tools available to complement workflows in the Docker ecosystem.
Talk about the ecosystem.
Talk about some of the companies that have been building tools to supplement your Docker workflow.
Tools that we will be looking at include Shipyard, Panamax, Docker Cloud, Quay, Drone.io, Elastic Container Service, and Google Container Engine.
We will look at Dockercraft.
Spin up a Dockercraft container.
Start a few containers.
View and manage Docker containers from Minecraft.