RUN <build code> RUN <install test dependencies> COPY <test data sets and fixtures> RUN <unit tests> FROM <baseimage> RUN <install dependencies> COPY <code> RUN <build code> CMD, EXPOSE ... ``` * The build fails as soon as an instruction fails * If `RUN <unit tests>` fails, the build doesn't produce an image * If it succeeds, it produces a clean image (without test libraries and data) .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Dockerfile_Tips.md)] --- class: pic .interstitial[] --- name: toc-dockerfile-examples class: title Dockerfile examples .nav[ [Previous section](#toc-tips-for-efficient-dockerfiles) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-exercise--writing-better-dockerfiles) ] .debug[(automatically generated title slide)] --- # Dockerfile examples There are a number of tips, tricks, and techniques that we can use in Dockerfiles. But sometimes, we have to use different (and even opposed) practices depending on: - the complexity of our project, - the programming language or framework that we are using, - the stage of our project (early MVP vs. super-stable production), - whether we're building a final image or a base for further images, - etc. We are going to show a few examples using very different techniques. .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Dockerfile_Tips.md)] --- ## When to optimize an image When authoring official images, it is a good idea to reduce as much as possible: - the number of layers, - the size of the final image. This is often done at the expense of build time and convenience for the image maintainer; but when an image is downloaded millions of times, saving even a few seconds of pull time can be worth it. .small[ ```dockerfile RUN apt-get update && apt-get install -y libpng12-dev libjpeg-dev && rm -rf /var/lib/apt/lists/* \ && docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \ && docker-php-ext-install gd ... RUN curl -o wordpress.tar.gz -SL https://wordpress.org/wordpress-${WORDPRESS_UPSTREAM_VERSION}.tar.gz \ && echo "$WORDPRESS_SHA1 *wordpress.tar.gz" | sha1sum -c - \ && tar -xzf wordpress.tar.gz -C /usr/src/ \ && rm wordpress.tar.gz \ && chown -R www-data:www-data /usr/src/wordpress ``` ] (Source: [Wordpress official image](https://github.com/docker-library/wordpress/blob/618490d4bdff6c5774b84b717979bfe3d6ba8ad1/apache/Dockerfile)) .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Dockerfile_Tips.md)] --- ## When to *not* optimize an image Sometimes, it is better to prioritize *maintainer convenience*. In particular, if: - the image changes a lot, - the image has very few users (e.g. only 1, the maintainer!), - the image is built and run on the same machine, - the image is built and run on machines with a very fast link ... In these cases, just keep things simple! (Next slide: a Dockerfile that can be used to preview a Jekyll / github pages site.) .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Dockerfile_Tips.md)] --- ```dockerfile FROM debian:sid RUN apt-get update -q RUN apt-get install -yq build-essential make RUN apt-get install -yq zlib1g-dev RUN apt-get install -yq ruby ruby-dev RUN apt-get install -yq python-pygments RUN apt-get install -yq nodejs RUN apt-get install -yq cmake RUN gem install --no-rdoc --no-ri github-pages COPY . 
/blog WORKDIR /blog VOLUME /blog/_site EXPOSE 4000 CMD ["jekyll", "serve", "--host", "0.0.0.0", "--incremental"] ``` .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Dockerfile_Tips.md)] --- ## Multi-dimensional versioning systems Images can have a tag, indicating the version of the image. But sometimes, there are multiple important components, and we need to indicate the versions for all of them. This can be done with environment variables: ```dockerfile ENV PIP=9.0.3 \ ZC_BUILDOUT=2.11.2 \ SETUPTOOLS=38.7.0 \ PLONE_MAJOR=5.1 \ PLONE_VERSION=5.1.0 \ PLONE_MD5=76dc6cfc1c749d763c32fff3a9870d8d ``` (Source: [Plone official image](https://github.com/plone/plone.docker/blob/master/5.1/5.1.0/alpine/Dockerfile)) .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Dockerfile_Tips.md)] --- ## Entrypoints and wrappers It is very common to define a custom entrypoint. That entrypoint will generally be a script, performing any combination of: - pre-flights checks (if a required dependency is not available, display a nice error message early instead of an obscure one in a deep log file), - generation or validation of configuration files, - dropping privileges (with e.g. `su` or `gosu`, sometimes combined with `chown`), - and more. .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Dockerfile_Tips.md)] --- ## A typical entrypoint script ```dockerfile #!/bin/sh set -e # first arg is '-f' or '--some-option' # or first arg is 'something.conf' if [ "${1#-}" != "$1" ] || [ "${1%.conf}" != "$1" ]; then set -- redis-server "$@" fi # allow the container to be started with '--user' if [ "$1" = 'redis-server' -a "$(id -u)" = '0' ]; then chown -R redis . exec su-exec redis "$0" "$@" fi exec "$@" ``` (Source: [Redis official image](https://github.com/docker-library/redis/blob/d24f2be82673ccef6957210cc985e392ebdc65e4/4.0/alpine/docker-entrypoint.sh)) .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Dockerfile_Tips.md)] --- ## Factoring information To facilitate maintenance (and avoid human errors), avoid to repeat information like: - version numbers, - remote asset URLs (e.g. source tarballs) ... Instead, use environment variables. .small[ ```dockerfile ENV NODE_VERSION 10.2.1 ... RUN ... && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" \ && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \ && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \ && grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - \ && tar -xf "node-v$NODE_VERSION.tar.xz" \ && cd "node-v$NODE_VERSION" \ ... ``` ] (Source: [Nodejs official image](https://github.com/nodejs/docker-node/blob/master/10/alpine/Dockerfile)) .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Dockerfile_Tips.md)] --- ## Overrides In theory, development and production images should be the same. In practice, we often need to enable specific behaviors in development (e.g. debug statements). One way to reconcile both needs is to use Compose to enable these behaviors. Let's look at the [trainingwheels](https://github.com/jpetazzo/trainingwheels) demo app for an example. 
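Whichever project you apply this to, the Compose-side mechanics look roughly like this (a sketch; `docker-compose.dev.yml` is a hypothetical file name for the development overrides):

```bash
# Compose automatically layers docker-compose.override.yml (if present)
# on top of docker-compose.yml:
$ docker-compose up -d

# Alternatively, keep the overrides in an explicitly named file
# and opt in with repeated -f flags (later files take precedence):
$ docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d
```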
.debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Dockerfile_Tips.md)] --- ## Production image This Dockerfile builds an image leveraging gunicorn: ```dockerfile FROM python RUN pip install flask RUN pip install gunicorn RUN pip install redis COPY . /src WORKDIR /src CMD gunicorn --bind 0.0.0.0:5000 --workers 10 counter:app EXPOSE 5000 ``` (Source: [trainingwheels Dockerfile](https://github.com/jpetazzo/trainingwheels/blob/master/www/Dockerfile)) .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Dockerfile_Tips.md)] --- ## Development Compose file This Compose file uses the same image, but with a few overrides for development: - the Flask development server is used (overriding `CMD`), - the `DEBUG` environment variable is set, - a volume is used to provide a faster local development workflow. .small[ ```yaml services: www: build: www ports: - 8000:5000 user: nobody environment: DEBUG: 1 command: python counter.py volumes: - ./www:/src ``` ] (Source: [trainingwheels Compose file](https://github.com/jpetazzo/trainingwheels/blob/master/docker-compose.yml)) .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Dockerfile_Tips.md)] --- ## How to know which best practices are better? - The main goal of containers is to make our lives easier. - In this chapter, we showed many ways to write Dockerfiles. - These Dockerfiles sometimes use diametrically opposed techniques. - Yet, they were the "right" ones *for a specific situation.* - It's OK (and even encouraged) to start simple and evolve as needed. - Feel free to review this chapter later (after writing a few Dockerfiles) for inspiration! .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Dockerfile_Tips.md)] --- class: pic .interstitial[] --- name: toc-exercise--writing-better-dockerfiles class: title Exercise – writing better Dockerfiles .nav[ [Previous section](#toc-dockerfile-examples) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-naming-and-inspecting-containers) ] .debug[(automatically generated title slide)] --- # Exercise – writing better Dockerfiles Let's update our Dockerfiles to leverage multi-stage builds! The code is at: https://github.com/jpetazzo/wordsmith Use a different tag for these images, so that we can compare their sizes. What's the size difference between single-stage and multi-stage builds? .debug[[containers/Exercise_Dockerfile_Advanced.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Exercise_Dockerfile_Advanced.md)] --- class: pic .interstitial[] --- name: toc-naming-and-inspecting-containers class: title Naming and inspecting containers .nav[ [Previous section](#toc-exercise--writing-better-dockerfiles) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-getting-inside-a-container) ] .debug[(automatically generated title slide)] --- class: title # Naming and inspecting containers  .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Naming_And_Inspecting.md)] --- ## Objectives In this lesson, we will learn about an important Docker concept: container *naming*. Naming allows us to: * Easily reference a container. * Ensure unicity of a specific container. 
We will also see the `inspect` command, which gives a lot of details about a container. .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Naming_And_Inspecting.md)] --- ## Naming our containers So far, we have referenced containers with their ID. We have copy-pasted the ID, or used a shortened prefix. But each container can also be referenced by its name. If a container is named `thumbnail-worker`, I can do: ```bash $ docker logs thumbnail-worker $ docker stop thumbnail-worker etc. ``` .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Naming_And_Inspecting.md)] --- ## Default names When we create a container, if we don't give a specific name, Docker will pick one for us. It will be the concatenation of: * A mood (furious, goofy, suspicious, boring...) * The name of a famous inventor (tesla, darwin, wozniak...) Examples: `happy_curie`, `clever_hopper`, `jovial_lovelace` ... .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Naming_And_Inspecting.md)] --- ## Specifying a name You can set the name of the container when you create it. ```bash $ docker run --name ticktock jpetazzo/clock ``` If you specify a name that already exists, Docker will refuse to create the container. This lets us enforce unicity of a given resource. .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Naming_And_Inspecting.md)] --- ## Renaming containers * You can rename containers with `docker rename`. * This allows you to "free up" a name without destroying the associated container. .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Naming_And_Inspecting.md)] --- ## Inspecting a container The `docker inspect` command will output a very detailed JSON map. ```bash $ docker inspect [{ ... (many pages of JSON here) ... ``` There are multiple ways to consume that information. .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Naming_And_Inspecting.md)] --- ## Parsing JSON with the Shell * You *could* grep and cut or awk the output of `docker inspect`. * Please, don't. * It's painful. * If you really must parse JSON from the Shell, use JQ! (It's great.) ```bash $ docker inspect | jq . ``` * We will see a better solution which doesn't require extra tools. .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Naming_And_Inspecting.md)] --- ## Using `--format` You can specify a format string, which will be parsed by Go's text/template package. ```bash $ docker inspect --format '{{ json .Created }}' "2015-02-24T07:21:11.712240394Z" ``` * The generic syntax is to wrap the expression with double curly braces. * The expression starts with a dot representing the JSON object. * Then each field or member can be accessed in dotted notation syntax. * The optional `json` keyword asks for valid JSON output. (e.g. here it adds the surrounding double-quotes.) 
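A couple of other format strings worth trying (shown against the `ticktock` container created earlier; any container name or ID works, and the exact output will vary on your machine):

```bash
# Is the container running right now? (a field of the State object)
$ docker inspect --format '{{ .State.Running }}' ticktock

# Dump a whole sub-object as JSON (here, the container's environment variables)
$ docker inspect --format '{{ json .Config.Env }}' ticktock
```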
.debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Naming_And_Inspecting.md)] --- class: pic .interstitial[] --- name: toc-getting-inside-a-container class: title Getting inside a container .nav[ [Previous section](#toc-naming-and-inspecting-containers) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-container-networking-basics) ] .debug[(automatically generated title slide)] --- class: title # Getting inside a container  .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Getting_Inside.md)] --- ## Objectives On a traditional server or VM, we sometimes need to: * log into the machine (with SSH or on the console), * analyze the disks (by removing them or rebooting with a rescue system). In this chapter, we will see how to do that with containers. .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Getting_Inside.md)] --- ## Getting a shell Every once in a while, we want to log into a machine. In a perfect world, this shouldn't be necessary. * You need to install or update packages (and their configuration)? Use configuration management. (e.g. Ansible, Chef, Puppet, Salt...) * You need to view logs and metrics? Collect and access them through a centralized platform. In the real world, though ... we often need shell access! .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Getting_Inside.md)] --- ## Not getting a shell Even without a perfect deployment system, we can do many operations without getting a shell. * Installing packages can (and should) be done in the container image. * Configuration can be done at the image level, or when the container starts. * Dynamic configuration can be stored in a volume (shared with another container). * Logs written to stdout are automatically collected by the Docker Engine. * Other logs can be written to a shared volume. * Process information and metrics are visible from the host. _Let's save logging, volumes ... for later, but let's have a look at process information!_ .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Getting_Inside.md)] --- ## Viewing container processes from the host If you run Docker on Linux, container processes are visible on the host. ```bash $ ps faux | less ``` * Scroll around the output of this command. * You should see the `jpetazzo/clock` container. * A containerized process is just like any other process on the host. * We can use tools like `lsof`, `strace`, `gdb` ... to analyze them. .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Getting_Inside.md)] --- class: extra-details ## What's the difference between a container process and a host process? * Each process (containerized or not) belongs to *namespaces* and *cgroups*. * The namespaces and cgroups determine what a process can "see" and "do". * Analogy: each process (containerized or not) runs with a specific UID (user ID). * UID=0 is root, and has elevated privileges. Other UIDs are normal users. 
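You can already peek at those namespaces from the host, using the PID of any containerized process found in the `ps faux` output (the PID below is a made-up example):

```bash
# Each entry is a namespace that this process belongs to
$ sudo ls -l /proc/12345/ns

# Compare with a process running outside of any container (e.g. PID 1):
# the namespace identifiers will be different
$ sudo ls -l /proc/1/ns
```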
_We will give more details about namespaces and cgroups later._ .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Getting_Inside.md)] --- ## Getting a shell in a running container * Sometimes, we need to get a shell anyway. * We _could_ run some SSH server in the container ... * But it is easier to use `docker exec`. ```bash $ docker exec -ti ticktock sh ``` * This creates a new process (running `sh`) _inside_ the container. * This can also be done "manually" with the tool `nsenter`. .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Getting_Inside.md)] --- ## Caveats * The tool that you want to run needs to exist in the container. * Some tools (like `ip netns exec`) let you attach to _one_ namespace at a time. (This lets you e.g. setup network interfaces, even if you don't have `ifconfig` or `ip` in the container.) * Most importantly: the container needs to be running. * What if the container is stopped or crashed? .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Getting_Inside.md)] --- ## Getting a shell in a stopped container * A stopped container is only _storage_ (like a disk drive). * We cannot SSH into a disk drive or USB stick! * We need to connect the disk to a running machine. * How does that translate into the container world? .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Getting_Inside.md)] --- ## Analyzing a stopped container As an exercise, we are going to try to find out what's wrong with `jpetazzo/crashtest`. ```bash docker run jpetazzo/crashtest ``` The container starts, but then stops immediately, without any output. What would MacGyver™ do? First, let's check the status of that container. ```bash docker ps -l ``` .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Getting_Inside.md)] --- ## Viewing filesystem changes * We can use `docker diff` to see files that were added / changed / removed. ```bash docker diff ``` * The container ID was shown by `docker ps -l`. * We can also see it with `docker ps -lq`. * The output of `docker diff` shows some interesting log files! .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Getting_Inside.md)] --- ## Accessing files * We can extract files with `docker cp`. ```bash docker cp :/var/log/nginx/error.log . ``` * Then we can look at that log file. ```bash cat error.log ``` (The directory `/run/nginx` doesn't exist.) .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Getting_Inside.md)] --- ## Exploring a crashed container * We can restart a container with `docker start` ... * ... But it will probably crash again immediately! 
* We cannot specify a different program to run with `docker start` * But we can create a new image from the crashed container ```bash docker commit debugimage ``` * Then we can run a new container from that image, with a custom entrypoint ```bash docker run -ti --entrypoint sh debugimage ``` .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Getting_Inside.md)] --- class: extra-details ## Obtaining a complete dump * We can also dump the entire filesystem of a container. * This is done with `docker export`. * It generates a tar archive. ```bash docker export | tar tv ``` This will give a detailed listing of the content of the container. .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Getting_Inside.md)] --- class: pic .interstitial[] --- name: toc-container-networking-basics class: title Container networking basics .nav[ [Previous section](#toc-getting-inside-a-container) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-container-network-drivers) ] .debug[(automatically generated title slide)] --- class: title # Container networking basics  .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Networking_Basics.md)] --- ## Objectives We will now run network services (accepting requests) in containers. At the end of this section, you will be able to: * Run a network service in a container. * Manipulate container networking basics. * Find a container's IP address. We will also explain the different network models used by Docker. .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Networking_Basics.md)] --- ## A simple, static web server Run the Docker Hub image `nginx`, which contains a basic web server: ```bash $ docker run -d -P nginx 66b1ce719198711292c8f34f84a7b68c3876cf9f67015e752b94e189d35a204e ``` * Docker will download the image from the Docker Hub. * `-d` tells Docker to run the image in the background. * `-P` tells Docker to make this service reachable from other computers. (`-P` is the short version of `--publish-all`.) But, how do we connect to our web server now? .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Networking_Basics.md)] --- ## Finding our web server port We will use `docker ps`: ```bash $ docker ps CONTAINER ID IMAGE ... PORTS ... e40ffb406c9e nginx ... 0.0.0.0:32768->80/tcp ... ``` * The web server is running on port 80 inside the container. * This port is mapped to port 32768 on our Docker host. We will explain the whys and hows of this port mapping. But first, let's make sure that everything works properly. .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Networking_Basics.md)] --- ## Connecting to our web server (GUI) Point your browser to the IP address of your Docker host, on the port shown by `docker ps` for container port 80.  .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Networking_Basics.md)] --- ## Connecting to our web server (CLI) You can also use `curl` directly from the Docker host. 
Make sure to use the right port number if it is different from the example below: ```bash $ curl localhost:32768 Welcome to nginx! ... ``` .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Networking_Basics.md)] --- ## How does Docker know which port to map? * There is metadata in the image telling "this image has something on port 80". * We can see that metadata with `docker inspect`: ```bash $ docker inspect --format '{{.Config.ExposedPorts}}' nginx map[80/tcp:{}] ``` * This metadata was set in the Dockerfile, with the `EXPOSE` keyword. * We can see that with `docker history`: ```bash $ docker history nginx IMAGE CREATED CREATED BY 7f70b30f2cc6 11 days ago /bin/sh -c #(nop) CMD ["nginx" "-g" "β¦ 11 days ago /bin/sh -c #(nop) STOPSIGNAL [SIGTERM] 11 days ago /bin/sh -c #(nop) EXPOSE 80/tcp ``` .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Networking_Basics.md)] --- ## Why are we mapping ports? * We are out of IPv4 addresses. * Containers cannot have public IPv4 addresses. * They have private addresses. * Services have to be exposed port by port. * Ports have to be mapped to avoid conflicts. .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Networking_Basics.md)] --- ## Finding the web server port in a script Parsing the output of `docker ps` would be painful. There is a command to help us: ```bash $ docker port 80 32768 ``` .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Networking_Basics.md)] --- ## Manual allocation of port numbers If you want to set port numbers yourself, no problem: ```bash $ docker run -d -p 80:80 nginx $ docker run -d -p 8000:80 nginx $ docker run -d -p 8080:80 -p 8888:80 nginx ``` * We are running three NGINX web servers. * The first one is exposed on port 80. * The second one is exposed on port 8000. * The third one is exposed on ports 8080 and 8888. Note: the convention is `port-on-host:port-on-container`. .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Networking_Basics.md)] --- ## Plumbing containers into your infrastructure There are many ways to integrate containers in your network. * Start the container, letting Docker allocate a public port for it. Then retrieve that port number and feed it to your configuration. * Pick a fixed port number in advance, when you generate your configuration. Then start your container by setting the port numbers manually. * Use a network plugin, connecting your containers with e.g. VLANs, tunnels... * Enable *Swarm Mode* to deploy across a cluster. The container will then be reachable through any node of the cluster. When using Docker through an extra management layer like Mesos or Kubernetes, these will usually provide their own mechanism to expose containers. .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Networking_Basics.md)] --- ## Finding the container's IP address We can use the `docker inspect` command to find the IP address of the container. 
```bash $ docker inspect --format '{{ .NetworkSettings.IPAddress }}' 172.17.0.3 ``` * `docker inspect` is an advanced command, that can retrieve a ton of information about our containers. * Here, we provide it with a format string to extract exactly the private IP address of the container. .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Networking_Basics.md)] --- ## Pinging our container We can test connectivity to the container using the IP address we've just discovered. Let's see this now by using the `ping` tool. ```bash $ ping 64 bytes from : icmp_req=1 ttl=64 time=0.085 ms 64 bytes from : icmp_req=2 ttl=64 time=0.085 ms 64 bytes from : icmp_req=3 ttl=64 time=0.085 ms ``` .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Networking_Basics.md)] --- ## Section summary We've learned how to: * Expose a network port. * Manipulate container networking basics. * Find a container's IP address. In the next chapter, we will see how to connect containers together without exposing their ports. .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Networking_Basics.md)] --- class: pic .interstitial[] --- name: toc-container-network-drivers class: title Container network drivers .nav[ [Previous section](#toc-container-networking-basics) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-the-container-network-model) ] .debug[(automatically generated title slide)] --- # Container network drivers The Docker Engine supports many different network drivers. The built-in drivers include: * `bridge` (default) * `none` * `host` * `container` The driver is selected with `docker run --net ...`. The different drivers are explained with more details on the following slides. .debug[[containers/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Network_Drivers.md)] --- ## The default bridge * By default, the container gets a virtual `eth0` interface. (In addition to its own private `lo` loopback interface.) * That interface is provided by a `veth` pair. * It is connected to the Docker bridge. (Named `docker0` by default; configurable with `--bridge`.) * Addresses are allocated on a private, internal subnet. (Docker uses 172.17.0.0/16 by default; configurable with `--bip`.) * Outbound traffic goes through an iptables MASQUERADE rule. * Inbound traffic goes through an iptables DNAT rule. * The container can have its own routes, iptables rules, etc. .debug[[containers/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Network_Drivers.md)] --- ## The null driver * Container is started with `docker run --net none ...` * It only gets the `lo` loopback interface. No `eth0`. * It can't send or receive network traffic. * Useful for isolated/untrusted workloads. .debug[[containers/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Network_Drivers.md)] --- ## The host driver * Container is started with `docker run --net host ...` * It sees (and can access) the network interfaces of the host. * It can bind any address, any port (for ill and for good). * Network traffic doesn't have to go through NAT, bridge, or veth. * Performance = native! 
Use cases: * Performance sensitive applications (VOIP, gaming, streaming...) * Peer discovery (e.g. Erlang port mapper, Raft, Serf...) .debug[[containers/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Network_Drivers.md)] --- ## The container driver * Container is started with `docker run --net container:id ...` * It re-uses the network stack of another container. * It shares with this other container the same interfaces, IP address(es), routes, iptables rules, etc. * Those containers can communicate over their `lo` interface. (i.e. one can bind to 127.0.0.1 and the others can connect to it.) .debug[[containers/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Network_Drivers.md)] --- class: pic .interstitial[] --- name: toc-the-container-network-model class: title The Container Network Model .nav[ [Previous section](#toc-container-network-drivers) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-service-discovery-with-containers) ] .debug[(automatically generated title slide)] --- class: title # The Container Network Model  .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Objectives We will learn about the CNM (Container Network Model). At the end of this lesson, you will be able to: * Create a private network for a group of containers. * Use container naming to connect services together. * Dynamically connect and disconnect containers to networks. * Set the IP address of a container. We will also explain the principle of overlay networks and network plugins. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## The Container Network Model The CNM was introduced in Engine 1.9.0 (November 2015). The CNM adds the notion of a *network*, and a new top-level command to manipulate and see those networks: `docker network`. ```bash $ docker network ls NETWORK ID NAME DRIVER 6bde79dfcf70 bridge bridge 8d9c78725538 none null eb0eeab782f4 host host 4c1ff84d6d3f blog-dev overlay 228a4355d548 blog-prod overlay ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## What's in a network? * Conceptually, a network is a virtual switch. * It can be local (to a single Engine) or global (spanning multiple hosts). * A network has an IP subnet associated to it. * Docker will allocate IP addresses to the containers connected to a network. * Containers can be connected to multiple networks. * Containers can be given per-network names and aliases. * The names and aliases can be resolved via an embedded DNS server. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Network implementation details * A network is managed by a *driver*. * The built-in drivers include: * `bridge` (default) * `none` * `host` * `macvlan` * A multi-host driver, *overlay*, is available out of the box (for Swarm clusters). * More drivers can be provided by plugins (OVS, VLAN...) * A network can have a custom IPAM (IP allocator). 
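For instance, here is how a network with an explicit driver and IPAM options could be created (the network name and subnet below are arbitrary):

```bash
# Create a bridge network with a custom subnet
$ docker network create --driver bridge --subnet 10.20.0.0/24 demo-net

# Check which driver and subnet were actually used
$ docker network inspect --format '{{ .Driver }} {{ (index .IPAM.Config 0).Subnet }}' demo-net
```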
.debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Differences with the CNI * CNI = Container Network Interface * CNI is used notably by Kubernetes * With CNI, all the nodes and containers are on a single IP network * Both CNI and CNM offer the same functionality, but with very different methods .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: pic ## Single container in a Docker network  .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: pic ## Two containers on a single Docker network  .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: pic ## Two containers on two Docker networks  .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Creating a network Let's create a network called `dev`. ```bash $ docker network create dev 4c1ff84d6d3f1733d3e233ee039cac276f425a9d5228a4355d54878293a889ba ``` The network is now visible with the `network ls` command: ```bash $ docker network ls NETWORK ID NAME DRIVER 6bde79dfcf70 bridge bridge 8d9c78725538 none null eb0eeab782f4 host host 4c1ff84d6d3f dev bridge ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Placing containers on a network We will create a *named* container on this network. It will be reachable with its name, `es`. ```bash $ docker run -d --name es --net dev elasticsearch:2 8abb80e229ce8926c7223beb69699f5f34d6f1d438bfc5682db893e798046863 ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Communication between containers Now, create another container on this network. .small[ ```bash $ docker run -ti --net dev alpine sh root@0ecccdfa45ef:/# ``` ] From this new container, we can resolve and ping the other one, using its assigned name: .small[ ```bash / # ping es PING es (172.18.0.2) 56(84) bytes of data. 64 bytes from es.dev (172.18.0.2): icmp_seq=1 ttl=64 time=0.221 ms 64 bytes from es.dev (172.18.0.2): icmp_seq=2 ttl=64 time=0.114 ms 64 bytes from es.dev (172.18.0.2): icmp_seq=3 ttl=64 time=0.114 ms ^C --- es ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2000ms rtt min/avg/max/mdev = 0.114/0.149/0.221/0.052 ms root@0ecccdfa45ef:/# ``` ] .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Resolving container addresses In Docker Engine 1.9, name resolution is implemented with `/etc/hosts`, and updating it each time containers are added/removed. 
.small[ ```bash [root@0ecccdfa45ef /]# cat /etc/hosts 172.18.0.3 0ecccdfa45ef 127.0.0.1 localhost ::1 localhost ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters 172.18.0.2 es 172.18.0.2 es.dev ``` ] In Docker Engine 1.10, this has been replaced by a dynamic resolver. (This avoids race conditions when updating `/etc/hosts`.) .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: pic .interstitial[] --- name: toc-service-discovery-with-containers class: title Service discovery with containers .nav[ [Previous section](#toc-the-container-network-model) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-ambassadors) ] .debug[(automatically generated title slide)] --- # Service discovery with containers * Let's try to run an application that requires two containers. * The first container is a web server. * The other one is a redis data store. * We will place them both on the `dev` network created before. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Running the web server * The application is provided by the container image `jpetazzo/trainingwheels`. * We don't know much about it so we will try to run it and see what happens! Start the container, exposing all its ports: ```bash $ docker run --net dev -d -P jpetazzo/trainingwheels ``` Check the port that has been allocated to it: ```bash $ docker ps -l ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Test the web server * If we connect to the application now, we will see an error page:  * This is because the Redis service is not running. * This container tries to resolve the name `redis`. Note: we're not using a FQDN or an IP address here; just `redis`. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Start the data store * We need to start a Redis container. * That container must be on the same network as the web server. * It must have the right name (`redis`) so the application can find it. Start the container: ```bash $ docker run --net dev --name redis -d redis ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Test the web server again * If we connect to the application now, we should see that the app is working correctly:  * When the app tries to resolve `redis`, instead of getting a DNS error, it gets the IP address of our Redis container. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## A few words on *scope* * What if we want to run multiple copies of our application? * Since names are unique, there can be only one container named `redis` at a time. * However, we can specify the network name of our container with `--net-alias`. * `--net-alias` is scoped per network, and independent from the container name. 
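As a sketch, a second copy of the whole app could run on its own network, with its own `redis` alias (the `dev2` network name is arbitrary):

```bash
# A separate network gives the second copy its own set of names
$ docker network create dev2
$ docker run -d --net dev2 --net-alias redis redis
$ docker run -d --net dev2 -P jpetazzo/trainingwheels
```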
.debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Using a network alias instead of a name Let's remove the `redis` container: ```bash $ docker rm -f redis ``` And create one that doesn't block the `redis` name: ```bash $ docker run --net dev --net-alias redis -d redis ``` Check that the app still works (but the counter is back to 1, since we wiped out the old Redis container). .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Names are *local* to each network Let's try to ping our `es` container from another container, when that other container is *not* on the `dev` network. ```bash $ docker run --rm alpine ping es ping: bad address 'es' ``` Names can be resolved only when containers are on the same network. Containers can contact each other only when they are on the same network (you can try to ping using the IP address to verify). .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Network aliases We would like to have another network, `prod`, with its own `es` container. But there can be only one container named `es`! We will use *network aliases*. A container can have multiple network aliases. Network aliases are *local* to a given network (only exist in this network). Multiple containers can have the same network alias (even on the same network). In Docker Engine 1.11, resolving a network alias yields the IP addresses of all containers holding this alias. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Creating containers on another network Create the `prod` network. ```bash $ docker network create prod 5a41562fecf2d8f115bedc16865f7336232a04268bdf2bd816aecca01b68d50c ``` We can now create multiple containers with the `es` alias on the new `prod` network. ```bash $ docker run -d --name prod-es-1 --net-alias es --net prod elasticsearch:2 38079d21caf0c5533a391700d9e9e920724e89200083df73211081c8a356d771 $ docker run -d --name prod-es-2 --net-alias es --net prod elasticsearch:2 1820087a9c600f43159688050dcc164c298183e1d2e62d5694fd46b10ac3bc3d ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Resolving network aliases Let's try DNS resolution first, using the `nslookup` tool that ships with the `alpine` image. ```bash $ docker run --net prod --rm alpine nslookup es Name: es Address 1: 172.23.0.3 prod-es-2.prod Address 2: 172.23.0.2 prod-es-1.prod ``` (You can ignore the `can't resolve '(null)'` errors.) .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Connecting to aliased containers Each ElasticSearch instance has a name (generated when it is started). This name can be seen when we issue a simple HTTP request on the ElasticSearch API endpoint. 
Try the following command a few times: .small[ ```bash $ docker run --rm --net dev centos curl -s es:9200 { "name" : "Tarot", ... } ``` ] Then try it a few times by replacing `--net dev` with `--net prod`: .small[ ```bash $ docker run --rm --net prod centos curl -s es:9200 { "name" : "The Symbiote", ... } ``` ] .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Good to know ... * Docker will not create network names and aliases on the default `bridge` network. * Therefore, if you want to use those features, you have to create a custom network first. * Network aliases are *not* unique on a given network. * i.e., multiple containers can have the same alias on the same network. * In that scenario, the Docker DNS server will return multiple records. (i.e. you will get DNS round robin out of the box.) * Enabling *Swarm Mode* gives access to clustering and load balancing with IPVS. * Creation of networks and network aliases is generally automated with tools like Compose. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## A few words about round robin DNS Don't rely exclusively on round robin DNS to achieve load balancing. Many factors can affect DNS resolution, and you might see: - all traffic going to a single instance; - traffic being split (unevenly) between some instances; - different behavior depending on your application language; - different behavior depending on your base distro; - different behavior depending on other factors (sic). It's OK to use DNS to discover available endpoints, but remember that you have to re-resolve every now and then to discover new endpoints. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Custom networks When creating a network, extra options can be provided. * `--internal` disables outbound traffic (the network won't have a default gateway). * `--gateway` indicates which address to use for the gateway (when outbound traffic is allowed). * `--subnet` (in CIDR notation) indicates the subnet to use. * `--ip-range` (in CIDR notation) indicates the subnet to allocate from. * `--aux-address` allows specifying a list of reserved addresses (which won't be allocated to containers). .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Setting containers' IP address * It is possible to set a container's address with `--ip`. * The IP address has to be within the subnet used for the container. A full example would look like this. 
```bash $ docker network create --subnet 10.66.0.0/16 pubnet 42fb16ec412383db6289a3e39c3c0224f395d7f85bcb1859b279e7a564d4e135 $ docker run --net pubnet --ip 10.66.66.66 -d nginx b2887adeb5578a01fd9c55c435cad56bbbe802350711d2743691f95743680b09 ``` *Note: don't hard code container IP addresses in your code!* *I repeat: don't hard code container IP addresses in your code!* .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Overlay networks * The features we've seen so far only work when all containers are on a single host. * If containers span multiple hosts, we need an *overlay* network to connect them together. * Docker ships with a default network plugin, `overlay`, implementing an overlay network leveraging VXLAN, *enabled with Swarm Mode*. * Other plugins (Weave, Calico...) can provide overlay networks as well. * Once you have an overlay network, *all the features that we've used in this chapter work identically across multiple hosts.* .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Multi-host networking (overlay) Out of the scope for this intro-level workshop! Very short instructions: - enable Swarm Mode (`docker swarm init` then `docker swarm join` on other nodes) - `docker network create mynet --driver overlay` - `docker service create --network mynet myimage` If you want to learn more about Swarm mode, you can check [this video](https://www.youtube.com/watch?v=EuzoEaE6Cqs) or [these slides](https://container.training/swarm-selfpaced.yml.html). .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Multi-host networking (plugins) Out of the scope for this intro-level workshop! General idea: - install the plugin (they often ship within containers) - run the plugin (if it's in a container, it will often require extra parameters; don't just `docker run` it blindly!) - some plugins require configuration or activation (creating a special file that tells Docker "use the plugin whose control socket is at the following location") - you can then `docker network create --driver pluginname` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Connecting and disconnecting dynamically * So far, we have specified which network to use when starting the container. * The Docker Engine also allows connecting and disconnecting while the container is running. * This feature is exposed through the Docker API, and through two Docker CLI commands: * `docker network connect ` * `docker network disconnect ` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Dynamically connecting to a network * We have a container named `es` connected to a network named `dev`. * Let's start a simple alpine container on the default network: ```bash $ docker run -ti alpine sh / # ``` * In this container, try to ping the `es` container: ```bash / # ping es ping: bad address 'es' ``` This doesn't work, but we will change that by connecting the container. 
.debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Finding the container ID and connecting it * Figure out the ID of our alpine container; here are two methods: * looking at `/etc/hostname` in the container, * running `docker ps -lq` on the host. * Run the following command on the host: ```bash $ docker network connect dev `` ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Checking what we did * Try again to `ping es` from the container. * It should now work correctly: ```bash / # ping es PING es (172.20.0.3): 56 data bytes 64 bytes from 172.20.0.3: seq=0 ttl=64 time=0.376 ms 64 bytes from 172.20.0.3: seq=1 ttl=64 time=0.130 ms ^C ``` * Interrupt it with Ctrl-C. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Looking at the network setup in the container We can look at the list of network interfaces with `ifconfig`, `ip a`, or `ip l`: .small[ ```bash / # ip a 1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 18: eth0@if19: mtu 1500 qdisc noqueue state UP link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0 valid_lft forever preferred_lft forever 20: eth1@if21: mtu 1500 qdisc noqueue state UP link/ether 02:42:ac:14:00:04 brd ff:ff:ff:ff:ff:ff inet 172.20.0.4/16 brd 172.20.255.255 scope global eth1 valid_lft forever preferred_lft forever / # ``` ] Each network connection is materialized with a virtual network interface. As we can see, we can be connected to multiple networks at the same time. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Disconnecting from a network * Let's try the symmetrical command to disconnect the container: ```bash $ docker network disconnect dev ``` * From now on, if we try to ping `es`, it will not resolve: ```bash / # ping es ping: bad address 'es' ``` * Trying to ping the IP address directly won't work either: ```bash / # ping 172.20.0.3 ... (nothing happens until we interrupt it with Ctrl-C) ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Network aliases are scoped per network * Each network has its own set of network aliases. * We saw this earlier: `es` resolves to different addresses in `dev` and `prod`. * If we are connected to multiple networks, the resolver looks up names in each of them (as of Docker Engine 18.03, it is the connection order) and stops as soon as the name is found. * Therefore, if we are connected to both `dev` and `prod`, resolving `es` will **not** give us the addresses of all the `es` services; but only the ones in `dev` or `prod`. * However, we can lookup `es.dev` or `es.prod` if we need to. 
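To see this in action, connect a test container to both networks and compare lookups (the container name below is arbitrary):

```bash
# Start a container on dev, then also connect it to prod
$ docker run -d --name dns-test --net dev alpine sleep 600
$ docker network connect prod dns-test

# The short name resolves through only one of the two networks...
$ docker exec dns-test nslookup es

# ...while the fully qualified aliases are unambiguous
$ docker exec dns-test nslookup es.dev
$ docker exec dns-test nslookup es.prod
```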
.debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Finding out about our networks and names * We can do reverse DNS lookups on containers' IP addresses. * If the IP address belongs to a network (other than the default bridge), the result will be: ``` name-or-first-alias-or-container-id.network-name ``` * Example: .small[ ```bash $ docker run -ti --net prod --net-alias hello alpine / # apk add --no-cache drill ... OK: 5 MiB in 13 packages / # ifconfig eth0 Link encap:Ethernet HWaddr 02:42:AC:15:00:03 inet addr:`172.21.0.3` Bcast:172.21.255.255 Mask:255.255.0.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 ... / # drill -t ptr `3.0.21.172`.in-addr.arpa ... ;; ANSWER SECTION: 3.0.21.172.in-addr.arpa. 600 IN PTR `hello.prod`. ... ``` ] .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Building with a custom network * We can build a Dockerfile with a custom network with `docker build --network NAME`. * This can be used to check that a build doesn't access the network. (But keep in mind that most Dockerfiles will fail, because they need to install remote packages and dependencies!) * This may be used to access an internal package repository. (But try to use a multi-stage build instead, if possible!) .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: pic .interstitial[] --- name: toc-ambassadors class: title Ambassadors .nav[ [Previous section](#toc-service-discovery-with-containers) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-local-development-workflow-with-docker) ] .debug[(automatically generated title slide)] --- class: title # Ambassadors  .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Ambassadors.md)] --- ## The ambassador pattern Ambassadors are containers that "masquerade" or "proxy" for another service. They abstract the connection details for these services, and can help with: * discovery (where is my service actually running?) * migration (what if my service has to be moved while I use it?) * fail over (how do I know to which instance of a replicated service I should connect?) * load balancing (how do I spread my requests across multiple instances of a service?) * authentication (what if my service requires credentials, certificates, or otherwise?) .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Ambassadors.md)] --- ## Introduction to Ambassadors The ambassador pattern: * Takes advantage of Docker's per-container naming system and abstracts connections between services. * Allows you to manage services without hard-coding connection information inside applications. To do this, instead of directly connecting containers, you insert ambassador containers. 
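At its simplest, an ambassador can be a tiny TCP relay. Here is a sketch, assuming the `alpine/socat` image (whose entrypoint runs `socat`); the remote address and port are made up:

```bash
# This container answers to the name "redis" on the dev network,
# but merely relays traffic to wherever Redis actually runs
$ docker run -d --net dev --net-alias redis \
    alpine/socat tcp-listen:6379,fork,reuseaddr tcp:10.0.0.42:12345
```

The application container keeps connecting to `redis:6379` and never needs to know the real location.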
.debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Ambassadors.md)] --- class: pic  .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Ambassadors.md)] --- ## Interacting with ambassadors * The web container uses normal Docker networking to connect to the ambassador. * The database container also talks with an ambassador. * For both containers, the ambassador is totally transparent. (There is no difference between normal operation and operation with an ambassador.) * If the database container is moved (or a failover happens), its new location will be tracked by the ambassador containers, and the web application container will still be able to connect, without reconfiguration. .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Ambassadors.md)] --- ## Ambassadors for simple service discovery Use case: * my application code connects to `redis` on the default port (6379), * my Redis service runs on another machine, on a non-default port (e.g. 12345), * I want to use an ambassador to let my application connect without modification. The ambassador will be: * a container running right next to my application, * using the name `redis` (or linked as `redis`), * listening on port 6379, * forwarding connections to the actual Redis service. .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Ambassadors.md)] --- ## Ambassadors for service migration Use case: * my application code still connects to `redis`, * my Redis service runs somewhere else, * my Redis service is moved to a different host+port, * the location of the Redis service is given to me via e.g. DNS SRV records, * I want to use an ambassador to automatically connect to the new location, with as little disruption as possible. The ambassador will be: * the same kind of container as before, * running an additional routine to monitor DNS SRV records, * updating the forwarding destination when the DNS SRV records are updated. .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Ambassadors.md)] --- ## Ambassadors for credentials injection Use case: * my application code still connects to `redis`, * my application code doesn't provide Redis credentials, * my production Redis service requires credentials, * my staging Redis service requires different credentials, * I want to use an ambassador to abstract those credentials. The ambassador will be: * a container using the name `redis` (or a link), * passed the credentials to use, * running a custom proxy that accepts connections on Redis default port, * performing authentication with the target Redis service before forwarding traffic. .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Ambassadors.md)] --- ## Ambassadors for load balancing Use case: * my application code connects to a web service called `api`, * I want to run multiple instances of the `api` backend, * those instances will be on different machines and ports, * I want to use an ambassador to abstract those details. The ambassador will be: * a container using the name `api` (or a link), * passed the list of backends to use (statically or dynamically), * running a load balancer (e.g. 
HAProxy or NGINX), * dispatching requests across all backends transparently. .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Ambassadors.md)] --- ## "Ambassador" is a *pattern* There are many ways to implement the pattern. Different deployments will use different underlying technologies. * On-premise deployments with a trusted network can track container locations in e.g. Zookeeper, and generate HAproxy configurations each time a location key changes. * Public cloud deployments or deployments across unsafe networks can add TLS encryption. * Ad-hoc deployments can use a master-less discovery protocol like avahi to register and discover services. * It is also possible to do one-shot reconfiguration of the ambassadors. It is slightly less dynamic but has far fewer requirements. * Ambassadors can be used in addition to, or instead of, overlay networks. .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Ambassadors.md)] --- ## Service meshes * A service mesh is a configurable network layer. * It can provide service discovery, high availability, load balancing, observability... * Service meshes are particularly useful for microservices applications. * Service meshes are often implemented as proxies. * Applications connect to the service mesh, which relays the connection where needed. *Does that sound familiar?* .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Ambassadors.md)] --- ## Ambassadors and service meshes * When using a service mesh, a "sidecar container" is often used as a proxy * Our services connect (transparently) to that sidecar container * That sidecar container figures out where to forward the traffic ... Does that sound familiar? (It should, because service meshes are essentially app-wide or cluster-wide ambassadors!) .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Ambassadors.md)] --- ## Some popular service meshes ... And related projects: * [Consul Connect](https://www.consul.io/docs/connect/index.html) Transparently secures service-to-service connections with mTLS. * [Gloo](https://gloo.solo.io/) API gateway that can interconnect applications on VMs, containers, and serverless. * [Istio](https://istio.io/) A popular service mesh. * [Linkerd](https://linkerd.io/) Another popular service mesh. .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Ambassadors.md)] --- ## Learning more about service meshes A few blog posts about service meshes: * [Containers, microservices, and service meshes](http://jpetazzo.github.io/2019/05/17/containers-microservices-service-meshes/) Provides historical context: how did we do before service meshes were invented? * [Do I Need a Service Mesh?](https://www.nginx.com/blog/do-i-need-a-service-mesh/) Explains the purpose of service meshes. Illustrates some NGINX features. * [Do you need a service mesh?](https://www.oreilly.com/ideas/do-you-need-a-service-mesh) Includes high-level overview and definitions. * [What is Service Mesh and Why Do We Need It?](https://containerjournal.com/2018/12/12/what-is-service-mesh-and-why-do-we-need-it/) Includes a step-by-step demo of Linkerd. 
And a video: * [What is a Service Mesh, and Do I Need One When Developing Microservices?](https://www.datawire.io/envoyproxy/service-mesh/) .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Ambassadors.md)] --- class: pic .interstitial[] --- name: toc-local-development-workflow-with-docker class: title Local development workflow with Docker .nav[ [Previous section](#toc-ambassadors) | [Back to table of contents](#toc-chapter-5) | [Next section](#toc-working-with-volumes) ] .debug[(automatically generated title slide)] --- class: title # Local development workflow with Docker  .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Objectives At the end of this section, you will be able to: * Share code between container and host. * Use a simple local development workflow. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Local development in a container We want to solve the following issues: - "Works on my machine" - "Not the same version" - "Missing dependency" By using Docker containers, we will get a consistent development environment. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Working on the "namer" application * We have to work on some application whose code is at: https://github.com/jpetazzo/namer. * What is it? We don't know yet! * Let's download the code. ```bash $ git clone https://github.com/jpetazzo/namer ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Looking at the code ```bash $ cd namer $ ls -1 company_name_generator.rb config.ru docker-compose.yml Dockerfile Gemfile ``` -- Aha, a `Gemfile`! This is Ruby. Probably. We know this. Maybe? .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Looking at the `Dockerfile` ```dockerfile FROM ruby COPY . /src WORKDIR /src RUN bundler install CMD ["rackup", "--host", "0.0.0.0"] EXPOSE 9292 ``` * This application is using a base `ruby` image. * The code is copied in `/src`. * Dependencies are installed with `bundler`. * The application is started with `rackup`. * It is listening on port 9292. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Building and running the "namer" application * Let's build the application with the `Dockerfile`! -- ```bash $ docker build -t namer . ``` -- * Then run it. *We need to expose its ports.* -- ```bash $ docker run -dP namer ``` -- * Check on which port the container is listening. -- ```bash $ docker ps -l ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Connecting to our application * Point our browser to our Docker node, on the port allocated to the container. -- * Hit "reload" a few times. 
-- * This is an enterprise-class, carrier-grade, ISO-compliant company name generator! (With 50% more bullshit than the average competition!) (Wait, was that 50% more, or 50% less? *Anyway!*)  .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Making changes to the code Option 1: * Edit the code locally * Rebuild the image * Re-run the container Option 2: * Enter the container (with `docker exec`) * Install an editor * Make changes from within the container Option 3: * Use a *volume* to mount local files into the container * Make changes locally * Changes are reflected in the container .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Our first volume We will tell Docker to map the current directory to `/src` in the container. ```bash $ docker run -d -v $(pwd):/src -P namer ``` * `-d`: the container should run in detached mode (in the background). * `-v`: the following host directory should be mounted inside the container. * `-P`: publish all the ports exposed by this image. * `namer` is the name of the image we will run. * We don't specify a command to run because it is already set in the Dockerfile via `CMD`. Note: on Windows, replace `$(pwd)` with `%cd%` (or `${pwd}` if you use PowerShell). .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Mounting volumes inside containers The `-v` flag mounts a directory from your host into your Docker container. The flag structure is: ```bash [host-path]:[container-path]:[rw|ro] ``` * `[host-path]` and `[container-path]` are created if they don't exist. * You can control the write status of the volume with the `ro` and `rw` options. * If you don't specify `rw` or `ro`, it will be `rw` by default. There will be a full chapter about volumes! .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Testing the development container * Check the port used by our new container. ```bash $ docker ps -l CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 045885b68bc5 namer rackup 3 seconds ago Up ... 0.0.0.0:32770->9292/tcp ... ``` * Open the application in your web browser. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Making a change to our application Our customer really doesn't like the color of our text. Let's change it. ```bash $ vi company_name_generator.rb ``` And change ```css color: royalblue; ``` To: ```css color: red; ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Viewing our changes * Reload the application in our browser. -- * The color should have changed.  .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Understanding volumes * Volumes are *not* copying or synchronizing files between the host and the container. 
* Volumes are *bind mounts*: a kernel mechanism associating one path with another. * Bind mounts are *kind of* similar to symbolic links, but at a very different level. * Changes made on the host or on the container will be visible on the other side. (Under the hood, it's the same file anyway.) .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Trash your servers and burn your code *(This is the title of a [2013 blog post](http://chadfowler.com/2013/06/23/immutable-deployments.html) by Chad Fowler, where he explains the concept of immutable infrastructure.)* -- * Let's majorly mess up our container. (Remove files or whatever.) * Now, how can we fix this? -- * Our old container (with the blue version of the code) is still running. * See on which port it is exposed: ```bash docker ps ``` * Point our browser to it to confirm that it still works fine. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Immutable infrastructure in a nutshell * Instead of *updating* a server, we deploy a new one. * This might be challenging with classical servers, but it's trivial with containers. * In fact, with Docker, the most logical workflow is to build a new image and run it. * If something goes wrong with the new image, we can always restart the old one. * We can even keep both versions running side by side. If this pattern sounds interesting, you might want to read about *blue/green deployment* and *canary deployments*. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Recap of the development workflow 1. Write a Dockerfile to build an image containing our development environment. (Rails, Django, ... and all the dependencies for our app) 2. Start a container from that image. Use the `-v` flag to mount our source code inside the container. 3. Edit the source code outside the container, using familiar tools. (vim, emacs, textmate...) 4. Test the application. (Some frameworks pick up changes automatically. Others require you to Ctrl-C + restart after each modification.) 5. Iterate and repeat steps 3 and 4 until satisfied. 6. When done, commit+push source code changes. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- class: extra-details ## Debugging inside the container Docker has a command called `docker exec`. It allows users to run a new process in a container which is already running. If sometimes you find yourself wishing you could SSH into a container: you can use `docker exec` instead. You can get a shell prompt inside an existing container this way, or run an arbitrary process for automation. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- class: extra-details ## `docker exec` example ```bash $ # You can run ruby commands in the area the app is running and more! 
$ docker exec -it <container_id> bash root@5ca27cf74c2e:/opt/namer# irb irb(main):001:0> [0, 1, 2, 3, 4].map {|x| x ** 2}.compact => [0, 1, 4, 9, 16] irb(main):002:0> exit ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- class: extra-details ## Stopping the container Now that we're done, let's stop our container. ```bash $ docker stop <container_id> ``` And remove it. ```bash $ docker rm <container_id> ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Section summary We've learned how to: * Share code between container and host. * Set our working directory. * Use a simple local development workflow. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- class: pic .interstitial[] --- name: toc-working-with-volumes class: title Working with volumes .nav[ [Previous section](#toc-local-development-workflow-with-docker) | [Back to table of contents](#toc-chapter-5) | [Next section](#toc-compose-for-development-stacks) ] .debug[(automatically generated title slide)] --- class: title # Working with volumes  .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- ## Objectives At the end of this section, you will be able to: * Create containers holding volumes. * Share volumes across containers. * Share a host directory with one or many containers. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- ## Working with volumes Docker volumes can be used to achieve many things, including: * Bypassing the copy-on-write system to obtain native disk I/O performance. * Bypassing copy-on-write to leave some files out of `docker commit`. * Sharing a directory between multiple containers. * Sharing a directory between the host and a container. * Sharing a *single file* between the host and a container. * Using remote storage and custom storage with *volume drivers*. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- ## Volumes are special directories in a container Volumes can be declared in two different ways: * Within a `Dockerfile`, with a `VOLUME` instruction. ```dockerfile VOLUME /uploads ``` * On the command-line, with the `-v` flag for `docker run`. ```bash $ docker run -d -v /uploads myapp ``` In both cases, `/uploads` (inside the container) will be a volume. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Volumes bypass the copy-on-write system Volumes act as passthroughs to the host filesystem. * The I/O performance on a volume is exactly the same as I/O performance on the Docker host. * When you `docker commit`, the content of volumes is not brought into the resulting image. * If a `RUN` instruction in a `Dockerfile` changes the content of a volume, those changes are not recorded either. * If a container is started with the `--read-only` flag, the volume will still be writable (unless the volume is a read-only volume).
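Here is a quick, hedged way to see that last point in action (the `/data` path is arbitrary):

```bash
# The root filesystem is read-only, but the anonymous volume at /data stays writable:
docker run --rm --read-only -v /data alpine \
  sh -c 'touch /data/ok && echo "volume is writable"'

# Writing outside the volume fails:
docker run --rm --read-only alpine \
  sh -c 'touch /tmp/ko || echo "root filesystem is read-only"'
```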
.debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Volumes can be shared across containers You can start a container with *exactly the same volumes* as another one. The new container will have the same volumes, in the same directories. They will contain exactly the same thing, and remain in sync. Under the hood, they are actually the same directories on the host anyway. This is done using the `--volumes-from` flag for `docker run`. We will see an example in the following slides. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Sharing app server logs with another container Let's start a Tomcat container: ```bash $ docker run --name webapp -d -p 8080:8080 -v /usr/local/tomcat/logs tomcat ``` Now, start an `alpine` container accessing the same volume: ```bash $ docker run --volumes-from webapp alpine sh -c "tail -f /usr/local/tomcat/logs/*" ``` Then, from another window, send requests to our Tomcat container: ```bash $ curl localhost:8080 ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- ## Volumes exist independently of containers If a container is stopped or removed, its volumes still exist and are available. Volumes can be listed and manipulated with `docker volume` subcommands: ```bash $ docker volume ls DRIVER VOLUME NAME local 5b0b65e4316da67c2d471086640e6005ca2264f3... local pgdata-prod local pgdata-dev local 13b59c9936d78d109d094693446e174e5480d973... ``` Some of those volume names were explicit (pgdata-prod, pgdata-dev). The others (the hex IDs) were generated automatically by Docker. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- ## Naming volumes * Volumes can be created without a container, then used in multiple containers. Let's create a couple of volumes directly. ```bash $ docker volume create webapps webapps ``` ```bash $ docker volume create logs logs ``` Volumes are not anchored to a specific path. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- ## Using our named volumes * Volumes are used with the `-v` option. * When a host path does not contain a `/`, it is considered a volume name. Let's start a web server using the two previous volumes. ```bash $ docker run -d -p 1234:8080 \ -v logs:/usr/local/tomcat/logs \ -v webapps:/usr/local/tomcat/webapps \ tomcat ``` Check that it's running correctly: ```bash $ curl localhost:1234 ... (Tomcat tells us how happy it is to be up and running) ... ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- ## Using a volume in another container * We will make changes to the volume from another container. * In this example, we will run a text editor in the other container. (But this could be an FTP server, a WebDAV server, a Git receiver...) Let's start another container using the `webapps` volume. ```bash $ docker run -v webapps:/webapps -w /webapps -ti alpine vi ROOT/index.jsp ``` Vandalize the page, save, exit. 
Then run `curl localhost:1234` again to see your changes. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- ## Using custom "bind-mounts" In some cases, you want a specific directory on the host to be mapped inside the container: * You want to manage storage and snapshots yourself. (With LVM, or a SAN, or ZFS, or anything else!) * You have a separate disk with better performance (SSD) or resiliency (EBS) than the system disk, and you want to put important data on that disk. * You want to share your source directory between your host (where the source gets edited) and the container (where it is compiled or executed). Wait, we already met the last use-case in our example development workflow! Nice. ```bash $ docker run -d -v /path/on/the/host:/path/in/container image ... ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Migrating data with `--volumes-from` The `--volumes-from` option tells Docker to re-use all the volumes of an existing container. * Scenario: migrating from Redis 2.8 to Redis 3.0. * We have a container (`myredis`) running Redis 2.8. * Stop the `myredis` container. * Start a new container, using the Redis 3.0 image, and the `--volumes-from` option. * The new container will inherit the data of the old one. * Newer containers can use `--volumes-from` too. * Doesn't work across servers, so not usable in clusters (Swarm, Kubernetes). .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Data migration in practice Let's create a Redis container. ```bash $ docker run -d --name redis28 redis:2.8 ``` Connect to the Redis container and set some data. ```bash $ docker run -ti --link redis28:redis busybox telnet redis 6379 ``` Issue the following commands: ```bash SET counter 42 INFO server SAVE QUIT ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Upgrading Redis Stop the Redis container. ```bash $ docker stop redis28 ``` Start the new Redis container. ```bash $ docker run -d --name redis30 --volumes-from redis28 redis:3.0 ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Testing the new Redis Connect to the Redis container and see our data. ```bash docker run -ti --link redis30:redis busybox telnet redis 6379 ``` Issue a few commands. ```bash GET counter INFO server QUIT ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- ## Volumes lifecycle * When you remove a container, its volumes are kept around. * You can list them with `docker volume ls`. * You can access them by creating a container with `docker run -v`. * You can remove them with `docker volume rm` or `docker system prune`. Ultimately, _you_ are the one responsible for logging, monitoring, and backup of your volumes. 
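For routine housekeeping, the following commands cover the whole lifecycle (`myvolume` is just a placeholder name):

```bash
docker volume ls                   # list all volumes
docker volume ls -f dangling=true  # list volumes not referenced by any container
docker volume rm myvolume          # remove one specific volume
docker volume prune                # remove all unused volumes (asks for confirmation)
```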
.debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Checking volumes defined by an image Wondering if an image has volumes? Just use `docker inspect`: ```bash $ docker inspect training/datavol [{ "config": { . . . "Volumes": { "/var/webapp": {} }, . . . }] ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Checking volumes used by a container To see which paths are actually volumes, and to what they are bound, use `docker inspect` (again): ```bash $ docker inspect <container_id> [{ "ID": "<container_id>", . . . "Volumes": { "/var/webapp": "/var/lib/docker/vfs/dir/f4280c5b6207ed531efd4cc673ff620cef2a7980f747dbbcca001db61de04468" }, "VolumesRW": { "/var/webapp": true }, }] ``` * We can see that our volume is present on the file system of the Docker host. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- ## Sharing a single file The same `-v` flag can be used to share a single file (instead of a directory). One of the most interesting examples is to share the Docker control socket. ```bash $ docker run -it -v /var/run/docker.sock:/var/run/docker.sock docker sh ``` From that container, you can now run `docker` commands communicating with the Docker Engine running on the host. Try `docker ps`! .warning[Since that container has access to the Docker socket, it has root-like access to the host.] .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- ## Volume plugins You can install plugins to manage volumes backed by particular storage systems, or providing extra features. For instance: * [REX-Ray](https://rexray.io/) - create and manage volumes backed by an enterprise storage system (e.g. SAN or NAS), or by cloud block stores (e.g. EBS, EFS). * [Portworx](https://portworx.com/) - provides a distributed block store for containers. * [Gluster](https://www.gluster.org/) - open source software-defined distributed storage that can scale to several petabytes. It provides interfaces for object, block and file storage. * and much more at the [Docker Store](https://store.docker.com/search?category=volume&q=&type=plugin)! .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- ## Volumes vs. Mounts * Since Docker 17.06, a new option is available: `--mount`. * It offers a new, richer syntax to manipulate data in containers. * It makes an explicit difference between: - volumes (identified with a unique name, managed by a storage plugin), - bind mounts (identified with a host path, not managed). * The former `-v` / `--volume` option is still usable.
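For reference, here is the same bind mount expressed with both syntaxes (the paths are placeholders):

```bash
# Old-style -v flag:
docker run --rm -v /path/on/host:/path/in/container:ro alpine ls /path/in/container

# Equivalent --mount flag:
docker run --rm \
  --mount type=bind,source=/path/on/host,target=/path/in/container,readonly \
  alpine ls /path/in/container
```

One behavioral difference worth knowing: with `--mount type=bind`, the host path must already exist, whereas `-v` silently creates it.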
.debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- ## `--mount` syntax Binding a host path to a container path: ```bash $ docker run \ --mount type=bind,source=/path/on/host,target=/path/in/container alpine ``` Mounting a volume to a container path: ```bash $ docker run \ --mount source=myvolume,target=/path/in/container alpine ``` Mounting a tmpfs (in-memory, for temporary files): ```bash $ docker run \ --mount type=tmpfs,destination=/path/in/container,tmpfs-size=1000000 alpine ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- ## Section summary We've learned how to: * Create and manage volumes. * Share volumes across containers. * Share a host directory with one or many containers. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- class: pic .interstitial[] --- name: toc-compose-for-development-stacks class: title Compose for development stacks .nav[ [Previous section](#toc-working-with-volumes) | [Back to table of contents](#toc-chapter-5) | [Next section](#toc-exercise--writing-a-compose-file) ] .debug[(automatically generated title slide)] --- # Compose for development stacks Dockerfiles are great to build container images. But what if we work with a complex stack made of multiple containers? Eventually, we will want to write some custom scripts and automation to build, run, and connect our containers together. There is a better way: using Docker Compose. In this section, you will use Compose to bootstrap a development environment. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## What is Docker Compose? Docker Compose (formerly known as `fig`) is an external tool. Unlike the Docker Engine, it is written in Python. It's open source as well. The general idea of Compose is to enable a very simple, powerful onboarding workflow: 1. Checkout your code. 2. Run `docker-compose up`. 3. Your app is up and running! .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Compose overview This is how you work with Compose: * You describe a set (or stack) of containers in a YAML file called `docker-compose.yml`. * You run `docker-compose up`. * Compose automatically pulls images, builds containers, and starts them. * Compose can set up links, volumes, and other Docker options for you. * Compose can run the containers in the background, or in the foreground. * When containers are running in the foreground, their aggregated output is shown. Before diving in, let's see a small example of Compose in action. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- class: pic  .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Checking if Compose is installed If you are using the official training virtual machines, Compose has been pre-installed. If you are using Docker for Mac/Windows or the Docker Toolbox, Compose comes with them. 
If you are on Linux (desktop or server environment), you will need to install Compose from its [release page](https://github.com/docker/compose/releases) or with `pip install docker-compose`. You can always check that it is installed by running: ```bash $ docker-compose --version ``` .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Launching Our First Stack with Compose First step: clone the source code for the app we will be working on. ```bash $ cd $ git clone https://github.com/jpetazzo/trainingwheels ... $ cd trainingwheels ``` Second step: start your app. ```bash $ docker-compose up ``` Watch Compose build and run your app with the correct parameters, including linking the relevant containers together. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Launching Our First Stack with Compose Verify that the app is running at `http://<yourHostIP>:8000`.  .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Stopping the app When you hit `^C`, Compose tries to gracefully terminate all of the containers. After ten seconds (or if you press `^C` again) it will forcibly kill them. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## The `docker-compose.yml` file Here is the file used in the demo: .small[ ```yaml version: "2" services: www: build: www ports: - 8000:5000 user: nobody environment: DEBUG: 1 command: python counter.py volumes: - ./www:/src redis: image: redis ``` ] .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Compose file structure A Compose file has multiple sections: * `version` is mandatory. (We should use `"2"` or later; version 1 is deprecated.) * `services` is mandatory. A service is one or more replicas of the same image running as containers. * `networks` is optional and indicates to which networks containers should be connected. (By default, containers will be connected on a private, per-compose-file network.) * `volumes` is optional and can define volumes to be used and/or shared by the containers. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Compose file versions * Version 1 is legacy and shouldn't be used. (If you see a Compose file without `version` and `services`, it's a legacy v1 file.) * Version 2 added support for networks and volumes. * Version 3 added support for deployment options (scaling, rolling updates, etc). The [Docker documentation](https://docs.docker.com/compose/compose-file/) has excellent information about the Compose file format if you need to know more about versions. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Containers in `docker-compose.yml` Each service in the YAML file must contain either `build` or `image`. * `build` indicates a path containing a Dockerfile. * `image` indicates an image name (local, or on a registry).
* If both are specified, an image will be built from the `build` directory and named `image`. The other parameters are optional. They encode the parameters that you would typically add to `docker run`. Sometimes they have several minor improvements. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Container parameters * `command` indicates what to run (like `CMD` in a Dockerfile). * `ports` translates to one (or multiple) `-p` options to map ports. You can specify local ports (i.e. `x:y` to expose public port `x`). * `volumes` translates to one (or multiple) `-v` options. You can use relative paths here. For the full list, check: https://docs.docker.com/compose/compose-file/ .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Compose commands We already saw `docker-compose up`, but another one is `docker-compose build`. It will execute `docker build` for all containers mentioning a `build` path. It can also be invoked automatically when starting the application: ```bash docker-compose up --build ``` Another common option is to start containers in the background: ```bash docker-compose up -d ``` .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Check container status It can be tedious to check the status of your containers with `docker ps`, especially when running multiple apps at the same time. Compose makes it easier; with `docker-compose ps` you will see only the status of the containers of the current stack: ```bash $ docker-compose ps Name Command State Ports ---------------------------------------------------------------------------- trainingwheels_redis_1 /entrypoint.sh red Up 6379/tcp trainingwheels_www_1 python counter.py Up 0.0.0.0:8000->5000/tcp ``` .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Cleaning up (1) If you have started your application in the background with Compose and want to stop it easily, you can use the `kill` command: ```bash $ docker-compose kill ``` Likewise, `docker-compose rm` will let you remove containers (after confirmation): ```bash $ docker-compose rm Going to remove trainingwheels_redis_1, trainingwheels_www_1 Are you sure? [yN] y Removing trainingwheels_redis_1... Removing trainingwheels_www_1... ``` .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Cleaning up (2) Alternatively, `docker-compose down` will stop and remove containers. It will also remove other resources, like networks that were created for the application. ```bash $ docker-compose down Stopping trainingwheels_www_1 ... done Stopping trainingwheels_redis_1 ... done Removing trainingwheels_www_1 ... done Removing trainingwheels_redis_1 ... done ``` Use `docker-compose down -v` to remove everything including volumes. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Special handling of volumes Compose is smart. 
If your container uses volumes, when you restart your application, Compose will create a new container, but carefully re-use the volumes it was using previously. This makes it easy to upgrade a stateful service, by pulling its new image and just restarting your stack with Compose. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Compose project name * When you run a Compose command, Compose infers the "project name" of your app. * By default, the "project name" is the name of the current directory. * For instance, if you are in `/home/zelda/src/ocarina`, the project name is `ocarina`. * All resources created by Compose are tagged with this project name. * The project name also appears as a prefix of the names of the resources. E.g. in the previous example, service `www` will create a container `ocarina_www_1`. * The project name can be overridden with `docker-compose -p`. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Running two copies of the same app If you want to run two copies of the same app simultaneously, all you have to do is to make sure that each copy has a different project name. You can: * copy your code into a directory with a different name * start each copy with `docker-compose -p myprojname up` Each copy will run in a different network, totally isolated from the other. This is ideal for debugging regressions, doing side-by-side comparisons, etc. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- class: pic .interstitial[] --- name: toc-exercise--writing-a-compose-file class: title Exercise – writing a Compose file .nav[ [Previous section](#toc-compose-for-development-stacks) | [Back to table of contents](#toc-chapter-5) | [Next section](#toc-application-configuration) ] .debug[(automatically generated title slide)] --- # Exercise – writing a Compose file Let's write a Compose file for the wordsmith app! The code is at: https://github.com/jpetazzo/wordsmith .debug[[containers/Exercise_Composefile.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Exercise_Composefile.md)] --- class: pic .interstitial[] --- name: toc-application-configuration class: title Application Configuration .nav[ [Previous section](#toc-exercise--writing-a-compose-file) | [Back to table of contents](#toc-chapter-6) | [Next section](#toc-orchestration-an-overview) ] .debug[(automatically generated title slide)] --- # Application Configuration There are many ways to provide configuration to containerized applications. There is no "best way" – it depends on factors like: * configuration size, * mandatory and optional parameters, * scope of configuration (per container, per app, per customer, per site, etc), * frequency of changes in the configuration. .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Application_Configuration.md)] --- ## Command-line parameters ```bash docker run jpetazzo/hamba 80 www1:80 www2:80 ``` * Configuration is provided through command-line parameters. * In the above example, the `ENTRYPOINT` is a script that will: - parse the parameters, - generate a configuration file, - start the actual service.
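Below is a minimal, hypothetical sketch of what such an entrypoint script could look like (it is *not* the actual `jpetazzo/hamba` script; the HAProxy-style configuration and file path are purely illustrative):

```bash
#!/bin/sh
set -e

# First parameter: the port to listen on; remaining parameters: backends.
FRONTEND_PORT=$1
shift

# Generate a tiny load balancer configuration from the parameters.
{
  echo "listen app"
  echo "  bind :$FRONTEND_PORT"
  i=0
  for BACKEND in "$@"; do
    i=$((i+1))
    echo "  server backend$i $BACKEND check"
  done
} > /usr/local/etc/haproxy/haproxy.cfg

# Start the actual service in the foreground.
exec haproxy -f /usr/local/etc/haproxy/haproxy.cfg -db
```

With a script like this set as the image's `ENTRYPOINT`, a command such as `docker run myimage 80 www1:80 www2:80` is all the configuration the user has to provide.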
.debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Application_Configuration.md)] --- ## Command-line parameters pros and cons * Appropriate for mandatory parameters (without which the service cannot start). * Convenient for "toolbelt" services instantiated many times. (Because there is no extra step: just run it!) * Not great for dynamic configurations or bigger configurations. (These things are still possible, but more cumbersome.) .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Application_Configuration.md)] --- ## Environment variables ```bash docker run -e ELASTICSEARCH_URL=http://es42:9201/ kibana ``` * Configuration is provided through environment variables. * The environment variable can be used straight by the program, or by a script generating a configuration file. .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Application_Configuration.md)] --- ## Environment variables pros and cons * Appropriate for optional parameters (since the image can provide default values). * Also convenient for services instantiated many times. (It's as easy as command-line parameters.) * Great for services with lots of parameters, but you only want to specify a few. (And use default values for everything else.) * Ability to introspect possible parameters and their default values. * Not great for dynamic configurations. .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Application_Configuration.md)] --- ## Baked-in configuration ``` FROM prometheus COPY prometheus.conf /etc ``` * The configuration is added to the image. * The image may have a default configuration; the new configuration can: - replace the default configuration, - extend it (if the code can read multiple configuration files). .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Application_Configuration.md)] --- ## Baked-in configuration pros and cons * Allows arbitrary customization and complex configuration files. * Requires writing a configuration file. (Obviously!) * Requires building an image to start the service. * Requires rebuilding the image to reconfigure the service. * Requires rebuilding the image to upgrade the service. * Configured images can be stored in registries. (Which is great, but requires a registry.) .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Application_Configuration.md)] --- ## Configuration volume ```bash docker run -v appconfig:/etc/appconfig myapp ``` * The configuration is stored in a volume. * The volume is attached to the container. * The image may have a default configuration. (But this results in a less "obvious" setup, that needs more documentation.) .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Application_Configuration.md)] --- ## Configuration volume pros and cons * Allows arbitrary customization and complex configuration files. * Requires creating a volume for each different configuration. * Services with identical configurations can use the same volume. 
* Doesn't require building / rebuilding an image when upgrading / reconfiguring. * Configuration can be generated or edited through another container. .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Application_Configuration.md)] --- ## Dynamic configuration volume * This is a powerful pattern for dynamic, complex configurations. * The configuration is stored in a volume. * The configuration is generated / updated by a special container. * The application container detects when the configuration is changed. (And automatically reloads the configuration when necessary.) * The configuration can be shared between multiple services if needed. .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Application_Configuration.md)] --- ## Dynamic configuration volume example In a first terminal, start a load balancer with an initial configuration: ```bash $ docker run --name loadbalancer jpetazzo/hamba \ 80 goo.gl:80 ``` In another terminal, reconfigure that load balancer: ```bash $ docker run --rm --volumes-from loadbalancer jpetazzo/hamba reconfigure \ 80 google.com:80 ``` The configuration could also be updated through e.g. a REST API. (The REST API itself being served from another container.) .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Application_Configuration.md)] --- ## Keeping secrets .warning[Ideally, you should not put secrets (passwords, tokens...) in:] * command-line or environment variables (anyone with Docker API access can get them), * images, especially stored in a registry. Secrets management is better handled with an orchestrator (like Swarm or Kubernetes). Orchestrators allow passing secrets in a "one-way" manner. Managing secrets securely without an orchestrator can be contrived. E.g.: - read the secret on stdin when the service starts, - pass the secret using an API endpoint. .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Application_Configuration.md)] --- class: pic .interstitial[] --- name: toc-orchestration-an-overview class: title Orchestration, an overview .nav[ [Previous section](#toc-application-configuration) | [Back to table of contents](#toc-chapter-6) | [Next section](#toc-from-docker-to-kubernetes) ] .debug[(automatically generated title slide)] --- # Orchestration, an overview In this chapter, we will: * Explain what orchestration is and why we would need it. * Present (from a high-level perspective) some orchestrators. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- class: pic ## What's orchestration?  .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## What's orchestration? According to Wikipedia: *Orchestration describes the __automated__ arrangement, coordination, and management of complex computer systems, middleware, and services.* -- *[...] orchestration is often discussed in the context of __service-oriented architecture__, __virtualization__, provisioning, Converged Infrastructure and __dynamic datacenter__ topics.* -- What does that really mean?
.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Example 1: dynamic cloud instances -- - Q: do we always use 100% of our servers? -- - A: obviously not! .center[] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Example 1: dynamic cloud instances - Every night, scale down (by shutting down extraneous replicated instances) - Every morning, scale up (by deploying new copies) - "Pay for what you use" (i.e. save big $$$ here) .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Example 1: dynamic cloud instances How do we implement this? - Crontab - Autoscaling (save even bigger $$$) That's *relatively* easy. Now, how are things for our IAAS provider? .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Example 2: dynamic datacenter - Q: what's the #1 cost in a datacenter? -- - A: electricity! -- - Q: what uses electricity? -- - A: servers, obviously - A: ... and associated cooling -- - Q: do we always use 100% of our servers? -- - A: obviously not! .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Example 2: dynamic datacenter - If only we could turn off unused servers during the night... - Problem: we can only turn off a server if it's totally empty! (i.e. all VMs on it are stopped/moved) - Solution: *migrate* VMs and shutdown empty servers (e.g. combine two hypervisors with 40% load into 80%+0%, and shut down the one at 0%) .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Example 2: dynamic datacenter How do we implement this? - Shut down empty hosts (but keep some spare capacity) - Start hosts again when capacity gets low - Ability to "live migrate" VMs (Xen already did this 10+ years ago) - Rebalance VMs on a regular basis - what if a VM is stopped while we move it? - should we allow provisioning on hosts involved in a migration? *Scheduling* becomes more complex. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## What is scheduling? According to Wikipedia (again): *In computing, scheduling is the method by which threads, processes or data flows are given access to system resources.* The scheduler is concerned mainly with: - throughput (total amount of work done per time unit); - turnaround time (between submission and completion); - response time (between submission and start); - waiting time (between job readiness and execution); - fairness (appropriate times according to priorities). In practice, these goals often conflict. 
**"Scheduling" = decide which resources to use.** .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Exercise 1 - You have: - 5 hypervisors (physical machines) - Each server has: - 16 GB RAM, 8 cores, 1 TB disk - Each week, your team requests: - one VM with X RAM, Y CPU, Z disk Scheduling = deciding which hypervisor to use for each VM. Difficulty: easy! .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Exercise 2 - You have: - 1000+ hypervisors (and counting!) - Each server has different resources: - 8-500 GB of RAM, 4-64 cores, 1-100 TB disk - Multiple times a day, a different team asks for: - up to 50 VMs with different characteristics Scheduling = deciding which hypervisor to use for each VM. Difficulty: ??? .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Exercise 2 - You have: - 1000+ hypervisors (and counting!) - Each server has different resources: - 8-500 GB of RAM, 4-64 cores, 1-100 TB disk - Multiple times a day, a different team asks for: - up to 50 VMs with different characteristics Scheduling = deciding which hypervisor to use for each VM.  .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Exercise 3 - You have machines (physical and/or virtual) - You have containers - You are trying to put the containers on the machines - Sounds familiar? .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- class: pic ## Scheduling with one resource .center[] ## We can't fit a job of size 6 :( .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- class: pic ## Scheduling with one resource .center[] ## ... Now we can! .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- class: pic ## Scheduling with two resources .center[] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- class: pic ## Scheduling with three resources .center[] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- class: pic ## You need to be good at this .center[] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- class: pic ## But also, you must be quick! .center[] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- class: pic ## And be web scale! .center[] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- class: pic ## And think outside (?) of the box! 
.center[] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- class: pic ## Good luck! .center[] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## TL,DR * Scheduling with multiple resources (dimensions) is hard. * Don't expect to solve the problem with a Tiny Shell Script. * There are literally tons of research papers written on this. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## But our orchestrator also needs to manage ... * Network connectivity (or filtering) between containers. * Load balancing (external and internal). * Failure recovery (if a node or a whole datacenter fails). * Rolling out new versions of our applications. (Canary deployments, blue/green deployments...) .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Some orchestrators We are going to present briefly a few orchestrators. There is no "absolute best" orchestrator. It depends on: - your applications, - your requirements, - your pre-existing skills... .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Nomad - Open Source project by Hashicorp. - Arbitrary scheduler (not just for containers). - Great if you want to schedule mixed workloads. (VMs, containers, processes...) - Less integration with the rest of the container ecosystem. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Mesos - Open Source project in the Apache Foundation. - Arbitrary scheduler (not just for containers). - Two-level scheduler. - Top-level scheduler acts as a resource broker. - Second-level schedulers (aka "frameworks") obtain resources from top-level. - Frameworks implement various strategies. (Marathon = long running processes; Chronos = run at intervals; ...) - Commercial offering through DC/OS by Mesosphere. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Rancher - Rancher 1 offered a simple interface for Docker hosts. - Rancher 2 is a complete management platform for Docker and Kubernetes. - Technically not an orchestrator, but it's a popular option. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Swarm - Tightly integrated with the Docker Engine. - Extremely simple to deploy and setup, even in multi-manager (HA) mode. - Secure by default. - Strongly opinionated: - smaller set of features, - easier to operate. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Kubernetes - Open Source project initiated by Google. - Contributions from many other actors. - *De facto* standard for container orchestration. - Many deployment options; some of them very complex. - Reputation: steep learning curve. 
- Reality: - true, if we try to understand *everything*; - false, if we focus on what matters. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- class: pic .interstitial[] --- name: toc-from-docker-to-kubernetes class: title From Docker to Kubernetes .nav[ [Previous section](#toc-orchestration-an-overview) | [Back to table of contents](#toc-chapter-6) | [Next section](#toc-our-sample-application) ] .debug[(automatically generated title slide)] --- # From Docker to Kubernetes - We are now going to run a demo app made of multiple containers - We will start by running it on one node, with Compose - Then we will deploy that application on a Kubernetes cluster - We will identify performance bottlenecks and scale out that app (and learn Kubernetes in the process) .debug[[three.yml](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/three.yml)] --- ## Our new environment - Since a 1-node cluster isn't fun, we will switch to a new environment! - This environment is a 4-node Kubernetes cluster - Also, from now on, demos and labs are identified with these gray boxes .exercise[ - You should run this command: ```bash echo Hello world ``` ] .debug[[three.yml](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/three.yml)] --- class: in-person ## Connecting to our lab environment .exercise[ - Log into the first VM (`node1`) with your SSH client - Check that you can SSH (without password) to `node2`: ```bash ssh node2 ``` - Type `exit` or `^D` to come back to `node1` ] If anything goes wrong, ask for help! .debug[[shared/connecting.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/connecting.md)] --- ## Doing or re-doing the workshop on your own? - Use something like [Play-With-Docker](http://play-with-docker.com/) or [Play-With-Kubernetes](https://training.play-with-kubernetes.com/) Zero setup effort; but environments are short-lived and might have limited resources - Create your own cluster (local or cloud VMs) Small setup effort; small cost; flexible environments - Create a bunch of clusters for you and your friends ([instructions](https://github.com/jpetazzo/container.training/tree/master/prepare-vms)) Bigger setup effort; ideal for group training .debug[[shared/connecting.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/connecting.md)] --- class: self-paced ## Get your own Docker nodes - If you already have some Docker nodes: great! - If not: let's get some, thanks to Play-With-Docker .exercise[ - Go to http://www.play-with-docker.com/ - Log in - Create your first node ] You will need a Docker ID to use Play-With-Docker. (Creating a Docker ID is free.) .debug[[shared/connecting.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/connecting.md)] --- ## We will (mostly) interact with node1 only *These remarks apply only when using multiple nodes, of course.* - Unless instructed, **all commands must be run from the first VM, `node1`** - We will only check out/copy the code on `node1` - During normal operations, we do not need access to the other nodes - If we had to troubleshoot issues, we would use a combination of: - SSH (to access system logs, daemon status...)
- Docker API (to check running containers and container engine status) .debug[[shared/connecting.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/connecting.md)] --- ## Terminals Once in a while, the instructions will say: "Open a new terminal." There are multiple ways to do this: - create a new window or tab on your machine, and SSH into the VM; - use screen or tmux on the VM and open a new window from there. You are welcome to use the method that you feel the most comfortable with. .debug[[shared/connecting.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/connecting.md)] --- ## Tmux cheatsheet [Tmux](https://en.wikipedia.org/wiki/Tmux) is a terminal multiplexer like `screen`. *You don't have to use it or even know about it to follow along. But some of us like to use it to switch between terminals. It has been preinstalled on your workshop nodes.* - Ctrl-b c → creates a new window - Ctrl-b n → go to next window - Ctrl-b p → go to previous window - Ctrl-b " → split window top/bottom - Ctrl-b % → split window left/right - Ctrl-b Alt-1 → rearrange windows in columns - Ctrl-b Alt-2 → rearrange windows in rows - Ctrl-b arrows → navigate to other windows - Ctrl-b d → detach session - tmux attach → reattach to session .debug[[shared/connecting.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/connecting.md)] --- ## Versions installed - Kubernetes 1.15.0 - Docker Engine 18.09.7 - Docker Compose 1.24.1 .exercise[ - Check all installed versions: ```bash kubectl version docker version docker-compose -v ``` ] .debug[[k8s/versions-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/versions-k8s.md)] --- class: extra-details ## Kubernetes and Docker compatibility - Kubernetes 1.15 validates Docker Engine versions [up to 18.09](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.15.md#dependencies) (the latest version when Kubernetes 1.14 was released) - Kubernetes 1.13 only validates Docker Engine versions [up to 18.06](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md#external-dependencies) - Is it a problem if I use Kubernetes with a "too recent" Docker Engine? -- class: extra-details - No! - "Validates" = continuous integration builds with very extensive (and expensive) testing - The Docker API is versioned, and offers strong backward-compatibility (If a client uses e.g. API v1.25, the Docker Engine will keep behaving the same way) .debug[[k8s/versions-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/versions-k8s.md)] --- class: pic .interstitial[] --- name: toc-our-sample-application class: title Our sample application .nav[ [Previous section](#toc-from-docker-to-kubernetes) | [Back to table of contents](#toc-chapter-7) | [Next section](#toc-kubernetes-concepts) ] .debug[(automatically generated title slide)] --- # Our sample application - We will clone the GitHub repository onto our `node1` - The repository also contains scripts and tools that we will use through the workshop .exercise[ - Clone the repository on `node1`: ```bash git clone https://github.com/jpetazzo/container.training ``` ] (You can also fork the repository on GitHub and clone your fork if you prefer that.)
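If you prefer the fork route, only the clone URL changes. A minimal sketch, with `YOUR-GITHUB-USERNAME` as a placeholder for your own account name:

```bash
# Hypothetical example: clone your own fork instead of the upstream repository
git clone https://github.com/YOUR-GITHUB-USERNAME/container.training
```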
.debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- ## Downloading and running the application Let's start this before we look around, as downloading will take a little time... .exercise[ - Go to the `dockercoins` directory, in the cloned repo: ```bash cd ~/container.training/dockercoins ``` - Use Compose to build and run all containers: ```bash docker-compose up ``` ] Compose tells Docker to build all container images (pulling the corresponding base images), then starts all containers, and displays aggregated logs. .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- ## What's this application? -- - It is a DockerCoin miner! .emoji[💰🐳📦🚢] -- - No, you can't buy coffee with DockerCoins -- - How DockerCoins works: - generate a few random bytes - hash these bytes - increment a counter (to keep track of speed) - repeat forever! -- - DockerCoins is *not* a cryptocurrency (the only common points are "randomness," "hashing," and "coins" in the name) .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- ## DockerCoins in the microservices era - DockerCoins is made of 5 services: - `rng` = web service generating random bytes - `hasher` = web service computing hash of POSTed data - `worker` = background process calling `rng` and `hasher` - `webui` = web interface to watch progress - `redis` = data store (holds a counter updated by `worker`) - These 5 services are visible in the application's Compose file, [docker-compose.yml]( https://github.com/jpetazzo/container.training/blob/master/dockercoins/docker-compose.yml) .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- ## How DockerCoins works - `worker` invokes web service `rng` to generate random bytes - `worker` invokes web service `hasher` to hash these bytes - `worker` does this in an infinite loop - every second, `worker` updates `redis` to indicate how many loops were done - `webui` queries `redis`, and computes and exposes "hashing speed" in our browser *(See diagram on next slide!)* .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- class: pic  .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- ## Service discovery in container-land How does each service find out the address of the other ones?
-- - We do not hard-code IP addresses in the code - We do not hard-code FQDNs in the code, either - We just connect to a service name, and container-magic does the rest (And by container-magic, we mean "a crafty, dynamic, embedded DNS server") .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- ## Example in `worker/worker.py` ```python redis = Redis("`redis`") def get_random_bytes(): r = requests.get("http://`rng`/32") return r.content def hash_bytes(data): r = requests.post("http://`hasher`/", data=data, headers={"Content-Type": "application/octet-stream"}) ``` (Full source code available [here]( https://github.com/jpetazzo/container.training/blob/8279a3bce9398f7c1a53bdd95187c53eda4e6435/dockercoins/worker/worker.py#L17 )) .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- class: extra-details ## Links, naming, and service discovery - Containers can have network aliases (resolvable through DNS) - Compose file version 2+ makes each container reachable through its service name - Compose file version 1 required "links" sections to accomplish this - Network aliases are automatically namespaced - you can have multiple apps declaring and using a service named `database` - containers in the blue app will resolve `database` to the IP of the blue database - containers in the green app will resolve `database` to the IP of the green database .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- ## Show me the code! - You can check the GitHub repository with all the materials of this workshop: https://github.com/jpetazzo/container.training - The application is in the [dockercoins]( https://github.com/jpetazzo/container.training/tree/master/dockercoins) subdirectory - The Compose file ([docker-compose.yml]( https://github.com/jpetazzo/container.training/blob/master/dockercoins/docker-compose.yml)) lists all 5 services - `redis` is using an official image from the Docker Hub - `hasher`, `rng`, `worker`, `webui` are each built from a Dockerfile - Each service's Dockerfile and source code is in its own directory (`hasher` is in the [hasher](https://github.com/jpetazzo/container.training/blob/master/dockercoins/hasher/) directory, `rng` is in the [rng](https://github.com/jpetazzo/container.training/blob/master/dockercoins/rng/) directory, etc.) 
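To see that layout for yourself, here is a small sketch (it assumes the repository was cloned to `~/container.training` as in the earlier exercise):

```bash
# Sketch: list the Dockerfile of each service that has one
cd ~/container.training/dockercoins
ls */Dockerfile
```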
.debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- class: extra-details ## Compose file format version *This is relevant only if you have used Compose before 2016...* - Compose 1.6 introduced support for a new Compose file format (aka "v2") - Services are no longer at the top level, but under a `services` section - There has to be a `version` key at the top level, with value `"2"` (as a string, not an integer) - Containers are placed on a dedicated network, making links unnecessary - There are other minor differences, but upgrade is easy and straightforward .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- ## Our application at work - On the left-hand side, the "rainbow strip" shows the container names - On the right-hand side, we see the output of our containers - We can see the `worker` service making requests to `rng` and `hasher` - For `rng` and `hasher`, we see HTTP access logs .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- ## Connecting to the web UI - "Logs are exciting and fun!" (No-one, ever) - The `webui` container exposes a web dashboard; let's view it .exercise[ - With a web browser, connect to `node1` on port 8000 - Remember: the `nodeX` aliases are valid only on the nodes themselves - In your browser, you need to enter the IP address of your node ] A drawing area should show up, and after a few seconds, a blue graph will appear. .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- class: self-paced, extra-details ## If the graph doesn't load If you just see a `Page not found` error, it might be because your Docker Engine is running on a different machine. This can be the case if: - you are using the Docker Toolbox - you are using a VM (local or remote) created with Docker Machine - you are controlling a remote Docker Engine When you run DockerCoins in development mode, the web UI static files are mapped to the container using a volume. Alas, volumes can only work on a local environment, or when using Docker Desktop for Mac or Windows. How to fix this? Stop the app with `^C`, edit `dockercoins.yml`, comment out the `volumes` section, and try again. .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- class: extra-details ## Why does the speed seem irregular? - It *looks like* the speed is approximately 4 hashes/second - Or more precisely: 4 hashes/second, with regular dips down to zero - Why? -- class: extra-details - The app actually has a constant, steady speed: 3.33 hashes/second (which corresponds to 1 hash every 0.3 seconds, for *reasons*) - Yes, and? .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- class: extra-details ## The reason why this graph is *not awesome* - The worker doesn't update the counter after every loop, but up to once per second - The speed is computed by the browser, checking the counter about once per second - Between two consecutive updates, the counter will increase either by 4, or by 0 - The perceived speed will therefore be 4 - 4 - 4 - 0 - 4 - 4 - 0 etc. - What can we conclude from this? -- class: extra-details - "I'm clearly incapable of writing good frontend code!" 
(Jérôme) .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- ## Stopping the application - If we interrupt Compose (with `^C`), it will politely ask the Docker Engine to stop the app - The Docker Engine will send a `TERM` signal to the containers - If the containers do not exit in a timely manner, the Engine sends a `KILL` signal .exercise[ - Stop the application by hitting `^C` ] -- Some containers exit immediately, others take longer. The containers that do not handle `SIGTERM` end up being killed after a 10s timeout. If we are very impatient, we can hit `^C` a second time! .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- ## Clean up - Before moving on, let's remove those containers .exercise[ - Tell Compose to remove everything: ```bash docker-compose down ``` ] .debug[[shared/composedown.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/composedown.md)] --- class: pic .interstitial[] --- name: toc-kubernetes-concepts class: title Kubernetes concepts .nav[ [Previous section](#toc-our-sample-application) | [Back to table of contents](#toc-chapter-7) | [Next section](#toc-first-contact-with-kubectl) ] .debug[(automatically generated title slide)] --- # Kubernetes concepts - Kubernetes is a container management system - It runs and manages containerized applications on a cluster -- - What does that really mean? .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- ## Basic things we can ask Kubernetes to do -- - Start 5 containers using image `atseashop/api:v1.3` -- - Place an internal load balancer in front of these containers -- - Start 10 containers using image `atseashop/webfront:v1.3` -- - Place a public load balancer in front of these containers -- - It's Black Friday (or Christmas), traffic spikes, grow our cluster and add containers -- - New release! Replace my containers with the new image `atseashop/webfront:v1.4` -- - Keep processing requests during the upgrade; update my containers one at a time .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- ## Other things that Kubernetes can do for us - Basic autoscaling - Blue/green deployment, canary deployment - Long running services, but also batch (one-off) jobs - Overcommit our cluster and *evict* low-priority jobs - Run services with *stateful* data (databases etc.)
- Fine-grained access control defining *what* can be done by *whom* on *which* resources - Integrating third party services (*service catalog*) - Automating complex tasks (*operators*) .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- ## Kubernetes architecture .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- class: pic  .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- ## Kubernetes architecture - Ha ha ha ha - OK, I was trying to scare you, it's much simpler than that ❤️ .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- class: pic  .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- ## Credits - The first schema is a Kubernetes cluster with storage backed by multi-path iSCSI (Courtesy of [Yongbok Kim](https://www.yongbok.net/blog/)) - The second one is a simplified representation of a Kubernetes cluster (Courtesy of [Imesh Gunaratne](https://medium.com/containermind/a-reference-architecture-for-deploying-wso2-middleware-on-kubernetes-d4dee7601e8e)) .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- ## Kubernetes architecture: the nodes - The nodes executing our containers run a collection of services: - a container Engine (typically Docker) - kubelet (the "node agent") - kube-proxy (a necessary but not sufficient network component) - Nodes were formerly called "minions" (You might see that word in older articles or documentation) .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- ## Kubernetes architecture: the control plane - The Kubernetes logic (its "brains") is a collection of services: - the API server (our point of entry to everything!) - core services like the scheduler and controller manager - `etcd` (a highly available key/value store; the "database" of Kubernetes) - Together, these services form the control plane of our cluster - The control plane is also called the "master" .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- class: pic  .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- class: extra-details ## Running the control plane on special nodes - It is common to reserve a dedicated node for the control plane (Except for single-node development clusters, like when using minikube) - This node is then called a "master" (Yes, this is ambiguous: is the "master" a node, or the whole control plane?)
- Normal applications are restricted from running on this node (By using a mechanism called ["taints"](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/)) - When high availability is required, each service of the control plane must be resilient - The control plane is then replicated on multiple nodes (This is sometimes called a "multi-master" setup) .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- class: extra-details ## Running the control plane outside containers - The services of the control plane can run in or out of containers - For instance: since `etcd` is a critical service, some people deploy it directly on a dedicated cluster (without containers) (This is illustrated on the first "super complicated" schema) - In some hosted Kubernetes offerings (e.g. AKS, GKE, EKS), the control plane is invisible (We only "see" a Kubernetes API endpoint) - In that case, there is no "master node" *For this reason, it is more accurate to say "control plane" rather than "master."* .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- class: extra-details ## Do we need to run Docker at all? No! -- - By default, Kubernetes uses the Docker Engine to run containers - We could also use `rkt` ("Rocket") from CoreOS - Or leverage other pluggable runtimes through the *Container Runtime Interface* (like CRI-O, or containerd) .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- class: extra-details ## Do we need to run Docker at all? Yes! -- - In this workshop, we run our app on a single node first - We will need to build images and ship them around - We can do these things without Docker (and get diagnosed with NIH¹ syndrome) - Docker is still the most stable container engine today (but other options are maturing very quickly) .footnote[¹[Not Invented Here](https://en.wikipedia.org/wiki/Not_invented_here)] .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- class: extra-details ## Do we need to run Docker at all? - On our development environments, CI pipelines ...
: *Yes, almost certainly* - On our production servers: *Yes (today)* *Probably not (in the future)* .footnote[More information about CRI [on the Kubernetes blog](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes)] .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- ## Interacting with Kubernetes - We will interact with our Kubernetes cluster through the Kubernetes API - The Kubernetes API is (mostly) RESTful - It allows us to create, read, update, delete *resources* - A few common resource types are: - node (a machine, physical or virtual, in our cluster) - pod (group of containers running together on a node) - service (stable network endpoint to connect to one or multiple containers) .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- class: pic  .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- ## Credits - The first diagram is courtesy of Lucas Käldström, in [this presentation](https://speakerdeck.com/luxas/kubeadm-cluster-creation-internals-from-self-hosting-to-upgradability-and-ha) - it's one of the best Kubernetes architecture diagrams available! - The second diagram is courtesy of Weave Works - a *pod* can have multiple containers working together - IP addresses are associated with *pods*, not with individual containers Both diagrams used with permission. .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- class: pic .interstitial[] --- name: toc-first-contact-with-kubectl class: title First contact with `kubectl` .nav[ [Previous section](#toc-kubernetes-concepts) | [Back to table of contents](#toc-chapter-7) | [Next section](#toc-running-our-first-containers-on-kubernetes) ] .debug[(automatically generated title slide)] --- # First contact with `kubectl` - `kubectl` is (almost) the only tool we'll need to talk to Kubernetes - It is a rich CLI tool around the Kubernetes API (Everything you can do with `kubectl`, you can do directly with the API) - On our machines, there is a `~/.kube/config` file with: - the Kubernetes API address - the path to our TLS certificates used to authenticate - You can also use the `--kubeconfig` flag to pass a config file - Or directly `--server`, `--user`, etc. - `kubectl` can be pronounced "Cube C T L", "Cube cuttle", "Cube cuddle"... .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- ## `kubectl get` - Let's look at our `Node` resources with `kubectl get`! .exercise[ - Look at the composition of our cluster: ```bash kubectl get node ``` - These commands are equivalent: ```bash kubectl get no kubectl get node kubectl get nodes ``` ] .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- ## Obtaining machine-readable output - `kubectl get` can output JSON, YAML, or be directly formatted .exercise[ - Give us more info about the nodes: ```bash kubectl get nodes -o wide ``` - Let's have some YAML: ```bash kubectl get no -o yaml ``` See that `kind: List` at the end? It's the type of our result!
] .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- ## (Ab)using `kubectl` and `jq` - It's super easy to build custom reports .exercise[ - Show the capacity of all our nodes as a stream of JSON objects: ```bash kubectl get nodes -o json | jq ".items[] | {name:.metadata.name} + .status.capacity" ``` ] .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- class: extra-details ## Exploring types and definitions - We can list all available resource types by running `kubectl api-resources` (In Kubernetes 1.10 and prior, this command used to be `kubectl get`) - We can view the definition for a resource type with: ```bash kubectl explain type ``` - We can view the definition of a field in a resource, for instance: ```bash kubectl explain node.spec ``` - Or get the full definition of all fields and sub-fields: ```bash kubectl explain node --recursive ``` .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- class: extra-details ## Introspection vs. documentation - We can access the same information by reading the [API documentation](https://kubernetes.io/docs/reference/#api-reference) - The API documentation is usually easier to read, but: - it won't show custom types (like Custom Resource Definitions) - we need to make sure that we look at the correct version - `kubectl api-resources` and `kubectl explain` perform *introspection* (they communicate with the API server and obtain the exact type definitions) .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- ## Type names - The most common resource names have three forms: - singular (e.g. `node`, `service`, `deployment`) - plural (e.g. `nodes`, `services`, `deployments`) - short (e.g. `no`, `svc`, `deploy`) - Some resources do not have a short name - `Endpoints` only have a plural form (because even a single `Endpoints` resource is actually a list of endpoints) .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- ## Viewing details - We can use `kubectl get -o yaml` to see all available details - However, YAML output is often simultaneously too much and not enough - For instance, `kubectl get node node1 -o yaml` is: - too much information (e.g.: list of images available on this node) - not enough information (e.g.: doesn't show pods running on this node) - difficult to read for a human operator - For a comprehensive overview, we can use `kubectl describe` instead .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- ## `kubectl describe` - `kubectl describe` needs a resource type and (optionally) a resource name - It is possible to provide a resource name *prefix* (all matching objects will be displayed) - `kubectl describe` will retrieve some extra information about the resource .exercise[ - Look at the information available for `node1` with one of the following commands: ```bash kubectl describe node/node1 kubectl describe node node1 ``` ] (We should notice a bunch of control plane pods.) 
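Another middle ground between `-o yaml` and `describe` is to ask `kubectl` for just the fields we care about. A minimal sketch, using standard Node fields (adapt the paths to whatever you want to report on):

```bash
# Sketch: one line per node, with its allocatable CPU and memory
kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory
```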
.debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- ## Services - A *service* is a stable endpoint to connect to "something" (In the initial proposal, they were called "portals") .exercise[ - List the services on our cluster with one of these commands: ```bash kubectl get services kubectl get svc ``` ] -- There is already one service on our cluster: the Kubernetes API itself. .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- ## ClusterIP services - A `ClusterIP` service is internal, available from the cluster only - This is useful for introspection from within containers .exercise[ - Try to connect to the API: ```bash curl -k https://`10.96.0.1` ``` - `-k` is used to skip certificate verification - Make sure to replace 10.96.0.1 with the CLUSTER-IP shown by `kubectl get svc` ] -- The error that we see is expected: the Kubernetes API requires authentication. .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- ## Listing running containers - Containers are manipulated through *pods* - A pod is a group of containers: - running together (on the same node) - sharing resources (RAM, CPU; but also network, volumes) .exercise[ - List pods on our cluster: ```bash kubectl get pods ``` ] -- *Where are the pods that we saw just a moment earlier?!?* .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- ## Namespaces - Namespaces allow us to segregate resources .exercise[ - List the namespaces on our cluster with one of these commands: ```bash kubectl get namespaces kubectl get namespace kubectl get ns ``` ] -- *You know what ... This `kube-system` thing looks suspicious.* *In fact, I'm pretty sure it showed up earlier, when we did:* `kubectl describe node node1` .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- ## Accessing namespaces - By default, `kubectl` uses the `default` namespace - We can see resources in all namespaces with `--all-namespaces` .exercise[ - List the pods in all namespaces: ```bash kubectl get pods --all-namespaces ``` - Since Kubernetes 1.14, we can also use `-A` as a shorter version: ```bash kubectl get pods -A ``` ] *Here are our system pods!* .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- ## What are all these control plane pods? 
- `etcd` is our etcd server - `kube-apiserver` is the API server - `kube-controller-manager` and `kube-scheduler` are other control plane components - `coredns` provides DNS-based service discovery ([replacing kube-dns as of 1.11](https://kubernetes.io/blog/2018/07/10/coredns-ga-for-kubernetes-cluster-dns/)) - `kube-proxy` is the (per-node) component managing port mappings and such - `weave` is the (per-node) component managing the network overlay - the `READY` column indicates the number of containers in each pod (1 for most pods, but `weave` has 2, for instance) .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- ## Scoping another namespace - We can also look at a different namespace (other than `default`) .exercise[ - List only the pods in the `kube-system` namespace: ```bash kubectl get pods --namespace=kube-system kubectl get pods -n kube-system ``` ] .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- ## Namespaces and other `kubectl` commands - We can use `-n`/`--namespace` with almost every `kubectl` command - Example: - `kubectl create --namespace=X` to create something in namespace X - We can use `-A`/`--all-namespaces` with most commands that manipulate multiple objects - Examples: - `kubectl delete` can delete resources across multiple namespaces - `kubectl label` can add/remove/update labels across multiple namespaces .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- class: extra-details ## What about `kube-public`? .exercise[ - List the pods in the `kube-public` namespace: ```bash kubectl -n kube-public get pods ``` ] Nothing! `kube-public` is created by kubeadm & [used for security bootstrapping](https://kubernetes.io/blog/2017/01/stronger-foundation-for-creating-and-managing-kubernetes-clusters). .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- class: extra-details ## Exploring `kube-public` - The only interesting object in `kube-public` is a ConfigMap named `cluster-info` .exercise[ - List ConfigMap objects: ```bash kubectl -n kube-public get configmaps ``` - Inspect `cluster-info`: ```bash kubectl -n kube-public get configmap cluster-info -o yaml ``` ] Note the `selfLink` URI: `/api/v1/namespaces/kube-public/configmaps/cluster-info` We can use that! 
.debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- class: extra-details ## Accessing `cluster-info` - Earlier, when trying to access the API server, we got a `Forbidden` message - But `cluster-info` is readable by everyone (even without authentication) .exercise[ - Retrieve `cluster-info`: ```bash curl -k https://10.96.0.1/api/v1/namespaces/kube-public/configmaps/cluster-info ``` ] - We were able to access `cluster-info` (without auth) - It contains a `kubeconfig` file .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- class: extra-details ## Retrieving `kubeconfig` - We can easily extract the `kubeconfig` file from this ConfigMap .exercise[ - Display the content of `kubeconfig`: ```bash curl -sk https://10.96.0.1/api/v1/namespaces/kube-public/configmaps/cluster-info \ | jq -r .data.kubeconfig ``` ] - This file holds the canonical address of the API server, and the public key of the CA - This file *does not* hold client keys or tokens - This is not sensitive information, but allows us to establish trust .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- class: extra-details ## What about `kube-node-lease`? - Starting with Kubernetes 1.14, there is a `kube-node-lease` namespace (or in Kubernetes 1.13 if the NodeLease feature gate is enabled) - That namespace contains one Lease object per node - *Node leases* are a new way to implement node heartbeats (i.e. node regularly pinging the control plane to say "I'm alive!") - For more details, see [KEP-0009] or the [node controller documentation] [KEP-0009]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/0009-node-heartbeat.md [node controller documentation]: https://kubernetes.io/docs/concepts/architecture/nodes/#node-controller .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- class: pic .interstitial[] --- name: toc-running-our-first-containers-on-kubernetes class: title Running our first containers on Kubernetes .nav[ [Previous section](#toc-first-contact-with-kubectl) | [Back to table of contents](#toc-chapter-7) | [Next section](#toc-accessing-logs-from-the-cli) ] .debug[(automatically generated title slide)] --- # Running our first containers on Kubernetes - First things first: we cannot run a container -- - We are going to run a pod, and in that pod there will be a single container -- - In that container in the pod, we are going to run a simple `ping` command - Then we are going to start additional copies of the pod .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- ## Starting a simple pod with `kubectl run` - We need to specify at least a *name* and the image we want to use .exercise[ - Let's ping `1.1.1.1`, Cloudflare's [public DNS resolver](https://blog.cloudflare.com/announcing-1111/): ```bash kubectl run pingpong --image alpine ping 1.1.1.1 ``` ] -- (Starting with Kubernetes 1.12, we get a message telling us that `kubectl run` is deprecated. Let's ignore it for now.) 
.debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- ## Behind the scenes of `kubectl run` - Let's look at the resources that were created by `kubectl run` .exercise[ - List most resource types: ```bash kubectl get all ``` ] -- We should see the following things: - `deployment.apps/pingpong` (the *deployment* that we just created) - `replicaset.apps/pingpong-xxxxxxxxxx` (a *replica set* created by the deployment) - `pod/pingpong-xxxxxxxxxx-yyyyy` (a *pod* created by the replica set) Note: as of 1.10.1, resource types are displayed in more detail. .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- ## What are these different things? - A *deployment* is a high-level construct - allows scaling, rolling updates, rollbacks - multiple deployments can be used together to implement a [canary deployment](https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#canary-deployments) - delegates pods management to *replica sets* - A *replica set* is a low-level construct - makes sure that a given number of identical pods are running - allows scaling - rarely used directly - A *replication controller* is the (deprecated) predecessor of a replica set .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- ## Our `pingpong` deployment - `kubectl run` created a *deployment*, `deployment.apps/pingpong` ``` NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deployment.apps/pingpong 1 1 1 1 10m ``` - That deployment created a *replica set*, `replicaset.apps/pingpong-xxxxxxxxxx` ``` NAME DESIRED CURRENT READY AGE replicaset.apps/pingpong-7c8bbcd9bc 1 1 1 10m ``` - That replica set created a *pod*, `pod/pingpong-xxxxxxxxxx-yyyyy` ``` NAME READY STATUS RESTARTS AGE pod/pingpong-7c8bbcd9bc-6c9qz 1/1 Running 0 10m ``` - We'll see later how these folks play together for: - scaling, high availability, rolling updates .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- ## Viewing container output - Let's use the `kubectl logs` command - We will pass either a *pod name*, or a *type/name* (E.g. if we specify a deployment or replica set, it will get the first pod in it) - Unless specified otherwise, it will only show logs of the first container in the pod (Good thing there's only one in ours!) 
.exercise[ - View the result of our `ping` command: ```bash kubectl logs deploy/pingpong ``` ] .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- ## Streaming logs in real time - Just like `docker logs`, `kubectl logs` supports convenient options: - `-f`/`--follow` to stream logs in real time (à la `tail -f`) - `--tail` to indicate how many lines you want to see (from the end) - `--since` to get logs only after a given timestamp .exercise[ - View the latest logs of our `ping` command: ```bash kubectl logs deploy/pingpong --tail 1 --follow ``` ] .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- ## Scaling our application - We can create additional copies of our container (I mean, our pod) with `kubectl scale` .exercise[ - Scale our `pingpong` deployment: ```bash kubectl scale deploy/pingpong --replicas 3 ``` - Note that this command does exactly the same thing: ```bash kubectl scale deployment pingpong --replicas 3 ``` ] Note: what if we tried to scale `replicaset.apps/pingpong-xxxxxxxxxx`? We could! But the *deployment* would notice it right away, and scale back to the initial level. .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- ## Resilience - The *deployment* `pingpong` watches its *replica set* - The *replica set* ensures that the right number of *pods* are running - What happens if pods disappear? .exercise[ - In a separate window, list pods, and keep watching them: ```bash kubectl get pods -w ``` - Destroy a pod: ``` kubectl delete pod pingpong-xxxxxxxxxx-yyyyy ``` ] .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- ## What if we wanted something different? - What if we wanted to start a "one-shot" container that *doesn't* get restarted? - We could use `kubectl run --restart=OnFailure` or `kubectl run --restart=Never` - These commands would create *jobs* or *pods* instead of *deployments* - Under the hood, `kubectl run` invokes "generators" to create resource descriptions - We could also write these resource descriptions ourselves (typically in YAML), and create them on the cluster with `kubectl apply -f` (discussed later) - With `kubectl run --schedule=...`, we can also create *cronjobs* .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- ## What about that deprecation warning?
- As we can see from the previous slide, `kubectl run` can do many things - The exact type of resource created is not obvious - To make things more explicit, it is better to use `kubectl create`: - `kubectl create deployment` to create a deployment - `kubectl create job` to create a job - `kubectl create cronjob` to run a job periodically (since Kubernetes 1.14) - Eventually, `kubectl run` will be used only to start one-shot pods (see https://github.com/kubernetes/kubernetes/pull/68132) .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- ## Various ways of creating resources - `kubectl run` - easy way to get started - versatile - `kubectl create <resource>` - explicit, but lacks some features - can't create a CronJob before Kubernetes 1.14 - can't pass command-line arguments to deployments - `kubectl create -f foo.yaml` or `kubectl apply -f foo.yaml` - all features are available - requires writing YAML .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- ## Viewing logs of multiple pods - When we specify a deployment name, only a single pod's logs are shown - We can view the logs of multiple pods by specifying a *selector* - A selector is a logic expression using *labels* - Conveniently, when you `kubectl run somename`, the associated objects have a `run=somename` label .exercise[ - View the last line of log from all pods with the `run=pingpong` label: ```bash kubectl logs -l run=pingpong --tail 1 ``` ] .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- ### Streaming logs of multiple pods - Can we stream the logs of all our `pingpong` pods? .exercise[ - Combine `-l` and `-f` flags: ```bash kubectl logs -l run=pingpong --tail 1 -f ``` ] *Note: combining `-l` and `-f` is only possible since Kubernetes 1.14!* *Let's try to understand why ...* .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- class: extra-details ### Streaming logs of many pods - Let's see what happens if we try to stream the logs for more than 5 pods .exercise[ - Scale up our deployment: ```bash kubectl scale deployment pingpong --replicas=8 ``` - Stream the logs: ```bash kubectl logs -l run=pingpong --tail 1 -f ``` ] We see a message like the following one: ``` error: you are attempting to follow 8 log streams, but maximum allowed concurency is 5, use --max-log-requests to increase the limit ``` .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- class: extra-details ## Why can't we stream the logs of many pods?
- `kubectl` opens one connection to the API server per pod - For each pod, the API server opens one extra connection to the corresponding kubelet - If there are 1000 pods in our deployment, that's 1000 inbound + 1000 outbound connections on the API server - This could easily put a lot of stress on the API server - Prior to Kubernetes 1.14, it was decided to *not* allow multiple connections - From Kubernetes 1.14, it is allowed, but limited to 5 connections (this can be changed with `--max-log-requests`) - For more details about the rationale, see [PR #67573](https://github.com/kubernetes/kubernetes/pull/67573) .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- ## Shortcomings of `kubectl logs` - We don't see which pod sent which log line - If pods are restarted / replaced, the log stream stops - If new pods are added, we don't see their logs - To stream the logs of multiple pods, we need to write a selector - There are external tools to address these shortcomings (e.g.: [Stern](https://github.com/wercker/stern)) .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- class: extra-details ## `kubectl logs -l ... --tail N` - If we run this with Kubernetes 1.12, the last command shows multiple lines - This is a regression when `--tail` is used together with `-l`/`--selector` - It always shows the last 10 lines of output for each container (instead of the number of lines specified on the command line) - The problem was fixed in Kubernetes 1.13 *See [#70554](https://github.com/kubernetes/kubernetes/issues/70554) for details.* .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- ## Aren't we flooding 1.1.1.1? - If you're wondering this, good question! - Don't worry, though: *APNIC's research group held the IP addresses 1.1.1.1 and 1.0.0.1. While the addresses were valid, so many people had entered them into various random systems that they were continuously overwhelmed by a flood of garbage traffic. APNIC wanted to study this garbage traffic but any time they'd tried to announce the IPs, the flood would overwhelm any conventional network.* (Source: https://blog.cloudflare.com/announcing-1111/) - It's very unlikely that our concerted pings manage to produce even a modest blip at Cloudflare's NOC!
.debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- class: pic .interstitial[] --- name: toc-accessing-logs-from-the-cli class: title Accessing logs from the CLI .nav[ [Previous section](#toc-running-our-first-containers-on-kubernetes) | [Back to table of contents](#toc-chapter-8) | [Next section](#toc-declarative-vs-imperative) ] .debug[(automatically generated title slide)] --- # Accessing logs from the CLI - The `kubectl logs` command has limitations: - it cannot stream logs from multiple pods at a time - when showing logs from multiple pods, it mixes them all together - We are going to see how to do it better .debug[[k8s/logs-cli.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/logs-cli.md)] --- ## Doing it manually - We *could* (if we were so inclined) write a program or script that would: - take a selector as an argument - enumerate all pods matching that selector (with `kubectl get -l ...`) - fork one `kubectl logs --follow ...` command per container - annotate the logs (the output of each `kubectl logs ...` process) with their origin - preserve ordering by using `kubectl logs --timestamps ...` and merge the output -- - We *could* do it, but thankfully, others did it for us already! .debug[[k8s/logs-cli.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/logs-cli.md)] --- ## Stern [Stern](https://github.com/wercker/stern) is an open source project by [Wercker](http://www.wercker.com/). From the README: *Stern allows you to tail multiple pods on Kubernetes and multiple containers within the pod. Each result is color coded for quicker debugging.* *The query is a regular expression so the pod name can easily be filtered and you don't need to specify the exact id (for instance omitting the deployment id). If a pod is deleted it gets removed from tail and if a new pod is added it automatically gets tailed.* Exactly what we need! .debug[[k8s/logs-cli.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/logs-cli.md)] --- ## Installing Stern - Run `stern` (without arguments) to check if it's installed: ``` $ stern Tail multiple pods and containers from Kubernetes Usage: stern pod-query [flags] ``` - If it is not installed, the easiest method is to download a [binary release](https://github.com/wercker/stern/releases) - The following commands will install Stern on a Linux Intel 64 bit machine: ```bash sudo curl -L -o /usr/local/bin/stern \ https://github.com/wercker/stern/releases/download/1.11.0/stern_linux_amd64 sudo chmod +x /usr/local/bin/stern ``` .debug[[k8s/logs-cli.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/logs-cli.md)] --- ## Using Stern - There are two ways to specify the pods whose logs we want to see: - `-l` followed by a selector expression (like with many `kubectl` commands) - with a "pod query," i.e. 
a regex used to match pod names - These two ways can be combined if necessary .exercise[ - View the logs for all the rng containers: ```bash stern rng ``` ] .debug[[k8s/logs-cli.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/logs-cli.md)] --- ## Stern convenient options - The `--tail N` flag shows the last `N` lines for each container (Instead of showing the logs since the creation of the container) - The `-t` / `--timestamps` flag shows timestamps - The `--all-namespaces` flag is self-explanatory .exercise[ - View what's up with the `weave` system containers: ```bash stern --tail 1 --timestamps --all-namespaces weave ``` ] .debug[[k8s/logs-cli.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/logs-cli.md)] --- ## Using Stern with a selector - When specifying a selector, we can omit the value for a label - This will match all objects having that label (regardless of the value) - Everything created with `kubectl run` has a label `run` - We can use that property to view the logs of all the pods created with `kubectl run` - Similarly, everything created with `kubectl create deployment` has a label `app` .exercise[ - View the logs for all the things started with `kubectl create deployment`: ```bash stern -l app ``` ] .debug[[k8s/logs-cli.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/logs-cli.md)] --- class: pic .interstitial[] --- name: toc-declarative-vs-imperative class: title Declarative vs imperative .nav[ [Previous section](#toc-accessing-logs-from-the-cli) | [Back to table of contents](#toc-chapter-8) | [Next section](#toc-kubernetes-network-model) ] .debug[(automatically generated title slide)] --- # Declarative vs imperative - Our container orchestrator puts a very strong emphasis on being *declarative* - Declarative: *I would like a cup of tea.* - Imperative: *Boil some water. Pour it in a teapot. Add tea leaves. Steep for a while. Serve in a cup.* -- - Declarative seems simpler at first ... -- - ... As long as you know how to brew tea .debug[[shared/declarative.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/declarative.md)] --- ## Declarative vs imperative - What declarative would really be: *I want a cup of tea, obtained by pouring an infusion¹ of tea leaves in a cup.* -- *¹An infusion is obtained by letting the object steep a few minutes in hot² water.* -- *²Hot liquid is obtained by pouring it in an appropriate container³ and setting it on a stove.* -- *³Ah, finally, containers! Something we know about. Let's get to work, shall we?* -- .footnote[Did you know there was an [ISO standard](https://en.wikipedia.org/wiki/ISO_3103) specifying how to brew tea?] .debug[[shared/declarative.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/declarative.md)] --- ## Declarative vs imperative - Imperative systems: - simpler - if a task is interrupted, we have to restart from scratch - Declarative systems: - if a task is interrupted (or if we show up to the party half-way through), we can figure out what's missing and do only what's necessary - we need to be able to *observe* the system - ...
and compute a "diff" between *what we have* and *what we want* .debug[[shared/declarative.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/declarative.md)] --- ## Declarative vs imperative in Kubernetes - With Kubernetes, we cannot say: "run this container" - All we can do is write a *spec* and push it to the API server (by creating a resource like e.g. a Pod or a Deployment) - The API server will validate that spec (and reject it if it's invalid) - Then it will store it in etcd - A *controller* will "notice" that spec and act upon it .debug[[k8s/declarative.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/declarative.md)] --- ## Reconciling state - Watch for the `spec` fields in the YAML files later! - The *spec* describes *how we want the thing to be* - Kubernetes will *reconcile* the current state with the spec (technically, this is done by a number of *controllers*) - When we want to change some resource, we update the *spec* - Kubernetes will then *converge* that resource .debug[[k8s/declarative.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/declarative.md)] --- ## 19,000 words They say, "a picture is worth one thousand words." The following 19 slides show what really happens when we run: ```bash kubectl run web --image=nginx --replicas=3 ``` .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  
.debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic .interstitial[] --- name: toc-kubernetes-network-model class: title Kubernetes network model .nav[ [Previous section](#toc-declarative-vs-imperative) | [Back to table of contents](#toc-chapter-8) | [Next section](#toc-exposing-containers) ] .debug[(automatically generated title slide)] --- # Kubernetes network model - TL,DR: *Our cluster (nodes and pods) is one big flat IP network.* -- - In detail: - all nodes must be able to reach each other, without NAT - all pods must be able to reach each other, without NAT - pods and nodes must be able to reach each other, without NAT - each pod is aware of its IP address (no NAT) - pod IP addresses are assigned by the network implementation - Kubernetes doesn't mandate any particular implementation .debug[[k8s/kubenet.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubenet.md)] --- ## Kubernetes network model: the good - Everything can reach everything - No address translation - No port translation - No new protocol - The network implementation can decide how to allocate addresses - IP addresses don't have to be "portable" from a node to another (We can use e.g. 
a subnet per node and use a simple routed topology) - The specification is simple enough to allow many various implementations .debug[[k8s/kubenet.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubenet.md)] --- ## Kubernetes network model: the less good - Everything can reach everything - if you want security, you need to add network policies - the network implementation that you use needs to support them - There are literally dozens of implementations out there (15 are listed in the Kubernetes documentation) - Pods have level 3 (IP) connectivity, but *services* are level 4 (TCP or UDP) (Services map to a single UDP or TCP port; no port ranges or arbitrary IP packets) - `kube-proxy` is on the data path when connecting to a pod or container, and it's not particularly fast (relies on userland proxying or iptables) .debug[[k8s/kubenet.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubenet.md)] --- ## Kubernetes network model: in practice - The nodes that we are using have been set up to use [Weave](https://github.com/weaveworks/weave) - We don't endorse Weave in a particular way, it just Works For Us - Don't worry about the warning about `kube-proxy` performance - Unless you: - routinely saturate 10G network interfaces - count packet rates in millions per second - run high-traffic VOIP or gaming platforms - do weird things that involve millions of simultaneous connections (in which case you're already familiar with kernel tuning) - If necessary, there are alternatives to `kube-proxy`; e.g. [`kube-router`](https://www.kube-router.io) .debug[[k8s/kubenet.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubenet.md)] --- class: extra-details ## The Container Network Interface (CNI) - Most Kubernetes clusters use CNI "plugins" to implement networking - When a pod is created, Kubernetes delegates the network setup to these plugins (it can be a single plugin, or a combination of plugins, each doing one task) - Typically, CNI plugins will: - allocate an IP address (by calling an IPAM plugin) - add a network interface into the pod's network namespace - configure the interface as well as required routes etc. .debug[[k8s/kubenet.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubenet.md)] --- class: extra-details ## Multiple moving parts - The "pod-to-pod network" or "pod network": - provides communication between pods and nodes - is generally implemented with CNI plugins - The "pod-to-service network": - provides internal communication and load balancing - is generally implemented with kube-proxy (or e.g. kube-router) - Network policies: - provide firewalling and isolation - can be bundled with the "pod network" or provided by another component .debug[[k8s/kubenet.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubenet.md)] --- class: extra-details ## Even more moving parts - Inbound traffic can be handled by multiple components: - something like kube-proxy or kube-router (for NodePort services) - load balancers (ideally, connected to the pod network) - It is possible to use multiple pod networks in parallel (with "meta-plugins" like CNI-Genie or Multus) - Some solutions can fill multiple roles (e.g. 
kube-router can be set up to provide the pod network and/or network policies and/or replace kube-proxy) .debug[[k8s/kubenet.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubenet.md)] --- class: pic .interstitial[] --- name: toc-exposing-containers class: title Exposing containers .nav[ [Previous section](#toc-kubernetes-network-model) | [Back to table of contents](#toc-chapter-8) | [Next section](#toc-shipping-images-with-a-registry) ] .debug[(automatically generated title slide)] --- # Exposing containers - `kubectl expose` creates a *service* for existing pods - A *service* is a stable address for a pod (or a bunch of pods) - If we want to connect to our pod(s), we need to create a *service* - Once a service is created, CoreDNS will allow us to resolve it by name (i.e. after creating service `hello`, the name `hello` will resolve to something) - There are different types of services, detailed on the following slides: `ClusterIP`, `NodePort`, `LoadBalancer`, `ExternalName` .debug[[k8s/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlexpose.md)] --- ## Basic service types - `ClusterIP` (default type) - a virtual IP address is allocated for the service (in an internal, private range) - this IP address is reachable only from within the cluster (nodes and pods) - our code can connect to the service using the original port number - `NodePort` - a port is allocated for the service (by default, in the 30000-32768 range) - that port is made available *on all our nodes* and anybody can connect to it - our code must be changed to connect to that new port number These service types are always available. Under the hood: `kube-proxy` is using a userland proxy and a bunch of `iptables` rules. .debug[[k8s/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlexpose.md)] --- ## More service types - `LoadBalancer` - an external load balancer is allocated for the service - the load balancer is configured accordingly (e.g.: a `NodePort` service is created, and the load balancer sends traffic to that port) - available only when the underlying infrastructure provides some "load balancer as a service" (e.g. AWS, Azure, GCE, OpenStack...) - `ExternalName` - the DNS entry managed by CoreDNS will just be a `CNAME` to a provided record - no port, no IP address, no nothing else is allocated .debug[[k8s/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlexpose.md)] --- ## Running containers with open ports - Since `ping` doesn't have anything to connect to, we'll have to run something else - We could use the `nginx` official image, but ... ... we wouldn't be able to tell the backends from each other! - We are going to use `jpetazzo/httpenv`, a tiny HTTP server written in Go - `jpetazzo/httpenv` listens on port 8888 - It serves its environment variables in JSON format - The environment variables will include `HOSTNAME`, which will be the pod name (and therefore, will be different on each backend) .debug[[k8s/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlexpose.md)] --- ## Creating a deployment for our HTTP server - We *could* do `kubectl run httpenv --image=jpetazzo/httpenv` ... 
- But since `kubectl run` is being deprecated, let's see how to use `kubectl create` instead .exercise[ - In another window, watch the pods (to see when they are created): ```bash kubectl get pods -w ``` - Create a deployment for this very lightweight HTTP server: ```bash kubectl create deployment httpenv --image=jpetazzo/httpenv ``` - Scale it to 10 replicas: ```bash kubectl scale deployment httpenv --replicas=10 ``` ] .debug[[k8s/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlexpose.md)] --- ## Exposing our deployment - We'll create a default `ClusterIP` service .exercise[ - Expose the HTTP port of our server: ```bash kubectl expose deployment httpenv --port 8888 ``` - Look up which IP address was allocated: ```bash kubectl get service ``` ] .debug[[k8s/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlexpose.md)] --- ## Services are layer 4 constructs - You can assign IP addresses to services, but they are still *layer 4* (i.e. a service is not an IP address; it's an IP address + protocol + port) - This is caused by the current implementation of `kube-proxy` (it relies on mechanisms that don't support layer 3) - As a result: you *have to* indicate the port number for your service - Running services with arbitrary port (or port ranges) requires hacks (e.g. host networking mode) .debug[[k8s/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlexpose.md)] --- ## Testing our service - We will now send a few HTTP requests to our pods .exercise[ - Let's obtain the IP address that was allocated for our service, *programmatically:* ```bash IP=$(kubectl get svc httpenv -o go-template --template '{{ .spec.clusterIP }}') ``` - Send a few requests: ```bash curl http://$IP:8888/ ``` - Too much output? Filter it with `jq`: ```bash curl -s http://$IP:8888/ | jq .HOSTNAME ``` ] -- Try it a few times! Our requests are load balanced across multiple pods. .debug[[k8s/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlexpose.md)] --- class: extra-details ## If we don't need a load balancer - Sometimes, we want to access our scaled services directly: - if we want to save a tiny little bit of latency (typically less than 1ms) - if we need to connect over arbitrary ports (instead of a few fixed ones) - if we need to communicate over another protocol than UDP or TCP - if we want to decide how to balance the requests client-side - ... 
- In that case, we can use a "headless service" .debug[[k8s/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlexpose.md)] --- class: extra-details ## Headless services - A headless service is obtained by setting the `clusterIP` field to `None` (Either with `--cluster-ip=None`, or by providing a custom YAML) - As a result, the service doesn't have a virtual IP address - Since there is no virtual IP address, there is no load balancer either - CoreDNS will return the pods' IP addresses as multiple `A` records - This gives us an easy way to discover all the replicas for a deployment .debug[[k8s/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlexpose.md)] --- class: extra-details ## Services and endpoints - A service has a number of "endpoints" - Each endpoint is a host + port where the service is available - The endpoints are maintained and updated automatically by Kubernetes .exercise[ - Check the endpoints that Kubernetes has associated with our `httpenv` service: ```bash kubectl describe service httpenv ``` ] In the output, there will be a line starting with `Endpoints:`. That line will list a bunch of addresses in `host:port` format. .debug[[k8s/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlexpose.md)] --- class: extra-details ## Viewing endpoint details - When we have many endpoints, our display commands truncate the list ```bash kubectl get endpoints ``` - If we want to see the full list, we can use one of the following commands: ```bash kubectl describe endpoints httpenv kubectl get endpoints httpenv -o yaml ``` - These commands will show us a list of IP addresses - These IP addresses should match the addresses of the corresponding pods: ```bash kubectl get pods -l app=httpenv -o wide ``` .debug[[k8s/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlexpose.md)] --- class: extra-details ## `endpoints` not `endpoint` - `endpoints` is the only resource that cannot be singular ```bash $ kubectl get endpoint error: the server doesn't have a resource type "endpoint" ``` - This is because the type itself is plural (unlike every other resource) - There is no `endpoint` object: `type Endpoints struct` - The type doesn't represent a single endpoint, but a list of endpoints .debug[[k8s/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlexpose.md)] --- ## Exposing services to the outside world - The default type (ClusterIP) only works for internal traffic - If we want to accept external traffic, we can use one of these: - NodePort (expose a service on a TCP port between 30000-32768) - LoadBalancer (provision a cloud load balancer for our service) - ExternalIP (use one node's external IP address) - Ingress (a special mechanism for HTTP services) *We'll see NodePorts and Ingresses more in detail later.* .debug[[k8s/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlexpose.md)] --- class: pic .interstitial[] --- name: toc-shipping-images-with-a-registry class: title Shipping images with a registry .nav[ [Previous section](#toc-exposing-containers) | [Back to table of contents](#toc-chapter-8) | [Next section](#toc-running-our-application-on-kubernetes) ] .debug[(automatically generated title slide)] --- # Shipping images with a registry - Initially, our app was running on a single node 
- We could *build* and *run* in the same place - Therefore, we did not need to *ship* anything - Now that we want to run on a cluster, things are different - The easiest way to ship container images is to use a registry .debug[[k8s/shippingimages.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/shippingimages.md)] --- ## How Docker registries work (a reminder) - What happens when we execute `docker run alpine` ? - If the Engine needs to pull the `alpine` image, it expands it into `library/alpine` - `library/alpine` is expanded into `index.docker.io/library/alpine` - The Engine communicates with `index.docker.io` to retrieve `library/alpine:latest` - To use something else than `index.docker.io`, we specify it in the image name - Examples: ```bash docker pull gcr.io/google-containers/alpine-with-bash:1.0 docker build -t registry.mycompany.io:5000/myimage:awesome . docker push registry.mycompany.io:5000/myimage:awesome ``` .debug[[k8s/shippingimages.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/shippingimages.md)] --- ## Running DockerCoins on Kubernetes - Create one deployment for each component (hasher, redis, rng, webui, worker) - Expose deployments that need to accept connections (hasher, redis, rng, webui) - For redis, we can use the official redis image - For the 4 others, we need to build images and push them to some registry .debug[[k8s/shippingimages.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/shippingimages.md)] --- ## Building and shipping images - There are *many* options! - Manually: - build locally (with `docker build` or otherwise) - push to the registry - Automatically: - build and test locally - when ready, commit and push a code repository - the code repository notifies an automated build system - that system gets the code, builds it, pushes the image to the registry .debug[[k8s/shippingimages.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/shippingimages.md)] --- ## Which registry do we want to use? - There are SAAS products like Docker Hub, Quay ... - Each major cloud provider has an option as well (ACR on Azure, ECR on AWS, GCR on Google Cloud...) - There are also commercial products to run our own registry (Docker EE, Quay...) - And open source options, too! 
- When picking a registry, pay attention to its build system (when it has one) .debug[[k8s/shippingimages.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/shippingimages.md)] --- ## Using images from the Docker Hub - For everyone's convenience, we took care of building DockerCoins images - We pushed these images to the DockerHub, under the [dockercoins](https://hub.docker.com/u/dockercoins) user - These images are *tagged* with a version number, `v0.1` - The full image names are therefore: - `dockercoins/hasher:v0.1` - `dockercoins/rng:v0.1` - `dockercoins/webui:v0.1` - `dockercoins/worker:v0.1` .debug[[k8s/buildshiprun-dockerhub.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/buildshiprun-dockerhub.md)] --- ## Setting `$REGISTRY` and `$TAG` - In the upcoming exercises and labs, we use a couple of environment variables: - `$REGISTRY` as a prefix to all image names - `$TAG` as the image version tag - For example, the worker image is `$REGISTRY/worker:$TAG` - If you copy-paste the commands in these exercises: **make sure that you set `$REGISTRY` and `$TAG` first!** - For example: ``` export REGISTRY=dockercoins TAG=v0.1 ``` (this will expand `$REGISTRY/worker:$TAG` to `dockercoins/worker:v0.1`) .debug[[k8s/buildshiprun-dockerhub.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/buildshiprun-dockerhub.md)] --- class: pic .interstitial[] --- name: toc-running-our-application-on-kubernetes class: title Running our application on Kubernetes .nav[ [Previous section](#toc-shipping-images-with-a-registry) | [Back to table of contents](#toc-chapter-8) | [Next section](#toc-scaling-our-demo-app) ] .debug[(automatically generated title slide)] --- # Running our application on Kubernetes - We can now deploy our code (as well as a redis instance) .exercise[ - Deploy `redis`: ```bash kubectl create deployment redis --image=redis ``` - Deploy everything else: ```bash set -u for SERVICE in hasher rng webui worker; do kubectl create deployment $SERVICE --image=$REGISTRY/$SERVICE:$TAG done ``` ] .debug[[k8s/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ourapponkube.md)] --- ## Is this working? - After waiting for the deployment to complete, let's look at the logs! (Hint: use `kubectl get deploy -w` to watch deployment events) .exercise[ - Look at some logs: ```bash kubectl logs deploy/rng kubectl logs deploy/worker ``` ] -- π€ `rng` is fine ... But not `worker`. -- π‘ Oh right! We forgot to `expose`. .debug[[k8s/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ourapponkube.md)] --- ## Connecting containers together - Three deployments need to be reachable by others: `hasher`, `redis`, `rng` - `worker` doesn't need to be exposed - `webui` will be dealt with later .exercise[ - Expose each deployment, specifying the right port: ```bash kubectl expose deployment redis --port 6379 kubectl expose deployment rng --port 80 kubectl expose deployment hasher --port 80 ``` ] .debug[[k8s/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ourapponkube.md)] --- ## Is this working yet? - The `worker` has an infinite loop, that retries 10 seconds after an error .exercise[ - Stream the worker's logs: ```bash kubectl logs deploy/worker --follow ``` (Give it about 10 seconds to recover) ] -- We should now see the `worker`, well, working happily. 
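If the worker keeps erroring out, a quick sanity check (a sketch, assuming the three services were created exactly as above) is to confirm that each service has endpoints, and that service names resolve from inside the cluster:

```bash
# Each service should list at least one pod IP:port pair under ENDPOINTS
kubectl get endpoints redis rng hasher

# Resolve a service name from a throwaway pod (nslookup ships with alpine/busybox)
kubectl run dnstest --rm -it --restart=Never --image=alpine -- nslookup redis
```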
.debug[[k8s/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ourapponkube.md)] --- ## Exposing services for external access - Now we would like to access the Web UI - We will expose it with a `NodePort` (just like we did for the registry) .exercise[ - Create a `NodePort` service for the Web UI: ```bash kubectl expose deploy/webui --type=NodePort --port=80 ``` - Check the port that was allocated: ```bash kubectl get svc ``` ] .debug[[k8s/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ourapponkube.md)] --- ## Accessing the web UI - We can now connect to *any node*, on the allocated node port, to view the web UI .exercise[ - Open the web UI in your browser (http://node-ip-address:3xxxx/) ] -- Yes, this may take a little while to update. *(Narrator: it was DNS.)* -- *Alright, we're back to where we started, when we were running on a single node!* .debug[[k8s/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ourapponkube.md)] --- class: pic .interstitial[] --- name: toc-scaling-our-demo-app class: title Scaling our demo app .nav[ [Previous section](#toc-running-our-application-on-kubernetes) | [Back to table of contents](#toc-chapter-8) | [Next section](#toc-namespaces) ] .debug[(automatically generated title slide)] --- # Scaling our demo app - Our ultimate goal is to get more DockerCoins (i.e. increase the number of loops per second shown on the web UI) - Let's look at the architecture again:  - The loop is done in the worker; perhaps we could try adding more workers? .debug[[k8s/scalingdockercoins.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/scalingdockercoins.md)] --- ## Adding another worker - All we have to do is scale the `worker` Deployment .exercise[ - Open two new terminals to check what's going on with pods and deployments: ```bash kubectl get pods -w kubectl get deployments -w ``` - Now, create more `worker` replicas: ```bash kubectl scale deployment worker --replicas=2 ``` ] After a few seconds, the graph in the web UI should show up. .debug[[k8s/scalingdockercoins.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/scalingdockercoins.md)] --- ## Adding more workers - If 2 workers give us 2x speed, what about 3 workers? .exercise[ - Scale the `worker` Deployment further: ```bash kubectl scale deployment worker --replicas=3 ``` ] The graph in the web UI should go up again. (This is looking great! We're gonna be RICH!) .debug[[k8s/scalingdockercoins.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/scalingdockercoins.md)] --- ## Adding even more workers - Let's see if 10 workers give us 10x speed! .exercise[ - Scale the `worker` Deployment to a bigger number: ```bash kubectl scale deployment worker --replicas=10 ``` ] -- The graph will peak at 10 hashes/second. (We can add as many workers as we want: we will never go past 10 hashes/second.) .debug[[k8s/scalingdockercoins.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/scalingdockercoins.md)] --- class: extra-details ## Didn't we briefly exceed 10 hashes/second? 
- It may *look like it*, because the web UI shows instant speed - The instant speed can briefly exceed 10 hashes/second - The average speed cannot - The instant speed can be biased because of how it's computed .debug[[k8s/scalingdockercoins.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/scalingdockercoins.md)] --- class: extra-details ## Why instant speed is misleading - The instant speed is computed client-side by the web UI - The web UI checks the hash counter once per second (and does a classic (h2-h1)/(t2-t1) speed computation) - The counter is updated once per second by the workers - These timings are not exact (e.g. the web UI check interval is client-side JavaScript) - Sometimes, between two web UI counter measurements, the workers are able to update the counter *twice* - During that cycle, the instant speed will appear to be much bigger (but it will be compensated by lower instant speed before and after) .debug[[k8s/scalingdockercoins.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/scalingdockercoins.md)] --- ## Why are we stuck at 10 hashes per second? - If this was high-quality, production code, we would have instrumentation (Datadog, Honeycomb, New Relic, statsd, Sumologic, ...) - It's not! - Perhaps we could benchmark our web services? (with tools like `ab`, or even simpler, `httping`) .debug[[k8s/scalingdockercoins.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/scalingdockercoins.md)] --- ## Benchmarking our web services - We want to check `hasher` and `rng` - We are going to use `httping` - It's just like `ping`, but using HTTP `GET` requests (it measures how long it takes to perform one `GET` request) - It's used like this: ``` httping [-c count] http://host:port/path ``` - Or even simpler: ``` httping ip.ad.dr.ess ``` - We will use `httping` on the ClusterIP addresses of our services .debug[[k8s/scalingdockercoins.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/scalingdockercoins.md)] --- ## Obtaining ClusterIP addresses - We can simply check the output of `kubectl get services` - Or do it programmatically, as in the example below .exercise[ - Retrieve the IP addresses: ```bash HASHER=$(kubectl get svc hasher -o go-template={{.spec.clusterIP}}) RNG=$(kubectl get svc rng -o go-template={{.spec.clusterIP}}) ``` ] Now we can access the IP addresses of our services through `$HASHER` and `$RNG`. .debug[[k8s/scalingdockercoins.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/scalingdockercoins.md)] --- ## Checking `hasher` and `rng` response times .exercise[ - Check the response times for both services: ```bash httping -c 3 $HASHER httping -c 3 $RNG ``` ] - `hasher` is fine (it should take a few milliseconds to reply) - `rng` is not (it should take about 700 milliseconds if there are 10 workers) - Something is wrong with `rng`, but ... what? .debug[[k8s/scalingdockercoins.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/scalingdockercoins.md)] --- ## Let's draw hasty conclusions - The bottleneck seems to be `rng` - *What if* we don't have enough entropy and can't generate enough random numbers? - We need to scale out the `rng` service on multiple machines! Note: this is a fiction! We have enough entropy. But we need a pretext to scale out. (In fact, the code of `rng` uses `/dev/urandom`, which never runs out of entropy... 
...and is [just as good as `/dev/random`](http://www.slideshare.net/PacSecJP/filippo-plain-simple-reality-of-entropy).) .debug[[shared/hastyconclusions.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/hastyconclusions.md)] --- class: pic .interstitial[] --- name: toc-namespaces class: title Namespaces .nav[ [Previous section](#toc-scaling-our-demo-app) | [Back to table of contents](#toc-chapter-9) | [Next section](#toc-daemon-sets) ] .debug[(automatically generated title slide)] --- # Namespaces - We would like to deploy another copy of DockerCoins on our cluster - We could rename all our deployments and services: hasher β hasher2, redis β redis2, rng β rng2, etc. - That would require updating the code - There has to be a better way! -- - As hinted by the title of this section, we will use *namespaces* .debug[[k8s/namespaces.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/namespaces.md)] --- ## Identifying a resource - We cannot have two resources with the same name (or can we...?) -- - We cannot have two resources *of the same kind* with the same name (but it's OK to have an `rng` service, an `rng` deployment, and an `rng` daemon set) -- - We cannot have two resources of the same kind with the same name *in the same namespace* (but it's OK to have e.g. two `rng` services in different namespaces) -- - Except for resources that exist at the *cluster scope* (these do not belong to a namespace) .debug[[k8s/namespaces.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/namespaces.md)] --- ## Uniquely identifying a resource - For *namespaced* resources: the tuple *(kind, name, namespace)* needs to be unique - For resources at the *cluster scope*: the tuple *(kind, name)* needs to be unique .exercise[ - List resource types again, and check the NAMESPACED column: ```bash kubectl api-resources ``` ] .debug[[k8s/namespaces.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/namespaces.md)] --- ## Pre-existing namespaces - If we deploy a cluster with `kubeadm`, we have three or four namespaces: - `default` (for our applications) - `kube-system` (for the control plane) - `kube-public` (contains one ConfigMap for cluster discovery) - `kube-node-lease` (in Kubernetes 1.14 and later; contains Lease objects) - If we deploy differently, we may have different namespaces .debug[[k8s/namespaces.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/namespaces.md)] --- ## Creating namespaces - Let's see two identical methods to create a namespace .exercise[ - We can use `kubectl create namespace`: ```bash kubectl create namespace blue ``` - Or we can construct a very minimal YAML snippet: ```bash kubectl apply -f- < - Unfortunately, as of Kubernetes 1.15, the CLI cannot create daemon sets -- - More precisely: it doesn't have a subcommand to create a daemon set -- - But any kind of resource can always be created by providing a YAML description: ```bash kubectl apply -f foo.yaml ``` -- - How do we create the YAML file for our daemon set? 
-- - option 1: [read the docs](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#create-a-daemonset) -- - option 2: `vi` our way out of it .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Creating the YAML file for our daemon set - Let's start with the YAML file for the current `rng` resource .exercise[ - Dump the `rng` resource in YAML: ```bash kubectl get deploy/rng -o yaml >rng.yml ``` - Edit `rng.yml` ] .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## "Casting" a resource to another - What if we just changed the `kind` field? (It can't be that easy, right?) .exercise[ - Change `kind: Deployment` to `kind: DaemonSet` - Save, quit - Try to create our new resource: ``` kubectl apply -f rng.yml ``` ] -- We all knew this couldn't be that easy, right! .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Understanding the problem - The core of the error is: ``` error validating data: [ValidationError(DaemonSet.spec): unknown field "replicas" in io.k8s.api.extensions.v1beta1.DaemonSetSpec, ... ``` -- - *Obviously,* it doesn't make sense to specify a number of replicas for a daemon set -- - Workaround: fix the YAML - remove the `replicas` field - remove the `strategy` field (which defines the rollout mechanism for a deployment) - remove the `progressDeadlineSeconds` field (also used by the rollout mechanism) - remove the `status: {}` line at the end -- - Or, we could also ... .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Use the `--force`, Luke - We could also tell Kubernetes to ignore these errors and try anyway - The `--force` flag's actual name is `--validate=false` .exercise[ - Try to load our YAML file and ignore errors: ```bash kubectl apply -f rng.yml --validate=false ``` ] -- π©β¨π -- Wait ... Now, can it be *that* easy? .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Checking what we've done - Did we transform our `deployment` into a `daemonset`? .exercise[ - Look at the resources that we have now: ```bash kubectl get all ``` ] -- We have two resources called `rng`: - the *deployment* that was existing before - the *daemon set* that we just created We also have one too many pods. (The pod corresponding to the *deployment* still exists.) .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## `deploy/rng` and `ds/rng` - You can have different resource types with the same name (i.e. 
a *deployment* and a *daemon set* both named `rng`) - We still have the old `rng` *deployment* ``` NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deployment.apps/rng 1 1 1 1 18m ``` - But now we have the new `rng` *daemon set* as well ``` NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/rng 2 2 2 2 2 9s ``` .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Too many pods - If we check with `kubectl get pods`, we see: - *one pod* for the deployment (named `rng-xxxxxxxxxx-yyyyy`) - *one pod per node* for the daemon set (named `rng-zzzzz`) ``` NAME READY STATUS RESTARTS AGE rng-54f57d4d49-7pt82 1/1 Running 0 11m rng-b85tm 1/1 Running 0 25s rng-hfbrr 1/1 Running 0 25s [...] ``` -- The daemon set created one pod per node, except on the master node. The master node has [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) preventing pods from running there. (To schedule a pod on this node anyway, the pod will require appropriate [tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/).) .footnote[(Off by one? We don't run these pods on the node hosting the control plane.)] .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Is this working? - Look at the web UI -- - The graph should now go above 10 hashes per second! -- - It looks like the newly created pods are serving traffic correctly - How and why did this happen? (We didn't do anything special to add them to the `rng` service load balancer!) .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- class: pic .interstitial[] --- name: toc-labels-and-selectors class: title Labels and selectors .nav[ [Previous section](#toc-daemon-sets) | [Back to table of contents](#toc-chapter-9) | [Next section](#toc-exercise--from-compose-to-kubernetes) ] .debug[(automatically generated title slide)] --- # Labels and selectors - The `rng` *service* is load balancing requests to a set of pods - That set of pods is defined by the *selector* of the `rng` service .exercise[ - Check the *selector* in the `rng` service definition: ```bash kubectl describe service rng ``` ] - The selector is `app=rng` - It means "all the pods having the label `app=rng`" (They can have additional labels as well, that's OK!) .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Selector evaluation - We can use selectors with many `kubectl` commands - For instance, with `kubectl get`, `kubectl logs`, `kubectl delete` ... and more .exercise[ - Get the list of pods matching selector `app=rng`: ```bash kubectl get pods -l app=rng kubectl get pods --selector app=rng ``` ] But ... why do these pods (in particular, the *new* ones) have this `app=rng` label? .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Where do labels come from? 
- When we create a deployment with `kubectl create deployment rng`, this deployment gets the label `app=rng` - The replica sets created by this deployment also get the label `app=rng` - The pods created by these replica sets also get the label `app=rng` - When we created the daemon set from the deployment, we re-used the same spec - Therefore, the pods created by the daemon set get the same labels .footnote[Note: when we use `kubectl run stuff`, the label is `run=stuff` instead.] .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Updating load balancer configuration - We would like to remove a pod from the load balancer - What would happen if we removed that pod, with `kubectl delete pod ...`? -- It would be re-created immediately (by the replica set or the daemon set) -- - What would happen if we removed the `app=rng` label from that pod? -- It would *also* be re-created immediately -- Why?!? .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Selectors for replica sets and daemon sets - The "mission" of a replica set is: "Make sure that there is the right number of pods matching this spec!" - The "mission" of a daemon set is: "Make sure that there is a pod matching this spec on each node!" -- - *In fact,* replica sets and daemon sets do not check pod specifications - They merely have a *selector*, and they look for pods matching that selector - Yes, we can fool them by manually creating pods with the "right" labels - Bottom line: if we remove our `app=rng` label ... ... The pod "disappears" for its parent, which re-creates another pod to replace it .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- class: extra-details ## Isolation of replica sets and daemon sets - Since both the `rng` daemon set and the `rng` replica set use `app=rng` ... ... Why don't they "find" each other's pods? -- - *Replica sets* have a more specific selector, visible with `kubectl describe` (It looks like `app=rng,pod-template-hash=abcd1234`) - *Daemon sets* also have a more specific selector, but it's invisible (It looks like `app=rng,controller-revision-hash=abcd1234`) - As a result, each controller only "sees" the pods it manages .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Removing a pod from the load balancer - Currently, the `rng` service is defined by the `app=rng` selector - The only way to remove a pod is to remove or change the `app` label - ... But that will cause another pod to be created instead! - What's the solution? -- - We need to change the selector of the `rng` service! - Let's add another label to that selector (e.g. `enabled=yes`) .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Complex selectors - If a selector specifies multiple labels, they are understood as a logical *AND* (In other words: the pods must match all the labels) - Kubernetes has support for advanced, set-based selectors (But these cannot be used with services, at least not yet!) .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## The plan 1. Add the label `enabled=yes` to all our `rng` pods 2. Update the selector for the `rng` service to also include `enabled=yes` 3. 
Toggle traffic to a pod by manually adding/removing the `enabled` label 4. Profit! *Note: if we swap steps 1 and 2, it will cause a short service disruption, because there will be a period of time during which the service selector won't match any pod. During that time, requests to the service will time out. By doing things in the order above, we guarantee that there won't be any interruption.* .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Adding labels to pods - We want to add the label `enabled=yes` to all pods that have `app=rng` - We could edit each pod one by one with `kubectl edit` ... - ... Or we could use `kubectl label` to label them all - `kubectl label` can use selectors itself .exercise[ - Add `enabled=yes` to all pods that have `app=rng`: ```bash kubectl label pods -l app=rng enabled=yes ``` ] .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Updating the service selector - We need to edit the service specification - Reminder: in the service definition, we will see `app: rng` in two places - the label of the service itself (we don't need to touch that one) - the selector of the service (that's the one we want to change) .exercise[ - Update the service to add `enabled: yes` to its selector: ```bash kubectl edit service rng ``` ] -- ... And then we get *the weirdest error ever.* Why? .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## When the YAML parser is being too smart - YAML parsers try to help us: - `xyz` is the string `"xyz"` - `42` is the integer `42` - `yes` is the boolean value `true` - If we want the string `"42"` or the string `"yes"`, we have to quote them - So we have to use `enabled: "yes"` .footnote[For a good laugh: if we had used "ja", "oui", "si" ... as the value, it would have worked!] .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Updating the service selector, take 2 .exercise[ - Update the service to add `enabled: "yes"` to its selector: ```bash kubectl edit service rng ``` ] This time it should work! If we did everything correctly, the web UI shouldn't show any change. .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Updating labels - We want to disable the pod that was created by the deployment - All we have to do, is remove the `enabled` label from that pod - To identify that pod, we can use its name - ... Or rely on the fact that it's the only one with a `pod-template-hash` label - Good to know: - `kubectl label ... foo=` doesn't remove a label (it sets it to an empty string) - to remove label `foo`, use `kubectl label ... 
foo-` - to change an existing label, we would need to add `--overwrite` .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Removing a pod from the load balancer .exercise[ - In one window, check the logs of that pod: ```bash POD=$(kubectl get pod -l app=rng,pod-template-hash -o name) kubectl logs --tail 1 --follow $POD ``` (We should see a steady stream of HTTP logs) - In another window, remove the label from the pod: ```bash kubectl label pod -l app=rng,pod-template-hash enabled- ``` (The stream of HTTP logs should stop immediately) ] There might be a slight change in the web UI (since we removed a bit of capacity from the `rng` service). If we remove more pods, the effect should be more visible. .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- class: extra-details ## Updating the daemon set - If we scale up our cluster by adding new nodes, the daemon set will create more pods - These pods won't have the `enabled=yes` label - If we want these pods to have that label, we need to edit the daemon set spec - We can do that with e.g. `kubectl edit daemonset rng` .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- class: extra-details ## We've put resources in your resources - Reminder: a daemon set is a resource that creates more resources! - There is a difference between: - the label(s) of a resource (in the `metadata` block in the beginning) - the selector of a resource (in the `spec` block) - the label(s) of the resource(s) created by the first resource (in the `template` block) - We would need to update the selector and the template (metadata labels are not mandatory) - The template must match the selector (i.e. the resource will refuse to create resources that it will not select) .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Labels and debugging - When a pod is misbehaving, we can delete it: another one will be recreated - But we can also change its labels - It will be removed from the load balancer (it won't receive traffic anymore) - Another pod will be recreated immediately - But the problematic pod is still here, and we can inspect and debug it - We can even re-add it to the rotation if necessary (Very useful to troubleshoot intermittent and elusive bugs) .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Labels and advanced rollout control - Conversely, we can add pods matching a service's selector - These pods will then receive requests and serve traffic - Examples: - one-shot pod with all debug flags enabled, to collect logs - pods created automatically, but added to rotation in a second step (by setting their label accordingly) - This gives us building blocks for canary and blue/green deployments .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- class: pic .interstitial[] --- name: toc-exercise--from-compose-to-kubernetes class: title Exercise β from Compose to Kubernetes .nav[ [Previous section](#toc-labels-and-selectors) | [Back to table of contents](#toc-chapter-9) | [Next section](#toc-rolling-updates) ] .debug[(automatically generated title slide)] --- # Exercise β from Compose to Kubernetes Let's run the wordsmith app on Kubernetes! 
The code is at: https://github.com/jpetazzo/wordsmith .debug[[three.yml](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/three.yml)] --- class: pic .interstitial[] --- name: toc-rolling-updates class: title Rolling updates .nav[ [Previous section](#toc-exercise--from-compose-to-kubernetes) | [Back to table of contents](#toc-chapter-10) | [Next section](#toc-healthchecks) ] .debug[(automatically generated title slide)] --- # Rolling updates - By default (without rolling updates), when a scaled resource is updated: - new pods are created - old pods are terminated - ... all at the same time - if something goes wrong, Β―\\\_(γ)\_/Β― .debug[[k8s/rollout.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/rollout.md)] --- ## Rolling updates - With rolling updates, when a resource is updated, it happens progressively - Two parameters determine the pace of the rollout: `maxUnavailable` and `maxSurge` - They can be specified in absolute number of pods, or percentage of the `replicas` count - At any given time ... - there will always be at least `replicas`-`maxUnavailable` pods available - there will never be more than `replicas`+`maxSurge` pods in total - there will therefore be up to `maxUnavailable`+`maxSurge` pods being updated - We have the possibility of rolling back to the previous version (if the update fails or is unsatisfactory in any way) .debug[[k8s/rollout.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/rollout.md)] --- ## Checking current rollout parameters - Recall how we build custom reports with `kubectl` and `jq`: .exercise[ - Show the rollout plan for our deployments: ```bash kubectl get deploy -o json | jq ".items[] | {name:.metadata.name} + .spec.strategy.rollingUpdate" ``` ] .debug[[k8s/rollout.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/rollout.md)] --- ## Rolling updates in practice - As of Kubernetes 1.8, we can do rolling updates with: `deployments`, `daemonsets`, `statefulsets` - Editing one of these resources will automatically result in a rolling update - Rolling updates can be monitored with the `kubectl rollout` subcommand .debug[[k8s/rollout.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/rollout.md)] --- ## Building a new version of the `worker` service .warning[ Only run these commands if you have built and pushed DockerCoins to a local registry. If you are using images from the Docker Hub (`dockercoins/worker:v0.1`), skip this. ] .exercise[ - Go to the `stacks` directory (`~/container.training/stacks`) - Edit `dockercoins/worker/worker.py`; update the first `sleep` line to sleep 1 second - Build a new tag and push it to the registry: ```bash #export REGISTRY=localhost:3xxxx export TAG=v0.2 docker-compose -f dockercoins.yml build docker-compose -f dockercoins.yml push ``` ] .debug[[k8s/rollout.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/rollout.md)] --- ## Rolling out the new `worker` service .exercise[ - Let's monitor what's going on by opening a few terminals, and run: ```bash kubectl get pods -w kubectl get replicasets -w kubectl get deployments -w ``` - Update `worker` either with `kubectl edit`, or by running: ```bash kubectl set image deploy worker worker=$REGISTRY/worker:$TAG ``` ] -- That rollout should be pretty quick. What shows in the web UI? 
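Besides the watch windows, we can also follow the rollout itself, and double-check which image the Deployment now points at (a quick sketch; the `jsonpath` expression assumes a single container in the pod template):

```bash
# Block until the rollout completes (or reports a failure)
kubectl rollout status deploy worker

# Print the image currently set in the pod template
kubectl get deploy worker -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
```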
.debug[[k8s/rollout.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/rollout.md)] --- ## Give it some time - At first, it looks like nothing is happening (the graph remains at the same level) - According to `kubectl get deploy -w`, the `deployment` was updated really quickly - But `kubectl get pods -w` tells a different story - The old `pods` are still here, and they stay in `Terminating` state for a while - Eventually, they are terminated; and then the graph decreases significantly - This delay is due to the fact that our worker doesn't handle signals - Kubernetes sends a "polite" shutdown request to the worker, which ignores it - After a grace period, Kubernetes gets impatient and kills the container (The grace period is 30 seconds, but [can be changed](https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods) if needed) .debug[[k8s/rollout.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/rollout.md)] --- ## Rolling out something invalid - What happens if we make a mistake? .exercise[ - Update `worker` by specifying a non-existent image: ```bash export TAG=v0.3 kubectl set image deploy worker worker=$REGISTRY/worker:$TAG ``` - Check what's going on: ```bash kubectl rollout status deploy worker ``` ] -- Our rollout is stuck. However, the app is not dead. (After a minute, it will stabilize to be 20-25% slower.) .debug[[k8s/rollout.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/rollout.md)] --- ## What's going on with our rollout? - Why is our app a bit slower? - Because `MaxUnavailable=25%` ... So the rollout terminated 2 replicas out of 10 available - Okay, but why do we see 5 new replicas being rolled out? - Because `MaxSurge=25%` ... So in addition to replacing 2 replicas, the rollout is also starting 3 more - It rounded down the number of MaxUnavailable pods conservatively, but the total number of pods being rolled out is allowed to be 25+25=50% .debug[[k8s/rollout.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/rollout.md)] --- class: extra-details ## The nitty-gritty details - We start with 10 pods running for the `worker` deployment - Current settings: MaxUnavailable=25% and MaxSurge=25% - When we start the rollout: - two replicas are taken down (as per MaxUnavailable=25%) - two others are created (with the new version) to replace them - three others are created (with the new version) per MaxSurge=25%) - Now we have 8 replicas up and running, and 5 being deployed - Our rollout is stuck at this point! .debug[[k8s/rollout.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/rollout.md)] --- ## Checking the dashboard during the bad rollout If you didn't deploy the Kubernetes dashboard earlier, just skip this slide. .exercise[ - Check which port the dashboard is on: ```bash kubectl -n kube-system get svc socat ``` ] Note the `3xxxx` port. 
.exercise[ - Connect to http://oneofournodes:3xxxx/ ] -- - We have failures in Deployments, Pods, and Replica Sets .debug[[k8s/rollout.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/rollout.md)] --- ## Recovering from a bad rollout - We could push some `v0.3` image (the pod retry logic will eventually catch it and the rollout will proceed) - Or we could invoke a manual rollback .exercise[ - Cancel the deployment and wait for the dust to settle: ```bash kubectl rollout undo deploy worker kubectl rollout status deploy worker ``` ] .debug[[k8s/rollout.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/rollout.md)] --- class: extra-details ## Changing rollout parameters - We want to: - revert to `v0.1` - be conservative on availability (always have desired number of available workers) - go slow on rollout speed (update only one pod at a time) - give some time to our workers to "warm up" before starting more The corresponding changes can be expressed in the following YAML snippet: .small[ ```yaml spec: template: spec: containers: - name: worker image: $REGISTRY/worker:v0.1 strategy: rollingUpdate: maxUnavailable: 0 maxSurge: 1 minReadySeconds: 10 ``` ] .debug[[k8s/rollout.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/rollout.md)] --- class: extra-details ## Applying changes through a YAML patch - We could use `kubectl edit deployment worker` - But we could also use `kubectl patch` with the exact YAML shown before .exercise[ .small[ - Apply all our changes and wait for them to take effect: ```bash kubectl patch deployment worker -p " spec: template: spec: containers: - name: worker image: $REGISTRY/worker:v0.1 strategy: rollingUpdate: maxUnavailable: 0 maxSurge: 1 minReadySeconds: 10 " kubectl rollout status deployment worker kubectl get deploy -o json worker | jq "{name:.metadata.name} + .spec.strategy.rollingUpdate" ``` ] ] .debug[[k8s/rollout.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/rollout.md)] --- class: pic .interstitial[] --- name: toc-healthchecks class: title Healthchecks .nav[ [Previous section](#toc-rolling-updates) | [Back to table of contents](#toc-chapter-10) | [Next section](#toc-exposing-http-services-with-ingress-resources) ] .debug[(automatically generated title slide)] --- # Healthchecks - Kubernetes provides two kinds of healthchecks: liveness and readiness - Healthchecks are *probes* that apply to *containers* (not to pods) - Each container can have two (optional) probes: - liveness = is this container dead or alive? - readiness = is this container ready to serve traffic? - Different probes are available (HTTP, TCP, program execution) - Let's see the difference and how to use them! .debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks.md)] --- ## Liveness probe - Indicates if the container is dead or alive - A dead container cannot come back to life - If the liveness probe fails, the container is killed (to make really sure that it's really dead; no zombies or undeads!) 
- What happens next depends on the pod's `restartPolicy`: - `Never`: the container is not restarted - `OnFailure` or `Always`: the container is restarted .debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks.md)] --- ## When to use a liveness probe - To indicate failures that can't be recovered - deadlocks (causing all requests to time out) - internal corruption (causing all requests to error) - If the liveness probe fails *N* consecutive times, the container is killed - *N* is the `failureThreshold` (3 by default) .debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks.md)] --- ## Readiness probe - Indicates if the container is ready to serve traffic - If a container becomes "unready" (let's say busy!) it might be ready again soon - If the readiness probe fails: - the container is *not* killed - if the pod is a member of a service, it is temporarily removed - it is re-added as soon as the readiness probe passes again .debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks.md)] --- ## When to use a readiness probe - To indicate temporary failures - the application can only service *N* parallel connections - the runtime is busy doing garbage collection or initial data load - The container is marked as "not ready" after `failureThreshold` failed attempts (3 by default) - It is marked again as "ready" after `successThreshold` successful attempts (1 by default) .debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks.md)] --- ## Different types of probes - HTTP request - specify URL of the request (and optional headers) - any status code between 200 and 399 indicates success - TCP connection - the probe succeeds if the TCP port is open - arbitrary exec - a command is executed in the container - exit status of zero indicates success .debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks.md)] --- ## Benefits of using probes - Rolling updates proceed when containers are *actually ready* (as opposed to merely started) - Containers in a broken state get killed and restarted (instead of serving errors or timeouts) - Overloaded backends get removed from load balancer rotation (thus improving response times across the board) .debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks.md)] --- ## Example: HTTP probe Here is a pod template for the `rng` web service of the DockerCoins app: ```yaml apiVersion: v1 kind: Pod metadata: name: rng-with-liveness spec: containers: - name: rng image: dockercoins/rng:v0.1 livenessProbe: httpGet: path: / port: 80 initialDelaySeconds: 10 periodSeconds: 1 ``` If the backend serves an error, or takes longer than 1s, 3 times in a row, it gets killed. .debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks.md)] --- ## Example: exec probe Here is a pod template for a Redis server: ```yaml apiVersion: v1 kind: Pod metadata: name: redis-with-liveness spec: containers: - name: redis image: redis livenessProbe: exec: command: ["redis-cli", "ping"] ``` If the Redis process becomes unresponsive, it will be killed. 
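For contrast, here is what a *readiness* probe could look like (a sketch only: the `web` pod, its image, and the thresholds are illustrative and not part of DockerCoins):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-readiness
spec:
  containers:
  - name: web
    image: nginx                # illustrative image
    readinessProbe:
      httpGet:
        path: /                 # any status code between 200 and 399 counts as a success
        port: 80
      periodSeconds: 5          # probe every 5 seconds
      failureThreshold: 3       # 3 failures in a row = removed from service endpoints
      successThreshold: 1       # 1 success = added back
```

While this probe fails, the container is *not* killed; the pod is merely taken out of the endpoints of the services that select it, and put back as soon as the probe succeeds again.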
.debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks.md)] --- ## Details about liveness and readiness probes - Probes are executed at intervals of `periodSeconds` (default: 10) - The timeout for a probe is set with `timeoutSeconds` (default: 1) - A probe is considered successful after `successThreshold` successes (default: 1) - A probe is considered failing after `failureThreshold` failures (default: 3) - If a probe is not defined, it's as if there was an "always successful" probe .debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks.md)] --- ## Questions to ask before adding healthchecks - Do we want liveness, readiness, both? (sometimes, we can use the same check, but with different failure thresholds) - Do we have existing HTTP endpoints that we can use? - Do we need to add new endpoints, or perhaps use something else? - Are our healthchecks likely to use resources and/or slow down the app? - Do they depend on additional services? (this can be particularly tricky, see next slide) .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## Healthchecks and dependencies - A good healthcheck should always indicate the health of the service itself - It should not be affected by the state of the service's dependencies - Example: a web server requiring a database connection to operate (make sure that the healthcheck can report "OK" even if the database is down; because it won't help us to restart the web server if the issue is with the DB!) - Example: a microservice calling other microservices - Example: a worker process (these will generally require minor code changes to report health) .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## Adding healthchecks to an app - Let's add healthchecks to DockerCoins! - We will examine the questions of the previous slide - Then we will review each component individually to add healthchecks .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## Liveness, readiness, or both? - To answer that question, we need to see the app run for a while - Do we get temporary, recoverable glitches? β then use readiness - Or do we get hard lock-ups requiring a restart? β then use liveness - In the case of DockerCoins, we don't know yet! - Let's pick liveness .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## Do we have HTTP endpoints that we can use? - Each of the 3 web services (hasher, rng, webui) has a trivial route on `/` - These routes: - don't seem to perform anything complex or expensive - don't seem to call other services - Perfect! 
(See next slides for individual details) .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- - [hasher.rb](https://github.com/jpetazzo/container.training/blob/master/dockercoins/hasher/hasher.rb) ```ruby get '/' do "HASHER running on #{Socket.gethostname}\n" end ``` - [rng.py](https://github.com/jpetazzo/container.training/blob/master/dockercoins/rng/rng.py) ```python @app.route("/") def index(): return "RNG running on {}\n".format(hostname) ``` - [webui.js](https://github.com/jpetazzo/container.training/blob/master/dockercoins/webui/webui.js) ```javascript app.get('/', function (req, res) { res.redirect('/index.html'); }); ``` .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## Running DockerCoins - We will run DockerCoins in a new, separate namespace - We will use a set of YAML manifests and pre-built images - We will add our new liveness probe to the YAML of the `rng` DaemonSet - Then, we will deploy the application .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## Creating a new namespace - This will make sure that we don't collide / conflict with previous exercises .exercise[ - Create the yellow namespace: ```bash kubectl create namespace probes ``` - Switch to that namespace: ```bash kns probes ``` ] .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## Retrieving DockerCoins manifests - All the manifests that we need are on a convenient repository: https://github.com/jpetazzo/kubercoins .exercise[ - Clone that repository, if we haven't done it yet: ```bash cd ~ git clone https://github.com/jpetazzo/kubercoins ``` - Change directory to the repository: ```bash cd kubercoins ``` ] .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## A simple HTTP liveness probe This is what our liveness probe should look like: ```yaml containers: - name: ... image: ... livenessProbe: httpGet: path: / port: 80 initialDelaySeconds: 30 periodSeconds: 5 ``` This will give 30 seconds to the service to start. (Way more than necessary!) It will run the probe every 5 seconds. It will use the default timeout (1 second). It will use the default failure threshold (3 failed attempts = dead). It will use the default success threshold (1 successful attempt = alive). .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## Adding the liveness probe - Let's add the liveness probe, then deploy DockerCoins .exercise[ - Edit `rng-deployment.yaml` and add the liveness probe ```bash vim rng-deployment.yaml ``` - Load the YAML for all the resources of DockerCoins: ```bash kubectl apply -f . ``` ] .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## Testing the liveness probe - The rng service needs 100ms to process a request (because it is single-threaded and sleeps 0.1s in each request) - The probe timeout is set to 1 second - If we send more than 10 requests per second per backend, it will break - Let's generate traffic and see what happens! 
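For reference, after the edit above, the relevant part of `rng-deployment.yaml` should look roughly like this (a sketch: the surrounding fields follow the generic Deployment layout, and the actual kubercoins manifest may differ slightly):

```yaml
spec:
  template:
    spec:
      containers:
      - name: rng
        image: dockercoins/rng:v0.1   # image tag as used in the earlier probe example
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          periodSeconds: 5
```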
.exercise[ - Get the ClusterIP address of the rng service: ```bash kubectl get svc rng ``` ] .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## Monitoring the rng service - Each command below will show us what's happening on a different level .exercise[ - In one window, monitor cluster events: ```bash kubectl get events -w ``` - In another window, monitor the response time of rng: ```bash httping `` ``` - In another window, monitor pods status: ```bash kubectl get pods -w ``` ] .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## Generating traffic - Let's use `ab` to send concurrent requests to rng .exercise[ - In yet another window, generate traffic: ```bash ab -c 10 -n 1000 http://``/1 ``` - Experiment with higher values of `-c` and see what happens ] - The `-c` parameter indicates the number of concurrent requests - The final `/1` is important to generate actual traffic (otherwise we would use the ping endpoint, which doesn't sleep 0.1s per request) .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## Discussion - Above a given threshold, the liveness probe starts failing (about 10 concurrent requests per backend should be plenty enough) - When the liveness probe fails 3 times in a row, the container is restarted - During the restart, there is *less* capacity available - ... Meaning that the other backends are likely to timeout as well - ... Eventually causing all backends to be restarted - ... And each fresh backend gets restarted, too - This goes on until the load goes down, or we add capacity *This wouldn't be a good healthcheck in a real application!* .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## Better healthchecks - We need to make sure that the healthcheck doesn't trip when performance degrades due to external pressure - Using a readiness check would have fewer effects (but it would still be an imperfect solution) - A possible combination: - readiness check with a short timeout / low failure threshold - liveness check with a longer timeout / higher failure threshold .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## Healthchecks for redis - A liveness probe is enough (it's not useful to remove a backend from rotation when it's the only one) - We could use an exec probe running `redis-cli ping` .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- class: extra-details ## Exec probes and zombies - When using exec probes, we should make sure that we have a *zombie reaper* π€π§π§ Wait, what? - When a process terminates, its parent must call `wait()`/`waitpid()` (this is how the parent process retrieves the child's exit status) - In the meantime, the process is in *zombie* state (the process state will show as `Z` in `ps`, `top` ...) - When a process is killed, its children are *orphaned* and attached to PID 1 - PID 1 has the responsibility of *reaping* these processes when they terminate - OK, but how does that affect us? 
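As a quick check on this: if we suspect zombies are accumulating in a container, we can look at process states directly (a sketch; it assumes the image ships `ps`, which minimal images often don't, and `some-pod` is a placeholder name):

```bash
# Zombie processes show up with state "Z" in the STAT column
kubectl exec some-pod -- ps -eo pid,stat,comm | awk '$2 ~ /^Z/'
```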
.debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- class: extra-details ## PID 1 in containers - On ordinary systems, PID 1 (`/sbin/init`) has logic to reap processes - In containers, PID 1 is typically our application process (e.g. Apache, the JVM, NGINX, Redis ...) - These *do not* take care of reaping orphans - If we use exec probes, we need to add a process reaper - We can add [tini](https://github.com/krallin/tini) to our images - Or [share the PID namespace between containers of a pod](https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/) (and have gcr.io/pause take care of the reaping) .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## Healthchecks for worker - Readiness isn't useful (because worker isn't a backend for a service) - Liveness may help us restart a broken worker, but how can we check it? - Embedding an HTTP server is an option (but it has a high potential for unwanted side effects and false positives) - Using a "lease" file can be relatively easy: - touch a file during each iteration of the main loop - check the timestamp of that file from an exec probe - Writing logs (and checking them from the probe) also works .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- class: pic .interstitial[] --- name: toc-exposing-http-services-with-ingress-resources class: title Exposing HTTP services with Ingress resources .nav[ [Previous section](#toc-healthchecks) | [Back to table of contents](#toc-chapter-10) | [Next section](#toc-exercise--creating-an-ingress) ] .debug[(automatically generated title slide)] --- # Exposing HTTP services with Ingress resources - *Services* give us a way to access a pod or a set of pods - Services can be exposed to the outside world: - with type `NodePort` (on a port >30000) - with type `LoadBalancer` (allocating an external load balancer) - What about HTTP services? - how can we expose `webui`, `rng`, `hasher`? - the Kubernetes dashboard? - a new version of `webui`? .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Exposing HTTP services - If we use `NodePort` services, clients have to specify port numbers (i.e. http://xxxxx:31234 instead of just http://xxxxx) - `LoadBalancer` services are nice, but: - they are not available in all environments - they often carry an additional cost (e.g. they provision an ELB) - they require one extra step for DNS integration (waiting for the `LoadBalancer` to be provisioned; then adding it to DNS) - We could build our own reverse proxy .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Building a custom reverse proxy - There are many options available: Apache, HAProxy, Hipache, NGINX, Traefik, ... (look at [jpetazzo/aiguillage](https://github.com/jpetazzo/aiguillage) for a minimal reverse proxy configuration using NGINX) - Most of these options require us to update/edit configuration files after each change - Some of them can pick up virtual hosts and backends from a configuration store - Wouldn't it be nice if this configuration could be managed with the Kubernetes API? -- - Enter.red[ΒΉ] *Ingress* resources! .footnote[.red[ΒΉ] Pun maybe intended.] 
.debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Ingress resources - Kubernetes API resource (`kubectl get ingress`/`ingresses`/`ing`) - Designed to expose HTTP services - Basic features: - load balancing - SSL termination - name-based virtual hosting - Can also route to different services depending on: - URI path (e.g. `/api`β`api-service`, `/static`β`assets-service`) - Client headers, including cookies (for A/B testing, canary deployment...) - and more! .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Principle of operation - Step 1: deploy an *ingress controller* - ingress controller = load balancer + control loop - the control loop watches over ingress resources, and configures the LB accordingly - Step 2: set up DNS - associate DNS entries with the load balancer address - Step 3: create *ingress resources* - the ingress controller picks up these resources and configures the LB - Step 4: profit! .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Ingress in action - We will deploy the Traefik ingress controller - this is an arbitrary choice - maybe motivated by the fact that Traefik releases are named after cheeses - For DNS, we will use [nip.io](http://nip.io/) - `*.1.2.3.4.nip.io` resolves to `1.2.3.4` - We will create ingress resources for various HTTP services .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Deploying pods listening on port 80 - We want our ingress load balancer to be available on port 80 - We could do that with a `LoadBalancer` service ... but it requires support from the underlying infrastructure - We could use pods specifying `hostPort: 80` ... but with most CNI plugins, this [doesn't work or requires additional setup](https://github.com/kubernetes/kubernetes/issues/23920) - We could use a `NodePort` service ... 
but that requires [changing the `--service-node-port-range` flag in the API server](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/) - Last resort: the `hostNetwork` mode .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Without `hostNetwork` - Normally, each pod gets its own *network namespace* (sometimes called sandbox or network sandbox) - An IP address is assigned to the pod - This IP address is routed/connected to the cluster network - All containers of that pod are sharing that network namespace (and therefore using the same IP address) .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## With `hostNetwork: true` - No network namespace gets created - The pod is using the network namespace of the host - It "sees" (and can use) the interfaces (and IP addresses) of the host - The pod can receive outside traffic directly, on any port - Downside: with most network plugins, network policies won't work for that pod - most network policies work at the IP address level - filtering that pod = filtering traffic from the node .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Running Traefik - The [Traefik documentation](https://docs.traefik.io/user-guide/kubernetes/#deploy-trfik-using-a-deployment-or-daemonset) tells us to pick between Deployment and Daemon Set - We are going to use a Daemon Set so that each node can accept connections - We will do two minor changes to the [YAML provided by Traefik](https://github.com/containous/traefik/blob/v1.7/examples/k8s/traefik-ds.yaml): - enable `hostNetwork` - add a *toleration* so that Traefik also runs on `node1` .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Taints and tolerations - A *taint* is an attribute added to a node - It prevents pods from running on the node - ... Unless they have a matching *toleration* - When deploying with `kubeadm`: - a taint is placed on the node dedicated to the control plane - the pods running the control plane have a matching toleration .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- class: extra-details ## Checking taints on our nodes .exercise[ - Check our nodes specs: ```bash kubectl get node node1 -o json | jq .spec kubectl get node node2 -o json | jq .spec ``` ] We should see a result only for `node1` (the one with the control plane): ```json "taints": [ { "effect": "NoSchedule", "key": "node-role.kubernetes.io/master" } ] ``` .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- class: extra-details ## Understanding a taint - The `key` can be interpreted as: - a reservation for a special set of pods (here, this means "this node is reserved for the control plane") - an error condition on the node (for instance: "disk full," do not start new pods here!) 
- The `effect` can be: - `NoSchedule` (don't run new pods here) - `PreferNoSchedule` (try not to run new pods here) - `NoExecute` (don't run new pods and evict running pods) .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- class: extra-details ## Checking tolerations on the control plane .exercise[ - Check tolerations for CoreDNS: ```bash kubectl -n kube-system get deployments coredns -o json | jq .spec.template.spec.tolerations ``` ] The result should include: ```json { "effect": "NoSchedule", "key": "node-role.kubernetes.io/master" } ``` It means: "bypass the exact taint that we saw earlier on `node1`." .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- class: extra-details ## Special tolerations .exercise[ - Check tolerations on `kube-proxy`: ```bash kubectl -n kube-system get ds kube-proxy -o json | jq .spec.template.spec.tolerations ``` ] The result should include: ```json { "operator": "Exists" } ``` This one is a special case that means "ignore all taints and run anyway." .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Running Traefik on our cluster - We provide a YAML file (`k8s/traefik.yaml`) which is essentially the sum of: - [Traefik's Daemon Set resources](https://github.com/containous/traefik/blob/v1.7/examples/k8s/traefik-ds.yaml) (patched with `hostNetwork` and tolerations) - [Traefik's RBAC rules](https://github.com/containous/traefik/blob/v1.7/examples/k8s/traefik-rbac.yaml) allowing it to watch necessary API objects .exercise[ - Apply the YAML: ```bash kubectl apply -f ~/container.training/k8s/traefik.yaml ``` ] .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Checking that Traefik runs correctly - If Traefik started correctly, we now have a web server listening on each node .exercise[ - Check that Traefik is serving 80/tcp: ```bash curl localhost ``` ] We should get a `404 page not found` error. This is normal: we haven't provided any ingress rule yet. .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Setting up DNS - To make our lives easier, we will use [nip.io](http://nip.io) - Check out `http://cheddar.A.B.C.D.nip.io` (replacing A.B.C.D with the IP address of `node1`) - We should get the same `404 page not found` error (meaning that our DNS is "set up properly", so to speak!) 
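For reference, the two changes mentioned earlier (enabling `hostNetwork` and adding a toleration for `node1`) land in the Daemon Set's pod template, roughly like this (a sketch, not the full `k8s/traefik.yaml`; the image tag is an assumption based on the upstream v1.7 example):

```yaml
spec:
  template:
    spec:
      hostNetwork: true            # use the node's network namespace (listen on port 80 directly)
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule         # matches the taint we saw on node1
      containers:
      - name: traefik
        image: traefik:v1.7        # assumption: tag matching the upstream example manifests
        ports:
        - containerPort: 80        # HTTP traffic
        - containerPort: 8080      # Traefik's own dashboard (see next slides)
```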
.debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Traefik web UI - Traefik provides a web dashboard - With the current install method, it's listening on port 8080 .exercise[ - Go to `http://node1:8080` (replacing `node1` with its IP address) ] .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Setting up host-based routing ingress rules - We are going to use `errm/cheese` images (there are [3 tags available](https://hub.docker.com/r/errm/cheese/tags/): wensleydale, cheddar, stilton) - These images contain a simple static HTTP server sending a picture of cheese - We will run 3 deployments (one for each cheese) - We will create 3 services (one for each deployment) - Then we will create 3 ingress rules (one for each service) - We will route `.A.B.C.D.nip.io` to the corresponding deployment .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Running cheesy web servers .exercise[ - Run all three deployments: ```bash kubectl create deployment cheddar --image=errm/cheese:cheddar kubectl create deployment stilton --image=errm/cheese:stilton kubectl create deployment wensleydale --image=errm/cheese:wensleydale ``` - Create a service for each of them: ```bash kubectl expose deployment cheddar --port=80 kubectl expose deployment stilton --port=80 kubectl expose deployment wensleydale --port=80 ``` ] .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## What does an ingress resource look like? Here is a minimal host-based ingress resource: ```yaml apiVersion: extensions/v1beta1 kind: Ingress metadata: name: cheddar spec: rules: - host: cheddar.`A.B.C.D`.nip.io http: paths: - path: / backend: serviceName: cheddar servicePort: 80 ``` (It is in `k8s/ingress.yaml`.) .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Creating our first ingress resources .exercise[ - Edit the file `~/container.training/k8s/ingress.yaml` - Replace A.B.C.D with the IP address of `node1` - Apply the file - Open http://cheddar.A.B.C.D.nip.io ] (An image of a piece of cheese should show up.) .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Creating the other ingress resources .exercise[ - Edit the file `~/container.training/k8s/ingress.yaml` - Replace `cheddar` with `stilton` (in `name`, `host`, `serviceName`) - Apply the file - Check that `stilton.A.B.C.D.nip.io` works correctly - Repeat for `wensleydale` ] .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Using multiple ingress controllers - You can have multiple ingress controllers active simultaneously (e.g. Traefik and NGINX) - You can even have multiple instances of the same controller (e.g. 
one for internal, another for external traffic) - The `kubernetes.io/ingress.class` annotation can be used to tell which one to use - It's OK if multiple ingress controllers configure the same resource (it just means that the service will be accessible through multiple paths) .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Ingress: the good - The traffic flows directly from the ingress load balancer to the backends - it doesn't need to go through the `ClusterIP` - in fact, we don't even need a `ClusterIP` (we can use a headless service) - The load balancer can be outside of Kubernetes (as long as it has access to the cluster subnet) - This allows the use of external (hardware, physical machines...) load balancers - Annotations can encode special features (rate-limiting, A/B testing, session stickiness, etc.) .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Ingress: the bad - Aforementioned "special features" are not standardized yet - Some controllers will support them; some won't - Even relatively common features (stripping a path prefix) can differ: - [traefik.ingress.kubernetes.io/rule-type: PathPrefixStrip](https://docs.traefik.io/user-guide/kubernetes/#path-based-routing) - [ingress.kubernetes.io/rewrite-target: /](https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/rewrite) - This should eventually stabilize (remember that ingresses are currently `apiVersion: extensions/v1beta1`) .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- class: pic .interstitial[] --- name: toc-exercise--creating-an-ingress class: title Exercise β creating an Ingress .nav[ [Previous section](#toc-exposing-http-services-with-ingress-resources) | [Back to table of contents](#toc-chapter-10) | [Next section](#toc-setting-up-kubernetes) ] .debug[(automatically generated title slide)] --- # Exercise β creating an Ingress Add an Ingress resource for the wordsmith app. .debug[[three.yml](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/three.yml)] --- class: pic .interstitial[] --- name: toc-setting-up-kubernetes class: title Setting up Kubernetes .nav[ [Previous section](#toc-exercise--creating-an-ingress) | [Back to table of contents](#toc-chapter-11) | [Next section](#toc-the-kubernetes-dashboard) ] .debug[(automatically generated title slide)] --- # Setting up Kubernetes - How did we set up these Kubernetes clusters that we're using? -- - We used `kubeadm` on freshly installed VM instances running Ubuntu LTS 1. Install Docker 2. Install Kubernetes packages 3. Run `kubeadm init` on the first node (it deploys the control plane on that node) 4. Set up Weave (the overlay network) (that step is just one `kubectl apply` command; discussed later) 5. Run `kubeadm join` on the other nodes (with the token produced by `kubeadm init`) 6. Copy the configuration file generated by `kubeadm init` - Check the [prepare VMs README](https://github.com/jpetazzo/container.training/blob/master/prepare-vms/README.md) for more details .debug[[k8s/setup-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/setup-k8s.md)] --- ## `kubeadm` drawbacks - Doesn't set up Docker or any other container engine - Doesn't set up the overlay network - Doesn't set up multi-master (no high availability) -- (At least ... not yet! 
Though it's [experimental in 1.12](https://kubernetes.io/docs/setup/independent/high-availability/).) -- - "It's still twice as many steps as setting up a Swarm cluster" -- Jérôme .debug[[k8s/setup-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/setup-k8s.md)] --- ## Other deployment options - [AKS](https://azure.microsoft.com/services/kubernetes-service/): managed Kubernetes on Azure - [GKE](https://cloud.google.com/kubernetes-engine/): managed Kubernetes on Google Cloud - [EKS](https://aws.amazon.com/eks/), [eksctl](https://eksctl.io/): managed Kubernetes on AWS - [kops](https://github.com/kubernetes/kops): customizable deployments on AWS, Digital Ocean, GCE (beta), vSphere (alpha) - [minikube](https://kubernetes.io/docs/setup/minikube/), [kubespawn](https://github.com/kinvolk/kube-spawn), [Docker Desktop](https://docs.docker.com/docker-for-mac/kubernetes/): for local development - [kubicorn](https://github.com/kubicorn/kubicorn), the [Cluster API](https://blogs.vmware.com/cloudnative/2019/03/14/what-and-why-of-cluster-api/): deploy your clusters declaratively, "the Kubernetes way" .debug[[k8s/setup-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/setup-k8s.md)] --- ## Even more deployment options - If you like Ansible: [kubespray](https://github.com/kubernetes-incubator/kubespray) - If you like Terraform: [typhoon](https://github.com/poseidon/typhoon) - If you like Terraform and Puppet: [tarmak](https://github.com/jetstack/tarmak) - You can also learn how to install every component manually, with the excellent tutorial [Kubernetes The Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way) *Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.* - There are also many commercial options available! - For a longer list, check the Kubernetes documentation: it has a great guide to [pick the right solution](https://kubernetes.io/docs/setup/#production-environment) to set up Kubernetes. .debug[[k8s/setup-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/setup-k8s.md)] --- class: pic .interstitial[] --- name: toc-the-kubernetes-dashboard class: title The Kubernetes dashboard .nav[ [Previous section](#toc-setting-up-kubernetes) | [Back to table of contents](#toc-chapter-11) | [Next section](#toc-security-implications-of-kubectl-apply) ] .debug[(automatically generated title slide)] --- # The Kubernetes dashboard - Kubernetes resources can also be viewed with a web dashboard - That dashboard is usually exposed over HTTPS (this requires obtaining a proper TLS certificate) - Dashboard users need to authenticate - We are going to take a *dangerous* shortcut .debug[[k8s/dashboard.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/dashboard.md)] --- ## The insecure method - We could (and should) use [Let's Encrypt](https://letsencrypt.org/) ... - ... but we don't want to deal with TLS certificates - We could (and should) learn how authentication and authorization work ... - ... but we will use a guest account with admin access instead .footnote[.warning[Yes, this will open our cluster to all kinds of shenanigans.
Don't do this at home.]] .debug[[k8s/dashboard.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/dashboard.md)] --- ## Running a very insecure dashboard - We are going to deploy that dashboard with *one single command* - This command will create all the necessary resources (the dashboard itself, the HTTP wrapper, the admin/guest account) - All these resources are defined in a YAML file - All we have to do is load that YAML file with with `kubectl apply -f` .exercise[ - Create all the dashboard resources, with the following command: ```bash kubectl apply -f ~/container.training/k8s/insecure-dashboard.yaml ``` ] .debug[[k8s/dashboard.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/dashboard.md)] --- ## Connecting to the dashboard .exercise[ - Check which port the dashboard is on: ```bash kubectl get svc dashboard ``` ] You'll want the `3xxxx` port. .exercise[ - Connect to http://oneofournodes:3xxxx/ ] The dashboard will then ask you which authentication you want to use. .debug[[k8s/dashboard.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/dashboard.md)] --- ## Dashboard authentication - We have three authentication options at this point: - token (associated with a role that has appropriate permissions) - kubeconfig (e.g. using the `~/.kube/config` file from `node1`) - "skip" (use the dashboard "service account") - Let's use "skip": we're logged in! -- .warning[By the way, we just added a backdoor to our Kubernetes cluster!] .debug[[k8s/dashboard.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/dashboard.md)] --- ## Running the Kubernetes dashboard securely - The steps that we just showed you are *for educational purposes only!* - If you do that on your production cluster, people [can and will abuse it](https://redlock.io/blog/cryptojacking-tesla) - For an in-depth discussion about securing the dashboard, check [this excellent post on Heptio's blog](https://blog.heptio.com/on-securing-the-kubernetes-dashboard-16b09b1b7aca) .debug[[k8s/dashboard.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/dashboard.md)] --- class: pic .interstitial[] --- name: toc-security-implications-of-kubectl-apply class: title Security implications of `kubectl apply` .nav[ [Previous section](#toc-the-kubernetes-dashboard) | [Back to table of contents](#toc-chapter-11) | [Next section](#toc-volumes) ] .debug[(automatically generated title slide)] --- # Security implications of `kubectl apply` - When we do `kubectl apply -f `, we create arbitrary resources - Resources can be evil; imagine a `deployment` that ... 
-- - starts bitcoin miners on the whole cluster -- - hides in a non-default namespace -- - bind-mounts our nodes' filesystem -- - inserts SSH keys in the root account (on the node) -- - encrypts our data and ransoms it -- - ⚠️⚠️⚠️ .debug[[k8s/dashboard.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/dashboard.md)] --- ## `kubectl apply` is the new `curl | sh` - `curl | sh` is convenient - It's safe if you use HTTPS URLs from trusted sources -- - `kubectl apply -f` is convenient - It's safe if you use HTTPS URLs from trusted sources - Example: the official setup instructions for most pod networks -- - It introduces new failure modes (for instance, if you try to apply YAML from a link that's no longer valid) .debug[[k8s/dashboard.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/dashboard.md)] --- class: pic .interstitial[] --- name: toc-volumes class: title Volumes .nav[ [Previous section](#toc-security-implications-of-kubectl-apply) | [Back to table of contents](#toc-chapter-11) | [Next section](#toc-managing-configuration) ] .debug[(automatically generated title slide)] --- # Volumes - Volumes are special directories that are mounted in containers - Volumes can have many different purposes: - share files and directories between containers running on the same machine - share files and directories between containers and their host - centralize configuration information in Kubernetes and expose it to containers - manage credentials and secrets and expose them securely to containers - store persistent data for stateful services - access storage systems (like Ceph, EBS, NFS, Portworx, and many others) .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/volumes.md)] --- class: extra-details ## Kubernetes volumes vs. Docker volumes - Kubernetes and Docker volumes are very similar (the [Kubernetes documentation](https://kubernetes.io/docs/concepts/storage/volumes/) says otherwise ... but it refers to Docker 1.7, which was released in 2015!) - Docker volumes allow us to share data between containers running on the same host - Kubernetes volumes allow us to share data between containers in the same pod - Both Docker and Kubernetes volumes enable access to storage systems - Kubernetes volumes are also used to expose configuration and secrets - Docker has specific concepts for configuration and secrets (but under the hood, the technical implementation is similar) - If you're not familiar with Docker volumes, you can safely ignore this slide! .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/volumes.md)] --- ## Volumes ≠ Persistent Volumes - Volumes and Persistent Volumes are related, but very different! - *Volumes*: - appear in Pod specifications (see next slide) - do not exist as API resources (**cannot** do `kubectl get volumes`) - *Persistent Volumes*: - are API resources (**can** do `kubectl get persistentvolumes`) - correspond to concrete volumes (e.g. on a SAN, EBS, etc.)
- cannot be associated with a Pod directly; but through a Persistent Volume Claim - won't be discussed further in this section .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/volumes.md)] --- ## A simple volume example ```yaml apiVersion: v1 kind: Pod metadata: name: nginx-with-volume spec: volumes: - name: www containers: - name: nginx image: nginx volumeMounts: - name: www mountPath: /usr/share/nginx/html/ ``` .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/volumes.md)] --- ## A simple volume example, explained - We define a standalone `Pod` named `nginx-with-volume` - In that pod, there is a volume named `www` - No type is specified, so it will default to `emptyDir` (as the name implies, it will be initialized as an empty directory at pod creation) - In that pod, there is also a container named `nginx` - That container mounts the volume `www` to path `/usr/share/nginx/html/` .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/volumes.md)] --- ## A volume shared between two containers .small[ ```yaml apiVersion: v1 kind: Pod metadata: name: nginx-with-volume spec: volumes: - name: www containers: - name: nginx image: nginx volumeMounts: - name: www mountPath: /usr/share/nginx/html/ - name: git image: alpine command: [ "sh", "-c", "apk add --no-cache git && git clone https://github.com/octocat/Spoon-Knife /www" ] volumeMounts: - name: www mountPath: /www/ restartPolicy: OnFailure ``` ] .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/volumes.md)] --- ## Sharing a volume, explained - We added another container to the pod - That container mounts the `www` volume on a different path (`/www`) - It uses the `alpine` image - When started, it installs `git` and clones the `octocat/Spoon-Knife` repository (that repository contains a tiny HTML website) - As a result, NGINX now serves this website .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/volumes.md)] --- ## Sharing a volume, in action - Let's try it! .exercise[ - Create the pod by applying the YAML file: ```bash kubectl apply -f ~/container.training/k8s/nginx-with-volume.yaml ``` - Check the IP address that was allocated to our pod: ```bash kubectl get pod nginx-with-volume -o wide IP=$(kubectl get pod nginx-with-volume -o json | jq -r .status.podIP) ``` - Access the web server: ```bash curl $IP ``` ] .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/volumes.md)] --- ## The devil is in the details - The default `restartPolicy` is `Always` - This would cause our `git` container to run again ... and again ... 
and again (with an exponential back-off delay, as explained [in the documentation](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy)) - That's why we specified `restartPolicy: OnFailure` - There is a short period of time during which the website is not available (because the `git` container hasn't done its job yet) - This could be avoided by using [Init Containers](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) (we will see a live example in a few sections) .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/volumes.md)] --- ## Volume lifecycle - The lifecycle of a volume is linked to the pod's lifecycle - This means that a volume is created when the pod is created - This is mostly relevant for `emptyDir` volumes (other volumes, like remote storage, are not "created" but rather "attached" ) - A volume survives across container restarts - A volume is destroyed (or, for remote storage, detached) when the pod is destroyed .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/volumes.md)] --- class: pic .interstitial[] --- name: toc-managing-configuration class: title Managing configuration .nav[ [Previous section](#toc-volumes) | [Back to table of contents](#toc-chapter-11) | [Next section](#toc-stateful-sets) ] .debug[(automatically generated title slide)] --- # Managing configuration - Some applications need to be configured (obviously!) - There are many ways for our code to pick up configuration: - command-line arguments - environment variables - configuration files - configuration servers (getting configuration from a database, an API...) - ... and more (because programmers can be very creative!) - How can we do these things with containers and Kubernetes? .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Passing configuration to containers - There are many ways to pass configuration to code running in a container: - baking it into a custom image - command-line arguments - environment variables - injecting configuration files - exposing it over the Kubernetes API - configuration servers - Let's review these different strategies! .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Baking custom images - Put the configuration in the image (it can be in a configuration file, but also `ENV` or `CMD` actions) - It's easy! It's simple! - Unfortunately, it also has downsides: - multiplication of images - different images for dev, staging, prod ... - minor reconfigurations require a whole build/push/pull cycle - Avoid doing it unless you don't have the time to figure out other options .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Command-line arguments - Pass options to `args` array in the container specification - Example ([source](https://github.com/coreos/pods/blob/master/kubernetes.yaml#L29)): ```yaml args: - "--data-dir=/var/lib/etcd" - "--advertise-client-urls=http://127.0.0.1:2379" - "--listen-client-urls=http://127.0.0.1:2379" - "--listen-peer-urls=http://127.0.0.1:2380" - "--name=etcd" ``` - The options can be passed directly to the program that we run ... ... or to a wrapper script that will use them to e.g. 
generate a config file .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Command-line arguments, pros & cons - Works great when options are passed directly to the running program (otherwise, a wrapper script can work around the issue) - Works great when there aren't too many parameters (to avoid a 20-lines `args` array) - Requires documentation and/or understanding of the underlying program ("which parameters and flags do I need, again?") - Well-suited for mandatory parameters (without default values) - Not ideal when we need to pass a real configuration file anyway .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Environment variables - Pass options through the `env` map in the container specification - Example: ```yaml env: - name: ADMIN_PORT value: "8080" - name: ADMIN_AUTH value: Basic - name: ADMIN_CRED value: "admin:0pensesame!" ``` .warning[`value` must be a string! Make sure that numbers and fancy strings are quoted.] π€ Why this weird `{name: xxx, value: yyy}` scheme? It will be revealed soon! .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## The downward API - In the previous example, environment variables have fixed values - We can also use a mechanism called the *downward API* - The downward API allows exposing pod or container information - either through special files (we won't show that for now) - or through environment variables - The value of these environment variables is computed when the container is started - Remember: environment variables won't (can't) change after container start - Let's see a few concrete examples! 
.debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Exposing the pod's namespace ```yaml - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace ``` - Useful to generate FQDN of services (in some contexts, a short name is not enough) - For instance, the two commands should be equivalent: ``` curl api-backend curl api-backend.$MY_POD_NAMESPACE.svc.cluster.local ``` .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Exposing the pod's IP address ```yaml - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP ``` - Useful if we need to know our IP address (we could also read it from `eth0`, but this is more solid) .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Exposing the container's resource limits ```yaml - name: MY_MEM_LIMIT valueFrom: resourceFieldRef: containerName: test-container resource: limits.memory ``` - Useful for runtimes where memory is garbage collected - Example: the JVM (the memory available to the JVM should be set with the `-Xmx ` flag) - Best practice: set a memory limit, and pass it to the runtime (see [this blog post](https://very-serio.us/2017/12/05/running-jvms-in-kubernetes/) for a detailed example) .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## More about the downward API - [This documentation page](https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/) tells more about these environment variables - And [this one](https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) explains the other way to use the downward API (through files that get created in the container filesystem) .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Environment variables, pros and cons - Works great when the running program expects these variables - Works great for optional parameters with reasonable defaults (since the container image can provide these defaults) - Sort of auto-documented (we can see which environment variables are defined in the image, and their values) - Can be (ab)used with longer values ... - ... You *can* put an entire Tomcat configuration file in an environment ... - ... But *should* you? (Do it if you really need to, we're not judging! But we'll see better ways.) .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Injecting configuration files - Sometimes, there is no way around it: we need to inject a full config file - Kubernetes provides a mechanism for that purpose: `configmaps` - A configmap is a Kubernetes resource that exists in a namespace - Conceptually, it's a key/value map (values are arbitrary strings) - We can think about them in (at least) two different ways: - as holding entire configuration file(s) - as holding individual configuration parameters *Note: to hold sensitive information, we can use "Secrets", which are another type of resource behaving very much like configmaps. 
We'll cover them just after!* .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Configmaps storing entire files - In this case, each key/value pair corresponds to a configuration file - Key = name of the file - Value = content of the file - There can be one key/value pair, or as many as necessary (for complex apps with multiple configuration files) - Examples: ``` # Create a configmap with a single key, "app.conf" kubectl create configmap my-app-config --from-file=app.conf # Create a configmap with a single key, "app.conf" but another file kubectl create configmap my-app-config --from-file=app.conf=app-prod.conf # Create a configmap with multiple keys (one per file in the config.d directory) kubectl create configmap my-app-config --from-file=config.d/ ``` .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Configmaps storing individual parameters - In this case, each key/value pair corresponds to a parameter - Key = name of the parameter - Value = value of the parameter - Examples: ``` # Create a configmap with two keys kubectl create cm my-app-config \ --from-literal=foreground=red \ --from-literal=background=blue # Create a configmap from a file containing key=val pairs kubectl create cm my-app-config \ --from-env-file=app.conf ``` .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Exposing configmaps to containers - Configmaps can be exposed as plain files in the filesystem of a container - this is achieved by declaring a volume and mounting it in the container - this is particularly effective for configmaps containing whole files - Configmaps can be exposed as environment variables in the container - this is achieved with the downward API - this is particularly effective for configmaps containing individual parameters - Let's see how to do both! 
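Whichever creation method we use, the result is a plain API object; `kubectl get configmap my-app-config -o yaml` would show something like this (the keys and values below are hypothetical, reusing the names from the examples above):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
data:
  foreground: red      # an individual parameter (key = parameter name, value = its value)
  background: blue
  app.conf: |          # a whole file (key = file name, value = file content)
    loglevel = info
    listen = 0.0.0.0:8080
```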
.debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Passing a configuration file with a configmap - We will start a load balancer powered by HAProxy - We will use the [official `haproxy` image](https://hub.docker.com/_/haproxy/) - It expects to find its configuration in `/usr/local/etc/haproxy/haproxy.cfg` - We will provide a simple HAproxy configuration, `k8s/haproxy.cfg` - It listens on port 80, and load balances connections between IBM and Google .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Creating the configmap .exercise[ - Go to the `k8s` directory in the repository: ```bash cd ~/container.training/k8s ``` - Create a configmap named `haproxy` and holding the configuration file: ```bash kubectl create configmap haproxy --from-file=haproxy.cfg ``` - Check what our configmap looks like: ```bash kubectl get configmap haproxy -o yaml ``` ] .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Using the configmap We are going to use the following pod definition: ```yaml apiVersion: v1 kind: Pod metadata: name: haproxy spec: volumes: - name: config configMap: name: haproxy containers: - name: haproxy image: haproxy volumeMounts: - name: config mountPath: /usr/local/etc/haproxy/ ``` .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Using the configmap - The resource definition from the previous slide is in `k8s/haproxy.yaml` .exercise[ - Create the HAProxy pod: ```bash kubectl apply -f ~/container.training/k8s/haproxy.yaml ``` - Check the IP address allocated to the pod: ```bash kubectl get pod haproxy -o wide IP=$(kubectl get pod haproxy -o json | jq -r .status.podIP) ``` ] .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Testing our load balancer - The load balancer will send: - half of the connections to Google - the other half to IBM .exercise[ - Access the load balancer a few times: ```bash curl $IP curl $IP curl $IP ``` ] We should see connections served by Google, and others served by IBM. (Each server sends us a redirect page. Look at the URL that they send us to!) 
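Note that mounting the configmap on `/usr/local/etc/haproxy/` shadows that entire directory. If we ever need to inject a single file while keeping the rest of the image's directory, we can mount one key with `subPath` (a sketch reusing the same configmap and container names as above):

```yaml
    volumeMounts:
    - name: config
      mountPath: /usr/local/etc/haproxy/haproxy.cfg
      subPath: haproxy.cfg   # mount only this key, as a single file
```

One trade-off to keep in mind: files mounted through `subPath` are not refreshed when the configmap is later updated.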
.debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Exposing configmaps with the downward API - We are going to run a Docker registry on a custom port - By default, the registry listens on port 5000 - This can be changed by setting environment variable `REGISTRY_HTTP_ADDR` - We are going to store the port number in a configmap - Then we will expose that configmap as a container environment variable .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Creating the configmap .exercise[ - Our configmap will have a single key, `http.addr`: ```bash kubectl create configmap registry --from-literal=http.addr=0.0.0.0:80 ``` - Check our configmap: ```bash kubectl get configmap registry -o yaml ``` ] .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Using the configmap We are going to use the following pod definition: ```yaml apiVersion: v1 kind: Pod metadata: name: registry spec: containers: - name: registry image: registry env: - name: REGISTRY_HTTP_ADDR valueFrom: configMapKeyRef: name: registry key: http.addr ``` .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Using the configmap - The resource definition from the previous slide is in `k8s/registry.yaml` .exercise[ - Create the registry pod: ```bash kubectl apply -f ~/container.training/k8s/registry.yaml ``` - Check the IP address allocated to the pod: ```bash kubectl get pod registry -o wide IP=$(kubectl get pod registry -o json | jq -r .status.podIP) ``` - Confirm that the registry is available on port 80: ```bash curl $IP/v2/_catalog ``` ] .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Passwords, tokens, sensitive information - For sensitive information, there is another special resource: *Secrets* - Secrets and Configmaps work almost the same way (we'll expose the differences on the next slide) - The *intent* is different, though: *"You should use secrets for things which are actually secret like API keys, credentials, etc., and use config map for not-secret configuration data."* *"In the future there will likely be some differentiators for secrets like rotation or support for backing the secret API w/ HSMs, etc."* (Source: [the author of both features](https://stackoverflow.com/a/36925553/580281 )) .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Differences between configmaps and secrets - Secrets are base64-encoded when shown with `kubectl get secrets -o yaml` - keep in mind that this is just *encoding*, not *encryption* - it is very easy to [automatically extract and decode secrets](https://medium.com/@mveritym/decoding-kubernetes-secrets-60deed7a96a3) - [Secrets can be encrypted at rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/) - With RBAC, we can authorize a user to access configmaps, but not secrets (since they are two different kinds of resources) .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- class: pic .interstitial[] --- name: toc-stateful-sets class: title Stateful sets .nav[ [Previous section](#toc-managing-configuration) | 
[Back to table of contents](#toc-chapter-12) | [Next section](#toc-running-a-consul-cluster) ] .debug[(automatically generated title slide)] --- # Stateful sets - Stateful sets are a type of resource in the Kubernetes API (like pods, deployments, services...) - They offer mechanisms to deploy scaled stateful applications - At first glance, they look like *deployments*: - a stateful set defines a pod spec and a number of replicas *R* - it will make sure that *R* copies of the pod are running - that number can be changed while the stateful set is running - updating the pod spec will cause a rolling update to happen - But they also have some significant differences .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Stateful sets unique features - Pods in a stateful set are numbered (from 0 to *R-1*) and ordered - They are started and updated in order (from 0 to *R-1*) - A pod is started (or updated) only when the previous one is ready - They are stopped in reverse order (from *R-1* to 0) - Each pod knows its identity (i.e. which number it is in the set) - Each pod can discover the IP address of the others easily - The pods can persist data on attached volumes 🤔 Wait a minute ... Can't we already attach volumes to pods and deployments? .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Revisiting volumes - [Volumes](https://kubernetes.io/docs/concepts/storage/volumes/) are used for many purposes: - sharing data between containers in a pod - exposing configuration information and secrets to containers - accessing storage systems - Let's see examples of the latter usage .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Volume types - There are many [types of volumes](https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes) available: - public cloud storage (GCEPersistentDisk, AWSElasticBlockStore, AzureDisk...) - private cloud storage (Cinder, VsphereVolume...) - traditional storage systems (NFS, iSCSI, FC...) - distributed storage (Ceph, Glusterfs, Portworx...) - Using a persistent volume requires: - creating the volume out-of-band (outside of the Kubernetes API) - referencing the volume in the pod description, with all its parameters .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Using a cloud volume Here is a pod definition using an AWS EBS volume (that has to be created first): ```yaml apiVersion: v1 kind: Pod metadata: name: pod-using-my-ebs-volume spec: containers: - image: ... name: container-using-my-ebs-volume volumeMounts: - mountPath: /my-ebs name: my-ebs-volume volumes: - name: my-ebs-volume awsElasticBlockStore: volumeID: vol-049df61146c4d7901 fsType: ext4 ``` .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Using an NFS volume Here is another example using a volume on an NFS server: ```yaml apiVersion: v1 kind: Pod metadata: name: pod-using-my-nfs-volume spec: containers: - image: ...
name: container-using-my-nfs-volume volumeMounts: - mountPath: /my-nfs name: my-nfs-volume volumes: - name: my-nfs-volume nfs: server: 192.168.0.55 path: "/exports/assets" ``` .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Shortcomings of volumes - Their lifecycle (creation, deletion...) is managed outside of the Kubernetes API (we can't just use `kubectl apply/create/delete/...` to manage them) - If a Deployment uses a volume, all replicas end up using the same volume - That volume must then support concurrent access - some volumes do (e.g. NFS servers support multiple read/write access) - some volumes support concurrent reads - some volumes support concurrent access for colocated pods - What we really need is a way for each replica to have its own volume .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Persistent Volume Claims - To abstract the different types of storage, a pod can use a special volume type - This type is a *Persistent Volume Claim* - A Persistent Volume Claim (PVC) is a resource type (visible with `kubectl get persistentvolumeclaims` or `kubectl get pvc`) - A PVC is not a volume; it is a *request for a volume* .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Persistent Volume Claims in practice - Using a Persistent Volume Claim is a two-step process: - creating the claim - using the claim in a pod (as if it were any other kind of volume) - A PVC starts by being Unbound (without an associated volume) - Once it is associated with a Persistent Volume, it becomes Bound - A Pod referring an unbound PVC will not start (but as soon as the PVC is bound, the Pod can start) .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Binding PV and PVC - A Kubernetes controller continuously watches PV and PVC objects - When it notices an unbound PVC, it tries to find a satisfactory PV ("satisfactory" in terms of size and other characteristics; see next slide) - If no PV fits the PVC, a PV can be created dynamically (this requires to configure a *dynamic provisioner*, more on that later) - Otherwise, the PVC remains unbound indefinitely (until we manually create a PV or setup dynamic provisioning) .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## What's in a Persistent Volume Claim? - At the very least, the claim should indicate: - the size of the volume (e.g. "5 GiB") - the access mode (e.g. "read-write by a single pod") - Optionally, it can also specify a Storage Class - The Storage Class indicates: - which storage system to use (e.g. Portworx, EBS...) - extra parameters for that storage system e.g.: "replicate the data 3 times, and use SSD media" .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## What's a Storage Class? - A Storage Class is yet another Kubernetes API resource (visible with e.g. `kubectl get storageclass` or `kubectl get sc`) - It indicates which *provisioner* to use (which controller will create the actual volume) - And arbitrary parameters for that provisioner (replication levels, type of disk ... anything relevant!) 
- Storage Classes are required if we want to use [dynamic provisioning](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/) (but we can also create volumes manually, and ignore Storage Classes) .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Defining a Persistent Volume Claim Here is a minimal PVC: ```yaml kind: PersistentVolumeClaim apiVersion: v1 metadata: name: my-claim spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi ``` .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Using a Persistent Volume Claim Here is a Pod definition like the ones shown earlier, but using a PVC: ```yaml apiVersion: v1 kind: Pod metadata: name: pod-using-a-claim spec: containers: - image: ... name: container-using-a-claim volumeMounts: - mountPath: /my-vol name: my-volume volumes: - name: my-volume persistentVolumeClaim: claimName: my-claim ``` .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Persistent Volume Claims and Stateful sets - The pods in a stateful set can define a `volumeClaimTemplate` - A `volumeClaimTemplate` will dynamically create one Persistent Volume Claim per pod - Each pod will therefore have its own volume - These volumes are numbered (like the pods) - When updating the stateful set (e.g. image upgrade), each pod keeps its volume - When pods get rescheduled (e.g. node failure), they keep their volume (this requires a storage system that is not node-local) - These volumes are not automatically deleted (when the stateful set is scaled down or deleted) .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Stateful set recap - A Stateful sets manages a number of identical pods (like a Deployment) - These pods are numbered, and started/upgraded/stopped in a specific order - These pods are aware of their number (e.g., #0 can decide to be the primary, and #1 can be secondary) - These pods can find the IP addresses of the other pods in the set (through a *headless service*) - These pods can each have their own persistent storage (Deployments cannot do that) .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- class: pic .interstitial[] --- name: toc-running-a-consul-cluster class: title Running a Consul cluster .nav[ [Previous section](#toc-stateful-sets) | [Back to table of contents](#toc-chapter-12) | [Next section](#toc-local-persistent-volumes) ] .debug[(automatically generated title slide)] --- # Running a Consul cluster - Here is a good use-case for Stateful sets! 
- We are going to deploy a Consul cluster with 3 nodes - Consul is a highly-available key/value store (like etcd or Zookeeper) - One easy way to bootstrap a cluster is to tell each node: - the addresses of other nodes - how many nodes are expected (to know when quorum is reached) .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Bootstrapping a Consul cluster *After reading the Consul documentation carefully (and/or asking around), we figure out the minimal command-line to run our Consul cluster.* ``` consul agent -data-dir=/consul/data -client=0.0.0.0 -server -ui \ -bootstrap-expect=3 \ -retry-join=`X.X.X.X` \ -retry-join=`Y.Y.Y.Y` ``` - Replace X.X.X.X and Y.Y.Y.Y with the addresses of other nodes - The same command-line can be used on all nodes (convenient!) .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Cloud Auto-join - Since version 1.4.0, Consul can use the Kubernetes API to find its peers - This is called [Cloud Auto-join] - Instead of passing an IP address, we need to pass a parameter like this: ``` consul agent -retry-join "provider=k8s label_selector=\"app=consul\"" ``` - Consul needs to be able to talk to the Kubernetes API - We can provide a `kubeconfig` file - If Consul runs in a pod, it will use the *service account* of the pod [Cloud Auto-join]: https://www.consul.io/docs/agent/cloud-auto-join.html#kubernetes-k8s- .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Setting up Cloud auto-join - We need to create a service account for Consul - We need to create a role that can `list` and `get` pods - We need to bind that role to the service account - And of course, we need to make sure that Consul pods use that service account .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Putting it all together - The file `k8s/consul.yaml` defines the required resources (service account, cluster role, cluster role binding, service, stateful set) - It has a few extra touches: - a `podAntiAffinity` prevents two pods from running on the same node - a `preStop` hook makes the pod leave the cluster when it is shut down gracefully This was inspired by this [excellent tutorial](https://github.com/kelseyhightower/consul-on-kubernetes) by Kelsey Hightower. Some features from the original tutorial (TLS authentication between nodes and encryption of gossip traffic) were removed for simplicity. .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Running our Consul cluster - We'll use the provided YAML file .exercise[ - Create the stateful set and associated service: ```bash kubectl apply -f ~/container.training/k8s/consul.yaml ``` - Check the logs as the pods come up one after another: ```bash stern consul ``` - Check the health of the cluster: ```bash kubectl exec consul-0 consul members ``` ] .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Caveats - We haven't used a `volumeClaimTemplate` here - That's because we don't have a storage provider yet (except if you're running this on your own and your cluster has one) - What happens if we lose a pod?
- a new pod gets rescheduled (with an empty state) - the new pod tries to connect to the two others - it will be accepted (after 1-2 minutes of instability) - and it will retrieve the data from the other pods .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Failure modes - What happens if we lose two pods? - manual repair will be required - we will need to instruct the remaining one to act solo - then rejoin new pods - What happens if we lose three pods? (aka all of them) - we lose all the data (ouch) - If we run Consul without persistent storage, backups are a good idea! .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- class: pic .interstitial[] --- name: toc-local-persistent-volumes class: title Local Persistent Volumes .nav[ [Previous section](#toc-running-a-consul-cluster) | [Back to table of contents](#toc-chapter-12) | [Next section](#toc-highly-available-persistent-volumes) ] .debug[(automatically generated title slide)] --- # Local Persistent Volumes - We want to run that Consul cluster *and* actually persist data - But we don't have a distributed storage system - We are going to use local volumes instead (similar conceptually to `hostPath` volumes) - We can use local volumes without installing extra plugins - However, they are tied to a node - If that node goes down, the volume becomes unavailable .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/local-persistent-volumes.md)] --- ## With or without dynamic provisioning - We will deploy a Consul cluster *with* persistence - That cluster's StatefulSet will create PVCs - These PVCs will remain unbound¹, until we create local volumes manually (we will basically do the job of the dynamic provisioner) - Then, we will see how to automate that with a dynamic provisioner .footnote[¹Unbound = without an associated Persistent Volume.] .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/local-persistent-volumes.md)] --- ## If we have a dynamic provisioner ... - The labs in this section assume that we *do not* have a dynamic provisioner - If we do have one, we need to disable it .exercise[ - Check if we have a dynamic provisioner: ```bash kubectl get storageclass ``` - If the output contains a line with `(default)`, run this command: ```bash kubectl annotate sc storageclass.kubernetes.io/is-default-class- --all ``` - Check again that it is no longer marked as `(default)` ] .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/local-persistent-volumes.md)] --- ## Work in a separate namespace - To avoid conflicts with existing resources, let's create and use a new namespace .exercise[ - Create a new namespace: ```bash kubectl create namespace orange ``` - Switch to that namespace: ```bash kns orange ``` ] .warning[Make sure to call that namespace `orange`: it is hardcoded in the YAML files.]
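If the `kns` helper is not available on your machine, plain `kubectl` can do the same thing (a sketch, assuming kubectl 1.12 or later):

```bash
# Make "orange" the default namespace for the current context:
kubectl config set-context --current --namespace=orange

# Confirm which namespace subsequent commands will target:
kubectl config view --minify | grep namespace:
```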
.debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/local-persistent-volumes.md)] --- ## Deploying Consul - We will use a slightly different YAML file - The only differences between that file and the previous one are: - `volumeClaimTemplate` defined in the Stateful Set spec - the corresponding `volumeMounts` in the Pod spec - the namespace `orange` used for discovery of Pods .exercise[ - Apply the persistent Consul YAML file: ```bash kubectl apply -f ~/container.training/k8s/persistent-consul.yaml ``` ] .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/local-persistent-volumes.md)] --- ## Observing the situation - Let's look at Persistent Volume Claims and Pods .exercise[ - Check that we now have an unbound Persistent Volume Claim: ```bash kubectl get pvc ``` - We don't have any Persistent Volume: ```bash kubectl get pv ``` - The Pod `consul-0` is not scheduled yet: ```bash kubectl get pods -o wide ``` ] *Hint: leave these commands running with `-w` in different windows.* .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/local-persistent-volumes.md)] --- ## Explanations - In a Stateful Set, the Pods are started one by one - `consul-1` won't be created until `consul-0` is running - `consul-0` has a dependency on an unbound Persistent Volume Claim - The scheduler won't schedule the Pod until the PVC is bound (because the PVC might be bound to a volume that is only available on a subset of nodes; for instance, EBS volumes are tied to an availability zone) .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/local-persistent-volumes.md)] --- ## Creating Persistent Volumes - Let's create 3 local directories (`/mnt/consul`) on node2, node3, node4 - Then create 3 Persistent Volumes corresponding to these directories .exercise[ - Create the local directories: ```bash for NODE in node2 node3 node4; do ssh $NODE sudo mkdir -p /mnt/consul done ``` - Create the PV objects: ```bash kubectl apply -f ~/container.training/k8s/volumes-for-consul.yaml ``` ] .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/local-persistent-volumes.md)] --- ## Check our Consul cluster - The PVs that we created will be automatically matched with the PVCs - Once a PVC is bound, its pod can start normally - Once the pod `consul-0` has started, `consul-1` can be created, etc. - Eventually, our Consul cluster is up, and backed by "persistent" volumes .exercise[ - Check that our Consul cluster indeed has 3 members: ```bash kubectl exec consul-0 consul members ``` ] .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/local-persistent-volumes.md)] --- ## Devil is in the details (1/2) - The size of the Persistent Volumes is bogus (it is used when matching PVs and PVCs together, but there is no actual quota or limit) .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/local-persistent-volumes.md)] --- ## Devil is in the details (2/2) - This specific example worked because we had exactly 1 free PV per node: - if we had created multiple PVs per node ... - we could have ended up with two PVCs bound to PVs on the same node ...
- which would have required two pods to be on the same node ... - which is forbidden by the anti-affinity constraints in the StatefulSet - To avoid that, we need to associate the PVs with a Storage Class that has: ```yaml volumeBindingMode: WaitForFirstConsumer ``` (this means that a PVC will be bound to a PV only after being used by a Pod) - See [this blog post](https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/) for more details .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/local-persistent-volumes.md)] --- ## Bulk provisioning - It's not practical to manually create directories and PVs for each app - We *could* pre-provision a number of PVs across our fleet - We could even automate that with a Daemon Set: - creating a number of directories on each node - creating the corresponding PV objects - We also need to recycle volumes - ... This can quickly get out of hand .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/local-persistent-volumes.md)] --- ## Dynamic provisioning - We could also write our own provisioner, which would: - watch the PVCs across all namespaces - when a PVC is created, create a corresponding PV on a node - Or we could use one of the dynamic provisioners for local persistent volumes (for instance the [Rancher local path provisioner](https://github.com/rancher/local-path-provisioner)) .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/local-persistent-volumes.md)] --- ## Strategies for local persistent volumes - Remember, when a node goes down, the volumes on that node become unavailable - High availability will require another layer of replication (like what we've just seen with Consul; or primary/secondary; etc) - Pre-provisioning PVs makes sense for machines with local storage (e.g. cloud instance storage; or storage directly attached to a physical machine) - Dynamic provisioning makes sense for a large number of applications (when we can't or won't dedicate a whole disk to a volume) - It's possible to mix both (using distinct Storage Classes) .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/local-persistent-volumes.md)] --- class: pic .interstitial[] --- name: toc-highly-available-persistent-volumes class: title Highly available Persistent Volumes .nav[ [Previous section](#toc-local-persistent-volumes) | [Back to table of contents](#toc-chapter-12) | [Next section](#toc-next-steps) ] .debug[(automatically generated title slide)] --- # Highly available Persistent Volumes - How can we achieve true durability? - How can we store data that would survive the loss of a node?
-- - We need to use Persistent Volumes backed by highly available storage systems - There are many ways to achieve that: - leveraging our cloud's storage APIs - using NAS/SAN systems or file servers - distributed storage systems -- - We are going to see one distributed storage system in action .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Our test scenario - We will set up a distributed storage system on our cluster - We will use it to deploy a SQL database (PostgreSQL) - We will insert some test data in the database - We will disrupt the node running the database - We will see how it recovers .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Portworx - Portworx is a *commercial* persistent storage solution for containers - It works with Kubernetes, but also Mesos, Swarm ... - It provides [hyper-converged](https://en.wikipedia.org/wiki/Hyper-converged_infrastructure) storage (=storage is provided by regular compute nodes) - We're going to use it here because it can be deployed on any Kubernetes cluster (it doesn't require any particular infrastructure) - We don't endorse or support Portworx in any particular way (but we appreciate that it's super easy to install!) .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## A useful reminder - We're installing Portworx because we need a storage system - If you are using AKS, EKS, GKE ... you already have a storage system (but you might want another one, e.g. to leverage local storage) - If you have set up Kubernetes yourself, there are other solutions available too - on premises, you can use a good old SAN/NAS - on a private cloud like OpenStack, you can use e.g. Cinder - everywhere, you can use other systems, e.g. Gluster, StorageOS .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Portworx requirements - Kubernetes cluster ✔️ - Optional key/value store (etcd or Consul) ❌ - At least one available block device ❌ .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## The key-value store - In the current version of Portworx (1.4), it is recommended to use etcd or Consul - But Portworx also has beta support for an embedded key/value store - For simplicity, we are going to use the latter option (but if we have deployed Consul or etcd, we can use that, too) .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## One available block device - Block device = disk or partition on a disk - We can see block devices with `lsblk` (or `cat /proc/partitions` if we're old school like that!)
- If we don't have a spare disk or partition, we can use a *loop device* - A loop device is a block device actually backed by a file - These are frequently used to mount ISO (CD/DVD) images or VM disk images .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Setting up a loop device - We are going to create a 10 GB (empty) file on each node - Then make a loop device from it, to be used by Portworx .exercise[ - Create a 10 GB file on each node: ```bash for N in $(seq 1 4); do ssh node$N sudo truncate --size 10G /portworx.blk; done ``` (If SSH asks to confirm host keys, enter `yes` each time.) - Associate the file with a loop device on each node: ```bash for N in $(seq 1 4); do ssh node$N sudo losetup /dev/loop4 /portworx.blk; done ``` ] .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Installing Portworx - To install Portworx, we need to go to https://install.portworx.com/ - This website will ask us a bunch of questions about our cluster - Then, it will generate a YAML file that we should apply to our cluster -- - Or, we can just apply that YAML file directly (it's in `k8s/portworx.yaml`) .exercise[ - Install Portworx: ```bash kubectl apply -f ~/container.training/k8s/portworx.yaml ``` ] .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- class: extra-details ## Generating a custom YAML file If you want to generate a YAML file tailored to your own needs, the easiest way is to use https://install.portworx.com/. FYI, this is how we obtained the YAML file used earlier: ``` KBVER=$(kubectl version -o json | jq -r .serverVersion.gitVersion) BLKDEV=/dev/loop4 curl "https://install.portworx.com/1.4/?kbver=$KBVER&b=true&s=$BLKDEV&c=px-workshop&stork=true&lh=true" ``` If you want to use an external key/value store, add one of the following: ``` &k=etcd://`XXX`:2379 &k=consul://`XXX`:8500 ``` ... where `XXX` is the name or address of your etcd or Consul server. .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Waiting for Portworx to be ready - The installation process will take a few minutes .exercise[ - Check out the logs: ```bash stern -n kube-system portworx ``` - Wait until it gets quiet (you should see `portworx service is healthy`, too) ] .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Dynamic provisioning of persistent volumes - We are going to run PostgreSQL in a Stateful set - The Stateful set will specify a `volumeClaimTemplate` - That `volumeClaimTemplate` will create Persistent Volume Claims - Kubernetes' [dynamic provisioning](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/) will satisfy these Persistent Volume Claims (by creating Persistent Volumes and binding them to the claims) - The Persistent Volumes are then available for the PostgreSQL pods .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Storage Classes - It's possible that multiple storage systems are available - Or, that a storage system offers multiple tiers of storage (SSD vs. magnetic; mirrored or not; etc.)
- We need to tell Kubernetes *which* system and tier to use - This is achieved by creating a Storage Class - A `volumeClaimTemplate` can indicate which Storage Class to use - It is also possible to mark a Storage Class as "default" (it will be used if a `volumeClaimTemplate` doesn't specify one) .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Our default Storage Class This is our Storage Class (in `k8s/storage-class.yaml`): ```yaml kind: StorageClass apiVersion: storage.k8s.io/v1beta1 metadata: name: portworx-replicated annotations: storageclass.kubernetes.io/is-default-class: "true" provisioner: kubernetes.io/portworx-volume parameters: repl: "2" priority_io: "high" ``` - It says "use Portworx to create volumes" - It tells Portworx to "keep 2 replicas of these volumes" - It marks the Storage Class as being the default one .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Creating our Storage Class - Let's apply that YAML file! .exercise[ - Create the Storage Class: ```bash kubectl apply -f ~/container.training/k8s/storage-class.yaml ``` - Check that it is now available: ```bash kubectl get sc ``` ] It should show as `portworx-replicated (default)`. .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Our Postgres Stateful set - The next slide shows `k8s/postgres.yaml` - It defines a Stateful set - With a `volumeClaimTemplate` requesting a 1 GB volume - That volume will be mounted to `/var/lib/postgresql/data` - There is another little detail: we enable the `stork` scheduler - The `stork` scheduler is optional (it's specific to Portworx) - It helps the Kubernetes scheduler to colocate the pod with its volume (see [this blog post](https://portworx.com/stork-storage-orchestration-kubernetes/) for more details about that) .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- .small[ ```yaml apiVersion: apps/v1 kind: StatefulSet metadata: name: postgres spec: selector: matchLabels: app: postgres serviceName: postgres template: metadata: labels: app: postgres spec: schedulerName: stork containers: - name: postgres image: postgres:10.5 volumeMounts: - mountPath: /var/lib/postgresql/data name: postgres volumeClaimTemplates: - metadata: name: postgres spec: accessModes: ["ReadWriteOnce"] resources: requests: storage: 1Gi ``` ] .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Creating the Stateful set - Before applying the YAML, watch what's going on with `kubectl get events -w` .exercise[ - Apply that YAML: ```bash kubectl apply -f ~/container.training/k8s/postgres.yaml ``` ] .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Testing our PostgreSQL pod - We will use `kubectl exec` to get a shell in the pod - Good to know: we need to use the `postgres` user in the pod .exercise[ - Get a shell in the pod, as the `postgres` user: ```bash kubectl exec -ti postgres-0 su postgres ``` - Check that default databases have been created correctly: ```bash psql -l ``` ] (This should show us 3 lines: postgres, template0, and template1.) 
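While we are in there, we can also confirm that dynamic provisioning did its job. The PVC created by the `volumeClaimTemplate` should be named `postgres-postgres-0` (template name + pod name) and should be bound to a Portworx-provisioned volume:

```bash
# Run this from node1 (outside of the pod):
kubectl get pvc,pv
```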
.debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Inserting data in PostgreSQL - We will create a database and populate it with `pgbench` .exercise[ - Create a database named `demo`: ```bash createdb demo ``` - Populate it with `pgbench`: ```bash pgbench -i -s 10 demo ``` ] - The `-i` flag means "create tables" - The `-s 10` flag means "create 10 x 100,000 rows" .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Checking how much data we have now - The `pgbench` tool inserts rows in table `pgbench_accounts` .exercise[ - Check that the `demo` base exists: ```bash psql -l ``` - Check how many rows we have in `pgbench_accounts`: ```bash psql demo -c "select count(*) from pgbench_accounts" ``` ] (We should see a count of 1,000,000 rows.) .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Find out which node is hosting the database - We can find that information with `kubectl get pods -o wide` .exercise[ - Check the node running the database: ```bash kubectl get pod postgres-0 -o wide ``` ] We are going to disrupt that node. -- By "disrupt" we mean: "disconnect it from the network". .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Disconnect the node - We will use `iptables` to block all traffic exiting the node (except SSH traffic, so we can repair the node later if needed) .exercise[ - SSH to the node to disrupt: ```bash ssh `nodeX` ``` - Allow SSH traffic leaving the node, but block all other traffic: ```bash sudo iptables -I OUTPUT -p tcp --sport 22 -j ACCEPT sudo iptables -I OUTPUT 2 -j DROP ``` ] .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Check that the node is disconnected .exercise[ - Check that the node can't communicate with other nodes: ``` ping node1 ``` - Logout to go back on `node1` - Watch the events unfolding with `kubectl get events -w` and `kubectl get pods -w` ] - It will take some time for Kubernetes to mark the node as unhealthy - Then it will attempt to reschedule the pod to another node - In about a minute, our pod should be up and running again .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Check that our data is still available - We are going to reconnect to the (new) pod and check .exercise[ - Get a shell on the pod: ```bash kubectl exec -ti postgres-0 su postgres ``` - Check the number of rows in the `pgbench_accounts` table: ```bash psql demo -c "select count(*) from pgbench_accounts" ``` ] .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Double-check that the pod has really moved - Just to make sure the system is not bluffing! 
.exercise[ - Look at which node the pod is now running on ```bash kubectl get pod postgres-0 -o wide ``` ] .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Re-enable the node - Let's fix the node that we disconnected from the network .exercise[ - SSH to the node: ```bash ssh `nodeX` ``` - Remove the iptables rule blocking traffic: ```bash sudo iptables -D OUTPUT 2 ``` ] .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- class: extra-details ## A few words about this PostgreSQL setup - In a real deployment, you would want to set a password - This can be done by creating a `secret`: ``` kubectl create secret generic postgres \ --from-literal=password=$(base64 /dev/urandom | head -c16) ``` - And then passing that secret to the container: ```yaml env: - name: POSTGRES_PASSWORD valueFrom: secretKeyRef: name: postgres key: password ``` .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- class: extra-details ## Troubleshooting Portworx - If we need to see what's going on with Portworx: ``` PXPOD=$(kubectl -n kube-system get pod -l name=portworx -o json | jq -r .items[0].metadata.name) kubectl -n kube-system exec $PXPOD -- /opt/pwx/bin/pxctl status ``` - We can also connect to Lighthouse (a web UI) - check the port with `kubectl -n kube-system get svc px-lighthouse` - connect to that port - the default login/password is `admin/Password1` - then specify `portworx-service` as the endpoint .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- class: extra-details ## Removing Portworx - Portworx provides a storage driver - It needs to place itself "above" the Kubelet (it installs itself straight on the nodes) - To remove it, we need to do more than just deleting its Kubernetes resources - It is done by applying a special label: ``` kubectl label nodes --all px/enabled=remove --overwrite ``` - Then removing a bunch of local files: ``` sudo chattr -i /etc/pwx/.private.json sudo rm -rf /etc/pwx /opt/pwx ``` (on each node where Portworx was running) .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- class: extra-details ## Dynamic provisioning without a provider - What if we want to use Stateful sets without a storage provider? 
- We will have to create volumes manually (by creating Persistent Volume objects) - These volumes will be automatically bound to matching Persistent Volume Claims - We can use local volumes (essentially bind mounts of host directories) - Of course, these volumes won't be available in case of node failure - Check [this blog post](https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/) for more information and gotchas .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Acknowledgements The Portworx installation tutorial, and the PostgreSQL example, were inspired by [Portworx examples on Katacoda](https://katacoda.com/portworx/scenarios/), in particular: - [installing Portworx on Kubernetes](https://www.katacoda.com/portworx/scenarios/deploy-px-k8s) (with adaptations to use a loop device and an embedded key/value store) - [persistent volumes on Kubernetes using Portworx](https://www.katacoda.com/portworx/scenarios/px-k8s-vol-basic) (with adaptations to specify a default Storage Class) - [HA PostgreSQL on Kubernetes with Portworx](https://www.katacoda.com/portworx/scenarios/px-k8s-postgres-all-in-one) (with adaptations to use a Stateful Set and simplify PostgreSQL's setup) .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- class: pic .interstitial[] --- name: toc-next-steps class: title Next steps .nav[ [Previous section](#toc-highly-available-persistent-volumes) | [Back to table of contents](#toc-chapter-13) | [Next section](#toc-links-and-resources) ] .debug[(automatically generated title slide)] --- # Next steps *Alright, how do I get started and containerize my apps?* -- Suggested containerization checklist: .checklist[ - write a Dockerfile for one service in one app - write Dockerfiles for the other (buildable) services - write a Compose file for that whole app - make sure that devs are empowered to run the app in containers - set up automated builds of container images from the code repo - set up a CI pipeline using these container images - set up a CD pipeline (for staging/QA) using these images ] And *then* it is time to look at orchestration! .debug[[k8s/whatsnext.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/whatsnext.md)] --- ## Options for our first production cluster - Get a managed cluster from a major cloud provider (AKS, EKS, GKE...) (price: $, difficulty: medium) - Hire someone to deploy it for us (price: $$, difficulty: easy) - Do it ourselves (price: $-$$$, difficulty: hard) .debug[[k8s/whatsnext.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/whatsnext.md)] --- ## One big cluster vs. multiple small ones - Yes, it is possible to have prod+dev in a single cluster (and implement good isolation and security with RBAC, network policies...) - But it is not a good idea to do that for our first deployment - Start with a production cluster + at least a test cluster - Implement and check RBAC and isolation on the test cluster (e.g. deploy multiple test versions side-by-side) - Make sure that all our devs have usable dev clusters (whether it's a local minikube or a full-blown multi-node cluster) .debug[[k8s/whatsnext.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/whatsnext.md)] --- ## Namespaces - Namespaces let you run multiple identical stacks side by side - Two namespaces (e.g.
`blue` and `green`) can each have their own `redis` service - Each of the two `redis` services has its own `ClusterIP` - CoreDNS creates two entries, mapping to these two `ClusterIP` addresses: `redis.blue.svc.cluster.local` and `redis.green.svc.cluster.local` - Pods in the `blue` namespace get a *search suffix* of `blue.svc.cluster.local` - As a result, resolving `redis` from a pod in the `blue` namespace yields the "local" `redis` .warning[This does not provide *isolation*! That would be the job of network policies.] .debug[[k8s/whatsnext.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/whatsnext.md)] --- ## Relevant sections - [Namespaces](kube-selfpaced.yml.html#toc-namespaces) - [Network Policies](kube-selfpaced.yml.html#toc-network-policies) - [Role-Based Access Control](kube-selfpaced.yml.html#toc-authentication-and-authorization) (covers permissions model, user and service accounts management ...) .debug[[k8s/whatsnext.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/whatsnext.md)] --- ## Stateful services (databases etc.) - As a first step, it is wiser to keep stateful services *outside* of the cluster - Exposing them to pods can be done with multiple solutions: - `ExternalName` services (`redis.blue.svc.cluster.local` will be a `CNAME` record) - `ClusterIP` services with explicit `Endpoints` (instead of letting Kubernetes generate the endpoints from a selector) - Ambassador services (application-level proxies that can provide credentials injection and more) .debug[[k8s/whatsnext.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/whatsnext.md)] --- ## Stateful services (second take) - If we want to host stateful services on Kubernetes, we can use: - a storage provider - persistent volumes, persistent volume claims - stateful sets - Good questions to ask: - what's the *operational cost* of running this service ourselves? - what do we gain by deploying this stateful service on Kubernetes? - Relevant sections: [Volumes](kube-selfpaced.yml.html#toc-volumes) | [Stateful Sets](kube-selfpaced.yml.html#toc-stateful-sets) | [Persistent Volumes](kube-selfpaced.yml.html#toc-highly-available-persistent-volumes) - Excellent [blog post](http://www.databasesoup.com/2018/07/should-i-run-postgres-on-kubernetes.html) tackling the question: “Should I run Postgres on Kubernetes?” .debug[[k8s/whatsnext.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/whatsnext.md)] --- ## Managing stack deployments - The best deployment tool will vary, depending on: - the size and complexity of your stack(s) - how often you change it (i.e. add/remove components) - the size and skills of your team - A few examples: - shell scripts invoking `kubectl` - YAML resource descriptions committed to a repo - [Helm](https://github.com/kubernetes/helm) (~package manager) - [Spinnaker](https://www.spinnaker.io/) (Netflix' CD platform) - [Brigade](https://brigade.sh/) (event-driven scripting; no YAML) .debug[[k8s/whatsnext.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/whatsnext.md)] --- ## Cluster federation --  -- Sorry Star Trek fans, this is not the federation you're looking for! -- (If I add "Your cluster is in another federation" I might get a 3rd fandom wincing!)
.debug[[k8s/whatsnext.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/whatsnext.md)] --- ## Cluster federation - Kubernetes master operation relies on etcd - etcd uses the [Raft](https://raft.github.io/) protocol - Raft recommends low latency between nodes - What if our cluster spreads to multiple regions? -- - Break it down in local clusters - Regroup them in a *cluster federation* - Synchronize resources across clusters - Discover resources across clusters .debug[[k8s/whatsnext.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/whatsnext.md)] --- ## Developer experience *We've put this last, but it's pretty important!* - How do you on-board a new developer? - What do they need to install to get a dev stack? - How does a code change make it from dev to prod? - How does someone add a component to a stack? .debug[[k8s/whatsnext.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/whatsnext.md)] --- class: pic .interstitial[] --- name: toc-links-and-resources class: title Links and resources .nav[ [Previous section](#toc-next-steps) | [Back to table of contents](#toc-chapter-13) | [Next section](#toc-extra-material) ] .debug[(automatically generated title slide)] --- # Links and resources All things Kubernetes: - [Kubernetes Community](https://kubernetes.io/community/) - Slack, Google Groups, meetups - [Kubernetes on StackOverflow](https://stackoverflow.com/questions/tagged/kubernetes) - [Play With Kubernetes Hands-On Labs](https://medium.com/@marcosnils/introducing-pwk-play-with-k8s-159fcfeb787b) All things Docker: - [Docker documentation](http://docs.docker.com/) - [Docker Hub](https://hub.docker.com) - [Docker on StackOverflow](https://stackoverflow.com/questions/tagged/docker) - [Play With Docker Hands-On Labs](http://training.play-with-docker.com/) Everything else: - [Local meetups](https://www.meetup.com/) .footnote[These slides (and future updates) are on β http://container.training/] .debug[[k8s/links.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/links.md)] --- class: title, self-paced Thank you! .debug[[shared/thankyou.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/thankyou.md)] --- class: title, in-person That's all, folks! Questions?  
.debug[[shared/thankyou.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/thankyou.md)] --- class: pic .interstitial[] --- name: toc-extra-material class: title (Extra material) .nav[ [Previous section](#toc-links-and-resources) | [Back to table of contents](#toc-chapter-14) | [Next section](#toc-controlling-a-kubernetes-cluster-remotely) ] .debug[(automatically generated title slide)] --- # (Extra material) .debug[[three.yml](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/three.yml)] --- class: pic .interstitial[] --- name: toc-controlling-a-kubernetes-cluster-remotely class: title Controlling a Kubernetes cluster remotely .nav[ [Previous section](#toc-extra-material) | [Back to table of contents](#toc-chapter-14) | [Next section](#toc-accessing-internal-services) ] .debug[(automatically generated title slide)] --- # Controlling a Kubernetes cluster remotely - `kubectl` can be used either on cluster instances or outside the cluster - Here, we are going to use `kubectl` from our local machine .debug[[k8s/localkubeconfig.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/localkubeconfig.md)] --- ## Requirements .warning[The exercises in this chapter should be done *on your local machine*.] - `kubectl` is officially available on Linux, macOS, Windows (and unofficially anywhere we can build and run Go binaries) - You may skip these exercises if you are following along from: - a tablet or phone - a web-based terminal - an environment where you can't install and run new binaries .debug[[k8s/localkubeconfig.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/localkubeconfig.md)] --- ## Installing `kubectl` - If you already have `kubectl` on your local machine, you can skip this .exercise[ - Download the `kubectl` binary from one of these links: [Linux](https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/linux/amd64/kubectl) | [macOS](https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/darwin/amd64/kubectl) | [Windows](https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/windows/amd64/kubectl.exe) - On Linux and macOS, make the binary executable with `chmod +x kubectl` (And remember to run it with `./kubectl` or move it to your `$PATH`) ] Note: if you are following along with a different platform (e.g. Linux on an architecture different from amd64, or with a phone or tablet), installing `kubectl` might be more complicated (or even impossible) so feel free to skip this section. .debug[[k8s/localkubeconfig.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/localkubeconfig.md)] --- ## Testing `kubectl` - Check that `kubectl` works correctly (before even trying to connect to a remote cluster!) .exercise[ - Ask `kubectl` to show its version number: ```bash kubectl version --client ``` ] The output should look like this: ``` Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"} ``` .debug[[k8s/localkubeconfig.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/localkubeconfig.md)] --- ## Preserving the existing `~/.kube/config` - If you already have a `~/.kube/config` file, rename it (we are going to overwrite it in the following slides!) 
- If you never used `kubectl` on your machine before: nothing to do! .exercise[ - Make a copy of `~/.kube/config`; if you are using macOS or Linux, you can do: ```bash cp ~/.kube/config ~/.kube/config.before.training ``` - If you are using Windows, you will need to adapt this command ] .debug[[k8s/localkubeconfig.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/localkubeconfig.md)] --- ## Copying the configuration file from `node1` - The `~/.kube/config` file that is on `node1` contains all the credentials we need - Let's copy it over! .exercise[ - Copy the file from `node1`; if you are using macOS or Linux, you can do: ``` scp `USER`@`X.X.X.X`:.kube/config ~/.kube/config # Make sure to replace X.X.X.X with the IP address of node1, # and USER with the user name used to log into node1! ``` - If you are using Windows, adapt these instructions to your SSH client ] .debug[[k8s/localkubeconfig.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/localkubeconfig.md)] --- ## Updating the server address - There is a good chance that we need to update the server address - To know if it is necessary, run `kubectl config view` - Look for the `server:` address: - if it matches the public IP address of `node1`, you're good! - if it is anything else (especially a private IP address), update it! - To update the server address, run: ```bash kubectl config set-cluster kubernetes --server=https://`X.X.X.X`:6443 # Make sure to replace X.X.X.X with the IP address of node1! ``` .debug[[k8s/localkubeconfig.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/localkubeconfig.md)] --- class: extra-details ## What if we get a certificate error? - Generally, the Kubernetes API uses a certificate that is valid for: - `kubernetes` - `kubernetes.default` - `kubernetes.default.svc` - `kubernetes.default.svc.cluster.local` - the ClusterIP address of the `kubernetes` service - the hostname of the node hosting the control plane (e.g. `node1`) - the IP address of the node hosting the control plane - On most clouds, the IP address of the node is an internal IP address - ... And we are going to connect over the external IP address - ... And that external IP address was not used when creating the certificate! .debug[[k8s/localkubeconfig.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/localkubeconfig.md)] --- class: extra-details ## Working around the certificate error - We need to tell `kubectl` to skip TLS verification (only do this with testing clusters, never in production!) - The following command will do the trick: ```bash kubectl config set-cluster kubernetes --insecure-skip-tls-verify ``` .debug[[k8s/localkubeconfig.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/localkubeconfig.md)] --- ## Checking that we can connect to the cluster - We can now run a couple of trivial commands to check that all is well .exercise[ - Check the versions of the local client and remote server: ```bash kubectl version ``` - View the nodes of the cluster: ```bash kubectl get nodes ``` ] We can now utilize the cluster exactly as if we're logged into a node, except that it's remote. 
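If we want to keep the ability to talk to several clusters (for instance, a local minikube and this remote cluster), we can keep their credentials under different *contexts* and switch between them. A quick sketch with standard `kubectl` commands:

```bash
# List the contexts available in our kubeconfig file:
kubectl config get-contexts

# Switch to another context (use a name shown by the previous command):
kubectl config use-context my-other-cluster
```

(The `KUBECONFIG` environment variable can also point to multiple files, which `kubectl` will merge.)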
.debug[[k8s/localkubeconfig.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/localkubeconfig.md)] --- class: pic .interstitial[] --- name: toc-accessing-internal-services class: title Accessing internal services .nav[ [Previous section](#toc-controlling-a-kubernetes-cluster-remotely) | [Back to table of contents](#toc-chapter-14) | [Next section](#toc-kustomize) ] .debug[(automatically generated title slide)] --- # Accessing internal services - When we are logged in on a cluster node, we can access internal services (by virtue of the Kubernetes network model: all nodes can reach all pods and services) - When we are accessing a remote cluster, things are different (generally, our local machine won't have access to the cluster's internal subnet) - How can we temporarily access a service without exposing it to everyone? -- - `kubectl proxy`: gives us access to the API, which includes a proxy for HTTP resources - `kubectl port-forward`: allows forwarding of TCP ports to arbitrary pods, services, ... .debug[[k8s/accessinternal.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/accessinternal.md)] --- ## Suspension of disbelief The exercises in this section assume that we have set up `kubectl` on our local machine in order to access a remote cluster. We will therefore show how to access services and pods of the remote cluster, from our local machine. You can also run these exercises directly on the cluster (if you haven't installed and set up `kubectl` locally). Running commands locally will be less useful (since you could access services and pods directly), but keep in mind that these commands will work anywhere as long as you have installed and set up `kubectl` to communicate with your cluster. .debug[[k8s/accessinternal.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/accessinternal.md)] --- ## `kubectl proxy` in theory - Running `kubectl proxy` gives us access to the entire Kubernetes API - The API includes routes to proxy HTTP traffic - These routes look like the following: `/api/v1/namespaces/<namespace>/services/<service>/proxy` - We just add the URI to the end of the request, for instance: `/api/v1/namespaces/<namespace>/services/<service>/proxy/index.html` - We can access `services` and `pods` this way .debug[[k8s/accessinternal.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/accessinternal.md)] --- ## `kubectl proxy` in practice - Let's access the `webui` service through `kubectl proxy` .exercise[ - Run an API proxy in the background: ```bash kubectl proxy & ``` - Access the `webui` service: ```bash curl localhost:8001/api/v1/namespaces/default/services/webui/proxy/index.html ``` - Terminate the proxy: ```bash kill %1 ``` ] .debug[[k8s/accessinternal.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/accessinternal.md)] --- ## `kubectl port-forward` in theory - What if we want to access a TCP service? - We can use `kubectl port-forward` instead - It will create a TCP relay to forward connections to a specific port (of a pod, service, deployment...)
- The syntax is: `kubectl port-forward service/name_of_service local_port:remote_port` - If only one port number is specified, it is used for both local and remote ports .debug[[k8s/accessinternal.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/accessinternal.md)] --- ## `kubectl port-forward` in practice - Let's access our remote Redis server .exercise[ - Forward connections from local port 10000 to remote port 6379: ```bash kubectl port-forward svc/redis 10000:6379 & ``` - Connect to the Redis server: ```bash telnet localhost 10000 ``` - Issue a few commands, e.g. `INFO server` then `QUIT` - Terminate the port forwarder: ```bash kill %1 ``` ] .debug[[k8s/accessinternal.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/accessinternal.md)] --- class: pic .interstitial[] --- name: toc-kustomize class: title Kustomize .nav[ [Previous section](#toc-accessing-internal-services) | [Back to table of contents](#toc-chapter-14) | [Next section](#toc-managing-stacks-with-helm) ] .debug[(automatically generated title slide)] --- # Kustomize - Kustomize lets us transform YAML files representing Kubernetes resources - The original YAML files are valid resource files (e.g. they can be loaded with `kubectl apply -f`) - They are left untouched by Kustomize - Kustomize lets us define *overlays* that extend or change the resource files .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kustomize.md)] --- ## Differences with Helm - Helm charts use placeholders `{{ like.this }}` - Kustomize "bases" are standard Kubernetes YAML - It is possible to use an existing set of YAML as a Kustomize base - As a result, writing a Helm chart is more work ... - ... But Helm charts are also more powerful; e.g. 
they can: - use flags to conditionally include resources or blocks - check if a given Kubernetes API group is supported - [and much more](https://helm.sh/docs/chart_template_guide/) .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kustomize.md)] --- ## Kustomize concepts - Kustomize needs a `kustomization.yaml` file - That file can be a *base* or a *variant* - If it's a *base*: - it lists YAML resource files to use - If it's a *variant* (or *overlay*): - it refers to (at least) one *base* - and some *patches* .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kustomize.md)] --- ## An easy way to get started with Kustomize - We are going to use [Replicated Ship](https://www.replicated.com/ship/) to experiment with Kustomize - The [Replicated Ship CLI](https://github.com/replicatedhq/ship/releases) has been installed on our clusters - Replicated Ship has multiple workflows; here is what we will do: - initialize a Kustomize overlay from a remote GitHub repository - customize some values using the web UI provided by Ship - look at the resulting files and apply them to the cluster .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kustomize.md)] --- ## Getting started with Ship - We need to run `ship init` in a new directory - `ship init` requires a URL to a remote repository containing Kubernetes YAML - It will clone that repository and start a web UI - Later, it can watch that repository and/or update from it - We will use the [jpetazzo/kubercoins](https://github.com/jpetazzo/kubercoins) repository (it contains all the DockerCoins resources as YAML files) .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kustomize.md)] --- ## `ship init` .exercise[ - Change to a new directory: ```bash mkdir ~/kustomcoins cd ~/kustomcoins ``` - Run `ship init` with the kustomcoins repository: ```bash ship init https://github.com/jpetazzo/kubercoins ``` ] .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kustomize.md)] --- ## Access the web UI - `ship init` tells us to connect on `localhost:8800` - We need to replace `localhost` with the address of our node (since we run on a remote machine) - Follow the steps in the web UI, and change one parameter (e.g. set the number of replicas in the worker Deployment) - Complete the web workflow, and go back to the CLI .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kustomize.md)] --- ## Inspect the results - Look at the content of our directory - `base` contains the kubercoins repository + a `kustomization.yaml` file - `overlays/ship` contains the Kustomize overlay referencing the base + our patch(es) - `rendered.yaml` is a YAML bundle containing the patched application - `.ship` contains a state file used by Ship .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kustomize.md)] --- ## Using the results - We can `kubectl apply -f rendered.yaml` (on any version of Kubernetes) - Starting with Kubernetes 1.14, we can apply the overlay directly with: ```bash kubectl apply -k overlays/ship ``` - But let's not do that for now! 
- We will create a new copy of DockerCoins in another namespace .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kustomize.md)] --- ## Deploy DockerCoins with Kustomize .exercise[ - Create a new namespace: ```bash kubectl create namespace kustomcoins ``` - Deploy DockerCoins: ```bash kubectl apply -f rendered.yaml --namespace=kustomcoins ``` - Or, with Kubernetes 1.14, you can also do this: ```bash kubectl apply -k overlays/ship --namespace=kustomcoins ``` ] .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kustomize.md)] --- ## Checking our new copy of DockerCoins - We can check the worker logs, or the web UI .exercise[ - Retrieve the NodePort number of the web UI: ```bash kubectl get service webui --namespace=kustomcoins ``` - Open it in a web browser - Look at the worker logs: ```bash kubectl logs deploy/worker --tail=10 --follow --namespace=kustomcoins ``` ] Note: it might take a minute or two for the worker to start. .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kustomize.md)] --- class: pic .interstitial[] --- name: toc-managing-stacks-with-helm class: title Managing stacks with Helm .nav[ [Previous section](#toc-kustomize) | [Back to table of contents](#toc-chapter-14) | [Next section](#toc-creating-helm-charts) ] .debug[(automatically generated title slide)] --- # Managing stacks with Helm - We created our first resources with `kubectl run`, `kubectl expose` ... - We have also created resources by loading YAML files with `kubectl apply -f` - For larger stacks, managing thousands of lines of YAML is unreasonable - These YAML bundles need to be customized with variable parameters (E.g.: number of replicas, image version to use ...) - It would be nice to have an organized, versioned collection of bundles - It would be nice to be able to upgrade/rollback these bundles carefully - [Helm](https://helm.sh/) is an open source project offering all these things! .debug[[k8s/helm.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/helm.md)] --- ## Helm concepts - `helm` is a CLI tool - `tiller` is its companion server-side component - A "chart" is an archive containing templatized YAML bundles - Charts are versioned - Charts can be stored on private or public repositories .debug[[k8s/helm.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/helm.md)] --- ## Installing Helm - If the `helm` CLI is not installed in your environment, install it .exercise[ - Check if `helm` is installed: ```bash helm ``` - If it's not installed, run the following command: ```bash curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash ``` ] .debug[[k8s/helm.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/helm.md)] --- ## Installing Tiller - Tiller is composed of a *service* and a *deployment* in the `kube-system` namespace - They can be managed (installed, upgraded...) with the `helm` CLI .exercise[ - Deploy Tiller: ```bash helm init ``` ] If Tiller was already installed, don't worry: this won't break it. At the end of the install process, you will see: ``` Happy Helming! 
``` .debug[[k8s/helm.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/helm.md)] --- ## Fix account permissions - Helm permission model requires us to tweak permissions - In a more realistic deployment, you might create per-user or per-team service accounts, roles, and role bindings .exercise[ - Grant `cluster-admin` role to `kube-system:default` service account: ```bash kubectl create clusterrolebinding add-on-cluster-admin \ --clusterrole=cluster-admin --serviceaccount=kube-system:default ``` ] (Defining the exact roles and permissions on your cluster requires a deeper knowledge of Kubernetes' RBAC model. The command above is fine for personal and development clusters.) .debug[[k8s/helm.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/helm.md)] --- ## View available charts - A public repo is pre-configured when installing Helm - We can view available charts with `helm search` (and an optional keyword) .exercise[ - View all available charts: ```bash helm search ``` - View charts related to `prometheus`: ```bash helm search prometheus ``` ] .debug[[k8s/helm.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/helm.md)] --- ## Install a chart - Most charts use `LoadBalancer` service types by default - Most charts require persistent volumes to store data - We need to relax these requirements a bit .exercise[ - Install the Prometheus metrics collector on our cluster: ```bash helm install stable/prometheus \ --set server.service.type=NodePort \ --set server.persistentVolume.enabled=false ``` ] Where do these `--set` options come from? .debug[[k8s/helm.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/helm.md)] --- ## Inspecting a chart - `helm inspect` shows details about a chart (including available options) .exercise[ - See the metadata and all available options for `stable/prometheus`: ```bash helm inspect stable/prometheus ``` ] The chart's metadata includes a URL to the project's home page. (Sometimes it conveniently points to the documentation for the chart.) .debug[[k8s/helm.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/helm.md)] --- ## Viewing installed charts - Helm keeps track of what we've installed .exercise[ - List installed Helm charts: ```bash helm list ``` ] .debug[[k8s/helm.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/helm.md)] --- ## Creating a chart - We are going to show a way to create a *very simplified* chart - In a real chart, *lots of things* would be templatized (Resource names, service types, number of replicas...) 
.exercise[ - Create a sample chart: ```bash helm create dockercoins ``` - Move away the sample templates and create an empty template directory: ```bash mv dockercoins/templates dockercoins/default-templates mkdir dockercoins/templates ``` ] .debug[[k8s/create-chart.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-chart.md)] --- ## Exporting the YAML for our application - The following section assumes that DockerCoins is currently running .exercise[ - Create one YAML file for each resource that we need: .small[ ```bash
while read kind name; do
  kubectl get -o yaml $kind $name > dockercoins/templates/$name-$kind.yaml
done <<EOF
deployment worker
deployment hasher
daemonset rng
deployment webui
deployment redis
service hasher
service rng
service webui
service redis
EOF
``` ] ] .debug[[k8s/create-chart.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-chart.md)] --- ## Testing our Helm chart .exercise[ - Let's try to install our chart (the argument is the path to the chart directory): ```bash helm install dockercoins ``` ] -- - Since DockerCoins is already deployed in the current namespace, this first attempt fails with an error like: `Error: release loitering-otter failed: services "hasher" already exists` - To avoid naming conflicts, we will deploy the application in another *namespace* .debug[[k8s/create-chart.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-chart.md)] --- ## Switching to another namespace - We can create a new namespace and switch to it (Helm will automatically use the namespace specified in our context) - We can also tell Helm which namespace to use .exercise[ - Tell Helm to use a specific namespace: ```bash helm install dockercoins --namespace=magenta ``` ] .debug[[k8s/create-chart.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-chart.md)] --- ## Checking our new copy of DockerCoins - We can check the worker logs, or the web UI .exercise[ - Retrieve the NodePort number of the web UI: ```bash kubectl get service webui --namespace=magenta ``` - Open it in a web browser - Look at the worker logs: ```bash kubectl logs deploy/worker --tail=10 --follow --namespace=magenta ``` ] Note: it might take a minute or two for the worker to start. .debug[[k8s/create-chart.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-chart.md)] --- class: pic .interstitial[] --- name: toc-creating-helm-charts class: title Creating Helm charts .nav[ [Previous section](#toc-managing-stacks-with-helm) | [Back to table of contents](#toc-chapter-14) | [Next section](#toc-authentication-and-authorization) ] .debug[(automatically generated title slide)] --- # Creating Helm charts - We are going to create a generic Helm chart - We will use that Helm chart to deploy DockerCoins - Each component of DockerCoins will have its own *release* - In other words, we will "install" that Helm chart multiple times (one time per component of DockerCoins) .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Creating a generic chart - Rather than starting from scratch, we will use `helm create` - This will give us a basic chart that we will customize .exercise[ - Create a basic chart: ```bash cd ~ helm create helmcoins ``` ] This creates a basic chart in the directory `helmcoins`. .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## What's in the basic chart? - The basic chart will create a Deployment and a Service - Optionally, it will also include an Ingress - If we don't pass any values, it will deploy the `nginx` image - We can override many things in that chart - Let's try to deploy DockerCoins components with that chart!
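Before writing per-component values files, it can help to preview what this generic chart renders. The sketch below uses Helm 2 syntax (matching the `helm`/`tiller` setup of this chapter) and the `image.repository`/`image.tag` keys from the default `helm create` scaffolding; adjust if your chart differs:

```bash
# Render the chart without installing anything; the generated manifests
# are printed as part of the debug output
helm install helmcoins/ --dry-run --debug \
  --set image.repository=dockercoins/rng \
  --set image.tag=v0.1
```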
.debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Writing `values.yaml` for our components - We need to write one `values.yaml` file for each component (hasher, redis, rng, webui, worker) - We will start with the `values.yaml` of the chart, and remove what we don't need - We will create 5 files: hasher.yaml, redis.yaml, rng.yaml, webui.yaml, worker.yaml .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Getting started - For component X, we want to use the image dockercoins/X:v0.1 (for instance, for rng, we want to use the image dockercoins/rng:v0.1) - Exception: for redis, we want to use the official image redis:latest .exercise[ - Write minimal YAML files for the 5 components, specifying only the image ] -- *Hint: our YAML files should look like this.* ```yaml ### rng.yaml image: repository: dockercoins/`rng` tag: v0.1 ``` .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Deploying DockerCoins components - For convenience, let's work in a separate namespace .exercise[ - Create a new namespace: ```bash kubectl create namespace helmcoins ``` - Switch to that namespace: ```bash kns helmcoins ``` ] .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Deploying the chart - To install a chart, we can use the following command: ```bash helm install [--name `X`] ``` - We can also use the following command, which is idempotent: ```bash helm upgrade --install `X` chart ``` .exercise[ - Install the 5 components of DockerCoins: ```bash for COMPONENT in hasher redis rng webui worker; do helm upgrade --install $COMPONENT helmcoins/ --values=$COMPONENT.yaml done ``` ] .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Checking what we've done - Let's see if DockerCoins is working! .exercise[ - Check the logs of the worker: ```bash stern worker ``` - Look at the resources that were created: ```bash kubectl get all ``` ] There are *many* issues to fix! .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Service names - Our services should be named `rng`, `hasher`, etc., but they are named differently - Look at the YAML template used for the services - Does it look like we can override the name of the services? -- - *Yes*, we can use `.Values.nameOverride` - This means setting `nameOverride` in the values YAML file .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Setting service names - Let's add `nameOverride: X` in each values YAML file! (where X is hasher, redis, rng, etc.) .exercise[ - Edit the 5 YAML files to add `nameOverride: X` - Deploy the updated Chart: ```bash for COMPONENT in hasher redis rng webui worker; do helm upgrade --install $COMPONENT helmcoins/ --values=$COMPONENT.yaml done ``` (Yes, this is exactly the same command as before!) 
] .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Checking what we've done .exercise[ - Check the service names: ```bash kubectl get services ``` Great! (We have a useless service for `worker`, but let's ignore it for now.) - Check the state of the pods: ```bash kubectl get pods ``` Not so great... Some pods are *not ready.* ] .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Troubleshooting pods - The easiest way to troubleshoot pods is to look at *events* - We can look at all the events on the cluster (with `kubectl get events`) - Or we can use `kubectl describe` on the objects that have problems (`kubectl describe` will retrieve the events related to the object) .exercise[ - Check the events for the redis pods: ```bash kubectl describe pod -l app.kubernetes.io/name=redis ``` ] What's going on? .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Healthchecks - The default chart defines healthchecks doing HTTP requests on port 80 - That won't work for redis and worker (redis is not HTTP, and not on port 80; worker doesn't even listen) -- - We could comment out the healthchecks - We could also make them conditional - This sounds more interesting, let's do that! .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Conditionals - We need to enclose the healthcheck block with: `{{ if CONDITION }}` at the beginning `{{ end }}` at the end - For the condition, we will use `.Values.healthcheck` .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Updating the deployment template .exercise[ - Edit `helmcoins/templates/deployment.yaml` - Before the healthchecks section (it starts with `livenessProbe:`), add: `{{ if .Values.healthcheck }}` - After the healthchecks section (just before `resources:`), add: `{{ end }}` - Edit `hasher.yaml`, `rng.yaml`, `webui.yaml` to add: `healthcheck: true` ] .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Update the deployed charts - We can now apply the new templates (and the new values) .exercise[ - Use the same command as earlier to upgrade all five components - Use `kubectl describe` to confirm that `redis` starts correctly - Use `kubectl describe` to confirm that `hasher` still has healthchecks ] .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Is it working now? - If we look at the worker logs, it appears that the worker is still stuck - What could be happening? -- - The redis service is not on port 80! 
- We need to update the port number in redis.yaml - We also need to update the port number in deployment.yaml (it is hard-coded to 80 there) .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Setting the redis port .exercise[ - Edit `redis.yaml` to add: ```yaml service: port: 6379 ``` - Edit `helmcoins/templates/deployment.yaml` - The line with `containerPort` should be: ```yaml containerPort: {{ .Values.service.port }} ``` ] .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Apply changes - Re-run the for loop to execute `helm upgrade` one more time - Check the worker logs - This time, it should be working! .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Extra steps - We don't need to create a service for the worker - We can put the whole service block in a conditional (this will require additional changes in other files referencing the service) - We can set the webui to be a NodePort service - We can change the number of workers with `replicaCount` - And much more! .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- class: pic .interstitial[] --- name: toc-authentication-and-authorization class: title Authentication and authorization .nav[ [Previous section](#toc-creating-helm-charts) | [Back to table of contents](#toc-chapter-14) | [Next section](#toc-network-policies) ] .debug[(automatically generated title slide)] --- # Authentication and authorization *And first, a little refresher!* - Authentication = verifying the identity of a person On a UNIX system, we can authenticate with login+password, SSH keys ... - Authorization = listing what they are allowed to do On a UNIX system, this can include file permissions, sudoer entries ... - Sometimes abbreviated as "authn" and "authz" - In good modular systems, these things are decoupled (so we can e.g. change a password or SSH key without having to reset access rights) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Authentication in Kubernetes - When the API server receives a request, it tries to authenticate it (it examines headers, certificates... 
anything available) - Many authentication methods are available and can be used simultaneously (we will see them on the next slide) - It's the job of the authentication method to produce: - the user name - the user ID - a list of groups - The API server doesn't interpret these; that'll be the job of *authorizers* .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Authentication methods - TLS client certificates (that's what we've been doing with `kubectl` so far) - Bearer tokens (a secret token in the HTTP headers of the request) - [HTTP basic auth](https://en.wikipedia.org/wiki/Basic_access_authentication) (carrying user and password in an HTTP header) - Authentication proxy (sitting in front of the API and setting trusted headers) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Anonymous requests - If any authentication method *rejects* a request, it's denied (`401 Unauthorized` HTTP code) - If a request is neither rejected nor accepted by anyone, it's anonymous - the user name is `system:anonymous` - the list of groups is `[system:unauthenticated]` - By default, the anonymous user can't do anything (that's what you get if you just `curl` the Kubernetes API) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Authentication with TLS certificates - This is enabled in most Kubernetes deployments - The user name is derived from the `CN` in the client certificate - The groups are derived from the `O` fields in the client certificate - From the point of view of the Kubernetes API, users do not exist (i.e. they are not stored in etcd or anywhere else) - Users can be created (and added to groups) independently of the API - The Kubernetes API can be set up to use your custom CA to validate client certs .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- class: extra-details ## Viewing our admin certificate - Let's inspect the certificate we've been using all this time! .exercise[ - This command will show the `CN` and `O` fields for our certificate: ```bash kubectl config view \ --raw \ -o json \ | jq -r .users[0].user[\"client-certificate-data\"] \ | openssl base64 -d -A \ | openssl x509 -text \ | grep Subject: ``` ] Let's break down that command together! .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- class: extra-details ## Breaking down the command - `kubectl config view` shows the Kubernetes user configuration - `--raw` includes certificate information (which shows as REDACTED otherwise) - `-o json` outputs the information in JSON format - `| jq ...` extracts the field with the user certificate (in base64) - `| openssl base64 -d -A` decodes the base64 format (now we have a PEM file) - `| openssl x509 -text` parses the certificate and outputs it as plain text - `| grep Subject:` shows us the line that interests us. We are user `kubernetes-admin`, in group `system:masters`. (We will see later how and why this gives us the permissions that we have.)
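If we only want the subject line, a slightly shorter variant (a sketch assuming the same admin kubeconfig as above) lets `openssl` extract it directly:

```bash
kubectl config view --raw -o json \
  | jq -r '.users[0].user["client-certificate-data"]' \
  | openssl base64 -d -A \
  | openssl x509 -noout -subject
# On a kubeadm-style cluster, this should print something like:
#   subject=O = system:masters, CN = kubernetes-admin
```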
.debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## User certificates in practice - The Kubernetes API server does not support certificate revocation (see issue [#18982](https://github.com/kubernetes/kubernetes/issues/18982)) - As a result, we don't have an easy way to terminate someone's access (if their key is compromised, or they leave the organization) - Option 1: re-create a new CA and re-issue everyone's certificates → Maybe OK if we only have a few users; no way otherwise - Option 2: don't use groups; grant permissions to individual users → Inconvenient if we have many users and teams; error-prone - Option 3: issue short-lived certificates (e.g. 24 hours) and renew them often → This can be facilitated by e.g. Vault or by the Kubernetes CSR API .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Authentication with tokens - Tokens are passed as HTTP headers: `Authorization: Bearer and-then-here-comes-the-token` - Tokens can be validated through a number of different methods: - static tokens hard-coded in a file on the API server - [bootstrap tokens](https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/) (special case to create a cluster or join nodes) - [OpenID Connect tokens](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens) (to delegate authentication to compatible OAuth2 providers) - service accounts (these deserve more details, coming right up!) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Service accounts - A service account is a user that exists in the Kubernetes API (it is visible with e.g. `kubectl get serviceaccounts`) - Service accounts can therefore be created / updated dynamically (they don't require hand-editing a file and restarting the API server) - A service account is associated with a set of secrets (the kind that you can view with `kubectl get secrets`) - Service accounts are generally used to grant permissions to applications, services... (as opposed to humans) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- class: extra-details ## Token authentication in practice - We are going to list existing service accounts - Then we will extract the token for a given service account - And we will use that token to authenticate with the API .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- class: extra-details ## Listing service accounts .exercise[ - The resource name is `serviceaccount` or `sa` for short: ```bash kubectl get sa ``` ] There should be just one service account in the default namespace: `default`. .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- class: extra-details ## Finding the secret .exercise[ - List the secrets for the `default` service account: ```bash kubectl get sa default -o yaml SECRET=$(kubectl get sa default -o json | jq -r .secrets[0].name) ``` ] It should be named `default-token-XXXXX`.
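As an aside, the same secret name can be obtained with a jsonpath expression instead of `jq`; a minimal equivalent sketch:

```bash
SECRET=$(kubectl get sa default -o jsonpath='{.secrets[0].name}')
echo $SECRET   # should print something like default-token-XXXXX
```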
.debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- class: extra-details ## Extracting the token - The token is stored in the secret, wrapped with base64 encoding .exercise[ - View the secret: ```bash kubectl get secret $SECRET -o yaml ``` - Extract the token and decode it: ```bash TOKEN=$(kubectl get secret $SECRET -o json \ | jq -r .data.token | openssl base64 -d -A) ``` ] .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- class: extra-details ## Using the token - Let's send a request to the API, without and with the token .exercise[ - Find the ClusterIP for the `kubernetes` service: ```bash kubectl get svc kubernetes API=$(kubectl get svc kubernetes -o json | jq -r .spec.clusterIP) ``` - Connect without the token: ```bash curl -k https://$API ``` - Connect with the token: ```bash curl -k -H "Authorization: Bearer $TOKEN" https://$API ``` ] .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- class: extra-details ## Results - In both cases, we will get a "Forbidden" error - Without authentication, the user is `system:anonymous` - With authentication, it is shown as `system:serviceaccount:default:default` - The API "sees" us as a different user - But neither user has any rights, so we can't do nothin' - Let's change that! .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Authorization in Kubernetes - There are multiple ways to grant permissions in Kubernetes, called [authorizers](https://kubernetes.io/docs/reference/access-authn-authz/authorization/#authorization-modules): - [Node Authorization](https://kubernetes.io/docs/reference/access-authn-authz/node/) (used internally by kubelet; we can ignore it) - [Attribute-based access control](https://kubernetes.io/docs/reference/access-authn-authz/abac/) (powerful but complex and static; ignore it too) - [Webhook](https://kubernetes.io/docs/reference/access-authn-authz/webhook/) (each API request is submitted to an external service for approval) - [Role-based access control](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) (associates permissions to users dynamically) - The one we want is the last one, generally abbreviated as RBAC .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Role-based access control - RBAC allows to specify fine-grained permissions - Permissions are expressed as *rules* - A rule is a combination of: - [verbs](https://kubernetes.io/docs/reference/access-authn-authz/authorization/#determine-the-request-verb) like create, get, list, update, delete... - resources (as in "API resource," like pods, nodes, services...) - resource names (to specify e.g. one specific pod instead of all pods) - in some case, [subresources](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources) (e.g. 
logs are subresources of pods) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## From rules to roles to rolebindings - A *role* is an API object containing a list of *rules* Example: role "external-load-balancer-configurator" can: - [list, get] resources [endpoints, services, pods] - [update] resources [services] - A *rolebinding* associates a role with a user Example: rolebinding "external-load-balancer-configurator": - associates user "external-load-balancer-configurator" - with role "external-load-balancer-configurator" - Yes, there can be users, roles, and rolebindings with the same name - It's a good idea for 1-1-1 bindings; not so much for 1-N ones .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Cluster-scope permissions - API resources Role and RoleBinding are for objects within a namespace - We can also define API resources ClusterRole and ClusterRoleBinding - These are a superset, allowing us to: - specify actions on cluster-wide objects (like nodes) - operate across all namespaces - We can create Role and RoleBinding resources within a namespace - ClusterRole and ClusterRoleBinding resources are global .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Pods and service accounts - A pod can be associated with a service account - by default, it is associated with the `default` service account - as we saw earlier, this service account has no permissions anyway - The associated token is exposed to the pod's filesystem (in `/var/run/secrets/kubernetes.io/serviceaccount/token`) - Standard Kubernetes tooling (like `kubectl`) will look for it there - So Kubernetes tools running in a pod will automatically use the service account .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## In practice - We are going to create a service account - We will use a default cluster role (`view`) - We will bind together this role and this service account - Then we will run a pod using that service account - In this pod, we will install `kubectl` and check our permissions .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Creating a service account - We will call the new service account `viewer` (note that nothing prevents us from calling it `view`, like the role) .exercise[ - Create the new service account: ```bash kubectl create serviceaccount viewer ``` - List service accounts now: ```bash kubectl get serviceaccounts ``` ] .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Binding a role to the service account - Binding a role = creating a *rolebinding* object - We will call that object `viewercanview` (but again, we could call it `view`) .exercise[ - Create the new role binding: ```bash kubectl create rolebinding viewercanview \ --clusterrole=view \ --serviceaccount=default:viewer ``` ] It's important to note a couple of details in these flags... .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Roles vs Cluster Roles - We used `--clusterrole=view` - What would have happened if we had used `--role=view`? 
- we would have bound the role `view` from the local namespace (instead of the cluster role `view`) - the command would have worked fine (no error) - but later, our API requests would have been denied - This is a deliberate design decision (we can reference roles that don't exist, and create/update them later) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Users vs Service Accounts - We used `--serviceaccount=default:viewer` - What would have happened if we had used `--user=default:viewer`? - we would have bound the role to a user instead of a service account - again, the command would have worked fine (no error) - ...but our API requests would have been denied later - What about the `default:` prefix? - that's the namespace of the service account - yes, it could be inferred from context, but... `kubectl` requires it .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Testing - We will run an `alpine` pod and install `kubectl` there .exercise[ - Run a one-time pod: ```bash kubectl run eyepod --rm -ti --restart=Never \ --serviceaccount=viewer \ --image alpine ``` - Install `curl`, then use it to install `kubectl`: ```bash apk add --no-cache curl URLBASE=https://storage.googleapis.com/kubernetes-release/release KUBEVER=$(curl -s $URLBASE/stable.txt) curl -LO $URLBASE/$KUBEVER/bin/linux/amd64/kubectl chmod +x kubectl ``` ] .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Running `kubectl` in the pod - We'll try to use our `view` permissions, then try to create an object .exercise[ - Check that we can, indeed, view things: ```bash ./kubectl get all ``` - But that we can't create things: ``` ./kubectl create deployment testrbac --image=nginx ``` - Exit the container with `exit` or `^D` ] .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Testing directly with `kubectl` - We can also check for permission with `kubectl auth can-i`: ```bash kubectl auth can-i list nodes kubectl auth can-i create pods kubectl auth can-i get pod/name-of-pod kubectl auth can-i get /url-fragment-of-api-request/ kubectl auth can-i '*' services ``` - And we can check permissions on behalf of other users: ```bash kubectl auth can-i list nodes \ --as some-user kubectl auth can-i list nodes \ --as system:serviceaccount:<namespace>:<name-of-service-account> ``` .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- class: extra-details ## Where does this `view` role come from? - Kubernetes defines a number of ClusterRoles intended to be bound to users - `cluster-admin` can do *everything* (think `root` on UNIX) - `admin` can do *almost everything* (except e.g.
changing resource quotas and limits) - `edit` is similar to `admin`, but cannot view or edit permissions - `view` has read-only access to most resources, except permissions and secrets *In many situations, these roles will be all you need.* *You can also customize them!* .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- class: extra-details ## Customizing the default roles - If you need to *add* permissions to these default roles (or others), you can do it through the [ClusterRole Aggregation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles) mechanism - This happens by creating a ClusterRole with the following labels: ```yaml metadata: labels: rbac.authorization.k8s.io/aggregate-to-admin: "true" rbac.authorization.k8s.io/aggregate-to-edit: "true" rbac.authorization.k8s.io/aggregate-to-view: "true" ``` - This ClusterRole's permissions will be added to `admin`/`edit`/`view` respectively - This is particularly useful when using CustomResourceDefinitions (since Kubernetes cannot guess which resources are sensitive and which ones aren't) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- class: extra-details ## Where do our permissions come from? - When interacting with the Kubernetes API, we are using a client certificate - We saw previously that this client certificate contained: `CN=kubernetes-admin` and `O=system:masters` - Let's look for these in existing ClusterRoleBindings: ```bash kubectl get clusterrolebindings -o yaml | grep -e kubernetes-admin -e system:masters ``` (`system:masters` should show up, but not `kubernetes-admin`.) - Where does this match come from? .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- class: extra-details ## The `system:masters` group - If we eyeball the output of `kubectl get clusterrolebindings -o yaml`, we'll find out!
- It is in the `cluster-admin` binding: ```bash kubectl describe clusterrolebinding cluster-admin ``` - This binding associates `system:masters` with the cluster role `cluster-admin` - And the `cluster-admin` is, basically, `root`: ```bash kubectl describe clusterrole cluster-admin ``` .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- class: extra-details ## Figuring out who can do what - For auditing purposes, sometimes we want to know who can perform an action - There is a proof-of-concept tool by Aqua Security which does exactly that: https://github.com/aquasecurity/kubectl-who-can - This is one way to install it: ```bash docker run --rm -v /usr/local/bin:/go/bin golang \ go get -v github.com/aquasecurity/kubectl-who-can ``` - This is one way to use it: ```bash kubectl-who-can create pods ``` .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- class: pic .interstitial[] --- name: toc-network-policies class: title Network policies .nav[ [Previous section](#toc-authentication-and-authorization) | [Back to table of contents](#toc-chapter-14) | [Next section](#toc-) ] .debug[(automatically generated title slide)] --- # Network policies - Namespaces help us to *organize* resources - Namespaces do not provide isolation - By default, every pod can contact every other pod - By default, every service accepts traffic from anyone - If we want this to be different, we need *network policies* .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## What's a network policy? A network policy is defined by the following things. - A *pod selector* indicating which pods it applies to e.g.: "all pods in namespace `blue` with the label `zone=internal`" - A list of *ingress rules* indicating which inbound traffic is allowed e.g.: "TCP connections to ports 8000 and 8080 coming from pods with label `zone=dmz`, and from the external subnet 4.42.6.0/24, except 4.42.6.5" - A list of *egress rules* indicating which outbound traffic is allowed A network policy can provide ingress rules, egress rules, or both. .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## How do network policies apply? - A pod can be "selected" by any number of network policies - If a pod isn't selected by any network policy, then its traffic is unrestricted (In other words: in the absence of network policies, all traffic is allowed) - If a pod is selected by at least one network policy, then all traffic is blocked ... ... 
unless it is explicitly allowed by one of these network policies .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- class: extra-details ## Traffic filtering is flow-oriented - Network policies deal with *connections*, not individual packets - Example: to allow HTTP (80/tcp) connections to pod A, you only need an ingress rule (You do not need a matching egress rule to allow response traffic to go through) - This also applies for UDP traffic (Allowing DNS traffic can be done with a single rule) - Network policy implementations use stateful connection tracking .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Pod-to-pod traffic - Connections from pod A to pod B have to be allowed by both pods: - pod A has to be unrestricted, or allow the connection as an *egress* rule - pod B has to be unrestricted, or allow the connection as an *ingress* rule - As a consequence: if a network policy restricts traffic going from/to a pod, the restriction cannot be overridden by a network policy selecting another pod - This prevents an entity managing network policies in namespace A (but without permission to do so in namespace B) from adding network policies giving them access to namespace B .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## The rationale for network policies - In network security, it is generally considered better to "deny all, then allow selectively" (The other approach, "allow all, then block selectively" makes it too easy to leave holes) - As soon as one network policy selects a pod, the pod enters this "deny all" logic - Further network policies can open additional access - Good network policies should be scoped as precisely as possible - In particular: make sure that the selector is not too broad (Otherwise, you end up affecting pods that were otherwise well secured) .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Our first network policy This is our game plan: - run a web server in a pod - create a network policy to block all access to the web server - create another network policy to allow access only from specific pods .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Running our test web server .exercise[ - Let's use the `nginx` image: ```bash kubectl create deployment testweb --image=nginx ``` - Find out the IP address of the pod with one of these two commands: ```bash kubectl get pods -o wide -l app=testweb IP=$(kubectl get pods -l app=testweb -o json | jq -r .items[0].status.podIP) ``` - Check that we can connect to the server: ```bash curl $IP ``` ] The `curl` command should show us the "Welcome to nginx!" page. .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Adding a very restrictive network policy - The policy will select pods with the label `app=testweb` - It will specify an empty list of ingress rules (matching nothing) .exercise[ - Apply the policy in this YAML file: ```bash kubectl apply -f ~/container.training/k8s/netpol-deny-all-for-testweb.yaml ``` - Check if we can still access the server: ```bash curl $IP ``` ] The `curl` command should now time out. 
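To double-check that the policy object was created (and see which pods it selects), we can list and describe network policies; for instance, using the policy name shown on the next slide:

```bash
kubectl get networkpolicies
kubectl describe networkpolicy deny-all-for-testweb
```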
.debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Looking at the network policy This is the file that we applied: ```yaml kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-all-for-testweb spec: podSelector: matchLabels: app: testweb ingress: [] ``` .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Allowing connections only from specific pods - We want to allow traffic from pods with the label `run=testcurl` - Reminder: this label is automatically applied when we do `kubectl run testcurl ...` .exercise[ - Apply another policy: ```bash kubectl apply -f ~/container.training/k8s/netpol-allow-testcurl-for-testweb.yaml ``` ] .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Looking at the network policy This is the second file that we applied: ```yaml kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-testcurl-for-testweb spec: podSelector: matchLabels: app: testweb ingress: - from: - podSelector: matchLabels: run: testcurl ``` .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Testing the network policy - Let's create pods with, and without, the required label .exercise[ - Try to connect to testweb from a pod with the `run=testcurl` label: ```bash kubectl run testcurl --rm -i --image=centos -- curl -m3 $IP ``` - Try to connect to testweb with a different label: ```bash kubectl run testkurl --rm -i --image=centos -- curl -m3 $IP ``` ] The first command will work (and show the "Welcome to nginx!" page). The second command will fail and time out after 3 seconds. (The timeout is obtained with the `-m3` option.) .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## An important warning - Some network plugins only have partial support for network policies - For instance, Weave added support for egress rules [in version 2.4](https://github.com/weaveworks/weave/pull/3313) (released in July 2018) - But only recently added support for ipBlock [in version 2.5](https://github.com/weaveworks/weave/pull/3367) (released in Nov 2018) - Unsupported features might be silently ignored (Making you believe that you are secure, when you're not) .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Network policies, pods, and services - Network policies apply to *pods* - A *service* can select multiple pods (And load balance traffic across them) - It is possible that we can connect to some pods, but not some others (Because of how network policies have been defined for these pods) - In that case, connections to the service will randomly pass or fail (Depending on whether the connection was sent to a pod that we have access to or not) .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Network policies and namespaces - A good strategy is to isolate a namespace, so that: - all the pods in the namespace can communicate together - other namespaces cannot access the pods - external access has to be enabled explicitly - Let's see what this would look like for the DockerCoins app! 
.debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Network policies for DockerCoins - We are going to apply two policies - The first policy will prevent traffic from other namespaces - The second policy will allow traffic to the `webui` pods - That's all we need for that app! .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Blocking traffic from other namespaces This policy selects all pods in the current namespace. It allows traffic only from pods in the current namespace. (An empty `podSelector` means "all pods.") ```yaml kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-from-other-namespaces spec: podSelector: {} ingress: - from: - podSelector: {} ``` .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Allowing traffic to `webui` pods This policy selects all pods with label `app=webui`. It allows traffic from any source. (An empty `from` field means "all sources.") ```yaml kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-webui spec: podSelector: matchLabels: app: webui ingress: - from: [] ``` .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Applying both network policies - Both network policies are declared in the file `k8s/netpol-dockercoins.yaml` .exercise[ - Apply the network policies: ```bash kubectl apply -f ~/container.training/k8s/netpol-dockercoins.yaml ``` - Check that we can still access the web UI from outside (and that the app is still working correctly!) - Check that we can't connect anymore to `rng` or `hasher` through their ClusterIP ] Note: using `kubectl proxy` or `kubectl port-forward` allows us to connect regardless of existing network policies. This allows us to debug and troubleshoot easily, without having to poke holes in our firewall. .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Cleaning up our network policies - The network policies that we have installed block all traffic to the default namespace - We should remove them, otherwise further exercises will fail! .exercise[ - Remove all network policies: ```bash kubectl delete networkpolicies --all ``` ] .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Protecting the control plane - Should we add network policies to block unauthorized access to the control plane? (etcd, API server, etc.) -- - At first, it seems like a good idea ... 
-- - But it *shouldn't* be necessary: - not all network plugins support network policies - the control plane is secured by other methods (mutual TLS, mostly) - the code running in our pods can reasonably expect to contact the API (and it can do so safely thanks to the API permission model) - If we block access to the control plane, we might disrupt legitimate code - ...Without necessarily improving security .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Further resources - As always, the [Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/) is a good starting point - The API documentation has a lot of detail about the format of various objects: - [NetworkPolicy](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#networkpolicy-v1-networking-k8s-io) - [NetworkPolicySpec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#networkpolicyspec-v1-networking-k8s-io) - [NetworkPolicyIngressRule](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#networkpolicyingressrule-v1-networking-k8s-io) - etc. - And two resources by [Ahmet Alp Balkan](https://ahmet.im/): - a [very good talk about network policies](https://www.youtube.com/watch?list=PLj6h78yzYM2P-3-xqvmWaZbbI1sW-ulZb&v=3gGpMmYeEO8) at KubeCon North America 2017 - a repository of [ready-to-use recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes) for network policies .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)]
.debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Dockerfile_Tips.md)] --- ## Production image This Dockerfile builds an image leveraging gunicorn: ```dockerfile FROM python RUN pip install flask RUN pip install gunicorn RUN pip install redis COPY . /src WORKDIR /src CMD gunicorn --bind 0.0.0.0:5000 --workers 10 counter:app EXPOSE 5000 ``` (Source: [trainingwheels Dockerfile](https://github.com/jpetazzo/trainingwheels/blob/master/www/Dockerfile)) .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Dockerfile_Tips.md)] --- ## Development Compose file This Compose file uses the same image, but with a few overrides for development: - the Flask development server is used (overriding `CMD`), - the `DEBUG` environment variable is set, - a volume is used to provide a faster local development workflow. .small[ ```yaml services: www: build: www ports: - 8000:5000 user: nobody environment: DEBUG: 1 command: python counter.py volumes: - ./www:/src ``` ] (Source: [trainingwheels Compose file](https://github.com/jpetazzo/trainingwheels/blob/master/docker-compose.yml)) .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Dockerfile_Tips.md)] --- ## How do we know which best practices are better? - The main goal of containers is to make our lives easier. - In this chapter, we showed many ways to write Dockerfiles. - These Dockerfiles sometimes use diametrically opposed techniques. - Yet, they were the "right" ones *for a specific situation.* - It's OK (and even encouraged) to start simple and evolve as needed. - Feel free to review this chapter later (after writing a few Dockerfiles) for inspiration! .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Dockerfile_Tips.md)] --- class: pic .interstitial[] --- name: toc-exercise--writing-better-dockerfiles class: title Exercise - writing better Dockerfiles .nav[ [Previous section](#toc-dockerfile-examples) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-naming-and-inspecting-containers) ] .debug[(automatically generated title slide)] --- # Exercise - writing better Dockerfiles Let's update our Dockerfiles to leverage multi-stage builds! The code is at: https://github.com/jpetazzo/wordsmith Use a different tag for these images, so that we can compare their sizes. What's the size difference between single-stage and multi-stage builds? .debug[[containers/Exercise_Dockerfile_Advanced.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Exercise_Dockerfile_Advanced.md)] --- class: pic .interstitial[] --- name: toc-naming-and-inspecting-containers class: title Naming and inspecting containers .nav[ [Previous section](#toc-exercise--writing-better-dockerfiles) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-getting-inside-a-container) ] .debug[(automatically generated title slide)] --- class: title # Naming and inspecting containers  .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Naming_And_Inspecting.md)] --- ## Objectives In this lesson, we will learn about an important Docker concept: container *naming*. Naming allows us to: * Easily reference a container. * Ensure the uniqueness of a specific container.
We will also see the `inspect` command, which gives a lot of details about a container. .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Naming_And_Inspecting.md)] --- ## Naming our containers So far, we have referenced containers with their ID. We have copy-pasted the ID, or used a shortened prefix. But each container can also be referenced by its name. If a container is named `thumbnail-worker`, I can do: ```bash $ docker logs thumbnail-worker $ docker stop thumbnail-worker etc. ``` .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Naming_And_Inspecting.md)] --- ## Default names When we create a container, if we don't give a specific name, Docker will pick one for us. It will be the concatenation of: * A mood (furious, goofy, suspicious, boring...) * The name of a famous inventor (tesla, darwin, wozniak...) Examples: `happy_curie`, `clever_hopper`, `jovial_lovelace` ... .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Naming_And_Inspecting.md)] --- ## Specifying a name You can set the name of the container when you create it. ```bash $ docker run --name ticktock jpetazzo/clock ``` If you specify a name that already exists, Docker will refuse to create the container. This lets us enforce the uniqueness of a given resource. .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Naming_And_Inspecting.md)] --- ## Renaming containers * You can rename containers with `docker rename`. * This allows you to "free up" a name without destroying the associated container. .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Naming_And_Inspecting.md)] --- ## Inspecting a container The `docker inspect` command will output a very detailed JSON map. ```bash $ docker inspect <containerID> [{ ... (many pages of JSON here) ... ``` There are multiple ways to consume that information. .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Naming_And_Inspecting.md)] --- ## Parsing JSON with the Shell * You *could* grep and cut or awk the output of `docker inspect`. * Please, don't. * It's painful. * If you really must parse JSON from the Shell, use JQ! (It's great.) ```bash $ docker inspect <containerID> | jq . ``` * We will see a better solution which doesn't require extra tools. .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Naming_And_Inspecting.md)] --- ## Using `--format` You can specify a format string, which will be parsed by Go's text/template package. ```bash $ docker inspect --format '{{ json .Created }}' <containerID> "2015-02-24T07:21:11.712240394Z" ``` * The generic syntax is to wrap the expression with double curly braces. * The expression starts with a dot representing the JSON object. * Then each field or member can be accessed in dotted notation syntax. * The optional `json` keyword asks for valid JSON output. (e.g. here it adds the surrounding double-quotes.)
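We can also combine several fields in a single format string. For instance, here is a quick way (a small sketch; the output depends on the current state of the `ticktock` container created earlier) to see at a glance whether it is running and which image it was started from:

```bash
# Show the container's state and the image it was created from.
$ docker inspect --format '{{ .State.Status }} ({{ .Config.Image }})' ticktock
running (jpetazzo/clock)
```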
.debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Naming_And_Inspecting.md)] --- class: pic .interstitial[] --- name: toc-getting-inside-a-container class: title Getting inside a container .nav[ [Previous section](#toc-naming-and-inspecting-containers) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-container-networking-basics) ] .debug[(automatically generated title slide)] --- class: title # Getting inside a container  .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Getting_Inside.md)] --- ## Objectives On a traditional server or VM, we sometimes need to: * log into the machine (with SSH or on the console), * analyze the disks (by removing them or rebooting with a rescue system). In this chapter, we will see how to do that with containers. .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Getting_Inside.md)] --- ## Getting a shell Every once in a while, we want to log into a machine. In a perfect world, this shouldn't be necessary. * You need to install or update packages (and their configuration)? Use configuration management. (e.g. Ansible, Chef, Puppet, Salt...) * You need to view logs and metrics? Collect and access them through a centralized platform. In the real world, though ... we often need shell access! .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Getting_Inside.md)] --- ## Not getting a shell Even without a perfect deployment system, we can do many operations without getting a shell. * Installing packages can (and should) be done in the container image. * Configuration can be done at the image level, or when the container starts. * Dynamic configuration can be stored in a volume (shared with another container). * Logs written to stdout are automatically collected by the Docker Engine. * Other logs can be written to a shared volume. * Process information and metrics are visible from the host. _Let's save logging, volumes ... for later, but let's have a look at process information!_ .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Getting_Inside.md)] --- ## Viewing container processes from the host If you run Docker on Linux, container processes are visible on the host. ```bash $ ps faux | less ``` * Scroll around the output of this command. * You should see the `jpetazzo/clock` container. * A containerized process is just like any other process on the host. * We can use tools like `lsof`, `strace`, `gdb` ... to analyze them. .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Getting_Inside.md)] --- class: extra-details ## What's the difference between a container process and a host process? * Each process (containerized or not) belongs to *namespaces* and *cgroups*. * The namespaces and cgroups determine what a process can "see" and "do". * Analogy: each process (containerized or not) runs with a specific UID (user ID). * UID=0 is root, and has elevated privileges. Other UIDs are normal users.
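To make this a bit more concrete, here is one way to peek at the namespaces and cgroups of a containerized process from the host (a sketch; the PID 1234 is just an example, found with `ps faux`):

```bash
# Each symlink in /proc/<PID>/ns identifies one namespace the process belongs to.
$ sudo ls -l /proc/1234/ns
# This file lists the cgroups the process belongs to.
$ cat /proc/1234/cgroup
```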
_We will give more details about namespaces and cgroups later._ .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Getting_Inside.md)] --- ## Getting a shell in a running container * Sometimes, we need to get a shell anyway. * We _could_ run some SSH server in the container ... * But it is easier to use `docker exec`. ```bash $ docker exec -ti ticktock sh ``` * This creates a new process (running `sh`) _inside_ the container. * This can also be done "manually" with the tool `nsenter`. .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Getting_Inside.md)] --- ## Caveats * The tool that you want to run needs to exist in the container. * Some tools (like `ip netns exec`) let you attach to _one_ namespace at a time. (This lets you e.g. set up network interfaces, even if you don't have `ifconfig` or `ip` in the container.) * Most importantly: the container needs to be running. * What if the container is stopped or crashed? .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Getting_Inside.md)] --- ## Getting a shell in a stopped container * A stopped container is only _storage_ (like a disk drive). * We cannot SSH into a disk drive or USB stick! * We need to connect the disk to a running machine. * How does that translate into the container world? .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Getting_Inside.md)] --- ## Analyzing a stopped container As an exercise, we are going to try to find out what's wrong with `jpetazzo/crashtest`. ```bash docker run jpetazzo/crashtest ``` The container starts, but then stops immediately, without any output. What would MacGyver™ do? First, let's check the status of that container. ```bash docker ps -l ``` .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Getting_Inside.md)] --- ## Viewing filesystem changes * We can use `docker diff` to see files that were added / changed / removed. ```bash docker diff <containerID> ``` * The container ID was shown by `docker ps -l`. * We can also see it with `docker ps -lq`. * The output of `docker diff` shows some interesting log files! .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Getting_Inside.md)] --- ## Accessing files * We can extract files with `docker cp`. ```bash docker cp <containerID>:/var/log/nginx/error.log . ``` * Then we can look at that log file. ```bash cat error.log ``` (The directory `/run/nginx` doesn't exist.) .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Getting_Inside.md)] --- ## Exploring a crashed container * We can restart a container with `docker start` ... * ... But it will probably crash again immediately!
* We cannot specify a different program to run with `docker start` * But we can create a new image from the crashed container ```bash docker commit <containerID> debugimage ``` * Then we can run a new container from that image, with a custom entrypoint ```bash docker run -ti --entrypoint sh debugimage ``` .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Getting_Inside.md)] --- class: extra-details ## Obtaining a complete dump * We can also dump the entire filesystem of a container. * This is done with `docker export`. * It generates a tar archive. ```bash docker export <containerID> | tar tv ``` This will give a detailed listing of the content of the container. .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Getting_Inside.md)] --- class: pic .interstitial[] --- name: toc-container-networking-basics class: title Container networking basics .nav[ [Previous section](#toc-getting-inside-a-container) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-container-network-drivers) ] .debug[(automatically generated title slide)] --- class: title # Container networking basics  .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Networking_Basics.md)] --- ## Objectives We will now run network services (accepting requests) in containers. At the end of this section, you will be able to: * Run a network service in a container. * Manipulate container networking basics. * Find a container's IP address. We will also explain the different network models used by Docker. .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Networking_Basics.md)] --- ## A simple, static web server Run the Docker Hub image `nginx`, which contains a basic web server: ```bash $ docker run -d -P nginx 66b1ce719198711292c8f34f84a7b68c3876cf9f67015e752b94e189d35a204e ``` * Docker will download the image from the Docker Hub. * `-d` tells Docker to run the image in the background. * `-P` tells Docker to make this service reachable from other computers. (`-P` is the short version of `--publish-all`.) But, how do we connect to our web server now? .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Networking_Basics.md)] --- ## Finding our web server port We will use `docker ps`: ```bash $ docker ps CONTAINER ID IMAGE ... PORTS ... e40ffb406c9e nginx ... 0.0.0.0:32768->80/tcp ... ``` * The web server is running on port 80 inside the container. * This port is mapped to port 32768 on our Docker host. We will explain the whys and hows of this port mapping. But first, let's make sure that everything works properly. .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Networking_Basics.md)] --- ## Connecting to our web server (GUI) Point your browser to the IP address of your Docker host, on the port shown by `docker ps` for container port 80.  .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Networking_Basics.md)] --- ## Connecting to our web server (CLI) You can also use `curl` directly from the Docker host.
Make sure to use the right port number if it is different from the example below: ```bash $ curl localhost:32768 Welcome to nginx! ... ``` .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Networking_Basics.md)] --- ## How does Docker know which port to map? * There is metadata in the image telling "this image has something on port 80". * We can see that metadata with `docker inspect`: ```bash $ docker inspect --format '{{.Config.ExposedPorts}}' nginx map[80/tcp:{}] ``` * This metadata was set in the Dockerfile, with the `EXPOSE` keyword. * We can see that with `docker history`: ```bash $ docker history nginx IMAGE CREATED CREATED BY 7f70b30f2cc6 11 days ago /bin/sh -c #(nop) CMD ["nginx" "-g" "… 11 days ago /bin/sh -c #(nop) STOPSIGNAL [SIGTERM] 11 days ago /bin/sh -c #(nop) EXPOSE 80/tcp ``` .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Networking_Basics.md)] --- ## Why are we mapping ports? * We are out of IPv4 addresses. * Containers cannot have public IPv4 addresses. * They have private addresses. * Services have to be exposed port by port. * Ports have to be mapped to avoid conflicts. .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Networking_Basics.md)] --- ## Finding the web server port in a script Parsing the output of `docker ps` would be painful. There is a command to help us: ```bash $ docker port <containerID> 80 32768 ``` .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Networking_Basics.md)] --- ## Manual allocation of port numbers If you want to set port numbers yourself, no problem: ```bash $ docker run -d -p 80:80 nginx $ docker run -d -p 8000:80 nginx $ docker run -d -p 8080:80 -p 8888:80 nginx ``` * We are running three NGINX web servers. * The first one is exposed on port 80. * The second one is exposed on port 8000. * The third one is exposed on ports 8080 and 8888. Note: the convention is `port-on-host:port-on-container`. .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Networking_Basics.md)] --- ## Plumbing containers into your infrastructure There are many ways to integrate containers in your network. * Start the container, letting Docker allocate a public port for it. Then retrieve that port number and feed it to your configuration. * Pick a fixed port number in advance, when you generate your configuration. Then start your container by setting the port numbers manually. * Use a network plugin, connecting your containers with e.g. VLANs, tunnels... * Enable *Swarm Mode* to deploy across a cluster. The container will then be reachable through any node of the cluster. When using Docker through an extra management layer like Mesos or Kubernetes, these will usually provide their own mechanism to expose containers. .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Networking_Basics.md)] --- ## Finding the container's IP address We can use the `docker inspect` command to find the IP address of the container.
```bash $ docker inspect --format '{{ .NetworkSettings.IPAddress }}' <containerID> 172.17.0.3 ``` * `docker inspect` is an advanced command that can retrieve a ton of information about our containers. * Here, we provide it with a format string to extract exactly the private IP address of the container. .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Networking_Basics.md)] --- ## Pinging our container We can test connectivity to the container using the IP address we've just discovered. Let's see this now by using the `ping` tool. ```bash $ ping <ipAddress> 64 bytes from <ipAddress>: icmp_req=1 ttl=64 time=0.085 ms 64 bytes from <ipAddress>: icmp_req=2 ttl=64 time=0.085 ms 64 bytes from <ipAddress>: icmp_req=3 ttl=64 time=0.085 ms ``` .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Networking_Basics.md)] --- ## Section summary We've learned how to: * Expose a network port. * Manipulate container networking basics. * Find a container's IP address. In the next chapter, we will see how to connect containers together without exposing their ports. .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Networking_Basics.md)] --- class: pic .interstitial[] --- name: toc-container-network-drivers class: title Container network drivers .nav[ [Previous section](#toc-container-networking-basics) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-the-container-network-model) ] .debug[(automatically generated title slide)] --- # Container network drivers The Docker Engine supports many different network drivers. The built-in drivers include: * `bridge` (default) * `none` * `host` * `container` The driver is selected with `docker run --net ...`. The different drivers are explained in more detail on the following slides. .debug[[containers/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Network_Drivers.md)] --- ## The default bridge * By default, the container gets a virtual `eth0` interface. (In addition to its own private `lo` loopback interface.) * That interface is provided by a `veth` pair. * It is connected to the Docker bridge. (Named `docker0` by default; configurable with `--bridge`.) * Addresses are allocated on a private, internal subnet. (Docker uses 172.17.0.0/16 by default; configurable with `--bip`.) * Outbound traffic goes through an iptables MASQUERADE rule. * Inbound traffic goes through an iptables DNAT rule. * The container can have its own routes, iptables rules, etc. .debug[[containers/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Network_Drivers.md)] --- ## The null driver * Container is started with `docker run --net none ...` * It only gets the `lo` loopback interface. No `eth0`. * It can't send or receive network traffic. * Useful for isolated/untrusted workloads. .debug[[containers/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Network_Drivers.md)] --- ## The host driver * Container is started with `docker run --net host ...` * It sees (and can access) the network interfaces of the host. * It can bind any address, any port (for ill and for good). * Network traffic doesn't have to go through NAT, bridge, or veth. * Performance = native!
Use cases: * Performance sensitive applications (VOIP, gaming, streaming...) * Peer discovery (e.g. Erlang port mapper, Raft, Serf...) .debug[[containers/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Network_Drivers.md)] --- ## The container driver * Container is started with `docker run --net container:id ...` * It re-uses the network stack of another container. * It shares with this other container the same interfaces, IP address(es), routes, iptables rules, etc. * Those containers can communicate over their `lo` interface. (i.e. one can bind to 127.0.0.1 and the others can connect to it.) .debug[[containers/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Network_Drivers.md)] --- class: pic .interstitial[] --- name: toc-the-container-network-model class: title The Container Network Model .nav[ [Previous section](#toc-container-network-drivers) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-service-discovery-with-containers) ] .debug[(automatically generated title slide)] --- class: title # The Container Network Model  .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Objectives We will learn about the CNM (Container Network Model). At the end of this lesson, you will be able to: * Create a private network for a group of containers. * Use container naming to connect services together. * Dynamically connect and disconnect containers to networks. * Set the IP address of a container. We will also explain the principle of overlay networks and network plugins. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## The Container Network Model The CNM was introduced in Engine 1.9.0 (November 2015). The CNM adds the notion of a *network*, and a new top-level command to manipulate and see those networks: `docker network`. ```bash $ docker network ls NETWORK ID NAME DRIVER 6bde79dfcf70 bridge bridge 8d9c78725538 none null eb0eeab782f4 host host 4c1ff84d6d3f blog-dev overlay 228a4355d548 blog-prod overlay ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## What's in a network? * Conceptually, a network is a virtual switch. * It can be local (to a single Engine) or global (spanning multiple hosts). * A network has an IP subnet associated to it. * Docker will allocate IP addresses to the containers connected to a network. * Containers can be connected to multiple networks. * Containers can be given per-network names and aliases. * The names and aliases can be resolved via an embedded DNS server. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Network implementation details * A network is managed by a *driver*. * The built-in drivers include: * `bridge` (default) * `none` * `host` * `macvlan` * A multi-host driver, *overlay*, is available out of the box (for Swarm clusters). * More drivers can be provided by plugins (OVS, VLAN...) * A network can have a custom IPAM (IP allocator). 
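For instance, here is what creating a network with an explicit driver and a custom IPAM configuration could look like (a sketch; the network name `custom-net` and the address ranges are arbitrary):

```bash
# Use the bridge driver, and tell the default IPAM driver which subnet
# to use and which range to allocate container addresses from.
$ docker network create --driver bridge \
    --subnet 10.10.0.0/16 --ip-range 10.10.42.0/24 \
    custom-net
```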
.debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Differences with the CNI * CNI = Container Network Interface * CNI is used notably by Kubernetes * With CNI, all the nodes and containers are on a single IP network * Both CNI and CNM offer the same functionality, but with very different methods .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: pic ## Single container in a Docker network  .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: pic ## Two containers on a single Docker network  .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: pic ## Two containers on two Docker networks  .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Creating a network Let's create a network called `dev`. ```bash $ docker network create dev 4c1ff84d6d3f1733d3e233ee039cac276f425a9d5228a4355d54878293a889ba ``` The network is now visible with the `network ls` command: ```bash $ docker network ls NETWORK ID NAME DRIVER 6bde79dfcf70 bridge bridge 8d9c78725538 none null eb0eeab782f4 host host 4c1ff84d6d3f dev bridge ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Placing containers on a network We will create a *named* container on this network. It will be reachable with its name, `es`. ```bash $ docker run -d --name es --net dev elasticsearch:2 8abb80e229ce8926c7223beb69699f5f34d6f1d438bfc5682db893e798046863 ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Communication between containers Now, create another container on this network. .small[ ```bash $ docker run -ti --net dev alpine sh root@0ecccdfa45ef:/# ``` ] From this new container, we can resolve and ping the other one, using its assigned name: .small[ ```bash / # ping es PING es (172.18.0.2) 56(84) bytes of data. 64 bytes from es.dev (172.18.0.2): icmp_seq=1 ttl=64 time=0.221 ms 64 bytes from es.dev (172.18.0.2): icmp_seq=2 ttl=64 time=0.114 ms 64 bytes from es.dev (172.18.0.2): icmp_seq=3 ttl=64 time=0.114 ms ^C --- es ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2000ms rtt min/avg/max/mdev = 0.114/0.149/0.221/0.052 ms root@0ecccdfa45ef:/# ``` ] .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Resolving container addresses In Docker Engine 1.9, name resolution is implemented with `/etc/hosts`, which is updated each time containers are added or removed.
.small[ ```bash [root@0ecccdfa45ef /]# cat /etc/hosts 172.18.0.3 0ecccdfa45ef 127.0.0.1 localhost ::1 localhost ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters 172.18.0.2 es 172.18.0.2 es.dev ``` ] In Docker Engine 1.10, this has been replaced by a dynamic resolver. (This avoids race conditions when updating `/etc/hosts`.) .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: pic .interstitial[] --- name: toc-service-discovery-with-containers class: title Service discovery with containers .nav[ [Previous section](#toc-the-container-network-model) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-ambassadors) ] .debug[(automatically generated title slide)] --- # Service discovery with containers * Let's try to run an application that requires two containers. * The first container is a web server. * The other one is a Redis data store. * We will place them both on the `dev` network created before. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Running the web server * The application is provided by the container image `jpetazzo/trainingwheels`. * We don't know much about it, so we will try to run it and see what happens! Start the container, exposing all its ports: ```bash $ docker run --net dev -d -P jpetazzo/trainingwheels ``` Check the port that has been allocated to it: ```bash $ docker ps -l ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Test the web server * If we connect to the application now, we will see an error page:  * This is because the Redis service is not running. * This container tries to resolve the name `redis`. Note: we're not using a FQDN or an IP address here; just `redis`. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Start the data store * We need to start a Redis container. * That container must be on the same network as the web server. * It must have the right name (`redis`) so the application can find it. Start the container: ```bash $ docker run --net dev --name redis -d redis ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Test the web server again * If we connect to the application now, we should see that the app is working correctly:  * When the app tries to resolve `redis`, instead of getting a DNS error, it gets the IP address of our Redis container. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## A few words on *scope* * What if we want to run multiple copies of our application? * Since names are unique, there can be only one container named `redis` at a time. * However, we can give our container an additional network alias with `--net-alias`. * `--net-alias` is scoped per network, and independent from the container name.
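For example, to run a second copy of the whole stack without any name conflict, we could put it on its own network and give that copy's Redis container the same alias there (a sketch; the network name `dev2` is arbitrary):

```bash
# A second, independent copy of the app and its data store.
$ docker network create dev2
$ docker run --net dev2 -d -P jpetazzo/trainingwheels
$ docker run --net dev2 --net-alias redis -d redis
```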
.debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Using a network alias instead of a name Let's remove the `redis` container: ```bash $ docker rm -f redis ``` And create one that doesn't block the `redis` name: ```bash $ docker run --net dev --net-alias redis -d redis ``` Check that the app still works (but the counter is back to 1, since we wiped out the old Redis container). .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Names are *local* to each network Let's try to ping our `es` container from another container, when that other container is *not* on the `dev` network. ```bash $ docker run --rm alpine ping es ping: bad address 'es' ``` Names can be resolved only when containers are on the same network. Containers can contact each other only when they are on the same network (you can try to ping using the IP address to verify). .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Network aliases We would like to have another network, `prod`, with its own `es` container. But there can be only one container named `es`! We will use *network aliases*. A container can have multiple network aliases. Network aliases are *local* to a given network (only exist in this network). Multiple containers can have the same network alias (even on the same network). In Docker Engine 1.11, resolving a network alias yields the IP addresses of all containers holding this alias. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Creating containers on another network Create the `prod` network. ```bash $ docker network create prod 5a41562fecf2d8f115bedc16865f7336232a04268bdf2bd816aecca01b68d50c ``` We can now create multiple containers with the `es` alias on the new `prod` network. ```bash $ docker run -d --name prod-es-1 --net-alias es --net prod elasticsearch:2 38079d21caf0c5533a391700d9e9e920724e89200083df73211081c8a356d771 $ docker run -d --name prod-es-2 --net-alias es --net prod elasticsearch:2 1820087a9c600f43159688050dcc164c298183e1d2e62d5694fd46b10ac3bc3d ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Resolving network aliases Let's try DNS resolution first, using the `nslookup` tool that ships with the `alpine` image. ```bash $ docker run --net prod --rm alpine nslookup es Name: es Address 1: 172.23.0.3 prod-es-2.prod Address 2: 172.23.0.2 prod-es-1.prod ``` (You can ignore the `can't resolve '(null)'` errors.) .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Connecting to aliased containers Each ElasticSearch instance has a name (generated when it is started). This name can be seen when we issue a simple HTTP request on the ElasticSearch API endpoint. 
Try the following command a few times: .small[ ```bash $ docker run --rm --net dev centos curl -s es:9200 { "name" : "Tarot", ... } ``` ] Then try it a few times by replacing `--net dev` with `--net prod`: .small[ ```bash $ docker run --rm --net prod centos curl -s es:9200 { "name" : "The Symbiote", ... } ``` ] .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Good to know ... * Docker will not create network names and aliases on the default `bridge` network. * Therefore, if you want to use those features, you have to create a custom network first. * Network aliases are *not* unique on a given network. * i.e., multiple containers can have the same alias on the same network. * In that scenario, the Docker DNS server will return multiple records. (i.e. you will get DNS round robin out of the box.) * Enabling *Swarm Mode* gives access to clustering and load balancing with IPVS. * Creation of networks and network aliases is generally automated with tools like Compose. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## A few words about round robin DNS Don't rely exclusively on round robin DNS to achieve load balancing. Many factors can affect DNS resolution, and you might see: - all traffic going to a single instance; - traffic being split (unevenly) between some instances; - different behavior depending on your application language; - different behavior depending on your base distro; - different behavior depending on other factors (sic). It's OK to use DNS to discover available endpoints, but remember that you have to re-resolve every now and then to discover new endpoints. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Custom networks When creating a network, extra options can be provided. * `--internal` disables outbound traffic (the network won't have a default gateway). * `--gateway` indicates which address to use for the gateway (when outbound traffic is allowed). * `--subnet` (in CIDR notation) indicates the subnet to use. * `--ip-range` (in CIDR notation) indicates the subnet to allocate from. * `--aux-address` allows specifying a list of reserved addresses (which won't be allocated to containers). .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Setting containers' IP address * It is possible to set a container's address with `--ip`. * The IP address has to be within the subnet used for the container. A full example would look like this. 
```bash $ docker network create --subnet 10.66.0.0/16 pubnet 42fb16ec412383db6289a3e39c3c0224f395d7f85bcb1859b279e7a564d4e135 $ docker run --net pubnet --ip 10.66.66.66 -d nginx b2887adeb5578a01fd9c55c435cad56bbbe802350711d2743691f95743680b09 ``` *Note: don't hard code container IP addresses in your code!* *I repeat: don't hard code container IP addresses in your code!* .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Overlay networks * The features we've seen so far only work when all containers are on a single host. * If containers span multiple hosts, we need an *overlay* network to connect them together. * Docker ships with a default network plugin, `overlay`, implementing an overlay network leveraging VXLAN, *enabled with Swarm Mode*. * Other plugins (Weave, Calico...) can provide overlay networks as well. * Once you have an overlay network, *all the features that we've used in this chapter work identically across multiple hosts.* .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Multi-host networking (overlay) Out of scope for this intro-level workshop! Very short instructions: - enable Swarm Mode (`docker swarm init` then `docker swarm join` on other nodes) - `docker network create mynet --driver overlay` - `docker service create --network mynet myimage` If you want to learn more about Swarm mode, you can check [this video](https://www.youtube.com/watch?v=EuzoEaE6Cqs) or [these slides](https://container.training/swarm-selfpaced.yml.html). .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Multi-host networking (plugins) Out of scope for this intro-level workshop! General idea: - install the plugin (they often ship within containers) - run the plugin (if it's in a container, it will often require extra parameters; don't just `docker run` it blindly!) - some plugins require configuration or activation (creating a special file that tells Docker "use the plugin whose control socket is at the following location") - you can then `docker network create --driver pluginname` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Connecting and disconnecting dynamically * So far, we have specified which network to use when starting the container. * The Docker Engine also allows connecting and disconnecting while the container is running. * This feature is exposed through the Docker API, and through two Docker CLI commands: * `docker network connect <network> <container>` * `docker network disconnect <network> <container>` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Dynamically connecting to a network * We have a container named `es` connected to a network named `dev`. * Let's start a simple alpine container on the default network: ```bash $ docker run -ti alpine sh / # ``` * In this container, try to ping the `es` container: ```bash / # ping es ping: bad address 'es' ``` This doesn't work, but we will change that by connecting the container.
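Before connecting it, we can double-check which containers are currently attached to the `dev` network (a sketch; the exact fields available can vary between Engine versions):

```bash
# List the names of the containers attached to the dev network.
$ docker network inspect dev --format '{{ range .Containers }}{{ .Name }} {{ end }}'
```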
.debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Finding the container ID and connecting it * Figure out the ID of our alpine container; here are two methods: * looking at `/etc/hostname` in the container, * running `docker ps -lq` on the host. * Run the following command on the host: ```bash $ docker network connect dev `<container_id>` ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Checking what we did * Try again to `ping es` from the container. * It should now work correctly: ```bash / # ping es PING es (172.20.0.3): 56 data bytes 64 bytes from 172.20.0.3: seq=0 ttl=64 time=0.376 ms 64 bytes from 172.20.0.3: seq=1 ttl=64 time=0.130 ms ^C ``` * Interrupt it with Ctrl-C. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Looking at the network setup in the container We can look at the list of network interfaces with `ifconfig`, `ip a`, or `ip l`: .small[ ```bash / # ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 18: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0 valid_lft forever preferred_lft forever 20: eth1@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP link/ether 02:42:ac:14:00:04 brd ff:ff:ff:ff:ff:ff inet 172.20.0.4/16 brd 172.20.255.255 scope global eth1 valid_lft forever preferred_lft forever / # ``` ] Each network connection is materialized with a virtual network interface. As we can see, we can be connected to multiple networks at the same time. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- ## Disconnecting from a network * Let's try the symmetrical command to disconnect the container: ```bash $ docker network disconnect dev <container_id> ``` * From now on, if we try to ping `es`, it will not resolve: ```bash / # ping es ping: bad address 'es' ``` * Trying to ping the IP address directly won't work either: ```bash / # ping 172.20.0.3 ... (nothing happens until we interrupt it with Ctrl-C) ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Network aliases are scoped per network * Each network has its own set of network aliases. * We saw this earlier: `es` resolves to different addresses in `dev` and `prod`. * If we are connected to multiple networks, the resolver looks up names in each of them (as of Docker Engine 18.03, it is the connection order) and stops as soon as the name is found. * Therefore, if we are connected to both `dev` and `prod`, resolving `es` will **not** give us the addresses of all the `es` services, but only the ones in `dev` or `prod`. * However, we can look up `es.dev` or `es.prod` if we need to.
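Here is one way to see that in action (a sketch, assuming the `dev` and `prod` networks and their `es` containers from the previous slides still exist; the container name `resolver` is arbitrary):

```bash
# Start a long-running container on dev, then attach it to prod as well.
$ docker run -d --name resolver --net dev alpine sleep 86400
$ docker network connect prod resolver
# We can explicitly resolve the alias in each network.
$ docker exec resolver nslookup es.dev
$ docker exec resolver nslookup es.prod
```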
.debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Finding out about our networks and names * We can do reverse DNS lookups on containers' IP addresses. * If the IP address belongs to a network (other than the default bridge), the result will be: ``` name-or-first-alias-or-container-id.network-name ``` * Example: .small[ ```bash $ docker run -ti --net prod --net-alias hello alpine / # apk add --no-cache drill ... OK: 5 MiB in 13 packages / # ifconfig eth0 Link encap:Ethernet HWaddr 02:42:AC:15:00:03 inet addr:`172.21.0.3` Bcast:172.21.255.255 Mask:255.255.0.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 ... / # drill -t ptr `3.0.21.172`.in-addr.arpa ... ;; ANSWER SECTION: 3.0.21.172.in-addr.arpa. 600 IN PTR `hello.prod`. ... ``` ] .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Building with a custom network * We can build a Dockerfile with a custom network with `docker build --network NAME`. * This can be used to check that a build doesn't access the network. (But keep in mind that most Dockerfiles will fail, because they need to install remote packages and dependencies!) * This may be used to access an internal package repository. (But try to use a multi-stage build instead, if possible!) .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Container_Network_Model.md)] --- class: pic .interstitial[] --- name: toc-ambassadors class: title Ambassadors .nav[ [Previous section](#toc-service-discovery-with-containers) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-local-development-workflow-with-docker) ] .debug[(automatically generated title slide)] --- class: title # Ambassadors  .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Ambassadors.md)] --- ## The ambassador pattern Ambassadors are containers that "masquerade" or "proxy" for another service. They abstract the connection details for this service, and can help with: * discovery (where is my service actually running?) * migration (what if my service has to be moved while I use it?) * fail over (which instance of a replicated service should I connect to?) * load balancing (how do I spread my requests across multiple instances of a service?) * authentication (what if my service requires credentials, certificates, or otherwise?) .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Ambassadors.md)] --- ## Introduction to Ambassadors The ambassador pattern: * Takes advantage of Docker's per-container naming system and abstracts connections between services. * Allows you to manage services without hard-coding connection information inside applications. To do this, instead of directly connecting containers, you insert ambassador containers.
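As a tiny illustration of the idea, an ambassador can be as simple as a TCP forwarder that answers to the name the application expects (a sketch; the `alpine/socat` image, the `mynet` network, and the destination `redis.prod.example.com:12345` are placeholders, not part of the demo app):

```bash
# Answers to the alias "redis" on the application's network,
# and forwards every connection to the real Redis location.
$ docker run -d --net mynet --net-alias redis alpine/socat \
    tcp-listen:6379,fork,reuseaddr tcp-connect:redis.prod.example.com:12345
```

The use cases on the following slides are all variations on this basic building block.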
.debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Ambassadors.md)] --- class: pic  .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Ambassadors.md)] --- ## Interacting with ambassadors * The web container uses normal Docker networking to connect to the ambassador. * The database container also talks with an ambassador. * For both containers, the ambassador is totally transparent. (There is no difference between normal operation and operation with an ambassador.) * If the database container is moved (or a failover happens), its new location will be tracked by the ambassador containers, and the web application container will still be able to connect, without reconfiguration. .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Ambassadors.md)] --- ## Ambassadors for simple service discovery Use case: * my application code connects to `redis` on the default port (6379), * my Redis service runs on another machine, on a non-default port (e.g. 12345), * I want to use an ambassador to let my application connect without modification. The ambassador will be: * a container running right next to my application, * using the name `redis` (or linked as `redis`), * listening on port 6379, * forwarding connections to the actual Redis service. .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Ambassadors.md)] --- ## Ambassadors for service migration Use case: * my application code still connects to `redis`, * my Redis service runs somewhere else, * my Redis service is moved to a different host+port, * the location of the Redis service is given to me via e.g. DNS SRV records, * I want to use an ambassador to automatically connect to the new location, with as little disruption as possible. The ambassador will be: * the same kind of container as before, * running an additional routine to monitor DNS SRV records, * updating the forwarding destination when the DNS SRV records are updated. .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Ambassadors.md)] --- ## Ambassadors for credentials injection Use case: * my application code still connects to `redis`, * my application code doesn't provide Redis credentials, * my production Redis service requires credentials, * my staging Redis service requires different credentials, * I want to use an ambassador to abstract those credentials. The ambassador will be: * a container using the name `redis` (or a link), * passed the credentials to use, * running a custom proxy that accepts connections on Redis default port, * performing authentication with the target Redis service before forwarding traffic. .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Ambassadors.md)] --- ## Ambassadors for load balancing Use case: * my application code connects to a web service called `api`, * I want to run multiple instances of the `api` backend, * those instances will be on different machines and ports, * I want to use an ambassador to abstract those details. The ambassador will be: * a container using the name `api` (or a link), * passed the list of backends to use (statically or dynamically), * running a load balancer (e.g. 
HAProxy or NGINX), * dispatching requests across all backends transparently. .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Ambassadors.md)] --- ## "Ambassador" is a *pattern* There are many ways to implement the pattern. Different deployments will use different underlying technologies. * On-premise deployments with a trusted network can track container locations in e.g. Zookeeper, and generate HAproxy configurations each time a location key changes. * Public cloud deployments or deployments across unsafe networks can add TLS encryption. * Ad-hoc deployments can use a master-less discovery protocol like avahi to register and discover services. * It is also possible to do one-shot reconfiguration of the ambassadors. It is slightly less dynamic but has far fewer requirements. * Ambassadors can be used in addition to, or instead of, overlay networks. .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Ambassadors.md)] --- ## Service meshes * A service mesh is a configurable network layer. * It can provide service discovery, high availability, load balancing, observability... * Service meshes are particularly useful for microservices applications. * Service meshes are often implemented as proxies. * Applications connect to the service mesh, which relays the connection where needed. *Does that sound familiar?* .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Ambassadors.md)] --- ## Ambassadors and service meshes * When using a service mesh, a "sidecar container" is often used as a proxy * Our services connect (transparently) to that sidecar container * That sidecar container figures out where to forward the traffic ... Does that sound familiar? (It should, because service meshes are essentially app-wide or cluster-wide ambassadors!) .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Ambassadors.md)] --- ## Some popular service meshes ... And related projects: * [Consul Connect](https://www.consul.io/docs/connect/index.html) Transparently secures service-to-service connections with mTLS. * [Gloo](https://gloo.solo.io/) API gateway that can interconnect applications on VMs, containers, and serverless. * [Istio](https://istio.io/) A popular service mesh. * [Linkerd](https://linkerd.io/) Another popular service mesh. .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Ambassadors.md)] --- ## Learning more about service meshes A few blog posts about service meshes: * [Containers, microservices, and service meshes](http://jpetazzo.github.io/2019/05/17/containers-microservices-service-meshes/) Provides historical context: how did we do before service meshes were invented? * [Do I Need a Service Mesh?](https://www.nginx.com/blog/do-i-need-a-service-mesh/) Explains the purpose of service meshes. Illustrates some NGINX features. * [Do you need a service mesh?](https://www.oreilly.com/ideas/do-you-need-a-service-mesh) Includes high-level overview and definitions. * [What is Service Mesh and Why Do We Need It?](https://containerjournal.com/2018/12/12/what-is-service-mesh-and-why-do-we-need-it/) Includes a step-by-step demo of Linkerd. 
And a video: * [What is a Service Mesh, and Do I Need One When Developing Microservices?](https://www.datawire.io/envoyproxy/service-mesh/) .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Ambassadors.md)] --- class: pic .interstitial[] --- name: toc-local-development-workflow-with-docker class: title Local development workflow with Docker .nav[ [Previous section](#toc-ambassadors) | [Back to table of contents](#toc-chapter-5) | [Next section](#toc-working-with-volumes) ] .debug[(automatically generated title slide)] --- class: title # Local development workflow with Docker  .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Objectives At the end of this section, you will be able to: * Share code between container and host. * Use a simple local development workflow. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Local development in a container We want to solve the following issues: - "Works on my machine" - "Not the same version" - "Missing dependency" By using Docker containers, we will get a consistent development environment. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Working on the "namer" application * We have to work on some application whose code is at: https://github.com/jpetazzo/namer. * What is it? We don't know yet! * Let's download the code. ```bash $ git clone https://github.com/jpetazzo/namer ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Looking at the code ```bash $ cd namer $ ls -1 company_name_generator.rb config.ru docker-compose.yml Dockerfile Gemfile ``` -- Aha, a `Gemfile`! This is Ruby. Probably. We know this. Maybe? .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Looking at the `Dockerfile` ```dockerfile FROM ruby COPY . /src WORKDIR /src RUN bundler install CMD ["rackup", "--host", "0.0.0.0"] EXPOSE 9292 ``` * This application is using a base `ruby` image. * The code is copied in `/src`. * Dependencies are installed with `bundler`. * The application is started with `rackup`. * It is listening on port 9292. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Building and running the "namer" application * Let's build the application with the `Dockerfile`! -- ```bash $ docker build -t namer . ``` -- * Then run it. *We need to expose its ports.* -- ```bash $ docker run -dP namer ``` -- * Check on which port the container is listening. -- ```bash $ docker ps -l ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Connecting to our application * Point our browser to our Docker node, on the port allocated to the container. -- * Hit "reload" a few times. 
-- * This is an enterprise-class, carrier-grade, ISO-compliant company name generator! (With 50% more bullshit than the average competition!) (Wait, was that 50% more, or 50% less? *Anyway!*)  .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Making changes to the code Option 1: * Edit the code locally * Rebuild the image * Re-run the container Option 2: * Enter the container (with `docker exec`) * Install an editor * Make changes from within the container Option 3: * Use a *volume* to mount local files into the container * Make changes locally * Changes are reflected in the container .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Our first volume We will tell Docker to map the current directory to `/src` in the container. ```bash $ docker run -d -v $(pwd):/src -P namer ``` * `-d`: the container should run in detached mode (in the background). * `-v`: the following host directory should be mounted inside the container. * `-P`: publish all the ports exposed by this image. * `namer` is the name of the image we will run. * We don't specify a command to run because it is already set in the Dockerfile via `CMD`. Note: on Windows, replace `$(pwd)` with `%cd%` (or `${pwd}` if you use PowerShell). .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Mounting volumes inside containers The `-v` flag mounts a directory from your host into your Docker container. The flag structure is: ```bash [host-path]:[container-path]:[rw|ro] ``` * `[host-path]` and `[container-path]` are created if they don't exist. * You can control the write status of the volume with the `ro` and `rw` options. * If you don't specify `rw` or `ro`, it will be `rw` by default. There will be a full chapter about volumes! .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Testing the development container * Check the port used by our new container. ```bash $ docker ps -l CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 045885b68bc5 namer rackup 3 seconds ago Up ... 0.0.0.0:32770->9292/tcp ... ``` * Open the application in your web browser. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Making a change to our application Our customer really doesn't like the color of our text. Let's change it. ```bash $ vi company_name_generator.rb ``` And change ```css color: royalblue; ``` To: ```css color: red; ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Viewing our changes * Reload the application in our browser. -- * The color should have changed.  .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Understanding volumes * Volumes are *not* copying or synchronizing files between the host and the container. 
* Volumes are *bind mounts*: a kernel mechanism associating one path with another. * Bind mounts are *kind of* similar to symbolic links, but at a very different level. * Changes made on the host or on the container will be visible on the other side. (Under the hood, it's the same file anyway.) .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Trash your servers and burn your code *(This is the title of a [2013 blog post](http://chadfowler.com/2013/06/23/immutable-deployments.html) by Chad Fowler, where he explains the concept of immutable infrastructure.)* -- * Let's majorly mess up our container. (Remove files or whatever.) * Now, how can we fix this? -- * Our old container (with the blue version of the code) is still running. * See on which port it is exposed: ```bash docker ps ``` * Point our browser to it to confirm that it still works fine. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Immutable infrastructure in a nutshell * Instead of *updating* a server, we deploy a new one. * This might be challenging with classical servers, but it's trivial with containers. * In fact, with Docker, the most logical workflow is to build a new image and run it. * If something goes wrong with the new image, we can always restart the old one. * We can even keep both versions running side by side. If this pattern sounds interesting, you might want to read about *blue/green deployment* and *canary deployments*. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Recap of the development workflow 1. Write a Dockerfile to build an image containing our development environment. (Rails, Django, ... and all the dependencies for our app) 2. Start a container from that image. Use the `-v` flag to mount our source code inside the container. 3. Edit the source code outside the container, using familiar tools. (vim, emacs, textmate...) 4. Test the application. (Some frameworks pick up changes automatically. Others require you to Ctrl-C + restart after each modification.) 5. Iterate and repeat steps 3 and 4 until satisfied. 6. When done, commit+push source code changes. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- class: extra-details ## Debugging inside the container Docker has a command called `docker exec`. It allows users to run a new process in a container which is already running. If sometimes you find yourself wishing you could SSH into a container: you can use `docker exec` instead. You can get a shell prompt inside an existing container this way, or run an arbitrary process for automation. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- class: extra-details ## `docker exec` example ```bash $ # You can run ruby commands in the area the app is running and more! 
$ docker exec -it <yourContainerID> bash root@5ca27cf74c2e:/opt/namer# irb irb(main):001:0> [0, 1, 2, 3, 4].map {|x| x ** 2}.compact => [0, 1, 4, 9, 16] irb(main):002:0> exit ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- class: extra-details ## Stopping the container Now that we're done, let's stop our container. ```bash $ docker stop <yourContainerID> ``` And remove it. ```bash $ docker rm <yourContainerID> ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- ## Section summary We've learned how to: * Share code between container and host. * Set our working directory. * Use a simple local development workflow. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Local_Development_Workflow.md)] --- class: pic .interstitial[] --- name: toc-working-with-volumes class: title Working with volumes .nav[ [Previous section](#toc-local-development-workflow-with-docker) | [Back to table of contents](#toc-chapter-5) | [Next section](#toc-compose-for-development-stacks) ] .debug[(automatically generated title slide)] --- class: title # Working with volumes .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- ## Objectives At the end of this section, you will be able to: * Create containers holding volumes. * Share volumes across containers. * Share a host directory with one or many containers. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- ## Working with volumes Docker volumes can be used to achieve many things, including: * Bypassing the copy-on-write system to obtain native disk I/O performance. * Bypassing copy-on-write to leave some files out of `docker commit`. * Sharing a directory between multiple containers. * Sharing a directory between the host and a container. * Sharing a *single file* between the host and a container. * Using remote storage and custom storage with *volume drivers*. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- ## Volumes are special directories in a container Volumes can be declared in two different ways: * Within a `Dockerfile`, with a `VOLUME` instruction. ```dockerfile VOLUME /uploads ``` * On the command-line, with the `-v` flag for `docker run`. ```bash $ docker run -d -v /uploads myapp ``` In both cases, `/uploads` (inside the container) will be a volume. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Volumes bypass the copy-on-write system Volumes act as passthroughs to the host filesystem. * The I/O performance on a volume is exactly the same as I/O performance on the Docker host. * When you `docker commit`, the content of volumes is not brought into the resulting image (see the example below). * If a `RUN` instruction in a `Dockerfile` changes the content of a volume, those changes are not recorded either. * If a container is started with the `--read-only` flag, the volume will still be writable (unless the volume is a read-only volume).
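A quick way to see this for ourselves (the container and image names below are arbitrary):

```bash
# Write one file in an anonymous volume, and one in the container filesystem.
$ docker run --name voltest -v /data alpine \
    sh -c "echo in-volume > /data/hello && echo in-image > /tmp/hello"

# Commit the stopped container to a new image.
$ docker commit voltest voltest-image

# The file written to the container filesystem is in the image...
$ docker run --rm voltest-image cat /tmp/hello
in-image

# ...but the file written to the volume is not (this command fails,
# because /data is a fresh, empty volume in the new container):
$ docker run --rm voltest-image cat /data/hello
```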
.debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Volumes can be shared across containers You can start a container with *exactly the same volumes* as another one. The new container will have the same volumes, in the same directories. They will contain exactly the same thing, and remain in sync. Under the hood, they are actually the same directories on the host anyway. This is done using the `--volumes-from` flag for `docker run`. We will see an example in the following slides. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Sharing app server logs with another container Let's start a Tomcat container: ```bash $ docker run --name webapp -d -p 8080:8080 -v /usr/local/tomcat/logs tomcat ``` Now, start an `alpine` container accessing the same volume: ```bash $ docker run --volumes-from webapp alpine sh -c "tail -f /usr/local/tomcat/logs/*" ``` Then, from another window, send requests to our Tomcat container: ```bash $ curl localhost:8080 ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- ## Volumes exist independently of containers If a container is stopped or removed, its volumes still exist and are available. Volumes can be listed and manipulated with `docker volume` subcommands: ```bash $ docker volume ls DRIVER VOLUME NAME local 5b0b65e4316da67c2d471086640e6005ca2264f3... local pgdata-prod local pgdata-dev local 13b59c9936d78d109d094693446e174e5480d973... ``` Some of those volume names were explicit (pgdata-prod, pgdata-dev). The others (the hex IDs) were generated automatically by Docker. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- ## Naming volumes * Volumes can be created without a container, then used in multiple containers. Let's create a couple of volumes directly. ```bash $ docker volume create webapps webapps ``` ```bash $ docker volume create logs logs ``` Volumes are not anchored to a specific path. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- ## Using our named volumes * Volumes are used with the `-v` option. * When a host path does not contain a `/`, it is considered a volume name. Let's start a web server using the two previous volumes. ```bash $ docker run -d -p 1234:8080 \ -v logs:/usr/local/tomcat/logs \ -v webapps:/usr/local/tomcat/webapps \ tomcat ``` Check that it's running correctly: ```bash $ curl localhost:1234 ... (Tomcat tells us how happy it is to be up and running) ... ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- ## Using a volume in another container * We will make changes to the volume from another container. * In this example, we will run a text editor in the other container. (But this could be an FTP server, a WebDAV server, a Git receiver...) Let's start another container using the `webapps` volume. ```bash $ docker run -v webapps:/webapps -w /webapps -ti alpine vi ROOT/index.jsp ``` Vandalize the page, save, exit. 
Then run `curl localhost:1234` again to see your changes. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- ## Using custom "bind-mounts" In some cases, you want a specific directory on the host to be mapped inside the container: * You want to manage storage and snapshots yourself. (With LVM, or a SAN, or ZFS, or anything else!) * You have a separate disk with better performance (SSD) or resiliency (EBS) than the system disk, and you want to put important data on that disk. * You want to share your source directory between your host (where the source gets edited) and the container (where it is compiled or executed). Wait, we already met the last use-case in our example development workflow! Nice. ```bash $ docker run -d -v /path/on/the/host:/path/in/container image ... ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Migrating data with `--volumes-from` The `--volumes-from` option tells Docker to re-use all the volumes of an existing container. * Scenario: migrating from Redis 2.8 to Redis 3.0. * We have a container (`myredis`) running Redis 2.8. * Stop the `myredis` container. * Start a new container, using the Redis 3.0 image, and the `--volumes-from` option. * The new container will inherit the data of the old one. * Newer containers can use `--volumes-from` too. * Doesn't work across servers, so not usable in clusters (Swarm, Kubernetes). .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Data migration in practice Let's create a Redis container. ```bash $ docker run -d --name redis28 redis:2.8 ``` Connect to the Redis container and set some data. ```bash $ docker run -ti --link redis28:redis busybox telnet redis 6379 ``` Issue the following commands: ```bash SET counter 42 INFO server SAVE QUIT ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Upgrading Redis Stop the Redis container. ```bash $ docker stop redis28 ``` Start the new Redis container. ```bash $ docker run -d --name redis30 --volumes-from redis28 redis:3.0 ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Testing the new Redis Connect to the Redis container and see our data. ```bash docker run -ti --link redis30:redis busybox telnet redis 6379 ``` Issue a few commands. ```bash GET counter INFO server QUIT ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- ## Volumes lifecycle * When you remove a container, its volumes are kept around. * You can list them with `docker volume ls`. * You can access them by creating a container with `docker run -v`. * You can remove them with `docker volume rm` or `docker system prune`. Ultimately, _you_ are the one responsible for logging, monitoring, and backup of your volumes. 
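For instance, one common way to back up a named volume is to run a throwaway container that mounts it read-only and archives its content to a bind-mounted host directory. This is only a sketch (using the `webapps` volume created earlier):

```bash
# Back up the "webapps" volume into a tarball in the current directory.
$ docker run --rm \
    -v webapps:/data:ro \
    -v "$(pwd)":/backup \
    alpine tar czf /backup/webapps.tgz -C /data .

# Restore it later (into the same volume, or into a brand new one).
$ docker run --rm \
    -v webapps:/data \
    -v "$(pwd)":/backup \
    alpine tar xzf /backup/webapps.tgz -C /data
```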
.debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Checking volumes defined by an image Wondering if an image has volumes? Just use `docker inspect`: ```bash $ # docker inspect training/datavol [{ "config": { . . . "Volumes": { "/var/webapp": {} }, . . . }] ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Checking volumes used by a container To see which paths are actually volumes, and what they are bound to, use `docker inspect` (again): ```bash $ docker inspect <yourContainerID> [{ "ID": "<yourContainerID>", . . . "Volumes": { "/var/webapp": "/var/lib/docker/vfs/dir/f4280c5b6207ed531efd4cc673ff620cef2a7980f747dbbcca001db61de04468" }, "VolumesRW": { "/var/webapp": true }, }] ``` * We can see that our volume is present on the file system of the Docker host. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- ## Sharing a single file The same `-v` flag can be used to share a single file (instead of a directory). One of the most interesting examples is to share the Docker control socket. ```bash $ docker run -it -v /var/run/docker.sock:/var/run/docker.sock docker sh ``` From that container, you can now run `docker` commands communicating with the Docker Engine running on the host. Try `docker ps`! .warning[Since that container has access to the Docker socket, it has root-like access to the host.] .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- ## Volume plugins You can install plugins to manage volumes backed by particular storage systems, or that provide extra features. For instance: * [REX-Ray](https://rexray.io/) - create and manage volumes backed by an enterprise storage system (e.g. SAN or NAS), or by cloud block stores (e.g. EBS, EFS). * [Portworx](https://portworx.com/) - provides distributed block store for containers. * [Gluster](https://www.gluster.org/) - open source software-defined distributed storage that can scale to several petabytes. It provides interfaces for object, block and file storage. * and much more at the [Docker Store](https://store.docker.com/search?category=volume&q=&type=plugin)! .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- ## Volumes vs. Mounts * Since Docker 17.06, a new option is available: `--mount`. * It offers a new, richer syntax to manipulate data in containers. * It makes an explicit difference between: - volumes (identified with a unique name, managed by a storage plugin), - bind mounts (identified with a host path, not managed). * The former `-v` / `--volume` option is still usable.
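The difference shows up in `docker inspect`: each entry of the `Mounts` section has a `Type` field. A small sketch (container name, volume name, and paths are arbitrary):

```bash
# Start a container with one named volume and one bind mount.
$ docker run -d --name mix -v myvolume:/data -v /tmp:/hosttmp alpine sleep 1d

# Show the type and destination of each mount.
$ docker inspect --format \
    '{{range .Mounts}}{{.Type}} {{.Destination}}{{"\n"}}{{end}}' mix
volume /data
bind /hosttmp
```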
.debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- ## `--mount` syntax Binding a host path to a container path: ```bash $ docker run \ --mount type=bind,source=/path/on/host,target=/path/in/container alpine ``` Mounting a volume to a container path: ```bash $ docker run \ --mount source=myvolume,target=/path/in/container alpine ``` Mounting a tmpfs (in-memory, for temporary files): ```bash $ docker run \ --mount type=tmpfs,destination=/path/in/container,tmpfs-size=1000000 alpine ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- ## Section summary We've learned how to: * Create and manage volumes. * Share volumes across containers. * Share a host directory with one or many containers. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Working_With_Volumes.md)] --- class: pic .interstitial[] --- name: toc-compose-for-development-stacks class: title Compose for development stacks .nav[ [Previous section](#toc-working-with-volumes) | [Back to table of contents](#toc-chapter-5) | [Next section](#toc-exercise--writing-a-compose-file) ] .debug[(automatically generated title slide)] --- # Compose for development stacks Dockerfiles are great to build container images. But what if we work with a complex stack made of multiple containers? Eventually, we will want to write some custom scripts and automation to build, run, and connect our containers together. There is a better way: using Docker Compose. In this section, you will use Compose to bootstrap a development environment. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## What is Docker Compose? Docker Compose (formerly known as `fig`) is an external tool. Unlike the Docker Engine, it is written in Python. It's open source as well. The general idea of Compose is to enable a very simple, powerful onboarding workflow: 1. Checkout your code. 2. Run `docker-compose up`. 3. Your app is up and running! .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Compose overview This is how you work with Compose: * You describe a set (or stack) of containers in a YAML file called `docker-compose.yml`. * You run `docker-compose up`. * Compose automatically pulls images, builds containers, and starts them. * Compose can set up links, volumes, and other Docker options for you. * Compose can run the containers in the background, or in the foreground. * When containers are running in the foreground, their aggregated output is shown. Before diving in, let's see a small example of Compose in action. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- class: pic  .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Checking if Compose is installed If you are using the official training virtual machines, Compose has been pre-installed. If you are using Docker for Mac/Windows or the Docker Toolbox, Compose comes with them. 
If you are on Linux (desktop or server environment), you will need to install Compose from its [release page](https://github.com/docker/compose/releases) or with `pip install docker-compose`. You can always check that it is installed by running: ```bash $ docker-compose --version ``` .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Launching Our First Stack with Compose First step: clone the source code for the app we will be working on. ```bash $ cd $ git clone https://github.com/jpetazzo/trainingwheels ... $ cd trainingwheels ``` Second step: start your app. ```bash $ docker-compose up ``` Watch Compose build and run your app with the correct parameters, including linking the relevant containers together. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Launching Our First Stack with Compose Verify that the app is running at `http://:8000`.  .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Stopping the app When you hit `^C`, Compose tries to gracefully terminate all of the containers. After ten seconds (or if you press `^C` again) it will forcibly kill them. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## The `docker-compose.yml` file Here is the file used in the demo: .small[ ```yaml version: "2" services: www: build: www ports: - 8000:5000 user: nobody environment: DEBUG: 1 command: python counter.py volumes: - ./www:/src redis: image: redis ``` ] .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Compose file structure A Compose file has multiple sections: * `version` is mandatory. (We should use `"2"` or later; version 1 is deprecated.) * `services` is mandatory. A service is one or more replicas of the same image running as containers. * `networks` is optional and indicates to which networks containers should be connected. (By default, containers will be connected on a private, per-compose-file network.) * `volumes` is optional and can define volumes to be used and/or shared by the containers. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Compose file versions * Version 1 is legacy and shouldn't be used. (If you see a Compose file without `version` and `services`, it's a legacy v1 file.) * Version 2 added support for networks and volumes. * Version 3 added support for deployment options (scaling, rolling updates, etc). The [Docker documentation](https://docs.docker.com/compose/compose-file/) has excellent information about the Compose file format if you need to know more about versions. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Containers in `docker-compose.yml` Each service in the YAML file must contain either `build`, or `image`. * `build` indicates a path containing a Dockerfile. * `image` indicates an image name (local, or on a registry). 
* If both are specified, an image will be built from the `build` directory and named `image`. The other parameters are optional. They encode the parameters that you would typically add to `docker run`. Sometimes they have several minor improvements. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Container parameters * `command` indicates what to run (like `CMD` in a Dockerfile). * `ports` translates to one (or multiple) `-p` options to map ports. You can specify local ports (i.e. `x:y` to expose public port `x`). * `volumes` translates to one (or multiple) `-v` options. You can use relative paths here. For the full list, check: https://docs.docker.com/compose/compose-file/ .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Compose commands We already saw `docker-compose up`, but another one is `docker-compose build`. It will execute `docker build` for all containers mentioning a `build` path. It can also be invoked automatically when starting the application: ```bash docker-compose up --build ``` Another common option is to start containers in the background: ```bash docker-compose up -d ``` .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Check container status It can be tedious to check the status of your containers with `docker ps`, especially when running multiple apps at the same time. Compose makes it easier; with `docker-compose ps` you will see only the status of the containers of the current stack: ```bash $ docker-compose ps Name Command State Ports ---------------------------------------------------------------------------- trainingwheels_redis_1 /entrypoint.sh red Up 6379/tcp trainingwheels_www_1 python counter.py Up 0.0.0.0:8000->5000/tcp ``` .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Cleaning up (1) If you have started your application in the background with Compose and want to stop it easily, you can use the `kill` command: ```bash $ docker-compose kill ``` Likewise, `docker-compose rm` will let you remove containers (after confirmation): ```bash $ docker-compose rm Going to remove trainingwheels_redis_1, trainingwheels_www_1 Are you sure? [yN] y Removing trainingwheels_redis_1... Removing trainingwheels_www_1... ``` .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Cleaning up (2) Alternatively, `docker-compose down` will stop and remove containers. It will also remove other resources, like networks that were created for the application. ```bash $ docker-compose down Stopping trainingwheels_www_1 ... done Stopping trainingwheels_redis_1 ... done Removing trainingwheels_www_1 ... done Removing trainingwheels_redis_1 ... done ``` Use `docker-compose down -v` to remove everything including volumes. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Special handling of volumes Compose is smart. 
If your container uses volumes, when you restart your application, Compose will create a new container, but carefully re-use the volumes it was using previously. This makes it easy to upgrade a stateful service, by pulling its new image and just restarting your stack with Compose. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Compose project name * When you run a Compose command, Compose infers the "project name" of your app. * By default, the "project name" is the name of the current directory. * For instance, if you are in `/home/zelda/src/ocarina`, the project name is `ocarina`. * All resources created by Compose are tagged with this project name. * The project name also appears as a prefix of the names of the resources. E.g. in the previous example, service `www` will create a container `ocarina_www_1`. * The project name can be overridden with `docker-compose -p`. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Running two copies of the same app If you want to run two copies of the same app simultaneously, all you have to do is make sure that each copy has a different project name. You can: * copy your code in a directory with a different name * start each copy with `docker-compose -p myprojname up` Each copy will run in a different network, totally isolated from the other. This is ideal for debugging regressions, doing side-by-side comparisons, etc. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Compose_For_Dev_Stacks.md)] --- class: pic .interstitial[] --- name: toc-exercise--writing-a-compose-file class: title Exercise - writing a Compose file .nav[ [Previous section](#toc-compose-for-development-stacks) | [Back to table of contents](#toc-chapter-5) | [Next section](#toc-application-configuration) ] .debug[(automatically generated title slide)] --- # Exercise - writing a Compose file Let's write a Compose file for the wordsmith app! The code is at: https://github.com/jpetazzo/wordsmith .debug[[containers/Exercise_Composefile.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Exercise_Composefile.md)] --- class: pic .interstitial[] --- name: toc-application-configuration class: title Application Configuration .nav[ [Previous section](#toc-exercise--writing-a-compose-file) | [Back to table of contents](#toc-chapter-6) | [Next section](#toc-orchestration-an-overview) ] .debug[(automatically generated title slide)] --- # Application Configuration There are many ways to provide configuration to containerized applications. There is no "best way"; it depends on factors like: * configuration size, * mandatory and optional parameters, * scope of configuration (per container, per app, per customer, per site, etc), * frequency of changes in the configuration.
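To make these trade-offs concrete, here are two of the approaches described next, applied to a hypothetical `myapp` image (the image name, variables, and paths are made up for illustration):

```bash
# Configuration through environment variables:
$ docker run -d -e LOG_LEVEL=debug -e DB_HOST=db.internal myapp

# Configuration through a file bind-mounted over the image's default one:
$ docker run -d -v "$(pwd)/myapp.conf":/etc/myapp/myapp.conf:ro myapp
```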
.debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Application_Configuration.md)] --- ## Command-line parameters pros and cons * Appropriate for mandatory parameters (without which the service cannot start). * Convenient for "toolbelt" services instantiated many times. (Because there is no extra step: just run it!) * Not great for dynamic configurations or bigger configurations. (These things are still possible, but more cumbersome.) .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Application_Configuration.md)] --- ## Environment variables ```bash docker run -e ELASTICSEARCH_URL=http://es42:9201/ kibana ``` * Configuration is provided through environment variables. * The environment variable can be used straight by the program, or by a script generating a configuration file. .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Application_Configuration.md)] --- ## Environment variables pros and cons * Appropriate for optional parameters (since the image can provide default values). * Also convenient for services instantiated many times. (It's as easy as command-line parameters.) * Great for services with lots of parameters, but you only want to specify a few. (And use default values for everything else.) * Ability to introspect possible parameters and their default values. * Not great for dynamic configurations. .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Application_Configuration.md)] --- ## Baked-in configuration ``` FROM prometheus COPY prometheus.conf /etc ``` * The configuration is added to the image. * The image may have a default configuration; the new configuration can: - replace the default configuration, - extend it (if the code can read multiple configuration files). .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Application_Configuration.md)] --- ## Baked-in configuration pros and cons * Allows arbitrary customization and complex configuration files. * Requires writing a configuration file. (Obviously!) * Requires building an image to start the service. * Requires rebuilding the image to reconfigure the service. * Requires rebuilding the image to upgrade the service. * Configured images can be stored in registries. (Which is great, but requires a registry.) .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Application_Configuration.md)] --- ## Configuration volume ```bash docker run -v appconfig:/etc/appconfig myapp ``` * The configuration is stored in a volume. * The volume is attached to the container. * The image may have a default configuration. (But this results in a less "obvious" setup, that needs more documentation.) .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Application_Configuration.md)] --- ## Configuration volume pros and cons * Allows arbitrary customization and complex configuration files. * Requires creating a volume for each different configuration. * Services with identical configurations can use the same volume. 
* Doesn't require building / rebuilding an image when upgrading / reconfiguring. * Configuration can be generated or edited through another container. .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Application_Configuration.md)] --- ## Dynamic configuration volume * This is a powerful pattern for dynamic, complex configurations. * The configuration is stored in a volume. * The configuration is generated / updated by a special container. * The application container detects when the configuration is changed. (And automatically reloads the configuration when necessary.) * The configuration can be shared between multiple services if needed. .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Application_Configuration.md)] --- ## Dynamic configuration volume example In a first terminal, start a load balancer with an initial configuration: ```bash $ docker run --name loadbalancer jpetazzo/hamba \ 80 goo.gl:80 ``` In another terminal, reconfigure that load balancer: ```bash $ docker run --rm --volumes-from loadbalancer jpetazzo/hamba reconfigure \ 80 google.com:80 ``` The configuration could also be updated through e.g. a REST API. (The REST API itself being served from another container.) .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Application_Configuration.md)] --- ## Keeping secrets .warning[Ideally, you should not put secrets (passwords, tokens...) in:] * command-line or environment variables (anyone with Docker API access can get them), * images, especially stored in a registry. Secrets management is better handled with an orchestrator (like Swarm or Kubernetes). Orchestrators allow us to pass secrets in a "one-way" manner. Managing secrets securely without an orchestrator can be contrived. E.g.: - read the secret on stdin when the service starts, - pass the secret using an API endpoint. .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Application_Configuration.md)] --- class: pic .interstitial[] --- name: toc-orchestration-an-overview class: title Orchestration, an overview .nav[ [Previous section](#toc-application-configuration) | [Back to table of contents](#toc-chapter-6) | [Next section](#toc-from-docker-to-kubernetes) ] .debug[(automatically generated title slide)] --- # Orchestration, an overview In this chapter, we will: * Explain what orchestration is and why we would need it. * Present (from a high-level perspective) some orchestrators. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- class: pic ## What's orchestration? .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## What's orchestration? According to Wikipedia: *Orchestration describes the __automated__ arrangement, coordination, and management of complex computer systems, middleware, and services.* -- *[...] orchestration is often discussed in the context of __service-oriented architecture__, __virtualization__, provisioning, Converged Infrastructure and __dynamic datacenter__ topics.* -- What does that really mean?
.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Example 1: dynamic cloud instances -- - Q: do we always use 100% of our servers? -- - A: obviously not! .center[] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Example 1: dynamic cloud instances - Every night, scale down (by shutting down extraneous replicated instances) - Every morning, scale up (by deploying new copies) - "Pay for what you use" (i.e. save big $$$ here) .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Example 1: dynamic cloud instances How do we implement this? - Crontab - Autoscaling (save even bigger $$$) That's *relatively* easy. Now, how are things for our IAAS provider? .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Example 2: dynamic datacenter - Q: what's the #1 cost in a datacenter? -- - A: electricity! -- - Q: what uses electricity? -- - A: servers, obviously - A: ... and associated cooling -- - Q: do we always use 100% of our servers? -- - A: obviously not! .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Example 2: dynamic datacenter - If only we could turn off unused servers during the night... - Problem: we can only turn off a server if it's totally empty! (i.e. all VMs on it are stopped/moved) - Solution: *migrate* VMs and shutdown empty servers (e.g. combine two hypervisors with 40% load into 80%+0%, and shut down the one at 0%) .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Example 2: dynamic datacenter How do we implement this? - Shut down empty hosts (but keep some spare capacity) - Start hosts again when capacity gets low - Ability to "live migrate" VMs (Xen already did this 10+ years ago) - Rebalance VMs on a regular basis - what if a VM is stopped while we move it? - should we allow provisioning on hosts involved in a migration? *Scheduling* becomes more complex. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## What is scheduling? According to Wikipedia (again): *In computing, scheduling is the method by which threads, processes or data flows are given access to system resources.* The scheduler is concerned mainly with: - throughput (total amount of work done per time unit); - turnaround time (between submission and completion); - response time (between submission and start); - waiting time (between job readiness and execution); - fairness (appropriate times according to priorities). In practice, these goals often conflict. 
**"Scheduling" = decide which resources to use.** .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Exercise 1 - You have: - 5 hypervisors (physical machines) - Each server has: - 16 GB RAM, 8 cores, 1 TB disk - Each week, your team requests: - one VM with X RAM, Y CPU, Z disk Scheduling = deciding which hypervisor to use for each VM. Difficulty: easy! .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Exercise 2 - You have: - 1000+ hypervisors (and counting!) - Each server has different resources: - 8-500 GB of RAM, 4-64 cores, 1-100 TB disk - Multiple times a day, a different team asks for: - up to 50 VMs with different characteristics Scheduling = deciding which hypervisor to use for each VM. Difficulty: ??? .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Exercise 2 - You have: - 1000+ hypervisors (and counting!) - Each server has different resources: - 8-500 GB of RAM, 4-64 cores, 1-100 TB disk - Multiple times a day, a different team asks for: - up to 50 VMs with different characteristics Scheduling = deciding which hypervisor to use for each VM.  .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Exercise 3 - You have machines (physical and/or virtual) - You have containers - You are trying to put the containers on the machines - Sounds familiar? .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- class: pic ## Scheduling with one resource .center[] ## We can't fit a job of size 6 :( .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- class: pic ## Scheduling with one resource .center[] ## ... Now we can! .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- class: pic ## Scheduling with two resources .center[] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- class: pic ## Scheduling with three resources .center[] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- class: pic ## You need to be good at this .center[] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- class: pic ## But also, you must be quick! .center[] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- class: pic ## And be web scale! .center[] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- class: pic ## And think outside (?) of the box! 
.center[] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- class: pic ## Good luck! .center[] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## TL,DR * Scheduling with multiple resources (dimensions) is hard. * Don't expect to solve the problem with a Tiny Shell Script. * There are literally tons of research papers written on this. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## But our orchestrator also needs to manage ... * Network connectivity (or filtering) between containers. * Load balancing (external and internal). * Failure recovery (if a node or a whole datacenter fails). * Rolling out new versions of our applications. (Canary deployments, blue/green deployments...) .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Some orchestrators We are going to present briefly a few orchestrators. There is no "absolute best" orchestrator. It depends on: - your applications, - your requirements, - your pre-existing skills... .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Nomad - Open Source project by Hashicorp. - Arbitrary scheduler (not just for containers). - Great if you want to schedule mixed workloads. (VMs, containers, processes...) - Less integration with the rest of the container ecosystem. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Mesos - Open Source project in the Apache Foundation. - Arbitrary scheduler (not just for containers). - Two-level scheduler. - Top-level scheduler acts as a resource broker. - Second-level schedulers (aka "frameworks") obtain resources from top-level. - Frameworks implement various strategies. (Marathon = long running processes; Chronos = run at intervals; ...) - Commercial offering through DC/OS by Mesosphere. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Rancher - Rancher 1 offered a simple interface for Docker hosts. - Rancher 2 is a complete management platform for Docker and Kubernetes. - Technically not an orchestrator, but it's a popular option. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Swarm - Tightly integrated with the Docker Engine. - Extremely simple to deploy and setup, even in multi-manager (HA) mode. - Secure by default. - Strongly opinionated: - smaller set of features, - easier to operate. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- ## Kubernetes - Open Source project initiated by Google. - Contributions from many other actors. - *De facto* standard for container orchestration. - Many deployment options; some of them very complex. - Reputation: steep learning curve. 
- Reality: - true, if we try to understand *everything*; - false, if we focus on what matters. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/containers/Orchestration_Overview.md)] --- class: pic .interstitial[] --- name: toc-from-docker-to-kubernetes class: title From Docker to Kubernetes .nav[ [Previous section](#toc-orchestration-an-overview) | [Back to table of contents](#toc-chapter-6) | [Next section](#toc-our-sample-application) ] .debug[(automatically generated title slide)] --- # From Docker to Kubernetes - We are now going to run a demo app made of multiple containers - We will start by running it on one node, with Compose - Then we will deploy that application on a Kubernetes cluster - We will identify performance bottlenecks and scale out that app (and learn Kubernetes in the process) .debug[[three.yml](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/three.yml)] --- ## Our new environment - Since a 1-node cluster isn't fun, we will switch to a new environment! - This environment is a 4-node Kubernetes cluster - Also, from now on, demos and labs are identified with these gray boxes .exercise[ - You should run this command: ```bash echo Hello world ``` ] .debug[[three.yml](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/three.yml)] --- class: in-person ## Connecting to our lab environment .exercise[ - Log into the first VM (`node1`) with your SSH client - Check that you can SSH (without password) to `node2`: ```bash ssh node2 ``` - Type `exit` or `^D` to come back to `node1` ] If anything goes wrong β ask for help! .debug[[shared/connecting.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/connecting.md)] --- ## Doing or re-doing the workshop on your own? - Use something like [Play-With-Docker](http://play-with-docker.com/) or [Play-With-Kubernetes](https://training.play-with-kubernetes.com/) Zero setup effort; but environment are short-lived and might have limited resources - Create your own cluster (local or cloud VMs) Small setup effort; small cost; flexible environments - Create a bunch of clusters for you and your friends ([instructions](https://github.com/jpetazzo/container.training/tree/master/prepare-vms)) Bigger setup effort; ideal for group training .debug[[shared/connecting.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/connecting.md)] --- class: self-paced ## Get your own Docker nodes - If you already have some Docker nodes: great! - If not: let's get some thanks to Play-With-Docker .exercise[ - Go to http://www.play-with-docker.com/ - Log in - Create your first node ] You will need a Docker ID to use Play-With-Docker. (Creating a Docker ID is free.) .debug[[shared/connecting.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/connecting.md)] --- ## We will (mostly) interact with node1 only *These remarks apply only when using multiple nodes, of course.* - Unless instructed, **all commands must be run from the first VM, `node1`** - We will only check out/copy the code on `node1` - During normal operations, we do not need access to the other nodes - If we had to troubleshoot issues, we would use a combination of: - SSH (to access system logs, daemon status...) 
- Docker API (to check running containers and container engine status)

.debug[[shared/connecting.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/connecting.md)]

---

## Terminals

Once in a while, the instructions will say:
"Open a new terminal."

There are multiple ways to do this:

- create a new window or tab on your machine, and SSH into the VM;

- use screen or tmux on the VM and open a new window from there.

You are welcome to use the method that you feel the most comfortable with.

.debug[[shared/connecting.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/connecting.md)]

---

## Tmux cheatsheet

[Tmux](https://en.wikipedia.org/wiki/Tmux) is a terminal multiplexer like `screen`.

*You don't have to use it or even know about it to follow along.
But some of us like to use it to switch between terminals.
It has been preinstalled on your workshop nodes.*

- Ctrl-b c → creates a new window
- Ctrl-b n → go to next window
- Ctrl-b p → go to previous window
- Ctrl-b " → split window top/bottom
- Ctrl-b % → split window left/right
- Ctrl-b Alt-1 → rearrange windows in columns
- Ctrl-b Alt-2 → rearrange windows in rows
- Ctrl-b arrows → navigate to other windows
- Ctrl-b d → detach session
- tmux attach → reattach to session

.debug[[shared/connecting.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/connecting.md)]

---

## Versions installed

- Kubernetes 1.15.0
- Docker Engine 18.09.7
- Docker Compose 1.24.1

.exercise[

- Check all installed versions:
  ```bash
  kubectl version
  docker version
  docker-compose -v
  ```

]

.debug[[k8s/versions-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/versions-k8s.md)]

---

class: extra-details

## Kubernetes and Docker compatibility

- Kubernetes 1.15 validates Docker Engine versions [up to 18.09](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.15.md#dependencies)

  (the latest version when Kubernetes 1.14 was released)

- Kubernetes 1.13 only validates Docker Engine versions [up to 18.06](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md#external-dependencies)

- Is it a problem if I use Kubernetes with a "too recent" Docker Engine?

--

class: extra-details

- No!

- "Validates" = continuous integration builds with very extensive (and expensive) testing

- The Docker API is versioned, and offers strong backward-compatibility

  (If a client uses e.g. API v1.25, the Docker Engine will keep behaving the same way)

.debug[[k8s/versions-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/versions-k8s.md)]

---

class: pic

.interstitial[]

---

name: toc-our-sample-application
class: title

Our sample application

.nav[
[Previous section](#toc-from-docker-to-kubernetes)
|
[Back to table of contents](#toc-chapter-7)
|
[Next section](#toc-kubernetes-concepts)
]

.debug[(automatically generated title slide)]

---

# Our sample application

- We will clone the GitHub repository onto our `node1`

- The repository also contains scripts and tools that we will use throughout the workshop

.exercise[

- Clone the repository on `node1`:
  ```bash
  git clone https://github.com/jpetazzo/container.training
  ```

]

(You can also fork the repository on GitHub and clone your fork if you prefer that.)
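The `.debug` links at the bottom of each slide point to the exact branch used to build these materials (here, `maersk-2019-07`). If you want your local copy to match the slides word for word, you can check out that branch after cloning; this is entirely optional, and the default branch is what the exercises assume. A quick sketch (assuming that branch still exists upstream):

```bash
git clone https://github.com/jpetazzo/container.training
cd container.training
# Optional: use the exact revision referenced in the slide footers
git checkout maersk-2019-07
```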
.debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- ## Downloading and running the application Let's start this before we look around, as downloading will take a little time... .exercise[ - Go to the `dockercoins` directory, in the cloned repo: ```bash cd ~/container.training/dockercoins ``` - Use Compose to build and run all containers: ```bash docker-compose up ``` ] Compose tells Docker to build all container images (pulling the corresponding base images), then starts all containers, and displays aggregated logs. .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- ## What's this application? -- - It is a DockerCoin miner! .emoji[π°π³π¦π’] -- - No, you can't buy coffee with DockerCoins -- - How DockerCoins works: - generate a few random bytes - hash these bytes - increment a counter (to keep track of speed) - repeat forever! -- - DockerCoins is *not* a cryptocurrency (the only common points are "randomness," "hashing," and "coins" in the name) .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- ## DockerCoins in the microservices era - DockerCoins is made of 5 services: - `rng` = web service generating random bytes - `hasher` = web service computing hash of POSTed data - `worker` = background process calling `rng` and `hasher` - `webui` = web interface to watch progress - `redis` = data store (holds a counter updated by `worker`) - These 5 services are visible in the application's Compose file, [docker-compose.yml]( https://github.com/jpetazzo/container.training/blob/master/dockercoins/docker-compose.yml) .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- ## How DockerCoins works - `worker` invokes web service `rng` to generate random bytes - `worker` invokes web service `hasher` to hash these bytes - `worker` does this in an infinite loop - every second, `worker` updates `redis` to indicate how many loops were done - `webui` queries `redis`, and computes and exposes "hashing speed" in our browser *(See diagram on next slide!)* .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- class: pic  .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- ## Service discovery in container-land How does each service find out the address of the other ones? 
-- - We do not hard-code IP addresses in the code - We do not hard-code FQDNs in the code, either - We just connect to a service name, and container-magic does the rest (And by container-magic, we mean "a crafty, dynamic, embedded DNS server") .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- ## Example in `worker/worker.py` ```python redis = Redis("`redis`") def get_random_bytes(): r = requests.get("http://`rng`/32") return r.content def hash_bytes(data): r = requests.post("http://`hasher`/", data=data, headers={"Content-Type": "application/octet-stream"}) ``` (Full source code available [here]( https://github.com/jpetazzo/container.training/blob/8279a3bce9398f7c1a53bdd95187c53eda4e6435/dockercoins/worker/worker.py#L17 )) .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- class: extra-details ## Links, naming, and service discovery - Containers can have network aliases (resolvable through DNS) - Compose file version 2+ makes each container reachable through its service name - Compose file version 1 required "links" sections to accomplish this - Network aliases are automatically namespaced - you can have multiple apps declaring and using a service named `database` - containers in the blue app will resolve `database` to the IP of the blue database - containers in the green app will resolve `database` to the IP of the green database .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- ## Show me the code! - You can check the GitHub repository with all the materials of this workshop: https://github.com/jpetazzo/container.training - The application is in the [dockercoins]( https://github.com/jpetazzo/container.training/tree/master/dockercoins) subdirectory - The Compose file ([docker-compose.yml]( https://github.com/jpetazzo/container.training/blob/master/dockercoins/docker-compose.yml)) lists all 5 services - `redis` is using an official image from the Docker Hub - `hasher`, `rng`, `worker`, `webui` are each built from a Dockerfile - Each service's Dockerfile and source code is in its own directory (`hasher` is in the [hasher](https://github.com/jpetazzo/container.training/blob/master/dockercoins/hasher/) directory, `rng` is in the [rng](https://github.com/jpetazzo/container.training/blob/master/dockercoins/rng/) directory, etc.) 
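If we'd rather inspect that Compose file from the command line than from a browser, `docker-compose` itself can list and resolve the services it defines. A quick check (assuming the repository was cloned to the default location on `node1`):

```bash
cd ~/container.training/dockercoins
# List the services defined in docker-compose.yml
docker-compose config --services
# Show the fully resolved Compose file (build contexts, images, ports...)
docker-compose config
```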
.debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- class: extra-details ## Compose file format version *This is relevant only if you have used Compose before 2016...* - Compose 1.6 introduced support for a new Compose file format (aka "v2") - Services are no longer at the top level, but under a `services` section - There has to be a `version` key at the top level, with value `"2"` (as a string, not an integer) - Containers are placed on a dedicated network, making links unnecessary - There are other minor differences, but upgrade is easy and straightforward .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- ## Our application at work - On the left-hand side, the "rainbow strip" shows the container names - On the right-hand side, we see the output of our containers - We can see the `worker` service making requests to `rng` and `hasher` - For `rng` and `hasher`, we see HTTP access logs .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- ## Connecting to the web UI - "Logs are exciting and fun!" (No-one, ever) - The `webui` container exposes a web dashboard; let's view it .exercise[ - With a web browser, connect to `node1` on port 8000 - Remember: the `nodeX` aliases are valid only on the nodes themselves - In your browser, you need to enter the IP address of your node ] A drawing area should show up, and after a few seconds, a blue graph will appear. .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- class: self-paced, extra-details ## If the graph doesn't load If you just see a `Page not found` error, it might be because your Docker Engine is running on a different machine. This can be the case if: - you are using the Docker Toolbox - you are using a VM (local or remote) created with Docker Machine - you are controlling a remote Docker Engine When you run DockerCoins in development mode, the web UI static files are mapped to the container using a volume. Alas, volumes can only work on a local environment, or when using Docker Desktop for Mac or Windows. How to fix this? Stop the app with `^C`, edit `dockercoins.yml`, comment out the `volumes` section, and try again. .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- class: extra-details ## Why does the speed seem irregular? - It *looks like* the speed is approximately 4 hashes/second - Or more precisely: 4 hashes/second, with regular dips down to zero - Why? -- class: extra-details - The app actually has a constant, steady speed: 3.33 hashes/second (which corresponds to 1 hash every 0.3 seconds, for *reasons*) - Yes, and? .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- class: extra-details ## The reason why this graph is *not awesome* - The worker doesn't update the counter after every loop, but up to once per second - The speed is computed by the browser, checking the counter about once per second - Between two consecutive updates, the counter will increase either by 4, or by 0 - The perceived speed will therefore be 4 - 4 - 4 - 0 - 4 - 4 - 0 etc. - What can we conclude from this? -- class: extra-details - "I'm clearly incapable of writing good frontend code!" 
π β JΓ©rΓ΄me .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- ## Stopping the application - If we interrupt Compose (with `^C`), it will politely ask the Docker Engine to stop the app - The Docker Engine will send a `TERM` signal to the containers - If the containers do not exit in a timely manner, the Engine sends a `KILL` signal .exercise[ - Stop the application by hitting `^C` ] -- Some containers exit immediately, others take longer. The containers that do not handle `SIGTERM` end up being killed after a 10s timeout. If we are very impatient, we can hit `^C` a second time! .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/sampleapp.md)] --- ## Clean up - Before moving on, let's remove those containers .exercise[ - Tell Compose to remove everything: ```bash docker-compose down ``` ] .debug[[shared/composedown.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/composedown.md)] --- class: pic .interstitial[] --- name: toc-kubernetes-concepts class: title Kubernetes concepts .nav[ [Previous section](#toc-our-sample-application) | [Back to table of contents](#toc-chapter-7) | [Next section](#toc-first-contact-with-kubectl) ] .debug[(automatically generated title slide)] --- # Kubernetes concepts - Kubernetes is a container management system - It runs and manages containerized applications on a cluster -- - What does that really mean? .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- ## Basic things we can ask Kubernetes to do -- - Start 5 containers using image `atseashop/api:v1.3` -- - Place an internal load balancer in front of these containers -- - Start 10 containers using image `atseashop/webfront:v1.3` -- - Place a public load balancer in front of these containers -- - It's Black Friday (or Christmas), traffic spikes, grow our cluster and add containers -- - New release! Replace my containers with the new image `atseashop/webfront:v1.4` -- - Keep processing requests during the upgrade; update my containers one at a time .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- ## Other things that Kubernetes can do for us - Basic autoscaling - Blue/green deployment, canary deployment - Long running services, but also batch (one-off) jobs - Overcommit our cluster and *evict* low-priority jobs - Run services with *stateful* data (databases etc.) 
- Fine-grained access control defining *what* can be done by *whom* on *which* resources - Integrating third party services (*service catalog*) - Automating complex tasks (*operators*) .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- ## Kubernetes architecture .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- class: pic  .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- ## Kubernetes architecture - Ha ha ha ha - OK, I was trying to scare you, it's much simpler than that β€οΈ .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- class: pic  .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- ## Credits - The first schema is a Kubernetes cluster with storage backed by multi-path iSCSI (Courtesy of [Yongbok Kim](https://www.yongbok.net/blog/)) - The second one is a simplified representation of a Kubernetes cluster (Courtesy of [Imesh Gunaratne](https://medium.com/containermind/a-reference-architecture-for-deploying-wso2-middleware-on-kubernetes-d4dee7601e8e)) .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- ## Kubernetes architecture: the nodes - The nodes executing our containers run a collection of services: - a container Engine (typically Docker) - kubelet (the "node agent") - kube-proxy (a necessary but not sufficient network component) - Nodes were formerly called "minions" (You might see that word in older articles or documentation) .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- ## Kubernetes architecture: the control plane - The Kubernetes logic (its "brains") is a collection of services: - the API server (our point of entry to everything!) - core services like the scheduler and controller manager - `etcd` (a highly available key/value store; the "database" of Kubernetes) - Together, these services form the control plane of our cluster - The control plane is also called the "master" .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- class: pic  .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- class: extra-details ## Running the control plane on special nodes - It is common to reserve a dedicated node for the control plane (Except for single-node development clusters, like when using minikube) - This node is then called a "master" (Yes, this is ambiguous: is the "master" a node, or the whole control plane?) 
- Normal applications are restricted from running on this node (By using a mechanism called ["taints"](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/)) - When high availability is required, each service of the control plane must be resilient - The control plane is then replicated on multiple nodes (This is sometimes called a "multi-master" setup) .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- class: extra-details ## Running the control plane outside containers - The services of the control plane can run in or out of containers - For instance: since `etcd` is a critical service, some people deploy it directly on a dedicated cluster (without containers) (This is illustrated on the first "super complicated" schema) - In some hosted Kubernetes offerings (e.g. AKS, GKE, EKS), the control plane is invisible (We only "see" a Kubernetes API endpoint) - In that case, there is no "master node" *For this reason, it is more accurate to say "control plane" rather than "master."* .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- class: extra-details ## Do we need to run Docker at all? No! -- - By default, Kubernetes uses the Docker Engine to run containers - We could also use `rkt` ("Rocket") from CoreOS - Or leverage other pluggable runtimes through the *Container Runtime Interface* (like CRI-O, or containerd) .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- class: extra-details ## Do we need to run Docker at all? Yes! -- - In this workshop, we run our app on a single node first - We will need to build images and ship them around - We can do these things without Docker (and get diagnosed with NIHΒΉ syndrome) - Docker is still the most stable container engine today (but other options are maturing very quickly) .footnote[ΒΉ[Not Invented Here](https://en.wikipedia.org/wiki/Not_invented_here)] .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- class: extra-details ## Do we need to run Docker at all? - On our development environments, CI pipelines ... 
: *Yes, almost certainly* - On our production servers: *Yes (today)* *Probably not (in the future)* .footnote[More information about CRI [on the Kubernetes blog](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes)] .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- ## Interacting with Kubernetes - We will interact with our Kubernetes cluster through the Kubernetes API - The Kubernetes API is (mostly) RESTful - It allows us to create, read, update, delete *resources* - A few common resource types are: - node (a machine β physical or virtual β in our cluster) - pod (group of containers running together on a node) - service (stable network endpoint to connect to one or multiple containers) .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- class: pic  .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- ## Credits - The first diagram is courtesy of Lucas KΓ€ldstrΓΆm, in [this presentation](https://speakerdeck.com/luxas/kubeadm-cluster-creation-internals-from-self-hosting-to-upgradability-and-ha) - it's one of the best Kubernetes architecture diagrams available! - The second diagram is courtesy of Weave Works - a *pod* can have multiple containers working together - IP addresses are associated with *pods*, not with individual containers Both diagrams used with permission. .debug[[k8s/concepts-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/concepts-k8s.md)] --- class: pic .interstitial[] --- name: toc-first-contact-with-kubectl class: title First contact with `kubectl` .nav[ [Previous section](#toc-kubernetes-concepts) | [Back to table of contents](#toc-chapter-7) | [Next section](#toc-running-our-first-containers-on-kubernetes) ] .debug[(automatically generated title slide)] --- # First contact with `kubectl` - `kubectl` is (almost) the only tool we'll need to talk to Kubernetes - It is a rich CLI tool around the Kubernetes API (Everything you can do with `kubectl`, you can do directly with the API) - On our machines, there is a `~/.kube/config` file with: - the Kubernetes API address - the path to our TLS certificates used to authenticate - You can also use the `--kubeconfig` flag to pass a config file - Or directly `--server`, `--user`, etc. - `kubectl` can be pronounced "Cube C T L", "Cube cuttle", "Cube cuddle"... .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- ## `kubectl get` - Let's look at our `Node` resources with `kubectl get`! .exercise[ - Look at the composition of our cluster: ```bash kubectl get node ``` - These commands are equivalent: ```bash kubectl get no kubectl get node kubectl get nodes ``` ] .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- ## Obtaining machine-readable output - `kubectl get` can output JSON, YAML, or be directly formatted .exercise[ - Give us more info about the nodes: ```bash kubectl get nodes -o wide ``` - Let's have some YAML: ```bash kubectl get no -o yaml ``` See that `kind: List` at the end? It's the type of our result! 
] .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- ## (Ab)using `kubectl` and `jq` - It's super easy to build custom reports .exercise[ - Show the capacity of all our nodes as a stream of JSON objects: ```bash kubectl get nodes -o json | jq ".items[] | {name:.metadata.name} + .status.capacity" ``` ] .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- class: extra-details ## Exploring types and definitions - We can list all available resource types by running `kubectl api-resources` (In Kubernetes 1.10 and prior, this command used to be `kubectl get`) - We can view the definition for a resource type with: ```bash kubectl explain type ``` - We can view the definition of a field in a resource, for instance: ```bash kubectl explain node.spec ``` - Or get the full definition of all fields and sub-fields: ```bash kubectl explain node --recursive ``` .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- class: extra-details ## Introspection vs. documentation - We can access the same information by reading the [API documentation](https://kubernetes.io/docs/reference/#api-reference) - The API documentation is usually easier to read, but: - it won't show custom types (like Custom Resource Definitions) - we need to make sure that we look at the correct version - `kubectl api-resources` and `kubectl explain` perform *introspection* (they communicate with the API server and obtain the exact type definitions) .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- ## Type names - The most common resource names have three forms: - singular (e.g. `node`, `service`, `deployment`) - plural (e.g. `nodes`, `services`, `deployments`) - short (e.g. `no`, `svc`, `deploy`) - Some resources do not have a short name - `Endpoints` only have a plural form (because even a single `Endpoints` resource is actually a list of endpoints) .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- ## Viewing details - We can use `kubectl get -o yaml` to see all available details - However, YAML output is often simultaneously too much and not enough - For instance, `kubectl get node node1 -o yaml` is: - too much information (e.g.: list of images available on this node) - not enough information (e.g.: doesn't show pods running on this node) - difficult to read for a human operator - For a comprehensive overview, we can use `kubectl describe` instead .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- ## `kubectl describe` - `kubectl describe` needs a resource type and (optionally) a resource name - It is possible to provide a resource name *prefix* (all matching objects will be displayed) - `kubectl describe` will retrieve some extra information about the resource .exercise[ - Look at the information available for `node1` with one of the following commands: ```bash kubectl describe node/node1 kubectl describe node node1 ``` ] (We should notice a bunch of control plane pods.) 
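Two variations on the same idea (assuming our nodes are named `node1`, `node2`, ...): `describe` accepts a name prefix, and when we only need a single field, `-o jsonpath` can be easier to read than the full YAML.

```bash
# Describe every node whose name starts with "node"
kubectl describe node node

# Extract a single field instead of dumping the whole object
kubectl get node node1 -o jsonpath='{.status.nodeInfo.kubeletVersion}{"\n"}'
```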
.debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- ## Services - A *service* is a stable endpoint to connect to "something" (In the initial proposal, they were called "portals") .exercise[ - List the services on our cluster with one of these commands: ```bash kubectl get services kubectl get svc ``` ] -- There is already one service on our cluster: the Kubernetes API itself. .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- ## ClusterIP services - A `ClusterIP` service is internal, available from the cluster only - This is useful for introspection from within containers .exercise[ - Try to connect to the API: ```bash curl -k https://`10.96.0.1` ``` - `-k` is used to skip certificate verification - Make sure to replace 10.96.0.1 with the CLUSTER-IP shown by `kubectl get svc` ] -- The error that we see is expected: the Kubernetes API requires authentication. .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- ## Listing running containers - Containers are manipulated through *pods* - A pod is a group of containers: - running together (on the same node) - sharing resources (RAM, CPU; but also network, volumes) .exercise[ - List pods on our cluster: ```bash kubectl get pods ``` ] -- *Where are the pods that we saw just a moment earlier?!?* .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- ## Namespaces - Namespaces allow us to segregate resources .exercise[ - List the namespaces on our cluster with one of these commands: ```bash kubectl get namespaces kubectl get namespace kubectl get ns ``` ] -- *You know what ... This `kube-system` thing looks suspicious.* *In fact, I'm pretty sure it showed up earlier, when we did:* `kubectl describe node node1` .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- ## Accessing namespaces - By default, `kubectl` uses the `default` namespace - We can see resources in all namespaces with `--all-namespaces` .exercise[ - List the pods in all namespaces: ```bash kubectl get pods --all-namespaces ``` - Since Kubernetes 1.14, we can also use `-A` as a shorter version: ```bash kubectl get pods -A ``` ] *Here are our system pods!* .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- ## What are all these control plane pods? 
- `etcd` is our etcd server - `kube-apiserver` is the API server - `kube-controller-manager` and `kube-scheduler` are other control plane components - `coredns` provides DNS-based service discovery ([replacing kube-dns as of 1.11](https://kubernetes.io/blog/2018/07/10/coredns-ga-for-kubernetes-cluster-dns/)) - `kube-proxy` is the (per-node) component managing port mappings and such - `weave` is the (per-node) component managing the network overlay - the `READY` column indicates the number of containers in each pod (1 for most pods, but `weave` has 2, for instance) .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- ## Scoping another namespace - We can also look at a different namespace (other than `default`) .exercise[ - List only the pods in the `kube-system` namespace: ```bash kubectl get pods --namespace=kube-system kubectl get pods -n kube-system ``` ] .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- ## Namespaces and other `kubectl` commands - We can use `-n`/`--namespace` with almost every `kubectl` command - Example: - `kubectl create --namespace=X` to create something in namespace X - We can use `-A`/`--all-namespaces` with most commands that manipulate multiple objects - Examples: - `kubectl delete` can delete resources across multiple namespaces - `kubectl label` can add/remove/update labels across multiple namespaces .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- class: extra-details ## What about `kube-public`? .exercise[ - List the pods in the `kube-public` namespace: ```bash kubectl -n kube-public get pods ``` ] Nothing! `kube-public` is created by kubeadm & [used for security bootstrapping](https://kubernetes.io/blog/2017/01/stronger-foundation-for-creating-and-managing-kubernetes-clusters). .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- class: extra-details ## Exploring `kube-public` - The only interesting object in `kube-public` is a ConfigMap named `cluster-info` .exercise[ - List ConfigMap objects: ```bash kubectl -n kube-public get configmaps ``` - Inspect `cluster-info`: ```bash kubectl -n kube-public get configmap cluster-info -o yaml ``` ] Note the `selfLink` URI: `/api/v1/namespaces/kube-public/configmaps/cluster-info` We can use that! 
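For reference, the same data can be read with `kubectl` and our existing credentials; the next slides deliberately use `curl` instead, to show that it also works *without* authentication.

```bash
# Extract the kubeconfig stored in the cluster-info ConfigMap
kubectl -n kube-public get configmap cluster-info \
        -o jsonpath='{.data.kubeconfig}'
```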
.debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- class: extra-details ## Accessing `cluster-info` - Earlier, when trying to access the API server, we got a `Forbidden` message - But `cluster-info` is readable by everyone (even without authentication) .exercise[ - Retrieve `cluster-info`: ```bash curl -k https://10.96.0.1/api/v1/namespaces/kube-public/configmaps/cluster-info ``` ] - We were able to access `cluster-info` (without auth) - It contains a `kubeconfig` file .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- class: extra-details ## Retrieving `kubeconfig` - We can easily extract the `kubeconfig` file from this ConfigMap .exercise[ - Display the content of `kubeconfig`: ```bash curl -sk https://10.96.0.1/api/v1/namespaces/kube-public/configmaps/cluster-info \ | jq -r .data.kubeconfig ``` ] - This file holds the canonical address of the API server, and the public key of the CA - This file *does not* hold client keys or tokens - This is not sensitive information, but allows us to establish trust .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- class: extra-details ## What about `kube-node-lease`? - Starting with Kubernetes 1.14, there is a `kube-node-lease` namespace (or in Kubernetes 1.13 if the NodeLease feature gate is enabled) - That namespace contains one Lease object per node - *Node leases* are a new way to implement node heartbeats (i.e. node regularly pinging the control plane to say "I'm alive!") - For more details, see [KEP-0009] or the [node controller documentation] [KEP-0009]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/0009-node-heartbeat.md [node controller documentation]: https://kubernetes.io/docs/concepts/architecture/nodes/#node-controller .debug[[k8s/kubectlget.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlget.md)] --- class: pic .interstitial[] --- name: toc-running-our-first-containers-on-kubernetes class: title Running our first containers on Kubernetes .nav[ [Previous section](#toc-first-contact-with-kubectl) | [Back to table of contents](#toc-chapter-7) | [Next section](#toc-accessing-logs-from-the-cli) ] .debug[(automatically generated title slide)] --- # Running our first containers on Kubernetes - First things first: we cannot run a container -- - We are going to run a pod, and in that pod there will be a single container -- - In that container in the pod, we are going to run a simple `ping` command - Then we are going to start additional copies of the pod .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- ## Starting a simple pod with `kubectl run` - We need to specify at least a *name* and the image we want to use .exercise[ - Let's ping `1.1.1.1`, Cloudflare's [public DNS resolver](https://blog.cloudflare.com/announcing-1111/): ```bash kubectl run pingpong --image alpine ping 1.1.1.1 ``` ] -- (Starting with Kubernetes 1.12, we get a message telling us that `kubectl run` is deprecated. Let's ignore it for now.) 
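For the curious: `kubectl run` is imperative sugar, and the same thing can be expressed declaratively with a YAML manifest and `kubectl apply` (both covered later). This is only a minimal sketch; the objects actually generated by `kubectl run` carry additional labels and defaults.

```bash
# Rough declarative equivalent of `kubectl run pingpong --image alpine ping 1.1.1.1`
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pingpong
spec:
  replicas: 1
  selector:
    matchLabels:
      run: pingpong
  template:
    metadata:
      labels:
        run: pingpong
    spec:
      containers:
      - name: pingpong
        image: alpine
        command: ["ping", "1.1.1.1"]
EOF
```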
.debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- ## Behind the scenes of `kubectl run` - Let's look at the resources that were created by `kubectl run` .exercise[ - List most resource types: ```bash kubectl get all ``` ] -- We should see the following things: - `deployment.apps/pingpong` (the *deployment* that we just created) - `replicaset.apps/pingpong-xxxxxxxxxx` (a *replica set* created by the deployment) - `pod/pingpong-xxxxxxxxxx-yyyyy` (a *pod* created by the replica set) Note: as of 1.10.1, resource types are displayed in more detail. .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- ## What are these different things? - A *deployment* is a high-level construct - allows scaling, rolling updates, rollbacks - multiple deployments can be used together to implement a [canary deployment](https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#canary-deployments) - delegates pods management to *replica sets* - A *replica set* is a low-level construct - makes sure that a given number of identical pods are running - allows scaling - rarely used directly - A *replication controller* is the (deprecated) predecessor of a replica set .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- ## Our `pingpong` deployment - `kubectl run` created a *deployment*, `deployment.apps/pingpong` ``` NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deployment.apps/pingpong 1 1 1 1 10m ``` - That deployment created a *replica set*, `replicaset.apps/pingpong-xxxxxxxxxx` ``` NAME DESIRED CURRENT READY AGE replicaset.apps/pingpong-7c8bbcd9bc 1 1 1 10m ``` - That replica set created a *pod*, `pod/pingpong-xxxxxxxxxx-yyyyy` ``` NAME READY STATUS RESTARTS AGE pod/pingpong-7c8bbcd9bc-6c9qz 1/1 Running 0 10m ``` - We'll see later how these folks play together for: - scaling, high availability, rolling updates .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- ## Viewing container output - Let's use the `kubectl logs` command - We will pass either a *pod name*, or a *type/name* (E.g. if we specify a deployment or replica set, it will get the first pod in it) - Unless specified otherwise, it will only show logs of the first container in the pod (Good thing there's only one in ours!) 
.exercise[ - View the result of our `ping` command: ```bash kubectl logs deploy/pingpong ``` ] .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- ## Streaming logs in real time - Just like `docker logs`, `kubectl logs` supports convenient options: - `-f`/`--follow` to stream logs in real time (Γ la `tail -f`) - `--tail` to indicate how many lines you want to see (from the end) - `--since` to get logs only after a given timestamp .exercise[ - View the latest logs of our `ping` command: ```bash kubectl logs deploy/pingpong --tail 1 --follow ``` ] .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- ## Scaling our application - We can create additional copies of our container (I mean, our pod) with `kubectl scale` .exercise[ - Scale our `pingpong` deployment: ```bash kubectl scale deploy/pingpong --replicas 3 ``` - Note that this command does exactly the same thing: ```bash kubectl scale deployment pingpong --replicas 3 ``` ] Note: what if we tried to scale `replicaset.apps/pingpong-xxxxxxxxxx`? We could! But the *deployment* would notice it right away, and scale back to the initial level. .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- ## Resilience - The *deployment* `pingpong` watches its *replica set* - The *replica set* ensures that the right number of *pods* are running - What happens if pods disappear? .exercise[ - In a separate window, list pods, and keep watching them: ```bash kubectl get pods -w ``` - Destroy a pod: ``` kubectl delete pod pingpong-xxxxxxxxxx-yyyyy ``` ] .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- ## What if we wanted something different? - What if we wanted to start a "one-shot" container that *doesn't* get restarted? - We could use `kubectl run --restart=OnFailure` or `kubectl run --restart=Never` - These commands would create *jobs* or *pods* instead of *deployments* - Under the hood, `kubectl run` invokes "generators" to create resource descriptions - We could also write these resource descriptions ourselves (typically in YAML), and create them on the cluster with `kubectl apply -f` (discussed later) - With `kubectl run --schedule=...`, we can also create *cronjobs* .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- ## What about that deprecation warning? 
- As we can see from the previous slide, `kubectl run` can do many things - The exact type of resource created is not obvious - To make things more explicit, it is better to use `kubectl create`: - `kubectl create deployment` to create a deployment - `kubectl create job` to create a job - `kubectl create cronjob` to run a job periodically (since Kubernetes 1.14) - Eventually, `kubectl run` will be used only to start one-shot pods (see https://github.com/kubernetes/kubernetes/pull/68132) .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- ## Various ways of creating resources - `kubectl run` - easy way to get started - versatile - `kubectl create ` - explicit, but lacks some features - can't create a CronJob before Kubernetes 1.14 - can't pass command-line arguments to deployments - `kubectl create -f foo.yaml` or `kubectl apply -f foo.yaml` - all features are available - requires writing YAML .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- ## Viewing logs of multiple pods - When we specify a deployment name, only one single pod's logs are shown - We can view the logs of multiple pods by specifying a *selector* - A selector is a logic expression using *labels* - Conveniently, when you `kubectl run somename`, the associated objects have a `run=somename` label .exercise[ - View the last line of log from all pods with the `run=pingpong` label: ```bash kubectl logs -l run=pingpong --tail 1 ``` ] .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- ### Streaming logs of multiple pods - Can we stream the logs of all our `pingpong` pods? .exercise[ - Combine `-l` and `-f` flags: ```bash kubectl logs -l run=pingpong --tail 1 -f ``` ] *Note: combining `-l` and `-f` is only possible since Kubernetes 1.14!* *Let's try to understand why ...* .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- class: extra-details ### Streaming logs of many pods - Let's see what happens if we try to stream the logs for more than 5 pods .exercise[ - Scale up our deployment: ```bash kubectl scale deployment pingpong --replicas=8 ``` - Stream the logs: ```bash kubectl logs -l run=pingpong --tail 1 -f ``` ] We see a message like the following one: ``` error: you are attempting to follow 8 log streams, but maximum allowed concurency is 5, use --max-log-requests to increase the limit ``` .debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- class: extra-details ## Why can't we stream the logs of many pods? 
- `kubectl` opens one connection to the API server per pod

- For each pod, the API server opens one extra connection to the corresponding kubelet

- If there are 1000 pods in our deployment, that's 1000 inbound + 1000 outbound connections on the API server

- This could easily put a lot of stress on the API server

- Prior to Kubernetes 1.14, it was decided to *not* allow multiple connections

- From Kubernetes 1.14, it is allowed, but limited to 5 connections

  (this can be changed with `--max-log-requests`)

- For more details about the rationale, see [PR #67573](https://github.com/kubernetes/kubernetes/pull/67573)

.debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)]

---

## Shortcomings of `kubectl logs`

- We don't see which pod sent which log line

- If pods are restarted / replaced, the log stream stops

- If new pods are added, we don't see their logs

- To stream the logs of multiple pods, we need to write a selector

- There are external tools to address these shortcomings

  (e.g.: [Stern](https://github.com/wercker/stern))

.debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)]

---

class: extra-details

## `kubectl logs -l ... --tail N`

- If we run this with Kubernetes 1.12, the last command shows multiple lines

- This is a regression when `--tail` is used together with `-l`/`--selector`

- It always shows the last 10 lines of output for each container

  (instead of the number of lines specified on the command line)

- The problem was fixed in Kubernetes 1.13

*See [#70554](https://github.com/kubernetes/kubernetes/issues/70554) for details.*

.debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)]

---

## Aren't we flooding 1.1.1.1?

- If you're wondering this, good question!

- Don't worry, though:

  *APNIC's research group held the IP addresses 1.1.1.1 and 1.0.0.1.
  While the addresses were valid, so many people had entered them into various
  random systems that they were continuously overwhelmed by a flood of garbage
  traffic. APNIC wanted to study this garbage traffic but any time they'd tried
  to announce the IPs, the flood would overwhelm any conventional network.*

  (Source: https://blog.cloudflare.com/announcing-1111/)

- It's very unlikely that our concerted pings manage to produce even a modest blip at Cloudflare's NOC!
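To put a rough number on it: `ping` sends about one ICMP echo request per second by default, so our total traffic is on the order of one small packet per second per pod. Counting the pods gives an estimate (assuming they still carry the `run=pingpong` label set by `kubectl run`):

```bash
# Number of pingpong pods = approximate number of pings per second
kubectl get pods -l run=pingpong --no-headers | wc -l
```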
.debug[[k8s/kubectlrun.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlrun.md)] --- class: pic .interstitial[] --- name: toc-accessing-logs-from-the-cli class: title Accessing logs from the CLI .nav[ [Previous section](#toc-running-our-first-containers-on-kubernetes) | [Back to table of contents](#toc-chapter-8) | [Next section](#toc-declarative-vs-imperative) ] .debug[(automatically generated title slide)] --- # Accessing logs from the CLI - The `kubectl logs` command has limitations: - it cannot stream logs from multiple pods at a time - when showing logs from multiple pods, it mixes them all together - We are going to see how to do it better .debug[[k8s/logs-cli.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/logs-cli.md)] --- ## Doing it manually - We *could* (if we were so inclined) write a program or script that would: - take a selector as an argument - enumerate all pods matching that selector (with `kubectl get -l ...`) - fork one `kubectl logs --follow ...` command per container - annotate the logs (the output of each `kubectl logs ...` process) with their origin - preserve ordering by using `kubectl logs --timestamps ...` and merge the output -- - We *could* do it, but thankfully, others did it for us already! .debug[[k8s/logs-cli.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/logs-cli.md)] --- ## Stern [Stern](https://github.com/wercker/stern) is an open source project by [Wercker](http://www.wercker.com/). From the README: *Stern allows you to tail multiple pods on Kubernetes and multiple containers within the pod. Each result is color coded for quicker debugging.* *The query is a regular expression so the pod name can easily be filtered and you don't need to specify the exact id (for instance omitting the deployment id). If a pod is deleted it gets removed from tail and if a new pod is added it automatically gets tailed.* Exactly what we need! .debug[[k8s/logs-cli.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/logs-cli.md)] --- ## Installing Stern - Run `stern` (without arguments) to check if it's installed: ``` $ stern Tail multiple pods and containers from Kubernetes Usage: stern pod-query [flags] ``` - If it is not installed, the easiest method is to download a [binary release](https://github.com/wercker/stern/releases) - The following commands will install Stern on a Linux Intel 64 bit machine: ```bash sudo curl -L -o /usr/local/bin/stern \ https://github.com/wercker/stern/releases/download/1.11.0/stern_linux_amd64 sudo chmod +x /usr/local/bin/stern ``` .debug[[k8s/logs-cli.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/logs-cli.md)] --- ## Using Stern - There are two ways to specify the pods whose logs we want to see: - `-l` followed by a selector expression (like with many `kubectl` commands) - with a "pod query," i.e. 
a regex used to match pod names

- These two ways can be combined if necessary

.exercise[

- View the logs for all the rng containers:
  ```bash
  stern rng
  ```

]

.debug[[k8s/logs-cli.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/logs-cli.md)]

---

## Stern convenient options

- The `--tail N` flag shows the last `N` lines for each container

  (Instead of showing the logs since the creation of the container)

- The `-t` / `--timestamps` flag shows timestamps

- The `--all-namespaces` flag is self-explanatory

.exercise[

- View what's up with the `weave` system containers:
  ```bash
  stern --tail 1 --timestamps --all-namespaces weave
  ```

]

.debug[[k8s/logs-cli.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/logs-cli.md)]

---

## Using Stern with a selector

- When specifying a selector, we can omit the value for a label

- This will match all objects having that label

  (regardless of the value)

- Everything created with `kubectl run` has a label `run`

- We can use that property to view the logs of all the pods created with `kubectl run`

- Similarly, everything created with `kubectl create deployment` has a label `app`

.exercise[

- View the logs for all the things started with `kubectl create deployment`:
  ```bash
  stern -l app
  ```

]

.debug[[k8s/logs-cli.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/logs-cli.md)]

---

class: pic

.interstitial[]

---

name: toc-declarative-vs-imperative
class: title

Declarative vs imperative

.nav[
[Previous section](#toc-accessing-logs-from-the-cli)
|
[Back to table of contents](#toc-chapter-8)
|
[Next section](#toc-kubernetes-network-model)
]

.debug[(automatically generated title slide)]

---

# Declarative vs imperative

- Our container orchestrator puts a very strong emphasis on being *declarative*

- Declarative:

  *I would like a cup of tea.*

- Imperative:

  *Boil some water. Pour it in a teapot. Add tea leaves. Steep for a while. Serve in a cup.*

--

- Declarative seems simpler at first ...

--

- ... As long as you know how to brew tea

.debug[[shared/declarative.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/declarative.md)]

---

## Declarative vs imperative

- What declarative would really be:

  *I want a cup of tea, obtained by pouring an infusion¹ of tea leaves in a cup.*

--

  *¹An infusion is obtained by letting the object steep a few minutes in hot² water.*

--

  *²Hot liquid is obtained by pouring it in an appropriate container³ and setting it on a stove.*

--

  *³Ah, finally, containers! Something we know about. Let's get to work, shall we?*

--

.footnote[Did you know there was an [ISO standard](https://en.wikipedia.org/wiki/ISO_3103) specifying how to brew tea?]

.debug[[shared/declarative.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/declarative.md)]

---

## Declarative vs imperative

- Imperative systems:

  - simpler

  - if a task is interrupted, we have to restart from scratch

- Declarative systems:

  - if a task is interrupted (or if we show up to the party half-way through),
    we can figure out what's missing and do only what's necessary

  - we need to be able to *observe* the system

  - ...
and compute a "diff" between *what we have* and *what we want* .debug[[shared/declarative.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/declarative.md)] --- ## Declarative vs imperative in Kubernetes - With Kubernetes, we cannot say: "run this container" - All we can do is write a *spec* and push it to the API server (by creating a resource like e.g. a Pod or a Deployment) - The API server will validate that spec (and reject it if it's invalid) - Then it will store it in etcd - A *controller* will "notice" that spec and act upon it .debug[[k8s/declarative.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/declarative.md)] --- ## Reconciling state - Watch for the `spec` fields in the YAML files later! - The *spec* describes *how we want the thing to be* - Kubernetes will *reconcile* the current state with the spec (technically, this is done by a number of *controllers*) - When we want to change some resource, we update the *spec* - Kubernetes will then *converge* that resource .debug[[k8s/declarative.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/declarative.md)] --- ## 19,000 words They say, "a picture is worth one thousand words." The following 19 slides show what really happens when we run: ```bash kubectl run web --image=nginx --replicas=3 ``` .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  
.debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/deploymentslideshow.md)] --- class: pic .interstitial[] --- name: toc-kubernetes-network-model class: title Kubernetes network model .nav[ [Previous section](#toc-declarative-vs-imperative) | [Back to table of contents](#toc-chapter-8) | [Next section](#toc-exposing-containers) ] .debug[(automatically generated title slide)] --- # Kubernetes network model - TL,DR: *Our cluster (nodes and pods) is one big flat IP network.* -- - In detail: - all nodes must be able to reach each other, without NAT - all pods must be able to reach each other, without NAT - pods and nodes must be able to reach each other, without NAT - each pod is aware of its IP address (no NAT) - pod IP addresses are assigned by the network implementation - Kubernetes doesn't mandate any particular implementation .debug[[k8s/kubenet.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubenet.md)] --- ## Kubernetes network model: the good - Everything can reach everything - No address translation - No port translation - No new protocol - The network implementation can decide how to allocate addresses - IP addresses don't have to be "portable" from a node to another (We can use e.g. 
a subnet per node and use a simple routed topology) - The specification is simple enough to allow many various implementations .debug[[k8s/kubenet.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubenet.md)] --- ## Kubernetes network model: the less good - Everything can reach everything - if you want security, you need to add network policies - the network implementation that you use needs to support them - There are literally dozens of implementations out there (15 are listed in the Kubernetes documentation) - Pods have level 3 (IP) connectivity, but *services* are level 4 (TCP or UDP) (Services map to a single UDP or TCP port; no port ranges or arbitrary IP packets) - `kube-proxy` is on the data path when connecting to a pod or container, and it's not particularly fast (relies on userland proxying or iptables) .debug[[k8s/kubenet.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubenet.md)] --- ## Kubernetes network model: in practice - The nodes that we are using have been set up to use [Weave](https://github.com/weaveworks/weave) - We don't endorse Weave in a particular way, it just Works For Us - Don't worry about the warning about `kube-proxy` performance - Unless you: - routinely saturate 10G network interfaces - count packet rates in millions per second - run high-traffic VOIP or gaming platforms - do weird things that involve millions of simultaneous connections (in which case you're already familiar with kernel tuning) - If necessary, there are alternatives to `kube-proxy`; e.g. [`kube-router`](https://www.kube-router.io) .debug[[k8s/kubenet.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubenet.md)] --- class: extra-details ## The Container Network Interface (CNI) - Most Kubernetes clusters use CNI "plugins" to implement networking - When a pod is created, Kubernetes delegates the network setup to these plugins (it can be a single plugin, or a combination of plugins, each doing one task) - Typically, CNI plugins will: - allocate an IP address (by calling an IPAM plugin) - add a network interface into the pod's network namespace - configure the interface as well as required routes etc. .debug[[k8s/kubenet.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubenet.md)] --- class: extra-details ## Multiple moving parts - The "pod-to-pod network" or "pod network": - provides communication between pods and nodes - is generally implemented with CNI plugins - The "pod-to-service network": - provides internal communication and load balancing - is generally implemented with kube-proxy (or e.g. kube-router) - Network policies: - provide firewalling and isolation - can be bundled with the "pod network" or provided by another component .debug[[k8s/kubenet.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubenet.md)] --- class: extra-details ## Even more moving parts - Inbound traffic can be handled by multiple components: - something like kube-proxy or kube-router (for NodePort services) - load balancers (ideally, connected to the pod network) - It is possible to use multiple pod networks in parallel (with "meta-plugins" like CNI-Genie or Multus) - Some solutions can fill multiple roles (e.g. 
kube-router can be set up to provide the pod network and/or network policies and/or replace kube-proxy) .debug[[k8s/kubenet.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubenet.md)] --- class: pic .interstitial[] --- name: toc-exposing-containers class: title Exposing containers .nav[ [Previous section](#toc-kubernetes-network-model) | [Back to table of contents](#toc-chapter-8) | [Next section](#toc-shipping-images-with-a-registry) ] .debug[(automatically generated title slide)] --- # Exposing containers - `kubectl expose` creates a *service* for existing pods - A *service* is a stable address for a pod (or a bunch of pods) - If we want to connect to our pod(s), we need to create a *service* - Once a service is created, CoreDNS will allow us to resolve it by name (i.e. after creating service `hello`, the name `hello` will resolve to something) - There are different types of services, detailed on the following slides: `ClusterIP`, `NodePort`, `LoadBalancer`, `ExternalName` .debug[[k8s/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlexpose.md)] --- ## Basic service types - `ClusterIP` (default type) - a virtual IP address is allocated for the service (in an internal, private range) - this IP address is reachable only from within the cluster (nodes and pods) - our code can connect to the service using the original port number - `NodePort` - a port is allocated for the service (by default, in the 30000-32768 range) - that port is made available *on all our nodes* and anybody can connect to it - our code must be changed to connect to that new port number These service types are always available. Under the hood: `kube-proxy` is using a userland proxy and a bunch of `iptables` rules. .debug[[k8s/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlexpose.md)] --- ## More service types - `LoadBalancer` - an external load balancer is allocated for the service - the load balancer is configured accordingly (e.g.: a `NodePort` service is created, and the load balancer sends traffic to that port) - available only when the underlying infrastructure provides some "load balancer as a service" (e.g. AWS, Azure, GCE, OpenStack...) - `ExternalName` - the DNS entry managed by CoreDNS will just be a `CNAME` to a provided record - no port, no IP address, no nothing else is allocated .debug[[k8s/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlexpose.md)] --- ## Running containers with open ports - Since `ping` doesn't have anything to connect to, we'll have to run something else - We could use the `nginx` official image, but ... ... we wouldn't be able to tell the backends from each other! - We are going to use `jpetazzo/httpenv`, a tiny HTTP server written in Go - `jpetazzo/httpenv` listens on port 8888 - It serves its environment variables in JSON format - The environment variables will include `HOSTNAME`, which will be the pod name (and therefore, will be different on each backend) .debug[[k8s/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlexpose.md)] --- ## Creating a deployment for our HTTP server - We *could* do `kubectl run httpenv --image=jpetazzo/httpenv` ... 
- But since `kubectl run` is being deprecated, let's see how to use `kubectl create` instead .exercise[ - In another window, watch the pods (to see when they are created): ```bash kubectl get pods -w ``` - Create a deployment for this very lightweight HTTP server: ```bash kubectl create deployment httpenv --image=jpetazzo/httpenv ``` - Scale it to 10 replicas: ```bash kubectl scale deployment httpenv --replicas=10 ``` ] .debug[[k8s/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlexpose.md)] --- ## Exposing our deployment - We'll create a default `ClusterIP` service .exercise[ - Expose the HTTP port of our server: ```bash kubectl expose deployment httpenv --port 8888 ``` - Look up which IP address was allocated: ```bash kubectl get service ``` ] .debug[[k8s/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlexpose.md)] --- ## Services are layer 4 constructs - You can assign IP addresses to services, but they are still *layer 4* (i.e. a service is not an IP address; it's an IP address + protocol + port) - This is caused by the current implementation of `kube-proxy` (it relies on mechanisms that don't support layer 3) - As a result: you *have to* indicate the port number for your service - Running services with arbitrary port (or port ranges) requires hacks (e.g. host networking mode) .debug[[k8s/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlexpose.md)] --- ## Testing our service - We will now send a few HTTP requests to our pods .exercise[ - Let's obtain the IP address that was allocated for our service, *programmatically:* ```bash IP=$(kubectl get svc httpenv -o go-template --template '{{ .spec.clusterIP }}') ``` - Send a few requests: ```bash curl http://$IP:8888/ ``` - Too much output? Filter it with `jq`: ```bash curl -s http://$IP:8888/ | jq .HOSTNAME ``` ] -- Try it a few times! Our requests are load balanced across multiple pods. .debug[[k8s/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlexpose.md)] --- class: extra-details ## If we don't need a load balancer - Sometimes, we want to access our scaled services directly: - if we want to save a tiny little bit of latency (typically less than 1ms) - if we need to connect over arbitrary ports (instead of a few fixed ones) - if we need to communicate over another protocol than UDP or TCP - if we want to decide how to balance the requests client-side - ... 
- In that case, we can use a "headless service" .debug[[k8s/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlexpose.md)] --- class: extra-details ## Headless services - A headless service is obtained by setting the `clusterIP` field to `None` (Either with `--cluster-ip=None`, or by providing a custom YAML) - As a result, the service doesn't have a virtual IP address - Since there is no virtual IP address, there is no load balancer either - CoreDNS will return the pods' IP addresses as multiple `A` records - This gives us an easy way to discover all the replicas for a deployment .debug[[k8s/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlexpose.md)] --- class: extra-details ## Services and endpoints - A service has a number of "endpoints" - Each endpoint is a host + port where the service is available - The endpoints are maintained and updated automatically by Kubernetes .exercise[ - Check the endpoints that Kubernetes has associated with our `httpenv` service: ```bash kubectl describe service httpenv ``` ] In the output, there will be a line starting with `Endpoints:`. That line will list a bunch of addresses in `host:port` format. .debug[[k8s/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlexpose.md)] --- class: extra-details ## Viewing endpoint details - When we have many endpoints, our display commands truncate the list ```bash kubectl get endpoints ``` - If we want to see the full list, we can use one of the following commands: ```bash kubectl describe endpoints httpenv kubectl get endpoints httpenv -o yaml ``` - These commands will show us a list of IP addresses - These IP addresses should match the addresses of the corresponding pods: ```bash kubectl get pods -l app=httpenv -o wide ``` .debug[[k8s/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlexpose.md)] --- class: extra-details ## `endpoints` not `endpoint` - `endpoints` is the only resource that cannot be singular ```bash $ kubectl get endpoint error: the server doesn't have a resource type "endpoint" ``` - This is because the type itself is plural (unlike every other resource) - There is no `endpoint` object: `type Endpoints struct` - The type doesn't represent a single endpoint, but a list of endpoints .debug[[k8s/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlexpose.md)] --- ## Exposing services to the outside world - The default type (ClusterIP) only works for internal traffic - If we want to accept external traffic, we can use one of these: - NodePort (expose a service on a TCP port between 30000-32768) - LoadBalancer (provision a cloud load balancer for our service) - ExternalIP (use one node's external IP address) - Ingress (a special mechanism for HTTP services) *We'll see NodePorts and Ingresses more in detail later.* .debug[[k8s/kubectlexpose.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kubectlexpose.md)] --- class: pic .interstitial[] --- name: toc-shipping-images-with-a-registry class: title Shipping images with a registry .nav[ [Previous section](#toc-exposing-containers) | [Back to table of contents](#toc-chapter-8) | [Next section](#toc-running-our-application-on-kubernetes) ] .debug[(automatically generated title slide)] --- # Shipping images with a registry - Initially, our app was running on a single node 
- We could *build* and *run* in the same place

- Therefore, we did not need to *ship* anything

- Now that we want to run on a cluster, things are different

- The easiest way to ship container images is to use a registry

.debug[[k8s/shippingimages.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/shippingimages.md)]

---

## How Docker registries work (a reminder)

- What happens when we execute `docker run alpine`?

- If the Engine needs to pull the `alpine` image, it expands it into `library/alpine`

- `library/alpine` is expanded into `index.docker.io/library/alpine`

- The Engine communicates with `index.docker.io` to retrieve `library/alpine:latest`

- To use something other than `index.docker.io`, we specify it in the image name

- Examples:

```bash
docker pull gcr.io/google-containers/alpine-with-bash:1.0
docker build -t registry.mycompany.io:5000/myimage:awesome .
docker push registry.mycompany.io:5000/myimage:awesome
```

.debug[[k8s/shippingimages.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/shippingimages.md)]

---

## Running DockerCoins on Kubernetes

- Create one deployment for each component (hasher, redis, rng, webui, worker)

- Expose deployments that need to accept connections (hasher, redis, rng, webui)

- For redis, we can use the official redis image

- For the 4 others, we need to build images and push them to some registry

.debug[[k8s/shippingimages.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/shippingimages.md)]

---

## Building and shipping images

- There are *many* options!

- Manually:

  - build locally (with `docker build` or otherwise)

  - push to the registry

- Automatically:

  - build and test locally

  - when ready, commit and push to a code repository

  - the code repository notifies an automated build system

  - that system gets the code, builds it, pushes the image to the registry

.debug[[k8s/shippingimages.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/shippingimages.md)]

---

## Which registry do we want to use?

- There are SaaS products like Docker Hub, Quay ...

- Each major cloud provider has an option as well

  (ACR on Azure, ECR on AWS, GCR on Google Cloud...)

- There are also commercial products to run our own registry

  (Docker EE, Quay...)

- And open source options, too!
- When picking a registry, pay attention to its build system (when it has one) .debug[[k8s/shippingimages.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/shippingimages.md)] --- ## Using images from the Docker Hub - For everyone's convenience, we took care of building DockerCoins images - We pushed these images to the DockerHub, under the [dockercoins](https://hub.docker.com/u/dockercoins) user - These images are *tagged* with a version number, `v0.1` - The full image names are therefore: - `dockercoins/hasher:v0.1` - `dockercoins/rng:v0.1` - `dockercoins/webui:v0.1` - `dockercoins/worker:v0.1` .debug[[k8s/buildshiprun-dockerhub.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/buildshiprun-dockerhub.md)] --- ## Setting `$REGISTRY` and `$TAG` - In the upcoming exercises and labs, we use a couple of environment variables: - `$REGISTRY` as a prefix to all image names - `$TAG` as the image version tag - For example, the worker image is `$REGISTRY/worker:$TAG` - If you copy-paste the commands in these exercises: **make sure that you set `$REGISTRY` and `$TAG` first!** - For example: ``` export REGISTRY=dockercoins TAG=v0.1 ``` (this will expand `$REGISTRY/worker:$TAG` to `dockercoins/worker:v0.1`) .debug[[k8s/buildshiprun-dockerhub.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/buildshiprun-dockerhub.md)] --- class: pic .interstitial[] --- name: toc-running-our-application-on-kubernetes class: title Running our application on Kubernetes .nav[ [Previous section](#toc-shipping-images-with-a-registry) | [Back to table of contents](#toc-chapter-8) | [Next section](#toc-scaling-our-demo-app) ] .debug[(automatically generated title slide)] --- # Running our application on Kubernetes - We can now deploy our code (as well as a redis instance) .exercise[ - Deploy `redis`: ```bash kubectl create deployment redis --image=redis ``` - Deploy everything else: ```bash set -u for SERVICE in hasher rng webui worker; do kubectl create deployment $SERVICE --image=$REGISTRY/$SERVICE:$TAG done ``` ] .debug[[k8s/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ourapponkube.md)] --- ## Is this working? - After waiting for the deployment to complete, let's look at the logs! (Hint: use `kubectl get deploy -w` to watch deployment events) .exercise[ - Look at some logs: ```bash kubectl logs deploy/rng kubectl logs deploy/worker ``` ] -- π€ `rng` is fine ... But not `worker`. -- π‘ Oh right! We forgot to `expose`. .debug[[k8s/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ourapponkube.md)] --- ## Connecting containers together - Three deployments need to be reachable by others: `hasher`, `redis`, `rng` - `worker` doesn't need to be exposed - `webui` will be dealt with later .exercise[ - Expose each deployment, specifying the right port: ```bash kubectl expose deployment redis --port 6379 kubectl expose deployment rng --port 80 kubectl expose deployment hasher --port 80 ``` ] .debug[[k8s/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ourapponkube.md)] --- ## Is this working yet? - The `worker` has an infinite loop, that retries 10 seconds after an error .exercise[ - Stream the worker's logs: ```bash kubectl logs deploy/worker --follow ``` (Give it about 10 seconds to recover) ] -- We should now see the `worker`, well, working happily. 
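If the worker is still complaining, here is a quick sanity check (just a sketch: the throwaway pod name `dnstest` and the `busybox:1.28` image are arbitrary choices, not part of the app):

```bash
# Confirm that the three services exist and have ClusterIP addresses
kubectl get services redis rng hasher

# Optionally, check that service names resolve from inside the cluster
# (one-off pod, removed automatically when the command exits)
kubectl run dnstest --image=busybox:1.28 --restart=Never --rm -it -- nslookup rng
```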
.debug[[k8s/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ourapponkube.md)] --- ## Exposing services for external access - Now we would like to access the Web UI - We will expose it with a `NodePort` (just like we did for the registry) .exercise[ - Create a `NodePort` service for the Web UI: ```bash kubectl expose deploy/webui --type=NodePort --port=80 ``` - Check the port that was allocated: ```bash kubectl get svc ``` ] .debug[[k8s/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ourapponkube.md)] --- ## Accessing the web UI - We can now connect to *any node*, on the allocated node port, to view the web UI .exercise[ - Open the web UI in your browser (http://node-ip-address:3xxxx/) ] -- Yes, this may take a little while to update. *(Narrator: it was DNS.)* -- *Alright, we're back to where we started, when we were running on a single node!* .debug[[k8s/ourapponkube.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ourapponkube.md)] --- class: pic .interstitial[] --- name: toc-scaling-our-demo-app class: title Scaling our demo app .nav[ [Previous section](#toc-running-our-application-on-kubernetes) | [Back to table of contents](#toc-chapter-8) | [Next section](#toc-namespaces) ] .debug[(automatically generated title slide)] --- # Scaling our demo app - Our ultimate goal is to get more DockerCoins (i.e. increase the number of loops per second shown on the web UI) - Let's look at the architecture again:  - The loop is done in the worker; perhaps we could try adding more workers? .debug[[k8s/scalingdockercoins.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/scalingdockercoins.md)] --- ## Adding another worker - All we have to do is scale the `worker` Deployment .exercise[ - Open two new terminals to check what's going on with pods and deployments: ```bash kubectl get pods -w kubectl get deployments -w ``` - Now, create more `worker` replicas: ```bash kubectl scale deployment worker --replicas=2 ``` ] After a few seconds, the graph in the web UI should show up. .debug[[k8s/scalingdockercoins.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/scalingdockercoins.md)] --- ## Adding more workers - If 2 workers give us 2x speed, what about 3 workers? .exercise[ - Scale the `worker` Deployment further: ```bash kubectl scale deployment worker --replicas=3 ``` ] The graph in the web UI should go up again. (This is looking great! We're gonna be RICH!) .debug[[k8s/scalingdockercoins.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/scalingdockercoins.md)] --- ## Adding even more workers - Let's see if 10 workers give us 10x speed! .exercise[ - Scale the `worker` Deployment to a bigger number: ```bash kubectl scale deployment worker --replicas=10 ``` ] -- The graph will peak at 10 hashes/second. (We can add as many workers as we want: we will never go past 10 hashes/second.) .debug[[k8s/scalingdockercoins.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/scalingdockercoins.md)] --- class: extra-details ## Didn't we briefly exceed 10 hashes/second? 
- It may *look like it*, because the web UI shows instant speed - The instant speed can briefly exceed 10 hashes/second - The average speed cannot - The instant speed can be biased because of how it's computed .debug[[k8s/scalingdockercoins.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/scalingdockercoins.md)] --- class: extra-details ## Why instant speed is misleading - The instant speed is computed client-side by the web UI - The web UI checks the hash counter once per second (and does a classic (h2-h1)/(t2-t1) speed computation) - The counter is updated once per second by the workers - These timings are not exact (e.g. the web UI check interval is client-side JavaScript) - Sometimes, between two web UI counter measurements, the workers are able to update the counter *twice* - During that cycle, the instant speed will appear to be much bigger (but it will be compensated by lower instant speed before and after) .debug[[k8s/scalingdockercoins.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/scalingdockercoins.md)] --- ## Why are we stuck at 10 hashes per second? - If this was high-quality, production code, we would have instrumentation (Datadog, Honeycomb, New Relic, statsd, Sumologic, ...) - It's not! - Perhaps we could benchmark our web services? (with tools like `ab`, or even simpler, `httping`) .debug[[k8s/scalingdockercoins.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/scalingdockercoins.md)] --- ## Benchmarking our web services - We want to check `hasher` and `rng` - We are going to use `httping` - It's just like `ping`, but using HTTP `GET` requests (it measures how long it takes to perform one `GET` request) - It's used like this: ``` httping [-c count] http://host:port/path ``` - Or even simpler: ``` httping ip.ad.dr.ess ``` - We will use `httping` on the ClusterIP addresses of our services .debug[[k8s/scalingdockercoins.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/scalingdockercoins.md)] --- ## Obtaining ClusterIP addresses - We can simply check the output of `kubectl get services` - Or do it programmatically, as in the example below .exercise[ - Retrieve the IP addresses: ```bash HASHER=$(kubectl get svc hasher -o go-template={{.spec.clusterIP}}) RNG=$(kubectl get svc rng -o go-template={{.spec.clusterIP}}) ``` ] Now we can access the IP addresses of our services through `$HASHER` and `$RNG`. .debug[[k8s/scalingdockercoins.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/scalingdockercoins.md)] --- ## Checking `hasher` and `rng` response times .exercise[ - Check the response times for both services: ```bash httping -c 3 $HASHER httping -c 3 $RNG ``` ] - `hasher` is fine (it should take a few milliseconds to reply) - `rng` is not (it should take about 700 milliseconds if there are 10 workers) - Something is wrong with `rng`, but ... what? .debug[[k8s/scalingdockercoins.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/scalingdockercoins.md)] --- ## Let's draw hasty conclusions - The bottleneck seems to be `rng` - *What if* we don't have enough entropy and can't generate enough random numbers? - We need to scale out the `rng` service on multiple machines! Note: this is a fiction! We have enough entropy. But we need a pretext to scale out. (In fact, the code of `rng` uses `/dev/urandom`, which never runs out of entropy... 
and is [just as good as `/dev/random`](http://www.slideshare.net/PacSecJP/filippo-plain-simple-reality-of-entropy).)

.debug[[shared/hastyconclusions.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/hastyconclusions.md)]

---

class: pic

.interstitial[]

---

name: toc-namespaces
class: title

Namespaces

.nav[ [Previous section](#toc-scaling-our-demo-app) | [Back to table of contents](#toc-chapter-9) | [Next section](#toc-daemon-sets) ]

.debug[(automatically generated title slide)]

---

# Namespaces

- We would like to deploy another copy of DockerCoins on our cluster

- We could rename all our deployments and services:

  hasher → hasher2, redis → redis2, rng → rng2, etc.

- That would require updating the code

- There has to be a better way!

--

- As hinted by the title of this section, we will use *namespaces*

.debug[[k8s/namespaces.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/namespaces.md)]

---

## Identifying a resource

- We cannot have two resources with the same name

  (or can we...?)

--

- We cannot have two resources *of the same kind* with the same name

  (but it's OK to have an `rng` service, an `rng` deployment, and an `rng` daemon set)

--

- We cannot have two resources of the same kind with the same name *in the same namespace*

  (but it's OK to have e.g. two `rng` services in different namespaces)

--

- Except for resources that exist at the *cluster scope*

  (these do not belong to a namespace)

.debug[[k8s/namespaces.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/namespaces.md)]

---

## Uniquely identifying a resource

- For *namespaced* resources: the tuple *(kind, name, namespace)* needs to be unique

- For resources at the *cluster scope*: the tuple *(kind, name)* needs to be unique

.exercise[

- List resource types again, and check the NAMESPACED column:

```bash
kubectl api-resources
```

]

.debug[[k8s/namespaces.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/namespaces.md)]

---

## Pre-existing namespaces

- If we deploy a cluster with `kubeadm`, we have three or four namespaces:

  - `default` (for our applications)

  - `kube-system` (for the control plane)

  - `kube-public` (contains one ConfigMap for cluster discovery)

  - `kube-node-lease` (in Kubernetes 1.14 and later; contains Lease objects)

- If we deploy differently, we may have different namespaces

.debug[[k8s/namespaces.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/namespaces.md)]

---

## Creating namespaces

- Let's see two identical methods to create a namespace

.exercise[

- We can use `kubectl create namespace`:

```bash
kubectl create namespace blue
```

- Or we can construct a very minimal YAML snippet:

```bash
kubectl apply -f- <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: blue
EOF
```

]

.debug[[k8s/namespaces.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/namespaces.md)]

---

class: pic

.interstitial[]

---

name: toc-daemon-sets
class: title

Daemon sets

.nav[ [Previous section](#toc-namespaces) | [Back to table of contents](#toc-chapter-9) | [Next section](#toc-labels-and-selectors) ]

.debug[(automatically generated title slide)]

---

# Daemon sets

- We want to scale `rng` in a different way: one `rng` pod *per node*

- The Kubernetes resource designed for that is the *daemon set*

---

## Creating a daemon set

- Unfortunately, as of Kubernetes 1.15, the CLI cannot create daemon sets

--

- More precisely: it doesn't have a subcommand to create a daemon set

--

- But any kind of resource can always be created by providing a YAML description:

```bash
kubectl apply -f foo.yaml
```

--

- How do we create the YAML file for our daemon set?
-- - option 1: [read the docs](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#create-a-daemonset) -- - option 2: `vi` our way out of it .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Creating the YAML file for our daemon set - Let's start with the YAML file for the current `rng` resource .exercise[ - Dump the `rng` resource in YAML: ```bash kubectl get deploy/rng -o yaml >rng.yml ``` - Edit `rng.yml` ] .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## "Casting" a resource to another - What if we just changed the `kind` field? (It can't be that easy, right?) .exercise[ - Change `kind: Deployment` to `kind: DaemonSet` - Save, quit - Try to create our new resource: ``` kubectl apply -f rng.yml ``` ] -- We all knew this couldn't be that easy, right! .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Understanding the problem - The core of the error is: ``` error validating data: [ValidationError(DaemonSet.spec): unknown field "replicas" in io.k8s.api.extensions.v1beta1.DaemonSetSpec, ... ``` -- - *Obviously,* it doesn't make sense to specify a number of replicas for a daemon set -- - Workaround: fix the YAML - remove the `replicas` field - remove the `strategy` field (which defines the rollout mechanism for a deployment) - remove the `progressDeadlineSeconds` field (also used by the rollout mechanism) - remove the `status: {}` line at the end -- - Or, we could also ... .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Use the `--force`, Luke - We could also tell Kubernetes to ignore these errors and try anyway - The `--force` flag's actual name is `--validate=false` .exercise[ - Try to load our YAML file and ignore errors: ```bash kubectl apply -f rng.yml --validate=false ``` ] -- π©β¨π -- Wait ... Now, can it be *that* easy? .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Checking what we've done - Did we transform our `deployment` into a `daemonset`? .exercise[ - Look at the resources that we have now: ```bash kubectl get all ``` ] -- We have two resources called `rng`: - the *deployment* that was existing before - the *daemon set* that we just created We also have one too many pods. (The pod corresponding to the *deployment* still exists.) .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## `deploy/rng` and `ds/rng` - You can have different resource types with the same name (i.e. 
a *deployment* and a *daemon set* both named `rng`) - We still have the old `rng` *deployment* ``` NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deployment.apps/rng 1 1 1 1 18m ``` - But now we have the new `rng` *daemon set* as well ``` NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/rng 2 2 2 2 2 9s ``` .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Too many pods - If we check with `kubectl get pods`, we see: - *one pod* for the deployment (named `rng-xxxxxxxxxx-yyyyy`) - *one pod per node* for the daemon set (named `rng-zzzzz`) ``` NAME READY STATUS RESTARTS AGE rng-54f57d4d49-7pt82 1/1 Running 0 11m rng-b85tm 1/1 Running 0 25s rng-hfbrr 1/1 Running 0 25s [...] ``` -- The daemon set created one pod per node, except on the master node. The master node has [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) preventing pods from running there. (To schedule a pod on this node anyway, the pod will require appropriate [tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/).) .footnote[(Off by one? We don't run these pods on the node hosting the control plane.)] .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Is this working? - Look at the web UI -- - The graph should now go above 10 hashes per second! -- - It looks like the newly created pods are serving traffic correctly - How and why did this happen? (We didn't do anything special to add them to the `rng` service load balancer!) .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- class: pic .interstitial[] --- name: toc-labels-and-selectors class: title Labels and selectors .nav[ [Previous section](#toc-daemon-sets) | [Back to table of contents](#toc-chapter-9) | [Next section](#toc-exercise--from-compose-to-kubernetes) ] .debug[(automatically generated title slide)] --- # Labels and selectors - The `rng` *service* is load balancing requests to a set of pods - That set of pods is defined by the *selector* of the `rng` service .exercise[ - Check the *selector* in the `rng` service definition: ```bash kubectl describe service rng ``` ] - The selector is `app=rng` - It means "all the pods having the label `app=rng`" (They can have additional labels as well, that's OK!) .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Selector evaluation - We can use selectors with many `kubectl` commands - For instance, with `kubectl get`, `kubectl logs`, `kubectl delete` ... and more .exercise[ - Get the list of pods matching selector `app=rng`: ```bash kubectl get pods -l app=rng kubectl get pods --selector app=rng ``` ] But ... why do these pods (in particular, the *new* ones) have this `app=rng` label? .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Where do labels come from? 
- When we create a deployment with `kubectl create deployment rng`, this deployment gets the label `app=rng` - The replica sets created by this deployment also get the label `app=rng` - The pods created by these replica sets also get the label `app=rng` - When we created the daemon set from the deployment, we re-used the same spec - Therefore, the pods created by the daemon set get the same labels .footnote[Note: when we use `kubectl run stuff`, the label is `run=stuff` instead.] .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Updating load balancer configuration - We would like to remove a pod from the load balancer - What would happen if we removed that pod, with `kubectl delete pod ...`? -- It would be re-created immediately (by the replica set or the daemon set) -- - What would happen if we removed the `app=rng` label from that pod? -- It would *also* be re-created immediately -- Why?!? .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Selectors for replica sets and daemon sets - The "mission" of a replica set is: "Make sure that there is the right number of pods matching this spec!" - The "mission" of a daemon set is: "Make sure that there is a pod matching this spec on each node!" -- - *In fact,* replica sets and daemon sets do not check pod specifications - They merely have a *selector*, and they look for pods matching that selector - Yes, we can fool them by manually creating pods with the "right" labels - Bottom line: if we remove our `app=rng` label ... ... The pod "disappears" for its parent, which re-creates another pod to replace it .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- class: extra-details ## Isolation of replica sets and daemon sets - Since both the `rng` daemon set and the `rng` replica set use `app=rng` ... ... Why don't they "find" each other's pods? -- - *Replica sets* have a more specific selector, visible with `kubectl describe` (It looks like `app=rng,pod-template-hash=abcd1234`) - *Daemon sets* also have a more specific selector, but it's invisible (It looks like `app=rng,controller-revision-hash=abcd1234`) - As a result, each controller only "sees" the pods it manages .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Removing a pod from the load balancer - Currently, the `rng` service is defined by the `app=rng` selector - The only way to remove a pod is to remove or change the `app` label - ... But that will cause another pod to be created instead! - What's the solution? -- - We need to change the selector of the `rng` service! - Let's add another label to that selector (e.g. `enabled=yes`) .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Complex selectors - If a selector specifies multiple labels, they are understood as a logical *AND* (In other words: the pods must match all the labels) - Kubernetes has support for advanced, set-based selectors (But these cannot be used with services, at least not yet!) .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## The plan 1. Add the label `enabled=yes` to all our `rng` pods 2. Update the selector for the `rng` service to also include `enabled=yes` 3. 
Toggle traffic to a pod by manually adding/removing the `enabled` label 4. Profit! *Note: if we swap steps 1 and 2, it will cause a short service disruption, because there will be a period of time during which the service selector won't match any pod. During that time, requests to the service will time out. By doing things in the order above, we guarantee that there won't be any interruption.* .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Adding labels to pods - We want to add the label `enabled=yes` to all pods that have `app=rng` - We could edit each pod one by one with `kubectl edit` ... - ... Or we could use `kubectl label` to label them all - `kubectl label` can use selectors itself .exercise[ - Add `enabled=yes` to all pods that have `app=rng`: ```bash kubectl label pods -l app=rng enabled=yes ``` ] .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Updating the service selector - We need to edit the service specification - Reminder: in the service definition, we will see `app: rng` in two places - the label of the service itself (we don't need to touch that one) - the selector of the service (that's the one we want to change) .exercise[ - Update the service to add `enabled: yes` to its selector: ```bash kubectl edit service rng ``` ] -- ... And then we get *the weirdest error ever.* Why? .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## When the YAML parser is being too smart - YAML parsers try to help us: - `xyz` is the string `"xyz"` - `42` is the integer `42` - `yes` is the boolean value `true` - If we want the string `"42"` or the string `"yes"`, we have to quote them - So we have to use `enabled: "yes"` .footnote[For a good laugh: if we had used "ja", "oui", "si" ... as the value, it would have worked!] .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Updating the service selector, take 2 .exercise[ - Update the service to add `enabled: "yes"` to its selector: ```bash kubectl edit service rng ``` ] This time it should work! If we did everything correctly, the web UI shouldn't show any change. .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Updating labels - We want to disable the pod that was created by the deployment - All we have to do, is remove the `enabled` label from that pod - To identify that pod, we can use its name - ... Or rely on the fact that it's the only one with a `pod-template-hash` label - Good to know: - `kubectl label ... foo=` doesn't remove a label (it sets it to an empty string) - to remove label `foo`, use `kubectl label ... 
foo-` - to change an existing label, we would need to add `--overwrite` .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Removing a pod from the load balancer .exercise[ - In one window, check the logs of that pod: ```bash POD=$(kubectl get pod -l app=rng,pod-template-hash -o name) kubectl logs --tail 1 --follow $POD ``` (We should see a steady stream of HTTP logs) - In another window, remove the label from the pod: ```bash kubectl label pod -l app=rng,pod-template-hash enabled- ``` (The stream of HTTP logs should stop immediately) ] There might be a slight change in the web UI (since we removed a bit of capacity from the `rng` service). If we remove more pods, the effect should be more visible. .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- class: extra-details ## Updating the daemon set - If we scale up our cluster by adding new nodes, the daemon set will create more pods - These pods won't have the `enabled=yes` label - If we want these pods to have that label, we need to edit the daemon set spec - We can do that with e.g. `kubectl edit daemonset rng` .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- class: extra-details ## We've put resources in your resources - Reminder: a daemon set is a resource that creates more resources! - There is a difference between: - the label(s) of a resource (in the `metadata` block in the beginning) - the selector of a resource (in the `spec` block) - the label(s) of the resource(s) created by the first resource (in the `template` block) - We would need to update the selector and the template (metadata labels are not mandatory) - The template must match the selector (i.e. the resource will refuse to create resources that it will not select) .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Labels and debugging - When a pod is misbehaving, we can delete it: another one will be recreated - But we can also change its labels - It will be removed from the load balancer (it won't receive traffic anymore) - Another pod will be recreated immediately - But the problematic pod is still here, and we can inspect and debug it - We can even re-add it to the rotation if necessary (Very useful to troubleshoot intermittent and elusive bugs) .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- ## Labels and advanced rollout control - Conversely, we can add pods matching a service's selector - These pods will then receive requests and serve traffic - Examples: - one-shot pod with all debug flags enabled, to collect logs - pods created automatically, but added to rotation in a second step (by setting their label accordingly) - This gives us building blocks for canary and blue/green deployments .debug[[k8s/daemonset.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/daemonset.md)] --- class: pic .interstitial[] --- name: toc-exercise--from-compose-to-kubernetes class: title Exercise β from Compose to Kubernetes .nav[ [Previous section](#toc-labels-and-selectors) | [Back to table of contents](#toc-chapter-9) | [Next section](#toc-rolling-updates) ] .debug[(automatically generated title slide)] --- # Exercise β from Compose to Kubernetes Let's run the wordsmith app on Kubernetes! 
The code is at: https://github.com/jpetazzo/wordsmith .debug[[three.yml](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/three.yml)] --- class: pic .interstitial[] --- name: toc-rolling-updates class: title Rolling updates .nav[ [Previous section](#toc-exercise--from-compose-to-kubernetes) | [Back to table of contents](#toc-chapter-10) | [Next section](#toc-healthchecks) ] .debug[(automatically generated title slide)] --- # Rolling updates - By default (without rolling updates), when a scaled resource is updated: - new pods are created - old pods are terminated - ... all at the same time - if something goes wrong, Β―\\\_(γ)\_/Β― .debug[[k8s/rollout.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/rollout.md)] --- ## Rolling updates - With rolling updates, when a resource is updated, it happens progressively - Two parameters determine the pace of the rollout: `maxUnavailable` and `maxSurge` - They can be specified in absolute number of pods, or percentage of the `replicas` count - At any given time ... - there will always be at least `replicas`-`maxUnavailable` pods available - there will never be more than `replicas`+`maxSurge` pods in total - there will therefore be up to `maxUnavailable`+`maxSurge` pods being updated - We have the possibility of rolling back to the previous version (if the update fails or is unsatisfactory in any way) .debug[[k8s/rollout.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/rollout.md)] --- ## Checking current rollout parameters - Recall how we build custom reports with `kubectl` and `jq`: .exercise[ - Show the rollout plan for our deployments: ```bash kubectl get deploy -o json | jq ".items[] | {name:.metadata.name} + .spec.strategy.rollingUpdate" ``` ] .debug[[k8s/rollout.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/rollout.md)] --- ## Rolling updates in practice - As of Kubernetes 1.8, we can do rolling updates with: `deployments`, `daemonsets`, `statefulsets` - Editing one of these resources will automatically result in a rolling update - Rolling updates can be monitored with the `kubectl rollout` subcommand .debug[[k8s/rollout.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/rollout.md)] --- ## Building a new version of the `worker` service .warning[ Only run these commands if you have built and pushed DockerCoins to a local registry. If you are using images from the Docker Hub (`dockercoins/worker:v0.1`), skip this. ] .exercise[ - Go to the `stacks` directory (`~/container.training/stacks`) - Edit `dockercoins/worker/worker.py`; update the first `sleep` line to sleep 1 second - Build a new tag and push it to the registry: ```bash #export REGISTRY=localhost:3xxxx export TAG=v0.2 docker-compose -f dockercoins.yml build docker-compose -f dockercoins.yml push ``` ] .debug[[k8s/rollout.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/rollout.md)] --- ## Rolling out the new `worker` service .exercise[ - Let's monitor what's going on by opening a few terminals, and run: ```bash kubectl get pods -w kubectl get replicasets -w kubectl get deployments -w ``` - Update `worker` either with `kubectl edit`, or by running: ```bash kubectl set image deploy worker worker=$REGISTRY/worker:$TAG ``` ] -- That rollout should be pretty quick. What shows in the web UI? 
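While waiting for the web UI to catch up, the rollout can also be followed from the CLI. A small sketch (assuming the `worker` deployment we just updated):

```bash
# Block until the rollout completes (or fails)
kubectl rollout status deployment worker

# Show the revision history of the deployment
kubectl rollout history deployment worker
```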
.debug[[k8s/rollout.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/rollout.md)] --- ## Give it some time - At first, it looks like nothing is happening (the graph remains at the same level) - According to `kubectl get deploy -w`, the `deployment` was updated really quickly - But `kubectl get pods -w` tells a different story - The old `pods` are still here, and they stay in `Terminating` state for a while - Eventually, they are terminated; and then the graph decreases significantly - This delay is due to the fact that our worker doesn't handle signals - Kubernetes sends a "polite" shutdown request to the worker, which ignores it - After a grace period, Kubernetes gets impatient and kills the container (The grace period is 30 seconds, but [can be changed](https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods) if needed) .debug[[k8s/rollout.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/rollout.md)] --- ## Rolling out something invalid - What happens if we make a mistake? .exercise[ - Update `worker` by specifying a non-existent image: ```bash export TAG=v0.3 kubectl set image deploy worker worker=$REGISTRY/worker:$TAG ``` - Check what's going on: ```bash kubectl rollout status deploy worker ``` ] -- Our rollout is stuck. However, the app is not dead. (After a minute, it will stabilize to be 20-25% slower.) .debug[[k8s/rollout.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/rollout.md)] --- ## What's going on with our rollout? - Why is our app a bit slower? - Because `MaxUnavailable=25%` ... So the rollout terminated 2 replicas out of 10 available - Okay, but why do we see 5 new replicas being rolled out? - Because `MaxSurge=25%` ... So in addition to replacing 2 replicas, the rollout is also starting 3 more - It rounded down the number of MaxUnavailable pods conservatively, but the total number of pods being rolled out is allowed to be 25+25=50% .debug[[k8s/rollout.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/rollout.md)] --- class: extra-details ## The nitty-gritty details - We start with 10 pods running for the `worker` deployment - Current settings: MaxUnavailable=25% and MaxSurge=25% - When we start the rollout: - two replicas are taken down (as per MaxUnavailable=25%) - two others are created (with the new version) to replace them - three others are created (with the new version) per MaxSurge=25%) - Now we have 8 replicas up and running, and 5 being deployed - Our rollout is stuck at this point! .debug[[k8s/rollout.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/rollout.md)] --- ## Checking the dashboard during the bad rollout If you didn't deploy the Kubernetes dashboard earlier, just skip this slide. .exercise[ - Check which port the dashboard is on: ```bash kubectl -n kube-system get svc socat ``` ] Note the `3xxxx` port. 
.exercise[ - Connect to http://oneofournodes:3xxxx/ ] -- - We have failures in Deployments, Pods, and Replica Sets .debug[[k8s/rollout.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/rollout.md)] --- ## Recovering from a bad rollout - We could push some `v0.3` image (the pod retry logic will eventually catch it and the rollout will proceed) - Or we could invoke a manual rollback .exercise[ - Cancel the deployment and wait for the dust to settle: ```bash kubectl rollout undo deploy worker kubectl rollout status deploy worker ``` ] .debug[[k8s/rollout.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/rollout.md)] --- class: extra-details ## Changing rollout parameters - We want to: - revert to `v0.1` - be conservative on availability (always have desired number of available workers) - go slow on rollout speed (update only one pod at a time) - give some time to our workers to "warm up" before starting more The corresponding changes can be expressed in the following YAML snippet: .small[ ```yaml spec: template: spec: containers: - name: worker image: $REGISTRY/worker:v0.1 strategy: rollingUpdate: maxUnavailable: 0 maxSurge: 1 minReadySeconds: 10 ``` ] .debug[[k8s/rollout.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/rollout.md)] --- class: extra-details ## Applying changes through a YAML patch - We could use `kubectl edit deployment worker` - But we could also use `kubectl patch` with the exact YAML shown before .exercise[ .small[ - Apply all our changes and wait for them to take effect: ```bash kubectl patch deployment worker -p " spec: template: spec: containers: - name: worker image: $REGISTRY/worker:v0.1 strategy: rollingUpdate: maxUnavailable: 0 maxSurge: 1 minReadySeconds: 10 " kubectl rollout status deployment worker kubectl get deploy -o json worker | jq "{name:.metadata.name} + .spec.strategy.rollingUpdate" ``` ] ] .debug[[k8s/rollout.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/rollout.md)] --- class: pic .interstitial[] --- name: toc-healthchecks class: title Healthchecks .nav[ [Previous section](#toc-rolling-updates) | [Back to table of contents](#toc-chapter-10) | [Next section](#toc-exposing-http-services-with-ingress-resources) ] .debug[(automatically generated title slide)] --- # Healthchecks - Kubernetes provides two kinds of healthchecks: liveness and readiness - Healthchecks are *probes* that apply to *containers* (not to pods) - Each container can have two (optional) probes: - liveness = is this container dead or alive? - readiness = is this container ready to serve traffic? - Different probes are available (HTTP, TCP, program execution) - Let's see the difference and how to use them! .debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks.md)] --- ## Liveness probe - Indicates if the container is dead or alive - A dead container cannot come back to life - If the liveness probe fails, the container is killed (to make really sure that it's really dead; no zombies or undeads!) 
- What happens next depends on the pod's `restartPolicy`: - `Never`: the container is not restarted - `OnFailure` or `Always`: the container is restarted .debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks.md)] --- ## When to use a liveness probe - To indicate failures that can't be recovered - deadlocks (causing all requests to time out) - internal corruption (causing all requests to error) - If the liveness probe fails *N* consecutive times, the container is killed - *N* is the `failureThreshold` (3 by default) .debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks.md)] --- ## Readiness probe - Indicates if the container is ready to serve traffic - If a container becomes "unready" (let's say busy!) it might be ready again soon - If the readiness probe fails: - the container is *not* killed - if the pod is a member of a service, it is temporarily removed - it is re-added as soon as the readiness probe passes again .debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks.md)] --- ## When to use a readiness probe - To indicate temporary failures - the application can only service *N* parallel connections - the runtime is busy doing garbage collection or initial data load - The container is marked as "not ready" after `failureThreshold` failed attempts (3 by default) - It is marked again as "ready" after `successThreshold` successful attempts (1 by default) .debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks.md)] --- ## Different types of probes - HTTP request - specify URL of the request (and optional headers) - any status code between 200 and 399 indicates success - TCP connection - the probe succeeds if the TCP port is open - arbitrary exec - a command is executed in the container - exit status of zero indicates success .debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks.md)] --- ## Benefits of using probes - Rolling updates proceed when containers are *actually ready* (as opposed to merely started) - Containers in a broken state get killed and restarted (instead of serving errors or timeouts) - Overloaded backends get removed from load balancer rotation (thus improving response times across the board) .debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks.md)] --- ## Example: HTTP probe Here is a pod template for the `rng` web service of the DockerCoins app: ```yaml apiVersion: v1 kind: Pod metadata: name: rng-with-liveness spec: containers: - name: rng image: dockercoins/rng:v0.1 livenessProbe: httpGet: path: / port: 80 initialDelaySeconds: 10 periodSeconds: 1 ``` If the backend serves an error, or takes longer than 1s, 3 times in a row, it gets killed. .debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks.md)] --- ## Example: exec probe Here is a pod template for a Redis server: ```yaml apiVersion: v1 kind: Pod metadata: name: redis-with-liveness spec: containers: - name: redis image: redis livenessProbe: exec: command: ["redis-cli", "ping"] ``` If the Redis process becomes unresponsive, it will be killed. 
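For completeness, TCP probes follow the same pattern. Here is a sketch (the pod name `redis-with-tcp-liveness` is made up for this example) that merely checks whether the Redis port accepts connections:

```bash
# A liveness probe that only checks that the TCP port is open
kubectl apply -f- <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: redis-with-tcp-liveness
spec:
  containers:
  - name: redis
    image: redis
    livenessProbe:
      tcpSocket:
        port: 6379
      initialDelaySeconds: 10
      periodSeconds: 5
EOF
```

Note that a TCP probe only tells us that the port is open; the exec probe above gives a stronger signal.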
.debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks.md)] --- ## Details about liveness and readiness probes - Probes are executed at intervals of `periodSeconds` (default: 10) - The timeout for a probe is set with `timeoutSeconds` (default: 1) - A probe is considered successful after `successThreshold` successes (default: 1) - A probe is considered failing after `failureThreshold` failures (default: 3) - If a probe is not defined, it's as if there was an "always successful" probe .debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks.md)] --- ## Questions to ask before adding healthchecks - Do we want liveness, readiness, both? (sometimes, we can use the same check, but with different failure thresholds) - Do we have existing HTTP endpoints that we can use? - Do we need to add new endpoints, or perhaps use something else? - Are our healthchecks likely to use resources and/or slow down the app? - Do they depend on additional services? (this can be particularly tricky, see next slide) .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## Healthchecks and dependencies - A good healthcheck should always indicate the health of the service itself - It should not be affected by the state of the service's dependencies - Example: a web server requiring a database connection to operate (make sure that the healthcheck can report "OK" even if the database is down; because it won't help us to restart the web server if the issue is with the DB!) - Example: a microservice calling other microservices - Example: a worker process (these will generally require minor code changes to report health) .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## Adding healthchecks to an app - Let's add healthchecks to DockerCoins! - We will examine the questions of the previous slide - Then we will review each component individually to add healthchecks .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## Liveness, readiness, or both? - To answer that question, we need to see the app run for a while - Do we get temporary, recoverable glitches? → then use readiness - Or do we get hard lock-ups requiring a restart? → then use liveness - In the case of DockerCoins, we don't know yet! - Let's pick liveness .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## Do we have HTTP endpoints that we can use? - Each of the 3 web services (hasher, rng, webui) has a trivial route on `/` - These routes: - don't seem to perform anything complex or expensive - don't seem to call other services - Perfect!
(See next slides for individual details) .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- - [hasher.rb](https://github.com/jpetazzo/container.training/blob/master/dockercoins/hasher/hasher.rb) ```ruby get '/' do "HASHER running on #{Socket.gethostname}\n" end ``` - [rng.py](https://github.com/jpetazzo/container.training/blob/master/dockercoins/rng/rng.py) ```python @app.route("/") def index(): return "RNG running on {}\n".format(hostname) ``` - [webui.js](https://github.com/jpetazzo/container.training/blob/master/dockercoins/webui/webui.js) ```javascript app.get('/', function (req, res) { res.redirect('/index.html'); }); ``` .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## Running DockerCoins - We will run DockerCoins in a new, separate namespace - We will use a set of YAML manifests and pre-built images - We will add our new liveness probe to the YAML of the `rng` DaemonSet - Then, we will deploy the application .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## Creating a new namespace - This will make sure that we don't collide / conflict with previous exercises .exercise[ - Create the yellow namespace: ```bash kubectl create namespace probes ``` - Switch to that namespace: ```bash kns probes ``` ] .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## Retrieving DockerCoins manifests - All the manifests that we need are on a convenient repository: https://github.com/jpetazzo/kubercoins .exercise[ - Clone that repository, if we haven't done it yet: ```bash cd ~ git clone https://github.com/jpetazzo/kubercoins ``` - Change directory to the repository: ```bash cd kubercoins ``` ] .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## A simple HTTP liveness probe This is what our liveness probe should look like: ```yaml containers: - name: ... image: ... livenessProbe: httpGet: path: / port: 80 initialDelaySeconds: 30 periodSeconds: 5 ``` This will give 30 seconds to the service to start. (Way more than necessary!) It will run the probe every 5 seconds. It will use the default timeout (1 second). It will use the default failure threshold (3 failed attempts = dead). It will use the default success threshold (1 successful attempt = alive). .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## Adding the liveness probe - Let's add the liveness probe, then deploy DockerCoins .exercise[ - Edit `rng-deployment.yaml` and add the liveness probe ```bash vim rng-deployment.yaml ``` - Load the YAML for all the resources of DockerCoins: ```bash kubectl apply -f . ``` ] .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## Testing the liveness probe - The rng service needs 100ms to process a request (because it is single-threaded and sleeps 0.1s in each request) - The probe timeout is set to 1 second - If we send more than 10 requests per second per backend, it will break - Let's generate traffic and see what happens! 
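Before we do, here is roughly what the edited `rng-deployment.yaml` should contain, as a sanity check (a sketch showing only the relevant fields; the image tag is assumed to match the earlier example and may differ in the actual manifest):

```yaml
spec:
  template:
    spec:
      containers:
      - name: rng
        image: dockercoins/rng:v0.1
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          periodSeconds: 5
```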
.exercise[ - Get the ClusterIP address of the rng service: ```bash kubectl get svc rng ``` ] .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## Monitoring the rng service - Each command below will show us what's happening on a different level .exercise[ - In one window, monitor cluster events: ```bash kubectl get events -w ``` - In another window, monitor the response time of rng: ```bash httping `` ``` - In another window, monitor pods status: ```bash kubectl get pods -w ``` ] .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## Generating traffic - Let's use `ab` to send concurrent requests to rng .exercise[ - In yet another window, generate traffic: ```bash ab -c 10 -n 1000 http://``/1 ``` - Experiment with higher values of `-c` and see what happens ] - The `-c` parameter indicates the number of concurrent requests - The final `/1` is important to generate actual traffic (otherwise we would use the ping endpoint, which doesn't sleep 0.1s per request) .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## Discussion - Above a given threshold, the liveness probe starts failing (about 10 concurrent requests per backend should be plenty enough) - When the liveness probe fails 3 times in a row, the container is restarted - During the restart, there is *less* capacity available - ... Meaning that the other backends are likely to timeout as well - ... Eventually causing all backends to be restarted - ... And each fresh backend gets restarted, too - This goes on until the load goes down, or we add capacity *This wouldn't be a good healthcheck in a real application!* .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## Better healthchecks - We need to make sure that the healthcheck doesn't trip when performance degrades due to external pressure - Using a readiness check would have fewer effects (but it would still be an imperfect solution) - A possible combination: - readiness check with a short timeout / low failure threshold - liveness check with a longer timeout / higher failure threshold .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## Healthchecks for redis - A liveness probe is enough (it's not useful to remove a backend from rotation when it's the only one) - We could use an exec probe running `redis-cli ping` .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- class: extra-details ## Exec probes and zombies - When using exec probes, we should make sure that we have a *zombie reaper* π€π§π§ Wait, what? - When a process terminates, its parent must call `wait()`/`waitpid()` (this is how the parent process retrieves the child's exit status) - In the meantime, the process is in *zombie* state (the process state will show as `Z` in `ps`, `top` ...) - When a process is killed, its children are *orphaned* and attached to PID 1 - PID 1 has the responsibility of *reaping* these processes when they terminate - OK, but how does that affect us? 
.debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- class: extra-details ## PID 1 in containers - On ordinary systems, PID 1 (`/sbin/init`) has logic to reap processes - In containers, PID 1 is typically our application process (e.g. Apache, the JVM, NGINX, Redis ...) - These *do not* take care of reaping orphans - If we use exec probes, we need to add a process reaper - We can add [tini](https://github.com/krallin/tini) to our images - Or [share the PID namespace between containers of a pod](https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/) (and have gcr.io/pause take care of the reaping) .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- ## Healthchecks for worker - Readiness isn't useful (because worker isn't a backend for a service) - Liveness may help us restart a broken worker, but how can we check it? - Embedding an HTTP server is an option (but it has a high potential for unwanted side effects and false positives) - Using a "lease" file can be relatively easy: - touch a file during each iteration of the main loop - check the timestamp of that file from an exec probe - Writing logs (and checking them from the probe) also works .debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/healthchecks-more.md)] --- class: pic .interstitial[] --- name: toc-exposing-http-services-with-ingress-resources class: title Exposing HTTP services with Ingress resources .nav[ [Previous section](#toc-healthchecks) | [Back to table of contents](#toc-chapter-10) | [Next section](#toc-exercise--creating-an-ingress) ] .debug[(automatically generated title slide)] --- # Exposing HTTP services with Ingress resources - *Services* give us a way to access a pod or a set of pods - Services can be exposed to the outside world: - with type `NodePort` (on a port >30000) - with type `LoadBalancer` (allocating an external load balancer) - What about HTTP services? - how can we expose `webui`, `rng`, `hasher`? - the Kubernetes dashboard? - a new version of `webui`? .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Exposing HTTP services - If we use `NodePort` services, clients have to specify port numbers (i.e. http://xxxxx:31234 instead of just http://xxxxx) - `LoadBalancer` services are nice, but: - they are not available in all environments - they often carry an additional cost (e.g. they provision an ELB) - they require one extra step for DNS integration (waiting for the `LoadBalancer` to be provisioned; then adding it to DNS) - We could build our own reverse proxy .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Building a custom reverse proxy - There are many options available: Apache, HAProxy, Hipache, NGINX, Traefik, ... (look at [jpetazzo/aiguillage](https://github.com/jpetazzo/aiguillage) for a minimal reverse proxy configuration using NGINX) - Most of these options require us to update/edit configuration files after each change - Some of them can pick up virtual hosts and backends from a configuration store - Wouldn't it be nice if this configuration could be managed with the Kubernetes API? -- - Enter.red[ΒΉ] *Ingress* resources! .footnote[.red[ΒΉ] Pun maybe intended.] 
.debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Ingress resources - Kubernetes API resource (`kubectl get ingress`/`ingresses`/`ing`) - Designed to expose HTTP services - Basic features: - load balancing - SSL termination - name-based virtual hosting - Can also route to different services depending on: - URI path (e.g. `/api`β`api-service`, `/static`β`assets-service`) - Client headers, including cookies (for A/B testing, canary deployment...) - and more! .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Principle of operation - Step 1: deploy an *ingress controller* - ingress controller = load balancer + control loop - the control loop watches over ingress resources, and configures the LB accordingly - Step 2: set up DNS - associate DNS entries with the load balancer address - Step 3: create *ingress resources* - the ingress controller picks up these resources and configures the LB - Step 4: profit! .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Ingress in action - We will deploy the Traefik ingress controller - this is an arbitrary choice - maybe motivated by the fact that Traefik releases are named after cheeses - For DNS, we will use [nip.io](http://nip.io/) - `*.1.2.3.4.nip.io` resolves to `1.2.3.4` - We will create ingress resources for various HTTP services .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Deploying pods listening on port 80 - We want our ingress load balancer to be available on port 80 - We could do that with a `LoadBalancer` service ... but it requires support from the underlying infrastructure - We could use pods specifying `hostPort: 80` ... but with most CNI plugins, this [doesn't work or requires additional setup](https://github.com/kubernetes/kubernetes/issues/23920) - We could use a `NodePort` service ... 
but that requires [changing the `--service-node-port-range` flag in the API server](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/) - Last resort: the `hostNetwork` mode .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Without `hostNetwork` - Normally, each pod gets its own *network namespace* (sometimes called sandbox or network sandbox) - An IP address is assigned to the pod - This IP address is routed/connected to the cluster network - All containers of that pod are sharing that network namespace (and therefore using the same IP address) .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## With `hostNetwork: true` - No network namespace gets created - The pod is using the network namespace of the host - It "sees" (and can use) the interfaces (and IP addresses) of the host - The pod can receive outside traffic directly, on any port - Downside: with most network plugins, network policies won't work for that pod - most network policies work at the IP address level - filtering that pod = filtering traffic from the node .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Running Traefik - The [Traefik documentation](https://docs.traefik.io/user-guide/kubernetes/#deploy-trfik-using-a-deployment-or-daemonset) tells us to pick between Deployment and Daemon Set - We are going to use a Daemon Set so that each node can accept connections - We will do two minor changes to the [YAML provided by Traefik](https://github.com/containous/traefik/blob/v1.7/examples/k8s/traefik-ds.yaml): - enable `hostNetwork` - add a *toleration* so that Traefik also runs on `node1` .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Taints and tolerations - A *taint* is an attribute added to a node - It prevents pods from running on the node - ... Unless they have a matching *toleration* - When deploying with `kubeadm`: - a taint is placed on the node dedicated to the control plane - the pods running the control plane have a matching toleration .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- class: extra-details ## Checking taints on our nodes .exercise[ - Check our nodes specs: ```bash kubectl get node node1 -o json | jq .spec kubectl get node node2 -o json | jq .spec ``` ] We should see a result only for `node1` (the one with the control plane): ```json "taints": [ { "effect": "NoSchedule", "key": "node-role.kubernetes.io/master" } ] ``` .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- class: extra-details ## Understanding a taint - The `key` can be interpreted as: - a reservation for a special set of pods (here, this means "this node is reserved for the control plane") - an error condition on the node (for instance: "disk full," do not start new pods here!) 
- The `effect` can be: - `NoSchedule` (don't run new pods here) - `PreferNoSchedule` (try not to run new pods here) - `NoExecute` (don't run new pods and evict running pods) .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- class: extra-details ## Checking tolerations on the control plane .exercise[ - Check tolerations for CoreDNS: ```bash kubectl -n kube-system get deployments coredns -o json | jq .spec.template.spec.tolerations ``` ] The result should include: ```json { "effect": "NoSchedule", "key": "node-role.kubernetes.io/master" } ``` It means: "bypass the exact taint that we saw earlier on `node1`." .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- class: extra-details ## Special tolerations .exercise[ - Check tolerations on `kube-proxy`: ```bash kubectl -n kube-system get ds kube-proxy -o json | jq .spec.template.spec.tolerations ``` ] The result should include: ```json { "operator": "Exists" } ``` This one is a special case that means "ignore all taints and run anyway." .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Running Traefik on our cluster - We provide a YAML file (`k8s/traefik.yaml`) which is essentially the sum of: - [Traefik's Daemon Set resources](https://github.com/containous/traefik/blob/v1.7/examples/k8s/traefik-ds.yaml) (patched with `hostNetwork` and tolerations) - [Traefik's RBAC rules](https://github.com/containous/traefik/blob/v1.7/examples/k8s/traefik-rbac.yaml) allowing it to watch necessary API objects .exercise[ - Apply the YAML: ```bash kubectl apply -f ~/container.training/k8s/traefik.yaml ``` ] .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Checking that Traefik runs correctly - If Traefik started correctly, we now have a web server listening on each node .exercise[ - Check that Traefik is serving 80/tcp: ```bash curl localhost ``` ] We should get a `404 page not found` error. This is normal: we haven't provided any ingress rule yet. .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Setting up DNS - To make our lives easier, we will use [nip.io](http://nip.io) - Check out `http://cheddar.A.B.C.D.nip.io` (replacing A.B.C.D with the IP address of `node1`) - We should get the same `404 page not found` error (meaning that our DNS is "set up properly", so to speak!) 
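If we want to double-check that the wildcard DNS does what we expect, we can resolve one of these names directly (a quick sketch; `10.0.0.1` is just a placeholder address):

```bash
# any name under X.Y.Z.W.nip.io resolves to X.Y.Z.W
host cheddar.10.0.0.1.nip.io
# expected output (roughly): cheddar.10.0.0.1.nip.io has address 10.0.0.1
```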
.debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Traefik web UI - Traefik provides a web dashboard - With the current install method, it's listening on port 8080 .exercise[ - Go to `http://node1:8080` (replacing `node1` with its IP address) ] .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Setting up host-based routing ingress rules - We are going to use `errm/cheese` images (there are [3 tags available](https://hub.docker.com/r/errm/cheese/tags/): wensleydale, cheddar, stilton) - These images contain a simple static HTTP server sending a picture of cheese - We will run 3 deployments (one for each cheese) - We will create 3 services (one for each deployment) - Then we will create 3 ingress rules (one for each service) - We will route `.A.B.C.D.nip.io` to the corresponding deployment .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Running cheesy web servers .exercise[ - Run all three deployments: ```bash kubectl create deployment cheddar --image=errm/cheese:cheddar kubectl create deployment stilton --image=errm/cheese:stilton kubectl create deployment wensleydale --image=errm/cheese:wensleydale ``` - Create a service for each of them: ```bash kubectl expose deployment cheddar --port=80 kubectl expose deployment stilton --port=80 kubectl expose deployment wensleydale --port=80 ``` ] .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## What does an ingress resource look like? Here is a minimal host-based ingress resource: ```yaml apiVersion: extensions/v1beta1 kind: Ingress metadata: name: cheddar spec: rules: - host: cheddar.`A.B.C.D`.nip.io http: paths: - path: / backend: serviceName: cheddar servicePort: 80 ``` (It is in `k8s/ingress.yaml`.) .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Creating our first ingress resources .exercise[ - Edit the file `~/container.training/k8s/ingress.yaml` - Replace A.B.C.D with the IP address of `node1` - Apply the file - Open http://cheddar.A.B.C.D.nip.io ] (An image of a piece of cheese should show up.) .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Creating the other ingress resources .exercise[ - Edit the file `~/container.training/k8s/ingress.yaml` - Replace `cheddar` with `stilton` (in `name`, `host`, `serviceName`) - Apply the file - Check that `stilton.A.B.C.D.nip.io` works correctly - Repeat for `wensleydale` ] .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Using multiple ingress controllers - You can have multiple ingress controllers active simultaneously (e.g. Traefik and NGINX) - You can even have multiple instances of the same controller (e.g. 
one for internal, another for external traffic) - The `kubernetes.io/ingress.class` annotation can be used to tell which one to use - It's OK if multiple ingress controllers configure the same resource (it just means that the service will be accessible through multiple paths) .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Ingress: the good - The traffic flows directly from the ingress load balancer to the backends - it doesn't need to go through the `ClusterIP` - in fact, we don't even need a `ClusterIP` (we can use a headless service) - The load balancer can be outside of Kubernetes (as long as it has access to the cluster subnet) - This allows the use of external (hardware, physical machines...) load balancers - Annotations can encode special features (rate-limiting, A/B testing, session stickiness, etc.) .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- ## Ingress: the bad - Aforementioned "special features" are not standardized yet - Some controllers will support them; some won't - Even relatively common features (stripping a path prefix) can differ: - [traefik.ingress.kubernetes.io/rule-type: PathPrefixStrip](https://docs.traefik.io/user-guide/kubernetes/#path-based-routing) - [ingress.kubernetes.io/rewrite-target: /](https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/rewrite) - This should eventually stabilize (remember that ingresses are currently `apiVersion: extensions/v1beta1`) .debug[[k8s/ingress.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/ingress.md)] --- class: pic .interstitial[] --- name: toc-exercise--creating-an-ingress class: title Exercise β creating an Ingress .nav[ [Previous section](#toc-exposing-http-services-with-ingress-resources) | [Back to table of contents](#toc-chapter-10) | [Next section](#toc-setting-up-kubernetes) ] .debug[(automatically generated title slide)] --- # Exercise β creating an Ingress Add an Ingress resource for the wordsmith app. .debug[[three.yml](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/three.yml)] --- class: pic .interstitial[] --- name: toc-setting-up-kubernetes class: title Setting up Kubernetes .nav[ [Previous section](#toc-exercise--creating-an-ingress) | [Back to table of contents](#toc-chapter-11) | [Next section](#toc-the-kubernetes-dashboard) ] .debug[(automatically generated title slide)] --- # Setting up Kubernetes - How did we set up these Kubernetes clusters that we're using? -- - We used `kubeadm` on freshly installed VM instances running Ubuntu LTS 1. Install Docker 2. Install Kubernetes packages 3. Run `kubeadm init` on the first node (it deploys the control plane on that node) 4. Set up Weave (the overlay network) (that step is just one `kubectl apply` command; discussed later) 5. Run `kubeadm join` on the other nodes (with the token produced by `kubeadm init`) 6. Copy the configuration file generated by `kubeadm init` - Check the [prepare VMs README](https://github.com/jpetazzo/container.training/blob/master/prepare-vms/README.md) for more details .debug[[k8s/setup-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/setup-k8s.md)] --- ## `kubeadm` drawbacks - Doesn't set up Docker or any other container engine - Doesn't set up the overlay network - Doesn't set up multi-master (no high availability) -- (At least ... not yet! 
Though it's [experimental in 1.12](https://kubernetes.io/docs/setup/independent/high-availability/).) -- - "It's still twice as many steps as setting up a Swarm cluster π" -- JΓ©rΓ΄me .debug[[k8s/setup-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/setup-k8s.md)] --- ## Other deployment options - [AKS](https://azure.microsoft.com/services/kubernetes-service/): managed Kubernetes on Azure - [GKE](https://cloud.google.com/kubernetes-engine/): managed Kubernetes on Google Cloud - [EKS](https://aws.amazon.com/eks/), [eksctl](https://eksctl.io/): managed Kubernetes on AWS - [kops](https://github.com/kubernetes/kops): customizable deployments on AWS, Digital Ocean, GCE (beta), vSphere (alpha) - [minikube](https://kubernetes.io/docs/setup/minikube/), [kubespawn](https://github.com/kinvolk/kube-spawn), [Docker Desktop](https://docs.docker.com/docker-for-mac/kubernetes/): for local development - [kubicorn](https://github.com/kubicorn/kubicorn), the [Cluster API](https://blogs.vmware.com/cloudnative/2019/03/14/what-and-why-of-cluster-api/): deploy your clusters declaratively, "the Kubernetes way" .debug[[k8s/setup-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/setup-k8s.md)] --- ## Even more deployment options - If you like Ansible: [kubespray](https://github.com/kubernetes-incubator/kubespray) - If you like Terraform: [typhoon](https://github.com/poseidon/typhoon) - If you like Terraform and Puppet: [tarmak](https://github.com/jetstack/tarmak) - You can also learn how to install every component manually, with the excellent tutorial [Kubernetes The Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way) *Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.* - There are also many commercial options available! - For a longer list, check the Kubernetes documentation: it has a great guide to [pick the right solution](https://kubernetes.io/docs/setup/#production-environment) to set up Kubernetes. .debug[[k8s/setup-k8s.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/setup-k8s.md)] --- class: pic .interstitial[] --- name: toc-the-kubernetes-dashboard class: title The Kubernetes dashboard .nav[ [Previous section](#toc-setting-up-kubernetes) | [Back to table of contents](#toc-chapter-11) | [Next section](#toc-security-implications-of-kubectl-apply) ] .debug[(automatically generated title slide)] --- # The Kubernetes dashboard - Kubernetes resources can also be viewed with a web dashboard - That dashboard is usually exposed over HTTPS (this requires obtaining a proper TLS certificate) - Dashboard users need to authenticate - We are going to take a *dangerous* shortcut .debug[[k8s/dashboard.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/dashboard.md)] --- ## The insecure method - We could (and should) use [Let's Encrypt](https://letsencrypt.org/) ... - ... but we don't want to deal with TLS certificates - We could (and should) learn how authentication and authorization work ... - ... but we will use a guest account with admin access instead .footnote[.warning[Yes, this will open our cluster to all kinds of shenanigans. 
Don't do this at home.]] .debug[[k8s/dashboard.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/dashboard.md)] --- ## Running a very insecure dashboard - We are going to deploy that dashboard with *one single command* - This command will create all the necessary resources (the dashboard itself, the HTTP wrapper, the admin/guest account) - All these resources are defined in a YAML file - All we have to do is load that YAML file with `kubectl apply -f` .exercise[ - Create all the dashboard resources, with the following command: ```bash kubectl apply -f ~/container.training/k8s/insecure-dashboard.yaml ``` ] .debug[[k8s/dashboard.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/dashboard.md)] --- ## Connecting to the dashboard .exercise[ - Check which port the dashboard is on: ```bash kubectl get svc dashboard ``` ] You'll want the `3xxxx` port. .exercise[ - Connect to http://oneofournodes:3xxxx/ ] The dashboard will then ask you which authentication you want to use. .debug[[k8s/dashboard.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/dashboard.md)] --- ## Dashboard authentication - We have three authentication options at this point: - token (associated with a role that has appropriate permissions) - kubeconfig (e.g. using the `~/.kube/config` file from `node1`) - "skip" (use the dashboard "service account") - Let's use "skip": we're logged in! -- .warning[By the way, we just added a backdoor to our Kubernetes cluster!] .debug[[k8s/dashboard.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/dashboard.md)] --- ## Running the Kubernetes dashboard securely - The steps that we just showed you are *for educational purposes only!* - If you do that on your production cluster, people [can and will abuse it](https://redlock.io/blog/cryptojacking-tesla) - For an in-depth discussion about securing the dashboard, check [this excellent post on Heptio's blog](https://blog.heptio.com/on-securing-the-kubernetes-dashboard-16b09b1b7aca) .debug[[k8s/dashboard.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/dashboard.md)] --- class: pic .interstitial[] --- name: toc-security-implications-of-kubectl-apply class: title Security implications of `kubectl apply` .nav[ [Previous section](#toc-the-kubernetes-dashboard) | [Back to table of contents](#toc-chapter-11) | [Next section](#toc-volumes) ] .debug[(automatically generated title slide)] --- # Security implications of `kubectl apply` - When we do `kubectl apply -f `, we create arbitrary resources - Resources can be evil; imagine a `deployment` that ...
-- - starts bitcoin miners on the whole cluster -- - hides in a non-default namespace -- - bind-mounts our nodes' filesystem -- - inserts SSH keys in the root account (on the node) -- - encrypts our data and ransoms it -- - ⚠️⚠️⚠️ .debug[[k8s/dashboard.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/dashboard.md)] --- ## `kubectl apply` is the new `curl | sh` - `curl | sh` is convenient - It's safe if you use HTTPS URLs from trusted sources -- - `kubectl apply -f` is convenient - It's safe if you use HTTPS URLs from trusted sources - Example: the official setup instructions for most pod networks -- - It introduces new failure modes (for instance, if you try to apply YAML from a link that's no longer valid) .debug[[k8s/dashboard.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/dashboard.md)] --- class: pic .interstitial[] --- name: toc-volumes class: title Volumes .nav[ [Previous section](#toc-security-implications-of-kubectl-apply) | [Back to table of contents](#toc-chapter-11) | [Next section](#toc-managing-configuration) ] .debug[(automatically generated title slide)] --- # Volumes - Volumes are special directories that are mounted in containers - Volumes can have many different purposes: - share files and directories between containers running on the same machine - share files and directories between containers and their host - centralize configuration information in Kubernetes and expose it to containers - manage credentials and secrets and expose them securely to containers - store persistent data for stateful services - access storage systems (like Ceph, EBS, NFS, Portworx, and many others) .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/volumes.md)] --- class: extra-details ## Kubernetes volumes vs. Docker volumes - Kubernetes and Docker volumes are very similar (the [Kubernetes documentation](https://kubernetes.io/docs/concepts/storage/volumes/) says otherwise ... but it refers to Docker 1.7, which was released in 2015!) - Docker volumes allow us to share data between containers running on the same host - Kubernetes volumes allow us to share data between containers in the same pod - Both Docker and Kubernetes volumes enable access to storage systems - Kubernetes volumes are also used to expose configuration and secrets - Docker has specific concepts for configuration and secrets (but under the hood, the technical implementation is similar) - If you're not familiar with Docker volumes, you can safely ignore this slide! .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/volumes.md)] --- ## Volumes vs. Persistent Volumes - Volumes and Persistent Volumes are related, but very different! - *Volumes*: - appear in Pod specifications (see next slide) - do not exist as API resources (**cannot** do `kubectl get volumes`) - *Persistent Volumes*: - are API resources (**can** do `kubectl get persistentvolumes`) - correspond to concrete volumes (e.g. on a SAN, EBS, etc.)
- cannot be associated with a Pod directly; but through a Persistent Volume Claim - won't be discussed further in this section .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/volumes.md)] --- ## A simple volume example ```yaml apiVersion: v1 kind: Pod metadata: name: nginx-with-volume spec: volumes: - name: www containers: - name: nginx image: nginx volumeMounts: - name: www mountPath: /usr/share/nginx/html/ ``` .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/volumes.md)] --- ## A simple volume example, explained - We define a standalone `Pod` named `nginx-with-volume` - In that pod, there is a volume named `www` - No type is specified, so it will default to `emptyDir` (as the name implies, it will be initialized as an empty directory at pod creation) - In that pod, there is also a container named `nginx` - That container mounts the volume `www` to path `/usr/share/nginx/html/` .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/volumes.md)] --- ## A volume shared between two containers .small[ ```yaml apiVersion: v1 kind: Pod metadata: name: nginx-with-volume spec: volumes: - name: www containers: - name: nginx image: nginx volumeMounts: - name: www mountPath: /usr/share/nginx/html/ - name: git image: alpine command: [ "sh", "-c", "apk add --no-cache git && git clone https://github.com/octocat/Spoon-Knife /www" ] volumeMounts: - name: www mountPath: /www/ restartPolicy: OnFailure ``` ] .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/volumes.md)] --- ## Sharing a volume, explained - We added another container to the pod - That container mounts the `www` volume on a different path (`/www`) - It uses the `alpine` image - When started, it installs `git` and clones the `octocat/Spoon-Knife` repository (that repository contains a tiny HTML website) - As a result, NGINX now serves this website .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/volumes.md)] --- ## Sharing a volume, in action - Let's try it! .exercise[ - Create the pod by applying the YAML file: ```bash kubectl apply -f ~/container.training/k8s/nginx-with-volume.yaml ``` - Check the IP address that was allocated to our pod: ```bash kubectl get pod nginx-with-volume -o wide IP=$(kubectl get pod nginx-with-volume -o json | jq -r .status.podIP) ``` - Access the web server: ```bash curl $IP ``` ] .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/volumes.md)] --- ## The devil is in the details - The default `restartPolicy` is `Always` - This would cause our `git` container to run again ... and again ... 
and again (with an exponential back-off delay, as explained [in the documentation](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy)) - That's why we specified `restartPolicy: OnFailure` - There is a short period of time during which the website is not available (because the `git` container hasn't done its job yet) - This could be avoided by using [Init Containers](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) (we will see a live example in a few sections) .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/volumes.md)] --- ## Volume lifecycle - The lifecycle of a volume is linked to the pod's lifecycle - This means that a volume is created when the pod is created - This is mostly relevant for `emptyDir` volumes (other volumes, like remote storage, are not "created" but rather "attached" ) - A volume survives across container restarts - A volume is destroyed (or, for remote storage, detached) when the pod is destroyed .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/volumes.md)] --- class: pic .interstitial[] --- name: toc-managing-configuration class: title Managing configuration .nav[ [Previous section](#toc-volumes) | [Back to table of contents](#toc-chapter-11) | [Next section](#toc-stateful-sets) ] .debug[(automatically generated title slide)] --- # Managing configuration - Some applications need to be configured (obviously!) - There are many ways for our code to pick up configuration: - command-line arguments - environment variables - configuration files - configuration servers (getting configuration from a database, an API...) - ... and more (because programmers can be very creative!) - How can we do these things with containers and Kubernetes? .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Passing configuration to containers - There are many ways to pass configuration to code running in a container: - baking it into a custom image - command-line arguments - environment variables - injecting configuration files - exposing it over the Kubernetes API - configuration servers - Let's review these different strategies! .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Baking custom images - Put the configuration in the image (it can be in a configuration file, but also `ENV` or `CMD` actions) - It's easy! It's simple! - Unfortunately, it also has downsides: - multiplication of images - different images for dev, staging, prod ... - minor reconfigurations require a whole build/push/pull cycle - Avoid doing it unless you don't have the time to figure out other options .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Command-line arguments - Pass options to `args` array in the container specification - Example ([source](https://github.com/coreos/pods/blob/master/kubernetes.yaml#L29)): ```yaml args: - "--data-dir=/var/lib/etcd" - "--advertise-client-urls=http://127.0.0.1:2379" - "--listen-client-urls=http://127.0.0.1:2379" - "--listen-peer-urls=http://127.0.0.1:2380" - "--name=etcd" ``` - The options can be passed directly to the program that we run ... ... or to a wrapper script that will use them to e.g. 
generate a config file .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Command-line arguments, pros & cons - Works great when options are passed directly to the running program (otherwise, a wrapper script can work around the issue) - Works great when there aren't too many parameters (to avoid a 20-lines `args` array) - Requires documentation and/or understanding of the underlying program ("which parameters and flags do I need, again?") - Well-suited for mandatory parameters (without default values) - Not ideal when we need to pass a real configuration file anyway .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Environment variables - Pass options through the `env` map in the container specification - Example: ```yaml env: - name: ADMIN_PORT value: "8080" - name: ADMIN_AUTH value: Basic - name: ADMIN_CRED value: "admin:0pensesame!" ``` .warning[`value` must be a string! Make sure that numbers and fancy strings are quoted.] π€ Why this weird `{name: xxx, value: yyy}` scheme? It will be revealed soon! .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## The downward API - In the previous example, environment variables have fixed values - We can also use a mechanism called the *downward API* - The downward API allows exposing pod or container information - either through special files (we won't show that for now) - or through environment variables - The value of these environment variables is computed when the container is started - Remember: environment variables won't (can't) change after container start - Let's see a few concrete examples! 
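Most of them follow the same pattern: `valueFrom`, with a `fieldRef` (or `resourceFieldRef`) pointing at the field to expose. As a teaser, here is a minimal sketch exposing the pod's own name (the variable name `MY_POD_NAME` is arbitrary):

```yaml
env:
- name: MY_POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
```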
.debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Exposing the pod's namespace ```yaml - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace ``` - Useful to generate FQDN of services (in some contexts, a short name is not enough) - For instance, the two commands should be equivalent: ``` curl api-backend curl api-backend.$MY_POD_NAMESPACE.svc.cluster.local ``` .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Exposing the pod's IP address ```yaml - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP ``` - Useful if we need to know our IP address (we could also read it from `eth0`, but this is more solid) .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Exposing the container's resource limits ```yaml - name: MY_MEM_LIMIT valueFrom: resourceFieldRef: containerName: test-container resource: limits.memory ``` - Useful for runtimes where memory is garbage collected - Example: the JVM (the memory available to the JVM should be set with the `-Xmx ` flag) - Best practice: set a memory limit, and pass it to the runtime (see [this blog post](https://very-serio.us/2017/12/05/running-jvms-in-kubernetes/) for a detailed example) .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## More about the downward API - [This documentation page](https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/) tells more about these environment variables - And [this one](https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) explains the other way to use the downward API (through files that get created in the container filesystem) .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Environment variables, pros and cons - Works great when the running program expects these variables - Works great for optional parameters with reasonable defaults (since the container image can provide these defaults) - Sort of auto-documented (we can see which environment variables are defined in the image, and their values) - Can be (ab)used with longer values ... - ... You *can* put an entire Tomcat configuration file in an environment ... - ... But *should* you? (Do it if you really need to, we're not judging! But we'll see better ways.) .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Injecting configuration files - Sometimes, there is no way around it: we need to inject a full config file - Kubernetes provides a mechanism for that purpose: `configmaps` - A configmap is a Kubernetes resource that exists in a namespace - Conceptually, it's a key/value map (values are arbitrary strings) - We can think about them in (at least) two different ways: - as holding entire configuration file(s) - as holding individual configuration parameters *Note: to hold sensitive information, we can use "Secrets", which are another type of resource behaving very much like configmaps. 
We'll cover them just after!* .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Configmaps storing entire files - In this case, each key/value pair corresponds to a configuration file - Key = name of the file - Value = content of the file - There can be one key/value pair, or as many as necessary (for complex apps with multiple configuration files) - Examples: ``` # Create a configmap with a single key, "app.conf" kubectl create configmap my-app-config --from-file=app.conf # Create a configmap with a single key, "app.conf" but another file kubectl create configmap my-app-config --from-file=app.conf=app-prod.conf # Create a configmap with multiple keys (one per file in the config.d directory) kubectl create configmap my-app-config --from-file=config.d/ ``` .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Configmaps storing individual parameters - In this case, each key/value pair corresponds to a parameter - Key = name of the parameter - Value = value of the parameter - Examples: ``` # Create a configmap with two keys kubectl create cm my-app-config \ --from-literal=foreground=red \ --from-literal=background=blue # Create a configmap from a file containing key=val pairs kubectl create cm my-app-config \ --from-env-file=app.conf ``` .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Exposing configmaps to containers - Configmaps can be exposed as plain files in the filesystem of a container - this is achieved by declaring a volume and mounting it in the container - this is particularly effective for configmaps containing whole files - Configmaps can be exposed as environment variables in the container - this is achieved with the downward API - this is particularly effective for configmaps containing individual parameters - Let's see how to do both! 
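Side note: when a configmap holds many individual parameters, there is also `envFrom`, which injects *all* of its keys as environment variables in one go. A minimal sketch, assuming a configmap named `my-app-config`:

```yaml
containers:
- name: myapp
  image: ...
  envFrom:
  - configMapRef:
      name: my-app-config
```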
.debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Passing a configuration file with a configmap - We will start a load balancer powered by HAProxy - We will use the [official `haproxy` image](https://hub.docker.com/_/haproxy/) - It expects to find its configuration in `/usr/local/etc/haproxy/haproxy.cfg` - We will provide a simple HAproxy configuration, `k8s/haproxy.cfg` - It listens on port 80, and load balances connections between IBM and Google .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Creating the configmap .exercise[ - Go to the `k8s` directory in the repository: ```bash cd ~/container.training/k8s ``` - Create a configmap named `haproxy` and holding the configuration file: ```bash kubectl create configmap haproxy --from-file=haproxy.cfg ``` - Check what our configmap looks like: ```bash kubectl get configmap haproxy -o yaml ``` ] .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Using the configmap We are going to use the following pod definition: ```yaml apiVersion: v1 kind: Pod metadata: name: haproxy spec: volumes: - name: config configMap: name: haproxy containers: - name: haproxy image: haproxy volumeMounts: - name: config mountPath: /usr/local/etc/haproxy/ ``` .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Using the configmap - The resource definition from the previous slide is in `k8s/haproxy.yaml` .exercise[ - Create the HAProxy pod: ```bash kubectl apply -f ~/container.training/k8s/haproxy.yaml ``` - Check the IP address allocated to the pod: ```bash kubectl get pod haproxy -o wide IP=$(kubectl get pod haproxy -o json | jq -r .status.podIP) ``` ] .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Testing our load balancer - The load balancer will send: - half of the connections to Google - the other half to IBM .exercise[ - Access the load balancer a few times: ```bash curl $IP curl $IP curl $IP ``` ] We should see connections served by Google, and others served by IBM. (Each server sends us a redirect page. Look at the URL that they send us to!) 
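To look at just that URL without scrolling through HTML, one possible shortcut (a sketch, nothing more):

```bash
# only fetch the response headers; the Location header tells us who answered
curl -sI $IP | grep -i ^location
```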
.debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Exposing configmaps with the downward API - We are going to run a Docker registry on a custom port - By default, the registry listens on port 5000 - This can be changed by setting environment variable `REGISTRY_HTTP_ADDR` - We are going to store the port number in a configmap - Then we will expose that configmap as a container environment variable .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Creating the configmap .exercise[ - Our configmap will have a single key, `http.addr`: ```bash kubectl create configmap registry --from-literal=http.addr=0.0.0.0:80 ``` - Check our configmap: ```bash kubectl get configmap registry -o yaml ``` ] .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Using the configmap We are going to use the following pod definition: ```yaml apiVersion: v1 kind: Pod metadata: name: registry spec: containers: - name: registry image: registry env: - name: REGISTRY_HTTP_ADDR valueFrom: configMapKeyRef: name: registry key: http.addr ``` .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Using the configmap - The resource definition from the previous slide is in `k8s/registry.yaml` .exercise[ - Create the registry pod: ```bash kubectl apply -f ~/container.training/k8s/registry.yaml ``` - Check the IP address allocated to the pod: ```bash kubectl get pod registry -o wide IP=$(kubectl get pod registry -o json | jq -r .status.podIP) ``` - Confirm that the registry is available on port 80: ```bash curl $IP/v2/_catalog ``` ] .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Passwords, tokens, sensitive information - For sensitive information, there is another special resource: *Secrets* - Secrets and Configmaps work almost the same way (we'll expose the differences on the next slide) - The *intent* is different, though: *"You should use secrets for things which are actually secret like API keys, credentials, etc., and use config map for not-secret configuration data."* *"In the future there will likely be some differentiators for secrets like rotation or support for backing the secret API w/ HSMs, etc."* (Source: [the author of both features](https://stackoverflow.com/a/36925553/580281 )) .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- ## Differences between configmaps and secrets - Secrets are base64-encoded when shown with `kubectl get secrets -o yaml` - keep in mind that this is just *encoding*, not *encryption* - it is very easy to [automatically extract and decode secrets](https://medium.com/@mveritym/decoding-kubernetes-secrets-60deed7a96a3) - [Secrets can be encrypted at rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/) - With RBAC, we can authorize a user to access configmaps, but not secrets (since they are two different kinds of resources) .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/configuration.md)] --- class: pic .interstitial[] --- name: toc-stateful-sets class: title Stateful sets .nav[ [Previous section](#toc-managing-configuration) | 
[Back to table of contents](#toc-chapter-12) | [Next section](#toc-running-a-consul-cluster) ] .debug[(automatically generated title slide)] --- # Stateful sets - Stateful sets are a type of resource in the Kubernetes API (like pods, deployments, services...) - They offer mechanisms to deploy scaled stateful applications - At a first glance, they look like *deployments*: - a stateful set defines a pod spec and a number of replicas *R* - it will make sure that *R* copies of the pod are running - that number can be changed while the stateful set is running - updating the pod spec will cause a rolling update to happen - But they also have some significant differences .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Stateful sets unique features - Pods in a stateful set are numbered (from 0 to *R-1*) and ordered - They are started and updated in order (from 0 to *R-1*) - A pod is started (or updated) only when the previous one is ready - They are stopped in reverse order (from *R-1* to 0) - Each pod know its identity (i.e. which number it is in the set) - Each pod can discover the IP address of the others easily - The pods can persist data on attached volumes π€ Wait a minute ... Can't we already attach volumes to pods and deployments? .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Revisiting volumes - [Volumes](https://kubernetes.io/docs/concepts/storage/volumes/) are used for many purposes: - sharing data between containers in a pod - exposing configuration information and secrets to containers - accessing storage systems - Let's see examples of the latter usage .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Volumes types - There are many [types of volumes](https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes) available: - public cloud storage (GCEPersistentDisk, AWSElasticBlockStore, AzureDisk...) - private cloud storage (Cinder, VsphereVolume...) - traditional storage systems (NFS, iSCSI, FC...) - distributed storage (Ceph, Glusterfs, Portworx...) - Using a persistent volume requires: - creating the volume out-of-band (outside of the Kubernetes API) - referencing the volume in the pod description, with all its parameters .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Using a cloud volume Here is a pod definition using an AWS EBS volume (that has to be created first): ```yaml apiVersion: v1 kind: Pod metadata: name: pod-using-my-ebs-volume spec: containers: - image: ... name: container-using-my-ebs-volume volumeMounts: - mountPath: /my-ebs name: my-ebs-volume volumes: - name: my-ebs-volume awsElasticBlockStore: volumeID: vol-049df61146c4d7901 fsType: ext4 ``` .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Using an NFS volume Here is another example using a volume on an NFS server: ```yaml apiVersion: v1 kind: Pod metadata: name: pod-using-my-nfs-volume spec: containers: - image: ... 
name: container-using-my-nfs-volume volumeMounts: - mountPath: /my-nfs name: my-nfs-volume volumes: - name: my-nfs-volume nfs: server: 192.168.0.55 path: "/exports/assets" ``` .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Shortcomings of volumes - Their lifecycle (creation, deletion...) is managed outside of the Kubernetes API (we can't just use `kubectl apply/create/delete/...` to manage them) - If a Deployment uses a volume, all replicas end up using the same volume - That volume must then support concurrent access - some volumes do (e.g. NFS servers support concurrent read/write access) - some volumes support concurrent reads - some volumes support concurrent access for colocated pods - What we really need is a way for each replica to have its own volume .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Persistent Volume Claims - To abstract the different types of storage, a pod can use a special volume type - This type is a *Persistent Volume Claim* - A Persistent Volume Claim (PVC) is a resource type (visible with `kubectl get persistentvolumeclaims` or `kubectl get pvc`) - A PVC is not a volume; it is a *request for a volume* .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Persistent Volume Claims in practice - Using a Persistent Volume Claim is a two-step process: - creating the claim - using the claim in a pod (as if it were any other kind of volume) - A PVC starts by being Unbound (without an associated volume) - Once it is associated with a Persistent Volume, it becomes Bound - A Pod referring to an unbound PVC will not start (but as soon as the PVC is bound, the Pod can start) .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Binding PV and PVC - A Kubernetes controller continuously watches PV and PVC objects - When it notices an unbound PVC, it tries to find a satisfactory PV ("satisfactory" in terms of size and other characteristics; see next slide) - If no PV fits the PVC, a PV can be created dynamically (this requires configuring a *dynamic provisioner*; more on that later) - Otherwise, the PVC remains unbound indefinitely (until we manually create a PV or set up dynamic provisioning) .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## What's in a Persistent Volume Claim? - At the very least, the claim should indicate: - the size of the volume (e.g. "5 GiB") - the access mode (e.g. "read-write by a single pod") - Optionally, it can also specify a Storage Class - The Storage Class indicates: - which storage system to use (e.g. Portworx, EBS...) - extra parameters for that storage system e.g.: "replicate the data 3 times, and use SSD media" .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## What's a Storage Class? - A Storage Class is yet another Kubernetes API resource (visible with e.g. `kubectl get storageclass` or `kubectl get sc`) - It indicates which *provisioner* to use (which controller will create the actual volume) - And arbitrary parameters for that provisioner (replication levels, type of disk ... anything relevant!)
- Storage Classes are required if we want to use [dynamic provisioning](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/) (but we can also create volumes manually, and ignore Storage Classes) .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Defining a Persistent Volume Claim Here is a minimal PVC: ```yaml kind: PersistentVolumeClaim apiVersion: v1 metadata: name: my-claim spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi ``` .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Using a Persistent Volume Claim Here is a Pod definition like the ones shown earlier, but using a PVC: ```yaml apiVersion: v1 kind: Pod metadata: name: pod-using-a-claim spec: containers: - image: ... name: container-using-a-claim volumeMounts: - mountPath: /my-vol name: my-volume volumes: - name: my-volume persistentVolumeClaim: claimName: my-claim ``` .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Persistent Volume Claims and Stateful sets - The pods in a stateful set can define a `volumeClaimTemplate` - A `volumeClaimTemplate` will dynamically create one Persistent Volume Claim per pod - Each pod will therefore have its own volume - These volumes are numbered (like the pods) - When updating the stateful set (e.g. image upgrade), each pod keeps its volume - When pods get rescheduled (e.g. node failure), they keep their volume (this requires a storage system that is not node-local) - These volumes are not automatically deleted (when the stateful set is scaled down or deleted) .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Stateful set recap - A Stateful set manages a number of identical pods (like a Deployment) - These pods are numbered, and started/upgraded/stopped in a specific order - These pods are aware of their number (e.g., #0 can decide to be the primary, and #1 can be secondary) - These pods can find the IP addresses of the other pods in the set (through a *headless service*) - These pods can each have their own persistent storage (Deployments cannot do that) .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- class: pic .interstitial[] --- name: toc-running-a-consul-cluster class: title Running a Consul cluster .nav[ [Previous section](#toc-stateful-sets) | [Back to table of contents](#toc-chapter-12) | [Next section](#toc-local-persistent-volumes) ] .debug[(automatically generated title slide)] --- # Running a Consul cluster - Here is a good use-case for Stateful sets!
- We are going to deploy a Consul cluster with 3 nodes - Consul is a highly-available key/value store (like etcd or Zookeeper) - One easy way to bootstrap a cluster is to tell each node: - the addresses of other nodes - how many nodes are expected (to know when quorum is reached) .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Bootstrapping a Consul cluster *After reading the Consul documentation carefully (and/or asking around), we figure out the minimal command-line to run our Consul cluster.* ``` consul agent -data-dir=/consul/data -client=0.0.0.0 -server -ui \ -bootstrap-expect=3 \ -retry-join=`X.X.X.X` \ -retry-join=`Y.Y.Y.Y` ``` - Replace X.X.X.X and Y.Y.Y.Y with the addresses of other nodes - The same command-line can be used on all nodes (convenient!) .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Cloud Auto-join - Since version 1.4.0, Consul can use the Kubernetes API to find its peers - This is called [Cloud Auto-join] - Instead of passing an IP address, we need to pass a parameter like this: ``` consul agent -retry-join "provider=k8s label_selector=\"app=consul\"" ``` - Consul needs to be able to talk to the Kubernetes API - We can provide a `kubeconfig` file - If Consul runs in a pod, it will use the *service account* of the pod [Cloud Auto-join]: https://www.consul.io/docs/agent/cloud-auto-join.html#kubernetes-k8s- .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Setting up Cloud auto-join - We need to create a service account for Consul - We need to create a role that can `list` and `get` pods - We need to bind that role to the service account - And of course, we need to make sure that Consul pods use that service account .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Putting it all together - The file `k8s/consul.yaml` defines the required resources (service account, cluster role, cluster role binding, service, stateful set) - It has a few extra touches: - a `podAntiAffinity` prevents two pods from running on the same node - a `preStop` hook makes the pod leave the cluster when it is shut down gracefully This was inspired by this [excellent tutorial](https://github.com/kelseyhightower/consul-on-kubernetes) by Kelsey Hightower. Some features from the original tutorial (TLS authentication between nodes and encryption of gossip traffic) were removed for simplicity. .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Running our Consul cluster - We'll use the provided YAML file .exercise[ - Create the stateful set and associated service: ```bash kubectl apply -f ~/container.training/k8s/consul.yaml ``` - Check the logs as the pods come up one after another: ```bash stern consul ``` - Check the health of the cluster: ```bash kubectl exec consul-0 consul members ``` ] .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Caveats - We haven't used a `volumeClaimTemplate` here - That's because we don't have a storage provider yet (except if you're running this on your own and your cluster has one) - What happens if we lose a pod?
- a new pod gets rescheduled (with an empty state) - the new pod tries to connect to the two others - it will be accepted (after 1-2 minutes of instability) - and it will retrieve the data from the other pods .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- ## Failure modes - What happens if we lose two pods? - manual repair will be required - we will need to instruct the remaining one to act solo - then rejoin new pods - What happens if we lose three pods? (aka all of them) - we lose all the data (ouch) - If we run Consul without persistent storage, backups are a good idea! .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/statefulsets.md)] --- class: pic .interstitial[] --- name: toc-local-persistent-volumes class: title Local Persistent Volumes .nav[ [Previous section](#toc-running-a-consul-cluster) | [Back to table of contents](#toc-chapter-12) | [Next section](#toc-highly-available-persistent-volumes) ] .debug[(automatically generated title slide)] --- # Local Persistent Volumes - We want to run that Consul cluster *and* actually persist data - But we don't have a distributed storage system - We are going to use local volumes instead (similar conceptually to `hostPath` volumes) - We can use local volumes without installing extra plugins - However, they are tied to a node - If that node goes down, the volume becomes unavailable .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/local-persistent-volumes.md)] --- ## With or without dynamic provisioning - We will deploy a Consul cluster *with* persistence - That cluster's StatefulSet will create PVCs - These PVCs will remain unbound¹, until we create local volumes manually (we will basically do the job of the dynamic provisioner) - Then, we will see how to automate that with a dynamic provisioner .footnote[¹Unbound = without an associated Persistent Volume.] .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/local-persistent-volumes.md)] --- ## If we have a dynamic provisioner ... - The labs in this section assume that we *do not* have a dynamic provisioner - If we do have one, we need to disable it .exercise[ - Check if we have a dynamic provisioner: ```bash kubectl get storageclass ``` - If the output contains a line with `(default)`, run this command: ```bash kubectl annotate sc storageclass.kubernetes.io/is-default-class- --all ``` - Check again that it is no longer marked as `(default)` ] .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/local-persistent-volumes.md)] --- ## Work in a separate namespace - To avoid conflicts with existing resources, let's create and use a new namespace .exercise[ - Create a new namespace: ```bash kubectl create namespace orange ``` - Switch to that namespace: ```bash kns orange ``` ] .warning[Make sure to call that namespace `orange`: it is hardcoded in the YAML files.]
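If `kns` is not available on your machine, a plain `kubectl` equivalent would be the following sketch (it updates the namespace of whatever context is currently selected):

```bash
# Point the current context at the "orange" namespace
kubectl config set-context $(kubectl config current-context) --namespace=orange

# Double-check which namespace is now selected
kubectl config view --minify --output 'jsonpath={..namespace}'; echo
```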
.debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/local-persistent-volumes.md)] --- ## Deploying Consul - We will use a slightly different YAML file - The only differences between that file and the previous one are: - `volumeClaimTemplate` defined in the Stateful Set spec - the corresponding `volumeMounts` in the Pod spec - the namespace `orange` used for discovery of Pods .exercise[ - Apply the persistent Consul YAML file: ```bash kubectl apply -f ~/container.training/k8s/persistent-consul.yaml ``` ] .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/local-persistent-volumes.md)] --- ## Observing the situation - Let's look at Persistent Volume Claims and Pods .exercise[ - Check that we now have an unbound Persistent Volume Claim: ```bash kubectl get pvc ``` - We don't have any Persistent Volume: ```bash kubectl get pv ``` - The Pod `consul-0` is not scheduled yet: ```bash kubectl get pods -o wide ``` ] *Hint: leave these commands running with `-w` in different windows.* .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/local-persistent-volumes.md)] --- ## Explanations - In a Stateful Set, the Pods are started one by one - `consul-1` won't be created until `consul-0` is running - `consul-0` has a dependency on an unbound Persistent Volume Claim - The scheduler won't schedule the Pod until the PVC is bound (because the PVC might be bound to a volume that is only available on a subset of nodes; for instance, EBS volumes are tied to an availability zone) .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/local-persistent-volumes.md)] --- ## Creating Persistent Volumes - Let's create 3 local directories (`/mnt/consul`) on node2, node3, node4 - Then create 3 Persistent Volumes corresponding to these directories .exercise[ - Create the local directories: ```bash for NODE in node2 node3 node4; do ssh $NODE sudo mkdir -p /mnt/consul done ``` - Create the PV objects: ```bash kubectl apply -f ~/container.training/k8s/volumes-for-consul.yaml ``` ] .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/local-persistent-volumes.md)] --- ## Check our Consul cluster - The PVs that we created will be automatically matched with the PVCs - Once a PVC is bound, its pod can start normally - Once the pod `consul-0` has started, `consul-1` can be created, etc. - Eventually, our Consul cluster is up, and backed by "persistent" volumes .exercise[ - Check that our Consul cluster indeed has 3 members: ```bash kubectl exec consul-0 consul members ``` ] .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/local-persistent-volumes.md)] --- ## Devil is in the details (1/2) - The size of the Persistent Volumes is bogus (it is used when matching PVs and PVCs together, but there is no actual quota or limit) .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/local-persistent-volumes.md)] --- ## Devil is in the details (2/2) - This specific example worked because we had exactly 1 free PV per node: - if we had created multiple PVs per node ... - we could have ended up with two PVCs bound to PVs on the same node ...
- which would have required two pods to be on the same node ... - which is forbidden by the anti-affinity constraints in the StatefulSet - To avoid that, we need to associate the PVs with a Storage Class that has: ```yaml volumeBindingMode: WaitForFirstConsumer ``` (this means that a PVC will be bound to a PV only after being used by a Pod) - See [this blog post](https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/) for more details .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/local-persistent-volumes.md)] --- ## Bulk provisioning - It's not practical to manually create directories and PVs for each app - We *could* pre-provision a number of PVs across our fleet - We could even automate that with a Daemon Set: - creating a number of directories on each node - creating the corresponding PV objects - We also need to recycle volumes - ... This can quickly get out of hand .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/local-persistent-volumes.md)] --- ## Dynamic provisioning - We could also write our own provisioner, which would: - watch the PVCs across all namespaces - when a PVC is created, create a corresponding PV on a node - Or we could use one of the dynamic provisioners for local persistent volumes (for instance the [Rancher local path provisioner](https://github.com/rancher/local-path-provisioner)) .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/local-persistent-volumes.md)] --- ## Strategies for local persistent volumes - Remember, when a node goes down, the volumes on that node become unavailable - High availability will require another layer of replication (like what we've just seen with Consul; or primary/secondary; etc.) - Pre-provisioning PVs makes sense for machines with local storage (e.g. cloud instance storage; or storage directly attached to a physical machine) - Dynamic provisioning makes sense for a large number of applications (when we can't or won't dedicate a whole disk to a volume) - It's possible to mix both (using distinct Storage Classes) .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/local-persistent-volumes.md)] --- class: pic .interstitial[] --- name: toc-highly-available-persistent-volumes class: title Highly available Persistent Volumes .nav[ [Previous section](#toc-local-persistent-volumes) | [Back to table of contents](#toc-chapter-12) | [Next section](#toc-next-steps) ] .debug[(automatically generated title slide)] --- # Highly available Persistent Volumes - How can we achieve true durability? - How can we store data that would survive the loss of a node?
-- - We need to use Persistent Volumes backed by highly available storage systems - There are many ways to achieve that: - leveraging our cloud's storage APIs - using NAS/SAN systems or file servers - distributed storage systems -- - We are going to see one distributed storage system in action .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Our test scenario - We will set up a distributed storage system on our cluster - We will use it to deploy a SQL database (PostgreSQL) - We will insert some test data in the database - We will disrupt the node running the database - We will see how it recovers .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Portworx - Portworx is a *commercial* persistent storage solution for containers - It works with Kubernetes, but also Mesos, Swarm ... - It provides [hyper-converged](https://en.wikipedia.org/wiki/Hyper-converged_infrastructure) storage (=storage is provided by regular compute nodes) - We're going to use it here because it can be deployed on any Kubernetes cluster (it doesn't require any particular infrastructure) - We don't endorse or support Portworx in any particular way (but we appreciate that it's super easy to install!) .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## A useful reminder - We're installing Portworx because we need a storage system - If you are using AKS, EKS, GKE ... you already have a storage system (but you might want another one, e.g. to leverage local storage) - If you have set up Kubernetes yourself, there are other solutions available too - on premises, you can use a good old SAN/NAS - on a private cloud like OpenStack, you can use e.g. Cinder - everywhere, you can use other systems, e.g. Gluster, StorageOS .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Portworx requirements - Kubernetes cluster ✔️ - Optional key/value store (etcd or Consul) ❌ - At least one available block device ❌ .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## The key-value store - In the current version of Portworx (1.4) it is recommended to use etcd or Consul - But Portworx also has beta support for an embedded key/value store - For simplicity, we are going to use the latter option (but if we have deployed Consul or etcd, we can use that, too) .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## One available block device - Block device = disk or partition on a disk - We can see block devices with `lsblk` (or `cat /proc/partitions` if we're old school like that!)
- If we don't have a spare disk or partition, we can use a *loop device* - A loop device is a block device actually backed by a file - These are frequently used to mount ISO (CD/DVD) images or VM disk images .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Setting up a loop device - We are going to create a 10 GB (empty) file on each node - Then make a loop device from it, to be used by Portworx .exercise[ - Create a 10 GB file on each node: ```bash for N in $(seq 1 4); do ssh node$N sudo truncate --size 10G /portworx.blk; done ``` (If SSH asks to confirm host keys, enter `yes` each time.) - Associate the file with a loop device on each node: ```bash for N in $(seq 1 4); do ssh node$N sudo losetup /dev/loop4 /portworx.blk; done ``` ] .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Installing Portworx - To install Portworx, we need to go to https://install.portworx.com/ - This website will ask us a bunch of questions about our cluster - Then, it will generate a YAML file that we should apply to our cluster -- - Or, we can just apply that YAML file directly (it's in `k8s/portworx.yaml`) .exercise[ - Install Portworx: ```bash kubectl apply -f ~/container.training/k8s/portworx.yaml ``` ] .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- class: extra-details ## Generating a custom YAML file If you want to generate a YAML file tailored to your own needs, the easiest way is to use https://install.portworx.com/. FYI, this is how we obtained the YAML file used earlier (note the quotes around the URL, so that the shell does not interpret the `&` characters): ``` KBVER=$(kubectl version -o json | jq -r .serverVersion.gitVersion) BLKDEV=/dev/loop4 curl "https://install.portworx.com/1.4/?kbver=$KBVER&b=true&s=$BLKDEV&c=px-workshop&stork=true&lh=true" ``` If you want to use an external key/value store, add one of the following to the URL: ``` &k=etcd://`XXX`:2379 &k=consul://`XXX`:8500 ``` ... where `XXX` is the name or address of your etcd or Consul server. .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Waiting for Portworx to be ready - The installation process will take a few minutes .exercise[ - Check out the logs: ```bash stern -n kube-system portworx ``` - Wait until it gets quiet (you should see `portworx service is healthy`, too) ] .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Dynamic provisioning of persistent volumes - We are going to run PostgreSQL in a Stateful set - The Stateful set will specify a `volumeClaimTemplate` - That `volumeClaimTemplate` will create Persistent Volume Claims - Kubernetes' [dynamic provisioning](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/) will satisfy these Persistent Volume Claims (by creating Persistent Volumes and binding them to the claims) - The Persistent Volumes are then available for the PostgreSQL pods .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Storage Classes - It's possible that multiple storage systems are available - Or, that a storage system offers multiple tiers of storage (SSD vs. magnetic; mirrored or not; etc.)
- We need to tell Kubernetes *which* system and tier to use - This is achieved by creating a Storage Class - A `volumeClaimTemplate` can indicate which Storage Class to use - It is also possible to mark a Storage Class as "default" (it will be used if a `volumeClaimTemplate` doesn't specify one) .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Our default Storage Class This is our Storage Class (in `k8s/storage-class.yaml`): ```yaml kind: StorageClass apiVersion: storage.k8s.io/v1beta1 metadata: name: portworx-replicated annotations: storageclass.kubernetes.io/is-default-class: "true" provisioner: kubernetes.io/portworx-volume parameters: repl: "2" priority_io: "high" ``` - It says "use Portworx to create volumes" - It tells Portworx to "keep 2 replicas of these volumes" - It marks the Storage Class as being the default one .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Creating our Storage Class - Let's apply that YAML file! .exercise[ - Create the Storage Class: ```bash kubectl apply -f ~/container.training/k8s/storage-class.yaml ``` - Check that it is now available: ```bash kubectl get sc ``` ] It should show as `portworx-replicated (default)`. .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Our Postgres Stateful set - The next slide shows `k8s/postgres.yaml` - It defines a Stateful set - With a `volumeClaimTemplate` requesting a 1 GB volume - That volume will be mounted to `/var/lib/postgresql/data` - There is another little detail: we enable the `stork` scheduler - The `stork` scheduler is optional (it's specific to Portworx) - It helps the Kubernetes scheduler to colocate the pod with its volume (see [this blog post](https://portworx.com/stork-storage-orchestration-kubernetes/) for more details about that) .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- .small[ ```yaml apiVersion: apps/v1 kind: StatefulSet metadata: name: postgres spec: selector: matchLabels: app: postgres serviceName: postgres template: metadata: labels: app: postgres spec: schedulerName: stork containers: - name: postgres image: postgres:10.5 volumeMounts: - mountPath: /var/lib/postgresql/data name: postgres volumeClaimTemplates: - metadata: name: postgres spec: accessModes: ["ReadWriteOnce"] resources: requests: storage: 1Gi ``` ] .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Creating the Stateful set - Before applying the YAML, watch what's going on with `kubectl get events -w` .exercise[ - Apply that YAML: ```bash kubectl apply -f ~/container.training/k8s/postgres.yaml ``` ] .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Testing our PostgreSQL pod - We will use `kubectl exec` to get a shell in the pod - Good to know: we need to use the `postgres` user in the pod .exercise[ - Get a shell in the pod, as the `postgres` user: ```bash kubectl exec -ti postgres-0 su postgres ``` - Check that default databases have been created correctly: ```bash psql -l ``` ] (This should show us 3 lines: postgres, template0, and template1.) 
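We can also confirm that dynamic provisioning did its job. This is a quick check, assuming the Storage Class and Stateful set shown earlier (the claim name follows the usual `<template name>-<pod name>` convention, so it should be `postgres-postgres-0` here):

```bash
# The claim generated from the volumeClaimTemplate should be Bound
kubectl get pvc postgres-postgres-0

# ... and a matching Persistent Volume should have been created by Portworx
kubectl get pv
```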
.debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Inserting data in PostgreSQL - We will create a database and populate it with `pgbench` .exercise[ - Create a database named `demo`: ```bash createdb demo ``` - Populate it with `pgbench`: ```bash pgbench -i -s 10 demo ``` ] - The `-i` flag means "create tables" - The `-s 10` flag means "create 10 x 100,000 rows" .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Checking how much data we have now - The `pgbench` tool inserts rows in the table `pgbench_accounts` .exercise[ - Check that the `demo` database exists: ```bash psql -l ``` - Check how many rows we have in `pgbench_accounts`: ```bash psql demo -c "select count(*) from pgbench_accounts" ``` ] (We should see a count of 1,000,000 rows.) .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Find out which node is hosting the database - We can find that information with `kubectl get pods -o wide` .exercise[ - Check the node running the database: ```bash kubectl get pod postgres-0 -o wide ``` ] We are going to disrupt that node. -- By "disrupt" we mean: "disconnect it from the network". .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Disconnect the node - We will use `iptables` to block all traffic exiting the node (except SSH traffic, so we can repair the node later if needed) .exercise[ - SSH to the node to disrupt: ```bash ssh `nodeX` ``` - Allow SSH traffic leaving the node, but block all other traffic: ```bash sudo iptables -I OUTPUT -p tcp --sport 22 -j ACCEPT sudo iptables -I OUTPUT 2 -j DROP ``` ] .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Check that the node is disconnected .exercise[ - Check that the node can't communicate with other nodes: ``` ping node1 ``` - Log out to go back to `node1` - Watch the events unfolding with `kubectl get events -w` and `kubectl get pods -w` ] - It will take some time for Kubernetes to mark the node as unhealthy - Then it will attempt to reschedule the pod to another node - In about a minute, our pod should be up and running again .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Check that our data is still available - We are going to reconnect to the (new) pod and check .exercise[ - Get a shell on the pod: ```bash kubectl exec -ti postgres-0 su postgres ``` - Check the number of rows in the `pgbench_accounts` table: ```bash psql demo -c "select count(*) from pgbench_accounts" ``` ] .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Double-check that the pod has really moved - Just to make sure the system is not bluffing!
.exercise[ - Look at which node the pod is now running on ```bash kubectl get pod postgres-0 -o wide ``` ] .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Re-enable the node - Let's fix the node that we disconnected from the network .exercise[ - SSH to the node: ```bash ssh `nodeX` ``` - Remove the iptables rule blocking traffic: ```bash sudo iptables -D OUTPUT 2 ``` ] .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- class: extra-details ## A few words about this PostgreSQL setup - In a real deployment, you would want to set a password - This can be done by creating a `secret`: ``` kubectl create secret generic postgres \ --from-literal=password=$(base64 /dev/urandom | head -c16) ``` - And then passing that secret to the container: ```yaml env: - name: POSTGRES_PASSWORD valueFrom: secretKeyRef: name: postgres key: password ``` .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- class: extra-details ## Troubleshooting Portworx - If we need to see what's going on with Portworx: ``` PXPOD=$(kubectl -n kube-system get pod -l name=portworx -o json | jq -r .items[0].metadata.name) kubectl -n kube-system exec $PXPOD -- /opt/pwx/bin/pxctl status ``` - We can also connect to Lighthouse (a web UI) - check the port with `kubectl -n kube-system get svc px-lighthouse` - connect to that port - the default login/password is `admin/Password1` - then specify `portworx-service` as the endpoint .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- class: extra-details ## Removing Portworx - Portworx provides a storage driver - It needs to place itself "above" the Kubelet (it installs itself straight on the nodes) - To remove it, we need to do more than just deleting its Kubernetes resources - It is done by applying a special label: ``` kubectl label nodes --all px/enabled=remove --overwrite ``` - Then removing a bunch of local files: ``` sudo chattr -i /etc/pwx/.private.json sudo rm -rf /etc/pwx /opt/pwx ``` (on each node where Portworx was running) .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- class: extra-details ## Dynamic provisioning without a provider - What if we want to use Stateful sets without a storage provider? 
- We will have to create volumes manually (by creating Persistent Volume objects) - These volumes will be automatically bound with matching Persistent Volume Claims - We can use local volumes (essentially bind mounts of host directories) - Of course, these volumes won't be available in case of node failure - Check [this blog post](https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/) for more information and gotchas .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- ## Acknowledgements The Portworx installation tutorial, and the PostgreSQL example, were inspired by [Portworx examples on Katacoda](https://katacoda.com/portworx/scenarios/), in particular: - [installing Portworx on Kubernetes](https://www.katacoda.com/portworx/scenarios/deploy-px-k8s) (with adaptations to use a loop device and an embedded key/value store) - [persistent volumes on Kubernetes using Portworx](https://www.katacoda.com/portworx/scenarios/px-k8s-vol-basic) (with adaptations to specify a default Storage Class) - [HA PostgreSQL on Kubernetes with Portworx](https://www.katacoda.com/portworx/scenarios/px-k8s-postgres-all-in-one) (with adaptations to use a Stateful Set and simplify PostgreSQL's setup) .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/portworx.md)] --- class: pic .interstitial[] --- name: toc-next-steps class: title Next steps .nav[ [Previous section](#toc-highly-available-persistent-volumes) | [Back to table of contents](#toc-chapter-13) | [Next section](#toc-links-and-resources) ] .debug[(automatically generated title slide)] --- # Next steps *Alright, how do I get started and containerize my apps?* -- Suggested containerization checklist: .checklist[ - write a Dockerfile for one service in one app - write Dockerfiles for the other (buildable) services - write a Compose file for that whole app - make sure that devs are empowered to run the app in containers - set up automated builds of container images from the code repo - set up a CI pipeline using these container images - set up a CD pipeline (for staging/QA) using these images ] And *then* it is time to look at orchestration! .debug[[k8s/whatsnext.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/whatsnext.md)] --- ## Options for our first production cluster - Get a managed cluster from a major cloud provider (AKS, EKS, GKE...) (price: $, difficulty: medium) - Hire someone to deploy it for us (price: $$, difficulty: easy) - Do it ourselves (price: $-$$$, difficulty: hard) .debug[[k8s/whatsnext.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/whatsnext.md)] --- ## One big cluster vs. multiple small ones - Yes, it is possible to have prod+dev in a single cluster (and implement good isolation and security with RBAC, network policies...) - But it is not a good idea to do that for our first deployment - Start with a production cluster + at least a test cluster - Implement and check RBAC and isolation on the test cluster (e.g. deploy multiple test versions side-by-side) - Make sure that all our devs have usable dev clusters (whether it's a local minikube or a full-blown multi-node cluster) .debug[[k8s/whatsnext.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/whatsnext.md)] --- ## Namespaces - Namespaces let you run multiple identical stacks side by side - Two namespaces (e.g.
`blue` and `green`) can each have their own `redis` service - Each of the two `redis` services has its own `ClusterIP` - CoreDNS creates two entries, mapping to these two `ClusterIP` addresses: `redis.blue.svc.cluster.local` and `redis.green.svc.cluster.local` - Pods in the `blue` namespace get a *search suffix* of `blue.svc.cluster.local` - As a result, resolving `redis` from a pod in the `blue` namespace yields the "local" `redis` .warning[This does not provide *isolation*! That would be the job of network policies.] .debug[[k8s/whatsnext.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/whatsnext.md)] --- ## Relevant sections - [Namespaces](kube-selfpaced.yml.html#toc-namespaces) - [Network Policies](kube-selfpaced.yml.html#toc-network-policies) - [Role-Based Access Control](kube-selfpaced.yml.html#toc-authentication-and-authorization) (covers permissions model, user and service accounts management ...) .debug[[k8s/whatsnext.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/whatsnext.md)] --- ## Stateful services (databases etc.) - As a first step, it is wiser to keep stateful services *outside* of the cluster - Exposing them to pods can be done with multiple solutions: - `ExternalName` services (`redis.blue.svc.cluster.local` will be a `CNAME` record) - `ClusterIP` services with explicit `Endpoints` (instead of letting Kubernetes generate the endpoints from a selector) - Ambassador services (application-level proxies that can provide credentials injection and more) .debug[[k8s/whatsnext.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/whatsnext.md)] --- ## Stateful services (second take) - If we want to host stateful services on Kubernetes, we can use: - a storage provider - persistent volumes, persistent volume claims - stateful sets - Good questions to ask: - what's the *operational cost* of running this service ourselves? - what do we gain by deploying this stateful service on Kubernetes? - Relevant sections: [Volumes](kube-selfpaced.yml.html#toc-volumes) | [Stateful Sets](kube-selfpaced.yml.html#toc-stateful-sets) | [Persistent Volumes](kube-selfpaced.yml.html#toc-highly-available-persistent-volumes) - Excellent [blog post](http://www.databasesoup.com/2018/07/should-i-run-postgres-on-kubernetes.html) tackling the question: "Should I run Postgres on Kubernetes?" .debug[[k8s/whatsnext.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/whatsnext.md)] --- ## Managing stack deployments - The best deployment tool will vary, depending on: - the size and complexity of your stack(s) - how often you change it (i.e. add/remove components) - the size and skills of your team - A few examples: - shell scripts invoking `kubectl` - YAML resource descriptions committed to a repo - [Helm](https://github.com/kubernetes/helm) (~package manager) - [Spinnaker](https://www.spinnaker.io/) (Netflix' CD platform) - [Brigade](https://brigade.sh/) (event-driven scripting; no YAML) .debug[[k8s/whatsnext.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/whatsnext.md)] --- ## Cluster federation -- -- Sorry Star Trek fans, this is not the federation you're looking for! -- (If I add "Your cluster is in another federation" I might get a 3rd fandom wincing!)
.debug[[k8s/whatsnext.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/whatsnext.md)] --- ## Cluster federation - Kubernetes master operation relies on etcd - etcd uses the [Raft](https://raft.github.io/) protocol - Raft recommends low latency between nodes - What if our cluster spreads to multiple regions? -- - Break it down into local clusters - Regroup them in a *cluster federation* - Synchronize resources across clusters - Discover resources across clusters .debug[[k8s/whatsnext.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/whatsnext.md)] --- ## Developer experience *We've put this last, but it's pretty important!* - How do you on-board a new developer? - What do they need to install to get a dev stack? - How does a code change make it from dev to prod? - How does someone add a component to a stack? .debug[[k8s/whatsnext.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/whatsnext.md)] --- class: pic .interstitial[] --- name: toc-links-and-resources class: title Links and resources .nav[ [Previous section](#toc-next-steps) | [Back to table of contents](#toc-chapter-13) | [Next section](#toc-extra-material) ] .debug[(automatically generated title slide)] --- # Links and resources All things Kubernetes: - [Kubernetes Community](https://kubernetes.io/community/) - Slack, Google Groups, meetups - [Kubernetes on StackOverflow](https://stackoverflow.com/questions/tagged/kubernetes) - [Play With Kubernetes Hands-On Labs](https://medium.com/@marcosnils/introducing-pwk-play-with-k8s-159fcfeb787b) All things Docker: - [Docker documentation](http://docs.docker.com/) - [Docker Hub](https://hub.docker.com) - [Docker on StackOverflow](https://stackoverflow.com/questions/tagged/docker) - [Play With Docker Hands-On Labs](http://training.play-with-docker.com/) Everything else: - [Local meetups](https://www.meetup.com/) .footnote[These slides (and future updates) are on → http://container.training/] .debug[[k8s/links.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/links.md)] --- class: title, self-paced Thank you! .debug[[shared/thankyou.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/thankyou.md)] --- class: title, in-person That's all, folks! Questions?
.debug[[shared/thankyou.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/shared/thankyou.md)] --- class: pic .interstitial[] --- name: toc-extra-material class: title (Extra material) .nav[ [Previous section](#toc-links-and-resources) | [Back to table of contents](#toc-chapter-14) | [Next section](#toc-controlling-a-kubernetes-cluster-remotely) ] .debug[(automatically generated title slide)] --- # (Extra material) .debug[[three.yml](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/three.yml)] --- class: pic .interstitial[] --- name: toc-controlling-a-kubernetes-cluster-remotely class: title Controlling a Kubernetes cluster remotely .nav[ [Previous section](#toc-extra-material) | [Back to table of contents](#toc-chapter-14) | [Next section](#toc-accessing-internal-services) ] .debug[(automatically generated title slide)] --- # Controlling a Kubernetes cluster remotely - `kubectl` can be used either on cluster instances or outside the cluster - Here, we are going to use `kubectl` from our local machine .debug[[k8s/localkubeconfig.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/localkubeconfig.md)] --- ## Requirements .warning[The exercises in this chapter should be done *on your local machine*.] - `kubectl` is officially available on Linux, macOS, Windows (and unofficially anywhere we can build and run Go binaries) - You may skip these exercises if you are following along from: - a tablet or phone - a web-based terminal - an environment where you can't install and run new binaries .debug[[k8s/localkubeconfig.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/localkubeconfig.md)] --- ## Installing `kubectl` - If you already have `kubectl` on your local machine, you can skip this .exercise[ - Download the `kubectl` binary from one of these links: [Linux](https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/linux/amd64/kubectl) | [macOS](https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/darwin/amd64/kubectl) | [Windows](https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/windows/amd64/kubectl.exe) - On Linux and macOS, make the binary executable with `chmod +x kubectl` (And remember to run it with `./kubectl` or move it to your `$PATH`) ] Note: if you are following along with a different platform (e.g. Linux on an architecture different from amd64, or with a phone or tablet), installing `kubectl` might be more complicated (or even impossible) so feel free to skip this section. .debug[[k8s/localkubeconfig.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/localkubeconfig.md)] --- ## Testing `kubectl` - Check that `kubectl` works correctly (before even trying to connect to a remote cluster!) .exercise[ - Ask `kubectl` to show its version number: ```bash kubectl version --client ``` ] The output should look like this: ``` Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"} ``` .debug[[k8s/localkubeconfig.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/localkubeconfig.md)] --- ## Preserving the existing `~/.kube/config` - If you already have a `~/.kube/config` file, rename it (we are going to overwrite it in the following slides!) 
- If you never used `kubectl` on your machine before: nothing to do! .exercise[ - Make a copy of `~/.kube/config`; if you are using macOS or Linux, you can do: ```bash cp ~/.kube/config ~/.kube/config.before.training ``` - If you are using Windows, you will need to adapt this command ] .debug[[k8s/localkubeconfig.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/localkubeconfig.md)] --- ## Copying the configuration file from `node1` - The `~/.kube/config` file that is on `node1` contains all the credentials we need - Let's copy it over! .exercise[ - Copy the file from `node1`; if you are using macOS or Linux, you can do: ``` scp `USER`@`X.X.X.X`:.kube/config ~/.kube/config # Make sure to replace X.X.X.X with the IP address of node1, # and USER with the user name used to log into node1! ``` - If you are using Windows, adapt these instructions to your SSH client ] .debug[[k8s/localkubeconfig.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/localkubeconfig.md)] --- ## Updating the server address - There is a good chance that we need to update the server address - To know if it is necessary, run `kubectl config view` - Look for the `server:` address: - if it matches the public IP address of `node1`, you're good! - if it is anything else (especially a private IP address), update it! - To update the server address, run: ```bash kubectl config set-cluster kubernetes --server=https://`X.X.X.X`:6443 # Make sure to replace X.X.X.X with the IP address of node1! ``` .debug[[k8s/localkubeconfig.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/localkubeconfig.md)] --- class: extra-details ## What if we get a certificate error? - Generally, the Kubernetes API uses a certificate that is valid for: - `kubernetes` - `kubernetes.default` - `kubernetes.default.svc` - `kubernetes.default.svc.cluster.local` - the ClusterIP address of the `kubernetes` service - the hostname of the node hosting the control plane (e.g. `node1`) - the IP address of the node hosting the control plane - On most clouds, the IP address of the node is an internal IP address - ... And we are going to connect over the external IP address - ... And that external IP address was not used when creating the certificate! .debug[[k8s/localkubeconfig.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/localkubeconfig.md)] --- class: extra-details ## Working around the certificate error - We need to tell `kubectl` to skip TLS verification (only do this with testing clusters, never in production!) - The following command will do the trick: ```bash kubectl config set-cluster kubernetes --insecure-skip-tls-verify ``` .debug[[k8s/localkubeconfig.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/localkubeconfig.md)] --- ## Checking that we can connect to the cluster - We can now run a couple of trivial commands to check that all is well .exercise[ - Check the versions of the local client and remote server: ```bash kubectl version ``` - View the nodes of the cluster: ```bash kubectl get nodes ``` ] We can now utilize the cluster exactly as if we're logged into a node, except that it's remote. 
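For reference, here is roughly what the `clusters` section of the copied `~/.kube/config` might look like after these changes (the addresses below are placeholders, not the actual file):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    # Public address of node1, as set with "kubectl config set-cluster"
    server: https://X.X.X.X:6443
    # Only present if we disabled TLS verification (test clusters only!)
    insecure-skip-tls-verify: true
```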
.debug[[k8s/localkubeconfig.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/localkubeconfig.md)] --- class: pic .interstitial[] --- name: toc-accessing-internal-services class: title Accessing internal services .nav[ [Previous section](#toc-controlling-a-kubernetes-cluster-remotely) | [Back to table of contents](#toc-chapter-14) | [Next section](#toc-kustomize) ] .debug[(automatically generated title slide)] --- # Accessing internal services - When we are logged in on a cluster node, we can access internal services (by virtue of the Kubernetes network model: all nodes can reach all pods and services) - When we are accessing a remote cluster, things are different (generally, our local machine won't have access to the cluster's internal subnet) - How can we temporarily access a service without exposing it to everyone? -- - `kubectl proxy`: gives us access to the API, which includes a proxy for HTTP resources - `kubectl port-forward`: allows forwarding of TCP ports to arbitrary pods, services, ... .debug[[k8s/accessinternal.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/accessinternal.md)] --- ## Suspension of disbelief The exercises in this section assume that we have set up `kubectl` on our local machine in order to access a remote cluster. We will therefore show how to access services and pods of the remote cluster, from our local machine. You can also run these exercises directly on the cluster (if you haven't installed and set up `kubectl` locally). Running commands locally will be less useful (since you could access services and pods directly), but keep in mind that these commands will work anywhere as long as you have installed and set up `kubectl` to communicate with your cluster. .debug[[k8s/accessinternal.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/accessinternal.md)] --- ## `kubectl proxy` in theory - Running `kubectl proxy` gives us access to the entire Kubernetes API - The API includes routes to proxy HTTP traffic - These routes look like the following: `/api/v1/namespaces/<namespace>/services/<service>/proxy` - We just add the URI to the end of the request, for instance: `/api/v1/namespaces/<namespace>/services/<service>/proxy/index.html` - We can access `services` and `pods` this way .debug[[k8s/accessinternal.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/accessinternal.md)] --- ## `kubectl proxy` in practice - Let's access the `webui` service through `kubectl proxy` .exercise[ - Run an API proxy in the background: ```bash kubectl proxy & ``` - Access the `webui` service: ```bash curl localhost:8001/api/v1/namespaces/default/services/webui/proxy/index.html ``` - Terminate the proxy: ```bash kill %1 ``` ] .debug[[k8s/accessinternal.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/accessinternal.md)] --- ## `kubectl port-forward` in theory - What if we want to access a TCP service? - We can use `kubectl port-forward` instead - It will create a TCP relay to forward connections to a specific port (of a pod, service, deployment...)
- The syntax is: `kubectl port-forward service/name_of_service local_port:remote_port` - If only one port number is specified, it is used for both local and remote ports .debug[[k8s/accessinternal.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/accessinternal.md)] --- ## `kubectl port-forward` in practice - Let's access our remote Redis server .exercise[ - Forward connections from local port 10000 to remote port 6379: ```bash kubectl port-forward svc/redis 10000:6379 & ``` - Connect to the Redis server: ```bash telnet localhost 10000 ``` - Issue a few commands, e.g. `INFO server` then `QUIT` - Terminate the port forwarder: ```bash kill %1 ``` ] .debug[[k8s/accessinternal.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/accessinternal.md)] --- class: pic .interstitial[] --- name: toc-kustomize class: title Kustomize .nav[ [Previous section](#toc-accessing-internal-services) | [Back to table of contents](#toc-chapter-14) | [Next section](#toc-managing-stacks-with-helm) ] .debug[(automatically generated title slide)] --- # Kustomize - Kustomize lets us transform YAML files representing Kubernetes resources - The original YAML files are valid resource files (e.g. they can be loaded with `kubectl apply -f`) - They are left untouched by Kustomize - Kustomize lets us define *overlays* that extend or change the resource files .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kustomize.md)] --- ## Differences with Helm - Helm charts use placeholders `{{ like.this }}` - Kustomize "bases" are standard Kubernetes YAML - It is possible to use an existing set of YAML as a Kustomize base - As a result, writing a Helm chart is more work ... - ... But Helm charts are also more powerful; e.g. 
they can: - use flags to conditionally include resources or blocks - check if a given Kubernetes API group is supported - [and much more](https://helm.sh/docs/chart_template_guide/) .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kustomize.md)] --- ## Kustomize concepts - Kustomize needs a `kustomization.yaml` file - That file can be a *base* or a *variant* - If it's a *base*: - it lists YAML resource files to use - If it's a *variant* (or *overlay*): - it refers to (at least) one *base* - and some *patches* .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kustomize.md)] --- ## An easy way to get started with Kustomize - We are going to use [Replicated Ship](https://www.replicated.com/ship/) to experiment with Kustomize - The [Replicated Ship CLI](https://github.com/replicatedhq/ship/releases) has been installed on our clusters - Replicated Ship has multiple workflows; here is what we will do: - initialize a Kustomize overlay from a remote GitHub repository - customize some values using the web UI provided by Ship - look at the resulting files and apply them to the cluster .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kustomize.md)] --- ## Getting started with Ship - We need to run `ship init` in a new directory - `ship init` requires a URL to a remote repository containing Kubernetes YAML - It will clone that repository and start a web UI - Later, it can watch that repository and/or update from it - We will use the [jpetazzo/kubercoins](https://github.com/jpetazzo/kubercoins) repository (it contains all the DockerCoins resources as YAML files) .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kustomize.md)] --- ## `ship init` .exercise[ - Change to a new directory: ```bash mkdir ~/kustomcoins cd ~/kustomcoins ``` - Run `ship init` with the kustomcoins repository: ```bash ship init https://github.com/jpetazzo/kubercoins ``` ] .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kustomize.md)] --- ## Access the web UI - `ship init` tells us to connect on `localhost:8800` - We need to replace `localhost` with the address of our node (since we run on a remote machine) - Follow the steps in the web UI, and change one parameter (e.g. set the number of replicas in the worker Deployment) - Complete the web workflow, and go back to the CLI .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kustomize.md)] --- ## Inspect the results - Look at the content of our directory - `base` contains the kubercoins repository + a `kustomization.yaml` file - `overlays/ship` contains the Kustomize overlay referencing the base + our patch(es) - `rendered.yaml` is a YAML bundle containing the patched application - `.ship` contains a state file used by Ship .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kustomize.md)] --- ## Using the results - We can `kubectl apply -f rendered.yaml` (on any version of Kubernetes) - Starting with Kubernetes 1.14, we can apply the overlay directly with: ```bash kubectl apply -k overlays/ship ``` - But let's not do that for now! 
- We will create a new copy of DockerCoins in another namespace .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kustomize.md)] --- ## Deploy DockerCoins with Kustomize .exercise[ - Create a new namespace: ```bash kubectl create namespace kustomcoins ``` - Deploy DockerCoins: ```bash kubectl apply -f rendered.yaml --namespace=kustomcoins ``` - Or, with Kubernetes 1.14, you can also do this: ```bash kubectl apply -k overlays/ship --namespace=kustomcoins ``` ] .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kustomize.md)] --- ## Checking our new copy of DockerCoins - We can check the worker logs, or the web UI .exercise[ - Retrieve the NodePort number of the web UI: ```bash kubectl get service webui --namespace=kustomcoins ``` - Open it in a web browser - Look at the worker logs: ```bash kubectl logs deploy/worker --tail=10 --follow --namespace=kustomcoins ``` ] Note: it might take a minute or two for the worker to start. .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/kustomize.md)] --- class: pic .interstitial[] --- name: toc-managing-stacks-with-helm class: title Managing stacks with Helm .nav[ [Previous section](#toc-kustomize) | [Back to table of contents](#toc-chapter-14) | [Next section](#toc-creating-helm-charts) ] .debug[(automatically generated title slide)] --- # Managing stacks with Helm - We created our first resources with `kubectl run`, `kubectl expose` ... - We have also created resources by loading YAML files with `kubectl apply -f` - For larger stacks, managing thousands of lines of YAML is unreasonable - These YAML bundles need to be customized with variable parameters (E.g.: number of replicas, image version to use ...) - It would be nice to have an organized, versioned collection of bundles - It would be nice to be able to upgrade/rollback these bundles carefully - [Helm](https://helm.sh/) is an open source project offering all these things! .debug[[k8s/helm.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/helm.md)] --- ## Helm concepts - `helm` is a CLI tool - `tiller` is its companion server-side component - A "chart" is an archive containing templatized YAML bundles - Charts are versioned - Charts can be stored on private or public repositories .debug[[k8s/helm.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/helm.md)] --- ## Installing Helm - If the `helm` CLI is not installed in your environment, install it .exercise[ - Check if `helm` is installed: ```bash helm ``` - If it's not installed, run the following command: ```bash curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash ``` ] .debug[[k8s/helm.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/helm.md)] --- ## Installing Tiller - Tiller is composed of a *service* and a *deployment* in the `kube-system` namespace - They can be managed (installed, upgraded...) with the `helm` CLI .exercise[ - Deploy Tiller: ```bash helm init ``` ] If Tiller was already installed, don't worry: this won't break it. At the end of the install process, you will see: ``` Happy Helming! 
``` .debug[[k8s/helm.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/helm.md)] --- ## Fix account permissions - Helm permission model requires us to tweak permissions - In a more realistic deployment, you might create per-user or per-team service accounts, roles, and role bindings .exercise[ - Grant `cluster-admin` role to `kube-system:default` service account: ```bash kubectl create clusterrolebinding add-on-cluster-admin \ --clusterrole=cluster-admin --serviceaccount=kube-system:default ``` ] (Defining the exact roles and permissions on your cluster requires a deeper knowledge of Kubernetes' RBAC model. The command above is fine for personal and development clusters.) .debug[[k8s/helm.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/helm.md)] --- ## View available charts - A public repo is pre-configured when installing Helm - We can view available charts with `helm search` (and an optional keyword) .exercise[ - View all available charts: ```bash helm search ``` - View charts related to `prometheus`: ```bash helm search prometheus ``` ] .debug[[k8s/helm.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/helm.md)] --- ## Install a chart - Most charts use `LoadBalancer` service types by default - Most charts require persistent volumes to store data - We need to relax these requirements a bit .exercise[ - Install the Prometheus metrics collector on our cluster: ```bash helm install stable/prometheus \ --set server.service.type=NodePort \ --set server.persistentVolume.enabled=false ``` ] Where do these `--set` options come from? .debug[[k8s/helm.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/helm.md)] --- ## Inspecting a chart - `helm inspect` shows details about a chart (including available options) .exercise[ - See the metadata and all available options for `stable/prometheus`: ```bash helm inspect stable/prometheus ``` ] The chart's metadata includes a URL to the project's home page. (Sometimes it conveniently points to the documentation for the chart.) .debug[[k8s/helm.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/helm.md)] --- ## Viewing installed charts - Helm keeps track of what we've installed .exercise[ - List installed Helm charts: ```bash helm list ``` ] .debug[[k8s/helm.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/helm.md)] --- ## Creating a chart - We are going to show a way to create a *very simplified* chart - In a real chart, *lots of things* would be templatized (Resource names, service types, number of replicas...) 
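For the curious, here is a rough sketch of what that templatization can look like in a chart's `templates/` directory (the value names below are illustrative assumptions, not something our simplified chart will use):

```yaml
# Hypothetical excerpt of a templates/deployment.yaml in a "real" chart:
# the resource name, replica count, and image all come from values.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Each value gets a default in `values.yaml`, and can be overridden at install time with `--set` or `--values`.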
.exercise[ - Create a sample chart: ```bash helm create dockercoins ``` - Move away the sample templates and create an empty template directory: ```bash mv dockercoins/templates dockercoins/default-templates mkdir dockercoins/templates ``` ] .debug[[k8s/create-chart.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-chart.md)] --- ## Exporting the YAML for our application - The following section assumes that DockerCoins is currently running .exercise[ - Create one YAML file for each resource that we need: .small[ ```bash while read kind name; do kubectl get -o yaml $kind $name > dockercoins/templates/$name-$kind.yaml done < `Error: release loitering-otter failed: services "hasher" already exists` - To avoid naming conflicts, we will deploy the application in another *namespace* .debug[[k8s/create-chart.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-chart.md)] --- ## Switching to another namespace - We can create a new namespace and switch to it (Helm will automatically use the namespace specified in our context) - We can also tell Helm which namespace to use .exercise[ - Tell Helm to use a specific namespace: ```bash helm install dockercoins --namespace=magenta ``` ] .debug[[k8s/create-chart.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-chart.md)] --- ## Checking our new copy of DockerCoins - We can check the worker logs, or the web UI .exercise[ - Retrieve the NodePort number of the web UI: ```bash kubectl get service webui --namespace=magenta ``` - Open it in a web browser - Look at the worker logs: ```bash kubectl logs deploy/worker --tail=10 --follow --namespace=magenta ``` ] Note: it might take a minute or two for the worker to start. .debug[[k8s/create-chart.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-chart.md)] --- class: pic .interstitial[] --- name: toc-creating-helm-charts class: title Creating Helm charts .nav[ [Previous section](#toc-managing-stacks-with-helm) | [Back to table of contents](#toc-chapter-14) | [Next section](#toc-authentication-and-authorization) ] .debug[(automatically generated title slide)] --- # Creating Helm charts - We are going to create a generic Helm chart - We will use that Helm chart to deploy DockerCoins - Each component of DockerCoins will have its own *release* - In other words, we will "install" that Helm chart multiple times (one time per component of DockerCoins) .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Creating a generic chart - Rather than starting from scratch, we will use `helm create` - This will give us a basic chart that we will customize .exercise[ - Create a basic chart: ```bash cd ~ helm create helmcoins ``` ] This creates a basic chart in the directory `helmcoins`. .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## What's in the basic chart? - The basic chart will create a Deployment and a Service - Optionally, it will also include an Ingress - If we don't pass any values, it will deploy the `nginx` image - We can override many things in that chart - Let's try to deploy DockerCoins components with that chart! 
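Before writing any values files, we can also ask Helm to render the chart locally and inspect the YAML it would generate; a quick sanity check (this sketch assumes we are still in the home directory where `helm create helmcoins` put the chart):

```bash
# Render the chart with its default values; nothing is sent to the cluster
helm template helmcoins/ | less
```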
.debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Writing `values.yaml` for our components - We need to write one `values.yaml` file for each component (hasher, redis, rng, webui, worker) - We will start with the `values.yaml` of the chart, and remove what we don't need - We will create 5 files: hasher.yaml, redis.yaml, rng.yaml, webui.yaml, worker.yaml .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Getting started - For component X, we want to use the image dockercoins/X:v0.1 (for instance, for rng, we want to use the image dockercoins/rng:v0.1) - Exception: for redis, we want to use the official image redis:latest .exercise[ - Write minimal YAML files for the 5 components, specifying only the image ] -- *Hint: our YAML files should look like this.* ```yaml ### rng.yaml image: repository: dockercoins/`rng` tag: v0.1 ``` .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Deploying DockerCoins components - For convenience, let's work in a separate namespace .exercise[ - Create a new namespace: ```bash kubectl create namespace helmcoins ``` - Switch to that namespace: ```bash kns helmcoins ``` ] .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Deploying the chart - To install a chart, we can use the following command: ```bash helm install [--name `X`] ``` - We can also use the following command, which is idempotent: ```bash helm upgrade --install `X` chart ``` .exercise[ - Install the 5 components of DockerCoins: ```bash for COMPONENT in hasher redis rng webui worker; do helm upgrade --install $COMPONENT helmcoins/ --values=$COMPONENT.yaml done ``` ] .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Checking what we've done - Let's see if DockerCoins is working! .exercise[ - Check the logs of the worker: ```bash stern worker ``` - Look at the resources that were created: ```bash kubectl get all ``` ] There are *many* issues to fix! .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Service names - Our services should be named `rng`, `hasher`, etc., but they are named differently - Look at the YAML template used for the services - Does it look like we can override the name of the services? -- - *Yes*, we can use `.Values.nameOverride` - This means setting `nameOverride` in the values YAML file .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Setting service names - Let's add `nameOverride: X` in each values YAML file! (where X is hasher, redis, rng, etc.) .exercise[ - Edit the 5 YAML files to add `nameOverride: X` - Deploy the updated Chart: ```bash for COMPONENT in hasher redis rng webui worker; do helm upgrade --install $COMPONENT helmcoins/ --values=$COMPONENT.yaml done ``` (Yes, this is exactly the same command as before!) 
] .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Checking what we've done .exercise[ - Check the service names: ```bash kubectl get services ``` Great! (We have a useless service for `worker`, but let's ignore it for now.) - Check the state of the pods: ```bash kubectl get pods ``` Not so great... Some pods are *not ready.* ] .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Troubleshooting pods - The easiest way to troubleshoot pods is to look at *events* - We can look at all the events on the cluster (with `kubectl get events`) - Or we can use `kubectl describe` on the objects that have problems (`kubectl describe` will retrieve the events related to the object) .exercise[ - Check the events for the redis pods: ```bash kubectl describe pod -l app.kubernetes.io/name=redis ``` ] What's going on? .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Healthchecks - The default chart defines healthchecks doing HTTP requests on port 80 - That won't work for redis and worker (redis is not HTTP, and not on port 80; worker doesn't even listen) -- - We could comment out the healthchecks - We could also make them conditional - This sounds more interesting, let's do that! .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Conditionals - We need to enclose the healthcheck block with: `{{ if CONDITION }}` at the beginning `{{ end }}` at the end - For the condition, we will use `.Values.healthcheck` .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Updating the deployment template .exercise[ - Edit `helmcoins/templates/deployment.yaml` - Before the healthchecks section (it starts with `livenessProbe:`), add: `{{ if .Values.healthcheck }}` - After the healthchecks section (just before `resources:`), add: `{{ end }}` - Edit `hasher.yaml`, `rng.yaml`, `webui.yaml` to add: `healthcheck: true` ] .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Update the deployed charts - We can now apply the new templates (and the new values) .exercise[ - Use the same command as earlier to upgrade all five components - Use `kubectl describe` to confirm that `redis` starts correctly - Use `kubectl describe` to confirm that `hasher` still has healthchecks ] .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Is it working now? - If we look at the worker logs, it appears that the worker is still stuck - What could be happening? -- - The redis service is not on port 80! 
- We need to update the port number in redis.yaml - We also need to update the port number in deployment.yaml (it is hard-coded to 80 there) .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Setting the redis port .exercise[ - Edit `redis.yaml` to add: ```yaml service: port: 6379 ``` - Edit `helmcoins/templates/deployment.yaml` - The line with `containerPort` should be: ```yaml containerPort: {{ .Values.service.port }} ``` ] .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Apply changes - Re-run the for loop to execute `helm upgrade` one more time - Check the worker logs - This time, it should be working! .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- ## Extra steps - We don't need to create a service for the worker - We can put the whole service block in a conditional (this will require additional changes in other files referencing the service) - We can set the webui to be a NodePort service - We can change the number of workers with `replicaCount` - And much more! .debug[[k8s/create-more-charts.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/create-more-charts.md)] --- class: pic .interstitial[] --- name: toc-authentication-and-authorization class: title Authentication and authorization .nav[ [Previous section](#toc-creating-helm-charts) | [Back to table of contents](#toc-chapter-14) | [Next section](#toc-network-policies) ] .debug[(automatically generated title slide)] --- # Authentication and authorization *And first, a little refresher!* - Authentication = verifying the identity of a person On a UNIX system, we can authenticate with login+password, SSH keys ... - Authorization = listing what they are allowed to do On a UNIX system, this can include file permissions, sudoer entries ... - Sometimes abbreviated as "authn" and "authz" - In good modular systems, these things are decoupled (so we can e.g. change a password or SSH key without having to reset access rights) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Authentication in Kubernetes - When the API server receives a request, it tries to authenticate it (it examines headers, certificates... 
anything available) - Many authentication methods are available and can be used simultaneously (we will see them on the next slide) - It's the job of the authentication method to produce: - the user name - the user ID - a list of groups - The API server doesn't interpret these; that'll be the job of *authorizers* .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Authentication methods - TLS client certificates (that's what we've been doing with `kubectl` so far) - Bearer tokens (a secret token in the HTTP headers of the request) - [HTTP basic auth](https://en.wikipedia.org/wiki/Basic_access_authentication) (carrying user and password in an HTTP header) - Authentication proxy (sitting in front of the API and setting trusted headers) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Anonymous requests - If any authentication method *rejects* a request, it's denied (`401 Unauthorized` HTTP code) - If a request is neither rejected nor accepted by anyone, it's anonymous - the user name is `system:anonymous` - the list of groups is `[system:unauthenticated]` - By default, the anonymous user can't do anything (that's what you get if you just `curl` the Kubernetes API) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Authentication with TLS certificates - This is enabled in most Kubernetes deployments - The user name is derived from the `CN` in the client certificates - The groups are derived from the `O` fields in the client certificate - From the point of view of the Kubernetes API, users do not exist (i.e. they are not stored in etcd or anywhere else) - Users can be created (and added to groups) independently of the API - The Kubernetes API can be set up to use your custom CA to validate client certs .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- class: extra-details ## Viewing our admin certificate - Let's inspect the certificate we've been using all this time! .exercise[ - This command will show the `CN` and `O` fields for our certificate: ```bash kubectl config view \ --raw \ -o json \ | jq -r .users[0].user[\"client-certificate-data\"] \ | openssl base64 -d -A \ | openssl x509 -text \ | grep Subject: ``` ] Let's break down that command together! .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- class: extra-details ## Breaking down the command - `kubectl config view` shows the Kubernetes user configuration - `--raw` includes certificate information (which shows as REDACTED otherwise) - `-o json` outputs the information in JSON format - `| jq ...` extracts the field with the user certificate (in base64) - `| openssl base64 -d -A` decodes the base64 format (now we have a PEM file) - `| openssl x509 -text` parses the certificate and outputs it as plain text - `| grep Subject:` shows us the line that interests us → We are user `kubernetes-admin`, in group `system:masters`. (We will see later how and why this gives us the permissions that we have.)
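For reference, the last line of that pipeline should look something like this (exact field order and spacing depend on the OpenSSL version):

```
Subject: O = system:masters, CN = kubernetes-admin
```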
.debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## User certificates in practice - The Kubernetes API server does not support certificate revocation (see issue [#18982](https://github.com/kubernetes/kubernetes/issues/18982)) - As a result, we don't have an easy way to terminate someone's access (if their key is compromised, or they leave the organization) - Option 1: re-create a new CA and re-issue everyone's certificates → Maybe OK if we only have a few users; no way otherwise - Option 2: don't use groups; grant permissions to individual users → Inconvenient if we have many users and teams; error-prone - Option 3: issue short-lived certificates (e.g. 24 hours) and renew them often → This can be facilitated by e.g. Vault or by the Kubernetes CSR API .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Authentication with tokens - Tokens are passed as HTTP headers: `Authorization: Bearer and-then-here-comes-the-token` - Tokens can be validated through a number of different methods: - static tokens hard-coded in a file on the API server - [bootstrap tokens](https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/) (special case to create a cluster or join nodes) - [OpenID Connect tokens](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens) (to delegate authentication to compatible OAuth2 providers) - service accounts (these deserve more details, coming right up!) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Service accounts - A service account is a user that exists in the Kubernetes API (it is visible with e.g. `kubectl get serviceaccounts`) - Service accounts can therefore be created / updated dynamically (they don't require hand-editing a file and restarting the API server) - A service account is associated with a set of secrets (the kind that you can view with `kubectl get secrets`) - Service accounts are generally used to grant permissions to applications, services... (as opposed to humans) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- class: extra-details ## Token authentication in practice - We are going to list existing service accounts - Then we will extract the token for a given service account - And we will use that token to authenticate with the API .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- class: extra-details ## Listing service accounts .exercise[ - The resource name is `serviceaccount` or `sa` for short: ```bash kubectl get sa ``` ] There should be just one service account in the default namespace: `default`. .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- class: extra-details ## Finding the secret .exercise[ - List the secrets for the `default` service account: ```bash kubectl get sa default -o yaml SECRET=$(kubectl get sa default -o json | jq -r .secrets[0].name) ``` ] It should be named `default-token-XXXXX`.
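For reference, the service account object looks roughly like this (abridged sketch; the actual secret name ends with a random suffix):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: default
secrets:
- name: default-token-xxxxx
```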
.debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- class: extra-details ## Extracting the token - The token is stored in the secret, wrapped with base64 encoding .exercise[ - View the secret: ```bash kubectl get secret $SECRET -o yaml ``` - Extract the token and decode it: ```bash TOKEN=$(kubectl get secret $SECRET -o json \ | jq -r .data.token | openssl base64 -d -A) ``` ] .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- class: extra-details ## Using the token - Let's send a request to the API, without and with the token .exercise[ - Find the ClusterIP for the `kubernetes` service: ```bash kubectl get svc kubernetes API=$(kubectl get svc kubernetes -o json | jq -r .spec.clusterIP) ``` - Connect without the token: ```bash curl -k https://$API ``` - Connect with the token: ```bash curl -k -H "Authorization: Bearer $TOKEN" https://$API ``` ] .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- class: extra-details ## Results - In both cases, we will get a "Forbidden" error - Without authentication, the user is `system:anonymous` - With authentication, it is shown as `system:serviceaccount:default:default` - The API "sees" us as a different user - But neither user has any rights, so we can't do nothin' - Let's change that! .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Authorization in Kubernetes - There are multiple ways to grant permissions in Kubernetes, called [authorizers](https://kubernetes.io/docs/reference/access-authn-authz/authorization/#authorization-modules): - [Node Authorization](https://kubernetes.io/docs/reference/access-authn-authz/node/) (used internally by kubelet; we can ignore it) - [Attribute-based access control](https://kubernetes.io/docs/reference/access-authn-authz/abac/) (powerful but complex and static; ignore it too) - [Webhook](https://kubernetes.io/docs/reference/access-authn-authz/webhook/) (each API request is submitted to an external service for approval) - [Role-based access control](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) (associates permissions to users dynamically) - The one we want is the last one, generally abbreviated as RBAC .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Role-based access control - RBAC allows us to specify fine-grained permissions - Permissions are expressed as *rules* - A rule is a combination of: - [verbs](https://kubernetes.io/docs/reference/access-authn-authz/authorization/#determine-the-request-verb) like create, get, list, update, delete... - resources (as in "API resource," like pods, nodes, services...) - resource names (to specify e.g. one specific pod instead of all pods) - in some cases, [subresources](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources) (e.g.
logs are subresources of pods) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## From rules to roles to rolebindings - A *role* is an API object containing a list of *rules* Example: role "external-load-balancer-configurator" can: - [list, get] resources [endpoints, services, pods] - [update] resources [services] - A *rolebinding* associates a role with a user Example: rolebinding "external-load-balancer-configurator": - associates user "external-load-balancer-configurator" - with role "external-load-balancer-configurator" - Yes, there can be users, roles, and rolebindings with the same name - It's a good idea for 1-1-1 bindings; not so much for 1-N ones .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Cluster-scope permissions - API resources Role and RoleBinding are for objects within a namespace - We can also define API resources ClusterRole and ClusterRoleBinding - These are a superset, allowing us to: - specify actions on cluster-wide objects (like nodes) - operate across all namespaces - We can create Role and RoleBinding resources within a namespace - ClusterRole and ClusterRoleBinding resources are global .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Pods and service accounts - A pod can be associated with a service account - by default, it is associated with the `default` service account - as we saw earlier, this service account has no permissions anyway - The associated token is exposed to the pod's filesystem (in `/var/run/secrets/kubernetes.io/serviceaccount/token`) - Standard Kubernetes tooling (like `kubectl`) will look for it there - So Kubernetes tools running in a pod will automatically use the service account .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## In practice - We are going to create a service account - We will use a default cluster role (`view`) - We will bind together this role and this service account - Then we will run a pod using that service account - In this pod, we will install `kubectl` and check our permissions .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Creating a service account - We will call the new service account `viewer` (note that nothing prevents us from calling it `view`, like the role) .exercise[ - Create the new service account: ```bash kubectl create serviceaccount viewer ``` - List service accounts now: ```bash kubectl get serviceaccounts ``` ] .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Binding a role to the service account - Binding a role = creating a *rolebinding* object - We will call that object `viewercanview` (but again, we could call it `view`) .exercise[ - Create the new role binding: ```bash kubectl create rolebinding viewercanview \ --clusterrole=view \ --serviceaccount=default:viewer ``` ] It's important to note a couple of details in these flags... .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Roles vs Cluster Roles - We used `--clusterrole=view` - What would have happened if we had used `--role=view`? 
- we would have bound the role `view` from the local namespace (instead of the cluster role `view`) - the command would have worked fine (no error) - but later, our API requests would have been denied - This is a deliberate design decision (we can reference roles that don't exist, and create/update them later) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Users vs Service Accounts - We used `--serviceaccount=default:viewer` - What would have happened if we had used `--user=default:viewer`? - we would have bound the role to a user instead of a service account - again, the command would have worked fine (no error) - ...but our API requests would have been denied later - What about the `default:` prefix? - that's the namespace of the service account - yes, it could be inferred from context, but... `kubectl` requires it .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Testing - We will run an `alpine` pod and install `kubectl` there .exercise[ - Run a one-time pod: ```bash kubectl run eyepod --rm -ti --restart=Never \ --serviceaccount=viewer \ --image alpine ``` - Install `curl`, then use it to install `kubectl`: ```bash apk add --no-cache curl URLBASE=https://storage.googleapis.com/kubernetes-release/release KUBEVER=$(curl -s $URLBASE/stable.txt) curl -LO $URLBASE/$KUBEVER/bin/linux/amd64/kubectl chmod +x kubectl ``` ] .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Running `kubectl` in the pod - We'll try to use our `view` permissions, then to create an object .exercise[ - Check that we can, indeed, view things: ```bash ./kubectl get all ``` - But that we can't create things: ``` ./kubectl create deployment testrbac --image=nginx ``` - Exit the container with `exit` or `^D` ] .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- ## Testing directly with `kubectl` - We can also check for permission with `kubectl auth can-i`: ```bash kubectl auth can-i list nodes kubectl auth can-i create pods kubectl auth can-i get pod/name-of-pod kubectl auth can-i get /url-fragment-of-api-request/ kubectl auth can-i '*' services ``` - And we can check permissions on behalf of other users: ```bash kubectl auth can-i list nodes \ --as some-user kubectl auth can-i list nodes \ --as system:serviceaccount:<namespace>:<name-of-serviceaccount> ``` .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- class: extra-details ## Where does this `view` role come from? - Kubernetes defines a number of ClusterRoles intended to be bound to users - `cluster-admin` can do *everything* (think `root` on UNIX) - `admin` can do *almost everything* (except e.g.
changing resource quotas and limits) - `edit` is similar to `admin`, but cannot view or edit permissions - `view` has read-only access to most resources, except permissions and secrets *In many situations, these roles will be all you need.* *You can also customize them!* .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- class: extra-details ## Customizing the default roles - If you need to *add* permissions to these default roles (or others), you can do it through the [ClusterRole Aggregation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles) mechanism - This happens by creating a ClusterRole with the following labels: ```yaml metadata: labels: rbac.authorization.k8s.io/aggregate-to-admin: "true" rbac.authorization.k8s.io/aggregate-to-edit: "true" rbac.authorization.k8s.io/aggregate-to-view: "true" ``` - This ClusterRole's permissions will be added to `admin`/`edit`/`view` respectively - This is particularly useful when using CustomResourceDefinitions (since Kubernetes cannot guess which resources are sensitive and which ones aren't) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- class: extra-details ## Where do our permissions come from? - When interacting with the Kubernetes API, we are using a client certificate - We saw previously that this client certificate contained: `CN=kubernetes-admin` and `O=system:masters` - Let's look for these in existing ClusterRoleBindings: ```bash kubectl get clusterrolebindings -o yaml | grep -e kubernetes-admin -e system:masters ``` (`system:masters` should show up, but not `kubernetes-admin`.) - Where does this match come from? .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- class: extra-details ## The `system:masters` group - If we eyeball the output of `kubectl get clusterrolebindings -o yaml`, we'll find out!
- It is in the `cluster-admin` binding: ```bash kubectl describe clusterrolebinding cluster-admin ``` - This binding associates `system:masters` with the cluster role `cluster-admin` - And the `cluster-admin` is, basically, `root`: ```bash kubectl describe clusterrole cluster-admin ``` .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- class: extra-details ## Figuring out who can do what - For auditing purposes, sometimes we want to know who can perform an action - There is a proof-of-concept tool by Aqua Security which does exactly that: https://github.com/aquasecurity/kubectl-who-can - This is one way to install it: ```bash docker run --rm -v /usr/local/bin:/go/bin golang \ go get -v github.com/aquasecurity/kubectl-who-can ``` - This is one way to use it: ```bash kubectl-who-can create pods ``` .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/authn-authz.md)] --- class: pic .interstitial[] --- name: toc-network-policies class: title Network policies .nav[ [Previous section](#toc-authentication-and-authorization) | [Back to table of contents](#toc-chapter-14) | [Next section](#toc-) ] .debug[(automatically generated title slide)] --- # Network policies - Namespaces help us to *organize* resources - Namespaces do not provide isolation - By default, every pod can contact every other pod - By default, every service accepts traffic from anyone - If we want this to be different, we need *network policies* .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## What's a network policy? A network policy is defined by the following things. - A *pod selector* indicating which pods it applies to e.g.: "all pods in namespace `blue` with the label `zone=internal`" - A list of *ingress rules* indicating which inbound traffic is allowed e.g.: "TCP connections to ports 8000 and 8080 coming from pods with label `zone=dmz`, and from the external subnet 4.42.6.0/24, except 4.42.6.5" - A list of *egress rules* indicating which outbound traffic is allowed A network policy can provide ingress rules, egress rules, or both. .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## How do network policies apply? - A pod can be "selected" by any number of network policies - If a pod isn't selected by any network policy, then its traffic is unrestricted (In other words: in the absence of network policies, all traffic is allowed) - If a pod is selected by at least one network policy, then all traffic is blocked ... ... 
unless it is explicitly allowed by one of these network policies .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- class: extra-details ## Traffic filtering is flow-oriented - Network policies deal with *connections*, not individual packets - Example: to allow HTTP (80/tcp) connections to pod A, you only need an ingress rule (You do not need a matching egress rule to allow response traffic to go through) - This also applies for UDP traffic (Allowing DNS traffic can be done with a single rule) - Network policy implementations use stateful connection tracking .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Pod-to-pod traffic - Connections from pod A to pod B have to be allowed by both pods: - pod A has to be unrestricted, or allow the connection as an *egress* rule - pod B has to be unrestricted, or allow the connection as an *ingress* rule - As a consequence: if a network policy restricts traffic going from/to a pod, the restriction cannot be overridden by a network policy selecting another pod - This prevents an entity managing network policies in namespace A (but without permission to do so in namespace B) from adding network policies giving them access to namespace B .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## The rationale for network policies - In network security, it is generally considered better to "deny all, then allow selectively" (The other approach, "allow all, then block selectively" makes it too easy to leave holes) - As soon as one network policy selects a pod, the pod enters this "deny all" logic - Further network policies can open additional access - Good network policies should be scoped as precisely as possible - In particular: make sure that the selector is not too broad (Otherwise, you end up affecting pods that were otherwise well secured) .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Our first network policy This is our game plan: - run a web server in a pod - create a network policy to block all access to the web server - create another network policy to allow access only from specific pods .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Running our test web server .exercise[ - Let's use the `nginx` image: ```bash kubectl create deployment testweb --image=nginx ``` - Find out the IP address of the pod with one of these two commands: ```bash kubectl get pods -o wide -l app=testweb IP=$(kubectl get pods -l app=testweb -o json | jq -r .items[0].status.podIP) ``` - Check that we can connect to the server: ```bash curl $IP ``` ] The `curl` command should show us the "Welcome to nginx!" page. .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Adding a very restrictive network policy - The policy will select pods with the label `app=testweb` - It will specify an empty list of ingress rules (matching nothing) .exercise[ - Apply the policy in this YAML file: ```bash kubectl apply -f ~/container.training/k8s/netpol-deny-all-for-testweb.yaml ``` - Check if we can still access the server: ```bash curl $IP ``` ] The `curl` command should now time out. 
.debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Looking at the network policy This is the file that we applied: ```yaml kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-all-for-testweb spec: podSelector: matchLabels: app: testweb ingress: [] ``` .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Allowing connections only from specific pods - We want to allow traffic from pods with the label `run=testcurl` - Reminder: this label is automatically applied when we do `kubectl run testcurl ...` .exercise[ - Apply another policy: ```bash kubectl apply -f ~/container.training/k8s/netpol-allow-testcurl-for-testweb.yaml ``` ] .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Looking at the network policy This is the second file that we applied: ```yaml kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-testcurl-for-testweb spec: podSelector: matchLabels: app: testweb ingress: - from: - podSelector: matchLabels: run: testcurl ``` .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Testing the network policy - Let's create pods with, and without, the required label .exercise[ - Try to connect to testweb from a pod with the `run=testcurl` label: ```bash kubectl run testcurl --rm -i --image=centos -- curl -m3 $IP ``` - Try to connect to testweb with a different label: ```bash kubectl run testkurl --rm -i --image=centos -- curl -m3 $IP ``` ] The first command will work (and show the "Welcome to nginx!" page). The second command will fail and time out after 3 seconds. (The timeout is obtained with the `-m3` option.) .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## An important warning - Some network plugins only have partial support for network policies - For instance, Weave added support for egress rules [in version 2.4](https://github.com/weaveworks/weave/pull/3313) (released in July 2018) - But only recently added support for ipBlock [in version 2.5](https://github.com/weaveworks/weave/pull/3367) (released in Nov 2018) - Unsupported features might be silently ignored (Making you believe that you are secure, when you're not) .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Network policies, pods, and services - Network policies apply to *pods* - A *service* can select multiple pods (And load balance traffic across them) - It is possible that we can connect to some pods, but not some others (Because of how network policies have been defined for these pods) - In that case, connections to the service will randomly pass or fail (Depending on whether the connection was sent to a pod that we have access to or not) .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Network policies and namespaces - A good strategy is to isolate a namespace, so that: - all the pods in the namespace can communicate together - other namespaces cannot access the pods - external access has to be enabled explicitly - Let's see what this would look like for the DockerCoins app! 
.debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Network policies for DockerCoins - We are going to apply two policies - The first policy will prevent traffic from other namespaces - The second policy will allow traffic to the `webui` pods - That's all we need for that app! .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Blocking traffic from other namespaces This policy selects all pods in the current namespace. It allows traffic only from pods in the current namespace. (An empty `podSelector` means "all pods.") ```yaml kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-from-other-namespaces spec: podSelector: {} ingress: - from: - podSelector: {} ``` .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Allowing traffic to `webui` pods This policy selects all pods with label `app=webui`. It allows traffic from any source. (An empty `from` field means "all sources.") ```yaml kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-webui spec: podSelector: matchLabels: app: webui ingress: - from: [] ``` .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Applying both network policies - Both network policies are declared in the file `k8s/netpol-dockercoins.yaml` .exercise[ - Apply the network policies: ```bash kubectl apply -f ~/container.training/k8s/netpol-dockercoins.yaml ``` - Check that we can still access the web UI from outside (and that the app is still working correctly!) - Check that we can't connect anymore to `rng` or `hasher` through their ClusterIP ] Note: using `kubectl proxy` or `kubectl port-forward` allows us to connect regardless of existing network policies. This allows us to debug and troubleshoot easily, without having to poke holes in our firewall. .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Cleaning up our network policies - The network policies that we have installed block all traffic to the default namespace - We should remove them, otherwise further exercises will fail! .exercise[ - Remove all network policies: ```bash kubectl delete networkpolicies --all ``` ] .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Protecting the control plane - Should we add network policies to block unauthorized access to the control plane? (etcd, API server, etc.) -- - At first, it seems like a good idea ... 
-- - But it *shouldn't* be necessary: - not all network plugins support network policies - the control plane is secured by other methods (mutual TLS, mostly) - the code running in our pods can reasonably expect to contact the API (and it can do so safely thanks to the API permission model) - If we block access to the control plane, we might disrupt legitimate code - ...Without necessarily improving security .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)] --- ## Further resources - As always, the [Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/) is a good starting point - The API documentation has a lot of detail about the format of various objects: - [NetworkPolicy](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#networkpolicy-v1-networking-k8s-io) - [NetworkPolicySpec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#networkpolicyspec-v1-networking-k8s-io) - [NetworkPolicyIngressRule](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#networkpolicyingressrule-v1-networking-k8s-io) - etc. - And two resources by [Ahmet Alp Balkan](https://ahmet.im/): - a [very good talk about network policies](https://www.youtube.com/watch?list=PLj6h78yzYM2P-3-xqvmWaZbbI1sW-ulZb&v=3gGpMmYeEO8) at KubeCon North America 2017 - a repository of [ready-to-use recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes) for network policies .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/maersk-2019-07/slides/k8s/netpol.md)]