Containerize
Before we can truly understand Kubernetes (K8s), I think it makes sense to cover the basics of containerization.
Containerization is a lightweight alternative to full machine virtualization that involves encapsulating an application in a container with its own operating environment. This provides many of the benefits of loading an application onto a virtual machine, as the application can be run on any suitable physical machine without any worries about dependencies.
Containerization has gained recent prominence with the open-source Docker project. Docker containers are designed to run on everything from physical computers to virtual machines, bare-metal servers, OpenStack cloud clusters, public cloud instances and more.
Containerization vs. Virtualization via Traditional Hypervisors
The foundation for containerization lies in the Linux Containers (LXC) format, a userspace interface for the Linux kernel's containment features. As a result, containerization of this kind works only in Linux environments and can only run Linux applications.
This is in contrast with traditional hypervisors like VMware's ESXi, Xen or KVM, where a virtual machine can run Windows or any other guest operating system the hypervisor supports.
Another key difference from traditional hypervisors is that containers share the Linux kernel of the host machine's operating system: every container running on that host uses the same kernel.
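You can see this kernel sharing directly: on a Linux host, the kernel version reported inside a container is the host's own kernel. A quick check, assuming Docker and the alpine image are available:

```sh
# Kernel release on the host
uname -r

# Kernel release inside a throwaway container: the same value, because the
# container shares the host's Linux kernel instead of booting its own
docker run --rm alpine uname -r
```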
Docker Not the Only Containerization Option
Docker may have been the first to bring attention to containerization, but it's no longer the only container system option. CoreOS recently released a streamlined alternative to Docker called rkt (pronounced "Rocket").
And Canonical, developer of the Ubuntu Linux-based operating system, has announced the LXD containerization engine for Ubuntu, which will also be integrated with OpenStack.
Docker
For our workshop we are using Docker.
Docker by itself could be a whole workshop (it probably should be), but we aren't here today to learn Docker, so I will quickly go over the main things we need to know for this workshop.
Image Registry
The Docker images we build need a place to live. You can use either a public or a private registry; for your super awesome secret app you probably want a private one. With your IBM Cloud account you actually get a private image registry for free.
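Whichever registry you use, the push workflow from the Docker CLI is the same. A rough sketch (the registry host us.icr.io, the namespace, and the image name are placeholders; substitute whatever your account actually provides):

```sh
# Log in to the registry (for IBM Cloud Container Registry this can also be
# done through the ibmcloud CLI)
docker login us.icr.io

# Tag the locally built image with the registry host and namespace
docker tag myapp:1.0 us.icr.io/my-namespace/myapp:1.0

# Push it so that Kubernetes can later pull it from the registry
docker push us.icr.io/my-namespace/myapp:1.0
```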
Writing a Dockerfile
Let's quickly see how to write a Dockerfile.
This is an example Dockerfile.
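A minimal version, matching the walkthrough below, could look like this (the make-based build and the Python start command are only illustrative):

```dockerfile
# Base image layer
FROM ubuntu:18.04
# Copy files from the build context (your Docker client's current directory)
COPY . /app
# Build the application while the image is being built
RUN make /app
# Default command to run when a container starts from this image
CMD python /app/app.py
```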
Each instruction creates one layer:
- FROM creates a layer from the ubuntu:18.04 Docker image.
- COPY adds files from your Docker client's current directory.
- RUN builds your application with make.
- CMD specifies what command to run within the container.
A Dockerfile supports the following instructions (an annotated sketch follows the list):
- FROM - Base image we are building from
- LABEL - Metadata, useful for organization
- RUN - Run a command while building the image
- CMD - Default command to run when the container starts
- EXPOSE - Document a port the container listens on
- ENV - Set an environment variable
- ADD or COPY - Copy files from the host (or from a previous build stage) into the current layer
- ENTRYPOINT - Main command of the image at runtime
- VOLUME - Declare a mount point for external storage (databases, local files, etc.)
- USER - Set the user; prefer a non-root user if the service allows it
- WORKDIR - Set the directory that RUN and other commands execute in
- ONBUILD - Register an instruction that runs later, when the image is used as the base of another build
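To see how these fit together, here is an annotated sketch for a hypothetical Python web service (the file names, port, and user are all assumptions):

```dockerfile
# FROM: the base image we build on
FROM python:3.9-slim

# LABEL: metadata for organization
LABEL maintainer="workshop-example"

# ENV: environment variable available at build time and at runtime
ENV PORT=8080

# WORKDIR: directory that the instructions below operate in
WORKDIR /app

# COPY + RUN: install dependencies first so this layer is cached
# as long as requirements.txt does not change
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# COPY: bring in the rest of the application source
COPY . .

# EXPOSE: document the port the service listens on
EXPOSE 8080

# USER: run as a non-root user where the service allows it
RUN useradd --create-home appuser
USER appuser

# ENTRYPOINT + CMD: main process at runtime, with a default argument
ENTRYPOINT ["python"]
CMD ["app.py"]
```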
Let's look at one of the Dockerfiles we are using.
This is the folder structure:
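With the Dockerfile at the root of that folder, building and running the image locally looks roughly like this (the image name and port are placeholders):

```sh
# Build an image from the Dockerfile in the current directory
docker build -t my-workshop-app:1.0 .

# Run it locally, mapping a host port to the port the app listens on
docker run --rm -p 8080:8080 my-workshop-app:1.0
```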
Best Practices
- Create ephemeral containers
- Exclude with .dockerignore
- Use multi-stage builds (see the sketch after this list)
- Don’t install unnecessary packages
- Minimize the number of layers
- Decouple applications
- Leverage build cache
- Make small containers
- Run with least-privileged access
- Don't store sensitive information (credentials, keys, tokens) in the image
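A multi-stage build is the simplest way to hit several of these points at once: the build toolchain stays in the first stage, and only the compiled artifact lands in a small, unprivileged final image. A sketch for a hypothetical Go service (the module layout and binary name are assumptions):

```dockerfile
# Stage 1: build the binary with the full Go toolchain
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/server .

# Stage 2: copy only the compiled binary into a minimal runtime image
FROM alpine:3.19
COPY --from=build /out/server /usr/local/bin/server
# Least privilege: run as an unprivileged user
USER nobody
ENTRYPOINT ["/usr/local/bin/server"]
```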