Azure Kubernetes Services (AKS) - Key Features
Hands-On Getting Started
AKS Running in Localhost
Creating AKS cluster
Scaling options in AKS
AKS cluster monitoring
Kubernetes Architecture Components
As seen in the following diagram, Kubernetes follows a client-server architecture: the master (control plane) is installed on one machine, while the nodes run on separate Linux machines.
Kubernetes ─ Master Machine Components
The following are the components of the Kubernetes master machine.
etcd
It stores configuration information that can be used by each of the nodes in the cluster. It is a highly available key-value store that can be distributed among multiple nodes. Because it may contain sensitive information, it is accessible only through the Kubernetes API server.
API Server
The API server exposes the Kubernetes API and provides all operations on the cluster. It implements a RESTful interface, which means different tools and libraries can readily communicate with it. Client-side tools such as kubectl use a kubeconfig file to locate and authenticate to the API server.
Controller Manager
This component runs the collectors (controllers) that regulate the state of the cluster and perform tasks. In general, it can be considered a daemon that runs in a non-terminating loop and is responsible for collecting information and sending it to the API server. It works toward obtaining the shared state of the cluster and then makes changes to bring the current state of the cluster to the desired state. The key controllers are the replication controller, endpoints controller, namespace controller, and service account controller. The controller manager runs different kinds of controllers to handle nodes, endpoints, and so on.
Some types of these controllers are:
- Node controller: Responsible for noticing and responding when nodes go down.
- Job controller: Watches for Job objects that represent one-off tasks, then creates Pods to run those tasks to completion.
- Endpoints controller: Populates the Endpoints object (that is, joins Services & Pods).
- Service Account & Token controllers: Create default accounts and API access tokens for new namespaces.
Scheduler
This is one of the key components of the Kubernetes master. It is the service in the master responsible for distributing the workload: it tracks resource utilization on cluster nodes and places new workloads on nodes that have the resources available to accept them. In other words, the scheduler is the mechanism responsible for allocating pods to available nodes.
Kubernetes ─ Node Components
The following are the key components of a node, which are necessary to communicate with the Kubernetes master.
Container Runtime
The first requirement of each node is a container runtime, such as Docker, which runs the encapsulated application containers in a relatively isolated but lightweight operating environment.
Kubelet / Kubelet Service
An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
This is a small service on each node responsible for relaying information to and from the control plane. It interacts with the etcd store to read configuration details and write values. It communicates with the master component to receive commands and work. The kubelet process then assumes responsibility for maintaining the state of work and of the node server: it manages the pods and containers running on its node, mounts volumes, handles secrets, and runs container health checks.
Kubernetes Proxy Service
This is a proxy service (kube-proxy) which runs on each node and helps in making services available to external hosts. It forwards requests to the correct containers and is capable of performing primitive load balancing. It manages network rules and port forwarding, and makes sure that the networking environment is predictable and accessible while remaining isolated.
A cluster-level logging mechanism is responsible for saving container logs to a central log store with search/browsing interface.
Deployments
A deployment is one of many Kubernetes objects.
In technical terms, it encapsulates:
- Pod Specification
- Replica Count
- Deployment Strategy
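Expressed as a manifest, a deployment encapsulating these three pieces might look like the following. This is a minimal sketch; the name, labels, and image are illustrative, not taken from the text:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # hypothetical deployment name
spec:
  replicas: 3                # replica count
  strategy:
    type: RollingUpdate      # deployment strategy
  selector:
    matchLabels:
      app: web
  template:                  # pod specification
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25    # illustrative container image
        ports:
        - containerPort: 80
```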
In practical terms, you can think of a deployment as an instance of an application with its associated configuration. If you have two deployments, one could be a “production” environment and the other a “staging” environment.
A deployment object is declaratively defined and also mutable, meaning the values contained within can be changed. Some examples of a deployment object change include:
- The underlying container referenced is changed
- The application credentials have changed
When values change within the deployment object, Kubernetes controllers will be responsible for propagating these changes downstream and changing the state of the cluster to meet the definition of the deployment.
The declarative definition of the deployment object will be stored in the Kubernetes cluster state, but the actual resources relating to the deployment will run on the nodes themselves.
The Kubernetes cluster state is manipulated via the Kubernetes API. This is the only way deployments can be managed by end users. It is often done via the kubectl command-line application, which in turn talks to the Kubernetes API; kubectl is essentially a middleman.
Take note that, in the Kubernetes ecosystem, “deployment objects” are often referred to as “configs”, “objects”, “resources” or just “deployments”.
A pod is the smallest deployable unit that can be managed by Kubernetes. A pod is a logical group of one or more containers that share the same IP address and port space. The main purpose of a pod is to support co-located processes, such as an application server and its local cache. Containers within a pod can find each other via localhost, and can also communicate with each other using standard inter-process communications like SystemV semaphores or POSIX shared memory. In other words, a pod represents a “logical host”. Pods are not durable; they will not survive scheduling failures or node failures. If a node where the pod is running dies, the pod is deleted. It can then be replaced by an identical pod, with even the same name, but with a new unique identifier (UID).
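A pod hosting co-located processes, as described above, might be sketched like this (the names and images are illustrative; the second container stands in for a local cache):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-cache       # hypothetical pod name
spec:
  containers:
  - name: app-server
    image: nginx:1.25        # illustrative application server
    ports:
    - containerPort: 80
  - name: local-cache
    image: redis:7           # illustrative co-located cache
    ports:
    - containerPort: 6379
```

Because both containers share the pod's IP address and port space, the application server can reach the cache at `localhost:6379`.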
A label is a key/value pair that is attached to a Kubernetes resource, for example, a pod. Labels can be attached to resources at creation time, as well as added and modified at any later time.
A label selector can be used to organize Kubernetes resources that have labels. An equality-based selector defines a condition for selecting resources that have the specified label value. A set-based selector defines a condition for selecting resources that have a label value within the specified set of values.
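As a sketch, the two selector styles can appear as follows inside a resource spec (for example, a replica set); the label keys and values are illustrative:

```yaml
# Equality-based selector: matches resources whose "environment"
# label equals "production"
selector:
  matchLabels:
    environment: production
```

```yaml
# Set-based selector: matches resources whose "tier" label value
# is within the specified set of values
selector:
  matchExpressions:
  - key: tier
    operator: In
    values: [frontend, backend]
```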
A controller manages a set of pods and ensures that the cluster is in the specified state. Unlike manually created pods, the pods maintained by a controller are automatically replaced if they fail, get deleted, or are terminated. There are several controller types, such as replication controllers or deployment controllers.
A replication controller is responsible for running the specified number of pod copies (replicas) across the cluster.
A deployment defines a desired state for a logical group of pods and replica sets. It creates new resources or replaces the existing resources, if necessary. A deployment can be updated, rolled out, or rolled back. A practical use case for a deployment is to bring up a replica set and pods, then update the deployment to re-create the pods (for example, to use a new image). Later, the deployment can be rolled back to an earlier revision if the current deployment is not stable.
A replica set is the next-generation replication controller. A replication controller supports only equality-based selectors, while a replica set supports set-based selectors.
A service uses a selector to define a logical group of pods and defines a policy to access such logical groups. Because pods are not durable, the actual pods that are running may change. A client that uses one or more containers within a pod should not need to be aware of which specific pod it works with, especially if there are several pods (replicas). There are several types of services in Kubernetes, including ClusterIP, NodePort, and LoadBalancer. A ClusterIP service exposes pods to connections from inside the cluster. A NodePort service exposes pods to external traffic by forwarding traffic from a port on each node of the cluster to the container port. A LoadBalancer service also exposes pods to external traffic, as a NodePort service does; however, it also provides a load balancer.
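A NodePort service targeting the pods of a hypothetical `app: web` group might be sketched like this (names and port numbers are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc          # hypothetical service name
spec:
  type: NodePort         # ClusterIP (default), NodePort, or LoadBalancer
  selector:
    app: web             # logical group of pods this service targets
  ports:
  - port: 80             # port exposed inside the cluster
    targetPort: 80       # container port traffic is forwarded to
    nodePort: 30080      # port opened on every node of the cluster
```

Changing `type` to `LoadBalancer` would additionally provision a load balancer in front of the node ports, where the underlying platform supports it.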
**SERVICE DISCOVERY**
Kubernetes supports finding a service in two ways: through environment variables and using DNS.
ENVIRONMENT VARIABLES
Kubernetes injects a set of environment variables into pods for each active service. Such environment variables contain the service host and port; for example, a service named redis-master yields variables such as REDIS_MASTER_SERVICE_HOST and REDIS_MASTER_SERVICE_PORT.
An application in the pod can use these variables to establish a connection to the service. The service should be created before the replication controller or replica set creates a pod’s replicas. Changes made to an active service are not reflected in a previously created replica.
Kubernetes automatically assigns DNS names to services. A special DNS record can be used to specify port numbers as well. To use DNS for service discovery, a Kubernetes cluster should be properly configured to support it.
A container file system is ephemeral: if a container crashes, the changes to its file system are lost. A volume is defined at the pod level, and is used to preserve data across container crashes. A volume can also be used to share data between containers in a pod. A volume has the same lifecycle as the pod that encloses it: when a pod is deleted, the volume is deleted as well. Kubernetes supports different volume types, which are implemented as plugins.
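A pod sharing data between two containers through a volume might look like the following sketch, using the `emptyDir` volume type (names and images are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-data-pod    # hypothetical pod name
spec:
  volumes:
  - name: shared
    emptyDir: {}           # volume with the same lifecycle as the pod
  containers:
  - name: writer
    image: busybox:1.36
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /data     # both containers mount the same volume
  - name: reader
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /data
```

If the `writer` container crashes and restarts, the file it wrote survives, because the volume outlives individual containers (though not the pod itself).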
A persistent volume represents a real networked storage unit in a cluster that has been provisioned by an administrator. Persistent storage has a lifecycle independent of any individual pod. It supports different access modes, such as mounting as read-write by a single node, mounting as read-only by many nodes, and mounting as read-write by many nodes. Kubernetes supports different persistent volume types, which are implemented as plugins. Examples of persistent volume types include AWS EBS, vSphere volume, Azure File, GCE Persistent Disk, CephFS, Ceph RBD, GlusterFS, iSCSI, NFS, and Host Path.
Persistent volume claim
A persistent volume claim defines a specific amount of storage requested and specific access modes. Kubernetes finds a matching persistent volume and binds it with the persistent volume claim. If a matching volume does not exist, a persistent volume claim will remain unbound indefinitely. It will be bound as soon as a matching volume becomes available.
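A claim requesting storage with a specific size and access mode might be sketched as follows (the name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim       # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce        # mount as read-write by a single node
  resources:
    requests:
      storage: 5Gi       # specific amount of storage requested
```

Kubernetes binds this claim to any available persistent volume that offers at least 5Gi and supports the `ReadWriteOnce` access mode.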
A Kubernetes secret allows users to pass sensitive information, such as passwords, authentication tokens, SSH keys, and database credentials, to containers. A secret can then be referenced when declaring a container definition, and read from within containers as environment variables or from a local disk.
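A secret, and a container definition that reads it as an environment variable, might be sketched as follows (the names and values are illustrative, not real credentials):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials       # hypothetical secret name
type: Opaque
stringData:                  # plain-text input; stored base64-encoded
  password: s3cr3t
---
apiVersion: v1
kind: Pod
metadata:
  name: db-client            # hypothetical pod referencing the secret
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    env:
    - name: DB_PASSWORD      # exposed to the container as an env variable
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
```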
A Kubernetes config map allows users to externalize application configuration parameters from a container image and define application configuration details, such as key/value pairs, directory content, or file content. Config map values can be consumed by applications through environment variables, local disks, or command line arguments.
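A config map holding both simple key/value pairs and file-like content might be sketched like this (the name, keys, and values are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config         # hypothetical config map name
data:
  LOG_LEVEL: info          # simple key/value pair
  app.properties: |        # file-like content
    cache.enabled=true
    cache.size=128
```

A container could consume `LOG_LEVEL` as an environment variable, or mount the config map as a volume so that `app.properties` appears as a file on local disk.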
A job is used to create one or more pods and ensure that a specified number of them successfully terminate. It tracks the successful completions, and when a specified number of successful completions is reached, the job itself is complete. There are several types of jobs, including non-parallel jobs, parallel jobs with a fixed completion count, and parallel jobs with a work queue. A job should be used instead of a replication controller when the pods are expected to terminate once their work is complete, rather than to run continuously.
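A parallel job with a fixed completion count might be sketched as follows (the name, image, and command are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task     # hypothetical job name
spec:
  completions: 3         # job is complete after 3 successful pod runs
  parallelism: 1         # run at most one pod at a time
  template:
    spec:
      restartPolicy: Never   # job pods are expected to terminate
      containers:
      - name: task
        image: busybox:1.36
        command: ["sh", "-c", "echo processing one work item"]
```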
A daemon set ensures that all (or some) nodes run a copy of a pod. A daemon set tracks the addition and removal of cluster nodes: it adds pods for nodes that are added to the cluster and terminates pods on nodes that are removed from the cluster. Deleting a daemon set will clean up the pods it created. A typical use case for a daemon set is running a log collection daemon or a monitoring daemon on each node of a cluster.
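The log-collection use case above might be sketched like this (the name, labels, and image are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector        # hypothetical daemon set name
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: collector
        image: fluentd:v1.16   # illustrative log collection daemon
```

Unlike a deployment, no replica count is specified: the daemon set runs exactly one copy of the pod on each eligible node.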
A namespace provides a logical partition of the cluster’s resources. Kubernetes resources can use the same name when found in different namespaces. Different namespaces can be assigned different quotas for resource limitations.
A quota sets resource limitations, such as CPU, memory, number of pods or services, for a given namespace. It also forces users to explicitly request resource allotment for their pods.
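A resource quota for a namespace might be sketched as follows (the names and limits are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota         # hypothetical quota name
  namespace: team-a        # hypothetical namespace
spec:
  hard:
    requests.cpu: "4"      # total CPU requested across the namespace
    requests.memory: 8Gi   # total memory requested across the namespace
    pods: "10"             # at most 10 pods in this namespace
```

With this quota in place, pods created in the namespace must explicitly declare resource requests, and requests exceeding the limits are rejected.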
Imperative approach vs Declarative approach
Before getting to the practical steps of the Kubernetes deployment, it’s important to understand the key approaches to orchestration.
The classic imperative approach for managing software involves several steps or tasks, some of which are manual. When working in a team, it is usually required that these steps be documented, and, in an ideal case, automated. Preparing good documentation for a classic imperative administrative procedure and automating these steps can be non-trivial tasks, even if each of the steps is simple.
A declarative approach for administrative tasks is intended to solve such challenges. With a declarative approach, an administrator defines a target state for a system (application, server, or cluster). Typically, a domain-specific language (DSL) is used to describe the target state. An administrative tool, such as Kubernetes, takes this definition as an input and takes care of how to achieve the target state from the current observable state.
From this point in the article, I assume your cluster is already set up, configured, and accessible via kubectl.