Kubectl
Viewing pod logs
To view the logs of a specific pod, use 'kubectl logs <pod_name>'. You can stream live log output with the '-f' flag. To see timestamps in the log entries, add '--timestamps' to your kubectl logs command. If a pod has multiple containers, specify which container's logs to display with '-c <container_name>'.

Checking cluster events
Use 'kubectl get events' to view the cluster's recent events, including warnings and errors. Events provide insight into what is happening within the cluster, such as pod scheduling failures or resource constraints. Filter event types using '--field-selector', and use '-w' to continuously watch for new events.

Describing a Pod
A pod is the basic building block of Kubernetes: one or more containers that run together on a single host. To describe a specific pod, use the kubectl describe command followed by the resource type and name (e.g., 'kubectl describe pod <pod-name>'). This provides detailed information about the pod's configuration, events, and status, including the container images used by each container, associated volumes, the pod's IP address(es), and conditions related to readiness and liveness probes.

Inspecting resource utilization of pods and nodes
Use 'kubectl top pod' to display CPU and memory usage for each pod, and 'kubectl top node' to view resource consumption per node. This information is crucial for identifying performance bottlenecks, optimizing resource allocation, and troubleshooting over- or underutilization of resources within the cluster.

Analyzing Kubernetes API server requests using the audit log
To analyze Kubernetes API server requests, enable auditing and configure an audit policy. The logs can be stored in various backends such as a file, a webhook, or cloud storage. Review these logs from the configured backend for monitoring and troubleshooting purposes.

Troubleshooting Networking Issues with kubectl Commands
When troubleshooting networking issues, use 'kubectl port-forward' to forward one or more local ports to a pod. Use 'kubectl describe service <service-name>' for detailed information about a service, including its endpoints and labels. Additionally, 'kubectl get events' can surface cluster events that provide insight into network-related problems.

Identifying and Resolving Image Pull Errors or Container Startup Failures
1. Check the image name, tag, and repository URL for correctness.

Using kubectl exec for debugging inside containers
1. Syntax: kubectl exec <pod_name> [-c CONTAINER] -- COMMAND [args...]

Setting up CPU-based autoscaling
1. Use the kubectl autoscale command to create a Horizontal Pod Autoscaler (HPA) for your deployment.

Understanding Horizontal Pod Autoscaler (HPA)
1. HPA automatically scales the number of pods in a replication controller, deployment, replica set, or stateful set based on observed CPU utilization.

Defining resource metrics for autoscaling
When defining resource metrics for autoscaling in Kubernetes, consider the type of metric (e.g., CPU or memory), the target value, and the utilization threshold. The HPA uses these metrics to automatically scale the number of pods based on current usage compared to the specified targets. Metrics can be set with the kubectl autoscale command using flags such as --cpu-percent and --min/--max, as shown below.
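For example, the following creates an HPA that targets 60% average CPU utilization for a deployment (a minimal sketch; the deployment name 'web' and the thresholds are illustrative assumptions):

$ kubectl autoscale deployment web --cpu-percent=60 --min=2 --max=10
$ kubectl get hpa

The second command verifies the HPA's current and target metrics.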
Using custom metrics for scaling applications
The Custom Metrics API allows users to scale their application based on specific, user-defined metrics such as queue length or latency. To use custom metrics with the Horizontal Pod Autoscaler (HPA), the metric must be made available in the cluster through a compatible adapter such as the Prometheus Adapter. Once configured, the HPA can automatically adjust the number of replicas based on these custom metrics, helping maintain performance and resource utilization.

Configuring memory-based autoscaling
1. Use the Horizontal Pod Autoscaler (HPA) to automatically scale the number of pods in a deployment based on observed memory utilization.

Scaling based on external metric sources
Kubernetes supports scaling workloads based on metrics from external data sources such as Stackdriver, Prometheus, and others. This allows autoscaling decisions to be made using application-specific or infrastructure-related metrics. To enable this, the HPA is configured with an external metric specification that includes the metric name, an optional selector, and the target value.

Managing HPA resources with kubectl commands
1. Use the 'kubectl autoscale' command to create an HPA for a deployment or replica set.

Increasing and Decreasing Replicas using Kubectl
To increase the number of replicas for a deployment, use 'kubectl scale --replicas=<number> deployment/<deployment-name>'. To decrease the number of replicas, repeat the command with a lower <number>. Ensure that your cluster has sufficient resources to accommodate additional pods before scaling up, and use 'kubectl get deployments' to verify the replica count after scaling.

kubectl delete pod [pod_name]
Deletes a specific pod by name. When the pod is deleted, Kubernetes creates a new one to replace it only if the pod is managed by a controller, for example a deployment whose replica count is greater than zero.
$ kubectl delete pod my-pod

kubectl get pods
Lists the pods in the current namespace, showing pod names, statuses, and other relevant details.
$ kubectl get pods

kubectl describe pod [pod_name]
Provides detailed information about a specific pod, including its current state, events, and related objects. It is useful for troubleshooting issues or understanding the status of a particular pod.
$ kubectl describe pod my-pod

kubectl create -f [filename]
Creates resources from a file or stdin. It allows you to define and create Kubernetes resources using YAML or JSON files; the '-f' flag specifies the filename containing the resource configuration.
$ kubectl create -f deployment.yaml

kubectl apply -f [directory_or_file]
Creates or updates the resources defined in a file. The argument can be a single YAML/JSON file or an entire directory containing multiple configuration files. This command helps deploy and manage applications on the Kubernetes cluster efficiently.
$ kubectl apply -f ./path/to/directory
$ kubectl apply -f example.yaml
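As a point of reference for files like deployment.yaml used in the commands above, a minimal manifest might look like the following sketch (the names, labels, image, and replica count are illustrative assumptions, not taken from any specific cluster):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app            # must match the pod template labels below
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: my-new-image:tag   # placeholder image reference
        ports:
        - containerPort: 8080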
kubectl exec -it [pod_name] -- /bin/bash
Executes a command in a running container. The '-i' flag enables interactive mode, and the '-t' flag allocates a pseudo-TTY. Replace '[pod_name]' with the name of the pod in which you want to run the command.
$ kubectl exec -it my-pod -- /bin/bash

kubectl logs [-f] <pod-name> [<container-name>]
Retrieves the logs from a specific pod. Adding '-f' follows the log output, similar to 'tail -f'. The '<pod-name>' argument specifies the pod whose logs you want to retrieve or follow; '[<container-name>]' is needed when there are multiple containers within the pod.
$ kubectl logs my-pod
$ kubectl logs -f my-pod-1 container1
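Combining the log-related flags covered above, a few illustrative queries (the pod and container names are assumptions):

$ kubectl logs my-pod --timestamps --since=1h --tail=100
$ kubectl logs my-pod -c sidecar -f

The first prints the last 100 timestamped lines from the past hour; the second follows a specific container's output.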
Viewing cluster-wide events
Use 'kubectl get events' to view all cluster-wide events. Events provide insight into what is happening in your cluster, such as scheduling errors or pod evictions. You can filter the output with '--field-selector' and widen the format with '-o wide'.

Monitoring resource utilization (CPU, memory)
Use 'kubectl top' to view the CPU and memory usage of resources in a cluster.

Inspecting logs from pods or containers
Use the kubectl logs command to retrieve container logs. Specify the pod name and, optionally, a specific container within the pod. Use the '-f' flag for real-time streaming of log data, and the '--since' and '--tail' flags to limit the time range and number of lines, respectively.

Analyzing network traffic within the cluster
Use 'kubectl proxy' to create a secure connection between your local machine and the Kubernetes API server, then access services running in the cluster via localhost. Analyze network traffic with tools like Wireshark or tcpdump by capturing packets on specific pods or nodes to troubleshoot connectivity issues.

Checking pod status and health
Use 'kubectl get pods' to check the status of all pods in a namespace, and 'kubectl describe pod <pod-name>' for detailed information about a specific pod, including events and conditions. Check the STATUS field to determine whether a pod is running or has encountered issues. Configure readiness probes ('spec.containers[].readinessProbe') in your pod specification so that a pod is only added to load balancers once it is ready to serve traffic.

Gathering metrics for performance analysis
Use 'kubectl top' to gather CPU and memory usage of resources within the cluster; the Metrics Server must be deployed in the cluster to enable resource metric collection. For advanced performance analysis, use Prometheus or other monitoring tools integrated with Kubernetes, and consider setting up custom queries and alerts based on your application's requirements.

Identifying potential issues with deployments or services
Use 'kubectl get events' to check for any recent events related to the deployment or service.

Setting up alerts and notifications
1. Use kubectl to create a ConfigMap or Secret with alerting configurations.

Creating and managing secrets in Kubectl
Kubectl allows the creation of secrets using imperative commands or YAML files. Use 'kubectl create secret' to generate a new secret, specifying the type (e.g., generic) and the data. Secrets can be managed with 'kubectl get', 'kubectl describe', and 'kubectl delete'. Handle them securely: base64 encoding is not encryption.

Applying security contexts to pods, containers, and controllers
Security contexts let you set privilege and access control settings for pods or containers. Use 'securityContext' in the pod spec to define privileges at the pod level; for fine-grained control within a container, use 'securityContext' under the container definition. Security context fields include capabilities (e.g., adding/removing Linux kernel capabilities), SELinux options, and runAsUser/runAsGroup settings for user identity management.

Managing sensitive information securely with Kubernetes Secrets
1. Use the 'kubectl create secret' command to create a new Secret object.

Using ConfigMaps to decouple configuration artifacts from image content
ConfigMaps in Kubernetes separate configuration data from application code, allowing for easier management and updates. They can be mounted as volumes or exposed as environment variables within pods. This approach enables configuration changes without rebuilding the container images, promoting flexibility and scalability. When creating a ConfigMap, consider the key-value pairs that represent the configuration data required by the applications running inside containers.
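As a quick illustration of the imperative style described above, a ConfigMap can be created from literal key-value pairs (the name and keys here are assumptions):

$ kubectl create configmap app-config --from-literal=LOG_LEVEL=debug --from-literal=TIMEOUT=30
$ kubectl get configmap app-config -o yaml

The second command prints the resulting object so you can confirm the stored data.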
Defining environment variables using ConfigMaps in Kubectl
When defining environment variables with ConfigMaps, you can use the 'envFrom' field to inject all key-value pairs from a ConfigMap into a container, or specify individual keys as separate 'env' entries within the pod specification. Note that changes to a referenced ConfigMap propagate automatically only to volume mounts; environment variables are read at container startup, so pods consuming a ConfigMap via 'env' or 'envFrom' must be restarted to pick up changes.

Setting up RBAC for Secrets and ConfigMaps
1. Use Role-Based Access Control (RBAC) to define fine-grained access controls for Secrets and ConfigMaps.

Best practices for handling credentials within Kubernetes clusters
1. Use Secrets: store sensitive information such as passwords, tokens, and keys in Kubernetes Secrets to keep them secure.

Incorporating Security Contexts into pod specifications
When defining security contexts in a pod specification, you set the privileges and access control settings for containers within the pod. Use the 'securityContext' field to define these settings at both the container level and the pod level. This lets you enforce policies such as running as a non-root user, controlling capabilities, setting SELinux options, or configuring AppArmor profiles.

Creating a Pod
To create a pod with kubectl, use 'kubectl run <pod-name> --image=<container-image>'. Pods can be created imperatively or declaratively: imperative creation runs the kubectl run command directly, while declarative creation uses YAML files to define pods with specific configurations. Ensure that you have the necessary permissions and access to the Kubernetes cluster before creating pods.

Scaling Deployments
1. Use 'kubectl scale deployment <deployment-name> --replicas=<number>' to scale a deployment by increasing or decreasing the number of replicas.

Scaling ReplicaSets
1. Use 'kubectl scale' to adjust the number of replicas in a ReplicaSet.

Rolling Back Deployments
To roll back a deployment to a previous version, use the 'kubectl rollout undo' command followed by the resource type and name, for example: 'kubectl rollout undo deployment/my-deployment'. You can also specify a revision number with the '--to-revision' flag for a more precise rollback.

Updating Deployments
To update a deployment, use 'kubectl set image' to change the container image of an existing deployment, for example: 'kubectl set image deployment/my-deployment my-container=my-new-image:tag'. Use rolling updates for seamless deployments without downtime by specifying the maxUnavailable and maxSurge options in your rollout strategy. Always ensure you have proper version control and backup strategies before performing updates.

Pausing and Resuming Deployments
To pause a deployment, use 'kubectl rollout pause deployment <deployment-name>'. To resume a paused deployment and roll out its accumulated changes, use 'kubectl rollout resume'. Pausing deployments is useful for performing maintenance or investigating issues without triggering new rollouts.
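Putting the rollout commands above together, a typical pause/update/resume sequence might look like this (the deployment and image names follow the earlier examples and are illustrative):

$ kubectl rollout pause deployment/my-deployment
$ kubectl set image deployment/my-deployment my-container=my-new-image:v2
$ kubectl rollout resume deployment/my-deployment
$ kubectl rollout status deployment/my-deployment

Pausing first lets you batch several spec changes into a single rollout, and 'rollout status' waits until the new replicas are available.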
Liveness and Readiness Probes
Liveness probes determine whether a container is running, while readiness probes indicate when the container is ready to serve traffic. These probes can be configured in the pod's YAML file using HTTP endpoints or commands. They help Kubernetes manage application availability by restarting containers that fail liveness checks and routing traffic away from pods that fail readiness checks.

Multicontainer Pods
1. Multicontainer pods are a way to run multiple containers that need to work together in the same pod.

Creating a PersistentVolume
1. Define the PersistentVolume (PV) in a YAML file with specifications such as storage capacity, access modes, and storage class.

Defining a StorageClass
A StorageClass in Kubernetes lets administrators describe the 'classes' of storage they offer. It allows dynamic provisioning and management of different types of storage, such as SSD or HDD, based on user requirements. Key parameters when defining a StorageClass include the provisioner (the volume plugin responsible for creating the underlying storage), the reclaimPolicy (what happens when a PersistentVolume of this class is released), and parameters specific to each provisioner.

Provisioning storage with dynamic provisioning
Dynamic provisioning is a Kubernetes feature that allows storage volumes to be created automatically when they are requested, eliminating the need for cluster administrators to pre-provision storage. When a PersistentVolumeClaim (PVC) is created, it triggers the dynamic provisioner, which creates and binds a suitable volume based on the StorageClasses defined by the administrator. The StorageClass defines how persistent volumes should be dynamically provisioned.

Using volume expansion to increase the size of a PVC/PV dynamically
Kubernetes can expand PersistentVolumeClaims (PVCs) and their associated PersistentVolumes (PVs) dynamically. This is done by increasing the storage request in the PVC spec. The underlying StorageClass must support resizing for this feature to work. After the PVC's requested capacity is modified, Kubernetes automatically resizes the corresponding PV if both the storage provider and the StorageClass support it.

Specifying access modes for persistent volumes and claims
Access modes specify how a volume can be mounted. There are three access modes: ReadWriteOnce (RWO), where the volume can be mounted read-write by a single node; ReadOnlyMany (ROX), where the volume can be mounted read-only by many nodes; and ReadWriteMany (RWX), where the volume can be mounted read-write by many nodes simultaneously.

Managing Reclaim Policies for PVs
When a PersistentVolume (PV) is released from its claim, the reclaim policy determines what happens to the volume. The three available policies are Retain, Recycle (deprecated), and Delete.

VolumeSnapshot API
The VolumeSnapshot API in Kubernetes provides a way to capture the state of a persistent volume at a particular point in time. It enables data protection and disaster recovery by creating snapshots that can be used to restore volumes or create new ones. VolumeSnapshot objects are defined using Custom Resource Definitions (CRDs) and managed through controllers, letting users manage their storage resources within the cluster.

Configuring Storage Quotas
1. Use 'kubectl create quota' to define a new storage quota.
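Tying the storage topics above together, here is a minimal PVC sketch that would trigger dynamic provisioning (the claim name, StorageClass name, and size are illustrative assumptions):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce              # RWO: read-write on a single node
  storageClassName: fast-ssd     # hypothetical StorageClass with a dynamic provisioner
  resources:
    requests:
      storage: 10Gi

If the StorageClass sets allowVolumeExpansion: true, later raising the 'storage' request on this claim asks the provisioner to resize the volume in place.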
Exposing Services
When exposing a service, use the 'kubectl expose' command followed by the resource type (deployment or pod), the name of the resource, and the port. Specify '--type=NodePort' to expose the service on each node's IP at a static port, or '--type=LoadBalancer' for cloud providers that support external load balancers.

Creating a Service
Services in Kubernetes enable networking and communication between different pods. Use the 'kubectl expose' command to create a new service, specifying the pod selector with '--selector'. Services can be of type ClusterIP (the default), NodePort, or LoadBalancer. Labels are used to select which pods receive traffic from the service.

Service Discovery and DNS
Kubernetes uses DNS for service discovery within the cluster.

Network Policies
Network policies are used to control the traffic between different pods in a Kubernetes cluster.

Ingress Resources
In Kubernetes, Ingress resources manage external access to services within a cluster, providing HTTP and HTTPS routing as well as load balancing. When creating an Ingress resource, define the rules for directing traffic based on hostnames or paths. Annotations can also be used in Ingress resources to configure advanced settings such as SSL certificates and timeouts.

EndpointSlices
EndpointSlices are an API resource that provides a more scalable and efficient way to track endpoints. They divide large service endpoint lists into smaller, more manageable pieces for better performance. EndpointSlice objects contain subsets of the endpoints in a Service object, allowing finer control over traffic distribution and reducing load on the Kubernetes API server.

Using kubectl to Manage Network Resources
1. View all services in the current namespace: use 'kubectl get services'.

Load Balancing with Services
A Kubernetes Service is an abstraction that defines a logical set of pods and enables external traffic exposure. Types include ClusterIP, NodePort, LoadBalancer, and ExternalName. The 'type: LoadBalancer' service type integrates with cloud providers to create load balancers for the selected pods.
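For example, to expose a deployment through a cloud load balancer ('--port' is the service port and '--target-port' the container port; the names and numbers are illustrative):

$ kubectl expose deployment my-deployment --port=80 --target-port=8080 --type=LoadBalancer
$ kubectl get services

'kubectl get services' shows the external IP once the cloud provider has provisioned the load balancer.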