Kubectl Overview
Basic Kubectl Commands
Managing Pods with Kubectl
Working with Deployments and ReplicaSets
Services in Kubernetes using Kubectl
ConfigMaps and Secrets Management with Kubectl
Interacting with PersistentVolumes (PV) via kubectl
Using Namespaces for Resource Isolation
Scheduling Pods Using Node Affinity & Anti-Affinity Rules
Scaling Applications in Kubernetes using kubectl commands
Leveraging Labels and Selectors to Manage Resources
Pipelining Output of One Command into Another Using `kubectl`
Determining the Status of Cluster Nodes through `kubectl`
Investigating Pod Logs, Executing Commands Inside a Running Container

Kubectl

Author: Eddie A.

Viewing pod logs

To view the logs of a specific pod, use the command 'kubectl logs <pod_name>'. You can also stream live log output with the '-f' flag. To see timestamps in the log entries, add '--timestamps' to your kubectl logs command. If a pod has multiple containers, specify which container's log to display using '-c <container_name>'.

Checking cluster events

Use 'kubectl get events' to view the cluster's recent events, including warnings and errors. Events provide insights into what is happening within the cluster, such as pod scheduling failures or resource constraints. Filter event types using '--field-selector'. Use '-w' for continuous monitoring of new events.

Describing a Pod

A pod is the basic building block of Kubernetes. It represents one or more containers that should be run together on a single host. To describe a specific pod, you can use the kubectl describe command followed by the type and name of the resource (e.g., kubectl describe pod <pod-name>). This provides detailed information about the selected pod's configuration, events, and status. The output includes details such as container images used in each container within the specified pod, volumes associated with it, its IP address(es), conditions related to readiness and liveness probes.

Inspecting resource utilization of pods and nodes

Use 'kubectl top pod' to display CPU and memory usage for each pod. Use 'kubectl top node' to view the resource consumption on a per-node basis. This information is crucial for identifying performance bottlenecks, optimizing resource allocation, and troubleshooting issues related to overutilization or underutilization of resources within the cluster.

Analyzing Kubernetes API server requests using the audit log

To analyze Kubernetes API server requests, enable auditing on the API server and configure an audit policy. Audit events can be written to a file on the control plane (the log backend) or streamed to an external service (the webhook backend). The logs are then read from whichever backend you configured; on managed platforms they are usually surfaced through the provider's logging service.

Troubleshooting Networking Issues with kubectl Commands

When troubleshooting networking issues, use 'kubectl port-forward' to forward one or more local ports to a pod. Use 'kubectl describe service <service-name>' for detailed information about the selected service including endpoints and labels. Additionally, utilize 'kubectl get events' to view cluster events that may provide insights into network-related problems.

Identifying and Resolving Image Pull Errors or Container Startup Failures

1. Check the image name, tag, and repository URL for correctness.
2. Verify that the container registry is accessible from your Kubernetes cluster.
3. Ensure proper authentication credentials are configured to access private registries if applicable.
4. Investigate network connectivity issues between nodes in the cluster and the external container registry.

Using kubectl exec for debugging inside containers

1. Syntax: kubectl exec <pod_name> [-c CONTAINER] -- COMMAND [args...]
2. Example: kubectl exec -it my-pod -- /bin/bash
3. Use the '-i' and '-t' flags to interact with the container's stdin and allocate a TTY, respectively.
4. To execute commands in a specific container within the pod, use the '-c' flag followed by the container name.
5. Debugging tools like curl or netcat can be used through 'kubectl exec', allowing real-time troubleshooting.

Setting up CPU-based autoscaling

1. Use the kubectl autoscale command to create a Horizontal Pod Autoscaler (HPA) for your deployment.
2. Specify the minimum and maximum number of pods, as well as the target average CPU utilization percentage in the HPA configuration.
3. Monitor and adjust resource requests/limits on containers to ensure accurate scaling based on actual usage.
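As a sketch, the same setup can also be expressed declaratively with an autoscaling/v2 manifest (the deployment name 'web' and the numbers here are placeholders):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa             # hypothetical name
spec:
  scaleTargetRef:           # the workload this HPA scales
    apiVersion: apps/v1
    kind: Deployment
    name: web               # hypothetical deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60   # target average CPU utilization
```

Applying this with 'kubectl apply -f' is roughly equivalent to 'kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=60'.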

Understanding Horizontal Pod Autoscaler (HPA)

1. HPA automatically scales the number of pods in a replication controller, deployment, replica set or stateful set based on observed CPU utilization.
2. It helps ensure that an application runs smoothly even during high traffic by dynamically adjusting resources.
3. Use 'kubectl autoscale' command to create an HPA for a resource and specify minimum/maximum replicas and target CPU utilization percentage.

Defining resource metrics for autoscaling

When defining resource metrics for autoscaling in Kubernetes, it's essential to consider the type of metric (e.g., CPU or memory), target value, and utilization threshold. The Horizontal Pod Autoscaler (HPA) uses these defined metrics to automatically scale the number of pods based on current usage compared to the specified targets. Metrics can be set using kubectl autoscale command with flags such as --cpu-percent and --min/--max.

Using custom metrics for scaling applications

Custom Metrics API allows users to scale their application based on specific, user-defined metrics such as queue length or latency. To use custom metrics for Horizontal Pod Autoscaling (HPA), the metric must be available in the Kubernetes cluster through a compatible adapter like Prometheus Adapter. Once configured, HPA can automatically adjust the number of replicas based on these custom metrics, ensuring optimal performance and resource utilization.

Configuring memory-based autoscaling

1. Use the Horizontal Pod Autoscaler (HPA) to automatically scale the number of pods in a deployment based on observed memory utilization. Note that this requires an autoscaling/v2 HPA manifest, since 'kubectl autoscale' only exposes a CPU target.
2. Set up resource requests and limits for containers within your pod specifications to provide data for HPA scaling decisions.
3. Monitor Memory Utilization: Ensure that you have metrics-server or other monitoring solutions set up to collect memory usage metrics from your cluster, as this is crucial for configuring memory-based autoscaling.
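Because 'kubectl autoscale' has no memory flag, memory-based scaling is typically declared in an autoscaling/v2 manifest. A minimal sketch, where the deployment name 'web' and the thresholds are placeholders:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-memory-hpa      # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web               # hypothetical deployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 75   # scale when average memory use exceeds 75% of requests
```

Utilization is computed against the containers' memory requests, which is why step 2 above (setting requests) is a prerequisite.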

Scaling based on external metric sources

Kubernetes supports scaling workloads based on metrics from external data sources such as Stackdriver, Prometheus, and others. This allows autoscaling decisions to be made using application-specific or infrastructure-related metrics. To enable this feature, the Horizontal Pod Autoscaler (HPA) can be configured with an External metric specification that names the metric, an optional selector, and a target value or target average value.

Managing HPA resources with kubectl commands

1. Use 'kubectl autoscale' command to create an HPA for a deployment or replica set.
2. View the current status of HPAs using 'kubectl get hpa'.
3. Update the minimum and maximum number of pods in an existing HPA with 'kubectl edit hpa <hpa-name>'.
4. Delete an existing HPA using 'kubectl delete hpa <hpa-name>'.

Increasing and Decreasing Replicas using Kubectl

To increase the number of replicas for a deployment, use the command 'kubectl scale --replicas=<number> deployment/<deployment-name>'. To decrease the number of replicas, simply repeat this command with a lower <number>. Ensure that you have sufficient resources in your cluster to accommodate additional pods when increasing replicas. Use kubectl get deployments to verify changes in replica count after scaling.

kubectl delete pod [pod_name]

This command is used to delete a specific pod by name. When the specified pod is deleted, Kubernetes will create a replacement only if the pod is managed by a controller (such as a Deployment or ReplicaSet) whose replica count is greater than zero; bare pods are not recreated.

$ kubectl delete pod my-pod

kubectl get pods

The 'kubectl get pods' command is used to list all the running pods in the current namespace. It provides information about pod names, statuses, and other relevant details.

$ kubectl get pods

kubectl describe pod [pod_name]

The 'kubectl describe' command provides detailed information about a specific pod, including its current state, events, and related objects. It is useful for troubleshooting issues or understanding the status of a particular pod.

$ kubectl describe pod my-pod

kubectl create -f [filename]

The 'create' command is used to create resources from a file or stdin. It allows you to define and create Kubernetes resources using YAML or JSON files. The '-f' flag specifies the filename containing the resource configuration.

$ kubectl create -f deployment.yaml

kubectl apply -f [directory_or_file]

The 'apply' command is used to create or update resources defined in a file. It can be a single YAML/JSON file, or an entire directory containing multiple configuration files. This command helps in deploying and managing applications on the Kubernetes cluster efficiently.

$ kubectl apply -f ./path/to/directory
$ kubectl apply -f example.yaml

kubectl exec -it [pod_name] -- /bin/bash

This command allows you to execute a command in a running container. The '-i' flag is for interactive mode, and the '-t' flag allocates a pseudo-TTY. Replace '[pod_name]' with the name of the pod where you want to run this command.

$ kubectl exec -it my-pod -- /bin/bash

kubectl logs [-f] <pod-name> [-c <container-name>]

The 'kubectl logs' command retrieves the logs from a specific pod. Adding '-f' enables following of log output, similar to 'tail -f'. The '<pod-name>' argument specifies the pod whose logs you want to retrieve or follow. When a pod has multiple containers, use '-c <container-name>' to select one.

$ kubectl logs my-pod
$ kubectl logs -f my-pod-1 -c container1

Viewing cluster-wide events

Use 'kubectl get events' to view all the cluster-wide events. Events provide insight into what is happening in your cluster, such as scheduling errors or pod evictions. You can filter and format the output using flags like --field-selector and -o wide respectively.

Monitoring resource utilization (CPU, memory)

- Use 'kubectl top' to view the CPU and memory usage of resources in a cluster.
- The command 'kubectl top node' provides an overview of resource consumption at the node level.
- To monitor specific pods, use 'kubectl top pod <pod_name>'.
- Resource metrics server must be running for kubectl top to work effectively.

Inspecting logs from pods or containers

Use the kubectl logs command to retrieve container logs. Specify pod name and, optionally, a specific container within the pod. Use -f flag for real-time streaming of log data. Utilize --since and --tail flags to specify time range and number of lines respectively.

Analyzing network traffic within the cluster

Use kubectl proxy to create a secure connection between your local machine and the Kubernetes API server. Then, access services running in the cluster via localhost. Analyze network traffic using tools like Wireshark or tcpdump by capturing packets on specific pods or nodes for troubleshooting connectivity issues.

Checking pod status and health

Use 'kubectl get pods' to check the status of all pods in a namespace. Use 'kubectl describe pod <pod-name>' for detailed information about a specific pod, including events and conditions. Check the STATUS field to determine if a Pod is running or has encountered issues. Utilize readiness probes ('spec.containers[].readinessProbe') within your Pod's configuration to ensure that a Pod is ready to serve traffic before it is added to load balancers.

Gathering metrics for performance analysis

Use 'kubectl top' to gather CPU and memory usage of resources within the cluster. Metrics Server should be deployed in the cluster to enable resource metric collection. Utilize Prometheus or other monitoring tools integrated with Kubernetes for advanced performance analysis. Consider setting up custom queries and alerts based on specific application requirements.

Identifying potential issues with deployments or services

- Use kubectl get events to check for any recent events related to the deployment or service.
- Inspect logs using kubectl logs <pod_name> command to identify errors and warnings.
- Check resource utilization of pods, nodes, and containers using kubectl top.

Setting up alerts and notifications

1. Use kubectl to create a ConfigMap or Secret with alerting configurations.
2. Configure Prometheus Alertmanager for sending alerts via email, Slack, PagerDuty, etc.
3. Set up rules in Prometheus for defining conditions that trigger an alert.
4. Utilize Kubernetes event monitoring to receive notifications about changes in the cluster state.

Creating and managing secrets in Kubectl

Kubectl allows the creation of secrets using imperative commands or YAML files. Use 'kubectl create secret' to generate a new secret, specifying type (e.g., generic) and data. Secrets can be managed with 'kubectl get', 'describe', and 'delete'. Ensure secure handling as base64 encoding is not encryption.
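A minimal sketch of a declarative Secret; the name and values are placeholders, and 'stringData' lets you write plain text that the API server base64-encodes on your behalf:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials      # hypothetical name
type: Opaque
stringData:                 # plain-text input; stored base64-encoded in 'data'
  username: admin
  password: s3cr3t
```

The imperative equivalent would be 'kubectl create secret generic db-credentials --from-literal=username=admin --from-literal=password=s3cr3t'.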

Applying security contexts to pods, containers, and controllers

Security contexts allow you to set privilege and access control settings for Pods or Containers. Use 'securityContext' in the Pod spec to define privileges at the pod level. For fine-grained control within a container, use 'securityContext' under Container definition. Security context fields include capabilities (e.g., adding/removing Linux kernel capabilities), SELinux options, runAsUser/runAsGroup settings for user identity management.

Managing sensitive information securely with Kubernetes Secrets

1. Use the 'kubectl create secret' command to create a new Secret object.
2. Ensure that only authorized users have access to view or modify secrets by setting appropriate RBAC (Role-Based Access Control) rules.
3. Avoid storing sensitive data directly in YAML files; instead, use environment variables or mount them as volumes within pods for better security.

Using ConfigMaps to decouple configuration artifacts from image content

ConfigMaps in Kubernetes are used to separate configuration data from application code, allowing for easier management and updates. They can be mounted as volumes or exposed as environment variables within pods. This approach enables changes to configurations without rebuilding the container images, promoting flexibility and scalability. When creating a ConfigMap, it's essential to consider the key-value pairs that represent the configuration data required by applications running inside containers.

Defining environment variables using ConfigMaps in Kubectl

When defining environment variables with ConfigMaps, you can use the 'envFrom' field to inject all key-value pairs from a ConfigMap into a container. Another method is to specify individual keys as separate 'env' entries using 'configMapKeyRef'. Note that environment variables are read once at container startup: changes to the referenced ConfigMap are not picked up by running pods, so a restart or redeployment is required. (ConfigMaps mounted as volumes, by contrast, are eventually updated in place.)
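Both approaches can be sketched in one manifest; the names, image, and values here are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config          # hypothetical name
data:
  LOG_LEVEL: debug
  CACHE_SIZE: "128"
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx:1.25       # placeholder image
    envFrom:                # inject every key in the ConfigMap as an env var
    - configMapRef:
        name: app-config
    env:                    # or reference a single key explicitly
    - name: CACHE_SIZE
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: CACHE_SIZE
```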

Setting up RBAC for Secrets and ConfigMaps

1. Use Role-Based Access Control (RBAC) to define fine-grained access controls for Secrets and ConfigMaps.
2. Create roles or cluster roles that specify the permissions required to interact with these resources, such as get, list, watch, create, update or delete actions.
3. Bind the defined role(s) to specific users or groups using RoleBindings or ClusterRoleBindings.
4. Regularly review and audit RBAC configurations to ensure proper access control is maintained.
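Steps 2 and 3 might look like the following sketch; the role name, namespace, and user are placeholders:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: config-reader       # hypothetical role
  namespace: default
rules:
- apiGroups: [""]           # core API group
  resources: ["secrets", "configmaps"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: config-reader-binding
  namespace: default
subjects:
- kind: User
  name: jane                # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: config-reader
  apiGroup: rbac.authorization.k8s.io
```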

Best practices for handling credentials within Kubernetes clusters

1. Use Secrets: Store sensitive information such as passwords, tokens, and keys in Kubernetes Secrets to keep them secure.
2. Restrict Access: Limit access to secrets by using RBAC (Role-Based Access Control) and only grant necessary permissions.
3. Avoid Hardcoding: Refrain from hardcoding credentials directly into configuration files or code; instead, use environment variables or external secret management systems.

Incorporating Security Contexts into pod specifications

When defining security contexts in a Pod specification, you can set the privileges and access control settings for containers within the Pod. Use 'securityContext' field to define these settings at both the container level and the Pod level. This allows you to enforce policies such as running with non-root users, controlling capabilities, setting SELinux options, or configuring AppArmor profiles.
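A minimal sketch combining pod-level and container-level settings; the names, image, and numeric IDs are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-app          # hypothetical name
spec:
  securityContext:          # pod-level: applies to all containers
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000           # group ownership for mounted volumes
  containers:
  - name: app
    image: nginx:1.25       # placeholder image
    securityContext:        # container-level: overrides/extends pod settings
      runAsNonRoot: true
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]       # drop all Linux capabilities
```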

Creating a Pod

To create a pod using kubectl, use the command 'kubectl run <pod-name> --image=<container-image>'. Pods can be created imperatively or declaratively. Imperative creation involves directly running the kubectl run command, while declarative creation uses YAML files to define and create pods with specific configurations. Ensure that you have necessary permissions and access to the Kubernetes cluster before creating pods.

Scaling Deployments

1. Use kubectl scale deployment <deployment-name> --replicas=<number> to scale a deployment by increasing or decreasing the number of replicas.
2. The autoscaler can automatically adjust the number of pods in a replication controller, deployment, replica set, or stateful set based on CPU utilization and other metrics.
3. Horizontal Pod Autoscaler (HPA) is used for scaling deployments based on observed CPU utilization.

Scaling ReplicaSets

1. Use kubectl scale to adjust the number of replicas in a ReplicaSet.
2. Syntax: kubectl scale --replicas=5 rs/foo
3. This command will set the 'foo' replica set's desired number of pods to 5.
4. Scaling can also be achieved by directly editing the .spec.replicas field in YAML or JSON file and then applying it using kubectl apply -f filename.yaml.

Rolling Back Deployments

To roll back a deployment to a previous version, you can use the kubectl rollout undo command followed by the resource type and name. For example: kubectl rollout undo deployment/my-deployment. You can also specify the revision number with --to-revision flag for more precise rollback.

Updating Deployments

To update a deployment, use the kubectl set image command to change the container image of an existing deployment. For example: kubectl set image deployment/my-deployment my-container=my-new-image:tag. Use rolling updates for seamless deployments without downtime by specifying maxUnavailable and maxSurge options in your rollout strategy. Always ensure you have proper version control and backup strategies before performing any updates.

Pausing and Resuming Deployments

To pause a deployment, use the command 'kubectl rollout pause deployment <deployment-name>'. While paused, changes to the deployment spec do not trigger new rollouts, which is useful for batching multiple updates or investigating issues. To resume, use 'kubectl rollout resume deployment <deployment-name>'; any accumulated changes then roll out together.

Liveness and Readiness Probes

Liveness probes are used to determine if a container is running, while readiness probes indicate when the container is ready to serve traffic. These probes can be configured in the pod's YAML file using HTTP endpoints or commands. They help Kubernetes manage application availability by restarting containers that fail liveness checks and routing traffic away from pods that fail readiness checks.
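A sketch of both probes on a single container; the image, paths, and timings are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web                 # hypothetical name
spec:
  containers:
  - name: web
    image: nginx:1.25       # placeholder image
    livenessProbe:          # failing this restarts the container
      httpGet:
        path: /healthz      # hypothetical health endpoint
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 5
    readinessProbe:         # failing this removes the pod from Service endpoints
      httpGet:
        path: /ready        # hypothetical readiness endpoint
        port: 80
      periodSeconds: 5
```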

Multicontainer Pods

1. Multicontainer pods are a way to run multiple containers that need to work together in the same pod.
2. Each container within a multicontainer pod shares the network namespace, IPC namespace, and can communicate via localhost.
3. Use cases for multicontainer pods include sidecar pattern (e.g., logging or monitoring), adapter pattern (e.g., translating data formats), and ambassador pattern (e.g., proxying).


Creating a PersistentVolume

1. Define the PersistentVolume (PV) using a YAML file with specifications such as storage capacity, access modes, and storage class.
2. Apply the PV configuration to the cluster using 'kubectl apply -f <pv-config.yaml>'.
3. Verify that the PV is created by running 'kubectl get pv'.
4. Create a PersistentVolumeClaim (PVC) that matches the PV's capacity, access modes, and storage class; Pods then reference the PVC by name to use the volume.
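Step 1 might look like the following sketch; the name, capacity, and path are placeholders, and hostPath volumes are only suitable for single-node testing:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-data             # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce           # mountable read-write by a single node
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual  # hypothetical class for manual binding
  hostPath:                 # for local testing only; use a real backend in production
    path: /mnt/data
```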

Defining a StorageClass

A StorageClass in Kubernetes provides a way for administrators to describe the 'classes' of storage they offer. It allows dynamic provisioning and management of different types of storage, such as SSD or HDD, based on user requirements. When defining a StorageClass, key parameters include provisioner (the volume plugin responsible for creating the underlying storage), reclaimPolicy (specifies what happens when the PersistentVolume associated with this class is released), and parameters specific to each provisioner.
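As an illustrative sketch, assuming a cluster where the AWS EBS CSI driver is installed (the class name and parameters are placeholders and provisioner-specific):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd            # hypothetical class name
provisioner: ebs.csi.aws.com     # assumes the AWS EBS CSI driver
parameters:
  type: gp3                 # provisioner-specific parameter
reclaimPolicy: Delete       # delete the backing volume when the PV is released
allowVolumeExpansion: true  # permit resizing bound PVCs
volumeBindingMode: WaitForFirstConsumer   # delay binding until a pod is scheduled
```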

Provisioning storage with dynamic provisioning

Dynamic provisioning is a feature in Kubernetes that allows storage volumes to be automatically created when they are requested. This eliminates the need for cluster administrators to pre-provision storage. When a PersistentVolumeClaim (PVC) is created, it triggers the dynamic provisioner, which then creates and binds a suitable volume based on StorageClasses defined by the administrator. The StorageClass defines how persistent volumes should be dynamically provisioned.
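A PVC that would trigger dynamic provisioning might look like this sketch; the claim name, class name, and size are placeholders, and the named StorageClass must already exist in the cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim          # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: fast-ssd     # must name an existing StorageClass
  resources:
    requests:
      storage: 5Gi          # the provisioner creates a volume of at least this size
```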

Using volume expansion to increase the size of a PVC/PV dynamically

When using Kubernetes, you can expand PersistentVolumeClaims (PVCs) and their associated PersistentVolumes (PVs) dynamically. This is achieved by updating the storage request in the PVC spec with an increased value. The underlying storage class must support dynamic provisioning and resizing for this feature to work effectively. After modifying the PVC's capacity, Kubernetes will automatically resize its corresponding PV if supported by both the cloud provider and StorageClass.

Specifying access modes for persistent volumes and claims

Access modes specify how the volume can be mounted. There are three access modes: ReadWriteOnce (RWO) - the volume can be mounted as read-write by a single node, ReadOnlyMany (ROX) - the volume can be mounted read-only by many nodes, ReadWriteMany (RWX) - the volume can be mounted as read-write by many nodes simultaneously.

Managing Reclaim Policies for PVs

When a PersistentVolumeClaim (PVC) is deleted, the reclaim policy of the bound PersistentVolume (PV) determines what happens to the volume. The available policies are Retain, Delete, and the deprecated Recycle.
- 'Retain' keeps the volume and its data after release; an administrator must reclaim it manually.
- 'Delete' automatically removes both the PV object and the underlying storage asset.
- 'Recycle' (deprecated) performed a basic scrub before making the volume available again; dynamic provisioning is the recommended replacement.

VolumeSnapshot API

The VolumeSnapshot API in Kubernetes provides a way to capture the state of a persistent volume at a particular point in time. It allows for data protection and disaster recovery by creating snapshots that can be used to restore volumes or create new ones from those snapshots. The VolumeSnapshot objects are defined using Custom Resource Definitions (CRDs) and managed through controllers, enabling users to easily manage their storage resources within the cluster.

Configuring Storage Quotas

1. Use 'kubectl create quota' (or a ResourceQuota manifest) to define storage quotas in a namespace.
2. Limit the number of PersistentVolumeClaims with the 'persistentvolumeclaims' count, and cap the total requested storage with 'requests.storage'.
3. Quotas can also constrain per-StorageClass usage via keys of the form '<storage-class-name>.storageclass.storage.k8s.io/requests.storage'.
4. Monitor usage with 'kubectl describe quota <quota-name>' to check whether resources are approaching their allocated limits.
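As an illustrative sketch, a ResourceQuota limiting PVC count and total requested storage in a namespace (the namespace and numbers are placeholders):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota       # hypothetical name
  namespace: team-a         # hypothetical namespace
spec:
  hard:
    persistentvolumeclaims: "10"   # max number of PVCs in the namespace
    requests.storage: 100Gi        # total storage all PVCs may request
```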

Exposing Services

When exposing a service, use the 'kubectl expose' command followed by the resource type (deployment or pod), name of the resource, and port. Specify --type=NodePort to expose on each node's IP at a static port. Use --type=LoadBalancer for cloud providers that support external load balancers.

Creating a Service

Services in Kubernetes enable networking and communication between different pods. Use 'kubectl expose' command to create a new service, specifying the pod selector with '--selector'. Services can be of type ClusterIP (default), NodePort or LoadBalancer. Labels are used to select which pods will receive traffic from the service.
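A declarative equivalent might be sketched as follows; the service name, labels, and ports are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc             # hypothetical name
spec:
  type: ClusterIP           # default; use NodePort or LoadBalancer for external access
  selector:
    app: web                # routes traffic to pods labeled app=web
  ports:
  - port: 80                # port the Service listens on
    targetPort: 8080        # port the pods' containers listen on
```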

Service Discovery and DNS

- Kubernetes uses DNS for service discovery within the cluster.
- Each Service gets a unique hostname in the format: my-svc.my-namespace.svc.cluster.local
- Pods can access services using this naming convention, allowing dynamic scaling and load balancing without needing to know IP addresses.

Network Policies

- Network policies are used to control the traffic between different pods in a Kubernetes cluster.
- They define how groups of pods can communicate with each other and with other network endpoints.
- Rules within network policies specify which pod labels match the traffic, what types of connections are allowed or denied, and from/to where.
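An illustrative policy allowing only pods labeled 'app=frontend' to reach pods labeled 'app=backend' on TCP 8080; all names are placeholders, and a CNI plugin with NetworkPolicy support is assumed:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend      # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: backend          # the pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend     # only these pods may connect
    ports:
    - protocol: TCP
      port: 8080
```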

Ingress Resources

In Kubernetes, Ingress resources are used to manage external access to services within a cluster. They provide HTTP and HTTPS routing as well as load balancing capabilities. When creating an Ingress resource, the rules for directing traffic based on hostnames or paths need to be defined. Additionally, annotations can be utilized in Ingress resources to configure advanced settings such as SSL certificates and timeouts.
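A minimal sketch, assuming an NGINX ingress controller is installed; the hostname, service name, and annotation are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress         # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"   # controller-specific setting
spec:
  ingressClassName: nginx   # assumes an NGINX ingress controller
  rules:
  - host: example.com       # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc   # hypothetical backing Service
            port:
              number: 80
```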

EndpointSlices

EndpointSlices are an API resource that provides a more scalable and efficient way to track endpoints than the older Endpoints resource. They divide large service endpoint lists into smaller, more manageable pieces for better performance. EndpointSlice objects contain subsets of the endpoints in a Service object, allowing finer control over traffic distribution and reducing load on the Kubernetes API server.

Using kubectl to Manage Network Resources

1. View all services in the current namespace: Use 'kubectl get services'.
2. Expose a service using NodePort: Execute 'kubectl expose deployment <deployment-name> --type=NodePort --port=<port>'.
3. Check network policies for pods and namespaces: Employ 'kubectl get networkpolicies'.

Load Balancing with Services

A Kubernetes Service is an abstraction that defines a logical set of Pods and a policy for accessing them, including exposure to external traffic. Types include ClusterIP, NodePort, LoadBalancer, and ExternalName. The 'type: LoadBalancer' service type integrates with cloud providers to create load balancers for the specified pods.

https://www.cheatrepo.com/sheet/Kubectl-9dd2cb
Last Updated: Fri Apr 11 2025
