4 Major Ways to Restart a Pod in Kubernetes

In Kubernetes, a pod restart happens when the system, either automatically or at your request, stops a pod and recreates it. Understanding why and how this happens is essential to maintaining a stable and efficient Kubernetes environment.

However, Kubernetes does not offer a direct restart command for pods. Instead, it follows a declarative approach in which the system continuously works toward the desired state. If a pod crashes, Kubernetes automatically attempts to restart it based on its restart policy.

In this guide, we will explore the reasons pods restart and the methods to restart a pod in Kubernetes effectively.

Automatic Pod Restarts in Kubernetes

Kubernetes automatically restarts pods in multiple scenarios, such as: 

  • Container Failures: If a container inside a pod crashes, the kubelet restarts it based on the pod’s restartPolicy.
  • Node Failures: If a node fails, Kubernetes reschedules the pod on another available node.
  • Liveness Probe Failures: If a liveness probe fails repeatedly, Kubernetes restarts the container.
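
For reference, here is a minimal pod manifest showing where the restartPolicy and a liveness probe are declared; the name, image, and health endpoint below are illustrative placeholders, not values from this guide:

apiVersion: v1
kind: Pod
metadata:
  name: demo-app            # hypothetical name for illustration
spec:
  restartPolicy: Always     # kubelet restarts failed containers automatically
  containers:
    - name: web
      image: nginx:1.25     # illustrative image
      livenessProbe:
        httpGet:
          path: /healthz    # assumed health endpoint; adjust for your app
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
        failureThreshold: 3 # container is restarted after 3 consecutive failures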

4 Methods To Restart A Pod In Kubernetes

There are four basic methods that you can use to restart a pod in Kubernetes.

Method 1: Deleting the Pod (Recommended Approach)

The most effective way to restart a pod in Kubernetes is by deleting the current one. Kubernetes automatically recreates the pod based on its associated Deployment, StatefulSet, or DaemonSet.

Using kubectl delete pod

To delete a specific pod, run:

kubectl delete pod <pod-name>

This command removes the current pod. If the pod is managed by a Deployment, ReplicaSet, or StatefulSet, Kubernetes automatically creates a new pod to maintain the desired number of replicas.
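
For example, assuming a Deployment-managed pod with the (hypothetical) name my-app-7d4b9c6f5-x2k4q, you would first list the pods to find the real name, then delete it:

kubectl get pods
kubectl delete pod my-app-7d4b9c6f5-x2k4q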

Automatic Pod Restart by Deployment

Alternatively, if the pod is part of a Deployment, ReplicaSet, or StatefulSet, Kubernetes ensures that the required number of pods is maintained. Once the existing pod is deleted, a new one is created in its place automatically.

To verify the pod restart:

kubectl get pods

When you verify the pod restart, you should see a new pod with a different name replacing the one that was deleted. 
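
To watch the replacement happen in real time, you can add the standard -w (watch) flag:

kubectl get pods -w

The old pod should move to Terminating while a new pod with a fresh name transitions to Running.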

This is the most commonly recommended method because it causes minimal disruption and lets the controller handle the recreation for you.

Method 2: Rolling Restart of Deployments

A rolling restart of a deployment lets you restart all of its pods without any downtime. This is the best option when you need to maintain application availability.

Using kubectl rollout restart deployment

To restart all pods in a deployment, run:

kubectl rollout restart deployment <deployment-name>

This command triggers a rolling update, replacing old pods with new ones sequentially.
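
Under the hood, kubectl does this by adding a kubectl.kubernetes.io/restartedAt annotation to the pod template, which changes the template and triggers the rolling update. You can confirm the annotation was set with:

kubectl get deployment <deployment-name> -o yaml | grep restartedAt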

Ensuring Zero Downtime

Kubernetes replaces each pod one by one while keeping the service available throughout. To verify that the rollout was successful, run the following command:

kubectl rollout status deployment <deployment-name>

This method is useful when updating application configurations or refreshing pods while maintaining uptime.

Method 3: Scaling the Deployment Down and Up

Another approach to restart a pod in Kubernetes is to scale the deployment down to zero replicas and then back up to its original count.

Using kubectl scale to Restart Pods

First, scale down to remove all pods:

kubectl scale deployment <deployment-name> --replicas=0

Then, scale back up to recreate the pods:

kubectl scale deployment <deployment-name> --replicas=<desired-replica-count>
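
If you are unsure of the original replica count, record it before scaling down; jsonpath output is a standard kubectl feature:

kubectl get deployment <deployment-name> -o jsonpath='{.spec.replicas}'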

When to Use Scaling for Restarts

  • When you need to restart all pods at once.
  • When a rolling restart is not resolving the issue.
  • When troubleshooting application-level issues in a controlled environment.

Method 4: Restarting kubelet on the Node

If pod restart issues persist due to node-related problems, restarting the kubelet service on the affected node can help.

Restarting the kubelet Service

On the affected node, restart kubelet using:

sudo systemctl restart kubelet

Impact on Running Pods

  • May cause some pods to be evicted and rescheduled.
  • Can temporarily impact workloads running on the node.
  • Recommended only when necessary, such as troubleshooting node health issues.
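
Before and after restarting kubelet, it is worth checking the service and node state with standard systemd and kubectl commands:

sudo systemctl status kubelet
journalctl -u kubelet -n 50
kubectl get nodes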

Best Practices for Restarting Kubernetes Pods

You should be mindful of application stability when you restart a pod in Kubernetes to ensure a smooth process. Also keep these best practices in mind for added stability.

  1. Use Rolling Restarts for Minimal Downtime

Prefer kubectl rollout restart deployment <deployment-name> to restart pods gradually; replacing them one at a time ensures zero downtime.

Monitor the rollout status with:
kubectl rollout status deployment <deployment-name>

  2. Avoid Manual Deletion Unless Necessary

If a pod is misbehaving, use kubectl delete pod <pod-name> only if it’s managed by a Deployment, StatefulSet, or ReplicaSet.

  3. Scale Down and Up for a Hard Reset

Use kubectl scale deployment <deployment-name> --replicas=0 followed by scaling back up. This method is your best bet when troubleshooting stubborn issues.

  4. Restart kubelet Only as a Last Resort

Restarting kubelet (sudo systemctl restart kubelet) can affect running workloads. Use only when pods are stuck or failing due to node issues.

  5. Monitor Logs and Events Before Restarting

Check logs to understand the issue:
kubectl logs <pod-name>
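
If the container has already crashed and been restarted, the logs of the previous instance are often more revealing; --previous is a standard kubectl flag:

kubectl logs <pod-name> --previous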

View Kubernetes events for possible errors:
kubectl get events --sort-by=.metadata.creationTimestamp

  6. Automate Pod Recovery Using Restart Policies

Set appropriate restart policies in pod definitions (Always, OnFailure, Never) to ensure smooth self-healing. 
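
Note that restartPolicy is a pod-level field, and pods created by a Deployment only support Always; OnFailure and Never are typically used for bare pods and Jobs. A minimal pod-spec snippet (container name and image are illustrative):

spec:
  restartPolicy: OnFailure   # restart the container only when it exits with a non-zero code
  containers:
    - name: batch-task       # hypothetical container
      image: busybox:1.36    # illustrative image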

  7. Test in a Staging Environment Before Restarting in Production

As with any such process, it is best to test the method in a staging environment before applying it in production.

Troubleshooting Guide To Restart A Pod In Kubernetes

| Issue | Possible Cause | Solution |
| --- | --- | --- |
| Pod is not restarting after deletion | Pod is not part of a Deployment, ReplicaSet, or StatefulSet | Manually recreate the pod or ensure it is managed by a controller. |
| kubectl rollout restart is not working | The deployment has paused updates | Resume the deployment with: kubectl rollout resume deployment <deployment-name> |
| Pod stuck in Terminating state | Network issues or finalizers preventing termination | Force delete the pod with: kubectl delete pod <pod-name> --force --grace-period=0 |
| Pod stuck in CrashLoopBackOff | Application inside the container is failing repeatedly | Check logs with kubectl logs <pod-name> and describe the pod for error details. |
| Pod not recreating after kubectl delete | Deployment is set to 0 replicas or misconfigured | Scale the deployment with: kubectl scale deployment <deployment-name> --replicas=1 |
| Pods are restarting too frequently | Liveness probe is failing | Adjust the liveness probe settings in the deployment YAML file. |
| Worker nodes are unresponsive | kubelet is not running properly | Restart kubelet on the affected node: sudo systemctl restart kubelet |
| Pod stuck in Pending state | No available nodes or resource limits reached | Check node status with kubectl get nodes and verify resource allocation. |
| Cannot restart pod via kubectl scale | Deployment replica count already matches the request | Manually delete a pod or use kubectl rollout restart deployment <deployment-name>. |
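
When a row above points you to pod details, kubectl describe is the standard way to see a pod's recent events and state transitions:

kubectl describe pod <pod-name>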

Wrapping Up – Restart A Pod In Kubernetes

Knowing a range of commands to restart a pod helps you choose the best one when the situation presents itself. In general, the rolling restart is the safest default; however, being mindful of the situation and assessing the task at hand is always the way to go!

FAQs

What happens when I delete a pod in Kubernetes?

If the pod is managed by a Deployment, ReplicaSet, or StatefulSet, Kubernetes will automatically recreate it to maintain the desired state.

Why is my pod stuck in a terminating state after deletion?

A pod may be stuck in Terminating if it has a finalizer attached or is waiting for a resource cleanup. Use the force delete command:
kubectl delete pod <pod_name> --grace-period=0 --force

Does restarting a pod delete its data?

Yes, if the data is stored inside the pod's ephemeral filesystem. Use persistent volumes to ensure data persists across restarts.

Marium Fahim
Hi! I am Marium, and I am a full-time content marketer fueled by an iced coffee. I mainly write about tech, and I absolutely love doing opinion-based pieces. Hit me up at [email protected].