Intro

Enterprises using Kubernetes often need to autoscale their resources based on more than just CPU usage, for example on concurrent persistent connections or queue length.
This post walks through an incident in which one of our customers enabled autoscaling for their application and, one day, nearly all of their Pods disappeared.



The Setup

In Kubernetes, the HorizontalPodAutoscaler automatically scales the number of Pods in a deployment, replication controller, replica set, or stateful set based on observed CPU utilization or on application-provided custom metrics.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 60

In this Deployment the replica count is set explicitly; in this case, the customer was running 60 replicas of the application.
They wanted to stop managing that number by hand and enable autoscaling with a HorizontalPodAutoscaler.

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50

This manifest is essentially a description of an autoscaling policy: it looks at the CPU usage of the Pods and scales the Deployment up or down toward the specified target utilization.
Once it is applied, the controller behind the HorizontalPodAutoscaler manages the replica count of the Deployment based on the metric targets defined in the manifest.
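
The same policy can also be created imperatively instead of from a manifest. A minimal sketch, assuming the Deployment above is named nginx-deployment:

# Create an HPA targeting 50% average CPU utilization, between 1 and 10 replicas
kubectl autoscale deployment nginx-deployment --cpu-percent=50 --min=1 --max=10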

The Challenge

One day the customer decided that they no longer wanted to set that value explicitly: they removed the replicas field from their spec so that the number of Pods would become a dynamic runtime decision.
What happens when a value like this was previously specified in a manifest applied with kubectl? kubectl records what the manifest looked like at apply time in the kubectl.kubernetes.io/last-applied-configuration annotation on the object, and it uses that record on the next apply to decide which fields it owns.
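
You can inspect that record directly. A quick check, assuming the Deployment from the earlier example:

# Print the manifest kubectl recorded during the last apply
kubectl apply view-last-applied deployment/nginx-deployment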

The Event

When they submitted the new manifest to the Kubernetes API server, the Deployment was scaled down from 60 replicas to 1, terminating 59 Pods of their application.

The Root Cause

When you run kubectl apply with a new manifest, kubectl computes a patch by performing a three-way merge between the last-applied-configuration annotation, the live object, and the new manifest. Because replicas was present in the recorded configuration but missing from the new manifest, the patch removed the field from the live object, and the API server fell back to the default of 1. The HorizontalPodAutoscaler only reacts afterwards, based on observed metrics, so it could not prevent the drop. In their case that meant 59 replicas of the application were scaled down at once.
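
One way to catch this class of change before it lands is to preview what the server would actually modify. A sketch, assuming the updated manifest is saved as nginx-deployment.yaml (a hypothetical filename):

# Show the difference between the live object and the manifest about to be applied;
# a replicas change from 60 to 1 would show up here before any Pods are touched
kubectl diff -f nginx-deployment.yaml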

The Fix

There is no great workaround for this one. Unfortunately you need to adjust the recorded annotation at the same time as you submit your manifest: go in with kubectl edit, set the replica count you actually want, and remove the replicas entry from the last-applied-configuration annotation so that the next apply does not scale all of your Pods down.

For the Deployment in this example, you can edit it via:

kubectl edit deployment nginx-deployment
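
kubectl can also rewrite the recorded annotation for you. A sketch, assuming you want the annotation to stop tracking the replicas field:

# Open the last-applied-configuration annotation in an editor;
# deleting the replicas line here means the next kubectl apply
# will no longer remove the field from the live object
kubectl apply edit-last-applied deployment/nginx-deployment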

Conclusion

We have seen that to avoid accidentally scaling down your Pods, you need to interactively edit the resources in your cluster. You can minimize this kind of service disruption by adopting a failure-prevention strategy. By using Kalc’s Kubernetes cluster validator, you can protect your cluster from accidental scale-down events. Our AI-first approach covers substantially more risk-mitigation scenarios than just scaling. The Kubernetes simulator checks whether a failure condition is reachable from your current cluster setup without disrupting normal cluster operation, and after scanning your cluster it produces a report with the full root-cause event chain.