Add concept page about cluster autoscaling
Co-Authored-By: Niranjan Darshann <[email protected]>
sftim and niranjandarshann committed Mar 5, 2024
1 parent 11fc2c8 commit b39e01b
Showing 8 changed files with 131 additions and 22 deletions.
2 changes: 2 additions & 0 deletions content/en/docs/concepts/architecture/nodes.md
@@ -578,6 +578,8 @@ Learn more about the following:
* [API definition for Node](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core).
* [Node](https://git.k8s.io/design-proposals-archive/architecture/architecture.md#the-kubernetes-node)
section of the architecture design document.
* [Cluster autoscaling](/docs/concepts/cluster-administration/cluster-autoscaling/) to
manage the number and size of nodes in your cluster.
* [Taints and Tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/).
* [Node Resource Managers](/docs/concepts/policy/node-resource-managers/).
* [Resource Management for Windows nodes](/docs/concepts/configuration/windows-resource-management/).
1 change: 1 addition & 0 deletions content/en/docs/concepts/cluster-administration/_index.md
@@ -52,6 +52,7 @@ Before choosing a guide, here are some considerations:
## Managing a cluster

* Learn how to [manage nodes](/docs/concepts/architecture/nodes/).
* Read about [cluster autoscaling](/docs/concepts/cluster-administration/cluster-autoscaling/).

* Learn how to set up and manage the [resource quota](/docs/concepts/policy/resource-quotas/) for shared clusters.

2 changes: 1 addition & 1 deletion content/en/docs/concepts/cluster-administration/addons.md
@@ -1,7 +1,7 @@
---
title: Installing Addons
content_type: concept
weight: 120
weight: 150
---

<!-- overview -->
117 changes: 117 additions & 0 deletions content/en/docs/concepts/cluster-administration/cluster-autoscaling.md
@@ -0,0 +1,117 @@
---
title: Cluster Autoscaling
linkTitle: Cluster Autoscaling
description: >-
Automatically manage the nodes in your cluster to adapt to demand.
content_type: concept
weight: 120
---

<!-- overview -->

Kubernetes requires {{< glossary_tooltip text="nodes" term_id="node" >}} in your cluster to
run {{< glossary_tooltip text="pods" term_id="pod" >}}. This means providing capacity for
the workload Pods and for Kubernetes itself.

You can adjust the amount of resources available in your cluster automatically;
this is called _node autoscaling_. You can either change the number of nodes, or
change the capacity that nodes provide. The first approach is referred to as
_horizontal scaling_, while the second is referred to as _vertical scaling_.

Kubernetes can even provide multidimensional automatic scaling for nodes.

<!-- body -->

## Manual node management

You can manually manage node-level capacity, where you configure a fixed number of nodes;
you can use this approach even if the provisioning (the process to set up, manage, and
decommission) for these nodes is automated.

This page is about taking the next step: automating the management of the amount of
node capacity (CPU, memory, and other node resources) available in your cluster.

## Automatic horizontal scaling {#autoscaling-horizontal}

### Cluster Autoscaler

You can use the [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) to manage the scale of your nodes automatically.
The cluster autoscaler can integrate with a cloud provider, or with Kubernetes'
[Cluster API](https://github.com/kubernetes/autoscaler/blob/c6b754c359a8563050933a590f9a5dece823c836/cluster-autoscaler/cloudprovider/clusterapi/README.md),
to achieve the actual node management that's needed.

The cluster autoscaler adds nodes when there are unschedulable Pods, and
removes nodes when those nodes are empty.
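
For example, you typically run the cluster autoscaler as a Deployment inside the
cluster and set per-node-group size limits with command-line flags. The manifest
below is a minimal sketch for the AWS integration; the image tag, the node group
name `my-node-group`, and the flag values are illustrative assumptions, not
recommendations:

```yaml
# A minimal sketch of a cluster autoscaler Deployment (AWS integration).
# Image tag, node group name, and flag values are illustrative; see the
# cluster autoscaler README for the options your cloud provider supports.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      serviceAccountName: cluster-autoscaler
      containers:
        - name: cluster-autoscaler
          image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.29.0
          command:
            - ./cluster-autoscaler
            - --cloud-provider=aws
            # --nodes takes min:max:node-group-name; keep this node
            # group between 1 and 10 nodes
            - --nodes=1:10:my-node-group
            # only consider removing a node once its utilization drops below 50%
            - --scale-down-utilization-threshold=0.5
```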

#### Cloud provider integrations {#cluster-autoscaler-providers}

The [README](https://github.com/kubernetes/autoscaler/tree/c6b754c359a8563050933a590f9a5dece823c836/cluster-autoscaler#readme)
for the cluster autoscaler lists some of the cloud provider integrations
that are available.

## Cost-aware multidimensional scaling {#autoscaling-multi-dimension}

### Karpenter {#autoscaler-karpenter}

[Karpenter](https://karpenter.sh/) supports direct node management, via
plugins that integrate with specific cloud providers, and can manage nodes
for you while optimizing for overall cost.

> Karpenter automatically launches just the right compute resources to
> handle your cluster's applications. It is designed to let you take
> full advantage of the cloud with fast and simple compute provisioning
> for Kubernetes clusters.

The Karpenter tool is designed to integrate with a cloud provider that
provides API-driven server management, and where the price information for
available servers is also available via a web API.

For example, if you start some more Pods in your cluster, the Karpenter
tool might provision a new node that is larger than one of the nodes you are
already using, and then shut down an existing node once the new node
is in service.
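
As an illustration, Karpenter is typically configured through `NodePool` resources
that describe what kinds of nodes it may provision. The sketch below uses the
`karpenter.sh/v1beta1` API; the requirements, the CPU limit, and the `nodeClassRef`
name `default` are example values, and the exact schema depends on your Karpenter
version and cloud provider integration:

```yaml
# A sketch of a Karpenter NodePool; all values here are illustrative.
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        # Allow both on-demand and (usually cheaper) spot capacity
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
      nodeClassRef:
        name: default # provider-specific node class, for example an EC2NodeClass
  limits:
    cpu: 1000 # never provision more than 1000 CPU cores in total
  disruption:
    # Replace or remove nodes when their Pods would fit on cheaper capacity
    consolidationPolicy: WhenUnderutilized
```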

#### Cloud provider integrations {#karpenter-providers}

{{% thirdparty-content vendor="true" %}}

There are integrations available between Karpenter's core and the following
cloud providers:

- [Amazon Web Services](https://github.com/aws/karpenter-provider-aws)
- [Azure](https://github.com/Azure/karpenter-provider-azure)

## Related components

### Descheduler

The [descheduler](https://github.com/kubernetes-sigs/descheduler) can help you
consolidate Pods onto a smaller number of nodes, to help with automatic scale-down
when the cluster has spare capacity.
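
For example, the descheduler's `HighNodeUtilization` strategy evicts Pods from
nodes whose utilization is below a threshold, so that the rescheduled Pods get
packed onto fewer nodes and the emptied nodes become candidates for scale-down.
A sketch of such a policy, with illustrative thresholds:

```yaml
# A sketch of a descheduler policy; the thresholds are illustrative.
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "HighNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        # Nodes using less than 20% of their CPU and memory are treated as
        # underutilized; Pods are evicted from them so the workload can be
        # consolidated onto the remaining nodes.
        thresholds:
          "cpu": 20
          "memory": 20
```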

### Sizing a workload based on cluster size

#### Cluster proportional autoscaler

For workloads that need to be scaled based on the size of the cluster (for example
`cluster-dns` or other system components), you can use the
[_Cluster Proportional Autoscaler_](https://github.com/kubernetes-sigs/cluster-proportional-autoscaler).

The Cluster Proportional Autoscaler watches the number of schedulable nodes
and cores, and scales the number of replicas of the target workload accordingly.
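
For example, in its `linear` control mode the Cluster Proportional Autoscaler
sets the replica count to the larger of `ceil(cores / coresPerReplica)` and
`ceil(nodes / nodesPerReplica)`, clamped between `min` and `max`. The parameters
live in a ConfigMap; the name, namespace, and numbers below are illustrative:

```yaml
# A sketch of a linear-mode configuration for the Cluster Proportional
# Autoscaler; all values are illustrative.
kind: ConfigMap
apiVersion: v1
metadata:
  name: dns-autoscaler
  namespace: kube-system
data:
  linear: |-
    {
      "coresPerReplica": 256,
      "nodesPerReplica": 16,
      "min": 1,
      "max": 100
    }
```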

#### Cluster proportional vertical autoscaler

If the number of replicas should stay the same, you can scale your workloads vertically according to the cluster size using
the [_Cluster Proportional Vertical Autoscaler_](https://github.com/kubernetes-sigs/cluster-proportional-vertical-autoscaler).
This project is in **beta** and can be found on GitHub.

While the Cluster Proportional Autoscaler scales the number of replicas of a workload, the Cluster Proportional Vertical Autoscaler
adjusts the resource requests for a workload (for example a Deployment or DaemonSet) based on the number of nodes and/or cores
in the cluster.

## {{% heading "whatsnext" %}}

- Read about [workload-level autoscaling](/docs/concepts/workloads/autoscaling/)
10 changes: 3 additions & 7 deletions content/en/docs/concepts/workloads/autoscaling.md
@@ -129,13 +129,8 @@ its [`Cron` scaler](https://keda.sh/docs/2.13/scalers/cron/). The `Cron` scaler
If scaling workloads isn't enough to meet your needs, you can also scale your cluster infrastructure itself.

Scaling the cluster infrastructure normally means adding or removing {{< glossary_tooltip text="nodes" term_id="node" >}}.
This can be done using one of two available autoscalers:

- [**Cluster Autoscaler**](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler)
- [**Karpenter**](https://github.com/kubernetes-sigs/karpenter?tab=readme-ov-file)

Both scalers work by watching for pods marked as _unschedulable_ or _underutilized_ nodes and then adding or
removing nodes as needed.
Read [cluster autoscaling](/docs/concepts/cluster-administration/cluster-autoscaling/)
for more information.

## {{% heading "whatsnext" %}}

@@ -144,3 +139,4 @@ removing nodes as needed.
- [HorizontalPodAutoscaler Walkthrough](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/)
- [Resize Container Resources In-Place](/docs/tasks/configure-pod-container/resize-container-resources/)
- [Autoscale the DNS Service in a Cluster](/docs/tasks/administer-cluster/dns-horizontal-autoscaling/)
- Learn about [cluster autoscaling](/docs/concepts/cluster-administration/cluster-autoscaling/)
4 changes: 1 addition & 3 deletions content/en/docs/setup/best-practices/cluster-large.md
@@ -121,9 +121,7 @@ Learn more about [Vertical Pod Autoscaler](https://github.com/kubernetes/autosca
and how you can use it to scale cluster
components, including cluster-critical addons.

* The [cluster autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#readme)
integrates with a number of cloud providers to help you run the right number of
nodes for the level of resource demand in your cluster.
* Read about [cluster autoscaling](/docs/concepts/cluster-administration/cluster-autoscaling/).

* The [addon resizer](https://github.com/kubernetes/autoscaler/tree/master/addon-resizer#readme)
helps you resize the addons automatically as your cluster's scale changes.
12 changes: 3 additions & 9 deletions content/en/docs/setup/production-environment/_index.md
@@ -183,15 +183,9 @@ simply as *nodes*).
to help determine how many nodes you need, based on the number of pods and
containers you need to run. If you are managing nodes yourself, this can mean
purchasing and installing your own physical equipment.
- *Autoscale nodes*: Most cloud providers support
[Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#readme)
to replace unhealthy nodes or grow and shrink the number of nodes as demand requires. See the
[Frequently Asked Questions](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md)
for how the autoscaler works and
[Deployment](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#deployment)
for how it is implemented by different cloud providers. For on-premises, there
are some virtualization platforms that can be scripted to spin up new nodes
based on demand.
- *Autoscale nodes*: Read [Cluster Autoscaling](/docs/concepts/cluster-administration/cluster-autoscaling/) to learn about the
tools available to automatically manage your nodes and the capacity they
provide.
- *Set up node health checks*: For important workloads, you want to make sure
that the nodes and pods running on those nodes are healthy. Using the
[Node Problem Detector](/docs/tasks/debug/debug-cluster/monitor-node-health/)
@@ -596,8 +596,9 @@ guidelines, which cover this exact use case.

## {{% heading "whatsnext" %}}

If you configure autoscaling in your cluster, you may also want to consider running a
cluster-level autoscaler such as [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler).
If you configure autoscaling in your cluster, you may also want to consider using
[cluster autoscaling](/docs/concepts/cluster-administration/cluster-autoscaling/)
to ensure you are running the right number of nodes.

For more information on HorizontalPodAutoscaler:

