Blog

This is the blog section. It has two categories: Announcements and Our Findings. Posts are listed in reverse chronological order.

Announcements

All the Ksctl announcements

Ksctl v2 Release

Why Ksctl v2 was released, and the new features and improvements it brings

Announcing Ksctl v2: A New Era of Effortless Kubernetes Management

We’re thrilled to announce the release of Ksctl v2! This major update is designed to simplify your Kubernetes journey—whether you’re a seasoned operator or just starting out. With v2, we’ve reimagined the way you interact with your clusters, taking away the pain of endless command-line flags and help lookups, and putting a smooth, intuitive experience at your fingertips.


🤔 Why Ksctl v2?

If you’ve ever found yourself repeatedly typing ksctl create --help just to recall the right flag, you’re not alone. Ksctl v2 was born out of our commitment to make Kubernetes cluster provisioning as frictionless as possible. Here’s what we aimed to solve with this release:

  • Simplified User Experience: Now, you simply specify the resource and the associated action, and Ksctl takes care of the rest—whether it’s listing VM types, regions, Kubernetes versions, or even providing cost insights on cloud resources.
  • Lowering the Entry Barrier: We know that new users often struggle with the complexities of Kubernetes.
    • Ksctl v2 is designed to be developer-friendly by offering clear defaults and guided choices. Currently we help you sort out a suitable cloud provider instance_type, and we default to the x86 architecture for maximum compatibility, though we are working on adding support for the arm architecture as well. In the future we will also add a comparison feature for cloud providers and architectures (AWS vs Azure vs GCP, arm vs x86), covering cost, recommendations based on your application’s requirements, and sustainability, so that you can focus on utilizing amazing Kubernetes technology, not configuring it. 🫡
  • Unified Tooling: Expanding beyond cluster provisioning, Ksctl now incorporates enhanced capabilities for cluster management and add-on support, all streamlined under one roof.

🆕 What’s New in Ksctl v2?

This release brings a host of improvements and new features to transform your Kubernetes management experience:

Redesign of Ksctl core: provisioner and cluster management components

🆕 New UX & CLI Improvements

v1.3.1 → v2.0.0

Previously, you needed to remember exactly what the vm_type, region, sku, version, etc. were. Now you just specify the cloud provider and the rest is taken care of by ksctl: you simply choose from select prompts, which also show the cost of each instance_type along with the block storage. A great step towards making the tool more user-friendly. 🎉
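
As a rough illustration of the flow described above: the command shape, prompt text, and option values in this sketch are hypothetical reconstructions from the description, not actual tool output.

    # Hypothetical session; the command, prompts, and options are illustrative only
    $ ksctl cluster create
    ? Select the cloud provider       › aws
    ? Select the region               › us-east-1
    ? Select the instance type        › t3.medium   (cost shown here, with block storage)
    ? Select the kubernetes version   › 1.32.0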

Enhanced Cloud Credential Management

  • Flexible Credentials: Cloud providers now receive credentials directly from the client. This change empowers you to choose your credential source while ensuring providers get them securely via context variables. For now, all credentials are stored on your local system; for that, check out the $ ksctl configure ... command.

Advanced Cluster Management

  • New Cluster-Management Controller: We’ve introduced a controller for managing clusters, allowing you to deploy it even on clusters not created with Ksctl. Initially focused on Ksctl-created clusters, this functionality is poised to become universally applicable.
  • Self-Managed Clusters: The old ha keyword is replaced with selfmanaged to clearly indicate the type of cluster you’re operating.

Networking & Node Operations

  • Cilium eBPF CNI with Hubble UI: Cilium now ships with the Hubble UI enabled by default, enhancing your cluster’s observability.
  • Safer Node Deletion: A node drain step is now performed before any node deletion, ensuring smoother operations.
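
For reference, the drain-then-delete sequence that ksctl now performs is roughly equivalent to these standard kubectl steps (the node name is a placeholder):

    # Cordon and drain the node so workloads are rescheduled first
    kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
    # Only then remove the node object from the cluster
    kubectl delete node <node-name>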

Robust State Management & Addons

  • Improved State Document: We’ve migrated versioning for components like Aks, Eks, Kind, HAProxy, and Etcd to a new field, ensuring better consistency and smoother migrations.

  • New Addons System: Customize your cluster with our new addons framework. Simply specify the addons you want (e.g., switching CNI from “none” to “cilium”) in your configuration, and let Ksctl handle the rest.

    Here’s a snippet of the under-the-hood configuration ksctl uses to create a Cilium 🐝 enabled cluster:

    {
      "cluster_name": "test-e2e-local-cilium",
      "kubernetes_version": "1.32.0",
      "cloud_provider": "local",
      "storage_type": "store-local",
      "cluster_type": "managed",
      "desired_no_of_managed_nodes": 2,
      "addons": [
        {
          "label": "kind",
          "name": "none",
          "cni": true
        },
        {
          "label": "ksctl",
          "name": "cilium",
          "cni": true
        }
      ]
    }
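
Once such a cluster is up, the Cilium rollout can be verified with standard kubectl (the label selector below is Cilium’s usual one, not something ksctl-specific):

    # Cilium agents run as a DaemonSet in kube-system
    kubectl -n kube-system get pods -l k8s-app=cilium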
    

New Tools & CLI Improvements

  • Two New Kubernetes Controllers: Meet ksctl/ka and ksctl/kcm! These controllers manage the application stack and cluster management addons respectively.
  • Streamlined CLI Operations:
    • Self-Update Command: Keeping your CLI up-to-date is now a breeze.
    • Improved Cluster Connection: When you run $ ksctl cluster connect, your kubeconfig is automatically merged into ~/.kube/config and the current context is set accordingly (see the equivalent manual steps below).
    • Bug Fixes & Enhancements: From loading cloud credentials to scaling self-managed clusters, we’ve addressed several key issues to enhance reliability.
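
For context, what $ ksctl cluster connect automates can be reproduced manually with standard kubectl commands (the paths and context name are placeholders):

    # Merge the new kubeconfig into ~/.kube/config
    KUBECONFIG=~/.kube/config:/path/to/new-kubeconfig \
      kubectl config view --flatten > /tmp/merged-config
    mv /tmp/merged-config ~/.kube/config
    # Switch to the new cluster's context
    kubectl config use-context <new-context>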


⚠️ Important Compatibility Notice

With this release, we’re making significant changes that break backward compatibility:

  • Module Transition: We are transitioning to the module name github.com/ksctl/ksctl/v2.
  • Cloud Provider Update: The civo cloud provider has been completely removed.

Please ensure you update your references and workflows accordingly to take full advantage of Ksctl v2’s capabilities.
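
For Go consumers of the library, the move follows standard Go semantic import versioning, so updating is roughly:

    # Pull the v2 module (standard Go tooling)
    go get github.com/ksctl/ksctl/v2@latest
    # Then rewrite import paths from github.com/ksctl/ksctl/...
    # to github.com/ksctl/ksctl/v2/...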

In Conclusion

Ksctl v2 is more than just an update: it’s a rethinking of the Kubernetes management experience. Our goal is to empower developers with a tool that handles the complexity behind the scenes, letting you focus on deploying and managing clusters with ease so you can create amazing software. Whether you’re provisioning your first cluster or managing a fleet of production environments, Ksctl v2 is here to streamline your operations.

Ready to dive in? Check out the full changelog and get started with Ksctl v2 today!

Happy clustering!

The Ksctl Team

Our Findings

Ksctl documentation

About K8s Cluster Autoscaling

Our findings on how K8s Cluster Autoscaling works.

Pod Auto-scaler

https://cloud.google.com/kubernetes-engine/docs/concepts/horizontalpodautoscaler
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale
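
As a minimal example of the Pod autoscaler described in the links above (the deployment name is a placeholder), the standard kubectl shorthand is:

    # Create an HPA targeting 80% average CPU utilization
    kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=80
    # Inspect the autoscaler's current state
    kubectl get hpa my-app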

Cluster Auto-Scaler

This blog post might help: The Guide To Kubernetes Cluster Autoscaler by Example.

  • Information from the Kubernetes Slack channels (#karpenter, #auto-scaler)
  • How does Karpenter do it?
    • Karpenter scales using pending pod pressure.
    • There are two primary controllers, one for scale up and one for scale down:
    • Provisioning trigger controller, which creates the nodeclaims or initializes a scale request (https://github.com/kubernetes-sigs/karpenter/blob/main/pkg/controllers/provisioning/controller.go#L58). It watches for pending pods and, in response, checks whether they can be scheduled on the existing nodes; if not, Karpenter solves scheduling by creating additional nodeclaims. Another set of controllers, the node lifecycle controllers, watch for these nodeclaims and launch VMs for a given nodeclaim.
    • Disruption controller, which tells Karpenter to scale down nodes. For scale down, the cluster is polled every 10 seconds via the disruption controller (https://github.com/kubernetes-sigs/karpenter/blob/main/pkg/controllers/disruption/controller.go#L126). It iterates through all the disruption methods and checks whether some disruption action can be initiated based on various factors.
  • How does the Cluster Autoscaler (CAS) do it?
    • For scale up, CAS looks for pending pods; for scale down, it looks for under-utilized nodes (as calculated from resource requests). See the sketch after this list for how to inspect both signals.
    • Note that resource usage is sum(resource_requests) / node_allocatable.
    • It has nothing to do with “real” utilization.
    • Basically, CA’s job is to make all the pods schedulable using as few nodes as possible; all it looks at is scheduling.
    • You can use HPA/VPA to update pods based on actual resource usage, and that will in turn trigger CA to add/remove nodes.
  • We also found a paper on CO₂-efficient Karpenter.
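
To observe both signals discussed above on a live cluster (the node name is a placeholder):

    # Scale-up signal (Karpenter and CAS): pending, unschedulable pods
    kubectl get pods -A --field-selector=status.phase=Pending
    # Scale-down signal (CAS): requested vs. allocatable resources per node
    kubectl describe node <node-name> | grep -A 8 "Allocated resources"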