1 - Architecture

Overall architecture of Ksctl

Architecture diagrams

1.1 - API Components

Learn how the different components communicate with each other via APIs and automation scripts to serve you in the best way possible.

Core Design Components

Design

Overview architecture of ksctl


Managed Cluster creation & deletion


Self-Managed Cluster creation & deletion


Architecture change to an event-based design for expanded capabilities


2 - Getting Started


Getting Started Documentation

Installation & Uninstallation Instructions

Ksctl CLI

Let's begin with the installation of the tools. There are various methods:

Single command method

# Install
curl -sfL https://get.ksctl.com | python3 -

# Uninstall (bash)
bash <(curl -s https://raw.githubusercontent.com/ksctl/cli/main/scripts/uninstall.sh)

# Uninstall (zsh)
zsh <(curl -s https://raw.githubusercontent.com/ksctl/cli/main/scripts/uninstall.sh)
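
Once installed, you can verify that the binary is on your PATH:

ksctl version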

From Source Code

# Linux
make install_linux

# macOS on M1
make install_macos

# macOS on INTEL
make install_macos_intel

# For uninstalling
make uninstall

Configure Ksctl CLI

ksctl configure cloud # To configure cloud
ksctl configure storage # To configure storage

3 - Cloud Provider

Info about the cloud providers available

This page includes more information about the different cloud providers.

3.1 - Amazon Web Services

Amazon Web Services

AWS integration for Self-Managed and Managed Kubernetes Clusters

Authentication Methods

Command Line Interface

Use the ksctl credential manager:

ksctl configure cloud

Available Cluster Types

Self-Managed Clusters

Self-managed clusters with the following components:

  • Distributed etcd database instances
  • HAProxy load balancer for control plane high availability
  • Multiple control plane nodes
  • Worker nodes

Choose between two bootstrap options:

  • k3s (lightweight Kubernetes distribution)
  • kubeadm (official Kubernetes bootstrap tool)

Amazon EKS (Managed Clusters)

Elastic Kubernetes Service deployment with automated:

  • IAM role creation and management
  • Control plane setup
  • Node group configuration

IAM Configuration

For each cluster, ksctl creates two roles:

  • ksctl-<clustername>-wp-role: Manages node pool permissions
  • ksctl-<clustername>-cp-role: Handles control plane access
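
As an illustration (the cluster name demo is hypothetical), you can verify the roles ksctl created with the AWS CLI:

aws iam list-roles \
    --query "Roles[?starts_with(RoleName, 'ksctl-demo')].RoleName"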

Required IAM Policies

  1. Custom IAM Role Access Policy
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor6",
            "Effect": "Allow",
            "Action": [
                "iam:CreateInstanceProfile",
                "iam:DeleteInstanceProfile",
                "iam:GetRole",
                "iam:GetInstanceProfile",
                "iam:RemoveRoleFromInstanceProfile",
                "iam:CreateRole",
                "iam:DeleteRole",
                "iam:AttachRolePolicy",
                "iam:PutRolePolicy",
                "iam:ListInstanceProfiles",
                "iam:AddRoleToInstanceProfile",
                "iam:ListInstanceProfilesForRole",
                "iam:PassRole",
                "iam:CreateServiceLinkedRole",
                "iam:DetachRolePolicy",
                "iam:DeleteRolePolicy",
                "iam:DeleteServiceLinkedRole",
                "iam:GetRolePolicy",
                "iam:SetSecurityTokenServicePreferences"
            ],
            "Resource": [
                "arn:aws:iam::*:role/ksctl-*",
                "arn:aws:iam::*:instance-profile/*"
            ]
        }
    ]
}
  2. Custom EKS Access Policy
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "eks:ListNodegroups",
                "eks:ListClusters",
                "eks:*"
            ],
            "Resource": "*"
        }
    ]
}
  3. AWS Managed Policies Required
  • AmazonEC2FullAccess
  • IAMReadOnlyAccess
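
For example, assuming the IAM user running ksctl is named ksctl-user (a hypothetical name), these managed policies can be attached with the AWS CLI:

aws iam attach-user-policy --user-name ksctl-user \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
aws iam attach-user-policy --user-name ksctl-user \
    --policy-arn arn:aws:iam::aws:policy/IAMReadOnlyAccess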

Looking for CLI Commands?

All CLI commands mentioned in this documentation have detailed explanations in our command reference guide.

3.2 - Azure

Azure Cloud Provider

Azure support for Self-Managed and Managed Kubernetes Clusters

Azure Credential Requirements

Subscription ID

Your Azure subscription identifier can be found in your subscription details.


Tenant ID

Located in the Azure Dashboard, which provides access to all required credentials.


Client ID (Application ID)

Represents the identifier of your registered application.

Steps to create:

  1. Navigate to App Registrations
  2. Register a new application
  3. Obtain the Client ID

Client Secret

Authentication key for your registered application.

Steps to generate:

  1. Access the secret creation page
  2. Configure the secret settings
  3. Save the generated secret

Role Assignment

Configure application permissions:

  1. Navigate to Subscriptions > Access Control (IAM)
  2. Select “Role Assignment”
  3. Click “Add > Add Role Assignment”
  4. Create new role and specify the application name
  5. Configure desired permissions

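If you prefer the Azure CLI over the portal, an equivalent role assignment could look like this sketch (the Contributor role and subscription scope are assumptions; use whatever matches your setup):

az role assignment create \
    --assignee "<APP_CLIENT_ID>" \
    --role "Contributor" \
    --scope "/subscriptions/<SUBSCRIPTION_ID>"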

Authentication Methods

Command Line Interface

ksctl configure cloud

Available Cluster Types

Self-Managed Clusters

Self-managed clusters with the following components:

  • Distributed etcd database instances
  • HAProxy load balancer for control plane high availability
  • Multiple control plane nodes
  • Worker nodes

Bootstrap options:

  • k3s (lightweight Kubernetes distribution)
  • kubeadm (official Kubernetes bootstrap tool)

Azure Kubernetes Service (AKS)

Fully managed Kubernetes service by Azure.


Looking for CLI Commands?

All CLI commands mentioned in this documentation have detailed explanations in our command reference guide.

3.3 - Kind

Local Provider

It creates clusters on the host machine using kind

Current features

Currently uses kind (Kubernetes in Docker)

Looking for CLI Commands?

All CLI commands mentioned in this documentation have detailed explanations in our command reference guide.

4 - Reference

Low level reference docs for your project.

The CLI command reference below is mapped from the ksctl/cli repo

CLI Command Reference

Docs are available in the CLI repo. Here are the links to the documentation files:

CLI Root Command

4.1 - ksctl

Command documentation for ksctl

ksctl

CLI tool for managing multiple K8s clusters

Synopsis

CLI tool which can manage multiple K8s clusters from local clusters to cloud provider specific clusters.

Options

      --debug-cli   Its used to run debug mode against cli's menudriven interface
  -h, --help        help for ksctl
  -t, --toggle      Help message for toggle
  -v, --verbose     Enable verbose output

SEE ALSO

Auto generated by spf13/cobra on 14-Feb-2025

4.2 - ksctl_addons

Command documentation for ksctl_addons

ksctl addons

Use to work with addons

Synopsis

It is used to work with addons

Examples


ksctl addons --help

Options

  -h, --help   help for addons

Options inherited from parent commands

      --debug-cli   Its used to run debug mode against cli's menudriven interface
  -v, --verbose     Enable verbose output

SEE ALSO

Auto generated by spf13/cobra on 14-Feb-2025

4.3 - ksctl_addons_disable

Command documentation for ksctl_addons_disable

ksctl addons disable

Use to disable an addon

Synopsis

It is used to disable an addon

ksctl addons disable [flags]

Examples


ksctl addons disable --help

Options

  -h, --help   help for disable

Options inherited from parent commands

      --debug-cli   Its used to run debug mode against cli's menudriven interface
  -v, --verbose     Enable verbose output

SEE ALSO

Auto generated by spf13/cobra on 14-Feb-2025

4.4 - ksctl_addons_enable

Command documentation for ksctl_addons_enable

ksctl addons enable

Use to enable an addon

Synopsis

It is used to enable an addon

ksctl addons enable [flags]

Examples


ksctl addons enable --help

Options

  -h, --help   help for enable

Options inherited from parent commands

      --debug-cli   Its used to run debug mode against cli's menudriven interface
  -v, --verbose     Enable verbose output

SEE ALSO

Auto generated by spf13/cobra on 14-Feb-2025

4.5 - ksctl_cluster

Command documentation for ksctl_cluster

ksctl cluster

Use to work with clusters

Synopsis

It is used to work with clusters

Examples


ksctl cluster --help
		

Options

  -h, --help   help for cluster

Options inherited from parent commands

      --debug-cli   Its used to run debug mode against cli's menudriven interface
  -v, --verbose     Enable verbose output

SEE ALSO

Auto generated by spf13/cobra on 14-Feb-2025

4.6 - ksctl_cluster_connect

Command documentation for ksctl_cluster_connect

ksctl cluster connect

Connect to existing cluster

Synopsis

It is used to connect to existing cluster

ksctl cluster connect [flags]

Examples


ksctl cluster connect --help
		

Options

  -h, --help   help for connect

Options inherited from parent commands

      --debug-cli   Its used to run debug mode against cli's menudriven interface
  -v, --verbose     Enable verbose output

SEE ALSO

Auto generated by spf13/cobra on 14-Feb-2025

4.7 - ksctl_cluster_create

Command documentation for ksctl_cluster_create

ksctl cluster create

Use to create a cluster

Synopsis

It is used to create a cluster with the given name from the user

ksctl cluster create [flags]

Examples


ksctl cluster create --help
		

Options

  -h, --help   help for create

Options inherited from parent commands

      --debug-cli   Its used to run debug mode against cli's menudriven interface
  -v, --verbose     Enable verbose output

SEE ALSO

Auto generated by spf13/cobra on 14-Feb-2025

4.8 - ksctl_cluster_delete

Command documentation for ksctl_cluster_delete

ksctl cluster delete

Use to delete a cluster

Synopsis

It is used to delete a cluster with the given name from the user

ksctl cluster delete [flags]

Examples


ksctl cluster delete --help
		

Options

  -h, --help   help for delete

Options inherited from parent commands

      --debug-cli   Its used to run debug mode against cli's menudriven interface
  -v, --verbose     Enable verbose output

SEE ALSO

Auto generated by spf13/cobra on 14-Feb-2025

4.9 - ksctl_cluster_get

Command documentation for ksctl_cluster_get

ksctl cluster get

Use to get the cluster

Synopsis

It is used to get the cluster created by the user

ksctl cluster get [flags]

Examples


ksctl cluster get --help

Options

  -h, --help   help for get

Options inherited from parent commands

      --debug-cli   Its used to run debug mode against cli's menudriven interface
  -v, --verbose     Enable verbose output

SEE ALSO

Auto generated by spf13/cobra on 14-Feb-2025

4.10 - ksctl_cluster_list

Command documentation for ksctl_cluster_list

ksctl cluster list

Use to list all the clusters

Synopsis

It is used to list all the clusters created by the user

ksctl cluster list [flags]

Examples


ksctl cluster list --help

Options

  -h, --help   help for list

Options inherited from parent commands

      --debug-cli   Its used to run debug mode against cli's menudriven interface
  -v, --verbose     Enable verbose output

SEE ALSO

Auto generated by spf13/cobra on 14-Feb-2025

4.11 - ksctl_cluster_scaledown

Command documentation for ksctl_cluster_scaledown

ksctl cluster scaledown

Use to manually scaledown a selfmanaged cluster

Synopsis

It is used to manually scaledown a selfmanaged cluster

ksctl cluster scaledown [flags]

Examples


ksctl cluster scaledown --help
		

Options

  -h, --help   help for scaledown

Options inherited from parent commands

      --debug-cli   Its used to run debug mode against cli's menudriven interface
  -v, --verbose     Enable verbose output

SEE ALSO

Auto generated by spf13/cobra on 14-Feb-2025

4.12 - ksctl_cluster_scaleup

Command documentation for ksctl_cluster_scaleup

ksctl cluster scaleup

Use to manually scaleup a selfmanaged cluster

Synopsis

It is used to manually scaleup a selfmanaged cluster

ksctl cluster scaleup [flags]

Examples


ksctl cluster scaleup --help
		

Options

  -h, --help   help for scaleup

Options inherited from parent commands

      --debug-cli   Its used to run debug mode against cli's menudriven interface
  -v, --verbose     Enable verbose output

SEE ALSO

Auto generated by spf13/cobra on 14-Feb-2025

4.13 - ksctl_configure

Command documentation for ksctl_configure

ksctl configure

Configure ksctl cli

Synopsis

It will help you to configure the ksctl cli

Options

  -h, --help   help for configure

Options inherited from parent commands

      --debug-cli   Its used to run debug mode against cli's menudriven interface
  -v, --verbose     Enable verbose output

SEE ALSO

Auto generated by spf13/cobra on 14-Feb-2025

4.14 - ksctl_configure_cloud

Command documentation for ksctl_configure_cloud

ksctl configure cloud

Configure cloud

Synopsis

It will help you to configure the cloud

ksctl configure cloud [flags]

Options

  -h, --help   help for cloud

Options inherited from parent commands

      --debug-cli   Its used to run debug mode against cli's menudriven interface
  -v, --verbose     Enable verbose output

SEE ALSO

Auto generated by spf13/cobra on 14-Feb-2025

4.15 - ksctl_configure_storage

Command documentation for ksctl_configure_storage

ksctl configure storage

Configure storage

Synopsis

It will help you to configure the storage

ksctl configure storage [flags]

Options

  -h, --help   help for storage

Options inherited from parent commands

      --debug-cli   Its used to run debug mode against cli's menudriven interface
  -v, --verbose     Enable verbose output

SEE ALSO

Auto generated by spf13/cobra on 14-Feb-2025

4.16 - ksctl_self-update

Command documentation for ksctl_self-update

ksctl self-update

Use to update the ksctl cli

Synopsis

It is used to update the ksctl cli

ksctl self-update [flags]

Examples


ksctl self-update --help

Options

  -h, --help   help for self-update

Options inherited from parent commands

      --debug-cli   Its used to run debug mode against cli's menudriven interface
  -v, --verbose     Enable verbose output

SEE ALSO

  • ksctl - CLI tool for managing multiple K8s clusters
Auto generated by spf13/cobra on 14-Feb-2025

4.17 - ksctl_version

Command documentation for ksctl_version

ksctl version

ksctl version

Synopsis

To get version for ksctl components

ksctl version [flags]

Examples


ksctl version --help
		

Options

  -h, --help   help for version

Options inherited from parent commands

      --debug-cli   Its used to run debug mode against cli's menudriven interface
  -v, --verbose     Enable verbose output

SEE ALSO

  • ksctl - CLI tool for managing multiple K8s clusters
Auto generated by spf13/cobra on 14-Feb-2025

5 - Contribution Guidelines

How to contribute to the docs

You can run almost all the tests locally, except the e2e tests, which require you to provide cloud credentials

Generic tasks for new and existing contributors

Types of changes

There are many ways to contribute to the ksctl project. Here are a few examples:

  • New changes to docs: You can contribute by writing new documentation, fixing typos, or improving the clarity of existing documentation.
  • New features: You can contribute by proposing new features, implementing new features, or fixing bugs.
  • Cloud support: You can contribute by adding support for new cloud providers.
  • Kubernetes distribution support: You can contribute by adding support for new Kubernetes distributions.

Phases a change / feature goes through

  1. Raise an issue regarding it (used for prioritizing)
  2. Discuss what changes it demands
  3. If all goes well, you will be assigned to it
  4. If it's about adding cloud support, use CloudFactory and separate the logic of VM, firewall, etc. into their respective files, with a helper file for the behind-the-scenes logic for ease of use
  5. If it's about adding distribution support, check its compatibility with the different cloud providers' VM configs and firewall rules

Formatting for PR & Issue subject lines

Subject / Title

# Related to enhancement
enhancement: <Title>

# Related to feature
feat: <Title>

# Related to Bug fix or other types of fixes
fix: <Title>

# Related to update
update: <Title>

Body

Follow the PR or Issue template and add all the significant changes to the PR description

Commit messages

Mention a detailed description in the git commits: What? Why? How?

Each commit must be signed off and should follow the Conventional Commits guidelines.

Conventional Commits

The commit message should be structured as follows:

<type>(optional scope): <description>

[optional body]

[optional footer(s)]

For more detailed information on conventional commits, you can refer to the official Conventional Commits specification.
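
For example, a hypothetical commit following this structure (the -s flag takes care of the sign-off requirement described below):

git commit -s \
  -m "feat(cloud): add managed cluster deletion" \
  -m "Explains what changed, why it was needed, and how it works."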

Sign-off

Each commit must be signed off. You can do this by adding a sign-off line to your commit messages. When committing changes in your local branch, add the -s flag to the git commit command (add -S as well if you also want a cryptographic signature, covered in the verification section below):

$ git commit -s -m "YOUR_COMMIT_MESSAGE"
# Adds a Signed-off-by trailer to the commit

You can find more comprehensive details on how to sign off git commits by referring to the GitHub section on signing commits.

Pre-commit Hooks

pip install pre-commit   # install the pre-commit framework
pre-commit install       # register the repository's git hooks
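
You can also run all the hooks against the entire tree before pushing:

pre-commit run --all-files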

Verification of Commit Signatures

You have the option to sign commits and tags locally, which adds a layer of assurance regarding the origin of your changes. GitHub designates commits or tags as either “Verified” or “Partially verified” if they possess a GPG, SSH, or S/MIME signature that is cryptographically valid.

GPG Commit Signature Verification

To sign commits using GPG and ensure their verification on GitHub, adhere to these steps:

  • Check for existing GPG keys.
  • Generate a new GPG key.
  • Add the GPG key to your GitHub account.
  • Inform Git about your signing key.
  • Proceed to sign commits.

SSH Commit Signature Verification

To sign commits using SSH and ensure their verification on GitHub, follow these steps:

  • Check for existing SSH keys.
  • Generate a new SSH key.
  • Add an SSH signing key to your GitHub account.
  • Inform Git about your signing key.
  • Proceed to sign commits.

S/MIME Commit Signature Verification

To sign commits using S/MIME and ensure their verification on GitHub, follow these steps:

  • Inform Git about your signing key.
  • Proceed to sign commits.

For more detailed instructions, refer to GitHub's documentation on commit signature verification.

Development

First you have to fork the ksctl repository.

cd <path> # go to the directory where you want to clone ksctl
mkdir <directory name> # create a directory
cd <directory name> # go inside the directory
git clone https://github.com/${YOUR_GITHUB_USERNAME}/ksctl.git # clone your forked repository
cd ksctl # go inside the ksctl directory
git remote add upstream https://github.com/ksctl/ksctl.git # set the upstream
git remote set-url --push upstream no_push # prevent pushes to upstream
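
A typical follow-up workflow, assuming the default branch is main and using an illustrative branch name:

git checkout -b feat/my-change   # create a feature branch for your work
git fetch upstream               # grab the latest upstream changes
git rebase upstream/main         # replay your branch on top of upstream
git push origin feat/my-change   # push to your fork, then open a PR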

Trying out code changes

Before submitting a code change, it is important to test your changes thoroughly. You can do this by running the unit tests and integration tests.

Submitting changes

Once you have tested your changes, you can submit them to the ksctl project by creating a pull request. Make sure you use the provided PR template.

Getting help

If you need help contributing to the ksctl project, you can ask on the Kubesimplify Discord server (ksctl-cli channel), or raise an issue or discussion.

Thank you for contributing!

We appreciate your contributions to the ksctl project!

Some of our contributors are showcased on the project's GitHub contributors page.

5.1 - Contribution Guidelines for CLI

How to contribute to the ksctl-cli

Repository: ksctl/cli

How to Build from source

Linux

make install_linux # for linux

Mac OS

make install_macos # for macos

5.2 - Contribution Guidelines for Core

How to contribute to the ksctl

Repository: ksctl/ksctl

Run all unit tests

make unit_test

Run all integration tests

make integration_test

Run both unit and integration tests

make test_all

For E2E tests locally

Set the required tokens as environment variables.

For cloud-provider-specific e2e tests

Tokens for Azure

export AZURE_SUBSCRIPTION_ID=""
export AZURE_TENANT_ID=""
export AZURE_CLIENT_ID=""
export AZURE_CLIENT_SECRET=""

Tokens for AWS

export AWS_ACCESS_KEY_ID=""
export AWS_SECRET_ACCESS_KEY=""

Tokens for MongoDB as storage

export MONGODB_SRV=<true|false> # boolean
export MONGODB_HOST=""
export MONGODB_PORT=""
export MONGODB_USER=""
export MONGODB_PASS=""
cd test/e2e

# then the syntax for running
go run . -op create -file azure/create.json

# for the available operations, refer to test/e2e/consts.go
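
A matching teardown would hypothetically be run the same way (the JSON file name here is illustrative; the valid -op values live in test/e2e/consts.go):

go run . -op delete -file azure/delete.json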

5.3 - Contribution Guidelines for Docs

How to contribute to the ksctl-docs

Repository: ksctl/docs

How to Build from source

# Prerequisites
npm install -D postcss
npm install -D postcss-cli
npm install -D autoprefixer
npm install hugo-extended

Run the local development server

hugo serve

6 - Concepts

Concepts around ksctl core

This section will help you to learn about the underlying system of Ksctl. It will help you to obtain a deeper understanding of how Ksctl works.

Sequence diagrams for 2 major operations

Create Cloud-Managed Clusters

sequenceDiagram
    participant cm as Manager Cluster Managed
    participant cc as Cloud Controller
    participant kc as Ksctl Kubernetes Controller
    cm->>cc: transfers specs from user or machine
    cc->>cc: to create the cloud infra (network, subnet, firewall, cluster)
    cc->>cm: 'kubeconfig' and other cluster access to the state
    cm->>kc: shares 'kubeconfig'
    kc->>kc: installs kubectl agent, stateimporter and controllers
    kc->>cm: status of creation

Create Self-Managed HA clusters

sequenceDiagram
    participant csm as Manager Cluster Self-Managed
    participant cc as Cloud Controller
    participant bc as Bootstrap Controller
    participant kc as Ksctl Kubernetes Controller
    csm->>cc: transfers specs from user or machine
    cc->>cc: to create the cloud infra (network, subnet, firewall, vms)
    cc->>csm: return state to be used by BootstrapController
    csm->>bc: transfers infra state like ssh key, pub IPs, etc
    bc->>bc: bootstrap the infra by either (k3s or kubeadm)
    bc->>csm: 'kubeconfig' and other cluster access to the state
    csm->>kc: shares 'kubeconfig'
    kc->>kc: installs kubectl agent, stateimporter and controllers
    kc->>csm: status of creation

6.1 - Cloud Controller

The Component of Ksctl responsible for creating and managing clusters for different Cloud platforms.

It is responsible for controlling the sequence of tasks to be executed for every cloud provider

6.2 - Core functionalities

How does the core functionalities of ksctl work

Basic cluster operations

Create

  • HA self-managed cluster (VMs are provisioned, then SSHed into and configured, much like Ansible)
  • Managed (the cloud provider creates the cluster and we get the kubeconfig in return)

Delete

  • HA self-managed cluster
  • Managed cluster

Scaleup

  • Only for HA clusters, as the user can manually increase the number of worker nodes
  • Example: if there is 1 worker node, it will create node 2, then 3, and so on

Scaledown

  • Only for HA clusters, as the user can manually decrease the number of worker nodes
  • Example: if worker nodes 1 and 2 exist, it deletes from last to first, i.e. node 2, then node 1

Switch

  • It uses the user's request to fetch the kubeconfig of the specified cluster and saves it to a specific location, namely ~/.ksctl/kubeconfig (see the sketch below)
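
After switching, pointing kubectl at that file is all that is needed (a sketch assuming a standard kubectl setup):

export KUBECONFIG="$HOME/.ksctl/kubeconfig"
kubectl get nodes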

Get

  • For both HA and managed clusters, it searches the folders in a specific directory to find all the clusters that have been created for a specific provider

Example: for a get request for Azure, it scans the directory .ksctl/state/azure/ha, and likewise the managed one, to get all the folder names
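
Based on the layout described in the Local Storage page, a rough shell equivalent of what Get scans for Azure would be:

# each folder name is "{cluster_name} {region}"
ls ~/.ksctl/state/azure/ha ~/.ksctl/state/azure/managed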

6.3 - Core Manager

The Component of Ksctl responsible for managing Cloud controller and Distribution controller. It has multiple types of managers

It is responsible for managing client requests and calls the corresponding controller

Types

ManagerClusterKsctl

Role: Perform ksctl getCluster, switchCluster

ManagerClusterKubernetes

Role: Perform ksctl addApplicationAndCrds. Currently intended for machine-to-machine use, not the ksctl CLI

ManagerClusterManaged

Role: Perform ksctl createCluster, deleteCluster

ManagerClusterSelfManaged

Role: Perform ksctl createCluster, deleteCluster, addWorkerNodes, delWorkerNodes

6.4 - Distribution Controller

The Component of Ksctl responsible for selecting the type of Bootstrap solution (Kubeadm or K3s).

It is responsible for controlling the execution sequence for configuring Cloud Resources with respect to the chosen Kubernetes distribution

7 - Contributors

Organizations and communities who support our project.

Sponsors

  • Azure: Azure Open Source Program Office
  • Civo: Provided us with credits to run and test our project; they were the first cloud provider we supported

Communities

  • Kubernetes Architect
  • WeMakeDevs HacktoberFest: Mentioned our project in their Hacktoberfest event (YouTube link)
  • Kubesimplify Community: We started from here and got a lot of support; some of the mentions include a YouTube link, a Tweet, etc.

8 - FAQ

Frequently asked questions about ksctl

General

What is ksctl?

Ksctl is a lightweight, easy-to-use tool that simplifies the process of managing Kubernetes clusters. It provides a unified interface for common cluster operations like create, delete, scale up, and scale down, and is designed to be simple, efficient, and developer-friendly.

What can I do with ksctl?

With ksctl, you can deploy Kubernetes clusters across any cloud provider, switch between providers seamlessly, and choose between managed and self-managed HA clusters. You can deploy clusters with a single command, without any complex configuration, and manage them with a unified interface that eliminates the need for provider-specific CLIs.

How does ksctl simplify cluster management?

Ksctl simplifies cluster management by providing a streamlined interface for common cluster operations like create, delete, scale up, and scale down. It eliminates the need for complex configuration and provider-specific CLIs, and provides a consistent experience across environments. With ksctl, developers can focus on building great applications without getting bogged down by the complexities of cluster management.

Who is ksctl for?

Ksctl is designed for developers, DevOps engineers, and anyone who needs to manage Kubernetes clusters. It is ideal for teams of all skill levels, from beginners to experts, and provides a simple, efficient, and developer-friendly way to deploy and manage clusters.

How does ksctl differ from other cluster management tools?

Ksctl is a lightweight, easy-to-use tool that simplifies the process of managing Kubernetes clusters. It provides a unified interface for common cluster operations like create, delete, scale up, and scale down, and is designed to be simple, efficient, and developer-friendly. Ksctl is not a full-fledged platform like Rancher, but rather a simple CLI tool that provides a streamlined interface for common cluster operations.

Comparisons

Ksctl vs Cluster API

  • Simplicity vs Complexity: Cluster API uses a sophisticated set of CRDs (Custom Resource Definitions) to manage machines, machine sets, and deployments. In contrast, Ksctl adopts a minimalist approach, focusing on reducing complexity for developers and operators.
  • Target Audience: Ksctl caters to users seeking a lightweight, user-friendly tool for quick cluster management tasks, particularly in development and testing environments. Cluster API is designed for production-grade use cases, emphasizing flexibility and integration with Kubernetes’ declarative model.
  • Dependencies: Ksctl is a standalone CLI tool that does not require a running Kubernetes cluster, making it easy to set up and run anywhere. On the other hand, Cluster API requires a pre-existing Kubernetes cluster to operate.
  • Feature Focus: Ksctl emphasizes speed and simplicity in managing cluster lifecycle operations (create, delete, scale). Cluster API provides deeper control and automation features suitable for enterprises managing complex Kubernetes ecosystems.

What is the difference between Ksctl and k3sup?

  • Scope: Ksctl is a comprehensive tool for managing Kubernetes clusters across multiple environments from cloud managed Kubernetes flavour to K3s and kubeadm. K3sup, on the other hand, focuses primarily on bootstrapping lightweight k3s clusters.
  • Features: Ksctl handles infrastructure provisioning, cluster scaling, and cloud-agnostic lifecycle management, whereas k3sup is limited to installing k3s clusters without managing the underlying infrastructure.
  • Cloud Support: Ksctl provides a unified interface for managing clusters across different providers, making it suitable for multi-cloud strategies. K3sup is more limited and designed for standalone setups.

How does Ksctl compare to Rancher?

  • Tool vs Platform: Ksctl is a streamlined CLI tool for cluster management. Rancher, by contrast, is a feature-rich platform offering cluster governance, monitoring, access control, and application management.
  • Use Case: Ksctl is lightweight and ideal for developers needing quick, uncomplicated cluster management. Rancher is tailored for enterprise environments where centralized management and control of multiple clusters are essential.
  • Operational Scope: Ksctl focuses on basic lifecycle operations (create, delete, scale). Rancher includes features like Helm chart deployment, RBAC integration, and advanced workload management.

What is the difference between Ksctl and k3d, Kind, or Minikube?

  • Environment Scope: Ksctl is designed for both local and cloud-based Kubernetes cluster management. Tools like k3d, Kind, and Minikube are primarily for local development and testing purposes.
  • Cluster Management: Ksctl can provision, scale, and delete clusters in cloud environments, whereas k3d, Kind, and Minikube focus on providing lightweight clusters for experimentation and local development.
  • Infrastructure Management: Ksctl integrates with infrastructure provisioning, while the others rely on pre-existing local environments (e.g., Docker for k3d and Kind, or virtual machines for Minikube).

How does Ksctl compare to eksctl?

  • Cloud Support: Ksctl is cloud-agnostic and supports multiple providers, making it suitable for multi-cloud setups. Eksctl, on the other hand, is tightly coupled with AWS and designed exclusively for managing EKS clusters.
  • Features: Ksctl provides an all-in-one tool for provisioning infrastructure, managing the cluster lifecycle, and scaling across different environments. Eksctl is focused on streamlining EKS setup and optimizing AWS integrations like IAM, VPCs, and Load Balancers.
  • Target Audience: Ksctl appeals to users seeking a flexible, multi-cloud solution. Eksctl is ideal for AWS-centric teams that require deep integration with AWS services.

9 - Features

Features of ksctl

Our Vision

Transform your Kubernetes experience with a tool that puts simplicity and efficiency first. Ksctl eliminates the complexity of cluster management, allowing developers to focus on what matters most – building great applications.

Key Features

🌐 Universal Cloud Support

  • Deploy clusters across any cloud provider
  • Seamless switching between providers
  • Support for both managed and self-managed clusters
  • Freedom to choose your bootstrap provider (K3s or Kubeadm)

🚀 Zero-to-Cluster Simplicity

  • Single command cluster deployment
  • No complex configuration required
  • Automated setup and initialization
  • Instant development environment readiness
  • Local file-based or MongoDB storage options
  • Single-binary deployment, thus lightweight and efficient

🛠️ Streamlined Management

  • Unified interface for all operations
  • Eliminates need for provider-specific CLIs
  • Consistent experience across environments
  • Simplified scaling and upgrades

🎯 Developer-Focused Design

  • Near-zero learning curve
  • Intuitive command structure
  • No new configurations to learn
  • Perfect for teams of all skill levels
  • We also have WASM workload support

🔄 Flexible Operation

  • Self-managed cluster support
  • Cloud provider managed offerings
  • Multiple bootstrap provider options
  • Seamless environment transitions

Technical Benefits

  • Infrastructure Agnostic: Deploy anywhere, manage consistently
  • Rapid Deployment: Bypass complex setup steps and day 0 tasks
  • Future-Ready: Upcoming support for day 1 operations and Wasm
  • Community-Driven: Active development and continuous improvements

10 - Ksctl Cluster Management

Home of all the documentation for the operators used specifically for K8s clusters

Ksctl Cluster Management is used for ksctl-based cluster management.

Supported Addons

  • ksctl stack (ksctl/ka)

10.1 - Ksctl Stack

Documentation on ksctl stack controller

It helps in deploying stacks using a CRD, managing installation, upgrades, downgrades, and uninstallation from one version to another, and provides a single source of truth for which applications are installed

How to Install?

ksctl/kcm is a prerequisite for this to work

apiVersion: manage.ksctl.com/v1
kind: ClusterAddon
metadata:
  labels:
    app.kubernetes.io/name: kcm
  name: ksctl-stack
spec:
  addons:
  - name: stack
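
Apply it like any other manifest (the file name is illustrative):

kubectl apply -f ksctl-stack-addon.yaml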

Types

Stack

To define heterogeneous components, we came up with a stack: it contains M components, which are different applications with their versions

Supported Apps and CNI

  • GitOps (standard)
  • Monitoring (lite)
  • Service Mesh (standard)
  • SpinKube (standard)
  • Kwasm (plus)

GitOps-Standard

How to use it (Basic Usage)

apiVersion: app.ksctl.com/v1
kind: Stack
metadata:
  labels:
    app.kubernetes.io/name: ka
  name: gitops
spec:
  stackName: "gitops-standard"

Overrides available

apiVersion: app.ksctl.com/v1
kind: Stack
metadata:
  labels:
    app.kubernetes.io/name: ka
  name: gitops
spec:
  stackName: "gitops-standard"
  disableComponents: <list[str]> # list of components to disable; accepted values are argocd, argorollouts
  overrides:
    argocd:
      version: <string> # version of the argocd
      noUI: <bool> # to disable the UI
      namespace: <string> # namespace to install argocd
      namespaceInstall: <bool> # to install namespace specific argocd
    argorollouts:
      version: <string> # version of the argorollouts
      namespace: <string> # namespace to install argorollouts
      namespaceInstall: <bool> # to install namespace specific argorollouts

Monitoring-Lite

How to use it (Basic Usage)

apiVersion: app.ksctl.com/v1
kind: Stack
metadata:
  labels:
    app.kubernetes.io/name: ka
  name: monitoring
spec:
  stackName: "monitoring-lite"

Overrides available

apiVersion: app.ksctl.com/v1
kind: Stack
metadata:
  labels:
    app.kubernetes.io/name: ka
  name: monitoring
spec:
  stackName: "monitoring-lite"
  disableComponents: <list[str]> # list of components to disable; accepted values are kube-prometheus
  overrides:
    kube-prometheus:
      version: <string> # version of the kube-prometheus
      helmKubePromChartOverridings: <map[string]any> # helm chart overridings, kube-prometheus

Service-Mesh-Standard

How to use it (Basic Usage)

apiVersion: app.ksctl.com/v1
kind: Stack
metadata:
  labels:
    app.kubernetes.io/name: ka
  name: mesh
spec:
  stackName: "mesh-standard"

Overrides available

apiVersion: app.ksctl.com/v1
kind: Stack
metadata:
  labels:
    app.kubernetes.io/name: ka
  name: mesh
spec:
  stackName: "mesh-standard"
  disableComponents: <list[str]> # list of components to disable; accepted values are istio
  overrides:
    istio:
      version: <string> # version of the istio
      helmBaseChartOverridings: <map[string]any> # helm chart overridings, istio/base
      helmIstiodChartOverridings: <map[string]any> # helm chart overridings, istio/istiod

Wasm Spinkube-standard

How to use it (Basic Usage)

apiVersion: app.ksctl.com/v1
kind: Stack
metadata:
  labels:
    app.kubernetes.io/name: ka
  name: spinkube
spec:
  stackName: "wasm/spinkube-standard"

Demo app

kubectl apply -f https://raw.githubusercontent.com/spinkube/spin-operator/main/config/samples/simple.yaml
kubectl port-forward svc/simple-spinapp 8083:80
curl localhost:8083/hello

Overrides available

apiVersion: app.ksctl.com/v1
kind: Stack
metadata:
  labels:
    app.kubernetes.io/name: ka
  name: spinkube
spec:
  stackName: "wasm/spinkube-standard"
  disableComponents: <list[str]> # list of components to disable; accepted values are spinkube-operator, spinkube-operator-shim-executor, spinkube-operator-crd, cert-manager, kwasm-operator, spinkube-operator-runtime-class
  overrides:
    spinkube-operator:
      version: <string> # version; the same value is used for shim-executor, runtime-class, shim-executor-crd, and spinkube-operator
      helmOperatorChartOverridings: <map[string]any> # helm chart overridings, spinkube-operator

    spinkube-operator-shim-executor:
      version: <string> # version; the same value is used for shim-executor, runtime-class, shim-executor-crd, and spinkube-operator

    spinkube-operator-runtime-class:
      version: <string> # version; the same value is used for shim-executor, runtime-class, shim-executor-crd, and spinkube-operator

    spinkube-operator-crd:
      version: <string> # version; the same value is used for shim-executor, runtime-class, shim-executor-crd, and spinkube-operator

    cert-manager:
      version: <string>
      certmanagerChartOverridings: <map[string]any> # helm chart overridings, cert-manager

    kwasm-operator:
      version: <string>
      kwasmOperatorChartOverridings: <map[string]any> # helm chart overridings, kwasm/kwasm-operator

Wasm Kwasm-plus

How to use it (Basic Usage)

apiVersion: app.ksctl.com/v1
kind: Stack
metadata:
  labels:
    app.kubernetes.io/name: ka
  name: kwasm
spec:
  stackName: "wasm/kwasm-plus"

Demo app (wasmedge)

---
apiVersion: v1
kind: Pod
metadata:
  name: "myapp"
  namespace: default
  labels:
    app: nice
spec:
  runtimeClassName: wasmedge
  containers:
  - name: myapp
    image: "docker.io/cr7258/wasm-demo-app:v1"
    ports:
    - containerPort: 8080
      name: http
  restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: nice
spec:
  selector:
    app: nice
  type: ClusterIP
  ports:
  - name: nice
    protocol: TCP
    port: 8080
    targetPort: 8080

Demo app (wasmtime)

apiVersion: batch/v1
kind: Job
metadata:
  name: nice
  namespace: default
  labels:
    app: nice
spec:
  template:
    metadata:
      name: nice
      labels:
        app: nice
    spec:
      runtimeClassName: wasmtime
      containers:
      - name: nice
        image: "meteatamel/hello-wasm:0.1"
      restartPolicy: OnFailure

#### For wasmedge
# once up and running
kubectl port-forward svc/nice 8080:8080

# then you can curl the service
curl localhost:8080

#### For wasmtime
# just check the logs

Overrides available

apiVersion: app.ksctl.com/v1
kind: Stack
metadata:
  labels:
    app.kubernetes.io/name: ka
  name: kwasm
spec:
  stackName: "wasm/kwasm-plus"
  disableComponents: <list[str]> # list of components to disable; accepted values are kwasm-operator
  overrides:
    kwasm-operator:
      version: <string>
      kwasmOperatorChartOverridings: <map[string]any> # helm chart overridings, kwasm/kwasm-operator

11 - Kubernetes Distributions

Various Kubernetes Distributions

K3s and Kubeadm only work for self-managed clusters

11.1 - K3s

K3s Kubernetes Distributions

K3s for self-managed clusters on supported providers

K3s is used for self-managed clusters. It's a lightweight K8s distribution. We are using it as follows:

  • controlplane (k3s server)
  • workerplane (k3s agent)
  • datastore (etcd members)

11.2 - Kubeadm

Kubeadm Kubernetes Distributions

Kubeadm for HA clusters on supported providers

Kubeadm support is added with etcd as the datastore

12 - Maintainers


Maintainers

  • Dipankar: Creator & Maintainer (Discord: dipankardas)
  • Praful: Maintainer (Discord: praful_)
  • Saiyam Pathak: Creator & Architect (Discord: saiyam)

13 - Roadmap


Current Status on Supported Providers and Next Features

Supported Providers

Legend: Done · Not Started · No Plans · Backlog
flowchart LR;
  classDef green color:#022e1f,fill:#00f500;
  classDef red color:#022e1f,fill:#f11111;
  classDef white color:#022e1f,fill:#fff;
  classDef black color:#fff,fill:#000;
  classDef blue color:#fff,fill:#00f;

  XX[ksctl]--CloudFactory-->web{Cloud Providers};
  XX[ksctl]--DistroFactory-->web2{Distributions};
  XX[ksctl]--StorageFactory-->web3{State Storage};

  web--Civo-->civo{Types};
  civo:::green--managed-->civom[Create & Delete]:::green;
  civo--HA-->civoha[Create & Delete]:::green;

  web--Local-Kind-->local{Types};
  local:::green--managed-->localm[Create & Delete]:::green;
  local--HA-->localha[Create & Delete]:::black;

  web--AWS-->aws{Types};
  aws:::green--managed-->awsm[Create & Delete]:::green;
  aws--HA-->awsha[Create & Delete]:::green;

  web--Azure-->az{Types};
  az:::green--managed-->azsm[Create & Delete]:::green;
  az--HA-->azha[Create & Delete]:::green;

  web2--K3S-->k3s{Types};
  k3s:::green--HA-->k3ha[Create & Delete]:::green;

  web2--Kubeadm-->kubeadm{Types};
  kubeadm:::green--HA-->kubeadmha[Create & Delete]:::green;

  web3--Local-Store-->slocal{Local}:::green;
  web3--Remote-Store-->rlocal{Remote}:::green;
  rlocal--Provider-->mongo[MongoDB]:::green;

Next Features

Project Board

All the below features will be moved to the Project Board and will be tracked there.

  • Talos as the next bootstrap provider
  • Green-software features that can help you save energy
  • First-class WASM workload support
  • ML features: unikernels and better ML workload scalability
  • A production stack covering monitoring and security, up to application-specific integrations like Vault, Kafka, etc.
  • Health checks of various K8s clusters
  • Role-Based Access Control for any cluster
  • Ability to import any existing cluster, respecting its existing state instead of overwriting it with new state from ksctl, and managing only the resources the tool has access to
  • Add an initial production-ready setup for cert-manager + ingress controller (nginx) + Gateway API
  • Add an initial production-ready setup for monitoring (Prometheus + Grafana) and tracing (Jaeger) with OpenTelemetry support
  • Add an initial production-ready setup for networking (Cilium)
  • Add an initial production-ready setup for service mesh (Istio)
  • Add support for Kubernetes migration, such as moving from one cloud provider to another
  • Add support for Kubernetes backup
  • OpenTelemetry support will lead to better observability by combining logs, metrics, and traces in one place, enabling tooling for alerting and pattern-based suggestions, from errors to recommendations


15 - Storage

Storage providers available in ksctl.

15.1 - External Storage


External MongoDB as a Storage provider

Data stored and the filtering it performs

  1. First it gets the cluster data / credentials data based on these filters:
    • cluster_name (for cluster)
    • region (for cluster)
    • cloud_provider (for cluster & credentials)
    • cluster_type (for cluster)
    • also, once the cluster has reached its stable desired state, the IsCompleted flag in the specific cloud_provider struct is marked to indicate it is done
  2. Make sure the above fields are specified before writing to the storage

How to use it

  1. Call the Init function to get the storage; make sure the caller holds it as the interface type
  2. Before performing any operations, you must call Connect()
  3. Before using the methods Read(), Write(), Delete(), make sure you have called Setup()
  4. ReadCredentials() and WriteCredentials() can be used directly; just specify the cloud provider you want to write to
  5. For GetOneOrMoreClusters() you simply need to specify the filter
  6. For AlreadyCreated() you just have to specify the function arguments
  7. Don't forget to call storage.Kill() when you want to stop the complete execution; it guarantees that it will wait until all pending operations on the storage are completed
  8. For a custom storage directory, set the env var KSCTL_CUSTOM_DIR_ENABLED; the value must be space-separated directory names
  9. You need to pass the secrets in the context

Hint: the connection string is mongodb://${username}:${password}@${domain}:${port} for self-hosted MongoDB, or mongodb+srv://${username}:${password}@${domain} for MongoDB Atlas.
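
For instance, with illustrative values, the environment variables from the contribution guide combine into a connection string like this (a sketch; the exact assembly is an assumption):

# self-hosted MongoDB -> mongodb://ksctl:s3cret@mongo.example.com:27017
export MONGODB_SRV=false
export MONGODB_HOST="mongo.example.com"
export MONGODB_PORT="27017"
export MONGODB_USER="ksctl"
export MONGODB_PASS="s3cret"

# MongoDB Atlas (SRV) -> mongodb+srv://ksctl:s3cret@cluster0.example.mongodb.net
# set MONGODB_SRV=true and omit MONGODB_PORT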

Things to look for

  1. Make sure that when you receive return data from Read(), you copy the dereferenced value into your state variable, not the pointer address!

  2. When any credentials are written, they are stored in:

    • Database: ksctl-{userid}-db
    • Collection: {cloud_provider}
    • Document/Record: raw BSON data with the data and filter fields specified above
  3. When you do Switch (aka getKubeconfig), it fetches the kubeconfig from the stored cluster state and returns the kubeconfig data

15.2 - Local Storage


Local as a Storage Provider

Refer: internal/storage/local

Data stored and the filtering it performs

  1. First it gets the cluster data / credentials data based on these filters:
    • cluster_name (for cluster)
    • region (for cluster)
    • cloud_provider (for cluster & credentials)
    • cluster_type (for cluster)
    • also, once the cluster has reached its stable desired state, the IsCompleted flag in the specific cloud_provider struct is marked to indicate it is done
  2. Make sure the above fields are specified before writing to the storage

It is stored something like this:

 it will use almost the same construct.
 * ClusterInfos => $USER_HOME/.ksctl/state/
	 |-- {cloud_provider}
		|-- {cluster_type} aka (ha, managed)
			|-- "{cluster_name} {region}"
				|-- state.json
 * CredentialInfo => $USER_HOME/.ksctl/credentials/{cloud_provider}.json
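
For example, a single Azure HA cluster named demo-cluster in the eastus region (illustrative names) would be laid out on disk as:

$USER_HOME/.ksctl
├── credentials
│   └── azure.json
└── state
    └── azure
        └── ha
            └── demo-cluster eastus
                └── state.json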

How to use it

  1. Call the Init function to get the storage; make sure the caller holds it as the interface type
  2. Before performing any operations, you must call Connect()
  3. Before using the methods Read(), Write(), Delete(), make sure you have called Setup()
  4. ReadCredentials() and WriteCredentials() can be used directly; just specify the cloud provider you want to write to
  5. For GetOneOrMoreClusters() you simply need to specify the filter
  6. For AlreadyCreated() you just have to specify the function arguments
  7. Don't forget to call storage.Kill() when you want to stop the complete execution; it guarantees that it will wait until all pending operations on the storage are completed
  8. For a custom storage directory, set the env var KSCTL_CUSTOM_DIR_ENABLED; the value must be space-separated directory names
  9. It creates the configuration directories on your behalf

Things to look for

  1. Make sure that when you receive return data from Read(), you copy the dereferenced value into your state variable, not the pointer address!
  2. When any credentials are written, they are stored in <some_dir>/.ksctl/credentials/{cloud_provider}.json
  3. When any clusterState is written, it gets stored in <some_dir>/.ksctl/state/{cloud_provider}/{cluster_type}/{cluster_name} {region}/state.json
  4. When you do Switch (aka getKubeconfig), it fetches the kubeconfig from the path in point 3 and returns the kubeconfig data