Ksctl documentation
Documentation
- 1: Architecture
- 1.1: API Components
- 2: Getting Started
- 3: Cloud Provider
- 3.1: Amazon Web Services
- 3.2: Azure
- 3.3: Civo
- 3.4: Google Cloud Platform
- 3.5: Local
- 4: Reference
- 4.1: ksctl
- 4.2: ksctl_connect-cluster
- 4.3: ksctl_create-cluster
- 4.4: ksctl_create-cluster_aws
- 4.5: ksctl_create-cluster_azure
- 4.6: ksctl_create-cluster_civo
- 4.7: ksctl_create-cluster_ha-aws
- 4.8: ksctl_create-cluster_ha-aws_add-nodes
- 4.9: ksctl_create-cluster_ha-azure
- 4.10: ksctl_create-cluster_ha-azure_add-nodes
- 4.11: ksctl_create-cluster_ha-civo
- 4.12: ksctl_create-cluster_ha-civo_add-nodes
- 4.13: ksctl_create-cluster_local
- 4.14: ksctl_cred
- 4.15: ksctl_delete-cluster
- 4.16: ksctl_delete-cluster_aws
- 4.17: ksctl_delete-cluster_azure
- 4.18: ksctl_delete-cluster_civo
- 4.19: ksctl_delete-cluster_ha-aws
- 4.20: ksctl_delete-cluster_ha-aws_del-nodes
- 4.21: ksctl_delete-cluster_ha-azure
- 4.22: ksctl_delete-cluster_ha-azure_del-nodes
- 4.23: ksctl_delete-cluster_ha-civo
- 4.24: ksctl_delete-cluster_ha-civo_del-nodes
- 4.25: ksctl_delete-cluster_local
- 4.26: ksctl_get-clusters
- 4.27: ksctl_info-cluster
- 4.28: ksctl_self-update
- 4.29: ksctl_version
- 5: Contribution Guidelines
- 5.1: Contribution Guidelines for CLI
- 5.2: Contribution Guidelines for Core
- 5.3: Contribution Guidelines for Docs
- 6: Concepts
- 6.1: Cloud Controller
- 6.2: Core functionalities
- 6.3: Core Manager
- 6.4: Distribution Controller
- 7: Contributors
- 8: FAQ
- 9: Features
- 10: Ksctl Components
- 10.1: Ksctl Agent
- 10.2: Ksctl Application Controller
- 10.3: Ksctl State-Importer
- 11: Kubernetes Distributions
- 12: Maintainers
- 13: Roadmap
- 14: Search Results
- 15: Storage
- 15.1: External Storage
- 15.2: Local Storage
1 - Architecture
Architecture diagrams
1.1 - API Components
Core Design Components
Design
Overview architecture of ksctl
Managed Cluster creation & deletion
High Available Cluster creation & deletion
Architecture change to event-based for many more capabilities
Note:
Currently this is WIP.
2 - Getting Started
Getting Started Documentation
Installation & Uninstallation Instructions
Ksctl CLI
Let's begin with the installation of the tool; there are various methods.
Single command method
Install
Steps to install the ksctl CLI tool:
curl -sfL https://get.ksctl.com | python3 -
Uninstall
Steps to uninstall the ksctl CLI tool:
bash <(curl -s https://raw.githubusercontent.com/ksctl/cli/main/scripts/uninstall.sh)
zsh <(curl -s https://raw.githubusercontent.com/ksctl/cli/main/scripts/uninstall.sh)
From Source Code
Caution!
Under-development binaries
Note
The binaries for testing the ksctl CLI are available in the ksctl/cli repo.
make install_linux
# macOS on M1
make install_macos
# macOS on INTEL
make install_macos_intel
# For uninstalling
make uninstall
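After installing by either method, you can verify the CLI is on your PATH by printing its version (the version subcommand is documented in the CLI reference later on this page):
ksctl version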
How to start with cli
Here are the CLI references.
3 - Cloud Provider
This page includes more info about the different cloud providers.
3.1 - Amazon Web Services
AWS integration for High Availability and Managed Kubernetes Clusters
Caution
AWS credentials are required to access clusters. These credentials are sensitive information and must be kept secure.
Authentication Methods
Environment Variables
Set the following environment variables:
export AWS_ACCESS_KEY_ID=""
export AWS_SECRET_ACCESS_KEY=""
Command Line Interface
Use the ksctl credential manager:
ksctl cred
Available Cluster Types
Highly Available (HA) Clusters
Self-managed clusters with the following components:
- Distributed etcd database instances
- HAProxy load balancer for control plane high availability
- Multiple control plane nodes
- Worker nodes
Choose between two bootstrap options:
- k3s (lightweight Kubernetes distribution)
- kubeadm (official Kubernetes bootstrap tool)
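For example, adapted from the ha-aws entry in the CLI reference later on this page (the --cni flag is omitted here):
ksctl create-cluster ha-aws -n demo -r us-east-1 --bootstrap k3s -s store-local --nodeSizeCP t2.medium --nodeSizeWP t2.medium --nodeSizeLB t2.micro --nodeSizeDS t2.small --noWP 1 --noCP 3 --noDS 3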
Amazon EKS (Managed Clusters)
Elastic Kubernetes Service deployment with automated:
- IAM role creation and management
- Control plane setup
- Node group configuration
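For example, from the aws entry in the CLI reference later on this page, a managed EKS cluster can be created with:
ksctl create-cluster aws -n demo -r ap-south-1 -s store-local --nodeSizeMP t2.micro --noMP 3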
IAM Configuration
For each cluster, ksctl creates two roles:
ksctl-<clustername>-wp-role: Manages node pool permissions
ksctl-<clustername>-cp-role: Handles control plane access
Required IAM Policies
- Custom IAM Role Access Policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor6",
"Effect": "Allow",
"Action": [
"iam:CreateInstanceProfile",
"iam:DeleteInstanceProfile",
"iam:GetRole",
"iam:GetInstanceProfile",
"iam:RemoveRoleFromInstanceProfile",
"iam:CreateRole",
"iam:DeleteRole",
"iam:AttachRolePolicy",
"iam:PutRolePolicy",
"iam:ListInstanceProfiles",
"iam:AddRoleToInstanceProfile",
"iam:ListInstanceProfilesForRole",
"iam:PassRole",
"iam:CreateServiceLinkedRole",
"iam:DetachRolePolicy",
"iam:DeleteRolePolicy",
"iam:DeleteServiceLinkedRole",
"iam:GetRolePolicy",
"iam:SetSecurityTokenServicePreferences"
],
"Resource": [
"arn:aws:iam::*:role/ksctl-*",
"arn:aws:iam::*:instance-profile/*"
]
}
]
}
- Custom EKS Access Policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"eks:ListNodegroups",
"eks:ListClusters",
"eks:*"
],
"Resource": "*"
}
]
}
- AWS Managed Policies Required
- AmazonEC2FullAccess
- IAMReadOnlyAccess
Kubeconfig Authentication
After switching to an AWS cluster using:
ksctl switch aws --name here-you-go --region us-east-1
The generated kubeconfig uses AWS STS tokens which expire after 15 minutes. When you encounter authentication errors, simply run the switch command again to refresh the credentials.
Looking for CLI Commands?
All CLI commands mentioned in this documentation have detailed explanations in our command reference guide.
CLI Reference
Check out our comprehensive CLI Commands Reference for:
- Detailed command syntax
- Usage examples
- Available options and flags
- Common use cases
3.2 - Azure
Azure support for High Availability and Managed Kubernetes Clusters
Caution
Azure credentials are required to access clusters. These credentials are sensitive information and must be kept secure.
Azure Credential Requirements
Subscription ID
Your Azure subscription identifier can be found in your subscription details.
Tenant ID
Located in the Azure Dashboard, which provides access to all required credentials; your Tenant ID can be found there.
Client ID (Application ID)
Represents the identifier of your registered application.
Steps to create:
- Navigate to App Registrations
- Register a new application
- Obtain the Client ID
Client Secret
Authentication key for your registered application.
Steps to generate:
- Access secret creation
- Configure secret settings
- Save the generated secret
Role Assignment
Configure application permissions:
- Navigate to Subscriptions > Access Control (IAM)
- Select “Role Assignment”
- Click “Add > Add Role Assignment”
- Create new role and specify the application name
- Configure desired permissions
Authentication Methods
Environment Variables
export AZURE_TENANT_ID=""
export AZURE_SUBSCRIPTION_ID=""
export AZURE_CLIENT_ID=""
export AZURE_CLIENT_SECRET=""
Command Line Interface
ksctl cred
Available Cluster Types
High Availability (HA) Clusters
Self-managed clusters with the following components:
- Distributed etcd database instances
- HAProxy load balancer for control plane high availability
- Multiple control plane nodes
- Worker nodes
Bootstrap options:
- k3s (lightweight Kubernetes distribution)
- kubeadm (official Kubernetes bootstrap tool)
Azure Kubernetes Service (AKS)
Fully managed Kubernetes service by Azure.
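For example, from the azure entry in the CLI reference later on this page, an AKS cluster can be created with:
ksctl create-cluster azure -n demo -r eastus -s store-local --nodeSizeMP Standard_DS2_v2 --noMP 3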
Cluster Management Features
Cluster Operations
Managed Clusters (AKS)
- Create and delete operations
- Cluster switching
- Infrastructure updates currently not supported
High Availability Clusters
- Worker node scaling (add/remove)
- Secure SSH access to all components:
- Database nodes
- Load balancer
- Control plane nodes
- Worker nodes
- Protected by SSH key authentication
- Public access enabled
Looking for CLI Commands?
All CLI commands mentioned in this documentation have detailed explanations in our command reference guide.
CLI Reference
Check out our comprehensive CLI Commands Reference for:
- Detailed command syntax
- Usage examples
- Available options and flags
- Common use cases
3.3 - Civo
Civo support for High Availability and Managed Kubernetes Clusters
Caution
Civo API credentials are required to access clusters. These credentials are sensitive information and must be kept secure.
Obtaining Civo Credentials
1. Access API Settings
Navigate to your Civo dashboard settings:
2. Open Profile Settings
Select your profile section:
3. Generate API Key
Access the API keys section and create or copy your API token:
Authentication Methods
Environment Variables
Set your Civo API token:
export CIVO_TOKEN=""
Command Line Interface
Use the ksctl credential manager:
ksctl cred
Available Cluster Types
High Availability (HA) Clusters
Self-managed clusters with the following components:
- Distributed etcd database instances
- HAProxy load balancer for control plane high availability
- Multiple control plane nodes
- Worker nodes
Bootstrap options:
- k3s (lightweight Kubernetes distribution)
- kubeadm (official Kubernetes bootstrap tool)
Civo Kubernetes Service (CKS)
Fully managed Kubernetes service by Civo.
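For example, from the civo entry in the CLI reference later on this page, a managed Civo cluster can be created with:
ksctl create-cluster civo --name demo --region LON1 --storage store-local --nodeSizeMP g4s.kube.small --noMP 3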
Cluster Management Features
Cluster Operations
Managed Clusters (CKS)
- Cluster creation and deletion
- Cluster switching capability
- Infrastructure updates currently not supported
High Availability Clusters
Node Management
- Dynamic worker node scaling (add/remove nodes)
- Secure SSH access to cluster components
Access Control
Control Plane Components
- Database nodes (Public access)
- Load balancer (Public access)
- Control plane nodes (Public access)
- All secured with SSH key authentication
Worker Nodes
- Private network access only
- SSH access via internal network
- Protected by SSH key authentication
Looking for CLI Commands?
All CLI commands mentioned in this documentation have detailed explanations in our command reference guide.
CLI Reference
Check out our comprehensive CLI Commands Reference for:
- Detailed command syntax
- Usage examples
- Available options and flags
- Common use cases
3.4 - Google Cloud Platform
GCP support for HA and Managed Clusters
Caution
GCP credentials are required to access clusters. These credentials are sensitive information and must be kept secure.
3.5 - Local
It creates a cluster on the host machine utilizing kind.
Note
Prerequisites: Docker
Current features
Currently using kind (Kubernetes in Docker)
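For example, from the local entry in the CLI reference later on this page, a local kind cluster can be created with:
ksctl create-cluster local --name demo --storage store-local --noMP 3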
Looking for CLI Commands?
All CLI commands mentioned in this documentation have detailed explanations in our command reference guide.
CLI Reference
Check out our comprehensive CLI Commands Reference for:
- Detailed command syntax
- Usage examples
- Available options and flags
- Common use cases
4 - Reference
The CLI command reference below is mapped from the ksctl/cli repo.
CLI Command Reference
Docs are now available in the CLI repo. Here are the links to the documentation files.
Info
These CLI commands are available with CLI-specific versions from [email protected] onwards: v1.2.0 cli references
4.1 - ksctl
ksctl
CLI tool for managing multiple K8s clusters
Synopsis
Ksctl ascii [logo]
Options
-h, --help help for ksctl
-t, --toggle Help message for toggle
SEE ALSO
- ksctl connect-cluster - Use to switch between clusters
- ksctl create-cluster - Use to create a cluster
- ksctl cred - Login to your Cloud-provider Credentials
- ksctl delete-cluster - Use to delete a cluster
- ksctl get-clusters - Use to get clusters
- ksctl info-cluster - Use to get cluster info
- ksctl self-update - update the ksctl cli
- ksctl version - Print the version number of ksctl
Auto generated by spf13/cobra on 2-Dec-2024
4.2 - ksctl_connect-cluster
ksctl connect-cluster
Use to switch between clusters
Synopsis
Ksctl ascii [logo]
ksctl connect-cluster [flags]
Examples
ksctl connect-context --provider civo --name <clustername> --region <region>
ksctl connect --provider civo --name <clustername> --region <region>
ksctl switch --provider civo --name <clustername> --region <region>
ksctl connect-context --provider local --name <clustername>
ksctl connect-context --provider azure --name <clustername> --region <region>
ksctl connect-context --provider ha-civo --name <clustername> --region <region>
ksctl connect-context --provider ha-azure --name <clustername> --region <region>
ksctl connect-context --provider ha-aws --name <clustername> --region <region>
ksctl connect-context --provider aws --name <clustername> --region <region>
For storage-specific usage:
ksctl connect-context -s store-local -p civo -n <clustername> -r <region>
ksctl connect-context -s external-store-mongodb -p civo -n <clustername> -r <region>
Options
--feature-flags string Experimental Features: Supported values with comma separated: [autoscale]
-h, --help help for connect-cluster
-m, --mode string Mode of access can be shell or k9s or none
-n, --name string Cluster Name (default "demo")
-p, --provider string Provider
-r, --region string Region
-s, --storage string storage provider
-v, --verbose int for verbose output
SEE ALSO
- ksctl - CLI tool for managing multiple K8s clusters
Auto generated by spf13/cobra on 2-Dec-2024
4.3 - ksctl_create-cluster
ksctl create-cluster
Use to create a cluster
Synopsis
Ksctl ascii [logo]
Examples
ksctl create --help
Options
-h, --help help for create-cluster
SEE ALSO
- ksctl - CLI tool for managing multiple K8s clusters
- ksctl create-cluster aws - Use to create an EKS cluster in AWS
- ksctl create-cluster azure - Use to create an AKS cluster in Azure
- ksctl create-cluster civo - Use to create a Civo managed k3s cluster
- ksctl create-cluster ha-aws - Use to create a self-managed Highly Available cluster on AWS
- ksctl create-cluster ha-azure - Use to create a self-managed Highly-Available cluster on Azure
- ksctl create-cluster ha-civo - Use to create a self-managed Highly Available cluster on Civo
- ksctl create-cluster local - Use to create a kind cluster
Auto generated by spf13/cobra on 2-Dec-2024
4.4 - ksctl_create-cluster_aws
ksctl create-cluster aws
Use to create an EKS cluster in AWS
Synopsis
Ksctl ascii [logo]
ksctl create-cluster aws [flags]
Examples
ksctl create-cluster aws -n demo -r ap-south-1 -s store-local --nodeSizeMP t2.micro --noMP 3
Options
--bootstrap string Kubernetes Bootstrap
--cni string CNI
--feature-flags string Experimental Features: Supported values with comma separated: [autoscale]
-h, --help help for aws
-n, --name string Cluster Name (default "demo")
--noMP int Number of Managed Nodes (default -1)
--nodeSizeMP string Node size of managed cluster nodes
-r, --region string Region
-s, --storage string storage provider
-v, --verbose int for verbose output
--version string Kubernetes Version
-y, --yes approval to avoid showMsg (default true)
SEE ALSO
- ksctl create-cluster - Use to create a cluster
Auto generated by spf13/cobra on 2-Dec-2024
4.5 - ksctl_create-cluster_azure
ksctl create-cluster azure
Use to create an AKS cluster in Azure
Synopsis
Ksctl ascii [logo]
ksctl create-cluster azure [flags]
Examples
ksctl create-cluster azure -n demo -r eastus -s store-local --nodeSizeMP Standard_DS2_v2 --noMP 3
Options
--bootstrap string Kubernetes Bootstrap
--cni string CNI
--feature-flags string Experimental Features: Supported values with comma separated: [autoscale]
-h, --help help for azure
-n, --name string Cluster Name (default "demo")
--noMP int Number of Managed Nodes (default -1)
--nodeSizeMP string Node size of managed cluster nodes
-r, --region string Region
-s, --storage string storage provider
-v, --verbose int for verbose output
--version string Kubernetes Version
-y, --yes approval to avoid showMsg (default true)
SEE ALSO
- ksctl create-cluster - Use to create a cluster
Auto generated by spf13/cobra on 2-Dec-2024
4.6 - ksctl_create-cluster_civo
ksctl create-cluster civo
Use to create a Civo managed k3s cluster
Synopsis
Ksctl ascii [logo]
ksctl create-cluster civo [flags]
Examples
ksctl create-cluster civo --name demo --region LON1 --storage store-local --nodeSizeMP g4s.kube.small --noMP 3
Options
--bootstrap string Kubernetes Bootstrap
--cni string CNI
--feature-flags string Experimental Features: Supported values with comma separated: [autoscale]
-h, --help help for civo
-n, --name string Cluster Name (default "demo")
--noMP int Number of Managed Nodes (default -1)
--nodeSizeMP string Node size of managed cluster nodes
-r, --region string Region
-s, --storage string storage provider
-v, --verbose int for verbose output
--version string Kubernetes Version
-y, --yes approval to avoid showMsg (default true)
SEE ALSO
- ksctl create-cluster - Use to create a cluster
Auto generated by spf13/cobra on 2-Dec-2024
4.7 - ksctl_create-cluster_ha-aws
ksctl create-cluster ha-aws
Use to create a self-managed Highly Available cluster on AWS
Synopsis
Ksctl ascii [logo]
ksctl create-cluster ha-aws [flags]
Examples
ksctl create-cluster ha-aws -n demo -r us-east-1 --bootstrap k3s -s store-local --nodeSizeCP t2.medium --nodeSizeWP t2.medium --nodeSizeLB t2.micro --nodeSizeDS t2.small --noWP 1 --noCP 3 --noDS 3 --cni [email protected]
Options
--bootstrap string Kubernetes Bootstrap
--cni string CNI
--feature-flags string Experimental Features: Supported values with comma separated: [autoscale]
-h, --help help for ha-aws
-n, --name string Cluster Name (default "demo")
--noCP int Number of ControlPlane Nodes (default -1)
--noDS int Number of DataStore Nodes (default -1)
--noWP int Number of WorkerPlane Nodes (default -1)
--nodeSizeCP string Node size of self-managed controlplane nodes
--nodeSizeDS string Node size of self-managed datastore nodes
--nodeSizeLB string Node size of self-managed loadbalancer node
--nodeSizeWP string Node size of self-managed workerplane nodes
-r, --region string Region
-s, --storage string storage provider
-v, --verbose int for verbose output
--version string Kubernetes Version
-y, --yes approval to avoid showMsg (default true)
SEE ALSO
- ksctl create-cluster - Use to create a cluster
- ksctl create-cluster ha-aws add-nodes - Use to add more worker nodes in self-managed Highly-Available cluster on AWS
Auto generated by spf13/cobra on 2-Dec-2024
4.8 - ksctl_create-cluster_ha-aws_add-nodes
ksctl create-cluster ha-aws add-nodes
Use to add more worker nodes in self-managed Highly-Available cluster on AWS
Synopsis
It is used to add worker nodes to the cluster with the given name, as provided by the user.
ksctl create-cluster ha-aws add-nodes [flags]
Examples
ksctl create ha-aws add-nodes -n demo -r ap-south-1 -s store-local --noWP 3 --nodeSizeWP t2.medium # Here the noWP is the desired count of workernodes
Options
-h, --help help for add-nodes
-n, --name string Cluster Name (default "demo")
--noWP int Number of WorkerPlane Nodes (default -1)
--nodeSizeWP string Node size of self-managed workerplane nodes
-r, --region string Region
-s, --storage string storage provider
-v, --verbose int for verbose output
-y, --yes approval to avoid showMsg (default true)
SEE ALSO
- ksctl create-cluster ha-aws - Use to create a self-managed Highly Available cluster on AWS
Auto generated by spf13/cobra on 2-Dec-2024
4.9 - ksctl_create-cluster_ha-azure
ksctl create-cluster ha-azure
Use to create a self-managed Highly-Available cluster on Azure
Synopsis
Ksctl ascii [logo]
ksctl create-cluster ha-azure [flags]
Examples
ksctl create-cluster ha-azure --name demo --region eastus --bootstrap k3s --storage store-local --nodeSizeCP Standard_F2s --nodeSizeWP Standard_F2s --nodeSizeLB Standard_F2s --nodeSizeDS Standard_F2s --noWP 1 --noCP 3 --noDS 3
ksctl create-cluster ha-azure --name demo --region eastus --bootstrap kubeadm --storage store-local --nodeSizeCP Standard_F2s --nodeSizeWP Standard_F4s --nodeSizeLB Standard_F2s --nodeSizeDS Standard_F2s --noWP 1 --noCP 3 --noDS 3 --cni [email protected]
Options
--bootstrap string Kubernetes Bootstrap
--cni string CNI
--feature-flags string Experimental Features: Supported values with comma separated: [autoscale]
-h, --help help for ha-azure
-n, --name string Cluster Name (default "demo")
--noCP int Number of ControlPlane Nodes (default -1)
--noDS int Number of DataStore Nodes (default -1)
--noWP int Number of WorkerPlane Nodes (default -1)
--nodeSizeCP string Node size of self-managed controlplane nodes
--nodeSizeDS string Node size of self-managed datastore nodes
--nodeSizeLB string Node size of self-managed loadbalancer node
--nodeSizeWP string Node size of self-managed workerplane nodes
-r, --region string Region
-s, --storage string storage provider
-v, --verbose int for verbose output
--version string Kubernetes Version
-y, --yes approval to avoid showMsg (default true)
SEE ALSO
- ksctl create-cluster - Use to create a cluster
- ksctl create-cluster ha-azure add-nodes - Use to add more worker nodes in self-managed Highly-Available cluster on Azure
Auto generated by spf13/cobra on 2-Dec-2024
4.10 - ksctl_create-cluster_ha-azure_add-nodes
ksctl create-cluster ha-azure add-nodes
Use to add more worker nodes in self-managed Highly-Available cluster on Azure
Synopsis
It is used to add worker nodes to the cluster with the given name, as provided by the user.
ksctl create-cluster ha-azure add-nodes [flags]
Examples
ksctl create ha-azure add-nodes -n demo -r eastus -s store-local --noWP 3 --nodeSizeWP Standard_F2s # Here the noWP is the desired count of workernodes
Options
--feature-flags string Experimental Features: Supported values with comma separated: [autoscale]
-h, --help help for add-nodes
-n, --name string Cluster Name (default "demo")
--noWP int Number of WorkerPlane Nodes (default -1)
--nodeSizeWP string Node size of self-managed workerplane nodes
-r, --region string Region
-s, --storage string storage provider
-v, --verbose int for verbose output
-y, --yes approval to avoid showMsg (default true)
SEE ALSO
- ksctl create-cluster ha-azure - Use to create a self-managed Highly-Available cluster on Azure
Auto generated by spf13/cobra on 2-Dec-2024
4.11 - ksctl_create-cluster_ha-civo
ksctl create-cluster ha-civo
Use to create a self-managed Highly Available cluster on Civo
Synopsis
Ksctl ascii [logo]
ksctl create-cluster ha-civo [flags]
Examples
ksctl create-cluster ha-civo --name demo --region LON1 --bootstrap k3s --storage store-local --nodeSizeCP g3.small --nodeSizeWP g3.medium --nodeSizeLB g3.small --nodeSizeDS g3.small --noWP 1 --noCP 3 --noDS 3
ksctl create-cluster ha-civo --name demo --region LON1 --bootstrap kubeadm --storage store-local --nodeSizeCP g3.medium --nodeSizeWP g3.large --nodeSizeLB g3.small --nodeSizeDS g3.small --noWP 1 --noCP 3 --noDS 3 --cni [email protected]
Options
--bootstrap string Kubernetes Bootstrap
--cni string CNI
--feature-flags string Experimental Features: Supported values with comma separated: [autoscale]
-h, --help help for ha-civo
-n, --name string Cluster Name (default "demo")
--noCP int Number of ControlPlane Nodes (default -1)
--noDS int Number of DataStore Nodes (default -1)
--noWP int Number of WorkerPlane Nodes (default -1)
--nodeSizeCP string Node size of self-managed controlplane nodes
--nodeSizeDS string Node size of self-managed datastore nodes
--nodeSizeLB string Node size of self-managed loadbalancer node
--nodeSizeWP string Node size of self-managed workerplane nodes
-r, --region string Region
-s, --storage string storage provider
-v, --verbose int for verbose output
--version string Kubernetes Version
-y, --yes approval to avoid showMsg (default true)
SEE ALSO
- ksctl create-cluster - Use to create a cluster
- ksctl create-cluster ha-civo add-nodes - Use to add more worker nodes in self-managed Highly-Available cluster on Civo
Auto generated by spf13/cobra on 2-Dec-2024
4.12 - ksctl_create-cluster_ha-civo_add-nodes
ksctl create-cluster ha-civo add-nodes
Use to add more worker nodes in self-managed Highly-Available cluster on Civo
Synopsis
It is used to add worker nodes to the cluster with the given name, as provided by the user.
ksctl create-cluster ha-civo add-nodes [flags]
Examples
ksctl create ha-civo add-nodes -n demo -r LON1 -s store-local --noWP 3 --nodeSizeWP g3.medium # Here the noWP is the desired count of workernodes
Options
--feature-flags string Experimental Features: Supported values with comma separated: [autoscale]
-h, --help help for add-nodes
-n, --name string Cluster Name (default "demo")
--noWP int Number of WorkerPlane Nodes (default -1)
--nodeSizeWP string Node size of self-managed workerplane nodes
-r, --region string Region
-s, --storage string storage provider
-v, --verbose int for verbose output
-y, --yes approval to avoid showMsg (default true)
SEE ALSO
- ksctl create-cluster ha-civo - Use to create a self-managed Highly Available cluster on Civo
Auto generated by spf13/cobra on 2-Dec-2024
4.13 - ksctl_create-cluster_local
ksctl create-cluster local
Use to create a kind cluster
Synopsis
Ksctl ascii [logo]
ksctl create-cluster local [flags]
Examples
ksctl create-cluster local --name demo --storage store-local --noMP 3
Options
--bootstrap string Kubernetes Bootstrap
--cni string CNI
--feature-flags string Experimental Features: Supported values with comma separated: [autoscale]
-h, --help help for local
-n, --name string Cluster Name (default "demo")
--noMP int Number of Managed Nodes (default -1)
-s, --storage string storage provider
-v, --verbose int for verbose output
--version string Kubernetes Version
-y, --yes approval to avoid showMsg (default true)
SEE ALSO
- ksctl create-cluster - Use to create a cluster
Auto generated by spf13/cobra on 2-Dec-2024
4.14 - ksctl_cred
ksctl cred
Login to your Cloud-provider Credentials
Synopsis
Ksctl ascii [logo]
ksctl cred [flags]
Options
-h, --help help for cred
-s, --storage string storage provider
-v, --verbose for verbose output (default true)
SEE ALSO
- ksctl - CLI tool for managing multiple K8s clusters
Auto generated by spf13/cobra on 2-Dec-2024
4.15 - ksctl_delete-cluster
ksctl delete-cluster
Use to delete a cluster
Synopsis
Ksctl ascii [logo]
Examples
ksctl delete --help
Options
-h, --help help for delete-cluster
SEE ALSO
- ksctl - CLI tool for managing multiple K8s clusters
- ksctl delete-cluster aws - Use to delete an EKS cluster
- ksctl delete-cluster azure - Use to delete an AKS cluster
- ksctl delete-cluster civo - Use to delete a Civo managed k3s cluster
- ksctl delete-cluster ha-aws - Use to delete a self-managed Highly Available cluster on AWS
- ksctl delete-cluster ha-azure - Use to delete a self-managed Highly Available cluster on Azure
- ksctl delete-cluster ha-civo - Use to delete a self-managed Highly Available cluster on Civo
- ksctl delete-cluster local - Use to delete a kind cluster
Auto generated by spf13/cobra on 2-Dec-2024
4.16 - ksctl_delete-cluster_aws
ksctl delete-cluster aws
Use to delete an EKS cluster
Synopsis
Ksctl ascii [logo]
ksctl delete-cluster aws [flags]
Examples
ksctl delete aws --name demo --region ap-south-1 --storage store-local
Options
-h, --help help for aws
-n, --name string Cluster Name (default "demo")
-r, --region string Region
-s, --storage string storage provider
-v, --verbose int for verbose output
-y, --yes approval to avoid showMsg (default true)
SEE ALSO
- ksctl delete-cluster - Use to delete a cluster
Auto generated by spf13/cobra on 2-Dec-2024
4.17 - ksctl_delete-cluster_azure
ksctl delete-cluster azure
Use to delete an AKS cluster
Synopsis
Ksctl ascii [logo]
ksctl delete-cluster azure [flags]
Examples
ksctl delete azure --name demo --region eastus --storage store-local
Options
--feature-flags string Experimental Features: Supported values with comma separated: [autoscale]
-h, --help help for azure
-n, --name string Cluster Name (default "demo")
-r, --region string Region
-s, --storage string storage provider
-v, --verbose int for verbose output
-y, --yes approval to avoid showMsg (default true)
SEE ALSO
- ksctl delete-cluster - Use to delete a cluster
Auto generated by spf13/cobra on 2-Dec-2024
4.18 - ksctl_delete-cluster_civo
ksctl delete-cluster civo
Use to delete a Civo managed k3s cluster
Synopsis
Ksctl ascii [logo]
ksctl delete-cluster civo [flags]
Examples
ksctl delete civo --name demo --region LON1 --storage store-local
Options
--feature-flags string Experimental Features: Supported values with comma separated: [autoscale]
-h, --help help for civo
-n, --name string Cluster Name (default "demo")
-r, --region string Region
-s, --storage string storage provider
-v, --verbose int for verbose output
-y, --yes approval to avoid showMsg (default true)
SEE ALSO
- ksctl delete-cluster - Use to delete a cluster
Auto generated by spf13/cobra on 2-Dec-2024
4.19 - ksctl_delete-cluster_ha-aws
ksctl delete-cluster ha-aws
Use to delete a self-managed Highly Available cluster on AWS
Synopsis
Ksctl ascii [logo]
ksctl delete-cluster ha-aws [flags]
Examples
ksctl delete ha-aws --name demo --region us-east-1 --storage store-local
Options
--feature-flags string Experimental Features: Supported values with comma separated: [autoscale]
-h, --help help for ha-aws
-n, --name string Cluster Name (default "demo")
-r, --region string Region
-s, --storage string storage provider
-v, --verbose int for verbose output
-y, --yes approval to avoid showMsg (default true)
SEE ALSO
- ksctl delete-cluster - Use to delete a cluster
- ksctl delete-cluster ha-aws del-nodes - Use to remove worker nodes in self-managed Highly-Available cluster on AWS
Auto generated by spf13/cobra on 2-Dec-2024
4.20 - ksctl_delete-cluster_ha-aws_del-nodes
ksctl delete-cluster ha-aws del-nodes
Use to remove worker nodes in self-managed Highly-Available cluster on AWS
Synopsis
It is used to remove worker nodes from the cluster with the given name.
ksctl delete-cluster ha-aws del-nodes [flags]
Examples
ksctl delete ha-aws del-nodes -n demo -r us-east-1 -s store-local --noWP 1 # Here the noWP is the desired count of workernodes
Options
-h, --help help for del-nodes
-n, --name string Cluster Name (default "demo")
--noWP int Number of WorkerPlane Nodes (default -1)
-r, --region string Region
-s, --storage string storage provider
-v, --verbose int for verbose output
-y, --yes approval to avoid showMsg (default true)
SEE ALSO
- ksctl delete-cluster ha-aws - Use to delete a self-managed Highly Available cluster on AWS
Auto generated by spf13/cobra on 2-Dec-2024
4.21 - ksctl_delete-cluster_ha-azure
ksctl delete-cluster ha-azure
Use to delete a self-managed Highly Available cluster on Azure
Synopsis
Ksctl ascii [logo]
ksctl delete-cluster ha-azure [flags]
Examples
ksctl delete ha-azure --name demo --region eastus --storage store-local
Options
--feature-flags string Experimental Features: Supported values with comma separated: [autoscale]
-h, --help help for ha-azure
-n, --name string Cluster Name (default "demo")
-r, --region string Region
-s, --storage string storage provider
-v, --verbose int for verbose output
-y, --yes approval to avoid showMsg (default true)
SEE ALSO
- ksctl delete-cluster - Use to delete a cluster
- ksctl delete-cluster ha-azure del-nodes - Use to remove worker nodes in self-managed Highly-Available cluster on Azure
Auto generated by spf13/cobra on 2-Dec-2024
4.22 - ksctl_delete-cluster_ha-azure_del-nodes
ksctl delete-cluster ha-azure del-nodes
Use to remove worker nodes in self-managed Highly-Available cluster on Azure
Synopsis
It is used to remove worker nodes from the cluster with the given name.
ksctl delete-cluster ha-azure del-nodes [flags]
Examples
ksctl delete ha-azure del-nodes -n demo -r eastus -s store-local --noWP 1 # Here the noWP is the desired count of workernodes
Options
--feature-flags string Experimental Features: Supported values with comma separated: [autoscale]
-h, --help help for del-nodes
-n, --name string Cluster Name (default "demo")
--noWP int Number of WorkerPlane Nodes (default -1)
-r, --region string Region
-s, --storage string storage provider
-v, --verbose int for verbose output
-y, --yes approval to avoid showMsg (default true)
SEE ALSO
- ksctl delete-cluster ha-azure - Use to delete a self-managed Highly Available cluster on Azure
Auto generated by spf13/cobra on 2-Dec-2024
4.23 - ksctl_delete-cluster_ha-civo
ksctl delete-cluster ha-civo
Use to delete a self-managed Highly Available cluster on Civo
Synopsis
Ksctl ascii [logo]
ksctl delete-cluster ha-civo [flags]
Examples
ksctl delete ha-civo --name demo --region LON1 --storage store-local
Options
--feature-flags string Experimental Features: Supported values with comma separated: [autoscale]
-h, --help help for ha-civo
-n, --name string Cluster Name (default "demo")
-r, --region string Region
-s, --storage string storage provider
-v, --verbose int for verbose output
-y, --yes approval to avoid showMsg (default true)
SEE ALSO
- ksctl delete-cluster - Use to delete a cluster
- ksctl delete-cluster ha-civo del-nodes - Use to remove worker nodes in self-managed Highly-Available cluster on Civo
Auto generated by spf13/cobra on 2-Dec-2024
4.24 - ksctl_delete-cluster_ha-civo_del-nodes
ksctl delete-cluster ha-civo del-nodes
Use to remove worker nodes in self-managed Highly-Available cluster on Civo
Synopsis
It is used to remove worker nodes from the cluster with the given name.
ksctl delete-cluster ha-civo del-nodes [flags]
Examples
ksctl delete ha-civo del-nodes -n demo -r LON1 -s store-local --noWP 1 # Here the noWP is the desired count of workernodes
Options
--feature-flags string Experimental Features: Supported values with comma separated: [autoscale]
-h, --help help for del-nodes
-n, --name string Cluster Name (default "demo")
--noWP int Number of WorkerPlane Nodes (default -1)
-r, --region string Region
-s, --storage string storage provider
-v, --verbose int for verbose output
-y, --yes approval to avoid showMsg (default true)
SEE ALSO
- ksctl delete-cluster ha-civo - Use to delete a self-managed Highly Available cluster on Civo
Auto generated by spf13/cobra on 2-Dec-2024
4.25 - ksctl_delete-cluster_local
ksctl delete-cluster local
Use to delete a kind cluster
Synopsis
Ksctl ascii [logo]
ksctl delete-cluster local [flags]
Examples
ksctl delete local --name demo --storage store-local
Options
--feature-flags string Experimental Features: Supported values with comma separated: [autoscale]
-h, --help help for local
-n, --name string Cluster Name (default "demo")
-s, --storage string storage provider
-v, --verbose int for verbose output
-y, --yes approval to avoid showMsg (default true)
SEE ALSO
- ksctl delete-cluster - Use to delete a cluster
Auto generated by spf13/cobra on 2-Dec-2024
4.26 - ksctl_get-clusters
ksctl get-clusters
Use to get clusters
Synopsis
Ksctl ascii [logo]
ksctl get-clusters [flags]
Examples
ksctl get --provider all --storage store-local
Options
--feature-flags string Experimental Features: Supported values with comma separated: [autoscale]
-h, --help help for get-clusters
-p, --provider string Provider
-s, --storage string storage provider
-v, --verbose int for verbose output
SEE ALSO
- ksctl - CLI tool for managing multiple K8s clusters
Auto generated by spf13/cobra on 2-Dec-2024
4.27 - ksctl_info-cluster
ksctl info-cluster
Use to get cluster info
Synopsis
Ksctl ascii [logo]
ksctl info-cluster [flags]
Examples
ksctl info --provider azure --name demo --region eastus --storage store-local
ksctl info -p ha-azure -n ha-demo-kubeadm -r eastus -s store-local --verbose -1
Options
--feature-flags string Experimental Features: Supported values with comma separated: [autoscale]
-h, --help help for info-cluster
-n, --name string Cluster Name (default "demo")
-p, --provider string Provider
-r, --region string Region
-s, --storage string storage provider
-v, --verbose int for verbose output
SEE ALSO
- ksctl - CLI tool for managing multiple K8s clusters
Auto generated by spf13/cobra on 2-Dec-2024
4.28 - ksctl_self-update
ksctl self-update
update the ksctl cli
Synopsis
Ksctl ascii [logo]
ksctl self-update [flags]
Options
-h, --help help for self-update
-s, --storage string storage provider
-v, --verbose for verbose output (default true)
SEE ALSO
- ksctl - CLI tool for managing multiple K8s clusters
Auto generated by spf13/cobra on 2-Dec-2024
4.29 - ksctl_version
ksctl version
Print the version number of ksctl
ksctl version [flags]
Options
-h, --help help for version
SEE ALSO
- ksctl - CLI tool for managing multiple K8s clusters
Auto generated by spf13/cobra on 2-Dec-2024
5 - Contribution Guidelines
You can run almost all the tests locally; only the e2e tests require you to provide cloud credentials.
This section provides generic tasks for new and existing contributors.
Types of changes
There are many ways to contribute to the ksctl project. Here are a few examples:
- New changes to docs: You can contribute by writing new documentation, fixing typos, or improving the clarity of existing documentation.
- New features: You can contribute by proposing new features, implementing new features, or fixing bugs.
- Cloud support: You can contribute by adding support for new cloud providers.
- Kubernetes distribution support: You can contribute by adding support for new Kubernetes distributions.
Phases a change / feature goes through
- Raise an issue regarding it (used for prioritizing)
- Describe what changes it demands
- If all goes well, you will be assigned
- If it's about adding cloud support, use the CloudFactory and separate the logic of VM, firewall, etc. into their respective files, with a helper file for the behind-the-scenes logic for ease of use
- If it's about adding distribution support, check its compatibility with the different cloud providers' VM configs and firewall rules
Formatting for PR & Issue subject line
Subject / Title
# Related to enhancement
enhancement: <Title>
# Related to feature
feat: <Title>
# Related to Bug fix or other types of fixes
fix: <Title>
# Related to update
update: <Title>
Body
Follow the PR or Issue template; add all the significant changes to the PR description.
Commit messages
Mention a detailed description in the git commits: what? why? how?
Each commit must be signed off and should follow the conventional commit guidelines.
Conventional Commits
The commit message should be structured as follows:
<type>(optional scope): <description>
[optional body]
[optional footer(s)]
For more detailed information on conventional commits, you can refer to the official Conventional Commits specification.
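For instance, a commit message following this structure might look like the one below (the scope and wording are purely illustrative):
feat(cloud): add region validation for cluster creation

Explain what the change does, why it was needed, and how it works.

Signed-off-by: Your Name <you@example.com>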
Sign-off
Each commit must be signed off. You can do this by adding a sign-off line to your commit message; when committing changes in your local branch, add the -s flag to the git commit command:
$ git commit -s -m "YOUR_COMMIT_MESSAGE"
# Adds a Signed-off-by line (use -S for a cryptographic signature, covered below)
You can find more comprehensive details on how to sign off git commits by referring to the GitHub section on signing commits.
Verification of Commit Signatures
You have the option to sign commits and tags locally, which adds a layer of assurance regarding the origin of your changes. GitHub designates commits or tags as either “Verified” or “Partially verified” if they possess a GPG, SSH, or S/MIME signature that is cryptographically valid.
GPG Commit Signature Verification
To sign commits using GPG and ensure their verification on GitHub, adhere to these steps:
- Check for existing GPG keys.
- Generate a new GPG key.
- Add the GPG key to your GitHub account.
- Inform Git about your signing key.
- Proceed to sign commits.
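A minimal command-line sketch of those steps, assuming GPG is installed (the key ID is a placeholder):
gpg --full-generate-key                        # generate a new GPG key
gpg --list-secret-keys --keyid-format=long     # find the long key ID
git config --global user.signingkey <KEY_ID>   # inform Git about your signing key
git commit -S -m "YOUR_COMMIT_MESSAGE"         # create a GPG-signed commit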
SSH Commit Signature Verification
To sign commits using SSH and ensure their verification on GitHub, follow these steps:
- Check for existing SSH keys.
- Generate a new SSH key.
- Add an SSH signing key to your GitHub account.
- Inform Git about your signing key.
- Proceed to sign commits.
S/MIME Commit Signature Verification
To sign commits using S/MIME and ensure their verification on GitHub, follow these steps:
- Inform Git about your signing key.
- Proceed to sign commits.
For more detailed instructions, refer to GitHub’s documentation on commit signature verification
Development
First you have to fork the ksctl repository (fork).
cd <path> # go to the directory where you want to clone ksctl
mkdir <directory name> # create a directory
cd <directory name> # go inside the directory
git clone https://github.com/${YOUR_GITHUB_USERNAME}/ksctl.git # clone your forked repository
cd ksctl # go inside the ksctl directory
git remote add upstream https://github.com/ksctl/ksctl.git # set upstream
git remote set-url --push upstream no_push # disable pushes to upstream
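A typical flow for keeping your fork in sync and starting a change, assuming the default branch is main (the feature branch name is illustrative):
git fetch upstream                # fetch the latest upstream changes
git rebase upstream/main          # update your local branch
git checkout -b feat/my-change    # create a branch for your work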
Trying out code changes
Before submitting a code change, it is important to test your changes thoroughly. You can do this by running the unit tests and integration tests.
Submitting changes
Once you have tested your changes, you can submit them to the ksctl project by creating a pull request. Make sure you use the provided PR template
Getting help
If you need help contributing to the ksctl project, you can ask for help on the Kubesimplify Discord server (ksctl-cli channel), or raise an issue or discussion.
Thank you for contributing!
We appreciate your contributions to the ksctl project!
Some of our contributors: ksctl contributors
5.1 - Contribution Guidelines for CLI
Repository: ksctl/cli
How to Build from source
Linux
make install_linux # for linux
Mac OS
make install_macos # for macos
Windows
.\builder.ps1 # for windows
5.2 - Contribution Guidelines for Core
Repository: ksctl/ksctl
Project structure
pkg/
It contains the importable functionality of ksctl
- Controllers (this will be the only way to interact with the ksctl core)
- Utility functions with consts and errors
- Logger
- Types
internal/
It contains the cloudProvider, K8sDistro, StorageDriver specific implementations
test/
It contains the e2e tests, the e2e test helper code, and also the mock test files
Run all mock and unit tests plus lints:
make test
Run all unit tests:
make unit_test_all
Run all mock tests:
make mock_all
For e2e tests on local, set the required tokens as environment variables, then:
cd test/e2e
# then the syntax for running
go run . -op create -file azure/create.json
# for available operations, refer to test/e2e/consts.go
5.3 - Contribution Guidelines for Docs
Repository: ksctl/docs
How to Build from source
# Prerequisites
npm install -D postcss
npm install -D postcss-cli
npm install -D autoprefixer
npm install hugo-extended
After installing the dependencies, run the dev server:
hugo serve
6 - Concepts
This section will help you to learn about the underlying system of Ksctl. It will help you to obtain a deeper understanding of how Ksctl works.
Sequence diagrams for 2 major operations
Create Cloud-Managed Clusters
sequenceDiagram
    participant cm as Manager Cluster Managed
    participant cc as Cloud Controller
    participant kc as Ksctl Kubernetes Controller
    cm->>cc: transfers specs from user or machine
    cc->>cc: to create the cloud infra (network, subnet, firewall, cluster)
    cc->>cm: 'kubeconfig' and other cluster access to the state
    cm->>kc: shares 'kubeconfig'
    kc->>kc: installs kubectl agent, stateimporter and controllers
    kc->>cm: status of creation
Create Self-Managed HA clusters
sequenceDiagram
    participant csm as Manager Cluster Self-Managed
    participant cc as Cloud Controller
    participant bc as Bootstrap Controller
    participant kc as Ksctl Kubernetes Controller
    csm->>cc: transfers specs from user or machine
    cc->>cc: to create the cloud infra (network, subnet, firewall, vms)
    cc->>csm: return state to be used by BootstrapController
    csm->>bc: transfers infra state like ssh key, pub IPs, etc
    bc->>bc: bootstrap the infra by either (k3s or kubeadm)
    bc->>csm: 'kubeconfig' and other cluster access to the state
    csm->>kc: shares 'kubeconfig'
    kc->>kc: installs kubectl agent, stateimporter and controllers
    kc->>csm: status of creation
6.1 - Cloud Controller
It is responsible for controlling the sequence of tasks for every cloud provider to be executed
6.2 - Core functionalities
Basic cluster operations
Create
- HA self-managed cluster (VMs are provisioned, then SSHed into and configured, much like Ansible does)
- Managed (cloud provider creates the clusters and we get the kubeconfig in return)
Delete
- HA self managed cluster
- Managed cluster
Scaleup
- Only for HA clusters, as the user can manually increase the number of worker nodes
- Example: if worker node 1 exists, it will create 2, then 3, …
Scaledown
- Only for HA clusters, as the user can manually decrease the number of worker nodes
- Example: if worker nodes 1 and 2 exist, it deletes from last to first, i.e. 2 then 1
Switch
- It uses the user's request to fetch the kubeconfig of the specified cluster and saves it to a specific location, namely ~/.ksctl/kubeconfig
Get
- For both HA and managed clusters, it searches folders in a specific directory to find all clusters created for a specific provider
Example: for a get request on azure, it scans the directory .ksctl/state/azure/ha (and the managed equivalent) to collect all the folder names
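A sketch of how that state layout might look on disk; the cluster folder names here are illustrative, only the paths mentioned above come from the docs:
~/.ksctl/
├── kubeconfig                  # written by the Switch operation
└── state/
    └── azure/
        ├── ha/
        │   └── demo-cluster/   # one folder per HA cluster (illustrative name)
        └── managed/
            └── demo-cluster/   # one folder per managed cluster (illustrative name)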
6.3 - Core Manager
It is responsible for managing client requests and calls the corresponding controller
Types
ManagerClusterKsctl
Role: Perform ksctl getCluster, switchCluster
ManagerClusterKubernetes
Role: Perform ksctl addApplicationAndCrds
Currently intended for machine-to-machine use, not the ksctl CLI
ManagerClusterManaged
Role: Perform ksctl createCluster, deleteCluster
ManagerClusterSelfManaged
Role: Perform ksctl createCluster, deleteCluster, addWorkerNodes, delWorkerNodes
6.4 - Distribution Controller
It is responsible for controlling the execution sequence for configuring Cloud Resources with respect to the chosen Kubernetes distribution
7 - Contributors
Sponsors
Name | How |
---|---|
Azure | Azure Open Source Program Office |
Civo | Provided us with credits to run and test our project and were the first cloud provider we supported. |
Communities
Name | Social Mentions |
---|---|
Kubernetes Architect | |
WeMakeDevs HacktoberFest | Mentioned our project in their Hacktoberfest event. YouTube Link |
Kubesimplify Community | We started from here and got a lot of support. Some of the mentions: YouTube Link, Tweet, etc. |
8 - FAQ
General
What is ksctl?
Ksctl is a lightweight, easy-to-use tool that simplifies the process of managing Kubernetes clusters. It provides a unified interface for common cluster operations like create, delete, and scale up and down, and is designed to be simple, efficient, and developer-friendly.
What can I do with ksctl?
With ksctl, you can deploy Kubernetes clusters across any cloud provider, switch between providers seamlessly, and choose between managed and self-managed HA clusters. You can deploy clusters with a single command, without any complex configuration, and manage them with a unified interface that eliminates the need for provider-specific CLIs.
How does ksctl simplify cluster management?
Ksctl simplifies cluster management by providing a streamlined interface for common cluster operations like create, delete, and scale up and down. It eliminates the need for complex configuration and provider-specific CLIs, and provides a consistent experience across environments. With ksctl, developers can focus on building great applications without getting bogged down by the complexities of cluster management.
Who is ksctl for?
Ksctl is designed for developers, DevOps engineers, and anyone who needs to manage Kubernetes clusters. It is ideal for teams of all skill levels, from beginners to experts, and provides a simple, efficient, and developer-friendly way to deploy and manage clusters.
How does ksctl differ from other cluster management tools?
Ksctl is a lightweight, easy-to-use tool that simplifies the process of managing Kubernetes clusters. It provides a unified interface for common cluster operations like create, delete, and scale up and down, and is designed to be simple, efficient, and developer-friendly. Ksctl is not a full-fledged platform like Rancher, but rather a simple CLI tool that provides a streamlined interface for common cluster operations.
Comparisons
Ksctl vs Cluster API
- Simplicity vs Complexity: Cluster API uses a sophisticated set of CRDs (Custom Resource Definitions) to manage machines, machine sets, and deployments. In contrast, Ksctl adopts a minimalist approach, focusing on reducing complexity for developers and operators.
- Target Audience: Ksctl caters to users seeking a lightweight, user-friendly tool for quick cluster management tasks, particularly in development and testing environments. Cluster API is designed for production-grade use cases, emphasizing flexibility and integration with Kubernetes’ declarative model.
- Dependencies: Ksctl is a standalone CLI tool that does not require a running Kubernetes cluster, making it easy to set up and run anywhere. On the other hand, Cluster API requires a pre-existing Kubernetes cluster to operate.
- Feature Focus: Ksctl emphasizes speed and simplicity in managing cluster lifecycle operations (create, delete, scale). Cluster API provides deeper control and automation features suitable for enterprises managing complex Kubernetes ecosystems.
What is the difference between Ksctl and k3sup?
- Scope: Ksctl is a comprehensive tool for managing Kubernetes clusters across multiple environments, from cloud-managed Kubernetes flavours to k3s and kubeadm. K3sup, on the other hand, focuses primarily on bootstrapping lightweight k3s clusters.
- Features: Ksctl handles infrastructure provisioning, cluster scaling, and cloud-agnostic lifecycle management, whereas k3sup is limited to installing k3s clusters without managing the underlying infrastructure.
- Cloud Support: Ksctl provides a unified interface for managing clusters across different providers, making it suitable for multi-cloud strategies. K3sup is more limited and designed for standalone setups.
How does Ksctl compare to Rancher?
- Tool vs Platform: Ksctl is a streamlined CLI tool for cluster management. Rancher, by contrast, is a feature-rich platform offering cluster governance, monitoring, access control, and application management.
- Use Case: Ksctl is lightweight and ideal for developers needing quick, uncomplicated cluster management. Rancher is tailored for enterprise environments where centralized management and control of multiple clusters are essential.
- Operational Scope: Ksctl focuses on basic lifecycle operations (create, delete, scale). Rancher includes features like Helm chart deployment, RBAC integration, and advanced workload management.
What is the difference between Ksctl and k3d, Kind, or Minikube?
- Environment Scope: Ksctl is designed for both local and cloud-based Kubernetes cluster management. Tools like k3d, Kind, and Minikube are primarily for local development and testing purposes.
- Cluster Management: Ksctl can provision, scale, and delete clusters in cloud environments, whereas k3d, Kind, and Minikube focus on providing lightweight clusters for experimentation and local development.
- Infrastructure Management: Ksctl integrates with infrastructure provisioning, while the others rely on pre-existing local environments (e.g., Docker for k3d and Kind, or virtual machines for Minikube).
How does Ksctl compare to eksctl?
- Cloud Support: Ksctl is cloud-agnostic and supports multiple providers, making it suitable for multi-cloud setups. Eksctl, on the other hand, is tightly coupled with AWS and designed exclusively for managing EKS clusters.
- Features: Ksctl provides an all-in-one tool for provisioning infrastructure, managing the cluster lifecycle, and scaling across different environments. Eksctl is focused on streamlining EKS setup and optimizing AWS integrations like IAM, VPCs, and Load Balancers.
- Target Audience: Ksctl appeals to users seeking a flexible, multi-cloud solution. Eksctl is ideal for AWS-centric teams that require deep integration with AWS services.
9 - Features
Our Vision
Transform your Kubernetes experience with a tool that puts simplicity and efficiency first. Ksctl eliminates the complexity of cluster management, allowing developers to focus on what matters most: building great applications.
Key Features
Universal Cloud Support
- Deploy clusters across any cloud provider
- Seamless switching between providers
- Support for both managed and self-managed clusters
- Freedom to choose your bootstrap provider (K3s or Kubeadm)
Zero-to-Cluster Simplicity
- Single command cluster deployment
- No complex configuration required
- Automated setup and initialization
- Instant development environment readiness
- Local file-based or MongoDB storage options
- Single binary deployment, thus lightweight and efficient
Streamlined Management
- Unified interface for all operations
- Eliminates need for provider-specific CLIs
- Consistent experience across environments
- Simplified scaling and upgrades
Developer-Focused Design
- Near-zero learning curve
- Intuitive command structure
- No new configurations to learn
- Perfect for teams of all skill levels
- We have WASM workload support as well; see Refer
Flexible Operation
- Self-managed cluster support
- Cloud provider managed offerings
- Multiple bootstrap provider options
- Seamless environment transitions
Technical Benefits
- Infrastructure Agnostic: Deploy anywhere, manage consistently
- Rapid Deployment: Bypass complex setup steps and day 0 tasks
- Future-Ready: Upcoming support for day 1 operations and Wasm
- Community-Driven: Active development and continuous improvements
10 - Ksctl Components
Components
- ksctl agent
- ksctl stateimporter
- ksctl application controller
Diagram of how it is deployed
flowchart TD
    Base(Ksctl Infra and Bootstrap) -->|Cluster is created| KC(Ksctl controller)
    KC -->|Creates| Storage{storageProvider=='local'}
    Storage -->|Yes| KSI(Ksctl Storage Importer)
    Storage -->|No| KA(Ksctl Agent)
    KSI --> KA
    KA -->|Health| D(Deploy other ksctl controllers)
10.1 - Ksctl Agent
It is ksctl's solution to infrastructure management and also Kubernetes management, especially inside the Kubernetes cluster.
It is a gRPC server running as a deployment, and a fleet of controllers call it to perform certain operations, for instance application installation via stack.application.ksctl.com/v1alpha, etc.
It will be installed on all Kubernetes clusters created via ksctl from >= v1.2.0.
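Since the agent runs as a regular deployment, you can inspect it with standard kubectl commands once the cluster is up; a quick sketch (the exact namespace and resource names may differ):
kubectl get deployments -A | grep ksctl   # locate the ksctl agent deployment
kubectl get pods -A | grep ksctl          # check that the agent pod is running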
10.2 - Ksctl Application Controller
It helps in deploying applications using a CRD, managing installation, upgrades, downgrades, and uninstallation from one version to another, and provides a single source of truth for which applications are installed
Types
Stack
For defining heterogeneous components we came up with a stack, which contains M number of components, i.e. different applications with their versions
Info
This is currently available on all clusters created by [email protected]
Note
It has a dependency on ksctl agent
About Lifecycle of application stack
Once you kubectl apply the stack, it will start deploying the applications in the stack. If you want to upgrade the applications in the stack, you can edit the stack, change the version of the application, and apply the stack again; it will uninstall the previous version and install the new version. Basically it performs a reinstall of the stack, which might cause downtime.
Supported Apps and CNI
Name | Type | Category | Ksctl_Name | More Info |
---|---|---|---|---|
Argo-CD | standard | CI/CD | standard-argocd | Link |
Argo-Rollouts | standard | CI/CD | standard-argorollouts | Link |
Istio | standard | Service Mesh | standard-istio | Link |
Cilium | standard | - | cilium | Link |
Flannel | standard | - | flannel | Link |
Kube-Prometheus | standard | Monitoring | standard-kubeprometheus | Link |
SpinKube | production | Wasm | production-spinkube | Link |
WasmEdge and Wasmtime | production | Wasm | production-kwasm | Link |
Note on wasm category apps
Only one of the apps under the category wasm can be installed at a time, so you might need to uninstall one to get another running.
Also, the current implementation of the wasm category apps annotates all the nodes with kwasm set to true.
Components in Stack
All the stacks are a collection of components, so when you are overriding the stack values you need to tell which component it belongs to and then specify the value in a map[string]any format.
Argo-CD
How to use it (Basic Usage)
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
  name: argocd
spec:
  stacks:
    - stackId: standard-argocd
      appType: app
Overrides available
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
  name: argocd
spec:
  stacks:
    - stackId: standard-argocd
      appType: app
      overrides:
        argocd:
          version: <string> # version of the argocd
          noUI: <bool> # to disable the UI
          namespace: <string> # namespace to install argocd
          namespaceInstall: <bool> # to install namespace specific argocd
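Putting the lifecycle described earlier into commands (the manifest file name is illustrative):
kubectl apply -f argocd-stack.yaml   # initial install of the stack
# to upgrade: edit overrides.argocd.version in the manifest, then re-apply;
# this reinstalls the stack and might cause downtime
kubectl apply -f argocd-stack.yaml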
Argo-Rollouts
How to use it (Basic Usage)
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
  name: argorollouts
spec:
  stacks:
    - stackId: standard-argorollouts
      appType: app
Overrides available
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
name: argorollouts
spec:
stacks:
- stackId: standard-argorollouts
appType: app
overrides:
argorollouts:
version: <string> # version of the argorollouts
      namespace: <string> # namespace to install argo-rollouts
      namespaceInstall: <bool> # to install namespace specific argo-rollouts
Istio
How to use it (Basic Usage)
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
name: istio
spec:
stacks:
- stackId: standard-istio
appType: app
Overrides available
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
name: istio
spec:
stacks:
- stackId: standard-istio
appType: app
overrides:
istio:
version: <string> # version of the istio
helmBaseChartOverridings: <map[string]any> # helm chart overridings, istio/base
helmIstiodChartOverridings: <map[string]any> # helm chart overridings, istio/istiod
Cilium
Currently we cannot install Cilium via the ksctl CRD: a CNI needs to be installed while the cluster is being configured, otherwise it causes network issues.
Still, Cilium can be installed at cluster creation, and the only configuration available is version
; we are working on how we can allow users to specify the overridings at cluster creation
(we may use a file spec instead of a command parameter). Until that is done, here is how it works.
How to use it (Basic Usage)
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
name: cilium
spec:
stacks:
- stackId: cilium
appType: app
Overrides available
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
name: cilium
spec:
stacks:
- stackId: cilium
appType: app
overrides:
cilium:
version: <string> # version of the cilium
ciliumChartOverridings: <map[string]any> # helm chart overridings, cilium
Flannel
Currently we cannot install Flannel via the ksctl CRD: a CNI needs to be installed while the cluster is being configured, otherwise it causes network issues.
Still, Flannel can be installed at cluster creation, and the only configuration available is version
; we are working on how we can allow users to specify the overridings at cluster creation
(we may use a file spec instead of a command parameter).
How to use it (Basic Usage)
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
name: flannel
spec:
stacks:
- stackId: flannel
appType: cni
Overrides available
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
name: flannel
spec:
stacks:
- stackId: flannel
appType: cni
overrides:
flannel:
version: <string> # version of the flannel
Kube-Prometheus
How to use it (Basic Usage)
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
name: monitoring
spec:
stacks:
- stackId: standard-kubeprometheus
appType: app
Overrides available
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
name: monitoring
spec:
stacks:
- stackId: standard-kubeprometheus
appType: app
overrides:
kube-prometheus:
version: <string> # version of the kube-prometheus
helmKubePromChartOverridings: <map[string]any> # helm chart overridings, kube-prometheus
SpinKube
How to use it (Basic Usage)
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
name: wasm-spinkube
spec:
stacks:
- stackId: production-spinkube
appType: app
Demo app
kubectl apply -f https://raw.githubusercontent.com/spinkube/spin-operator/main/config/samples/simple.yaml
kubectl port-forward svc/simple-spinapp 8083:80
curl localhost:8083/hello
Overrides available
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
name: wasm-spinkube
spec:
stacks:
- stackId: production-spinkube
appType: app
overrides:
  spinkube-operator:
    version: <string> # version (same across spinkube-operator, shim-executor, runtime-class, shim-executor-crd)
    helmOperatorChartOverridings: <map[string]any> # helm chart overridings, spinkube-operator
  spinkube-operator-shim-executor:
    version: <string> # version (same across spinkube-operator, shim-executor, runtime-class, shim-executor-crd)
  spinkube-operator-runtime-class:
    version: <string> # version (same across spinkube-operator, shim-executor, runtime-class, shim-executor-crd)
  spinkube-operator-crd:
    version: <string> # version (same across spinkube-operator, shim-executor, runtime-class, shim-executor-crd)
cert-manager:
version: <string>
certmanagerChartOverridings: <map[string]any> # helm chart overridings, cert-manager
kwasm-operator:
version: <string>
kwasmOperatorChartOverridings: <map[string]any> # helm chart overridings, kwasm/kwasm-operator
Kwasm
How to use it (Basic Usage)
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
name: wasm-kwasm
spec:
stacks:
- stackId: production-kwasm
appType: app
Demo app (wasmedge)
---
apiVersion: v1
kind: Pod
metadata:
name: "myapp"
namespace: default
labels:
app: nice
spec:
runtimeClassName: wasmedge
containers:
- name: myapp
image: "docker.io/cr7258/wasm-demo-app:v1"
ports:
- containerPort: 8080
name: http
restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
name: nice
spec:
selector:
app: nice
type: ClusterIP
ports:
- name: nice
protocol: TCP
port: 8080
targetPort: 8080
Demo app (wasmtime)
apiVersion: batch/v1
kind: Job
metadata:
name: nice
namespace: default
labels:
app: nice
spec:
template:
metadata:
name: nice
labels:
app: nice
spec:
runtimeClassName: wasmtime
containers:
- name: nice
image: "meteatamel/hello-wasm:0.1"
restartPolicy: OnFailure
#### For wasmedge
# once up and running
kubectl port-forward svc/nice 8080:8080
# then you can curl the service
curl localhost:8080
#### For wasmtime
# just check the logs
Overrides available
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
name: wasm-wasmedge
spec:
stacks:
- stackId: production-kwasm
appType: app
overrides:
kwasm-operator:
version: <string>
kwasmOperatorChartOverridings: <map[string]any> # helm chart overridings, kwasm/kwasm-operator
Example usage
Let's deploy argocd@v2.9.12
and kube-prometheus@55.0.0
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
name: monitoring-plus-gitops
spec:
components:
- appName: standard-argocd
appType: app
version: v2.9.12
- appName: standard-kubeprometheus
appType: app
version: "55.0.0"
You can see that once it's deployed, it fetches and deploys them.
Let's try to upgrade them to their latest versions:
kubectl edit stack monitoring-plus-gitops
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
name: monitoring-plus-gitops
spec:
components:
- appName: standard-argocd
appType: app
version: latest
- appName: standard-kubeprometheus
appType: app
version: latest
Once edited, it uninstalls the previous install and reinstalls the latest deployments.
10.3 - Ksctl State-Importer
It is a helper deployment to transfer state information from one storage option to another.
It is used to transfer data from the ~/.ksctl
location (provided the cluster was created with storageProvider: store-local
).
It utilizes these 2 methods:
- Export: StorageFactory interface
- Import: StorageFactory interface
So before the ksctl agent is deployed, we first create this pod, which in turn runs an HTTP server configured with storageProvider: store-kubernetes
and uses the storage.Import()
method.
Once we get a 200 OK response from the HTTP server, we remove the pod and move on to the ksctl agent deployment, so that the agent can use the state files present in configmaps and secrets.
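Conceptually, the hand-off looks like the following Go sketch. The trimmed interface and payload type are illustrative assumptions; only the Export/Import method names and the overall flow come from this page.

package importer

// StateExportImport stands in for the exported-state payload (illustrative only).
type StateExportImport struct {
	Clusters    []map[string]any
	Credentials []map[string]any
}

// StorageFactory is trimmed to the two methods the importer relies on (illustrative only).
type StorageFactory interface {
	Export() (*StateExportImport, error)
	Import(*StateExportImport) error
}

// Migrate copies all state from one storage backend to another,
// e.g. store-local -> store-kubernetes, which is what the state-importer does.
func Migrate(src, dst StorageFactory) error {
	state, err := src.Export() // read everything the source backend holds
	if err != nil {
		return err
	}
	return dst.Import(state) // write it into the destination backend
}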
Warning
If the storageType is external (mongodb), we don't need this to happen; instead we create a Kubernetes secret with the external storage solution's environment variables set, and we also need to customize the ksctl agent deployment.
11 - Kubernetes Distributions
K3s and Kubeadm only work for HA self-managed clusters
11.1 - K3s
K3s for HA Cluster on supported provider
K3s is used for self-managed clusters. It's a lightweight Kubernetes distribution. We are using it as follows:
- controlplane (k3s server)
- workerplane (k3s agent)
- datastore (etcd members)
Info
Here the default CNI is Flannel.
11.2 - Kubeadm
Kubeadm for HA Cluster on supported provider
Kubeadm support is added with etcd as the datastore.
Info
Here the default CNI is Flannel.
12 - Maintainers
Maintainers
Name | Role | Github | Discord
---|---|---|---
Dipankar | Creator & Maintainer | Github | dipankardas
Praful | Maintainer | Github | praful_
Saiyam Pathak | Creator & Architect | Github | saiyam
13 - Roadmap
Current Status on Supported Providers and Next Features
Supported Providers
flowchart LR;
    classDef green color:#022e1f,fill:#00f500;
    classDef red color:#022e1f,fill:#f11111;
    classDef white color:#022e1f,fill:#fff;
    classDef black color:#fff,fill:#000;
    classDef blue color:#fff,fill:#00f;
    XX[ksctl]--CloudFactory-->web{Cloud Providers};
    XX[ksctl]--DistroFactory-->web2{Distributions};
    XX[ksctl]--StorageFactory-->web3{State Storage};
    web--Civo-->civo{Types};
    civo:::green--managed-->civom[Create & Delete]:::green;
    civo--HA-->civoha[Create & Delete]:::green;
    web--Local-Kind-->local{Types};
    local:::green--managed-->localm[Create & Delete]:::green;
    local--HA-->localha[Create & Delete]:::black;
    web--AWS-->aws{Types};
    aws:::green--managed-->awsm[Create & Delete]:::green;
    aws--HA-->awsha[Create & Delete]:::green;
    web--Azure-->az{Types};
    az:::green--managed-->azsm[Create & Delete]:::green;
    az--HA-->azha[Create & Delete]:::green;
    web2--K3S-->k3s{Types};
    k3s:::green--HA-->k3ha[Create & Delete]:::green;
    web2--Kubeadm-->kubeadm{Types};
    kubeadm:::green--HA-->kubeadmha[Create & Delete]:::green;
    web3--Local-Store-->slocal{Local}:::green;
    web3--Remote-Store-->rlocal{Remote}:::green;
    rlocal--Provider-->mongo[MongoDB]:::green;
Next Features
All the below features will be moved to the Project Board and will be tracked there.
- Talos as the next bootstrap provider
- Green software practices that can help save energy and improve efficiency
- First-class WASM support
- ML features, unikernels, and better ML workload scalability
- Production stack for monitoring and security, through to application-specific integrations like Vault, Kafka, etc.
- Health checks of various k8s clusters
- Role-Based Access Control for any cluster
- Ability to import any existing cluster, respecting its existing state rather than overwriting it with new state from ksctl, and managing only the resources the tool has access to
- Add initial production-ready support for cert-manager + ingress controller (nginx) + Gateway API
- Add initial production-ready support for monitoring (Prometheus + Grafana), tracing (Jaeger), and OpenTelemetry
- Add initial production-ready support for networking (Cilium)
- Add initial production-ready support for service mesh (Istio)
- Add support for Kubernetes migration, like moving from one cloud provider to another
- Add support for Kubernetes backup
- OpenTelemetry support will lead to better observability by combining logs, metrics, and traces in one place, enabling alerting and pattern-based suggestions derived from errors
14 - Search Results
15 - Storage
Storage providers
15.1 - External Storage
External MongoDB as a Storage provider
Refer: internal/storage/external/mongodb
Data to store and filtering it performs
- First it gets the cluster data / credentials data based on these filters:
  - cluster_name (for cluster)
  - region (for cluster)
  - cloud_provider (for cluster & credentials)
  - cluster_type (for cluster)
- Also, once the state of the cluster has reached the stable desired state, it marks the IsCompleted flag in the specific cloud_provider struct to indicate it's done
- Make sure the above things are specified before writing to the storage
How to use it
- You need to call the Init function to get the storage; make sure the caller holds it in an interface-type variable
- Before performing any operations, you must call Connect()
- Before using the methods Read(), Write(), and Delete(), make sure you have called Setup()
- ReadCredentials() and WriteCredentials() can be used directly; you just need to specify the cloud provider you want to read or write
- For calling GetOneOrMoreClusters() you simply need to specify the filter
- For calling AlreadyCreated() you just have to specify the func args
- Don't forget to call storage.Kill() when you want to stop the complete execution; it guarantees that it waits until all pending operations on the storage are completed
- For a custom storage directory you need to specify the env var KSCTL_CUSTOM_DIR_ENABLED; the value must be directory names separated by spaces
- Specify the required env vars (a usage sketch follows the hint below)
export MONGODB_URI=""
Hint: mongodb://${username}:${password}@${domain}:${port} or, for MongoDB Atlas, mongodb+srv://${username}:${password}@${domain}
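Putting the steps above together, here is a hedged usage sketch in Go. The trimmed interface and the argument names are illustrative assumptions; only the call order and the method names come from this page.

package storageusage

// Storage is a trimmed, illustrative view of the storage interface described
// above; the real one lives in the ksctl codebase with more methods and
// richer signatures.
type Storage interface {
	Connect() error                                             // e.g. dials MONGODB_URI
	Setup(cloud, region, clusterName, clusterType string) error // scopes Read/Write/Delete
	Write(state []byte) error
	Read() ([]byte, error)
	Kill() error // waits for pending operations to finish
}

// Use demonstrates the documented lifecycle: Connect first, Setup before
// Read/Write/Delete, and Kill once everything is done.
func Use(s Storage) error {
	if err := s.Connect(); err != nil {
		return err
	}
	defer s.Kill() // guarantees pending storage operations complete

	// Example filter values (hypothetical cluster).
	if err := s.Setup("azure", "eastus", "demo", "ha"); err != nil {
		return err
	}
	_, err := s.Read()
	return err
}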
Things to look for
Make sure that when you receive return data from Read(), you copy the value pointed to into your storage variable, not the address itself!
When any credentials are written, they get stored in
- Database: ksctl-{userid}-db
- Collection: credentials
- Document/Record: raw bson data, with the above specified data and filter fields

When any clusterState is written, it gets stored in
- Database: ksctl-{userid}-db
- Collection: {cloud_provider}
- Document/Record: raw bson data, with the above specified data and filter fields
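For intuition, here is how that layout maps to a raw query with the official Go driver (v1). The exact BSON field names are assumptions derived from the filter list above, and the database/collection names follow the layout just described.

package main

import (
	"context"
	"log"
	"os"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx := context.Background()
	client, err := mongo.Connect(ctx, options.Client().ApplyURI(os.Getenv("MONGODB_URI")))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	// Database "ksctl-{userid}-db"; cluster state lives in a per-provider collection.
	coll := client.Database("ksctl-demo-db").Collection("azure")

	// Filter fields mirror the list above (assumed BSON key names).
	var state bson.M
	if err := coll.FindOne(ctx, bson.M{
		"cluster_name":   "demo",
		"region":         "eastus",
		"cloud_provider": "azure",
		"cluster_type":   "ha",
	}).Decode(&state); err != nil {
		log.Fatal(err)
	}
	log.Printf("fetched cluster state with %d fields", len(state))
}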
When you do Switch (aka getKubeconfig), it fetches the kubeconfig from the clusterState described above and stores it at
<some_dir>/.ksctl/kubeconfig
15.2 - Local Storage
Local as a Storage Provider
Refer: internal/storage/local
Data to store and filtering it performs
- First it gets the cluster data / credentials data based on these filters:
  - cluster_name (for cluster)
  - region (for cluster)
  - cloud_provider (for cluster & credentials)
  - cluster_type (for cluster)
- Also, once the state of the cluster has reached the stable desired state, it marks the IsCompleted flag in the specific cloud_provider struct to indicate it's done
- Make sure the above things are specified before writing to the storage
It is stored something like this (it uses almost the same construct as the external storage):
* ClusterInfos => $USER_HOME/.ksctl/state/
    |-- {cloud_provider}
        |-- {cluster_type} aka (ha, managed)
            |-- "{cluster_name} {region}"
                |-- state.json
* CredentialInfo => $USER_HOME/.ksctl/credentials/{cloud_provider}.json
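As a quick illustration of that layout, the Go snippet below builds the state-file path; note the literal space between cluster name and region in the directory name. The helper is hypothetical, not part of ksctl.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// statePath mirrors the directory construct shown above (illustrative helper).
func statePath(cloudProvider, clusterType, clusterName, region string) string {
	home, _ := os.UserHomeDir()
	return filepath.Join(home, ".ksctl", "state",
		cloudProvider, clusterType,
		fmt.Sprintf("%s %s", clusterName, region), // "{cluster_name} {region}"
		"state.json")
}

func main() {
	fmt.Println(statePath("civo", "ha", "demo", "LON1"))
	// e.g. /home/user/.ksctl/state/civo/ha/demo LON1/state.json
}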
How to use it
- You need to call the Init function to get the storage; make sure the caller holds it in an interface-type variable
- Before performing any operations, you must call Connect()
- Before using the methods Read(), Write(), and Delete(), make sure you have called Setup()
- ReadCredentials() and WriteCredentials() can be used directly; you just need to specify the cloud provider you want to read or write
- For calling GetOneOrMoreClusters() you simply need to specify the filter
- For calling AlreadyCreated() you just have to specify the func args
- Don't forget to call storage.Kill() when you want to stop the complete execution; it guarantees that it waits until all pending operations on the storage are completed
- For a custom storage directory you need to specify the env var KSCTL_CUSTOM_DIR_ENABLED; the value must be directory names separated by spaces
- It creates the configuration directories on your behalf

The call lifecycle is the same as in the External Storage usage sketch above.
Things to look for
- Make sure that when you receive return data from Read(), you copy the value pointed to into your storage variable, not the address itself!
- When any credentials are written, they are stored in
<some_dir>/.ksctl/credentials/{cloud_provider}.json
- When any clusterState is written, it gets stored in
<some_dir>/.ksctl/state/{cloud_provider}/{cluster_type}/{cluster_name} {region}/state.json
- When you do Switch (aka getKubeconfig), it fetches the kubeconfig from point 3 above and stores it at
<some_dir>/.ksctl/kubeconfig