Documentation

Status: Technical Preview

This is the main branch documentation

1 - Develop Documentation

develop branch documentation

This is the main branch documentation

1.1 - Architecture

Overall architecture of Ksctl

Architecture diagrams

1.1.1 - Api Components

Learn how the different components communicate with each other via APIs and automation scripts to serve you in the best way possible.

Core Design Components

Design

Overview architecture of ksctl


Managed Cluster creation & deletion


High Available Cluster creation & deletion


1.2 - Getting Started

How to install and uninstall ksctl.

Getting Started Documentation

Installation & Uninstallation Instructions

Ksctl CLI

Let's begin with installation of the tools; there are various methods.

Single command method

curl -sfL https://get.ksctl.com | python3 -   # install

# uninstall
bash <(curl -s https://raw.githubusercontent.com/ksctl/cli/main/scripts/uninstall.sh)
zsh <(curl -s https://raw.githubusercontent.com/ksctl/cli/main/scripts/uninstall.sh)

From Source Code

# Linux
make install_linux

# macOS on M1
make install_macos

# macOS on INTEL
make install_macos_intel

# For uninstalling
make uninstall

1.3 - Cloud Provider

Info about the cloud providers available

This Page includes more info about different cloud providers

1.3.1 - Amazon Web Services

Amazon Web Services

AWS support for HA and Managed Clusters

How these credentials are used by ksctl

  1. Environment Variables
export AWS_ACCESS_KEY_ID=""
export AWS_SECRET_ACCESS_KEY=""
  2. Using command line
ksctl cred

Current Features

Cluster features

Highly Available cluster

Clusters which are managed by the user, not by the cloud provider.

You can choose between k3s and kubeadm as your bootstrap tool.

Custom components being used:

  • Etcd database VM
  • HAProxy loadbalancer VM for controlplane nodes
  • controlplane VMs
  • workerplane VMs

Managed Cluster Elastic Kubernetes Service

We provision two IAM roles (ksctl-*) for each cluster:

  • ksctl-<clustername>-wp-role for the EKS NodePool
  • ksctl-<clustername>-cp-role for the EKS controlplane

We use iam:AssumeRole to assume these roles and create the cluster.

Policies (permissions) for the user

Here are the policies and roles which we are using:

  1. iam-role-full-access (Custom Policy)
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor6",
            "Effect": "Allow",
            "Action": [
                "iam:CreateInstanceProfile",
                "iam:DeleteInstanceProfile",
                "iam:GetRole",
                "iam:GetInstanceProfile",
                "iam:RemoveRoleFromInstanceProfile",
                "iam:CreateRole",
                "iam:DeleteRole",
                "iam:AttachRolePolicy",
                "iam:PutRolePolicy",
                "iam:ListInstanceProfiles",
                "iam:AddRoleToInstanceProfile",
                "iam:ListInstanceProfilesForRole",
                "iam:PassRole",
                "iam:CreateServiceLinkedRole",
                "iam:DetachRolePolicy",
                "iam:DeleteRolePolicy",
                "iam:DeleteServiceLinkedRole",
                "iam:GetRolePolicy",
                "iam:SetSecurityTokenServicePreferences"
            ],
            "Resource": [
                "arn:aws:iam::*:role/ksctl-*",
                "arn:aws:iam::*:instance-profile/*"
            ]
        }
    ]
}
  2. eks-full-access (Custom Policy)
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "eks:ListNodegroups",
                "eks:ListClusters",
                "eks:*"
            ],
            "Resource": "*"
        }
    ]
}
  3. AmazonEC2FullAccess (AWS managed)
  4. IAMReadOnlyAccess (AWS managed)
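
If you manage the IAM user with the AWS CLI, here is a hedged sketch of attaching the two AWS-managed policies above; the user name is a placeholder, and the two custom policies would first be created with aws iam create-policy and attached the same way.

# placeholder user name; run with credentials that can manage IAM
aws iam attach-user-policy --user-name <ksctl-user> \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
aws iam attach-user-policy --user-name <ksctl-user> \
    --policy-arn arn:aws:iam::aws:policy/IAMReadOnlyAccess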

1.3.2 - Azure

Azure Cloud Provider

Azure support for HA and Managed Clusters

Azure Subscription ID

Get the subscription ID from your subscription.


Azure Tenant ID

Azure Dashboard

Azure Dashboard contains all the credentials required


Let's get the tenant ID from the Azure Dashboard.

Azure Client ID

It represents the ID of the app you created.

Azure Client Secret

It represents the secret associated with the app, required in order to use it.

Create an app secret.


Assign Role to your app

Head over to the subscriptions page and click Access Control (IAM). Select Role Assignment, then click Add > Add Role Assignment. Create a new role, and when selecting the identity, specify the name of the app. Here you can customize the role this app has.


How these credentials are used by ksctl

  1. Environment Variables
export AZURE_TENANT_ID=""
export AZURE_SUBSCRIPTION_ID=""
export AZURE_CLIENT_ID=""
export AZURE_CLIENT_SECRET=""
  2. Using command line
ksctl cred
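
If you prefer the Azure CLI, here is a hedged sketch of creating the app registration and secret in one step; the app name, role and subscription ID are placeholders, and the role you grant should match the one assigned above.

# assumes `az login` has been done; name, role and scope are illustrative
az ad sp create-for-rbac --name ksctl-app --role Contributor \
    --scopes "/subscriptions/<subscription-id>"
# map the output: appId -> AZURE_CLIENT_ID, password -> AZURE_CLIENT_SECRET,
# tenant -> AZURE_TENANT_ID; AZURE_SUBSCRIPTION_ID is your subscription id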

Current Features

Cluster features

Highly Available cluster

Clusters which are managed by the user, not by the cloud provider.

You can choose between k3s and kubeadm as your bootstrap tool.

Custom components being used:

  • Etcd database VM
  • HAProxy loadbalancer VM for controlplane nodes
  • controlplane VMs
  • workerplane VMs

Managed Cluster

clusters which are managed by the cloud provider

Other capabilities

Create, Update, Delete, Switch

1.3.3 - Civo

Civo Cloud Provider

Civo support for HA and Managed Clusters

Getting credentials

Under Settings, look for the profile.

Copy the credentials.

How to add credentials to ksctl

  1. Environment Variables
export CIVO_TOKEN=""
  2. Using command line
ksctl cred

Current Features

Cluster features

Highly Available cluster

Clusters which are managed by the user, not by the cloud provider.

You can choose between k3s and kubeadm as your bootstrap tool.

Custom components being used:

  • Etcd database VM
  • HAProxy loadbalancer instance for controlplane nodes
  • controlplane instances
  • workerplane instances

Managed Cluster

clusters which are managed by the cloud provider

Other capabilities

Create, Update, Delete, Switch

1.3.4 - Google Cloud Platform

Google Cloud Platform

GCP support for HA and Managed Clusters

1.3.5 - Local

Local Provider

It creates a cluster on the host machine using kind.

Current features

Currently using kind (Kubernetes in Docker).

1.4 - Reference

Low-level reference docs for ksctl.

1.4.1 - Command Line Reference

Reference documentation for the Ksctl CLI.

CLI Command Reference

Docs are now available in the CLI repo. Here are the links to the documentation files:

  • Markdown format
  • RichText format

1.5 - Contribution Guidelines

How to contribute to the docs

You can do almost all the tests locally, except the e2e tests, which require you to provide cloud credentials.

Generic tasks for new and existing contributors.

Types of changes

There are many ways to contribute to the ksctl project. Here are a few examples:

  • New changes to docs: You can contribute by writing new documentation, fixing typos, or improving the clarity of existing documentation.
  • New features: You can contribute by proposing new features, implementing new features, or fixing bugs.
  • Cloud support: You can contribute by adding support for new cloud providers.
  • Kubernetes distribution support: You can contribute by adding support for new Kubernetes distributions.

Phases a change / feature goes through

  1. Raise an issue regarding it (used for prioritizing).
  2. Discuss what changes it demands.
  3. If all goes well, you will be assigned.
  4. If it's about adding cloud support, use of CloudFactory is needed; separate the logic of VM, firewall, etc. into their respective files, and keep a helper file for the behind-the-scenes logic for ease of use.
  5. If it's about adding distribution support, check its compatibility with the different cloud providers' VM configs and firewall rules.

Formatting for PR & Issue subject line

Subject / Title

# Related to enhancement
enhancement: <Title>

# Related to feature
feat: <Title>

# Related to Bug fix or other types of fixes
fix: <Title>

# Related to update
update: <Title>

Body

Follow the PR or Issue template, and add all the significant changes to the PR description.

Commit messages

Mention a detailed description in the git commits: what? why? how?

Each commit must be signed off and should follow the Conventional Commits guidelines.

Conventional Commits

The commit message should be structured as follows:

<type>(optional scope): <description>

[optional body]

[optional footer(s)]
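
For example, a commit message following this structure (scope and description are illustrative, not taken from the ksctl history):

feat(cloud/aws): add nodepool scaling support

Allows resizing the EKS nodepool from the CLI.

Signed-off-by: Your Name <you@example.com>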

For more detailed information on conventional commits, you can refer to the official Conventional Commits specification.

Sign-off

Each commit must be signed off. You can do this by adding a sign-off line to your commit messages. When committing changes in your local branch, add the -s flag to the git commit command:

$ git commit -s -m "YOUR_COMMIT_MESSAGE"
# Creates a commit with a Signed-off-by trailer

You can find more comprehensive details on how to sign off git commits by referring to the GitHub section on signing commits.

Verification of Commit Signatures

You have the option to sign commits and tags locally, which adds a layer of assurance regarding the origin of your changes. GitHub designates commits or tags as either “Verified” or “Partially verified” if they possess a GPG, SSH, or S/MIME signature that is cryptographically valid.

GPG Commit Signature Verification

To sign commits using GPG and ensure their verification on GitHub, adhere to these steps:

  • Check for existing GPG keys.
  • Generate a new GPG key.
  • Add the GPG key to your GitHub account.
  • Inform Git about your signing key.
  • Proceed to sign commits.

SSH Commit Signature Verification

To sign commits using SSH and ensure their verification on GitHub, follow these steps:

  • Check for existing SSH keys.
  • Generate a new SSH key.
  • Add an SSH signing key to your GitHub account.
  • Inform Git about your signing key.
  • Proceed to sign commits.

S/MIME Commit Signature Verification

To sign commits using S/MIME and ensure their verification on GitHub, follow these steps:

  • Inform Git about your signing key.
  • Proceed to sign commits.

For more detailed instructions, refer to GitHub’s documentation on commit signature verification

Development

First you have to fork the ksctl repository.

cd <path> # to the directory where you want to clone ksctl
mkdir <directory name> # create a directory
cd <directory name> # go inside the directory
git clone https://github.com/${YOUR_GITHUB_USERNAME}/ksctl.git # clone your forked repository
cd ksctl # go inside the ksctl directory
git remote add upstream https://github.com/ksctl/ksctl.git # set upstream
git remote set-url --push upstream no_push # no push to upstream
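
A typical flow after cloning looks like the sketch below; the branch name and commit message are illustrative, and the commit follows the sign-off and Conventional Commits rules described above.

git checkout -b <feature-branch>     # create a working branch
# ...make your changes...
git commit -s -m "feat: <Title>"     # signed-off, conventional-commit style
git push origin <feature-branch>     # push to your fork, then open a PR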

Trying out code changes

Before submitting a code change, it is important to test your changes thoroughly. You can do this by running the unit tests and integration tests.

Submitting changes

Once you have tested your changes, you can submit them to the ksctl project by creating a pull request. Make sure you use the provided PR template.

Getting help

If you need help contributing to the ksctl project, you can ask for help on the kubesimplify Discord server (ksctl-cli channel), or else raise an issue or discussion.

Thank you for contributing!

We appreciate your contributions to the ksctl project!

Some of our contributors: ksctl contributors

1.5.1 - Contribution Guidelines for CLI

How to contribute to the ksctl-cli

Repository: ksctl/cli

How to Build from source

Linux

make install_linux # for linux

Mac OS

make install_macos # for macos

Windows

.\builder.ps1 # for windows

1.5.2 - Contribution Guidelines for Core

How to contribute to the ksctl

Repository: ksctl/ksctl

Run all mock and unit tests, plus lints:

make test

Run all unit tests:

make unit_test_all

Run all mock tests:

make mock_all

For E2E tests on local

Set the required tokens as ENV vars, then:

cd test/e2e

# then the syntax for running
go run . -op create -file azure/create.json

# for operations you can refer file test/e2e/consts.go
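
For example, an Azure create run might look like the sketch below; the env vars are the ones listed in the Azure cloud provider docs above, and their values are placeholders.

# example: Azure e2e create run (fill in real credentials)
export AZURE_TENANT_ID=""
export AZURE_SUBSCRIPTION_ID=""
export AZURE_CLIENT_ID=""
export AZURE_CLIENT_SECRET=""

cd test/e2e
go run . -op create -file azure/create.json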

1.5.3 - Contribution Guidelines for Docs

How to contribute to the ksctl-docs

Repository: ksctl/docs

How to Build from source

# Prerequisites
npm install -D postcss
npm install -D postcss-cli
npm install -D autoprefixer
npm install hugo-extended

Serve the docs locally

hugo serve
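
Once the prerequisites above are installed, the docs can be previewed locally; the sketch below assumes Hugo's default dev-server address.

# run from the docs repository root
hugo serve
# then open http://localhost:1313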

1.6 - Concepts

Concepts around ksctl core

This section will help you to learn about the underlying system of Ksctl. It will help you to obtain a deeper understanding of how Ksctl works.

Sequence diagrams for 2 major operations

Create Cloud-Managed Clusters

sequenceDiagram
    participant cm as Manager Cluster Managed
    participant cc as Cloud Controller
    participant kc as Ksctl Kubernetes Controller
    cm->>cc: transfers specs from user or machine
    cc->>cc: to create the cloud infra (network, subnet, firewall, cluster)
    cc->>cm: 'kubeconfig' and other cluster access to the state
    cm->>kc: shares 'kubeconfig'
    kc->>kc: installs kubectl agent, stateimporter and controllers
    kc->>cm: status of creation

Create Self-Managed HA clusters

sequenceDiagram
    participant csm as Manager Cluster Self-Managed
    participant cc as Cloud Controller
    participant bc as Bootstrap Controller
    participant kc as Ksctl Kubernetes Controller
    csm->>cc: transfers specs from user or machine
    cc->>cc: to create the cloud infra (network, subnet, firewall, vms)
    cc->>csm: return state to be used by BootstrapController
    csm->>bc: transfers infra state like ssh key, pub IPs, etc
    bc->>bc: bootstrap the infra by either (k3s or kubeadm)
    bc->>csm: 'kubeconfig' and other cluster access to the state
    csm->>kc: shares 'kubeconfig'
    kc->>kc: installs kubectl agent, stateimporter and controllers
    kc->>csm: status of creation

1.6.1 - Cloud Controller

The Component of Ksctl responsible for creating and managing clusters for different Cloud platforms.

It is responsible for controlling the sequence of tasks to be executed for every cloud provider.

1.6.2 - Core Manager

The Component of Ksctl responsible for managing Cloud controller and Distribution controller. It has multiple types of managers

It is responsible for managing client requests and calls the corresponding controller

Types

ManagerClusterKsctl

Role: Perform ksctl getCluster, switchCluster

ManagerClusterKubernetes

Role: Perform ksctl addApplicationAndCrds. Currently intended for machine-to-machine use, not for the ksctl CLI.

ManagerClusterManaged

Role: Perform ksctl createCluster, deleteCluster

ManagerClusterSelfManaged

Role: Perform ksctl createCluster, deleteCluster, addWorkerNodes, delWorkerNodes

1.6.3 - Distribution Controller

The Component of Ksctl responsible for selecting the type of Bootstrap solution (Kubeadm or K3s).

It is responsible for controlling the execution sequence for configuring Cloud Resources with respect to the chosen Kubernetes distribution.

1.7 - Ksctl Components

Place of all the documentation for the Operators used specifically for k8s clusters

Components

  • ksctl agent
  • ksctl stateimporter
  • ksctl application controller

Diagram of how it is deployed

flowchart TD
Base(Ksctl Infra and Bootstrap) -->|Cluster is created| KC(Ksctl controller)
KC -->|Creates| Storage{storageProvider=='local'}
Storage -->|Yes| KSI(Ksctl Storage Importer)
Storage -->|No| KA(Ksctl Agent)
KSI -->KA
KA -->|Health| D(Deploy other ksctl controllers)

1.7.1 - Ksctl Agent

Documentation on ksctl agent

It is ksctl's solution to infrastructure management and also Kubernetes management,

especially inside the Kubernetes cluster.

It is a gRPC server running as a deployment, and a fleet of controllers will call it to perform certain operations. For instance, application installation via stack.application.ksctl.com/v1alpha, etc.

It will be installed on all Kubernetes clusters created via ksctl from >= v1.2.0.
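
To check that the agent is present on a ksctl-created cluster, a generic lookup works; the exact namespace is not assumed here.

kubectl get deployments --all-namespaces | grep -i ksctl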

1.7.2 - Ksctl Application Controller

Documentation on ksctl application controller

It helps in deploying applications using a CRD, to help manage installation, upgrades, downgrades, and uninstallation from one version to another, and provides a single source of truth for which applications are installed.

Types

Stack

For defining heterogeneous components we came up with a Stack, which contains M components: different applications with their versions.

Supported Apps and CNI

Name | Type | Category | Ksctl_Name | More Info
Argo-CD | standard | CI/CD | standard-argocd | Link
Argo-Rollouts | standard | CI/CD | standard-argorollouts | Link
Istio | standard | Service Mesh | standard-istio | Link
Cilium | | | standard-cilium | Link
Flannel | | | standard-flannel | Link
Kube-Prometheus | standard | Monitoring | standard-kubeprometheus | Link
SpinKube | production | Wasm | production-spinkube | Link
WasmEdge and Wasmtime | production | Wasm | production-kwasm | Link

Argo-CD

How to use it (Basic Usage)

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: argocd
spec:
	stacks:
	- stackId: standard-argocd
		appType: app

Overrides available

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: argocd
spec:
	stacks:
	- stackId: standard-argocd
		appType: app
		overrides:
			argocd:
				version: <string> # version of the argocd
				noUI: <bool> # to disable the UI
				namespace: <string> # namespace to install argocd
				namespaceInstall: <bool> # to install namespace specific argocd

Argo-Rollouts

How to use it (Basic Usage)

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: argorollouts
spec:
	stacks:
	- stackId: standard-argorollouts
		appType: app

Overrides available

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: argorollouts
spec:
	stacks:
	- stackId: standard-argorollouts
		appType: app
		overrides:
			argorollouts:
				version: <string> # version of the argorollouts
				namespace: <string> # namespace to install argocd
				namespaceInstall: <bool> # to install namespace specific argocd

Istio

How to use it (Basic Usage)

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: istio
spec:
	stacks:
	- stackId: standard-istio
		appType: app

Overrides available

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: istio
spec:
	stacks:
	- stackId: standard-istio
		appType: app
		overrides:
			istio:
				version: <string> # version of the istio
				helmBaseChartOverridings: <map[string]any> # helm chart overridings, istio/base
				helmIstiodChartOverridings: <map[string]any> # helm chart overridings, istio/istiod

Cilium

Currently we cannot install it via the ksctl CRD, as CNIs need to be installed when the cluster is configured; otherwise it will cause network issues.

Still, cilium can be installed, and the only configuration available is version. We are working on how to allow users to specify the overridings at cluster creation.

Anyway, here is how it is done.

We may consider using a file spec instead of a cmd parameter; until that is done you have to wait.

How to use it (Basic Usage)

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: cilium
spec:
	stacks:
	- stackId: cilium
		appType: app

Overrides available

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: cilium
spec:
	stacks:
	- stackId: cilium
		appType: app
		overrides:
			cilium:
				version: <string> # version of the cilium
				ciliumChartOverridings: <map[string]any> # helm chart overridings, cilium

Flannel

Currently we cannot install via the ksctl crd as cni are needed to be installed when configuring otherwise it will cause network issues

still we have flannel can be installed and only configuration available are version, we are working towards how can we allow users to specify the overridings in the cluster creation

we can consider using a file spec instead of cmd parameter, until that is done you have to wait

How to use it (Basic Usage)

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: flannel
spec:
	stacks:
	- stackId: flannel
		appType: cni

Overrides available

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: flannel
spec:
	stacks:
	- stackId: flannel
		appType: cni
		overrides:
			flannel:
				version: <string> # version of the flannel

Kube-Prometheus

How to use it (Basic Usage)

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: monitoring
spec:
	stacks:
	- stackId: standard-kubeprometheus
		appType: app

Overrides available

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: monitoring
spec:
	stacks:
	- stackId: standard-kubeprometheus
		appType: app
		overrides:
			kube-prometheus:
				version: <string> # version of the kube-prometheus
				helmKubePromChartOverridings: <map[string]any> # helm chart overridings, kube-prometheus

SpinKube

How to use it (Basic Usage)

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: wasm-spinkube
spec:
	stacks:
	- stackId: production-spinkube
		appType: app

Demo app

kubectl apply -f https://raw.githubusercontent.com/spinkube/spin-operator/main/config/samples/simple.yaml
kubectl port-forward svc/simple-spinapp 8083:80
curl localhost:8083/hello

Overrides available

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: wasm-spinkube
spec:
	stacks:
	- stackId: production-wasmedge-kwasm
		appType: app
		overrides:
			spinkube-operator:
				version: <string> # version; the same for shim-executor, runtime-class, shim-executor-crd, spinkube-operator
				helmOperatorChartOverridings: <map[string]any> # helm chart overridings, spinkube-operator

			spinkube-operator-shim-executor:
				version: <string> # version; the same for shim-executor, runtime-class, shim-executor-crd, spinkube-operator

			spinkube-operator-runtime-class:
				version: <string> # version; the same for shim-executor, runtime-class, shim-executor-crd, spinkube-operator

			spinkube-operator-crd:
				version: <string> # version; the same for shim-executor, runtime-class, shim-executor-crd, spinkube-operator

			cert-manager:
				version: <string>
				certmanagerChartOverridings: <map[string]any> # helm chart overridings, cert-manager

			kwasm-operator:
				version: <string>
				kwasmOperatorChartOverridings: <map[string]any> # helm chart overridings, kwasm/kwasm-operator

Kwasm

How to use it (Basic Usage)

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: wasm-kwasm
spec:
	stacks:
	- stackId: production-kwasm
		appType: app

Demo app(wasmedge)

---
apiVersion: v1
kind: Pod
metadata:
	name: "myapp"
	namespace: default
	labels:
		app: nice
spec:
	runtimeClassName: wasmedge
	containers:
	- name: myapp
		image: "docker.io/cr7258/wasm-demo-app:v1"
		ports:
		- containerPort: 8080
			name: http
	restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
	name: nice
spec:
	selector:
		app: nice
	type: ClusterIP
	ports:
	- name: nice
		protocol: TCP
		port: 8080
		targetPort: 8080

Demo app(wasmtime)

apiVersion: batch/v1
kind: Job
metadata:
  name: nice
  namespace: default
  labels:
    app: nice
spec:
  template:
    metadata:
      name: nice
      labels:
        app: nice
    spec:
      runtimeClassName: wasmtime
      containers:
      - name: nice
        image: "meteatamel/hello-wasm:0.1"
      restartPolicy: OnFailure
#### For wasmedge
# once up and running
kubectl port-forward svc/nice 8080:8080

# then you can curl the service
curl localhost:8080
#### For wasmtime
# just check the logs

Overrides available

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: wasm-wasmedge
spec:
	stacks:
	- stackId: production-kwasm
		appType: app
		overrides:
			kwasm-operator:
				version: <string>
				kwasmOperatorChartOverridings: <map[string]any> # helm chart overridings, kwasm/kwasm-operator

Example usage

Let's deploy argocd@v2.9.12 and kube-prometheus@55.0.0

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: monitoring-plus-gitops
spec:
	components:
		- appName: standard-argocd
			appType: app
			version: v2.9.12

		- appName: standard-kubeprometheus
			appType: app
			version: "55.0.0"

You can see that once it is deployed, it fetches and deploys them.

Let's try to upgrade them to their latest versions.

kubectl edit stack monitoring-plus-gitops
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: monitoring-plus-gitops
spec:
	components:
		- appName: standard-argocd
			appType: app
			version: latest

		- appName: standard-kubeprometheus
			appType: app
			version: latest

Once edited, it will uninstall the previous install and reinstall the latest deployments.
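
To inspect what a Stack currently specifies before or after such an edit, a plain kubectl read works (assuming your kubeconfig points at the ksctl-created cluster):

kubectl get stack monitoring-plus-gitops -o yaml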

1.7.3 - Ksctl State-Importer

Documentation on ksctl stateimporter

It is a helper deployment to transfer state information from one storage option to another.

It is used to transfer data in ~/.ksctl location (provided the cluster is created via storageProvider: store-local).

It utilizes these two methods:

So before the ksctl agent is deployed, we first create this pod, which in turn runs an HTTP server with storageProvider: store-kubernetes and uses the storage.Import() method.

Once we get a 200 OK response from the HTTP server, we remove the pod and move on to the ksctl agent deployment, so that it can use the state now present in ConfigMaps and Secrets.

1.8 - Kubernetes Distributions

Various Kubernetes Distributions

K3s and Kubeadm only work for HA self-managed clusters

1.8.1 - K3s

K3s Kubernetes Distributions

K3s for HA Cluster on supported provider

K3s is used for self-managed clusters. It's a lightweight Kubernetes distribution. We are using it as follows:

  • controlplane (k3s server)
  • workerplane (k3s agent)
  • datastore (etcd members)

1.8.2 - Kubeadm

Kubeadm Kubernetes Distributions

Kubeadm for HA Cluster on supported provider

Kubeadm support is added, with etcd as the datastore.

1.9 - Storage

The storage providers available in ksctl.

storage providers

1.9.1 - External Storage

How to use an external MongoDB as the ksctl storage provider.

External MongoDB as a Storage provider

Refer : internal/storage/external/mongodb

Data to store and the filtering it performs

  1. First it gets the cluster data / credentials data based on these filters:
    • cluster_name (for cluster)
    • region (for cluster)
    • cloud_provider (for cluster & credentials)
    • cluster_type (for cluster)
    • Also, once the cluster has reached its stable desired state, the IsCompleted flag in the specific cloud_provider struct is marked to indicate it is done.
  2. Make sure the above things are specified before writing to the storage.

How to use it

  1. You need to call the Init function to get the storage; make sure the caller variable has the interface type.
  2. Before performing any operations you must call Connect().
  3. For using the methods Read(), Write(), Delete(), make sure you have called Setup().
  4. ReadCredentials() and WriteCredentials() can be used directly; you just need to specify the cloud provider you want to read or write.
  5. For GetOneOrMoreClusters() you simply need to specify the filter.
  6. For AlreadyCreated() you just have to specify the function args.
  7. Don't forget to call storage.Kill() when you want to stop the complete execution; it guarantees that it will wait until all pending operations on the storage are completed.
  8. For a custom storage directory, set the env var KSCTL_CUSTOM_DIR_ENABLED; the value must be directory names separated by spaces.
  9. Specify the required ENV vars:
    • export MONGODB_URI=""

    Hint: mongodb://${username}:${password}@${domain}:${port}, or for MongoDB Atlas: mongodb+srv://${username}:${password}@${domain}
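
For example, a hypothetical local MongoDB instance; host, port and credentials below are placeholders, not real values.

export MONGODB_URI="mongodb://ksctl:ksctlpass@localhost:27017"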

Things to look for

  1. Make sure that when you receive return data from Read(), you copy the value being pointed to into your storage variable, not the pointer itself!

  2. When any credentials are written, they are stored in

    • Database: ksctl-{userid}-db
    • Collection: {cloud_provider}
    • Document/Record: raw bson data with the above specified data and filter fields
  3. When any clusterState is written, it gets stored in

    • Database: ksctl-{userid}-db
    • Collection: credentials
    • Document/Record: raw bson data with the above specified data and filter fields
  4. When you do Switch (aka getKubeconfig), it fetches the kubeconfig from point 3 and stores it to <some_dir>/.ksctl/kubeconfig

1.9.2 - Local Storage

How to use the local filesystem as the ksctl storage provider.

Local as a Storage Provider

Refer: internal/storage/local

Data to store and the filtering it performs

  1. First it gets the cluster data / credentials data based on these filters:
    • cluster_name (for cluster)
    • region (for cluster)
    • cloud_provider (for cluster & credentials)
    • cluster_type (for cluster)
    • Also, once the cluster has reached its stable desired state, the IsCompleted flag in the specific cloud_provider struct is marked to indicate it is done.
  2. Make sure the above things are specified before writing to the storage.

It is stored something like this:

 it will use almost the same construct.
 * ClusterInfos => $USER_HOME/.ksctl/state/
	 |-- {cloud_provider}
		|-- {cluster_type} aka (ha, managed)
			|-- "{cluster_name} {region}"
				|-- state.json
 * CredentialInfo => $USER_HOME/.ksctl/credentials/{cloud_provider}.json
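
For example, a cluster named demo created on Civo in region LON1 as an HA cluster would land at the paths below; the cluster name, region and provider are illustrative.

ls "$HOME/.ksctl/state/civo/ha/demo LON1/state.json"
ls "$HOME/.ksctl/credentials/civo.json"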

How to use it

  1. You need to call the Init function to get the storage; make sure the caller variable has the interface type.
  2. Before performing any operations you must call Connect().
  3. For using the methods Read(), Write(), Delete(), make sure you have called Setup().
  4. ReadCredentials() and WriteCredentials() can be used directly; you just need to specify the cloud provider you want to read or write.
  5. For GetOneOrMoreClusters() you simply need to specify the filter.
  6. For AlreadyCreated() you just have to specify the function args.
  7. Don't forget to call storage.Kill() when you want to stop the complete execution; it guarantees that it will wait until all pending operations on the storage are completed.
  8. For a custom storage directory, set the env var KSCTL_CUSTOM_DIR_ENABLED; the value must be directory names separated by spaces.
  9. It creates the configuration directories on your behalf.

Things to look for

  1. Make sure that when you receive return data from Read(), you copy the value being pointed to into your storage variable, not the pointer itself!
  2. When any credentials are written, they are stored in <some_dir>/.ksctl/credentials/{cloud_provider}.json
  3. When any clusterState is written, it gets stored in <some_dir>/.ksctl/state/{cloud_provider}/{cluster_type}/{cluster_name} {region}/state.json
  4. When you do Switch (aka getKubeconfig), it fetches the kubeconfig from point 3 and stores it to <some_dir>/.ksctl/kubeconfig

2 - Stable Documentation

stable branch documentation

This is the latest branch documentation

2.1 - Architecture

Overall architecture of Ksctl

Architecture diagrams

2.1.1 - Api Components

Learn how different components communicate with each other via API’s and automation scripts to serve you in best way possible.

Core Design Components

Design

Overview architecture of ksctl

light mode

Managed Cluster creation & deletion

light mode

High Available Cluster creation & deletion

light mode

2.2 - Getting Started

What does your user need to know to try your project?

Getting Started Documentation

Installation & Uninstallation Instructions

Ksctl CLI

Lets begin with installation of the tools their are various method

Single command method

curl -sfL https://get.ksctl.com | python3 -
bash <(curl -s https://raw.githubusercontent.com/ksctl/cli/main/scripts/uninstall.sh)
zsh <(curl -s https://raw.githubusercontent.com/ksctl/cli/main/scripts/uninstall.sh)

From Source Code

make install_linux

# macOS on M1
make install_macos

# macOS on INTEL
make install_macos_intel

# For uninstalling
make uninstall

2.3 - Cloud Provider

Info about the cloud providers available

This Page includes more info about different cloud providers

2.3.1 - Amazon Web Services

Amazon Web Services

Aws support for HA and Managed Clusters

How these credentials are used by ksctl

  1. Environment Variables
export AWS_ACCESS_KEY_ID=""
export AWS_SECRET_ACCESS_KEY=""
  1. Using command line
ksctl cred

Current Features

Cluster features

Highly Available cluster

clusters which are managed by the user not by cloud provider

you can choose between k3s and kubeadm as your bootstrap tool

custom components being used

  • Etcd database VM
  • HAProxy loadbalancer VM for controlplane nodes
  • controlplane VMs
  • workerplane VMs

Managed Cluster Elastic Kubernetes Service

we provision Roles ksctl-* 2 for each cluster:

  • ksctl-<clustername>-wp-role for the EKS NodePool
  • ksctl-<clustername>-cp-role for the EKS controlplane

we utilize the iam:AssumeRole to assume the role and create the cluster

Policies aka permissions for the user

here is the policy and role which we are using

  1. iam-role-full-access(Custom Policy)
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor6",
            "Effect": "Allow",
            "Action": [
                "iam:CreateInstanceProfile",
                "iam:DeleteInstanceProfile",
                "iam:GetRole",
                "iam:GetInstanceProfile",
                "iam:RemoveRoleFromInstanceProfile",
                "iam:CreateRole",
                "iam:DeleteRole",
                "iam:AttachRolePolicy",
                "iam:PutRolePolicy",
                "iam:ListInstanceProfiles",
                "iam:AddRoleToInstanceProfile",
                "iam:ListInstanceProfilesForRole",
                "iam:PassRole",
                "iam:CreateServiceLinkedRole",
                "iam:DetachRolePolicy",
                "iam:DeleteRolePolicy",
                "iam:DeleteServiceLinkedRole",
                "iam:GetRolePolicy",
                "iam:SetSecurityTokenServicePreferences"
            ],
            "Resource": [
                "arn:aws:iam::*:role/ksctl-*",
                "arn:aws:iam::*:instance-profile/*"
            ]
        }
    ]
}
  1. eks-full-access(Custom Policy)
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "eks:ListNodegroups",
                "eks:ListClusters",
                "eks:*"
            ],
            "Resource": "*"
        }
    ]
}
  1. AmazonEC2FullAccess(Aws)
  2. IAMReadOnlyAccess(Aws)

2.3.2 - Azure

Azure Cloud Provider

Azure support for HA and Managed Clusters

Azure Subscription ID

subscription id using your subscription

azure-subscription

Azure Tenant ID

Azure Dashboard

Azure Dashboard contains all the credentials required

azure-dashboard

lets get the tenant id from the Azure

Azure Client ID

it represents the id of app created

Azure Client Secret

it represents the secret associated with the app in order to use it

create app secret

after-click

copy-secret

Assign Role to your app

head over to subscriptions page and click Access Control (IAM) select the Role Assignment and then click Add > Add Role Assignment create a new role and when selecting the identity specify the name of the app Here you can customize the role this app has

role-assign-app

How these credentials are used by ksctl

  1. Environment Variables
export AZURE_TENANT_ID=""
export AZURE_SUBSCRIPTION_ID=""
export AZURE_CLIENT_ID=""
export AZURE_CLIENT_SECRET=""
  1. Using command line
ksctl cred

Current Features

Cluster features

Highly Available cluster

clusters which are managed by the user not by cloud provider

you can choose between k3s and kubeadm as your bootstrap tool

custom components being used

  • Etcd database VM
  • HAProxy loadbalancer VM for controlplane nodes
  • controlplane VMs
  • workerplane VMs

Managed Cluster

clusters which are managed by the cloud provider

Other capabilities

Create, Update, Delete, Switch

2.3.3 - Civo

Civo Cloud Provider

Civo support for HA and Managed Clusters

Getting credentials

under settings look for the profile

copy the credentials

How to add credentials to ksctl

  1. Environment Variables
export CIVO_TOKEN=""
  1. Using command line
ksctl cred

Current Features

Cluster features

Highly Available cluster

clusters which are managed by the user not by cloud provider

you can choose between k3s and kubeadm as your bootstrap tool

custom components being used

  • Etcd database VM
  • HAProxy loadbalancer instance for controlplane nodes
  • controlplane instances
  • workerplane instances

Managed Cluster

clusters which are managed by the cloud provider

Other capabilities

Create, Update, Delete, Switch

2.3.4 - Google Cloud Platform

Google Cloud Platform

Gcp support for HA and Managed Clusters

2.3.5 - Local

Local Provider

It creates cluster on the host machine utilizing kind

Current features

currently using Kind Kubernetes in Docker

2.4 - Reference

Low level reference docs for your project.

2.4.1 - Reference for command line reference

A short lead description about this content page. It can be bold or italic and can be split over multiple paragraphs.

for the Ksctl cli

CLI Command Reference

Docs are available now in cli repo Here are the links for the documentation files

Markdown format RichText format

2.5 - Contribution Guidelines

How to contribute to the docs

You can do almost all the tests in your local except e2e tests which requires you to provide cloud credentials

Provide a generic tasks for new and existing contributors

Types of changes

There are many ways to contribute to the ksctl project. Here are a few examples:

  • New changes to docs: You can contribute by writing new documentation, fixing typos, or improving the clarity of existing documentation.
  • New features: You can contribute by proposing new features, implementing new features, or fixing bugs.
  • Cloud support: You can contribute by adding support for new cloud providers.
  • Kubernetes distribution support: You can contribute by adding support for new Kubernetes distributions.

Phases a change / feature goes through

  1. Raise a issue regarding it (used for prioritizing)
  2. what all changes does it demands
  3. if all goes well you will be assigned
  4. If its about adding Cloud Support then usages of CloudFactory is needed and sperate the logic of vm, firewall, etc. to their respective files and do have a helper file for behind the scenes logic for ease of use
  5. If its about adding Distribution support do check its compatability with different cloud providers vm configs and firewall rules which needs to be done

Formating for PR & Issue subject line

Subject / Title

# Releated to enhancement
enhancement: <Title>

# Related to feature
feat: <Title>

# Related to Bug fix or other types of fixes
fix: <Title>

# Related to update
update: <Title>

Body

Follow the PR or Issue template add all the significant changes to the PR description

Commit messages

mention the detailed description in the git commits. what? why? How?

Each commit must be sign-off and should follow conventional commit guidelines.

Conventional Commits

The commit message should be structured as follows:

<type>(optional scope): <description>

[optional body]

[optional footer(s)]

For more detailed information on conventional commits, you can refer to the official Conventional Commits specification.

Sign-off

Each commit must be signed-off. You can do this by adding a sign-off line to your commit messages. When committing changes in your local branch, add the -S flag to the git commit command:

$ git commit -S -m "YOUR_COMMIT_MESSAGE"
# Creates a signed commit

You can find more comprehensive details on how to sign off git commits by referring to the GitHub section on signing commits.

Verification of Commit Signatures

You have the option to sign commits and tags locally, which adds a layer of assurance regarding the origin of your changes. GitHub designates commits or tags as either “Verified” or “Partially verified” if they possess a GPG, SSH, or S/MIME signature that is cryptographically valid.

GPG Commit Signature Verification

To sign commits using GPG and ensure their verification on GitHub, adhere to these steps:

  • Check for existing GPG keys.
  • Generate a new GPG key.
  • Add the GPG key to your GitHub account.
  • Inform Git about your signing key.
  • Proceed to sign commits.

SSH Commit Signature Verification

To sign commits using SSH and ensure their verification on GitHub, follow these steps:

  • Check for existing SSH keys.
  • Generate a new SSH key.
  • Add an SSH signing key to your GitHub account.
  • Inform Git about your signing key.
  • Proceed to sign commits.

S/MIME Commit Signature Verification

To sign commits using S/MIME and ensure their verification on GitHub, follow these steps:

  • Inform Git about your signing key.
  • Proceed to sign commits.

For more detailed instructions, refer to GitHub’s documentation on commit signature verification

Development

First you have to fork the ksctl repository. fork

cd <path> # to you directory where you want to clone ksctl
mkdir <directory name> # create a directory
cd <directory name> # go inside the directory
git clone https://github.com/${YOUR_GITHUB_USERNAME}/ksctl.git # clone you fork repository
cd ksctl # go inside the ksctl directory
git remote add upstream https://github.com/ksctl/ksctl.git # set upstream
git remote set-url --push upstream no_push # no push to upstream

Trying out code changes

Before submitting a code change, it is important to test your changes thoroughly. You can do this by running the unit tests and integration tests.

Submitting changes

Once you have tested your changes, you can submit them to the ksctl project by creating a pull request. Make sure you use the provided PR template

Getting help

If you need help contributing to the ksctl project, you can ask for help on the kubesimplify Discord server, ksctl-cli channel or else raise issue or discussion

Thank you for contributing!

We appreciate your contributions to the ksctl project!

Some of our contributors ksctl contributors

2.5.1 - Contribution Guidelines for CLI

How to contribute to the ksctl-cli

Repository: ksctl/cli

How to Build from source

Linux

make install_linux # for linux

Mac OS

make install_macos # for macos

Windows

.\builder.ps1 # for windows

2.5.2 - Contribution Guidelines for Core

How to contribute to the ksctl

Repository: ksctl/ksctl

Test out both All Mock and Unit tests and lints

make test

Test out both All Unit tests

make unit_test_all

Test out both All Mock tests

make mock_all

for E2E tests on local

set the required token as ENV vars then

cd test/e2e

# then the syntax for running
go run . -op create -file azure/create.json

# for operations you can refer file test/e2e/consts.go

2.5.3 - Contribution Guidelines for Docs

How to contribute to the ksctl-docs

Repository: ksctl/docs

How to Build from source

# Prequisites
npm install -D postcss
npm install -D postcss-cli
npm install -D autoprefixer
npm install hugo-extended

Install Dependencies

hugo serve

2.6 - Concepts

Concepts around ksctl core

This section will help you to learn about the underlying system of Ksctl. It will help you to obtain a deeper understanding of how Ksctl works.

Sequence diagrams for 2 major operations

Create Cloud-Managed Clusters

sequenceDiagram
    participant cm as Manager Cluster Managed
    participant cc as Cloud Controller
    participant kc as Ksctl Kubernetes Controller
    cm->>cc: transfers specs from user or machine
    cc->>cc: to create the cloud infra (network, subnet, firewall, cluster)
    cc->>cm: 'kubeconfig' and other cluster access to the state
    cm->>kc: shares 'kubeconfig'
    kc->>kc: installs kubectl agent, stateimporter and controllers
    kc->>cm: status of creation

Create Self-Managed HA clusters

sequenceDiagram
    participant csm as Manager Cluster Self-Managed
    participant cc as Cloud Controller
    participant bc as Bootstrap Controller
    participant kc as Ksctl Kubernetes Controller
    csm->>cc: transfers specs from user or machine
    cc->>cc: to create the cloud infra (network, subnet, firewall, vms)
    cc->>csm: return state to be used by BootstrapController
    csm->>bc: transfers infra state like ssh key, pub IPs, etc
    bc->>bc: bootstrap the infra by either (k3s or kubeadm)
    bc->>csm: 'kubeconfig' and other cluster access to the state
    csm->>kc: shares 'kubeconfig'
    kc->>kc: installs kubectl agent, stateimporter and controllers
    kc->>csm: status of creation

2.6.1 - Cloud Controller

The Component of Ksctl responsible for creating and managing clusters for different Cloud platforms.

It is responsible for controlling the sequence of tasks for every cloud provider to be executed

2.6.2 - Core Manager

The Component of Ksctl responsible for managing Cloud controller and Distribution controller. It has multiple types of managers

It is responsible for managing client requests and calls the corresponding controller

Types

ManagerClusterKsctl

Role: Perform ksctl getCluster, switchCluster

ManagerClusterKubernetes

Role: Perform ksctl addApplicationAndCrds Currently to be used by machine to machine not by ksctl cli

ManagerClusterManaged

Role: Perform ksctl createCluster, deleteCluster

ManagerClusterSelfManaged

Role: Perform ksctl createCluster, deleteCluster, addWorkerNodes, delWorkerNodes

2.6.3 - Distribution Controller

The Component of Ksctl responsible for selecting the type of Bootstrap solution (Kubeadm or K3s).

It is responsible for controlling the execution sequence for configuring Cloud Resources wrt to the Kubernetes distribution choosen

2.7 - Ksctl Components

Place of all the documentation for the Operators used specifically for k8s clusters

Components

  • ksctl agent
  • ksctl stateimporter
  • ksctl application controller

Sequence diagram on how its deployed

flowchart TD
Base(Ksctl Infra and Bootstrap) -->|Cluster is created| KC(Ksctl controller)
KC -->|Creates| Storage{storageProvider=='local'}
Storage -->|Yes| KSI(Ksctl Storage Importer)
Storage -->|No| KA(Ksctl Agent)
KSI -->KA
KA -->|Health| D(Deploy other ksctl controllers)

2.7.1 - Ksctl Agent

Documentation on ksctl agent

It is a ksctl’s solution to infrastructure management and also kubernetes management.

Especially inside the kubertes cluster

It is a GRPC server running as a deployment. and a fleet of controllers will call it to perform certain operations. For instance, application installation via stack.application.ksctl.com/v1alpha, etc.

It will be installed on all kubernetes cluster created via ksctl from >= v1.2.0

2.7.2 - Ksctl Application Controller

Documentation on ksctl application controller

It helps in deploying applications using crd to help manage with installaztion, upgrades, downgrades, uninstallaztion. from one version to another and provide a single place of truth where to look for which applications are installed

Types

Stack

For defining a hetrogenous components we came up with a stack which contains M number of components which are different applications with their versions

Supported Apps and CNI

NameTypeCategoryKsctl_NameMore Info
Argo-CDstandardCI/CDstandard-argocdLink
Argo-RolloutsstandardCI/CDstandard-argorolloutsLink
IstiostandardService Meshstandard-istioLink
Ciliumstandard-ciliumLink
Flannelstandard-flannelLink
Kube-PrometheusstandardMonitoringstandard-kubeprometheusLink
SpinKubeproductionWasmproduction-spinkubeLink
WasmEdge and WasmtimeproductionWasmproduction-kwasmLink

Argo-CD

How to use it (Basic Usage)

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: argocd
spec:
	stacks:
	- stackId: standard-argocd
		appType: app

Overrides available

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: argocd
spec:
	stacks:
	- stackId: standard-argocd
		appType: app
		overrides:
			argocd:
				version: <string> # version of the argocd
				noUI: <bool> # to disable the UI
				namespace: <string> # namespace to install argocd
				namespaceInstall: <bool> # to install namespace specific argocd

Argo-Rollouts

How to use it (Basic Usage)

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: argorollouts
spec:
	stacks:
	- stackId: standard-argorollouts
		appType: app

Overrides available

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: argorollouts
spec:
	stacks:
	- stackId: standard-argorollouts
		appType: app
		overrides:
			argorollouts:
				version: <string> # version of the argorollouts
				namespace: <string> # namespace to install argocd
				namespaceInstall: <bool> # to install namespace specific argocd

Istio

How to use it (Basic Usage)

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: istio
spec:
	stacks:
	- stackId: standard-istio
		appType: app

Overrides available

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: istio
spec:
	stacks:
	- stackId: standard-istio
		appType: app
		overrides:
			istio:
				version: <string> # version of the istio
				helmBaseChartOverridings: <map[string]any> # helm chart overridings, istio/base
				helmIstiodChartOverridings: <map[string]any> # helm chart overridings, istio/istiod

Cilium

Currently we cannot install via the ksctl crd as cni are needed to be installed when configuring otherwise it will cause network issues

still we have cilium can be installed and only configuration available are version, we are working towards how can we allow users to specify the overridings in the cluster creation

anyways here is how it is done

we can consider using a file spec instead of cmd parameter, until that is done you have to wait

How to use it (Basic Usage)

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: cilium
spec:
	stacks:
	- stackId: cilium
		appType: app

Overrides available

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: cilium
spec:
	stacks:
	- stackId: cilium
		appType: app
		overrides:
			cilium:
				version: <string> # version of the cilium
				ciliumChartOverridings: <map[string]any> # helm chart overridings, cilium

Flannel

Currently we cannot install via the ksctl crd as cni are needed to be installed when configuring otherwise it will cause network issues

still we have flannel can be installed and only configuration available are version, we are working towards how can we allow users to specify the overridings in the cluster creation

we can consider using a file spec instead of cmd parameter, until that is done you have to wait

How to use it (Basic Usage)

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: flannel
spec:
	stacks:
	- stackId: flannel
		appType: cni

Overrides available

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: flannel
spec:
	stacks:
	- stackId: flannel
		appType: cni
		overrides:
			flannel:
				version: <string> # version of the flannel

Kube-Prometheus

How to use it (Basic Usage)

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: monitoring
spec:
	stacks:
	- stackId: standard-kubeprometheus
		appType: app

Overrides available

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: monitoring
spec:
	stacks:
	- stackId: standard-kubeprometheus
		appType: app
		overrides:
			kube-prometheus:
				version: <string> # version of the kube-prometheus
				helmKubePromChartOverridings: <map[string]any> # helm chart overridings, kube-prometheus

SpinKube

How to use it (Basic Usage)

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: wasm-spinkube
spec:
	stacks:
	- stackId: production-spinkube
		appType: app

Demo app

kubectl apply -f https://raw.githubusercontent.com/spinkube/spin-operator/main/config/samples/simple.yaml
kubectl port-forward svc/simple-spinapp 8083:80
curl localhost:8083/hello

Overrides available

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: wasm-spinkube
spec:
	stacks:
	- stackId: production-wasmedge-kwasm
		appType: app
		overrides:
			spinkube-operator:
				version: <string> # version of the spinkube-operator-shim-executor are same for shim-execuator, runtime-class, shim-executor-crd, spinkube-operator
				helmOperatorChartOverridings: <map[string]any> # helm chart overridings, spinkube-operator

			spinkube-operator-shim-executor:
				version: <string> # version of the spinkube-operator-shim-executor are same for shim-execuator, runtime-class, shim-executor-crd, spinkube-operator

			spinkube-operator-runtime-class:
				version: <string> # version of the spinkube-operator-shim-executor are same for shim-execuator, runtime-class, shim-executor-crd, spinkube-operator

			spinkube-operator-crd:
				version: <string> # version of the spinkube-operator-shim-executor are same for shim-execuator, runtime-class, shim-executor-crd, spinkube-operator

			cert-manager:
				version: <string>
				certmanagerChartOverridings: <map[string]any> # helm chart overridings, cert-manager

			kwasm-operator:
				version: <string>
				kwasmOperatorChartOverridings: <map[string]any> # helm chart overridings, kwasm/kwasm-operator

Kwasm

How to use it (Basic Usage)

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: wasm-kwasm
spec:
	stacks:
	- stackId: production-kwasm
		appType: app

Demo app(wasmedge)

---
apiVersion: v1
kind: Pod
metadata:
	name: "myapp"
	namespace: default
	labels:
		app: nice
spec:
	runtimeClassName: wasmedge
	containers:
	- name: myapp
		image: "docker.io/cr7258/wasm-demo-app:v1"
		ports:
		- containerPort: 8080
			name: http
	restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
	name: nice
spec:
	selector:
		app: nice
	type: ClusterIP
	ports:
	- name: nice
		protocol: TCP
		port: 8080
		targetPort: 8080

Demo app(wasmtime)

apiVersion: batch/v1
kind: Job
metadata:
  name: nice
  namespace: default
  labels:
    app: nice
spec:
  template:
    metadata:
      name: nice
      labels:
        app: nice
    spec:
      runtimeClassName: wasmtime
      containers:
      - name: nice
        image: "meteatamel/hello-wasm:0.1"
      restartPolicy: OnFailure
#### For wasmedge
# once up and running
kubectl port-forward svc/nice 8080:8080

# then you can curl the service
curl localhost:8080
#### For wasmtime
# just check the logs

Overrides available

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: wasm-wasmedge
spec:
	stacks:
	- stackId: production-kwasm
		appType: app
		overrides:
			kwasm-operator:
				version: <string>
				kwasmOperatorChartOverridings: <map[string]any> # helm chart overridings, kwasm/kwasm-operator

Example usage

Lets deploy [email protected], [email protected]

apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: monitoring-plus-gitops
spec:
	components:
		- appName: standard-argocd
			appType: app
			version: v2.9.12

		- appName: standard-kubeprometheus
			appType: app
			version: "55.0.0"

You can see once its deployed it fetch and deploys them

Lets try to upgrade them to their latest versions

kubeclt edit stack monitoring-plus-gitops
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
	name: monitoring-plus-gitops
spec:
	components:
		- appName: standard-argocd
			appType: app
			version: latest

		- appName: standard-kubeprometheus
			appType: app
			version: latest

once edited it will uninstall the previous install and reinstalls the latest deployments

2.7.3 - Ksctl State-Importer

Documentation on ksctl stateimporter

It is a helper deployment to transfer state information from one storage option to another.

It is used to transfer data in ~/.ksctl location (provided the cluster is created via storageProvider: store-local).

It utilizes the these 2 methods:

so before the ksctl agent is deployed we first create this pod which in turn runs a http server having storageProvider: store-kubernetes and uses storage.Import() method

once we get 200 OK responses from the http server we remove the pod and move to ksctl agent deployment so that it can use the state file present in configmaps, secrets

2.8 - Kubernetes Distributions

Various Kubernetes Distributions

K3s and Kubeadm only work for HA self-managed clusters

2.8.1 - K3s

K3s Kubernetes Distributions

K3s for HA Cluster on supported provider

K3s is used for self-managed clusters. It's a lightweight Kubernetes distribution. We are using it as follows:

  • controlplane (k3s server)
  • workerplane (k3s agent)
  • datastore (etcd members)

2.8.2 - Kubeadm

Kubeadm Kubernetes Distributions

Kubeadm for HA Cluster on supported provider

Kubeadm support is added with etcd as datastore

2.9 - Storage

Storage providers available in ksctl

storage providers

2.9.1 - External Storage

Using an external MongoDB as the storage provider

External MongoDB as a Storage provider

Refer : internal/storage/external/mongodb

Data to store and filtering it performs

  1. First it gets the cluster data / credentials data based on these filters
    • cluster_name (for cluster)
    • region (for cluster)
    • cloud_provider (for cluster & credentials)
    • cluster_type (for cluster)
    • also, once the cluster has reached its stable desired state, the IsCompleted flag in the specific cloud_provider struct is set to indicate it is done
  2. Make sure the above fields are specified before writing to the storage

How to use it

  1. You need to call the Init function to get the storage; make sure the caller holds it in an interface-typed variable (a minimal sketch of the call sequence follows this list)
  2. Before performing any operations you must call Connect()
  3. Before using the methods Read(), Write(), Delete(), make sure you have called Setup()
  4. ReadCredentials() and WriteCredentials() can be used directly; you just need to specify the cloud provider you want to read or write
  5. For calling GetOneOrMoreClusters() you simply need to specify the filter
  6. For calling AlreadyCreated() you just have to specify the func args
  7. Don't forget to call storage.Kill() when you want to stop execution; it guarantees that it waits till all pending operations on the storage are completed
  8. For a Custom Storage Directory you need to specify the env var KSCTL_CUSTOM_DIR_ENABLED; the value must be space-separated directory names
  9. Specify the required env vars
    • export MONGODB_URI=""

    Hint: mongodb://${username}:${password}@${domain}:${port} or, for MongoDB Atlas, mongodb+srv://${username}:${password}@${domain}
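To make the call order concrete, here is a minimal Go sketch. The Storage interface and signatures below are simplified assumptions (with an in-memory stand-in instead of MongoDB); the real interface and the Init constructor live in the ksctl source under internal/storage.

package main

import (
	"fmt"
	"os"
)

// Storage captures the documented call sequence. The method set and
// signatures are simplified assumptions for illustration only.
type Storage interface {
	Connect() error
	Setup(cloudProvider, region, clusterName, clusterType string) error
	Write(state []byte) error
	Read() ([]byte, error)
	Kill() error
}

// memStore is a stand-in implementation so this sketch runs without MongoDB.
type memStore struct{ state []byte }

func (m *memStore) Connect() error                { return nil }
func (m *memStore) Setup(_, _, _, _ string) error { return nil }
func (m *memStore) Write(b []byte) error          { m.state = b; return nil }
func (m *memStore) Read() ([]byte, error)         { return m.state, nil }
func (m *memStore) Kill() error                   { return nil }

func main() {
	// The real MongoDB provider reads its connection string from MONGODB_URI.
	_ = os.Getenv("MONGODB_URI")

	var db Storage = &memStore{} // in ksctl this comes from the Init function

	if err := db.Connect(); err != nil { // 2. Connect before anything else
		panic(err)
	}
	defer db.Kill() // 7. Kill waits for pending operations to finish

	// 3. Setup scopes Read/Write/Delete to one cluster (the filter fields).
	if err := db.Setup("aws", "us-east-1", "demo-cluster", "ha"); err != nil {
		panic(err)
	}
	if err := db.Write([]byte(`{"IsCompleted": true}`)); err != nil {
		panic(err)
	}
	state, err := db.Read()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(state))
}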

Things to look for

  1. Make sure that when you receive return data from Read(), you copy the value it points to into your own variable, not the pointer/address itself!

  2. When any credentials are written, they are stored in
    • Database: ksctl-{userid}-db
    • Collection: {cloud_provider}
    • Document/Record: raw bson data with the above specified data and filter fields
  3. When any clusterState is written, it is stored in
    • Database: ksctl-{userid}-db
    • Collection: credentials
    • Document/Record: raw bson data with the above specified data and filter fields
  4. When you do Switch (aka getKubeconfig), it fetches the kubeconfig from point 3 and stores it at <some_dir>/.ksctl/kubeconfig

2.9.2 - Local Storage

Using the local filesystem as the storage provider

Local as a Storage Provider

Refer: internal/storage/local

Data to store and filtering it performs

  1. First it gets the cluster data / credentials data based on these filters
    • cluster_name (for cluster)
    • region (for cluster)
    • cloud_provider (for cluster & credentials)
    • cluster_type (for cluster)
    • also, once the cluster has reached its stable desired state, the IsCompleted flag in the specific cloud_provider struct is set to indicate it is done
  2. Make sure the above fields are specified before writing to the storage

It is stored something like this:

 it will use almost the same construct.
 * ClusterInfos => $USER_HOME/.ksctl/state/
	 |-- {cloud_provider}
		|-- {cluster_type} aka (ha, managed)
			|-- "{cluster_name} {region}"
				|-- state.json
 * CredentialInfo => $USER_HOME/.ksctl/credentials/{cloud_provider}.json

How to use it

  1. You need to call the Init function to get the storage; make sure the caller holds it in an interface-typed variable
  2. Before performing any operations you must call Connect()
  3. Before using the methods Read(), Write(), Delete(), make sure you have called Setup()
  4. ReadCredentials() and WriteCredentials() can be used directly; you just need to specify the cloud provider you want to read or write
  5. For calling GetOneOrMoreClusters() you simply need to specify the filter
  6. For calling AlreadyCreated() you just have to specify the func args
  7. Don't forget to call storage.Kill() when you want to stop execution; it guarantees that it waits till all pending operations on the storage are completed
  8. For a Custom Storage Directory you need to specify the env var KSCTL_CUSTOM_DIR_ENABLED; the value must be space-separated directory names (see the path sketch after this list)
  9. It creates the configuration directories on your behalf
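As a rough illustration of how the layout above maps to a concrete path, and how KSCTL_CUSTOM_DIR_ENABLED swaps out the default .ksctl directory, here is a small sketch; the exact resolution logic lives in internal/storage/local, so treat this as an approximation.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// statePath reproduces the documented on-disk layout:
//   <base>/state/{cloud_provider}/{cluster_type}/"{cluster_name} {region}"/state.json
// The handling of KSCTL_CUSTOM_DIR_ENABLED is an approximation of the real logic.
func statePath(home, cloud, clusterType, clusterName, region string) string {
	parts := []string{home}
	if custom := os.Getenv("KSCTL_CUSTOM_DIR_ENABLED"); custom != "" {
		// space-separated directory names replace the default ".ksctl"
		parts = append(parts, strings.Fields(custom)...)
	} else {
		parts = append(parts, ".ksctl")
	}
	parts = append(parts, "state", cloud, clusterType, clusterName+" "+region, "state.json")
	return filepath.Join(parts...)
}

func main() {
	home, _ := os.UserHomeDir()
	fmt.Println(statePath(home, "aws", "ha", "demo-cluster", "us-east-1"))
	// e.g. /home/you/.ksctl/state/aws/ha/demo-cluster us-east-1/state.json
}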

Things to look for

  1. Make sure that when you receive return data from Read(), you copy the value it points to into your own variable, not the pointer/address itself!
  2. When any credentials are written, they are stored in <some_dir>/.ksctl/credentials/{cloud_provider}.json
  3. When any clusterState is written, it is stored in <some_dir>/.ksctl/state/{cloud_provider}/{cluster_type}/{cluster_name} {region}/state.json
  4. When you do Switch (aka getKubeconfig), it fetches the kubeconfig from point 3 and stores it at <some_dir>/.ksctl/kubeconfig

3 - Features

Features of ksctl

Our Vision

Transform your Kubernetes experience with a tool that puts simplicity and efficiency first. Ksctl eliminates the complexity of cluster management, allowing developers to focus on what matters most – building great applications.

Key Features

🌐 Universal Cloud Support

  • Deploy clusters across any cloud provider
  • Seamless switching between providers
  • Support for both managed and self-managed clusters
  • Freedom to choose your bootstrap provider (K3s or Kubeadm)

πŸš€ Zero-to-Cluster Simplicity

  • Single command cluster deployment
  • No complex configuration required
  • Automated setup and initialization
  • Instant development environment readiness

πŸ’° Cost-Efficient Architecture

  • No additional infrastructure requirements
  • Local file-based or MongoDB storage options
  • Single binary deployment
  • Minimal resource overhead

πŸ› οΈ Streamlined Management

  • Unified interface for all operations
  • Eliminates need for provider-specific CLIs
  • Consistent experience across environments
  • Simplified scaling and upgrades

🎯 Developer-Focused Design

  • Near-zero learning curve
  • Intuitive command structure
  • No new configurations to learn
  • Perfect for teams of all skill levels

πŸ”„ Flexible Operation

  • Self-managed cluster support
  • Cloud provider managed offerings
  • Multiple bootstrap provider options
  • Seamless environment transitions

Technical Benefits

  • Infrastructure Agnostic: Deploy anywhere, manage consistently
  • Rapid Deployment: Bypass complex setup steps and day 0 tasks
  • Future-Ready: Upcoming support for day 1 operations and Wasm
  • Production-Grade: Built for both development and production environments
  • Community-Driven: Active development and continuous improvements

4 - Maintainers

The people maintaining ksctl

Maintainers

Name     | Role                 | Twitter | Github | Discord
Dipankar | Creator & Maintainer | Twitter | Github | dipankardas
Praful   | Maintainer           | Twitter | Github | praful_

5 - Roadmap

Where ksctl is headed next

Current Status on Supported Providers and Next Features

Supported Providers

Legend: Done · Not Started · No Plans · Backlog (indicated by the node colors in the diagram below)
flowchart LR;
  classDef green color:#022e1f,fill:#00f500;
  classDef red color:#022e1f,fill:#f11111;
  classDef white color:#022e1f,fill:#fff;
  classDef black color:#fff,fill:#000;
  classDef blue color:#fff,fill:#00f;

  XX[ksctl]--CloudFactory-->web{Cloud Providers};
  XX[ksctl]--DistroFactory-->web2{Distributions};
  XX[ksctl]--StorageFactory-->web3{State Storage};

  web--Civo-->civo{Types};
  civo:::green--managed-->civom[Create & Delete]:::green;
  civo--HA-->civoha[Create & Delete]:::green;

  web--Local-Kind-->local{Types};
  local:::green--managed-->localm[Create & Delete]:::green;
  local--HA-->localha[Create & Delete]:::black;

  web--AWS-->aws{Types};
  aws:::green--managed-->awsm[Create & Delete]:::green;
  aws--HA-->awsha[Create & Delete]:::green;

  web--Azure-->az{Types};
  az:::green--managed-->azsm[Create & Delete]:::green;
  az--HA-->azha[Create & Delete]:::green;

  web2--K3S-->k3s{Types};
  k3s:::green--HA-->k3ha[Create & Delete]:::green;

  web2--Kubeadm-->kubeadm{Types};
  kubeadm:::green--HA-->kubeadmha[Create & Delete]:::green;

  web3--Local-Store-->slocal{Local}:::green;
  web3--Remote-Store-->rlocal{Remote}:::green;
  rlocal--Provider-->mongo[MongoDB]:::green;

Next Features

  • Talos as the next bootstrap provider
  • Green software features that can help you save energy and improve efficiency
  • First-class WASM support
  • ML features: unikernels and better ML workload scalability
  • Production stack ranging from monitoring and security to application-specific integrations like Vault, Kafka, etc.
  • Health checks for the various k8s clusters
  • Role Based Access Control for any cluster
  • Ability to import any existing cluster, respecting its existing state rather than overwriting it with new state from ksctl, and managing only the resources the tool has access to
  • Add initial production-ready cert-manager + ingress controller (nginx) + Gateway API
  • Add initial production-ready monitoring (Prometheus + Grafana), tracing (Jaeger), and OpenTelemetry support
  • Add initial production-ready networking (Cilium)
  • Add initial production-ready service mesh (Istio)
