Status: Technical Preview
This is the main branch documentation.
Architecture diagrams
Core Design Components
Getting Started Documentation
Let's begin with installing the tools. There are various methods:
# Install
curl -sfL https://get.ksctl.com | python3 -
# Uninstall (bash)
bash <(curl -s https://raw.githubusercontent.com/ksctl/cli/main/scripts/uninstall.sh)
# Uninstall (zsh)
zsh <(curl -s https://raw.githubusercontent.com/ksctl/cli/main/scripts/uninstall.sh)
# Linux
make install_linux
# macOS on M1
make install_macos
# macOS on INTEL
make install_macos_intel
# For uninstalling
make uninstall
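Once installed, a quick sanity check (these subcommands are an assumption based on typical Cobra CLIs, not confirmed by this page):
ksctl help # lists the available commands
ksctl version # hypothetical; prints the installed version if supported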
This page includes more information about the different cloud providers.
AWS support for HA and Managed Clusters
We need credentials to access the clusters. These credentials are confidential and shouldn't be shared with anyone.
export AWS_ACCESS_KEY_ID=""
export AWS_SECRET_ACCESS_KEY=""
ksctl cred
Clusters which are managed by the user, not by the cloud provider.
You can choose between k3s and kubeadm as your bootstrap tool.
Custom components are being used.
We provision IAM roles prefixed ksctl-*, two for each cluster:
ksctl-<clustername>-wp-role for the EKS node pool
ksctl-<clustername>-cp-role for the EKS control plane
We utilize iam:AssumeRole to assume the role and create the cluster. Here are the policy and role which we are using:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor6",
"Effect": "Allow",
"Action": [
"iam:CreateInstanceProfile",
"iam:DeleteInstanceProfile",
"iam:GetRole",
"iam:GetInstanceProfile",
"iam:RemoveRoleFromInstanceProfile",
"iam:CreateRole",
"iam:DeleteRole",
"iam:AttachRolePolicy",
"iam:PutRolePolicy",
"iam:ListInstanceProfiles",
"iam:AddRoleToInstanceProfile",
"iam:ListInstanceProfilesForRole",
"iam:PassRole",
"iam:CreateServiceLinkedRole",
"iam:DetachRolePolicy",
"iam:DeleteRolePolicy",
"iam:DeleteServiceLinkedRole",
"iam:GetRolePolicy",
"iam:SetSecurityTokenServicePreferences"
],
"Resource": [
"arn:aws:iam::*:role/ksctl-*",
"arn:aws:iam::*:instance-profile/*"
]
}
]
}
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"eks:ListNodegroups",
"eks:ListClusters",
"eks:*"
],
"Resource": "*"
}
]
}
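As a sanity check, you can assume one of the provisioned roles yourself with the AWS CLI before ksctl does; the account ID and cluster name below are placeholders:
# assume the control-plane role of a hypothetical cluster named 'demo'
aws sts assume-role \
  --role-arn arn:aws:iam::111122223333:role/ksctl-demo-cp-role \
  --role-session-name ksctl-check
# on success it prints temporary credentials, confirming iam:AssumeRole is wired up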
The kubeconfig is generated after you run:
ksctl switch aws --name here-you-go --region us-east-1
We use an STS token to authenticate with the cluster, so the kubeconfig is valid for only 15 minutes. Once you see an Unauthorized error, re-run the above command.
Azure support for HA and Managed Clusters
We need credentials to access the clusters. These credentials are confidential and shouldn't be shared with anyone.
Subscription ID: use the ID of your subscription. The Azure Dashboard contains all the credentials required.
Tenant ID: get the tenant ID from Azure.
Client ID: it represents the ID of the app that was created.
Client Secret: it represents the secret associated with the app, needed in order to use it.
Head over to the Subscriptions page and click Access Control (IAM), select Role Assignment, then click Add > Add Role Assignment. Create a new role, and when selecting the identity, specify the name of the app. Here you can customize the role this app has.
export AZURE_TENANT_ID=""
export AZURE_SUBSCRIPTION_ID=""
export AZURE_CLIENT_ID=""
export AZURE_CLIENT_SECRET=""
ksctl cred
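If you haven't created the app yet, one way to obtain all four values at once is to create a service principal with the Azure CLI; the app name and role below are placeholders, not something ksctl mandates:
# prints appId (client id), password (client secret) and tenant
az ad sp create-for-rbac --name ksctl-demo \
  --role Contributor \
  --scopes "/subscriptions/$AZURE_SUBSCRIPTION_ID"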
Clusters which are managed by the user, not by the cloud provider.
You can choose between k3s and kubeadm as your bootstrap tool.
Custom components are being used.
Clusters which are managed by the cloud provider.
Managed cluster: not supported yet.
HA cluster
Civo support for HA and Managed Clusters
We need credentials to access the clusters. These credentials are confidential and shouldn't be shared with anyone.
export CIVO_TOKEN=""
ksctl cred
Clusters which are managed by the user, not by the cloud provider.
You can choose between k3s and kubeadm as your bootstrap tool.
Custom components are being used.
Clusters which are managed by the cloud provider.
Managed cluster: not supported yet.
HA cluster
GCP support for HA and Managed Clusters
We need credentials to access the clusters. These credentials are confidential and shouldn't be shared with anyone.
It creates the cluster on the host machine utilizing kind (Kubernetes in Docker).
For the ksctl CLI, docs are now available in the cli repo. Here are the links to the documentation files: Markdown format and RichText format.
You can run almost all the tests locally, except the e2e tests, which require you to provide cloud credentials.
This provides generic tasks for new and existing contributors.
There are many ways to contribute to the ksctl project. Here are a few examples:
Phases a change / feature goes through
# Related to enhancement
enhancement: <Title>
# Related to feature
feat: <Title>
# Related to Bug fix or other types of fixes
fix: <Title>
# Related to update
update: <Title>
Follow the PR or Issue template and add all the significant changes to the PR description.
Mention a detailed description in the git commits: what? why? how?
Each commit must be signed off and should follow the Conventional Commits guidelines.
The commit message should be structured as follows:
<type>(optional scope): <description>
[optional body]
[optional footer(s)]
For more detailed information on conventional commits, you can refer to the official Conventional Commits specification.
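For instance, a commit following that structure could look like this (scope and body are illustrative, not taken from the project's history):
feat(cli): validate region before cluster creation

Check the user-supplied region against the provider's region list
before handing the request to the cloud controller.

Signed-off-by: Your Name <you@example.com>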
Each commit must be signed off. You can do this by adding a sign-off line to your commit messages. When committing changes in your local branch, add the -s flag to the git commit command:
$ git commit -s -m "YOUR_COMMIT_MESSAGE"
# Creates a commit with a Signed-off-by trailer
You can find more comprehensive details on how to sign off git commits by referring to the GitHub section on signing commits.
You have the option to sign commits and tags locally, which adds a layer of assurance regarding the origin of your changes. GitHub designates commits or tags as either “Verified” or “Partially verified” if they possess a GPG, SSH, or S/MIME signature that is cryptographically valid.
GPG Commit Signature Verification
To sign commits using GPG and ensure their verification on GitHub, adhere to these steps:
SSH Commit Signature Verification
To sign commits using SSH and ensure their verification on GitHub, follow these steps:
S/MIME Commit Signature Verification
To sign commits using S/MIME and ensure their verification on GitHub, follow these steps:
For more detailed instructions, refer to GitHub’s documentation on commit signature verification
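A minimal GPG setup sketch (the key ID is a placeholder; see GitHub's walkthrough for the full flow):
gpg --full-generate-key # generate a new key pair
gpg --list-secret-keys --keyid-format=long # find the long key ID
git config --global user.signingkey 3AA5C34371567BD2 # placeholder key ID
git config --global commit.gpgsign true # sign every commit by default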
First you have to fork the ksctl repository.
cd <path> # go to the directory where you want to clone ksctl
mkdir <directory name> # create a directory
cd <directory name> # go inside the directory
git clone https://github.com/${YOUR_GITHUB_USERNAME}/ksctl.git # clone your forked repository
cd ksctl # go inside the ksctl directory
git remote add upstream https://github.com/ksctl/ksctl.git # set upstream
git remote set-url --push upstream no_push # no push to upstream
Before submitting a code change, it is important to test your changes thoroughly. You can do this by running the unit tests and integration tests.
Once you have tested your changes, you can submit them to the ksctl project by creating a pull request. Make sure you use the provided PR template
If you need help contributing to the ksctl project, you can ask on the Kubesimplify Discord server in the ksctl-cli channel, or raise an issue or discussion.
We appreciate your contributions to the ksctl project!
Some of our contributors: ksctl contributors.
Repository: ksctl/cli
make install_linux # for linux
make install_macos # for macos
.\builder.ps1 # for windows
Repository: ksctl/ksctl
make test
make unit_test_all
make mock_all
Set the required tokens as environment variables, then:
cd test/e2e
# then the syntax for running
go run . -op create -file azure/create.json
# for operations you can refer file test/e2e/consts.go
Repository: ksctl/docs
# Prerequisites
npm install -D postcss
npm install -D postcss-cli
npm install -D autoprefixer
npm install hugo-extended
hugo serve
This section will help you learn about the underlying system of ksctl and obtain a deeper understanding of how it works.
sequenceDiagram
    participant cm as Manager Cluster Managed
    participant cc as Cloud Controller
    participant kc as Ksctl Kubernetes Controller
    cm->>cc: transfers specs from user or machine
    cc->>cc: to create the cloud infra (network, subnet, firewall, cluster)
    cc->>cm: 'kubeconfig' and other cluster access to the state
    cm->>kc: shares 'kubeconfig'
    kc->>kc: installs kubectl agent, stateimporter and controllers
    kc->>cm: status of creation
sequenceDiagram
    participant csm as Manager Cluster Self-Managed
    participant cc as Cloud Controller
    participant bc as Bootstrap Controller
    participant kc as Ksctl Kubernetes Controller
    csm->>cc: transfers specs from user or machine
    cc->>cc: to create the cloud infra (network, subnet, firewall, vms)
    cc->>csm: return state to be used by BootstrapController
    csm->>bc: transfers infra state like ssh key, pub IPs, etc
    bc->>bc: bootstrap the infra by either (k3s or kubeadm)
    bc->>csm: 'kubeconfig' and other cluster access to the state
    csm->>kc: shares 'kubeconfig'
    kc->>kc: installs kubectl agent, stateimporter and controllers
    kc->>csm: status of creation
It is responsible for controlling the sequence of tasks to be executed for every cloud provider.
It is responsible for managing client requests and calling the corresponding controller.
Role: Perform ksctl getCluster, switchCluster
Role: Perform ksctl addApplicationAndCrds
Currently intended for machine-to-machine use, not the ksctl cli.
Role: Perform ksctl createCluster, deleteCluster
Role: Perform ksctl createCluster, deleteCluster, addWorkerNodes, delWorkerNodes
It is responsible for controlling the execution sequence for configuring cloud resources with respect to the chosen Kubernetes distribution.
flowchart TD
    Base(Ksctl Infra and Bootstrap) -->|Cluster is created| KC(Ksctl controller)
    KC -->|Creates| Storage{storageProvider=='local'}
    Storage -->|Yes| KSI(Ksctl Storage Importer)
    Storage -->|No| KA(Ksctl Agent)
    KSI --> KA
    KA -->|Health| D(Deploy other ksctl controllers)
It is ksctl's solution to infrastructure management and also Kubernetes management, especially inside the Kubernetes cluster.
It is a gRPC server running as a deployment, and a fleet of controllers call it to perform certain operations, for instance application installation via stack.application.ksctl.com/v1alpha, etc.
It will be installed on all Kubernetes clusters created via ksctl from >= v1.2.0.
It helps in deploying applications using a CRD, managing installation, upgrades, downgrades, and uninstallation from one version to another, and provides a single source of truth for which applications are installed.
For defining heterogeneous components, we came up with a stack: it contains M number of components, which are different applications with their versions (e.g. <app>@<version>).
Once you kubectl apply the stack, the ksctl agent will start deploying the applications in the stack. If you want to upgrade the applications in the stack, you can edit the stack, change the version of the application, and apply the stack again; it will uninstall the previous version and install the new version. Basically it performs a reinstall of the stack, which might cause downtime.
Name | Type | Category | Ksctl_Name | More Info |
---|---|---|---|---|
Argo-CD | standard | CI/CD | standard-argocd | Link |
Argo-Rollouts | standard | CI/CD | standard-argorollouts | Link |
Istio | standard | Service Mesh | standard-istio | Link |
Cilium | standard | - | cilium | Link |
Flannel | standard | - | flannel | Link |
Kube-Prometheus | standard | Monitoring | standard-kubeprometheus | Link |
SpinKube | production | Wasm | production-spinkube | Link |
WasmEdge and Wasmtime | production | Wasm | production-kwasm | Link |
Only one of the apps under the wasm category can be installed at a time; you might need to uninstall one to get another running. Also, the current implementation of the wasm category apps annotates all the nodes with kwasm set to true.
Overrides are specified in map[string]any format.
How to use it (Basic Usage)
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
  name: argocd
spec:
  stacks:
    - stackId: standard-argocd
      appType: app
Overrides available
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
  name: argocd
spec:
  stacks:
    - stackId: standard-argocd
      appType: app
      overrides:
        argocd:
          version: <string> # version of argocd
          noUI: <bool> # to disable the UI
          namespace: <string> # namespace to install argocd in
          namespaceInstall: <bool> # to install namespace-scoped argocd
How to use it (Basic Usage)
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
  name: argorollouts
spec:
  stacks:
    - stackId: standard-argorollouts
      appType: app
Overrides available
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
  name: argorollouts
spec:
  stacks:
    - stackId: standard-argorollouts
      appType: app
      overrides:
        argorollouts:
          version: <string> # version of argo-rollouts
          namespace: <string> # namespace to install argo-rollouts in
          namespaceInstall: <bool> # to install namespace-scoped argo-rollouts
How to use it (Basic Usage)
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
  name: istio
spec:
  stacks:
    - stackId: standard-istio
      appType: app
Overrides available
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
  name: istio
spec:
  stacks:
    - stackId: standard-istio
      appType: app
      overrides:
        istio:
          version: <string> # version of istio
          helmBaseChartOverridings: <map[string]any> # helm chart overridings, istio/base
          helmIstiodChartOverridings: <map[string]any> # helm chart overridings, istio/istiod
Currently we cannot install this via the ksctl CRD, as CNIs need to be installed at cluster-configuration time; doing it later would cause network issues. Still, Cilium can be installed, and the only configuration available is version; we are working on how to let users specify the overrides during cluster creation. Anyway, here is how it is done.
We may move to a file spec instead of a command parameter; until that is done, you have to wait.
How to use it (Basic Usage)
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
  name: cilium
spec:
  stacks:
    - stackId: cilium
      appType: app
Overrides available
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
  name: cilium
spec:
  stacks:
    - stackId: cilium
      appType: app
      overrides:
        cilium:
          version: <string> # version of cilium
          ciliumChartOverridings: <map[string]any> # helm chart overridings, cilium
Currently we cannot install this via the ksctl CRD, as CNIs need to be installed at cluster-configuration time; doing it later would cause network issues. Still, Flannel can be installed, and the only configuration available is version; we are working on how to let users specify the overrides during cluster creation.
We may move to a file spec instead of a command parameter; until that is done, you have to wait.
How to use it (Basic Usage)
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
  name: flannel
spec:
  stacks:
    - stackId: flannel
      appType: cni
Overrides available
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
  name: flannel
spec:
  stacks:
    - stackId: flannel
      appType: cni
      overrides:
        flannel:
          version: <string> # version of flannel
How to use it (Basic Usage)
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
  name: monitoring
spec:
  stacks:
    - stackId: standard-kubeprometheus
      appType: app
Overrides available
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
  name: monitoring
spec:
  stacks:
    - stackId: standard-kubeprometheus
      appType: app
      overrides:
        kube-prometheus:
          version: <string> # version of kube-prometheus
          helmKubePromChartOverridings: <map[string]any> # helm chart overridings, kube-prometheus
How to use it (Basic Usage)
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
  name: wasm-spinkube
spec:
  stacks:
    - stackId: production-spinkube
      appType: app
Demo app
kubectl apply -f https://raw.githubusercontent.com/spinkube/spin-operator/main/config/samples/simple.yaml
kubectl port-forward svc/simple-spinapp 8083:80
curl localhost:8083/hello
Overrides available
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
  name: wasm-spinkube
spec:
  stacks:
    - stackId: production-spinkube
      appType: app
      overrides:
        spinkube-operator:
          version: <string> # the version is shared by shim-executor, runtime-class, shim-executor-crd and spinkube-operator
          helmOperatorChartOverridings: <map[string]any> # helm chart overridings, spinkube-operator
        spinkube-operator-shim-executor:
          version: <string> # same version as spinkube-operator
        spinkube-operator-runtime-class:
          version: <string> # same version as spinkube-operator
        spinkube-operator-crd:
          version: <string> # same version as spinkube-operator
        cert-manager:
          version: <string>
          certmanagerChartOverridings: <map[string]any> # helm chart overridings, cert-manager
        kwasm-operator:
          version: <string>
          kwasmOperatorChartOverridings: <map[string]any> # helm chart overridings, kwasm/kwasm-operator
How to use it (Basic Usage)
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
  name: wasm-kwasm
spec:
  stacks:
    - stackId: production-kwasm
      appType: app
Demo app (wasmedge)
---
apiVersion: v1
kind: Pod
metadata:
  name: "myapp"
  namespace: default
  labels:
    app: nice
spec:
  runtimeClassName: wasmedge
  containers:
    - name: myapp
      image: "docker.io/cr7258/wasm-demo-app:v1"
      ports:
        - containerPort: 8080
          name: http
  restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: nice
spec:
  selector:
    app: nice
  type: ClusterIP
  ports:
    - name: nice
      protocol: TCP
      port: 8080
      targetPort: 8080
Demo app (wasmtime)
apiVersion: batch/v1
kind: Job
metadata:
  name: nice
  namespace: default
  labels:
    app: nice
spec:
  template:
    metadata:
      name: nice
      labels:
        app: nice
    spec:
      runtimeClassName: wasmtime
      containers:
        - name: nice
          image: "meteatamel/hello-wasm:0.1"
      restartPolicy: OnFailure
#### For wasmedge
# once up and running
kubectl port-forward svc/nice 8080:8080
# then you can curl the service
curl localhost:8080
#### For wasmtime
# just check the logs
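kubectl logs job/nice # 'nice' is the Job name from the wasmtime manifest above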
Overrides available
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
  name: wasm-wasmedge
spec:
  stacks:
    - stackId: production-kwasm
      appType: app
      overrides:
        kwasm-operator:
          version: <string>
          kwasmOperatorChartOverridings: <map[string]any> # helm chart overridings, kwasm/kwasm-operator
Let's deploy standard-argocd@v2.9.12 and standard-kubeprometheus@55.0.0:
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
  name: monitoring-plus-gitops
spec:
  components:
    - appName: standard-argocd
      appType: app
      version: v2.9.12
    - appName: standard-kubeprometheus
      appType: app
      version: "55.0.0"
You can see that once it's applied, it fetches and deploys the components.
Let's try to upgrade them to their latest versions:
kubectl edit stack monitoring-plus-gitops
apiVersion: application.ksctl.com/v1alpha1
kind: Stack
metadata:
  name: monitoring-plus-gitops
spec:
  components:
    - appName: standard-argocd
      appType: app
      version: latest
    - appName: standard-kubeprometheus
      appType: app
      version: latest
Once edited, it will uninstall the previous installs and reinstall the latest deployments.
It is a helper deployment to transfer state information from one storage option to another.
It is used to transfer data in the ~/.ksctl location (provided the cluster was created via storageProvider: store-local).
It utilizes these 2 methods:
* Export: StorageFactory Interface
* Import: StorageFactory Interface
So before the ksctl agent is deployed, we first create this pod, which in turn runs an HTTP server having storageProvider: store-kubernetes and uses the storage.Import() method.
Once we get a 200 OK response from the HTTP server, we remove the pod and move on to the ksctl agent deployment, so that it can use the state file present in ConfigMaps and Secrets.
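You can watch this handover from outside; the namespace and exact resource names are assumptions here, since they depend on the ksctl release:
kubectl get pods -A -w # the importer pod should go Running and then terminate
kubectl get deploy -A | grep -i ksctl # the agent deployment shows up afterwards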
K3s and Kubeadm only work for HA self-managed clusters
K3s for HA Cluster on supported provider
K3s is used for self-managed clusters. It's a lightweight Kubernetes distribution. We are using it as follows:
Kubeadm for HA Cluster on supported provider
Kubeadm support is added with etcd as the datastore.
storage providers
External MongoDB as a Storage provider
Refer: internal/storage/external/mongodb
Filter fields:
* cluster_name (for cluster)
* region (for cluster)
* cloud_provider (for cluster & credentials)
* cluster_type (for cluster)
KSCTL_CUSTOM_DIR_ENABLED: the value must be directory names separated by spaces.
export MONGODB_URI=""
Hint: mongodb://${username}:${password}@${domain}:${port}, or for MongoDB Atlas, mongodb+srv://${username}:${password}@${domain}
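For example, pointing at a local MongoDB instance (user, password, and host are placeholders):
export MONGODB_URI="mongodb://ksctl:changeme@localhost:27017"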
Make sure that when you receive return data from Read(), you copy the pointed-to value into the storage pointer variable, not the address itself!
When any credentials are written, they get stored in:
* Database: ksctl-{userid}-db
* Collection: credentials
* Raw BSON data with the above specified data and filter fields
When any clusterState is written, it gets stored in:
* Database: ksctl-{userid}-db
* Collection: {cloud_provider}
* Raw BSON data with the above specified data and filter fields
When you do Switch (aka getKubeconfig), it fetches the kubeconfig from the cluster state above and stores it to <some_dir>/.ksctl/kubeconfig.
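A quick way to peek at what was written, assuming a userid of demo (so the database is ksctl-demo-db per the pattern above):
mongosh "$MONGODB_URI" --eval 'db.getSiblingDB("ksctl-demo-db").getCollectionNames()'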
Local as a Storage Provider
Refer: internal/storage/local
Filter fields:
* cluster_name (for cluster)
* region (for cluster)
* cloud_provider (for cluster & credentials)
* cluster_type (for cluster)
It is stored something like this; it uses almost the same construct:
* ClusterInfos => $USER_HOME/.ksctl/state/
|-- {cloud_provider}
|-- {cluster_type} aka (ha, managed)
|-- "{cluster_name} {region}"
|-- state.json
* CredentialInfo => $USER_HOME/.ksctl/credentials/{cloud_provider}.json
KSCTL_CUSTOM_DIR_ENABLED: the value must be directory names separated by spaces. When set, the layout becomes:
* <some_dir>/.ksctl/credentials/{cloud_provider}.json
* <some_dir>/.ksctl/state/{cloud_provider}/{cluster_type}/{cluster_name} {region}/state.json
* <some_dir>/.ksctl/kubeconfig
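To inspect the local state by hand (the azure/ha/demo eastus path below is a placeholder cluster):
ls "$HOME/.ksctl/state"
cat "$HOME/.ksctl/state/azure/ha/demo eastus/state.json"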
Transform your Kubernetes experience with a tool that puts simplicity and efficiency first. Ksctl eliminates the complexity of cluster management, allowing developers to focus on what matters most: building great applications.
Universal Cloud Support
Zero-to-Cluster Simplicity
Cost-Efficient Architecture
Streamlined Management
Developer-Focused Design
Flexible Operation
Name | Role | Github | Discord |
---|---|---|---|
Dipankar | Creator & Maintainer | Github | dipankardas |
Praful | Maintainer | Github | praful_ |
Current Status on Supported Providers and Next Features
flowchart LR;
    classDef green color:#022e1f,fill:#00f500;
    classDef red color:#022e1f,fill:#f11111;
    classDef white color:#022e1f,fill:#fff;
    classDef black color:#fff,fill:#000;
    classDef blue color:#fff,fill:#00f;
    XX[ksctl] --CloudFactory--> web{Cloud Providers};
    XX[ksctl] --DistroFactory--> web2{Distributions};
    XX[ksctl] --StorageFactory--> web3{State Storage};
    web --Civo--> civo{Types};
    civo:::green --managed--> civom[Create & Delete]:::green;
    civo --HA--> civoha[Create & Delete]:::green;
    web --Local-Kind--> local{Types};
    local:::green --managed--> localm[Create & Delete]:::green;
    local --HA--> localha[Create & Delete]:::black;
    web --AWS--> aws{Types};
    aws:::green --managed--> awsm[Create & Delete]:::green;
    aws --HA--> awsha[Create & Delete]:::green;
    web --Azure--> az{Types};
    az:::green --managed--> azsm[Create & Delete]:::green;
    az --HA--> azha[Create & Delete]:::green;
    web2 --K3S--> k3s{Types};
    k3s:::green --HA--> k3ha[Create & Delete]:::green;
    web2 --Kubeadm--> kubeadm{Types};
    kubeadm:::green --HA--> kubeadmha[Create & Delete]:::green;
    web3 --Local-Store--> slocal{Local}:::green;
    web3 --Remote-Store--> rlocal{Remote}:::green;
    rlocal --Provider--> mongo[MongoDB]:::green;