Get started using Claudie¶
Prerequisites¶
Before you begin, please make sure you have the following prerequisites installed and set up:
- Claudie needs to be installed on an existing Kubernetes cluster, referred to as the Management Cluster, which it uses to manage the clusters it provisions. For testing, you can use an ephemeral cluster such as Minikube or Kind (a minimal Kind setup is sketched after this list). However, for production environments we recommend a more resilient solution, since Claudie maintains the state of the infrastructure it creates.
- Claudie requires cert-manager to be installed in your Management Cluster. To install cert-manager, use the following command:

```bash
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.yaml
```
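If you don't already have a Management Cluster, a minimal sketch of a test setup with Kind might look like the following. The cluster name claudie-management is an arbitrary example, and the cert-manager wait is just one way to confirm the prerequisite is ready:

```bash
# Create a throwaway Management Cluster for testing (not suitable for production).
kind create cluster --name claudie-management

# Install cert-manager, then wait for its pods to become ready before deploying Claudie.
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.yaml
kubectl wait --for=condition=Ready pods --all -n cert-manager --timeout=5m
```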
Supported providers¶
Supported Provider | Node Pools | DNS |
---|---|---|
AWS | ✔ | ✔ |
Azure | ✔ | ✔ |
GCP | ✔ | ✔ |
OCI | ✔ | ✔ |
Hetzner | ✔ | ✔ |
Cloudflare | N/A | ✔ |
GenesisCloud | ✔ | N/A |
To add support for other cloud providers, open an issue or propose a PR.
Install Claudie¶
- Deploy Claudie to the Management Cluster:

```bash
kubectl apply -f https://github.com/berops/claudie/releases/latest/download/claudie.yaml
```
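Claudie's components are deployed into the claudie namespace (the same namespace the kubeconfig secret is read from later in this guide). Before continuing, you may want to confirm they are up; a quick check could look like:

```bash
# Wait for all Claudie pods to become ready, then list them.
kubectl wait --for=condition=Ready pods --all -n claudie --timeout=5m
kubectl get pods -n claudie
```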
To further harden Claudie, you may want to deploy our pre-defined network policies:

```bash
# for clusters using cilium as their CNI
kubectl apply -f https://github.com/berops/claudie/releases/latest/download/network-policy-cilium.yaml

# other
kubectl apply -f https://github.com/berops/claudie/releases/latest/download/network-policy.yaml
```
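To see which policies ended up in the cluster, you can list them afterwards. This sketch assumes the policies are created in the claudie namespace; the second command only applies when Cilium is the CNI:

```bash
# Standard Kubernetes NetworkPolicies:
kubectl get networkpolicies -n claudie

# Cilium-specific policies (only on clusters running Cilium):
kubectl get ciliumnetworkpolicies -n claudie
```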
Deploy your cluster¶
- Create a Kubernetes Secret resource for your provider configuration:

```bash
kubectl create secret generic example-aws-secret-1 \
  --namespace=mynamespace \
  --from-literal=accesskey='myAwsAccessKey' \
  --from-literal=secretkey='myAwsSecretKey'
```
Check the supported providers for input manifest examples. For an input manifest spanning all supported hyperscalers, check out this example.
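If you manage credentials declaratively (for example with GitOps), the same secret can also be expressed as a manifest. The sketch below is simply the declarative equivalent of the command above; the accesskey and secretkey keys are specific to the AWS example, and other providers expect different keys:

```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: example-aws-secret-1
  namespace: mynamespace
type: Opaque
stringData:
  accesskey: myAwsAccessKey
  secretkey: myAwsSecretKey
EOF
```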
- Deploy the InputManifest resource, which Claudie uses to create infrastructure, and reference the created secret in .spec.providers as follows:

```yaml
kubectl apply -f - <<EOF
apiVersion: claudie.io/v1beta1
kind: InputManifest
metadata:
  name: examplemanifest
  labels:
    app.kubernetes.io/part-of: claudie
spec:
  providers:
    - name: aws-1
      providerType: aws
      secretRef:
        name: example-aws-secret-1   # reference the secret name
        namespace: mynamespace       # reference the secret namespace
  nodePools:
    dynamic:
      - name: control-aws
        providerSpec:
          name: aws-1
          region: eu-central-1
          zone: eu-central-1a
        count: 1
        serverType: t3.medium
        image: ami-0965bd5ba4d59211c
      - name: compute-1-aws
        providerSpec:
          name: aws-1
          region: eu-west-3
          zone: eu-west-3a
        count: 2
        serverType: t3.medium
        image: ami-029c608efaef0b395
        storageDiskSize: 50
  kubernetes:
    clusters:
      - name: aws-cluster
        version: 1.27.0
        network: 192.168.2.0/24
        pools:
          control:
            - control-aws
          compute:
            - compute-1-aws
EOF
```
Deleting an existing InputManifest resource deletes the provisioned infrastructure!
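Provisioning takes several minutes. One way to follow progress is to watch the InputManifest's status; the sketch below assumes the resource was created in your current namespace and that your Claudie version exposes a status.state field (the field layout may differ between releases):

```bash
# Watch the InputManifest while Claudie builds the infrastructure.
kubectl get inputmanifests.claudie.io examplemanifest --watch

# Or query the reported state directly.
kubectl get inputmanifests.claudie.io examplemanifest -o jsonpath='{.status.state}'
```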
Connect to your cluster¶
Claudie outputs a base64-encoded kubeconfig secret named <cluster-name>-<cluster-hash>-kubeconfig in the namespace where it is deployed:
- Recover the kubeconfig of your cluster by running:

```bash
kubectl get secrets -n claudie -l claudie.io/output=kubeconfig -o jsonpath='{.items[0].data.kubeconfig}' | base64 -d > your_kubeconfig.yaml
```
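The jsonpath above simply takes the first matching secret, which is fine while you only have one cluster. If several clusters are managed from the same Management Cluster, you may prefer to list the kubeconfig secrets and extract the one for your cluster by name (the names follow the <cluster-name>-<cluster-hash>-kubeconfig pattern described above):

```bash
# List all kubeconfig secrets produced by Claudie.
kubectl get secrets -n claudie -l claudie.io/output=kubeconfig

# Extract the kubeconfig of a specific cluster, e.g. aws-cluster-<cluster-hash>-kubeconfig.
kubectl get secret <secret-name> -n claudie -o jsonpath='{.data.kubeconfig}' | base64 -d > your_kubeconfig.yaml
```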
- Use your new kubeconfig:

```bash
kubectl get pods -A --kubeconfig=your_kubeconfig.yaml
```
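Instead of passing --kubeconfig to every command, you can also export it for the current shell session:

```bash
export KUBECONFIG=$PWD/your_kubeconfig.yaml
kubectl get nodes -o wide
```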
Cleanup¶
- To remove your cluster and its associated infrastructure, delete the cluster definition block from the InputManifest:

```yaml
kubectl apply -f - <<EOF
apiVersion: claudie.io/v1beta1
kind: InputManifest
metadata:
  name: examplemanifest
  labels:
    app.kubernetes.io/part-of: claudie
spec:
  providers:
    - name: aws-1
      providerType: aws
      secretRef:
        name: example-aws-secret-1   # reference the secret name
        namespace: mynamespace       # reference the secret namespace
  nodePools:
    dynamic:
      - name: control-aws
        providerSpec:
          name: aws-1
          region: eu-central-1
          zone: eu-central-1a
        count: 1
        serverType: t3.medium
        image: ami-0965bd5ba4d59211c
      - name: compute-1-aws
        providerSpec:
          name: aws-1
          region: eu-west-3
          zone: eu-west-3a
        count: 2
        serverType: t3.medium
        image: ami-029c608efaef0b395
        storageDiskSize: 50
  kubernetes:
    clusters:
    #   - name: aws-cluster
    #     version: 1.27.0
    #     network: 192.168.2.0/24
    #     pools:
    #       control:
    #         - control-aws
    #       compute:
    #         - compute-1-aws
EOF
```
- To delete all clusters defined in the input manifest, delete the InputManifest. This triggers the deletion process, removing the infrastructure and all data associated with the manifest.

```bash
kubectl delete inputmanifest examplemanifest
```
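Deletion, like provisioning, takes a while. Once it finishes, a minimal sanity check is that both the manifest and its output secrets are gone; both commands below should eventually report that nothing is found:

```bash
kubectl get inputmanifests.claudie.io examplemanifest
kubectl get secrets -n claudie -l claudie.io/output=kubeconfig
```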