Get started using Claudie

Prerequisites

Before you begin, please make sure you have the following prerequisites installed and set up:

  1. Claudie needs to be installed on an existing Kubernetes cluster, referred to as the Management Cluster, which it uses to manage the clusters it provisions. For testing, you can use an ephemeral cluster such as Minikube or Kind (see the sketch after this list). For production environments, however, we recommend a more resilient setup, since Claudie maintains the state of the infrastructure it creates.

  2. Claudie requires the installation of cert-manager in your Management Cluster. To install cert-manager, use the following command:

    kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.yaml
    
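If you just want to try Claudie out, a throwaway Management Cluster created with Kind works well. A minimal sketch, assuming the kind CLI and Docker are installed (the cluster name claudie-management is arbitrary):

    kind create cluster --name claudie-management
    kubectl cluster-info   # confirm the Management Cluster is reachable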

Supported providers

Supported Provider    Node Pools    DNS
AWS                   ✔             ✔
Azure                 ✔             ✔
GCP                   ✔             ✔
OCI                   ✔             ✔
Hetzner               ✔             ✔
Cloudflare            N/A           ✔

To request support for other cloud providers, open an issue or propose a PR.

Install Claudie

  1. Download and extract Claudie manifests from our release page:

    wget https://github.com/berops/claudie/releases/latest/download/claudie.zip && unzip claudie.zip -d claudie
    

  2. Deploy Claudie to the Management Cluster:

    kubectl apply -k claudie
    
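The manifests deploy Claudie into the claudie namespace (the same namespace used later when retrieving the kubeconfig secret). Before continuing, you can check that its components came up; a quick sanity check, assuming the default kustomization was applied unchanged:

    # wait for all Claudie deployments to become available (the timeout is arbitrary)
    kubectl wait deployments --all -n claudie --for=condition=Available --timeout=300s
    kubectl get pods -n claudie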

Deploy your cluster

  1. Create a Kubernetes Secret resource with your provider credentials:

    kubectl create secret generic example-aws-secret-1 \
      --namespace=mynamespace \
      --from-literal=accesskey='myAwsAccessKey' \
      --from-literal=secretkey='myAwsSecretKey'
    

    Check the supported providers for input manifest examples. For an input manifest spanning all supported hyperscalers, check out this example.

  2. Deploy the InputManifest resource that Claudie uses to create the infrastructure, referencing the created secret in .spec.providers as follows:

    kubectl apply -f - <<EOF
    apiVersion: claudie.io/v1beta1
    kind: InputManifest
    metadata:
      name: examplemanifest
    spec:
      providers:
        - name: aws-1
          providerType: aws
          secretRef:
            name: example-aws-secret-1 # reference the secret name
            namespace: mynamespace     # reference the secret namespace
      nodePools:
        dynamic:
          - name: control-aws
            providerSpec:
              name: aws-1
              region: eu-central-1
              zone: eu-central-1a
            count: 1
            serverType: t3.medium
            image: ami-0965bd5ba4d59211c
          - name: compute-1-aws
            providerSpec:
              name: aws-1
              region: eu-central-2
              zone: eu-central-2a
            count: 2
            serverType: t3.medium
            image: ami-0965bd5ba4d59211c
            storageDiskSize: 50
      kubernetes:
        clusters:
          - name: aws-cluster
            version: v1.24.0
            network: 192.168.2.0/24
            pools:
              control:
                - control-aws
              compute:
                - compute-1-aws
    EOF
    

    Deleting an existing InputManifest resource deletes the infrastructure it provisioned!
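    Provisioning the infrastructure can take several minutes. One way to follow progress is to inspect the InputManifest itself; a sketch, assuming your Claudie version populates the resource's .status with provisioning state:

    # inspect the provisioning state reported on the InputManifest
    kubectl get inputmanifest examplemanifest -o jsonpath='{.status}'
    # or describe the resource for a more readable view
    kubectl describe inputmanifest examplemanifest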

Connect to your cluster

Claudie stores the base64-encoded kubeconfig of each provisioned cluster in a secret named <cluster-name>-<cluster-hash>-kubeconfig in the namespace where Claudie is deployed:

  1. Recover the kubeconfig of your cluster by running:
    kubectl get secrets -n claudie -l claudie.io/output=kubeconfig -o jsonpath='{.items[0].data.kubeconfig}' | base64 -d > your_kubeconfig.yaml
    
  2. Use your new kubeconfig:
    kubectl get pods -A --kubeconfig=your_kubeconfig.yaml
    
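If you prefer not to pass --kubeconfig on every call, point kubectl at the new cluster through the standard KUBECONFIG environment variable instead:

    export KUBECONFIG=$PWD/your_kubeconfig.yaml
    kubectl get nodes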

Cleanup

  1. To remove a cluster and its associated infrastructure, remove the cluster definition block from the InputManifest and re-apply the manifest:
    kubectl apply -f - <<EOF
    apiVersion: claudie.io/v1beta1
    kind: InputManifest
    metadata:
      name: examplemanifest
    spec:
      providers:
        - name: aws-1
          providerType: aws
          secretRef:
            name: example-aws-secret-1 # reference the secret name
            namespace: mynamespace     # reference the secret namespace
      nodePools:
        dynamic:
          - name: control-aws
            providerSpec:
              name: aws-1
              region: eu-central-1
              zone: eu-central-1a
            count: 1
            serverType: t3.medium
            image: ami-0965bd5ba4d59211c
          - name: compute-1-aws
            providerSpec:
              name: aws-1
              region: eu-central-2
              zone: eu-central-2a
            count: 2
            serverType: t3.medium
            image: ami-0965bd5ba4d59211c
            storageDiskSize: 50
      kubernetes:
        clusters:
    #      - name: aws-cluster
    #        version: v1.24.0
    #        network: 192.168.2.0/24
    #        pools:
    #          control:
    #            - control-aws
    #          compute:
    #            - compute-1-aws
    EOF
    
  2. To delete all clusters defined in the input manifest, delete the InputManifest. This triggers the deletion process, removing the infrastructure and all data associated with the manifest.

    kubectl delete inputmanifest examplemanifest
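
    If Claudie guards the InputManifest with a finalizer until teardown completes (the usual pattern for operator-managed resources), the delete command can take a while to return. From another shell you can check whether the resource is still present:

    kubectl get inputmanifest examplemanifest   # NotFound once cleanup has finished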