Virtual Kubelets

StackPath's Virtual Kubelet (VK) provider allows you to leverage the power of Kubernetes (K8s) to deploy and manage your applications across StackPath's expansive Edge Compute network, increasing scalability and reliability, while decreasing latency.

This guide will explain how to create and configure StackPath Edge Compute containers using Virtual Kubelet. 

Getting Started

The following are required before you can start using the StackPath Edge Compute Virtual Kubelet provider:

Creating a Virtual Kubelet Pod

The instructions below explain how to create a Kubernetes Deployment for StackPath's Virtual Kubelet provider using Kustomize.


  1. Confirm that Kustomize is installed in your environment by running the kustomize version command. If you haven't already installed Kustomize, follow the instructions here.

    To find the Kustomize version embedded in recent versions of kubectl, run kubectl version:

    kubectl version --short --client  
    Client Version: v1.26.0  
    Kustomize Version: v4.5.7
  2. Clone this repository to your local environment.

  3. Navigate to the base directory, which contains the base Virtual Kubelet deployment:

    cd deployment/kustomize/base
  4. Follow this guide to obtain StackPath API credentials, then update the environment file referenced by the secretGenerator in kustomization.yaml with your Stack ID, client ID, and client secret:

    SP_STACK_ID={your-stack-id}  
    SP_CLIENT_ID={your-client-id}  
    SP_CLIENT_SECRET={your-client-secret}
  5. To deploy the Virtual Kubelet resources, run the following command:

    kubectl apply -k .

    This will create the Virtual Kubelet deployment in your Kubernetes cluster. Please note that a secret will be generated from the file specified in the secretGenerator section of the kustomization.yaml file. This secret contains the values of the environment variables specified in the file.
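For reference, the secret is declared in the base kustomization.yaml with a secretGenerator stanza along these lines. The secret name sp-vk-secrets matches the one referenced later in this guide; the file name sp-credentials.env is a placeholder for whatever env file the base deployment actually references:

```yaml
secretGenerator:
  - name: sp-vk-secrets
    envs:
      - sp-credentials.env   # hypothetical env file holding the SP_* variables
```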

Updating Resources

To customize the Virtual Kubelet deployment, create an overlay directory (vk-deployment-updated in this example) within the overlays directory with a kustomization.yaml file that specifies the changes you want to make.

├── base  
│   ├── cluster-role.yaml  
│   ├──  
│   ├── kustomization.yaml  
│   ├── namespace.yaml  
│   ├── service-account.yaml  
│   └── vk-deployment.yaml  
└── overlays  
    └── vk-deployment-updated  
        └── kustomization.yaml
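Assuming you are working from the deployment/kustomize directory shown in the tree above, the overlay skeleton can be created with:

```shell
# Create the overlay directory alongside the existing base
# (paths assumed relative to deployment/kustomize).
mkdir -p overlays/vk-deployment-updated

# Start with an empty kustomization.yaml; its contents are shown below.
touch overlays/vk-deployment-updated/kustomization.yaml
```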

Create the following kustomization.yaml file under the overlay directory to create a Virtual Kubelet in a namespace other than the default one while updating the values of SP_CITY_CODE and SP_STACK_ID environment variables. We will be using sp-atl as the location for this example.

resources:
  - ../../base

namespace: sp-atl

images:
  - name:                # name of the Virtual Kubelet image used in the base deployment
    newTag: 0.0.2

configMapGenerator:
  - name: sp-vk-location
    behavior: replace
    literals:
      - SP_CITY_CODE=ATL

secretGenerator:
  - name: sp-vk-secrets
    behavior: merge
    literals:
      - SP_STACK_ID=<another_stack_id>
  • resources references the base resources inherited by this overlay, which include the default Virtual Kubelet deployment configuration.
  • namespace specifies that the Virtual Kubelet deployment will be created in the sp-atl namespace.
  • images pins the tag of the StackPath Virtual Kubelet image to use (0.0.2 here).
  • configMapGenerator replaces the existing value of SP_CITY_CODE with ATL, which specifies the geographic location of the Edge Compute infrastructure.
  • secretGenerator merges the existing secret with a new SP_STACK_ID value of <another_stack_id>, updating the StackPath Stack ID specified in the base deployment.

To deploy overlays, run the following command:

kubectl apply -k overlays/vk-deployment-updated
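After applying the overlay, you can confirm the Virtual Kubelet pod came up in the new namespace, or preview the rendered manifests without applying them (namespace taken from the example above):

```shell
# List pods in the overlay's namespace
kubectl get pods -n sp-atl

# Render the overlay's manifests to stdout without applying them
kubectl kustomize overlays/vk-deployment-updated
```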

Creating a Workload

Now that you've completed the steps above and the Virtual Kubelet pod is running, you can create a standard Kubernetes pod, which the provider deploys as a StackPath workload.

To use the Virtual Kubelet deployment to deploy workloads in the StackPath Edge Compute infrastructure, configure your pods to use the toleration and type: virtual-kubelet node selector.
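In practice this means every pod destined for StackPath needs entries along the following lines in its spec. The toleration key shown is the common Virtual Kubelet convention and is an assumption here, not confirmed by this guide; match it to the toleration your Virtual Kubelet deployment actually registers:

```yaml
spec:
  tolerations:
    - key: virtual-kubelet.io/provider   # key assumed; check your VK deployment
      operator: Equal
      value: stackpath
      effect: NoSchedule
  nodeSelector:
    type: virtual-kubelet
```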

Here is an example configuration that will create the simplest possible container in the sp-atl namespace by providing only a name (my-pod) and image (my-image):

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  namespace: sp-atl
spec:
  containers:
    - name: my-container
      image: my-image


You can customize your workload by adding more configurations under the spec field, as if you were using the StackPath API.

Here is what a more standard workload configuration's YAML file would look like:

apiVersion: v1
kind: Pod
metadata:
  name: webserver
  namespace: vk-sp
spec:
  containers:
    - name: webserver
      image: nginx:latest
      command:
        - "example1"
        - "example2"
        - "nginx"
      ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
      env:
        - name: VAR
          value: val
      resources:
        requests:
          memory: "1Gi"
          cpu: "250m"
        limits:
          memory: "4Gi"
          cpu: "2"
      volumeMounts:
        - mountPath: "/disk-1"
          name: volume-1
      livenessProbe:
        tcpSocket:
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
      readinessProbe:
        httpGet:
          path: /
          port: 80
          httpHeaders:
            - name: Custom-Header
              value: Example
        initialDelaySeconds: 5
        periodSeconds: 10
        successThreshold: 2
        timeoutSeconds: 10
        failureThreshold: 1
  volumes:
    - name: volume-1
      csi:
        volumeAttributes:
          size: "2Gi"
  tolerations:
    - key:
      operator: Equal
      value: stackpath
      effect: NoSchedule
  nodeSelector:
    kubernetes.io/role: agent
    type: virtual-kubelet

Using the example above, let's create a workload. The name of our YAML file is my_example_pod.yaml. It's located in our sp/testing folder. Using kubectl, run the following command:

kubectl apply -f sp/testing/my_example_pod.yaml

This command creates a Virtual Kubelet container workload using the configuration defined in our YAML file.

Validating our Workload

We can confirm that this workload has been created and is running properly by checking the status of the workload. First, get all workloads to retrieve the appropriate workload ID, then use this ID to get more detailed information on the workload.
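The workload listing itself happens through the StackPath API or portal, but from the Kubernetes side the backing pod can be checked with kubectl (pod name and namespace taken from the example above):

```shell
# Check the status of the pod that backs the workload
kubectl get pod webserver -n vk-sp

# Show detailed status and recent events for troubleshooting
kubectl describe pod webserver -n vk-sp
```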