Virtual Kubelet
StackPath's Virtual Kubelet provider allows you to leverage the power of Kubernetes (K8s) to seamlessly deploy and manage your applications across StackPath's expansive Edge Compute network from the control plane of your choice, increasing scalability and reliability, while decreasing latency.
This guide will explain how to create and configure StackPath Edge Compute containers using Virtual Kubelet.
Getting Started
The following are required before you can start using the StackPath Edge Compute Virtual Kubelet provider:
- A Kubernetes cluster
- A StackPath account
- API Credentials
Creating a Virtual Kubelet Pod
The instructions below explain how to deploy a Kubernetes deployment for StackPath's Virtual Kubelet Provider using Kustomize.
Usage
- Confirm that Kustomize is installed in your environment by running the `kustomize version` command. If you haven't already installed Kustomize, follow the instructions here. To find the Kustomize version embedded in recent versions of kubectl, run `kubectl version`:

  ```bash
  kubectl version --short --client
  Client Version: v1.26.0
  Kustomize Version: v4.5.7
  ```
- Clone this repository to your local environment.

- Navigate to the base directory, which contains the base Virtual Kubelet deployment:

  ```bash
  cd deployment/kustomize/base
  ```
- Follow this guide to obtain StackPath API credentials and update the `config.properties` file with your StackPath account, Stack, client, and secret IDs:

  ```
  SP_STACK_ID = {your-stack-id}
  SP_CLIENT_ID = {your-client-id}
  SP_CLIENT_SECRET = {your-client-secret}
  ```
- To deploy the Virtual Kubelet resources, run the following command from the base directory:

  ```bash
  kubectl apply -k .
  ```

This will create the Virtual Kubelet deployment in your Kubernetes cluster. Please note that a secret will be generated from the `config.properties` file specified in the `secretGenerator` section of the `kustomization.yaml` file. This secret contains the values of the environment variables specified in the `config.properties` file.
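For reference, the base `kustomization.yaml` is what wires `config.properties` into that generated secret. The sketch below shows what the relevant generator entries could look like; the generator names (`sp-vk-secrets`, `sp-vk-location`) match the ones referenced by the overlay example later in this guide, but check the file in the repository for the authoritative contents.

```yaml
# Illustrative sketch only; see base/kustomization.yaml in the repository for the real file
secretGenerator:
  - name: sp-vk-secrets
    envs:
      - config.properties             # SP_STACK_ID, SP_CLIENT_ID, SP_CLIENT_SECRET become Secret keys
configMapGenerator:
  - name: sp-vk-location
    literals:
      - SP_CITY_CODE={your-city-code} # default city code; overlays can replace this value
```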
Updating Resources
To customize the Virtual Kubelet deployment, create an overlay directory (`vk-deployment-updated` in this example) within the `overlays` directory, containing a `kustomization.yaml` file that specifies the changes you want to make.
```
.
├── base
│   ├── cluster-role.yaml
│   ├── config.properties
│   ├── kustomization.yaml
│   ├── namespace.yaml
│   ├── service-account.yaml
│   └── vk-deployment.yaml
└── overlays
    └── vk-deployment-updated
        └── kustomization.yaml
```
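One way to scaffold the overlay directory before filling in the file (a minimal sketch; adjust the path to wherever the Kustomize manifests live in your checkout):

```bash
# Run from the deployment/kustomize directory of the cloned repository
mkdir -p overlays/vk-deployment-updated
touch overlays/vk-deployment-updated/kustomization.yaml
```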
Create the following `kustomization.yaml` file under the overlay directory to create a Virtual Kubelet in a namespace other than the default one while updating the values of the `SP_CITY_CODE` and `SP_STACK_ID` environment variables. We will be using `sp-atl` as the namespace and the `ATL` location for this example.
```yaml
resources:
  - ../../base
namespace: sp-atl
images:
  - name: stackpath.com/virtual-kubelet
    newTag: 0.0.2
configMapGenerator:
  - name: sp-vk-location
    behavior: replace
    literals:
      - SP_CITY_CODE=ATL
secretGenerator:
  - name: sp-vk-secrets
    behavior: merge
    literals:
      - SP_STACK_ID=<another_stack_id>
```
- `resources` references the base resources that are inherited by this overlay, which includes a default Virtual Kubelet deployment configuration.
- `namespace` specifies that the Virtual Kubelet deployment will be created in the `sp-atl` namespace.
- `images` is used to define the version of the StackPath Virtual Kubelet image to be used.
- `configMapGenerator` replaces the existing value of `SP_CITY_CODE` with `ATL`, which specifies the geographic location of the Edge Compute infrastructure.
- `secretGenerator` merges the existing `config.properties` file with a new `SP_STACK_ID` value of `<another_stack_id>`. This updates the StackPath Stack ID specified in `config.properties`.
To deploy the overlay, run the following command:

```bash
kubectl apply -k overlays/vk-deployment-updated
```
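To confirm that the customized Virtual Kubelet deployment came up in the new namespace, you can check it with standard kubectl commands (the exact deployment name comes from the base manifests, so adjust as needed):

```bash
# List the Virtual Kubelet deployment and its pods in the overlay's namespace
kubectl get deployments,pods -n sp-atl
```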
Creating a Workload
Now that you've created a Virtual Kubelet pod using the steps above, you're ready for the next step: once this pod is running, you can create a standard Kubernetes pod that is deployed as a StackPath workload.
To use the Virtual Kubelet deployment to deploy workloads in the StackPath Edge Compute infrastructure, configure your pods with the `virtual-kubelet.io/provider` toleration and the `type: virtual-kubelet` node selector.
Here is an example configuration that will create the simplest possible container in the `sp-atl` namespace by providing only a name (`my-pod`) and an image (`my-image`):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  namespace: sp-atl
spec:
  containers:
    - name: my-container
      image: my-image
  tolerations:
    - key: virtual-kubelet.io/provider
      operator: Equal
      value: stackpath
      effect: NoSchedule
  nodeSelector:
    kubernetes.io/role: agent
    type: virtual-kubelet
```
Here's a tip:
You can customize your workload by adding more configuration under the `spec` field, as if you were using the StackPath API.
Here is what a more standard workload configuration's YAML file would look like:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webserver
  namespace: vk-sp
spec:
  containers:
    - name: webserver
      image: nginx:latest
      args:
        - "example1"
        - "example2"
      command:
        - "nginx"
      ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
      env:
        - name: VAR
          value: val
      resources:
        requests:
          memory: "1Gi"
          cpu: "250m"
        limits:
          memory: "4Gi"
          cpu: "2"
      volumeMounts:
        - mountPath: "/disk-1"
          name: volume-1
      livenessProbe:
        tcpSocket:
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
      readinessProbe:
        httpGet:
          path: /
          port: 80
          httpHeaders:
            - name: Custom-Header
              value: Example
        initialDelaySeconds: 5
        periodSeconds: 10
        successThreshold: 2
        timeoutSeconds: 10
        failureThreshold: 1
  volumes:
    - name: volume-1
      csi:
        driver: virtual-kubelet.storage.compute.edgeengine.io
        volumeAttributes:
          size: "2Gi"
  tolerations:
    - key: virtual-kubelet.io/provider
      operator: Equal
      value: stackpath
      effect: NoSchedule
  nodeSelector:
    kubernetes.io/role: agent
    type: virtual-kubelet
```
Using the example above, let's create a workload. The name of our YAML file is `my_example_pod.yaml`, and it's located in our `sp/testing` folder. Using kubectl, run the following command:

```bash
kubectl apply -f sp/testing/my_example_pod.yaml
```
This command creates a Virtual Kubelet container workload using the configuration defined in our YAML file.
Validating our Workload
We can confirm that this workload has been created and is running properly by checking the status of the workload. First, get all workloads to retrieve the appropriate workload ID, then use this ID to get more detailed information on the workload.
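For example, you can check the pod created above from the Kubernetes side with kubectl, and list workloads through the StackPath Edge Compute API to find the workload ID. The API paths below are illustrative assumptions; confirm them against the current StackPath API reference.

```bash
# Kubernetes side: confirm the pod is scheduled on the virtual node and Running
kubectl get pod webserver -n vk-sp
kubectl describe pod webserver -n vk-sp

# StackPath side (endpoint paths assumed; $TOKEN is an API bearer token):
# list workloads to find the workload ID, then fetch that workload's details
curl -H "Authorization: Bearer $TOKEN" \
  "https://gateway.stackpath.com/workload/v1/stacks/$SP_STACK_ID/workloads"
curl -H "Authorization: Bearer $TOKEN" \
  "https://gateway.stackpath.com/workload/v1/stacks/$SP_STACK_ID/workloads/$WORKLOAD_ID"
```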
Enabling Remote Management for Pods
To enable remote management for pods, you can use the `workload.platform.stackpath.net/remote-management` annotation in the pod definition metadata. By setting this annotation to `true`, remote management capabilities are enabled for the containers listed in the pod.
To enable remote management, add the following annotation to your pod definition metadata:
```yaml
annotations:
  workload.platform.stackpath.net/remote-management: "true"
```
By default, if this annotation is not provided or is set to `false`, remote management will be disabled.
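For example, adding the annotation to the webserver pod from earlier would look like this (only the metadata section is shown; the rest of the pod spec is unchanged):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webserver
  namespace: vk-sp
  annotations:
    workload.platform.stackpath.net/remote-management: "true"
spec:
  # ... containers, tolerations, and nodeSelector as in the earlier example
```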
For more information on Edge Compute Workload Metadata and other terms related to StackPath Edge Compute, please refer to Learn Edge Compute Terms.
Enabling remote management should be done with caution and only for trusted pods or in controlled environments where appropriate security measures are in place.
Supported PodSpec File Fields
The following is a comprehensive list of supported fields in the PodSpec file when using StackPath's Virtual Kubelet Provider for Edge Compute:
- shareProcessNamespace: Allows multiple containers in a pod to share the same process namespace.
- hostAliases: Specifies custom host-to-IP mappings for the pod.
- dnsConfig: Configures DNS settings for the pod.
- securityContext: Defines security-related settings for the containers in the pod, including permissions and access levels.
  - runAsUser: Specifies the user ID that runs the container.
  - runAsGroup: Specifies the primary group ID of the container.
  - runAsNonRoot: Ensures that the container does not run as root.
  - supplementalGroups: Lists additional group IDs applied to the container.
  - sysctls: Configures kernel parameters for the container.
- containers: Specifies the main containers in the pod.
  - name: Specifies the name of the container.
  - image: Specifies the container image.
  - command: Specifies the command to be run in the container.
  - args: Specifies the arguments to be passed to the container command.
  - ports: Configures ports for the container.
  - env: Sets environment variables for the container.
  - resources: Specifies the resource requirements and limits for the container.
  - securityContext (Container-specific):
    - runAsUser: Specifies the user ID that runs the container.
    - runAsGroup: Specifies the primary group ID of the container.
    - runAsNonRoot: Ensures that the container does not run as root.
    - allowPrivilegeEscalation: Allows privilege escalation for the container.
    - capabilities: Specifies Linux capabilities for the container.
  - volumeMounts: Mounts volumes into the container.
  - startupProbe: Configures the startup probe for the container.
  - livenessProbe: Configures the liveness probe for the container.
  - readinessProbe: Configures the readiness probe for the container.
  - lifecycle:
    - postStart: Executed after the container starts.
    - preStop: Executed before the container is terminated, for any reason.
  - imagePullPolicy: Specifies when to pull the container image (we currently support Always and IfNotPresent).
  - workingDir: Sets the working directory inside the container.
  - terminationMessagePath: Specifies the path to the container termination message.
  - terminationMessagePolicy: Specifies how the termination message should be populated.
- initContainers: Defines one or more containers that should run before the main containers in the pod (supports the same fields as containers).
- volumes: Configures volumes to be used in the pod.
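To illustrate a few of the supported fields that weren't used in the earlier example, here is a minimal sketch of a pod that sets `shareProcessNamespace`, `hostAliases`, `dnsConfig`, a pod-level `securityContext`, a `lifecycle` hook, and an init container. All names and values are illustrative only.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-extras                # illustrative name
  namespace: sp-atl
spec:
  shareProcessNamespace: true
  hostAliases:
    - ip: "203.0.113.10"              # documentation-range IP, replace with a real one
      hostnames:
        - "backend.internal"
  dnsConfig:
    nameservers:
      - "8.8.8.8"
  securityContext:
    runAsUser: 1000
    runAsNonRoot: true
  initContainers:
    - name: init-setup                # illustrative init container
      image: busybox:latest
      command: ["sh", "-c", "echo initializing"]
  containers:
    - name: app
      image: busybox:latest           # placeholder application image
      imagePullPolicy: IfNotPresent
      command: ["sh", "-c", "sleep 3600"]
      workingDir: /tmp
      lifecycle:
        postStart:
          exec:
            command: ["sh", "-c", "echo started > /tmp/started"]
  tolerations:
    - key: virtual-kubelet.io/provider
      operator: Equal
      value: stackpath
      effect: NoSchedule
  nodeSelector:
    kubernetes.io/role: agent
    type: virtual-kubelet
```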