Kubernetes
Kubernetes is the leading technology for the deployment and orchestration of containerized workloads in cloud-native environments.
Tower streamlines the deployment of Nextflow pipelines into Kubernetes, both for cloud-based and on-prem clusters.
The following instructions create a Tower compute environment for a generic Kubernetes distribution. See Amazon EKS or Google GKE for instructions specific to those managed services.
Cluster preparation
This guide assumes that you have already created your Kubernetes cluster and that you have administrative privileges. Complete the following steps to prepare the cluster for the deployment of Nextflow pipelines with Tower:
1. Verify the connection to your Kubernetes cluster:

    ```bash
    kubectl cluster-info
    ```

2. Create the Tower launcher:

    ```bash
    kubectl apply -f https://help.tower.nf/latest/_templates/k8s/tower-launcher.yml
    ```

    This command creates a service account called `tower-launcher-sa` and the associated role bindings, all contained in a namespace called `tower-nf`. Tower uses the service account to launch Nextflow pipelines. Use this service account name when setting up the compute environment for this Kubernetes cluster in Tower.
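    If you want to confirm that the launcher resources were created, a quick check (assuming the default `tower-nf` namespace) is:

    ```bash
    # List the service account and role bindings created by the launcher manifest
    kubectl get serviceaccount,rolebinding -n tower-nf
    ```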
3. Create persistent storage. Tower requires a `ReadWriteMany` persistent volume claim (PVC) mounted to all nodes where workflow pods will be dispatched. You can use any storage solution that supports the `ReadWriteMany` access mode. The setup of this storage is beyond the scope of these instructions; the right solution for you will depend on what is available for your infrastructure or cloud vendor (NFS, GlusterFS, CephFS, Amazon FSx). Ask your cluster administrator for more information.

    - Example PVC backed by local storage: tower-scratch-local.yml
    - Example PVC backed by an NFS server: tower-scratch-nfs.yml

    Apply the appropriate PVC configuration to your cluster:

    ```bash
    kubectl apply -f <PVC_YAML_FILE>
    ```
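    As an illustration only, a minimal `ReadWriteMany` volume backed by an existing NFS server might look like the following sketch. The server address, export path, and storage size are placeholders; the claim name `tower-scratch` and namespace `tower-nf` match the defaults used elsewhere in this guide.

    ```yaml
    # Hypothetical NFS-backed PersistentVolume; replace server and path with real values
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: tower-scratch
    spec:
      capacity:
        storage: 100Gi
      accessModes:
        - ReadWriteMany
      nfs:
        server: 10.0.0.10      # placeholder NFS server address
        path: /export/tower    # placeholder export path
    ---
    # Claim bound to the volume above, in the namespace created by the Tower launcher
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: tower-scratch
      namespace: tower-nf
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: ""
      volumeName: tower-scratch
      resources:
        requests:
          storage: 100Gi
    ```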
Compute environment
1. In a workspace, select Compute environments and then New environment.
2. Enter a descriptive name for this environment, e.g., "K8s cluster".
3. Select Kubernetes as the target platform.
4. From the Credentials drop-down, select existing Kubernetes credentials, or select + to add new credentials. If you have existing credentials, skip to step 7.
5. Enter a name, e.g., "K8s Credentials".
6. Enter the Service account token. Obtain the token with the following command:

    ```bash
    SECRET=$(kubectl get secrets | grep <SERVICE-ACCOUNT-NAME> | cut -f1 -d ' ')
    kubectl describe secret $SECRET | grep -E '^token' | cut -f2 -d':' | tr -d '\t'
    ```

    Replace `<SERVICE-ACCOUNT-NAME>` with the name of the service account created in the cluster preparation instructions (default: `tower-launcher-sa`).
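    On Kubernetes 1.24 and later, long-lived token secrets are no longer created automatically for service accounts, so the command above may return nothing. In that case, one option is to create a token secret yourself; the manifest below is a sketch with an assumed secret name (`tower-launcher-token`):

    ```yaml
    # Hypothetical long-lived token secret for the tower-launcher-sa service account
    apiVersion: v1
    kind: Secret
    metadata:
      name: tower-launcher-token
      namespace: tower-nf
      annotations:
        kubernetes.io/service-account.name: tower-launcher-sa
    type: kubernetes.io/service-account-token
    ```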
7. Enter the Control plane URL. Obtain the control plane URL with the following command:

    ```bash
    kubectl cluster-info
    ```

    It can also be found in your `~/.kube/config` file, under the `server` field corresponding to your cluster.
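    If `kubectl` is already configured with the target cluster as its current context, the following one-liner (a convenience, not part of the original instructions) prints just the server URL:

    ```bash
    # Print the API server URL of the current kubectl context
    kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
    ```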
8. Specify the SSL certificate to authenticate your connection. Find the certificate data in your `~/.kube/config` file; it is the `certificate-authority-data` field corresponding to your cluster.
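    Assuming the target cluster is the current context, you can also extract this field with a command like the one below. Note that `kubectl config view` redacts certificate data unless `--raw` is used, and the value it prints is base64-encoded.

    ```bash
    # Print the base64-encoded certificate-authority-data of the current context
    kubectl config view --raw --minify -o jsonpath='{.clusters[0].cluster.certificate-authority-data}'
    ```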
9. Specify the Namespace created in the cluster preparation instructions, which is `tower-nf` by default.
10. Specify the Head service account created in the cluster preparation instructions, which is `tower-launcher-sa` by default.
11. Specify the Storage claim created in the cluster preparation instructions, which serves as a scratch filesystem for Nextflow pipelines. The storage claim is called `tower-scratch` in each of the provided examples.
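    If you are unsure of the claim name in your cluster, listing the claims in the namespace (assuming the default `tower-nf`) will show it:

    ```bash
    # List persistent volume claims visible in the Tower launcher namespace
    kubectl get pvc -n tower-nf
    ```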
12. Apply Resource labels to the cloud resources consumed by this compute environment. Workspace default resource labels are prefilled.
13. Expand Staging options to include optional pre- or post-run Bash scripts that execute before or after the Nextflow pipeline execution in your environment.
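    For example, a pre-run script might export environment variables or load modules needed by your pipeline. A hypothetical sketch, with placeholder values:

    ```bash
    # Hypothetical pre-run script: values are placeholders, adapt to your environment
    export NXF_OPTS='-Xms500M -Xmx2G'   # JVM memory settings for the Nextflow head job
    echo "Launching from $(hostname) at $(date)"
    ```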
14. You can use the Environment variables option to specify custom environment variables for the Head job and/or Compute jobs.
15. Configure any advanced options described below, as needed.
16. Select Create to finalize the compute environment setup.

Jump to the documentation for launching pipelines.
Advanced options
- The Storage mount path is the file system path where the Storage claim is mounted (default: `/scratch`).
- The Work directory is the file system path used as a working directory by Nextflow pipelines. It must be the storage mount path (default) or a subdirectory of it.
- The Compute service account is the service account used by Nextflow to submit tasks (default: the `default` account in the given namespace).
- The Pod cleanup policy determines when to delete terminated pods.
- You can use Custom head pod specs to provide custom options for the Nextflow workflow pod (`nodeSelector`, `affinity`, etc.). For example:

  ```yaml
  spec:
    nodeSelector:
      disktype: ssd
  ```
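  A comparable sketch using `affinity` instead of `nodeSelector`; the label key and value are hypothetical:

  ```yaml
  spec:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: disktype
                  operator: In
                  values:
                    - ssd
  ```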
- You can use Custom service pod specs to provide custom options for the compute environment pod. See above for an example.
- You can use Head Job CPUs and Head Job memory to specify the hardware resources allocated to the Nextflow workflow pod.