Workflow Controller Security¶
Security has three parts: workflow controller permissions, workflow pod permissions, and Argo Server security.
The controller has permissions (granted via Kubernetes RBAC and its ConfigMap) over either all namespaces (cluster-scope install) or a single managed namespace (namespace-install), notably to:
- List/get/update workflows, and cron-workflows.
- Create/get/delete pods, PVCs, and PDBs.
- List/get workflow templates, config maps, service accounts, and secrets.
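As a sketch, a namespace-install Role covering permissions like these might look as follows (resource names and verbs here are illustrative; consult the install manifests for the authoritative list):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: workflow-controller   # illustrative name
  namespace: argo
rules:
  - apiGroups: [argoproj.io]
    resources: [workflows, cronworkflows]
    verbs: [list, get, update]
  - apiGroups: [""]
    resources: [pods, persistentvolumeclaims]
    verbs: [create, get, delete]
  - apiGroups: [policy]
    resources: [poddisruptionbudgets]
    verbs: [create, get, delete]
  - apiGroups: [""]
    resources: [configmaps, serviceaccounts, secrets]
    verbs: [list, get]
```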
Users minimally need permission to create and read workflows. The controller then creates workflow pods (config maps, etc.) on behalf of users, even if a user does not have permission to create those resources themselves. The controller only creates workflow pods in the workflow's namespace.
A way to think of this is that, if the user has permission to create a workflow in a namespace, then it is OK to create pods or anything else for them in that namespace.
If the user only has permission to create workflows, they will typically be unable to configure other necessary resources, such as config maps, or view the outcome of their workflow. This is useful when the user is a service.
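A minimal user Role granting only workflow access might look like the following sketch (the Role name and namespace are hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: workflow-submitter   # hypothetical name
  namespace: my-namespace    # hypothetical namespace
rules:
  - apiGroups: [argoproj.io]
    resources: [workflows]
    verbs: [create, get, list, watch]
```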
If you allow users to create workflows in the controller's namespace (typically `argo`), it may be possible for users to modify the controller itself. In a namespace-install, the managed namespace should therefore not be the controller's namespace.
You can typically further restrict users to only submitting workflows from templates using the workflow restrictions feature.
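For example, workflow restrictions can be configured in the controller's ConfigMap; the sketch below assumes the standard `workflow-controller-configmap` name and requires submitted workflows to reference a template:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: argo
data:
  workflowRestrictions: |
    templateReferencing: Secure   # or "Strict"
```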
Workflow Pod Permissions¶
Workflow pods run using either:
- The default service account of their namespace, or
- The service account declared in the workflow spec.

There is no restriction on which service account in a namespace may be used.
This service account typically needs the following permissions:
- Get/watch/patch pods.
- Get/watch pod logs.
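The permissions above can be expressed as a Role like the following sketch (the Role name is illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: workflow-role   # illustrative name
rules:
  - apiGroups: [""]
    resources: [pods]
    verbs: [get, watch, patch]
  - apiGroups: [""]
    resources: [pods/log]
    verbs: [get, watch]
```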
Different service accounts should be used if a workflow pod needs to have elevated permissions, e.g. to create other resources.
The main container will have the service account token mounted, allowing the main container to patch pods (among other permissions). Set `automountServiceAccountToken` to `false` to prevent this. See fields.
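A minimal sketch of a workflow that disables token mounting in its pods; the executor service account name here is an assumption for illustration:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: no-token-
spec:
  automountServiceAccountToken: false
  executor:
    serviceAccountName: argo-executor   # hypothetical; the executor still needs a token
  entrypoint: main
  templates:
    - name: main
      container:
        image: alpine:3
        command: [echo, hello]
```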
By default, workflow pods run as `root`. To further secure workflow pods, set the workflow pod security context.
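A sketch of a workflow-level security context running all pods as a non-root user (the UID is an arbitrary example):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: secured-
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 8737   # any non-root UID
  entrypoint: main
  templates:
    - name: main
      container:
        image: alpine:3
        command: [echo, hello]
```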
You should configure the controller with the correct workflow executor for your trade-off between security and scalability.
These settings can be set by default using workflow defaults.
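For example, the security settings above could be applied to all workflows via the controller's ConfigMap; this sketch assumes the standard `workflow-controller-configmap` name:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: argo
data:
  workflowDefaults: |
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 8737   # any non-root UID
      automountServiceAccountToken: false
```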
Argo Server Security¶
Argo Server implements security in three layers.
Firstly, you should enable transport layer security to ensure your data cannot be read in transit.
Secondly, you should enable an authentication mode to ensure that you do not run workflows from unknown users.
Finally, you should configure the `argo-server` role and role binding with the correct permissions. You can achieve this by configuring the `argo-server` role, for example with only read access (i.e. only `get`/`list`/`watch` verbs).
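A read-only `argo-server` role might be sketched as follows (the resource list is illustrative; extend it to match the features you enable):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argo-server
  namespace: argo
rules:
  - apiGroups: [argoproj.io]
    resources: [workflows, workflowtemplates, cronworkflows]
    verbs: [get, list, watch]
```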
Network Security¶
Argo Workflows requires various levels of network access depending on its configuration and the features enabled. The following describes the different workflow components and their network access needs, to help you configure the `argo` namespace in a secure manner (e.g. with `NetworkPolicies`).
The Argo Server is commonly exposed to end-users to provide a user interface for visualizing and managing workflows. It must also be exposed if you use webhooks to trigger workflows. Both of these use cases require the `argo-server` Service to be exposed for ingress traffic (e.g. with an Ingress object or load balancer). Note that the Argo UI can also be accessed by running the server locally (i.e. `argo server`) using local kubeconfig credentials, and visiting the UI at https://localhost:2746.
The Argo Server additionally has a feature to allow downloading of artifacts through the user interface. This feature requires the `argo-server` to be given egress access to the underlying artifact provider (e.g. S3, GCS, MinIO, or Artifactory) in order to download and stream artifacts.
The workflow-controller Deployment exposes a Prometheus metrics endpoint (`workflow-controller-metrics:9090`) so that a Prometheus server can periodically scrape controller-level metrics. Since Prometheus typically runs in a separate namespace, the `argo` namespace should be configured to allow cross-namespace ingress access to the `workflow-controller-metrics` Service.
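A sketch of a NetworkPolicy allowing that scrape; the pod labels and the Prometheus namespace name are assumptions you should adjust to your install:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-prometheus-scrape
  namespace: argo
spec:
  podSelector:
    matchLabels:
      app: workflow-controller   # assumed label; match your install's labels
  policyTypes: [Ingress]
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: monitoring   # assumed Prometheus namespace
      ports:
        - protocol: TCP
          port: 9090
```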
A persistent store can be configured for either archiving or offloading workflows. If either of these features is enabled, both the workflow-controller and argo-server Deployments will need egress network access to the external database used for archiving/offloading.
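That egress could be permitted with a policy like the following sketch; the database CIDR and port are placeholders for your environment:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-db-egress
  namespace: argo
spec:
  podSelector: {}   # simplification: applies to all pods in the namespace
  policyTypes: [Egress]
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24   # assumed database subnet
      ports:
        - protocol: TCP
          port: 5432            # PostgreSQL; use 3306 for MySQL
```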