Integrate Self-Managed Enterprise Edition with HashiCorp Vault
Overview
This guide explains how to configure a self-managed Harness platform to source Harness service secrets directly from HashiCorp Vault, instead of storing them as Kubernetes Secrets.
Key benefits
- Eliminate Kubernetes Secrets: Secrets are never stored as Kubernetes Secrets, avoiding the risk of exposure through commands such as kubectl get secrets.
- Fetch secrets at runtime: Secrets are retrieved from Vault when pods start and are made available to application containers before execution.
- Centralize secret management: Continue using your existing Vault instance, policies, and workflows without duplicating secrets in Kubernetes.
How the integration works
The integration operates in two stages:
- One-time setup: You run a command-line tool that reads your Harness Helm chart and automatically creates the required secrets in your Vault instance.
- Runtime behavior: When Harness pods start, an init container authenticates with Vault, fetches the secrets required by that pod, and makes them available to the application container before it starts.
Supported modules
- CI
- CD
- Chaos
Prerequisites
Before you start, make sure you have the following in place:
- A running HashiCorp Vault instance, either inside the Kubernetes cluster or on an external server with network access to the Kubernetes API
- A Vault token with the necessary permissions to create and read secrets
- The Harness Helm chart and its associated override values file
Configure HashiCorp Vault
Before generating secrets, you need to configure HashiCorp Vault with the required secret engine, policies, and authentication method.
Step 1. Enable the Key-Value Version 2 secrets engine
This setup also works with the KV v1 secrets engine. If KV v1 is already enabled at the desired path, no additional changes are required.
Enable the KV v2 secrets engine at the harness-engine/ path by running the following command:
vault secrets enable -path=harness-engine kv-v2
Step 2. Create a generator policy and token
The secrets generator requires a Vault token with permission to write secrets. Start by creating a policy file named harness-generator-policy.hcl:
path "harness-engine/data/harness/*" {
  capabilities = ["create", "update", "read"]
}
path "harness-engine/harness/*" {
  capabilities = ["create", "update", "read"]
}
path "harness-engine/metadata/harness/*" {
  capabilities = ["list", "read"]
}
path "sys/mounts" {
  capabilities = ["read"]
}
path "sys/mounts/harness-engine" {
  capabilities = ["create", "read", "update"]
}
Apply the policy and generate a token:
vault policy write harness-generator harness-generator-policy.hcl
vault token create -policy=harness-generator -ttl=24h
Store this token securely. You will need it later when running the generator tool.
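If you script this step, the token can be captured from the CLI's JSON output. The following is a minimal sketch; the JSON file is simulated locally so the example is self-contained, and `jq` is the more robust choice if it is available:

```shell
# In a real setup you would produce this file with:
#   vault token create -policy=harness-generator -ttl=24h -format=json > /tmp/token.json
# Here we simulate the relevant part of that output.
cat > /tmp/token.json <<'EOF'
{"auth": {"client_token": "hvs.example-token", "policies": ["harness-generator"]}}
EOF

# Extract the client token (sed keeps the example dependency-free; jq would be: jq -r .auth.client_token).
VAULT_TOKEN=$(sed -n 's/.*"client_token": "\([^"]*\)".*/\1/p' /tmp/token.json)
echo "$VAULT_TOKEN"
```

Avoid writing the token to shell history or long-lived files; the 24-hour TTL limits exposure if it leaks.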
Step 3. Configure Kubernetes authentication
For production environments, configure Kubernetes authentication so that Harness pods can securely authenticate with Vault at runtime.
3.1 Enable Kubernetes authentication
vault auth enable kubernetes
3.2 Configure the Kubernetes auth method
Choose the configuration based on where your Vault instance runs.
Option A: Vault runs inside the Kubernetes cluster
When Vault runs as a pod in the same cluster, it auto-discovers the Kubernetes CA certificate and service account token from the pod's filesystem.
vault write auth/kubernetes/config \
kubernetes_host="https://kubernetes.default.svc:443"
Option B: Vault runs outside the Kubernetes cluster
When Vault runs on an external VM or server, set disable_local_ca_jwt=true so that Vault uses each client's service account token for TokenReview validation.
vault write auth/kubernetes/config \
kubernetes_host="https://<kubernetes-api-endpoint>:443" \
disable_local_ca_jwt=true
Replace <kubernetes-api-endpoint> with your cluster's API server address.
Ensure the Vault server can reach the Kubernetes API endpoint on port 443. For GKE clusters with authorized networks enabled, add the Vault server's IP to the authorized networks list. For EKS or AKS, verify that security groups or network security rules allow traffic from the Vault server to the API server on port 443.
If your cluster API server does not have publicly trusted TLS certificates (for example, self-managed clusters using kubeadm, k3s, or RKE), provide the cluster CA certificate by setting kubernetes_ca_cert in the Vault Kubernetes auth config.
Since disable_local_ca_jwt=true causes Vault to use each client's own service account token for TokenReview, all Harness service accounts that authenticate with Vault must have the system:auth-delegator ClusterRole:
# vault-token-reviewer-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vault-token-reviewer-binding-harness
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: harness-default
    namespace: <harness-namespace>
  - kind: ServiceAccount
    name: harness-looker
    namespace: <harness-namespace>
  - kind: ServiceAccount
    name: harness-manager
    namespace: <harness-namespace>
  - kind: ServiceAccount
    name: harness-platform-service
    namespace: <harness-namespace>
  - kind: ServiceAccount
    name: harness-queue-service
    namespace: <harness-namespace>
  - kind: ServiceAccount
    name: harness-serviceaccount
    namespace: <harness-namespace>
  - kind: ServiceAccount
    name: ng-manager
    namespace: <harness-namespace>
  - kind: ServiceAccount
    name: template-service
    namespace: <harness-namespace>
kubectl apply -f vault-token-reviewer-binding.yaml
Add service accounts for optional modules (such as Chaos) as listed in step 3.4.
3.3 Create a runtime policy
Create a read-only policy file named harness-runtime-policy.hcl. This policy allows Harness pods to read secrets from Vault at runtime.
path "harness-engine/data/harness/*" {
  capabilities = ["read"]
}
path "harness-engine/harness/*" {
  capabilities = ["read"]
}
path "harness-engine/metadata/harness/*" {
  capabilities = ["list", "read"]
}
# Database secrets engine (for DB dynamic credentials)
path "database/creds/*" {
  capabilities = ["read"]
}
Apply the policy:
vault policy write harness-runtime harness-runtime-policy.hcl
3.4 Create a Kubernetes auth role for Harness
Create a Kubernetes auth role that allows Harness pods running in the harness namespace to authenticate with Vault and read secrets during startup:
vault write auth/kubernetes/role/harness \
bound_service_account_names=harness-default \
bound_service_account_namespaces=<harness-namespace> \
policies=harness-runtime \
ttl=12h
Add the following Kubernetes service accounts to the role based on the modules you are installing.
CI/CD uses the harness-default service account.
Platform module (mandatory):
- harness-looker
- harness-manager
- harness-platform-service
- harness-queue-service
- harness-serviceaccount
- ng-manager
- template-service
Chaos module:
- chaos-linux-ifc-sa
- chaos-linux-ifs-sa
- chaos-machine-ifc-sa
- chaos-machine-ifs-sa
Set up Vault-backed secrets for Harness SMP
In the previous steps, you configured HashiCorp Vault with the required secrets engine, policies, and authentication methods. With Vault now ready, you can generate the secrets required by Harness SMP and configure Harness to retrieve them directly from Vault at runtime.
This section walks you through generating those secrets and updating your Harness configuration accordingly.
Step 1: Download the secrets generator tool
First, fetch the latest version:
export VSM_VERSION=$(curl -s https://app.harness.io/public/shared/vault-secrets-generator/versions.txt | tail -1)
echo "Latest version: $VSM_VERSION"
To see all available versions:
curl -s https://app.harness.io/public/shared/vault-secrets-generator/versions.txt
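The `tail -1` pattern above assumes versions.txt lists releases oldest-first, so the last line is the newest. A self-contained sketch with a simulated file (the version numbers are illustrative, not real releases):

```shell
# Simulated versions.txt; the real file is fetched from the Harness URL above.
printf '0.34.0\n0.35.0\n0.36.0\n' > /tmp/versions.txt

# tail -1 selects the last line, which is assumed to be the latest release.
VSM_VERSION=$(tail -1 /tmp/versions.txt)
echo "Latest version: $VSM_VERSION"
```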
Then download the binary that matches your operating system and architecture:
Linux (AMD64)
curl -L -o vaultSecretGenerator \
https://app.harness.io/public/shared/vault-secrets-generator/${VSM_VERSION}/vaultSecretGenerator-linux-amd64
chmod +x vaultSecretGenerator
Linux (ARM64)
curl -L -o vaultSecretGenerator \
https://app.harness.io/public/shared/vault-secrets-generator/${VSM_VERSION}/vaultSecretGenerator-linux-arm64
chmod +x vaultSecretGenerator
macOS (AMD64)
curl -L -o vaultSecretGenerator \
https://app.harness.io/public/shared/vault-secrets-generator/${VSM_VERSION}/vaultSecretGenerator-darwin-amd64
chmod +x vaultSecretGenerator
macOS (ARM64 / Apple Silicon)
curl -L -o vaultSecretGenerator \
https://app.harness.io/public/shared/vault-secrets-generator/${VSM_VERSION}/vaultSecretGenerator-darwin-arm64
chmod +x vaultSecretGenerator
Windows (AMD64)
curl -L -o vaultSecretGenerator \
https://app.harness.io/public/shared/vault-secrets-generator/${VSM_VERSION}/vaultSecretGenerator-windows-amd64
Windows (ARM64)
curl -L -o vaultSecretGenerator \
https://app.harness.io/public/shared/vault-secrets-generator/${VSM_VERSION}/vaultSecretGenerator-windows-arm64
Step 2: Prepare your Vault environment
Before proceeding, verify that the following Vault configuration steps from the prerequisites are complete:
- A generator token with write permissions has been created
- Runtime authentication with read permission is configured (Kubernetes authentication is recommended)
Step 3: Configure the Vault connection for the generator
The secrets generator reads Vault connection details from environment variables. Set the following environment variables:
export VAULT_ADDR=https://<vault-address>:8200
export VAULT_TOKEN=<vault-token-with-generator-policy>
export VAULT_ENGINE=harness-engine
export VAULT_BASE_PATH=harness
Step 4: Generate Harness secrets in Vault
Export the generator token and run the secrets generator:
./vaultSecretGenerator generate harness /path/to/harness-helm-chart -f /path/to/override.yaml
This command reads the configuration from override.yaml, scans the Harness Helm chart for required secrets (such as database credentials, API keys, and certificates), generates secure secret values, and uploads the secrets to Vault under the harness-engine/harness/* path.
Step 5: Add required secrets to Vault
Before proceeding, manually add the following secrets to your Vault instance under the appropriate paths:
- Add license-related secrets
| Vault Path | Description |
|---|---|
| <engine>/<basePath>/looker/env-secrets/LICENSE_FILE | Looker license file |
| <engine>/<basePath>/looker/env-secrets/LICENSE_KEY | Looker license key |
| <engine>/<basePath>/harness-manager/env-secrets/LICENSE_INFO | CG license key |
| <engine>/<basePath>/ng-manager/env-secrets/SMP_LICENSE | NG license key |
- Add log service storage credentials
| Vault Path | Description |
|---|---|
| <engine>/<basePath>/log-service/env-secrets/LOG_SERVICE_S3_ACCESS_KEY_ID | Access key for S3-compatible storage |
| <engine>/<basePath>/log-service/env-secrets/LOG_SERVICE_S3_SECRET_ACCESS_KEY | Secret access key for S3-compatible storage |
These credentials apply to both AWS S3 and external S3-compatible storage such as MinIO.
- Verify paths and permissions
- Ensure all secret paths match the configured engine and base path.
- Confirm Vault policies allow the Harness services to read these secrets.
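These secrets can be added with vault kv put. The following is a hypothetical invocation for the log service credentials, assuming the engine is harness-engine and the base path is harness; replace the placeholders with your real values before running it:

```shell
vault kv put harness-engine/harness/log-service/env-secrets \
  LOG_SERVICE_S3_ACCESS_KEY_ID="<access-key-id>" \
  LOG_SERVICE_S3_SECRET_ACCESS_KEY="<secret-access-key>"
```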
Step 6: Configure Vault authentication for Harness
Update override.yaml to match the authentication method to be used by the Harness pods.
Option 1: Kubernetes authentication (recommended)
global:
  externalSecretsLoader:
    enabled: true
    provider: vault
    vault:
      address: "https://<vault-address>:8200"
      engine: "harness-engine"
      basePath: "harness"
      auth:
        method: "kubernetes"
        role: "harness" # Role created in Vault
Option 2: Token authentication (not recommended for production)
This method stores your Vault token in plaintext in a ConfigMap, so it is not recommended for production use.
global:
  externalSecretsLoader:
    enabled: true
    provider: vault
    vault:
      address: "http://vault.vault.svc.cluster.local:8200"
      engine: "harness-engine"
      basePath: "harness"
      auth:
        method: "token"
        token: "hvs.your-token-here" # Plaintext token
Step 7: Configure database secrets
1. Database credentials
Databases (MongoDB, Redis, PostgreSQL, and so on) must be managed externally, either self-hosted or through a managed provider, because the internal databases bundled with Harness SMP require their secrets to be created as Kubernetes Secrets.
1.1 Static credentials (recommended for external database providers)
For external databases, you can use static credentials stored in Vault's KV secrets engine.
- Setup: Manually create and manage the database credentials in your Vault instance. Example paths:

| Username path | Password path |
|---|---|
| <engine>/<basePath>/mongo/username | <engine>/<basePath>/mongo/password |
| <engine>/<basePath>/redis/username | <engine>/<basePath>/redis/password |
| <engine>/<basePath>/postgres/username | <engine>/<basePath>/postgres/password |

- Configuration: Explicitly override the Vault path in values.yaml to point to the location where the static credentials are stored. Example configuration:
global:
  externalSecretsLoader:
    databases:
      mongo:
        useDatabaseSecretsEngine: false
      redis:
        useDatabaseSecretsEngine: false
      postgres:
        useDatabaseSecretsEngine: false
      timescaledb:
        useDatabaseSecretsEngine: false
        overridePath: "harness/postgres"
- Behavior: At runtime, the secrets loader fetches credentials from the user-provided Vault KV path.
1.2 Dynamic credentials (recommended for self-hosted external databases)
Harness also supports Vault’s Database Secrets Engine for external databases, which generates short-lived, dynamic credentials on demand. To enable dynamic credentials:
- Configure the Database Secrets Engine and role in Vault.
- Update override.yaml to instruct the loader to use the Database Secrets Engine instead of static KV secrets.
Example configuration:
global:
  externalSecretsLoader:
    databases:
      # MongoDB (external)
      mongo:
        useDatabaseSecretsEngine: "true"
        databaseRole: "harness-mongo-role"
For internal databases managed by Harness, database credentials are automatically generated and managed by Harness. No manual Vault configuration or overrides are required.
2. Database SSL certificates (optional)
If your databases require SSL/TLS, the generator can also help manage the necessary certificates. Before running the generator, add the following section to your override.yaml:
global:
  # Database SSL configuration (optional)
  databases:
    postgres:
      ssl:
        enabled: true
        caFileKey: "caFile" # Key name in Vault for CA certificate
        trustStoreKey: "trustStore" # Key name in Vault for TrustStore
        trustStorePasswordKey: "trustStorePassword"
Certificate handling behavior
- During secret generation, the tool checks harness-engine/harness/{dbType} for the required certificate keys. If any are missing, it provides instructions for uploading the certificates to Vault.
- At runtime, the loader retrieves the certificates from Vault and mounts them into the application container.
Step 8: Deploy Harness
Deploy or upgrade Harness using the updated override file:
helm install harness harness/harness -f override.yaml -n harness
# Upgrade existing installation
helm upgrade harness harness/harness -f override.yaml -n harness
Step 9: Verify the integration
Verify that all pods have successfully transitioned to the Running state:
kubectl get pods -n harness
Updating secrets and rotation
Secrets are injected into pods only at startup through an init container. As a result, changes made to secrets in Vault are not automatically reflected in running pods. To apply updated values, you must restart the affected pods. This ensures that the pods use the latest values from Vault.
1. Manual secret updates (static secrets)
If you manually update a secret in Vault, for example, when rotating an API key or updating a certificate, you must restart the relevant pods so they can retrieve the new values.
Step 1: Update the secret in Vault
vault kv put harness-engine/harness/platform-service/env-secrets API_KEY="new-value"
Step 2: Restart the pods
Trigger a rollout restart so the init container runs again and fetches the updated secrets. Restart a specific deployment:
kubectl rollout restart deployment platform-service -n harness
Restart all deployments in the namespace (for example, if a shared secret was updated):
kubectl rollout restart deployment -n harness
2. Dynamic database credentials (TTL expiry)
When using dynamic database credentials through the Vault database secrets engine, credentials are issued with a defined time-to-live (TTL).
- Behavior: Vault generates database credentials that are valid for the configured TTL (for example, 24 hours).
- Expiry: Once the TTL expires, the credentials become invalid and the application loses database connectivity.
- Rotation: To rotate credentials, the pod must be restarted before the TTL expires so that new credentials can be fetched.
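Because restarts must happen before the TTL lapses, one approach is to schedule them at a fraction of the TTL. A minimal sketch of the arithmetic; the 80% margin is an illustrative choice, not a Harness recommendation:

```shell
TTL_HOURS=24                               # TTL configured on the Vault database role
RESTART_HOURS=$(( TTL_HOURS * 80 / 100 ))  # restart at ~80% of the TTL to leave headroom
echo "Restart pods at least every ${RESTART_HOURS}h (TTL is ${TTL_HOURS}h)"
```

The computed interval could then drive, for example, a scheduled `kubectl rollout restart` of the affected deployments.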
Any updates to these secrets must be managed by the customer. Harness does not currently support automatic restarts or secret change watchers.
Advanced configuration
Generator tool options
The vaultSecretGenerator tool supports the following command format:
./vaultSecretGenerator generate [OPTIONS] <release-name> <chart-path> -f <values-file>
Required arguments
- <release-name>: Name of your Harness release (for example, harness)
- <chart-path>: Path to the Harness Helm chart directory
- -f <values-file>: Override values file (can be specified multiple times)
Environment variables
VAULT_TOKEN: Vault token with write permissions (required)
Optional flags
- --input-json <file>: Path to a JSON file containing pre-generated secrets. When specified, secret generation is skipped and the existing secrets are uploaded to Vault.
- --output-json <file>: Saves generated secrets to a JSON file for backup or migration purposes.

Vault connection details, including the address, engine, base path, and authentication method, are read from override.yaml under global.externalSecretsLoader.vault, not from command-line flags.
Examples
Basic usage:
export VAULT_TOKEN="hvs.your-token-here"
./vaultSecretGenerator generate harness ./harness-chart -f override.yaml
Using multiple override files:
export VAULT_TOKEN="hvs.your-token-here"
./vaultSecretGenerator generate harness ./harness-chart \
-f base-values.yaml \
-f override.yaml \
-f secrets-config.yaml
How secrets are organized in Vault
The Harness secrets generator stores secrets in Vault using a predictable path structure, making it easier to manage, audit, and troubleshoot secrets across services.
| Secret type | Vault path | Example | Description |
|---|---|---|---|
| Environment variables | harness-engine/harness/{service}/env-secrets | harness-engine/harness/platform-service/env-secrets | Key-value pairs injected into the container as environment variables |
| File-based secrets | harness-engine/harness/{service}/file-secrets | harness-engine/harness/gateway/file-secrets | File-based secrets such as SSL certificates and JKS keystores |
| Database credentials | harness-engine/harness/{db-name} | harness-engine/harness/timescaledb | Static database credentials (when not using dynamic secrets) |
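The paths in the table are composed from the engine and base path configured earlier. A quick sketch of the layout, assuming the defaults used throughout this guide and a hypothetical service name:

```shell
VAULT_ENGINE="harness-engine"
VAULT_BASE_PATH="harness"
service="platform-service"   # example service; each Harness service gets its own subtree

# Environment-variable secrets for the service:
env_path="${VAULT_ENGINE}/${VAULT_BASE_PATH}/${service}/env-secrets"
# File-based secrets (certificates, keystores) for the service:
file_path="${VAULT_ENGINE}/${VAULT_BASE_PATH}/${service}/file-secrets"

echo "$env_path"
echo "$file_path"
```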
Configuration options
The following example shows all supported configuration options for using Vault-backed secrets with Harness. This configuration is required to load secrets from Vault at runtime.
You can configure the following options in your override.yaml file:
global:
  externalSecretsLoader:
    enabled: true
    provider: vault
    vault:
      address: "http://vault.vault.svc.cluster.local:8200"
      engine: "secret"
      basePath: "harness"
      auth:
        method: "kubernetes" # Supported values: kubernetes, token, approle
        role: "harness" # Required for Kubernetes authentication
        # token: "hvs.xxx" # Token authentication (development only)
    # Database configuration
    databases:
      timescaledb:
        useDatabaseSecretsEngine: "true"
        databaseRole: "harness-timescaledb-role"
        engine: ""
        overridePath: ""
      mongo:
        useDatabaseSecretsEngine: "true"
        databaseRole: "harness-mongo-role"
        engine: ""
        overridePath: ""
      redis:
        useDatabaseSecretsEngine: "true"
        databaseRole: "harness-redis-role"
        engine: ""
        overridePath: ""
How secrets are loaded
When a pod starts, an init container is responsible for retrieving secrets from Vault and making them available to the application container. The process follows these steps:
- Authenticate to Vault: The init container authenticates to Vault using the configured authentication method.
- Fetch environment variable secrets: The init container fetches environment variable secrets from harness-engine/harness/{service}/env-secrets.
- Fetch file-based secrets: The init container fetches file-based secrets from harness-engine/harness/{service}/file-secrets.
- Generate database credentials: If the database secrets engine is enabled, the init container generates database credentials from database/creds/{role}.
- Write secrets to a shared volume:
  - Environment variables are written to /opt/harness/secrets/.env.
  - File-based secrets are written to /opt/harness/secrets/files/.
- Exit successfully: The init container exits successfully, allowing the main application container to start.
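As a sketch of how an application entrypoint could consume the loader's output, assuming the .env file contains simple KEY=value lines (the path and contents below are simulated; the real file lives at /opt/harness/secrets/.env):

```shell
# Simulated copy of the loader's output file.
mkdir -p /tmp/harness-secrets
cat > /tmp/harness-secrets/.env <<'EOF'
API_KEY=example-value
LOG_LEVEL=info
EOF

set -a                        # export every variable defined while sourcing
. /tmp/harness-secrets/.env
set +a

echo "API_KEY is set to: $API_KEY"
```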
File type detection
The secrets loader automatically detects file-based secrets based on their file extensions. Detected files are written to /opt/harness/secrets/files/ with permissions set to 0600 (read/write access for the owner, no access for group or others) and owned by the root user and group. Supported file extensions include .crt, .pem, .key, .jks, and .p12.
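The extension check can be pictured as a simple filename filter. The following is a hedged sketch that mirrors only the documented behavior (the loader's actual implementation is not published, and the directory here is a local stand-in for /opt/harness/secrets/files/):

```shell
mkdir -p /tmp/harness-files

# Returns success when the name carries one of the documented file-secret extensions.
is_file_secret() {
  case "$1" in
    *.crt|*.pem|*.key|*.jks|*.p12) return 0 ;;
    *) return 1 ;;
  esac
}

for name in ca.crt API_KEY tls.key; do
  if is_file_secret "$name"; then
    touch "/tmp/harness-files/$name"
    chmod 600 "/tmp/harness-files/$name"   # owner read/write only, matching the 0600 described above
  fi
done

ls /tmp/harness-files
```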
Troubleshooting
This section covers common issues you may encounter when using Vault-backed secrets with Harness and how to resolve them.
Common init container issues
| Error or symptom | Likely cause | Solution |
|---|---|---|
| Permission denied | Vault policy or role misconfiguration | Check the secrets loader logs. Verify that the Vault policy allows read access on harness-engine/data/... and list access on harness-engine/metadata/.... Also confirm the Kubernetes auth role and service account bindings are correct. |
| Secret not found | Secret missing or incorrect Vault path | Verify that the secret exists in Vault. Re-run the secrets generator if needed. Ensure the service name matches the expected Vault path. |
| Connection refused or no such host | Network or configuration issue | Confirm that the Vault service is running and reachable. Check the vault.address value in override.yaml. Test connectivity using curl from within the cluster if possible. |
| Pod stuck in Init:0/1 | Any of the above issues | Inspect the init container logs using kubectl logs <pod> -c secrets-loader. Review pod events with kubectl describe pod <pod>. |
Frequently asked questions
1. Can I use this integration with secret managers other than Vault?
Currently, only HashiCorp Vault is supported. Support for additional secret managers may be introduced in future releases.
2. What happens if Vault is unavailable when a pod restarts?
The init container will fail and the pod will not start. Kubernetes will retry based on the configured restart policy. For production environments, ensure Vault is deployed in a highly available configuration.
3. How do I update a secret after deployment?
Update the secret in Vault and restart the affected pods so they can retrieve the new values:
vault kv put harness-engine/harness/platform-service/env-secrets API_KEY="new-value"
kubectl rollout restart deployment platform-service -n harness
4. What is the minimum Harness Helm chart version required?
The external secrets loader requires Harness Helm chart version 0.36.0 or later.