Cross-Cloud Secret Synchronization: AWS Secrets Manager and OCI Vault in a Production Multi-Cloud Setup

One of the most overlooked problems in multi-cloud environments is secrets management across providers. Teams usually solve it badly: they store the same secret in both clouds manually, forget to rotate one of them, and find out during an outage that the credentials have been out of sync for three months.

In this post I will walk through building an automated secrets synchronization pipeline between AWS Secrets Manager and OCI Vault. When a secret rotates in AWS, the pipeline detects the rotation event, retrieves the new value, and pushes it into OCI Vault automatically. Everything is built with Terraform, an AWS Lambda function, and OCI IAM. No manual steps after the initial deployment.

This is a pattern I have used in environments where the database layer runs on OCI (leveraging Oracle Database pricing and performance) while the application layer runs on AWS. Both sides need the same database credentials, and both sides need to stay in sync without human intervention.

Architecture

The flow works like this:

AWS Secrets Manager rotation event fires via EventBridge, which triggers a Lambda function. The Lambda retrieves the new secret value, authenticates to OCI using an API key stored in its own environment (not hardcoded), and calls the OCI Vault API to update the corresponding secret version. OCI Vault stores the new value and makes it available to workloads running in OCI.
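To make the contract between EventBridge and the Lambda concrete, here is the shape of the event detail the sync function expects (the OCIDs are illustrative placeholders, and `validate_detail` simply mirrors the guard the handler in Step 5 performs):

```python
# Illustrative event payload for the sync Lambda. The four detail fields are
# the ones the handler validates; the OCID values are placeholders.
sample_event = {
    "detail": {
        "aws_secret_arn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/database/password-AbCdEf",
        "oci_secret_ocid": "ocid1.vaultsecret.oc1..example",
        "oci_vault_id": "ocid1.vault.oc1..example",
        "oci_key_id": "ocid1.key.oc1..example",
    }
}

REQUIRED = ("aws_secret_arn", "oci_secret_ocid", "oci_vault_id", "oci_key_id")


def validate_detail(event: dict) -> dict:
    """Fail fast if any required field is missing, as the handler does."""
    detail = event.get("detail", {})
    missing = [f for f in REQUIRED if not detail.get(f)]
    if missing:
        raise ValueError(f"Event detail missing fields: {missing}")
    return detail
```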

Prerequisites

Before starting you need:

  • AWS account with permissions to manage Secrets Manager, Lambda, EventBridge, and IAM
  • OCI tenancy with permissions to manage Vault, Keys, and IAM policies
  • Terraform 1.5 or later
  • Python 3.11 for the Lambda function
  • An existing OCI Vault and master encryption key (the Terraform below can create them)

Step 1: OCI Vault and IAM Setup

Start with OCI. We need a Vault, a master key, and an IAM user whose API key the Lambda will use to authenticate.

hcl

# OCI Vault
resource "oci_kms_vault" "app_vault" {
  compartment_id = var.compartment_id
  display_name   = "multi-cloud-secrets-vault"
  vault_type     = "DEFAULT"
}

# Master encryption key inside the Vault
resource "oci_kms_key" "secrets_key" {
  compartment_id      = var.compartment_id
  display_name        = "secrets-master-key"
  management_endpoint = oci_kms_vault.app_vault.management_endpoint

  key_shape {
    algorithm = "AES"
    length    = 32 # length is in bytes, so this is AES-256
  }
}

# IAM user for cross-cloud access
resource "oci_identity_user" "sync_user" {
  compartment_id = var.tenancy_ocid
  name           = "aws-secrets-sync-user"
  description    = "Service user for AWS Lambda to push secrets into OCI Vault"
  email          = "sync-user@internal.example.com"
}

# API key for the sync user (generate the actual key pair separately)
resource "oci_identity_api_key" "sync_user_key" {
  user_id   = oci_identity_user.sync_user.id
  key_value = var.oci_sync_user_public_key_pem
}

# IAM group for the sync user
resource "oci_identity_group" "sync_group" {
  compartment_id = var.tenancy_ocid
  name           = "secrets-sync-group"
  description    = "Group for cross-cloud secrets sync service users"
}

resource "oci_identity_user_group_membership" "sync_membership" {
  group_id = oci_identity_group.sync_group.id
  user_id  = oci_identity_user.sync_user.id
}

# Minimal IAM policy - only what is needed, nothing more
resource "oci_identity_policy" "sync_policy" {
  compartment_id = var.compartment_id
  name           = "secrets-sync-policy"
  description    = "Allows sync user to manage secrets in the app vault only"

  statements = [
    "Allow group secrets-sync-group to manage secret-family in compartment id ${var.compartment_id} where target.vault.id = '${oci_kms_vault.app_vault.id}'",
    "Allow group secrets-sync-group to use keys in compartment id ${var.compartment_id} where target.key.id = '${oci_kms_key.secrets_key.id}'"
  ]
}

The policy scope is intentionally narrow. The sync user can only manage secrets inside this specific vault and can only use this specific key. If the AWS Lambda credentials are ever compromised, the blast radius is limited to this vault.

Step 2: Create the Initial Secret in OCI Vault

We need a secret placeholder in OCI Vault that the Lambda will update. The initial value does not matter since it will be overwritten on the first sync.

hcl

resource "oci_vault_secret" "db_password" {
  compartment_id = var.compartment_id
  vault_id       = oci_kms_vault.app_vault.id
  key_id         = oci_kms_key.secrets_key.id
  secret_name    = "prod-db-password"

  secret_content {
    content_type = "BASE64"
    content      = base64encode("initial-placeholder-value")
    name         = "v1"
    stage        = "CURRENT"
  }

  metadata = {
    source      = "aws-secrets-manager"
    aws_secret  = "prod/database/password"
    environment = "production"
  }
}

Step 3: AWS Secrets Manager and the Source Secret

On the AWS side, create the authoritative secret and enable automatic rotation.

hcl

resource "aws_secretsmanager_secret" "db_password" {
  name                    = "prod/database/password"
  description             = "Production database password - synced to OCI Vault"
  recovery_window_in_days = 7

  tags = {
    Environment   = "production"
    SyncTarget    = "oci-vault"
    OciSecretName = "prod-db-password"
  }
}

resource "aws_secretsmanager_secret_version" "db_password_v1" {
  secret_id = aws_secretsmanager_secret.db_password.id
  secret_string = jsonencode({
    username = "db_admin",
    password = var.initial_db_password,
    host     = var.db_host,
    port     = 1521,
    database = "PRODDB"
  })
}

# Rotation configuration - rotate every 30 days.
# db_rotation_lambda is the separate rotation function that actually changes
# the database password; defining it is outside the scope of this post.
resource "aws_secretsmanager_secret_rotation" "db_password_rotation" {
  secret_id           = aws_secretsmanager_secret.db_password.id
  rotation_lambda_arn = aws_lambda_function.db_rotation_lambda.arn

  rotation_rules {
    automatically_after_days = 30
  }
}

Step 4: Store OCI Credentials in AWS Secrets Manager

The Lambda needs OCI API credentials to authenticate. Store them as a secret in AWS Secrets Manager so they never appear in Lambda environment variables in plaintext.

hcl

resource "aws_secretsmanager_secret" "oci_credentials" {
  name        = "internal/oci-sync-credentials"
  description = "OCI API key credentials for secrets sync Lambda"

  tags = {
    Environment = "production"
    Purpose     = "cross-cloud-sync"
  }
}

resource "aws_secretsmanager_secret_version" "oci_credentials_v1" {
  secret_id = aws_secretsmanager_secret.oci_credentials.id
  secret_string = jsonencode({
    tenancy_ocid = var.oci_tenancy_ocid,
    user_ocid    = var.oci_sync_user_ocid,
    fingerprint  = var.oci_api_key_fingerprint,
    private_key  = var.oci_private_key_pem,
    region       = var.oci_region
  })
}

Step 5: The Lambda Function

This is the core of the pipeline. The Lambda retrieves the rotated secret from AWS Secrets Manager, loads OCI credentials from its own secrets store, and calls the OCI Vault API to create a new secret version.

python

import base64
import json
import logging
import os
from datetime import datetime, timezone

import boto3
import oci
from botocore.exceptions import ClientError

logger = logging.getLogger()
logger.setLevel(logging.INFO)


def get_oci_config():
    """Retrieve OCI credentials from AWS Secrets Manager."""
    client = boto3.client("secretsmanager", region_name=os.environ["AWS_REGION"])
    try:
        response = client.get_secret_value(
            SecretId=os.environ["OCI_CREDENTIALS_SECRET_ARN"]
        )
        creds = json.loads(response["SecretString"])
        return {
            "tenancy": creds["tenancy_ocid"],
            "user": creds["user_ocid"],
            "fingerprint": creds["fingerprint"],
            "key_content": creds["private_key"],
            "region": creds["region"],
        }
    except ClientError as e:
        logger.error(f"Failed to retrieve OCI credentials: {e}")
        raise


def get_aws_secret(secret_arn: str) -> str:
    """Retrieve the current value of an AWS secret."""
    client = boto3.client("secretsmanager", region_name=os.environ["AWS_REGION"])
    try:
        response = client.get_secret_value(SecretId=secret_arn)
        return response.get("SecretString") or base64.b64decode(
            response["SecretBinary"]
        ).decode("utf-8")
    except ClientError as e:
        logger.error(f"Failed to retrieve AWS secret {secret_arn}: {e}")
        raise


def push_to_oci_vault(
    oci_config: dict,
    vault_id: str,
    key_id: str,
    secret_ocid: str,
    secret_value: str,
):
    """Create a new version of an OCI Vault secret.

    vault_id and key_id are accepted for symmetry with the event payload;
    update_secret itself only needs the secret OCID.
    """
    vaults_client = oci.vault.VaultsClient(oci_config)
    encoded_value = base64.b64encode(secret_value.encode("utf-8")).decode("utf-8")
    update_details = oci.vault.models.UpdateSecretDetails(
        secret_content=oci.vault.models.Base64SecretContentDetails(
            content_type=oci.vault.models.SecretContentDetails.CONTENT_TYPE_BASE64,
            content=encoded_value,
            name=f"sync-{datetime.now(timezone.utc).strftime('%Y%m%d%H%M%S')}",
            stage="CURRENT",
        ),
        metadata={
            "synced_from": "aws-secrets-manager",
            "synced_at": datetime.now(timezone.utc).isoformat(),
        },
    )
    response = vaults_client.update_secret(
        secret_id=secret_ocid,
        update_secret_details=update_details,
    )
    logger.info(
        f"OCI secret updated. OCID: {secret_ocid}, "
        f"New version: {response.data.current_version_number}"
    )
    return response.data


def handler(event, context):
    """EventBridge trigger handler.

    Expects event detail to contain:
      - aws_secret_arn: ARN of the rotated AWS secret
      - oci_secret_ocid: OCID of the target OCI Vault secret
      - oci_vault_id: OCID of the target OCI Vault
      - oci_key_id: OCID of the OCI KMS key
    """
    logger.info(f"Received event: {json.dumps(event)}")
    detail = event.get("detail", {})
    aws_secret_arn = detail.get("aws_secret_arn")
    oci_secret_ocid = detail.get("oci_secret_ocid")
    oci_vault_id = detail.get("oci_vault_id")
    oci_key_id = detail.get("oci_key_id")

    if not all([aws_secret_arn, oci_secret_ocid, oci_vault_id, oci_key_id]):
        logger.error("Missing required fields in event detail")
        raise ValueError(
            "Event detail must include aws_secret_arn, oci_secret_ocid, "
            "oci_vault_id, oci_key_id"
        )

    logger.info(f"Syncing secret: {aws_secret_arn} to OCI: {oci_secret_ocid}")

    # Step 1: Get OCI credentials
    oci_config = get_oci_config()
    # Step 2: Retrieve the rotated AWS secret
    secret_value = get_aws_secret(aws_secret_arn)
    # Step 3: Push to OCI Vault
    result = push_to_oci_vault(
        oci_config=oci_config,
        vault_id=oci_vault_id,
        key_id=oci_key_id,
        secret_ocid=oci_secret_ocid,
        secret_value=secret_value,
    )

    return {
        "statusCode": 200,
        "body": {
            "message": "Secret synced successfully",
            "oci_secret_ocid": oci_secret_ocid,
            "oci_version": result.current_version_number,
        },
    }

Step 6: Lambda IAM Role and Deployment

hcl

data "aws_iam_policy_document" "lambda_assume_role" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }
  }
}

data "aws_iam_policy_document" "lambda_permissions" {
  statement {
    effect = "Allow"
    actions = [
      "secretsmanager:GetSecretValue",
      "secretsmanager:DescribeSecret"
    ]
    resources = [
      aws_secretsmanager_secret.db_password.arn,
      aws_secretsmanager_secret.oci_credentials.arn
    ]
  }

  statement {
    effect = "Allow"
    actions = [
      "logs:CreateLogGroup",
      "logs:CreateLogStream",
      "logs:PutLogEvents"
    ]
    resources = ["arn:aws:logs:*:*:*"]
  }
}

resource "aws_iam_role" "sync_lambda_role" {
  name               = "secrets-sync-lambda-role"
  assume_role_policy = data.aws_iam_policy_document.lambda_assume_role.json
}

resource "aws_iam_role_policy" "sync_lambda_policy" {
  name   = "secrets-sync-lambda-policy"
  role   = aws_iam_role.sync_lambda_role.id
  policy = data.aws_iam_policy_document.lambda_permissions.json
}

resource "aws_lambda_function" "secrets_sync" {
  filename         = "${path.module}/lambda/secrets_sync.zip"
  function_name    = "oci-secrets-sync"
  role             = aws_iam_role.sync_lambda_role.arn
  handler          = "main.handler"
  runtime          = "python3.11"
  timeout          = 60
  memory_size      = 256
  source_code_hash = filebase64sha256("${path.module}/lambda/secrets_sync.zip")

  environment {
    variables = {
      # AWS_REGION is a reserved variable that Lambda sets automatically and
      # rejects at deploy time, so only the credentials ARN is passed in.
      OCI_CREDENTIALS_SECRET_ARN = aws_secretsmanager_secret.oci_credentials.arn
    }
  }

  layers = [aws_lambda_layer_version.oci_sdk_layer.arn]
}

Bundle the OCI Python SDK as a Lambda Layer so the function does not need to package it inline:

bash

mkdir -p lambda_layer/python
pip install oci --target lambda_layer/python
cd lambda_layer && zip -r ../oci_sdk_layer.zip python/

hcl

resource "aws_lambda_layer_version" "oci_sdk_layer" {
  filename            = "${path.module}/oci_sdk_layer.zip"
  layer_name          = "oci-python-sdk"
  compatible_runtimes = ["python3.11"]
  source_code_hash    = filebase64sha256("${path.module}/oci_sdk_layer.zip")
}

Step 7: EventBridge Rule to Trigger on Rotation

hcl

resource "aws_cloudwatch_event_rule" "secret_rotation_rule" {
  name        = "detect-secret-rotation"
  description = "Fires when a Secrets Manager secret rotation completes"

  # Requires a CloudTrail trail recording management events. The pattern is
  # scoped to the database secret so that writes to other secrets (including
  # the OCI credentials secret) do not trigger the sync.
  event_pattern = jsonencode({
    source        = ["aws.secretsmanager"]
    "detail-type" = ["AWS API Call via CloudTrail"]
    detail = {
      eventSource = ["secretsmanager.amazonaws.com"]
      eventName   = ["RotateSecret", "PutSecretValue"]
      requestParameters = {
        secretId = [
          aws_secretsmanager_secret.db_password.arn,
          aws_secretsmanager_secret.db_password.name
        ]
      }
    }
  })
}

resource "aws_cloudwatch_event_target" "sync_lambda_target" {
  rule      = aws_cloudwatch_event_rule.secret_rotation_rule.name
  target_id = "SyncToOCI"
  arn       = aws_lambda_function.secrets_sync.arn

  input_transformer {
    input_paths = {
      secret_arn = "$.detail.requestParameters.secretId"
    }
    input_template = <<-EOF
      {
        "detail": {
          "aws_secret_arn": "<secret_arn>",
          "oci_secret_ocid": "${var.oci_db_password_secret_ocid}",
          "oci_vault_id": "${oci_kms_vault.app_vault.id}",
          "oci_key_id": "${oci_kms_key.secrets_key.id}"
        }
      }
    EOF
  }
}

resource "aws_lambda_permission" "allow_eventbridge" {
  statement_id  = "AllowEventBridgeInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.secrets_sync.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.secret_rotation_rule.arn
}

Step 8: Verifying the Pipeline

Manually trigger a rotation to test the full pipeline without waiting 30 days:

bash

# Force a rotation in AWS
aws secretsmanager rotate-secret \
  --secret-id prod/database/password \
  --region us-east-1

# Check Lambda execution logs
aws logs tail /aws/lambda/oci-secrets-sync --follow

# Verify the new version appeared in OCI Vault
oci vault secret get \
  --secret-id <your-oci-secret-ocid> \
  --query 'data.{name:"secret-name", version:"current-version-number", expires:"time-of-current-version-expiry"}' \
  --output table

A successful sync produces output similar to this in the Lambda logs:

INFO: Syncing secret: arn:aws:secretsmanager:us-east-1:123456789:secret:prod/database/password to OCI: ocid1.vaultsecret.oc1...
INFO: OCI secret updated. OCID: ocid1.vaultsecret.oc1..., New version: 3

Handling Failures and Drift

The pipeline as built is synchronous and event-driven, which means if the Lambda fails, the OCI secret does not get updated. Add a dead-letter queue and a reconciliation function that runs on a schedule to catch any drift.

hcl

resource "aws_sqs_queue" "sync_dlq" {
  name                      = "secrets-sync-dlq"
  message_retention_seconds = 86400
}

resource "aws_lambda_function_event_invoke_config" "sync_retry" {
  function_name                = aws_lambda_function.secrets_sync.function_name
  maximum_retry_attempts       = 2
  maximum_event_age_in_seconds = 300

  destination_config {
    on_failure {
      destination = aws_sqs_queue.sync_dlq.arn
    }
  }
}

For reconciliation, a scheduled Lambda that runs every hour compares the LastRotatedDate on the AWS secret against the synced_at metadata tag on the OCI secret. If they differ by more than five minutes, it triggers a forced sync.
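A minimal sketch of that hourly check (the `reconcile` function, the mapping shape, and the pre-built `vaults_client`/`lambda_client` arguments are illustrative; the drift comparison itself is a pure function):

```python
from datetime import datetime, timedelta, timezone

DRIFT_TOLERANCE = timedelta(minutes=5)


def needs_resync(aws_last_rotated: datetime, oci_synced_at: datetime) -> bool:
    """True when the OCI copy lags the last AWS rotation beyond the tolerance."""
    return (aws_last_rotated - oci_synced_at) > DRIFT_TOLERANCE


def reconcile(mapping, vaults_client, lambda_client):
    """Compare each AWS secret's LastRotatedDate with the synced_at metadata
    on its OCI counterpart, and re-invoke the sync Lambda on drift.

    mapping: list of dicts carrying aws_secret_arn, oci_secret_ocid,
    oci_vault_id, and oci_key_id (the fields the sync handler expects).
    """
    import json

    import boto3  # provided by the Lambda runtime

    sm = boto3.client("secretsmanager")
    for pair in mapping:
        desc = sm.describe_secret(SecretId=pair["aws_secret_arn"])
        rotated = desc.get("LastRotatedDate") or desc["CreatedDate"]
        secret = vaults_client.get_secret(secret_id=pair["oci_secret_ocid"]).data
        synced_at = datetime.fromisoformat(
            (secret.metadata or {}).get("synced_at", "1970-01-01T00:00:00+00:00")
        )
        if needs_resync(rotated, synced_at):
            lambda_client.invoke(
                FunctionName="oci-secrets-sync",
                InvocationType="Event",
                Payload=json.dumps({"detail": pair}).encode(),
            )
```

Wire this to an EventBridge schedule rule (`rate(1 hour)`) the same way the rotation rule targets the sync Lambda.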

Security Considerations

A few things to keep in mind when running this in production.

The OCI private key stored in AWS Secrets Manager should be rotated periodically, just like any other credential. Add it to your rotation schedule.

Enable CloudTrail in AWS and OCI Audit logging so every access to both secrets stores is recorded. If something is off with the sync, the audit logs tell you exactly which principal made the change and when.

Use VPC endpoints for Secrets Manager in AWS so the Lambda traffic never crosses the public internet when retrieving credentials.

On the OCI side, enable Vault audit logging to the OCI Logging service so every secret version write is captured.

Wrapping Up

This pipeline solves a real operational problem without requiring a third-party secrets broker. AWS Secrets Manager stays the authoritative source. OCI Vault stays current automatically. The only manual step is the initial deployment.

The pattern extends to other cross-cloud credential types. Database connection strings, API tokens, TLS certificates — any secret that needs to exist on both clouds can follow the same EventBridge to Lambda to OCI Vault flow. Extend the Lambda to support a mapping table of AWS secret ARNs to OCI secret OCIDs and one function handles your entire secrets estate across both providers.
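That mapping-table extension can be as simple as a dictionary keyed by AWS secret name (everything below is hypothetical: the names, the placeholder OCIDs, and the idea of matching the name inside the rotated ARN):

```python
# Hypothetical mapping from AWS secret name to its OCI target. In production
# this table could live in DynamoDB or ship as JSON alongside the Lambda.
SECRET_MAP = {
    "prod/database/password": {
        "oci_secret_ocid": "ocid1.vaultsecret.oc1..exampledb",
        "oci_vault_id": "ocid1.vault.oc1..examplevault",
        "oci_key_id": "ocid1.key.oc1..examplekey",
    },
    "prod/api/token": {
        "oci_secret_ocid": "ocid1.vaultsecret.oc1..exampletoken",
        "oci_vault_id": "ocid1.vault.oc1..examplevault",
        "oci_key_id": "ocid1.key.oc1..examplekey",
    },
}


def resolve_target(aws_secret_arn: str) -> dict:
    """Find the OCI target whose secret name appears in the rotated ARN."""
    for name, target in SECRET_MAP.items():
        if name in aws_secret_arn:
            return target
    raise KeyError(f"No OCI mapping for {aws_secret_arn}")
```

With this lookup in place, the handler no longer needs the OCI fields in the event detail; it resolves them from the ARN the rotation event carries.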

Regards,
Osama

Building a Multi-Cloud Secrets Management Strategy with HashiCorp Vault

Let me ask you something. Where are your database passwords right now? Your API keys? Your TLS certificates?

If you’re like most teams I’ve worked with, the honest answer is “scattered everywhere.” Some are in environment variables. Some are in Kubernetes secrets (base64 encoded, which isn’t encryption by the way). A few are probably still hardcoded in configuration files that someone committed to Git three years ago.

I’m not judging. We’ve all been there. But as your infrastructure grows across multiple clouds, this approach becomes a ticking time bomb. One leaked credential can compromise everything.

In this article, I’ll show you how to build a centralized secrets management strategy using HashiCorp Vault. We’ll deploy it properly, integrate it with AWS, Azure, and GCP, and set up dynamic secrets that rotate automatically. No more shared passwords. No more “who has access to what” mysteries.

Why Vault? Why Now?

Before we dive into implementation, let me explain why I recommend Vault over cloud-native solutions like AWS Secrets Manager, Azure Key Vault, or GCP Secret Manager.

Don’t get me wrong. Those services are excellent. If you’re running entirely on one cloud, they might be all you need. But here’s the reality for most organizations:

You have workloads on AWS. Your data team uses GCP for BigQuery. Your enterprise applications run on Azure. Maybe you still have some on-premises systems. And you need a consistent way to manage secrets across all of them.

Vault gives you that single control plane. One audit log. One policy engine. One place to rotate credentials. And it integrates with everything.

Architecture Overview

Here’s what we’re building:

The key principle here is that applications never store long-lived credentials. Instead, they authenticate to Vault and receive short-lived, automatically rotated credentials for the specific resources they need.



Step 1: Deploy Vault on Kubernetes

I prefer running Vault on Kubernetes because it gives you high availability, easy scaling, and integrates beautifully with your existing workloads. We’ll use the official Helm chart.

Prerequisites

You’ll need a Kubernetes cluster. Any managed Kubernetes service works: EKS, AKS, GKE, or even OKE. For this guide, I’ll use commands that work across all of them.

Create the Namespace and Storage

bash

kubectl create namespace vault

# Create storage class for Vault data
# This example uses AWS EBS, adjust for your cloud
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vault-storage
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
EOF

Configure Vault Helm Values

yaml

# vault-values.yaml
global:
  enabled: true
  tlsDisable: false

injector:
  enabled: true
  replicas: 2
  resources:
    requests:
      memory: 256Mi
      cpu: 250m
    limits:
      memory: 512Mi
      cpu: 500m

server:
  enabled: true
  # Run 3 replicas for high availability
  ha:
    enabled: true
    replicas: 3
    # Use Raft for integrated storage
    raft:
      enabled: true
      setNodeId: true
      config: |
        ui = true

        listener "tcp" {
          tls_disable = false
          address = "[::]:8200"
          cluster_address = "[::]:8201"
          tls_cert_file = "/vault/userconfig/vault-tls/tls.crt"
          tls_key_file = "/vault/userconfig/vault-tls/tls.key"
        }

        storage "raft" {
          path = "/vault/data"
          retry_join {
            leader_api_addr = "https://vault-0.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/vault-tls/ca.crt"
          }
          retry_join {
            leader_api_addr = "https://vault-1.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/vault-tls/ca.crt"
          }
          retry_join {
            leader_api_addr = "https://vault-2.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/vault-tls/ca.crt"
          }
        }

        service_registration "kubernetes" {}

        seal "awskms" {
          region = "us-east-1"
          kms_key_id = "alias/vault-unseal-key"
        }
  resources:
    requests:
      memory: 1Gi
      cpu: 500m
    limits:
      memory: 2Gi
      cpu: 2000m
  dataStorage:
    enabled: true
    size: 20Gi
    storageClass: vault-storage
  auditStorage:
    enabled: true
    size: 10Gi
    storageClass: vault-storage
  # Service account for cloud integrations
  serviceAccount:
    create: true
    annotations:
      eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_ID:role/vault-server-role

ui:
  enabled: true
  serviceType: LoadBalancer
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"

Generate TLS Certificates

Vault should always use TLS. Here’s how to create certificates using cert-manager:

yaml

# vault-certificate.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: vault-tls
  namespace: vault
spec:
  secretName: vault-tls
  duration: 8760h    # 1 year
  renewBefore: 720h  # 30 days
  subject:
    organizations:
      - YourCompany
  commonName: vault.vault.svc.cluster.local
  dnsNames:
    - vault
    - vault.vault
    - vault.vault.svc
    - vault.vault.svc.cluster.local
    - vault-0.vault-internal
    - vault-1.vault-internal
    - vault-2.vault-internal
    - "*.vault-internal"
  ipAddresses:
    - 127.0.0.1
  issuerRef:
    name: cluster-issuer
    kind: ClusterIssuer

Install Vault

bash

helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
helm install vault hashicorp/vault \
--namespace vault \
--values vault-values.yaml \
--version 0.27.0

Initialize and Unseal

This is a one-time operation. Keep these keys safe. I mean really safe. Like offline, in multiple secure locations.

bash

# Initialize Vault
kubectl exec -n vault vault-0 -- vault operator init \
-key-shares=5 \
-key-threshold=3 \
-format=json > vault-init.json
# The output contains your unseal keys and root token
# Store these securely!
# If not using auto-unseal, you'd need to unseal manually:
# kubectl exec -n vault vault-0 -- vault operator unseal <key1>
# kubectl exec -n vault vault-0 -- vault operator unseal <key2>
# kubectl exec -n vault vault-0 -- vault operator unseal <key3>
# With AWS KMS auto-unseal configured, Vault unseals automatically

Step 2: Configure Authentication Methods

Now we need to tell Vault how applications will authenticate. This is where it gets interesting.

Kubernetes Authentication

Applications running in Kubernetes can authenticate using their service account tokens. No passwords needed.

bash

# Enable Kubernetes auth
vault auth enable kubernetes
# Configure it to trust our cluster
vault write auth/kubernetes/config \
kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
issuer="https://kubernetes.default.svc.cluster.local"

AWS IAM Authentication

For workloads running on EC2, Lambda, or ECS, they can authenticate using their IAM roles.

bash

# Enable AWS auth
vault auth enable aws
# Configure AWS credentials for Vault to verify requests
vault write auth/aws/config/client \
secret_key=$AWS_SECRET_KEY \
access_key=$AWS_ACCESS_KEY
# Create a role that EC2 instances can use
vault write auth/aws/role/ec2-app-role \
auth_type=iam \
bound_iam_principal_arn="arn:aws:iam::ACCOUNT_ID:role/app-server-role" \
policies=app-policy \
ttl=1h

Azure Authentication

For Azure workloads using Managed Identities:

bash

# Enable Azure auth
vault auth enable azure
# Configure Azure
vault write auth/azure/config \
tenant_id=$AZURE_TENANT_ID \
resource="https://management.azure.com/" \
client_id=$AZURE_CLIENT_ID \
client_secret=$AZURE_CLIENT_SECRET
# Create a role for Azure VMs
vault write auth/azure/role/azure-app-role \
policies=app-policy \
bound_subscription_ids=$AZURE_SUBSCRIPTION_ID \
bound_resource_groups=production-rg \
ttl=1h

GCP Authentication

For GCP workloads using service accounts:

bash

# Enable GCP auth
vault auth enable gcp
# Configure GCP
vault write auth/gcp/config \
credentials=@gcp-credentials.json
# Create a role for GCE instances
vault write auth/gcp/role/gce-app-role \
type="gce" \
policies=app-policy \
bound_projects="my-project-id" \
bound_zones="us-central1-a,us-central1-b" \
ttl=1h

Step 3: Set Up Dynamic Secrets

Here’s where the magic happens. Instead of storing static database passwords, Vault can generate unique credentials on demand and revoke them automatically when they expire.

Dynamic AWS Credentials

bash

# Enable AWS secrets engine
vault secrets enable aws
# Configure root credentials (Vault uses these to create dynamic creds)
vault write aws/config/root \
access_key=$AWS_ACCESS_KEY \
secret_key=$AWS_SECRET_KEY \
region=us-east-1
# Create a role that generates S3 read-only credentials
vault write aws/roles/s3-reader \
credential_type=iam_user \
policy_document=-<<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::my-bucket",
"arn:aws:s3:::my-bucket/*"
]
}
]
}
EOF
# Now any authenticated client can get temporary AWS credentials
vault read aws/creds/s3-reader
# Returns:
# access_key AKIA...
# secret_key xyz123...
# lease_duration 1h
# These credentials will be automatically revoked after 1 hour

Dynamic Database Credentials

This is probably my favorite feature. Every time an application needs to connect to a database, it gets a unique username and password that only it knows.

bash

# Enable database secrets engine
vault secrets enable database
# Configure PostgreSQL connection
vault write database/config/production-postgres \
plugin_name=postgresql-database-plugin \
allowed_roles="app-readonly,app-readwrite" \
connection_url="postgresql://{{username}}:{{password}}@db.example.com:5432/appdb?sslmode=require" \
username="vault_admin" \
password="vault_admin_password"
# Create a read-only role
vault write database/roles/app-readonly \
db_name=production-postgres \
creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; \
GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
revocation_statements="DROP ROLE IF EXISTS \"{{name}}\";" \
default_ttl="1h" \
max_ttl="24h"
# Create a read-write role
vault write database/roles/app-readwrite \
db_name=production-postgres \
creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; \
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
revocation_statements="DROP ROLE IF EXISTS \"{{name}}\";" \
default_ttl="1h" \
max_ttl="24h"

Now when your application requests credentials:

bash

vault read database/creds/app-readonly
# Returns:
# username v-kubernetes-app-readonly-abc123
# password A1B2C3D4E5F6...
# lease_duration 1h

Every request gets a different username and password. If credentials are compromised, they expire automatically. And you have a complete audit trail of who accessed what, when.

Dynamic Azure Credentials

bash

# Enable Azure secrets engine
vault secrets enable azure
# Configure Azure
vault write azure/config \
subscription_id=$AZURE_SUBSCRIPTION_ID \
tenant_id=$AZURE_TENANT_ID \
client_id=$AZURE_CLIENT_ID \
client_secret=$AZURE_CLIENT_SECRET
# Create a role that generates Azure Service Principals
vault write azure/roles/contributor \
ttl=1h \
azure_roles=-<<EOF
[
{
"role_name": "Contributor",
"scope": "/subscriptions/$AZURE_SUBSCRIPTION_ID/resourceGroups/production-rg"
}
]
EOF

Step 4: Application Integration

Let’s see how applications actually use Vault. I’ll show you several patterns.

Pattern 1: Vault Agent Sidecar (Kubernetes)

This is my recommended approach for Kubernetes. Vault Agent runs alongside your application and handles authentication and secret retrieval automatically.

yaml

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  # A Deployment requires a selector matching the pod template labels
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        # These annotations tell Vault Agent what to do
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "my-app-role"
        vault.hashicorp.com/agent-inject-secret-db-creds: "database/creds/app-readonly"
        vault.hashicorp.com/agent-inject-template-db-creds: |
          {{- with secret "database/creds/app-readonly" -}}
          export DB_USERNAME="{{ .Data.username }}"
          export DB_PASSWORD="{{ .Data.password }}"
          {{- end }}
    spec:
      serviceAccountName: my-app
      containers:
        - name: my-app
          image: my-app:latest
          command: ["/bin/sh", "-c"]
          args:
            - source /vault/secrets/db-creds && ./start-app.sh

When this pod starts, Vault Agent automatically:

  1. Authenticates to Vault using the Kubernetes service account
  2. Retrieves database credentials
  3. Writes them to /vault/secrets/db-creds
  4. Renews the credentials before they expire
  5. Updates the file when credentials change

Your application just reads from a file. It doesn’t need to know anything about Vault.
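If the application is Python and you would rather not `source` the file through a shell wrapper, a small parser for the rendered template works too (a sketch; the default path matches the `db-creds` annotation above):

```python
def read_vault_env_file(path: str = "/vault/secrets/db-creds") -> dict:
    """Parse the `export KEY="value"` lines the Vault Agent template renders."""
    creds = {}
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if line.startswith("export ") and "=" in line:
                key, _, value = line[len("export "):].partition("=")
                creds[key.strip()] = value.strip().strip('"')
    return creds
```

Re-read the file periodically (or watch it) so the application picks up renewed credentials when the agent rewrites it.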

Pattern 2: Direct SDK Integration

For applications that need more control, you can use the Vault SDK directly:

python

# Python example
import os

import hvac
import psycopg2


def get_vault_client():
    """Create a Vault client using Kubernetes auth."""
    client = hvac.Client(url=os.environ['VAULT_ADDR'])
    # Read the service account token mounted into the pod
    with open('/var/run/secrets/kubernetes.io/serviceaccount/token') as f:
        jwt = f.read()
    # Authenticate to Vault
    client.auth.kubernetes.login(
        role='my-app-role',
        jwt=jwt,
        mount_point='kubernetes'
    )
    return client


def get_database_credentials():
    """Get dynamic database credentials."""
    client = get_vault_client()
    # Request new database credentials
    response = client.secrets.database.generate_credentials(
        name='app-readonly',
        mount_point='database'
    )
    return {
        'username': response['data']['username'],
        'password': response['data']['password'],
        'lease_id': response['lease_id'],
        'lease_duration': response['lease_duration']
    }


def connect_to_database():
    """Connect to the database with dynamic credentials."""
    creds = get_database_credentials()
    connection = psycopg2.connect(
        host='db.example.com',
        database='appdb',
        user=creds['username'],
        password=creds['password']
    )
    return connection
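The lease_id and lease_duration returned above matter: the generated credentials expire with the lease. A common rule of thumb (my own convention, not something Vault mandates) is to schedule renewal at about two-thirds of the TTL, leaving headroom for retries:

```python
# Hypothetical helper: decide when to renew a Vault lease. Renewing at about
# two-thirds of the TTL leaves room to retry before the lease expires.
def seconds_until_renewal(lease_duration, fraction=2 / 3):
    """Return how long to wait before renewing a lease of `lease_duration` seconds."""
    if lease_duration <= 0:
        raise ValueError("lease_duration must be positive")
    return int(lease_duration * fraction)

# A 1h database lease would be renewed after 40 minutes:
wait = seconds_until_renewal(3600)
```

The renewal itself is a single hvac call, client.sys.renew_lease(lease_id=creds['lease_id']), issued when the timer fires.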

Pattern 3: External Secrets Operator

If you prefer Kubernetes-native secrets, use External Secrets Operator to sync Vault secrets to Kubernetes:

yaml

# external-secret.yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-secrets
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: vault-backend
  target:
    name: app-secrets
    creationPolicy: Owner
  data:
    - secretKey: api-key
      remoteRef:
        key: secret/data/app/api-key
        property: value
    - secretKey: db-password
      remoteRef:
        key: secret/data/app/database
        property: password
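Once the operator has created the `app-secrets` Kubernetes Secret, the application consumes it like any native secret, typically exposed as environment variables, with no Vault awareness at all. An illustrative sketch of the application side (the variable name is my own example):

```python
# Illustrative only: the pod mounts the synced Kubernetes Secret as env vars,
# and the app reads them with a fail-fast check for missing settings.
import os

def get_required_env(name):
    """Read a required setting from the environment, failing fast if missing."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"missing required environment variable: {name}")
    return value

os.environ["DB_PASSWORD"] = "example-only"   # simulated injected secret
password = get_required_env("DB_PASSWORD")
```

Remember the trade-off: with a 1h refreshInterval, a rotated Vault value can take up to an hour to reach the cluster.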

Step 5: Policies and Access Control

Vault policies determine who can access what. Be specific and follow the principle of least privilege.

hcl

# app-policy.hcl

# Allow reading dynamic database credentials
path "database/creds/app-readonly" {
  capabilities = ["read"]
}

# Allow reading application secrets
path "secret/data/app/*" {
  capabilities = ["read", "list"]
}

# Deny access to admin paths
path "sys/*" {
  capabilities = ["deny"]
}

# Allow the app to renew its own token
path "auth/token/renew-self" {
  capabilities = ["update"]
}

Apply the policy:

bash

vault policy write app-policy app-policy.hcl

# Create a Kubernetes auth role that uses this policy
vault write auth/kubernetes/role/my-app-role \
    bound_service_account_names=my-app \
    bound_service_account_namespaces=production \
    policies=app-policy \
    ttl=1h

Step 6: Monitoring and Audit

You need visibility into who’s accessing secrets. Enable audit logging:

bash

# Enable file audit device
vault audit enable file file_path=/vault/audit/vault-audit.log
# Enable syslog for centralized logging
vault audit enable syslog tag="vault" facility="AUTH"

For monitoring, Vault exposes Prometheus metrics:

yaml

# ServiceMonitor for Prometheus
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: vault
  namespace: vault
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: vault
  endpoints:
    - port: http
      path: /v1/sys/metrics
      params:
        format: ["prometheus"]
      scheme: https
      tlsConfig:
        insecureSkipVerify: true

Key metrics to alert on:

yaml

# Prometheus alerting rules
groups:
  - name: vault
    rules:
      - alert: VaultSealed
        expr: vault_core_unsealed == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Vault is sealed"
          description: "Vault instance {{ $labels.instance }} is sealed and unable to serve requests"
      - alert: VaultTooManyPendingTokens
        expr: vault_token_count > 10000
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Too many Vault tokens"
          description: "Vault has {{ $value }} active tokens. Consider reducing TTLs."
      - alert: VaultLeadershipLost
        expr: increase(vault_core_leadership_lost_count[5m]) > 0
        labels:
          severity: warning
        annotations:
          summary: "Vault leadership changes detected"

Common Mistakes to Avoid

Let me save you some headaches by sharing mistakes I’ve seen (and made):

Mistake 1: Using the root token for applications

The root token has unlimited access. Create specific policies and tokens for each application.

Mistake 2: Not rotating the root token

After initial setup, generate a new root token and revoke the original:

bash

vault operator generate-root -init
# Follow the process to generate a new root token
vault token revoke <old-root-token>

Mistake 3: Setting TTLs too long

Short TTLs mean compromised credentials are valid for less time. Start with 1 hour and adjust based on your needs.

Mistake 4: Not testing recovery procedures

Practice unsealing Vault. Practice recovering from backup. Do it regularly. The worst time to learn is during an actual incident.

Mistake 5: Storing unseal keys together

Distribute unseal keys to different people in different locations. Use a threshold scheme (3 of 5) so no single person can unseal Vault.
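The arithmetic behind a 3-of-5 scheme is worth spelling out: any quorum of three holders can unseal, there are C(5,3) = 10 such quorums, so losing one or two shares never locks you out, while one or two colluding holders still cannot unseal. A quick sanity check:

```python
# Quorum math for a 3-of-5 unseal scheme (Shamir secret sharing): any 3 of the
# 5 key shares reconstruct the unseal key; fewer than 3 reveal nothing.
from math import comb

shares, threshold = 5, 3
unsealing_combinations = comb(shares, threshold)  # distinct quorums that can unseal
```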

Regards, Enjoy the Cloud
Osama

Encryption on Azure

What is encryption?

Encryption is the process of making data unreadable and unusable to unauthorized viewers. To use or read the encrypted data, it must be decrypted, which requires the use of a secret key. 

There are two different types:

  • Symmetric encryption: the same key is used to encrypt and decrypt the data.
  • Asymmetric encryption: a different key is used for each operation, for example a private and public key pair.
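The symmetric case is easy to picture: the exact same key bytes reverse the operation. A toy sketch (XOR is used purely for illustration; real symmetric encryption in Azure uses AES):

```python
# Toy illustration of symmetric encryption: the SAME key both encrypts and
# decrypts. XOR is only a demonstration of the idea, never use it for real data.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Apply a repeating-key XOR; running it twice with the same key round-trips."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"storage account data"
key = b"secret-key"
ciphertext = xor_cipher(plaintext, key)
assert xor_cipher(ciphertext, key) == plaintext  # same key decrypts
```

With asymmetric encryption, by contrast, data encrypted with the public key can only be recovered with the matching private key.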

Both types can be applied in two different ways:

  • Encryption at rest, which protects stored data, for example data in a database or a storage account.
  • Encryption in transit, which protects data actively moving from one location to another.

Azure provides several types of encryption:

  • Encrypt raw storage
    • Azure Storage Service Encryption: encrypts your data before persisting it to Azure Managed Disks, Azure Blob storage, Azure Files, or Azure Queue storage, and decrypts the data before retrieval.
    • Encrypt virtual machine disks: low-level encryption protection for data written to the physical disk.
  • Azure Disk Encryption: this method helps you encrypt the actual Windows or Linux disk, and the best way to manage its keys is with Azure Key Vault.
  • Encrypt databases
    • Transparent Data Encryption: helps protect Azure SQL Database and Azure SQL Data Warehouse against the threat of malicious activity. It performs real-time encryption and decryption of the database.

The best place to keep these keys and secrets is Azure Key Vault, a cloud service for storing your application secrets. Key Vault helps you control your applications’ secrets by keeping them in a single, central location. Why should you use it?

  • Centralized secret management.
  • Securely stored secrets and keys.
  • Monitoring of access and use.
  • Simplified administration of application secrets.

There are also two different kinds of certificates in Azure that help you secure, for example, a website or an application. Note that certificates used in Azure are x.509 v3 and can be signed by a trusted certificate authority, or they can be self-signed.

Types of certificates

  • Service certificates are used for cloud services
  • Management certificates are used for authenticating with the management API

Service certificates

Service certificates are attached to cloud services and enable secure communication to and from the service. For example, if you deploy a website, you would want to supply a certificate that can authenticate an exposed HTTPS endpoint. Service certificates, which are defined in your service definition, are automatically deployed to the VM that is running an instance of your role.

Management certificates

Management certificates allow you to authenticate with the classic deployment model. Many programs and tools (such as Visual Studio or the Azure SDK) use these certificates to automate the configuration and deployment of various Azure services. However, these certificates are not related to cloud services.

Note that you can use Azure Key Vault to store your certificates.

Cheers

Osama

Oracle Database Application Security Book

Finally …

The Book is alive

For the first time, the book discusses critical security issues such as database threats and how to avoid them. It also covers advanced topics such as Oracle Internet Directory and Oracle Access Manager, and how to implement a full single sign-on cycle.

The book focuses on the security aspects of designing, building, and maintaining a secure Oracle Database application. Starting with data encryption, you will learn to work with transparent data encryption, backups, and networks. You will then go through the key principles of audits, where you will get to know more about identity preservation, policies, and fine-grained audits. Moving on to virtual private databases, you’ll set up and configure a VPD to work in concert with other security features in Oracle, followed by tips on managing configuration drift, profiles, and default users.

What You Will Learn:

  • Work with Oracle Internet Directory using the command line and the console.
  • Integrate Oracle Access Manager with different applications.
  • Work with the Oracle Identity Manager console and connectors, while creating your own custom connector.
  • Troubleshoot issues with OID, OAM, and OIM.
  • Dive deep into file system and network security concepts.
  • A first-of-its-kind chapter covering the most critical real-life database threats.

You can buy the book now from Amazon here.

Cheers

Osama

Oracle Password Security

As a Certified Ethical Hacker and penetration tester, I am always asked whether an Oracle password can be cracked or not. You need to know that if a hacker wants to get into your database, he will; all you can do is make it harder for him, so don’t choose a password that is easy to crack.

I do not post these topics to be used the wrong way. As a DBA, you need to know about securing your database and how to make it unbreakable.

For example, see the tools below that are used to crack Oracle passwords.

Other tools can be found for free on the internet; for example, Red-Database-Security (an amazing company and website that provides articles and topics about Oracle security) offers some of these tools for free.

Thank you
Osama Mustafa

Oracle security Function for password changing

Check this function, which is used for changing a user’s password. You need to watch out for functions like this: because it builds the ALTER USER statement by concatenating its inputs, it is open to SQL injection. I post it only as an example.

FUNCTION CHGPWD (
    P_USER VARCHAR2,
    P_PWD  VARCHAR2)
    RETURN BOOLEAN
IS
    L_STMT VARCHAR2(255);
BEGIN
    L_STMT := 'ALTER USER "' || P_USER || '" IDENTIFIED BY "' || P_PWD || '"';
    EXECUTE IMMEDIATE L_STMT;
    RETURN TRUE;
END;

Thank you

I will post more topics about Oracle security.

Data Masking In Oracle/Column Masking

Or We Can Call it VPD : Virtual Private Database

What does data masking mean?

It is a simple way to hide your valuable data from certain users without having to apply encrypt/decrypt techniques and increase the column width to accommodate the new string, like in the old times. Through some simple configuration you can create policies that show your important columns as NULL, without rewriting a single line of code on your application side.

There are 3 steps to accomplish column masking:

  1. Create a function to be used by the policy (the policy function) created in the next step.
  2. Use the dbms_rls package to create the policy.
  3. Grant “exempt access policy” to users to be excluded from the policy. These users can see all data with no masking.
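Before walking through the Oracle steps, the effect is easy to picture in plain Python (purely illustrative, not Oracle code): for a non-exempt user the policy predicate matches nothing, so the sensitive column comes back as NULL:

```python
# Illustrative model of column masking: when the policy predicate is false for
# a user, the sensitive column is returned as None (NULL); exempt users see it.
def select_emp(rows, column, user_is_exempt):
    """Return rows, masking `column` to None unless the user is exempt."""
    if user_is_exempt:
        return [dict(row) for row in rows]
    return [{**row, column: None} for row in rows]

emp = [{"ename": "KING", "job": "PRESIDENT"}, {"ename": "SMITH", "job": "CLERK"}]
masked = select_emp(emp, "job", user_is_exempt=False)
# non-exempt users see ename normally but job as None
```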

Step 1: Create the policy function

CREATE OR REPLACE FUNCTION vpd_function (
    obj_owner IN VARCHAR2,
    obj_name  IN VARCHAR2)
    RETURN VARCHAR2
AS
BEGIN
    RETURN 'rowid = ''0''';
END vpd_function;
/

The above function is used for column masking. If the predicate it returns always evaluates to true, all users can see the correct data; the function above always evaluates to false (rowid = '0'), so the column is masked.


Step 2: Create the policy

BEGIN
    DBMS_RLS.ADD_POLICY(
        object_schema         => 'SCOTT',
        object_name           => 'EMP',
        policy_name           => 'scott_emp_policy',
        function_schema       => 'SYSTEM',
        policy_function       => 'vpd_function',
        sec_relevant_cols     => 'JOB',
        policy_type           => DBMS_RLS.SHARED_STATIC,
        sec_relevant_cols_opt => DBMS_RLS.ALL_ROWS);
END;
/

Step 3: “exempt access policy” is the system privilege you grant to exclude some users from the policy so they can see all the correct data.

Important Views :

dba_policies
v$vpd_policy

Enjoy with Security

Osama Mustafa

Oracle Security Tips / SQLNET.ORA Part 2

Hi All ,

I posted before about sqlnet.ora and the invited and excluded node parameters. Now assume that I want to prevent sysdba access to the database without a password. A simple way:

SQLNET.AUTHENTICATION_SERVICES=NONE 



Setting the SQLNET.AUTHENTICATION_SERVICES parameter to NONE in the sqlnet.ora file makes it impossible to connect to the database without a password as sysdba (sqlplus / as sysdba).

This parameter may also take the values NTS, for Windows NT native authentication, or ALL, for all authentication methods.

Authentication Methods Available with Oracle Advanced Security:

  • kerberos5 for Kerberos authentication
  • cybersafe for Cybersafe authentication
  • radius for RADIUS authentication
  • dcegssapi for DCE GSSAPI authentication

If an authentication method has been installed, it is recommended that this parameter be set either to none or to one of the authentication methods listed above.

Enjoy

Thank you
Osama Mustafa

Limit Access to your Database

Here is a simple, easy way to limit access to your database and prevent people from messing around. We all know there is a file called “sqlnet.ora”; all you have to do is follow the steps below and add what you want:

sqlnet.ora location: $ORACLE_HOME/network/admin

TCP.EXCLUDED_NODES

Purpose
Use the parameter TCP.EXCLUDED_NODES to specify which clients are denied access to the database.

Example
TCP.EXCLUDED_NODES=(finance.us.acme.com, mktg.us.acme.com, 144.25.5.25)

TCP.INVITED_NODES

Purpose
Use the parameter TCP.INVITED_NODES to specify which clients are allowed access to the database. This list takes precedence over the TCP.EXCLUDED_NODES parameter if both lists are present.

Example
TCP.INVITED_NODES=(sales.us.acme.com, hr.us.acme.com, 144.185.5.73)

TCP.VALIDNODE_CHECKING

Purpose
Use the parameter TCP.VALIDNODE_CHECKING to enable checking of TCP.INVITED_NODES and TCP.EXCLUDED_NODES, determining which clients to allow or deny access.

Example
TCP.VALIDNODE_CHECKING=yes
TCP.VALIDNODE_CHECKING=no
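The precedence rules can be sketched as follows (my own illustration, not Oracle code): when valid-node checking is on and an invited list exists, only invited clients get in; otherwise the excluded list is consulted:

```python
# Illustrative model of TCP.VALIDNODE_CHECKING semantics: the invited list,
# when present, takes precedence over the excluded list.
def is_client_allowed(client, invited_nodes, excluded_nodes, validnode_checking=True):
    """Decide whether a client node may connect, per the sqlnet.ora node lists."""
    if not validnode_checking:
        return True                      # no checking: everyone connects
    if invited_nodes:
        return client in invited_nodes   # invited list takes precedence
    return client not in excluded_nodes  # otherwise fall back to exclusions

invited = ["sales.us.acme.com", "hr.us.acme.com", "144.185.5.73"]
excluded = ["finance.us.acme.com", "mktg.us.acme.com", "144.25.5.25"]
assert is_client_allowed("hr.us.acme.com", invited, excluded)
assert not is_client_allowed("finance.us.acme.com", invited, excluded)
```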

A simple way to keep your database clean. Note that you may need to restart your listener after this change.

Thank you
Osama Mustafa