Building a Multi-Cloud Secrets Management Strategy with HashiCorp Vault

Let me ask you something. Where are your database passwords right now? Your API keys? Your TLS certificates?

If you’re like most teams I’ve worked with, the honest answer is “scattered everywhere.” Some are in environment variables. Some are in Kubernetes secrets (base64 encoded, which isn’t encryption by the way). A few are probably still hardcoded in configuration files that someone committed to Git three years ago.

I’m not judging. We’ve all been there. But as your infrastructure grows across multiple clouds, this approach becomes a ticking time bomb. One leaked credential can compromise everything.

In this article, I’ll show you how to build a centralized secrets management strategy using HashiCorp Vault. We’ll deploy it properly, integrate it with AWS, Azure, and GCP, and set up dynamic secrets that rotate automatically. No more shared passwords. No more “who has access to what” mysteries.

Why Vault? Why Now?

Before we dive into implementation, let me explain why I recommend Vault over cloud-native solutions like AWS Secrets Manager, Azure Key Vault, or GCP Secret Manager.

Don’t get me wrong. Those services are excellent. If you’re running entirely on one cloud, they might be all you need. But here’s the reality for most organizations:

You have workloads on AWS. Your data team uses GCP for BigQuery. Your enterprise applications run on Azure. Maybe you still have some on-premises systems. And you need a consistent way to manage secrets across all of them.

Vault gives you that single control plane. One audit log. One policy engine. One place to rotate credentials. And it integrates with everything.

Architecture Overview

Here’s what we’re building: a single Vault cluster that acts as the control plane for secrets across all of your clouds.

The key principle here is that applications never store long-lived credentials. Instead, they authenticate to Vault and receive short-lived, automatically rotated credentials for the specific resources they need.

Step 1: Deploy Vault on Kubernetes

I prefer running Vault on Kubernetes because it gives you high availability and easy scaling, and it integrates beautifully with your existing workloads. We’ll use the official Helm chart.

Prerequisites

You’ll need a Kubernetes cluster. Any managed Kubernetes service works: EKS, AKS, GKE, or even OKE. For this guide, I’ll use commands that work across all of them.

Create the Namespace and Storage

bash

kubectl create namespace vault

# Create storage class for Vault data
# This example uses AWS EBS; adjust for your cloud
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vault-storage
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
EOF

Configure Vault Helm Values

yaml

# vault-values.yaml
global:
  enabled: true
  tlsDisable: false

injector:
  enabled: true
  replicas: 2
  resources:
    requests:
      memory: 256Mi
      cpu: 250m
    limits:
      memory: 512Mi
      cpu: 500m

server:
  enabled: true
  # Run 3 replicas for high availability
  ha:
    enabled: true
    replicas: 3
    # Use Raft for integrated storage
    raft:
      enabled: true
      setNodeId: true
      config: |
        ui = true

        listener "tcp" {
          tls_disable = false
          address = "[::]:8200"
          cluster_address = "[::]:8201"
          tls_cert_file = "/vault/userconfig/vault-tls/tls.crt"
          tls_key_file = "/vault/userconfig/vault-tls/tls.key"
        }

        storage "raft" {
          path = "/vault/data"

          retry_join {
            leader_api_addr = "https://vault-0.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/vault-tls/ca.crt"
          }
          retry_join {
            leader_api_addr = "https://vault-1.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/vault-tls/ca.crt"
          }
          retry_join {
            leader_api_addr = "https://vault-2.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/vault-tls/ca.crt"
          }
        }

        service_registration "kubernetes" {}

        seal "awskms" {
          region     = "us-east-1"
          kms_key_id = "alias/vault-unseal-key"
        }
  resources:
    requests:
      memory: 1Gi
      cpu: 500m
    limits:
      memory: 2Gi
      cpu: 2000m
  dataStorage:
    enabled: true
    size: 20Gi
    storageClass: vault-storage
  auditStorage:
    enabled: true
    size: 10Gi
    storageClass: vault-storage
  # Service account for cloud integrations
  serviceAccount:
    create: true
    annotations:
      eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_ID:role/vault-server-role

ui:
  enabled: true
  serviceType: LoadBalancer
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"

Generate TLS Certificates

Vault should always use TLS. Here’s how to create certificates using cert-manager:

yaml

# vault-certificate.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: vault-tls
  namespace: vault
spec:
  secretName: vault-tls
  duration: 8760h # 1 year
  renewBefore: 720h # 30 days
  subject:
    organizations:
      - YourCompany
  commonName: vault.vault.svc.cluster.local
  dnsNames:
    - vault
    - vault.vault
    - vault.vault.svc
    - vault.vault.svc.cluster.local
    - vault-0.vault-internal
    - vault-1.vault-internal
    - vault-2.vault-internal
    - "*.vault-internal"
  ipAddresses:
    - 127.0.0.1
  issuerRef:
    name: cluster-issuer
    kind: ClusterIssuer

Install Vault

bash

helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update

helm install vault hashicorp/vault \
  --namespace vault \
  --values vault-values.yaml \
  --version 0.27.0

Initialize and Unseal

This is a one-time operation. Keep these keys safe. I mean really safe. Like offline, in multiple secure locations.

bash

# Initialize Vault
kubectl exec -n vault vault-0 -- vault operator init \
  -key-shares=5 \
  -key-threshold=3 \
  -format=json > vault-init.json

# The output contains your unseal keys and root token
# Store these securely!

# If not using auto-unseal, you'd need to unseal manually:
# kubectl exec -n vault vault-0 -- vault operator unseal <key1>
# kubectl exec -n vault vault-0 -- vault operator unseal <key2>
# kubectl exec -n vault vault-0 -- vault operator unseal <key3>

# With AWS KMS auto-unseal configured, Vault unseals automatically
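Once initialization completes, it’s worth verifying that auto-unseal worked and the cluster actually formed. This is the standard Vault CLI, nothing beyond the setup above:

bash

# Check seal status and HA mode on each pod
kubectl exec -n vault vault-0 -- vault status

# Expect "Sealed: false"; one node reports "HA Mode: active",
# the others report "standby"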

Step 2: Configure Authentication Methods

Now we need to tell Vault how applications will authenticate. This is where it gets interesting.

Kubernetes Authentication

Applications running in Kubernetes can authenticate using their service account tokens. No passwords needed.

bash

# Enable Kubernetes auth
vault auth enable kubernetes

# Configure it to trust our cluster
vault write auth/kubernetes/config \
  kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
  token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  issuer="https://kubernetes.default.svc.cluster.local"

AWS IAM Authentication

Workloads running on EC2, Lambda, or ECS can authenticate using their IAM roles.

bash

# Enable AWS auth
vault auth enable aws

# Configure AWS credentials for Vault to verify requests
vault write auth/aws/config/client \
  secret_key=$AWS_SECRET_KEY \
  access_key=$AWS_ACCESS_KEY

# Create a role that EC2 instances can use
vault write auth/aws/role/ec2-app-role \
  auth_type=iam \
  bound_iam_principal_arn="arn:aws:iam::ACCOUNT_ID:role/app-server-role" \
  policies=app-policy \
  ttl=1h
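To see it from the workload’s side: from an EC2 instance with that IAM role attached, logging in is a single command using the Vault CLI’s built-in AWS helper (the role name matches the one created above):

bash

# The CLI signs an STS request with the instance's IAM credentials;
# Vault verifies it and returns a token with the app-policy attached
vault login -method=aws role=ec2-app-role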

Azure Authentication

For Azure workloads using Managed Identities:

bash

# Enable Azure auth
vault auth enable azure

# Configure Azure
vault write auth/azure/config \
  tenant_id=$AZURE_TENANT_ID \
  resource="https://management.azure.com/" \
  client_id=$AZURE_CLIENT_ID \
  client_secret=$AZURE_CLIENT_SECRET

# Create a role for Azure VMs
vault write auth/azure/role/azure-app-role \
  policies=app-policy \
  bound_subscription_ids=$AZURE_SUBSCRIPTION_ID \
  bound_resource_groups=production-rg \
  ttl=1h

GCP Authentication

For GCP workloads using service accounts:

bash

# Enable GCP auth
vault auth enable gcp

# Configure GCP
vault write auth/gcp/config \
  credentials=@gcp-credentials.json

# Create a role for GCE instances
vault write auth/gcp/role/gce-app-role \
  type="gce" \
  policies=app-policy \
  bound_projects="my-project-id" \
  bound_zones="us-central1-a,us-central1-b" \
  ttl=1h

Step 3: Set Up Dynamic Secrets

Here’s where the magic happens. Instead of storing static database passwords, Vault can generate unique credentials on demand and revoke them automatically when they expire.

Dynamic AWS Credentials

bash

# Enable AWS secrets engine
vault secrets enable aws

# Configure root credentials (Vault uses these to create dynamic creds)
vault write aws/config/root \
  access_key=$AWS_ACCESS_KEY \
  secret_key=$AWS_SECRET_KEY \
  region=us-east-1

# Create a role that generates S3 read-only credentials
vault write aws/roles/s3-reader \
  credential_type=iam_user \
  policy_document=-<<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ]
    }
  ]
}
EOF

# Now any authenticated client can get temporary AWS credentials
vault read aws/creds/s3-reader

# Returns:
# access_key        AKIA...
# secret_key        xyz123...
# lease_duration    1h
# These credentials will be automatically revoked after 1 hour

Dynamic Database Credentials

This is probably my favorite feature. Every time an application needs to connect to a database, it gets a unique username and password that only it knows.

bash

# Enable database secrets engine
vault secrets enable database

# Configure PostgreSQL connection
vault write database/config/production-postgres \
  plugin_name=postgresql-database-plugin \
  allowed_roles="app-readonly,app-readwrite" \
  connection_url="postgresql://{{username}}:{{password}}@db.example.com:5432/appdb?sslmode=require" \
  username="vault_admin" \
  password="vault_admin_password"

# Create a read-only role
vault write database/roles/app-readonly \
  db_name=production-postgres \
  creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; \
    GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
  revocation_statements="DROP ROLE IF EXISTS \"{{name}}\";" \
  default_ttl="1h" \
  max_ttl="24h"

# Create a read-write role
vault write database/roles/app-readwrite \
  db_name=production-postgres \
  creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; \
    GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
  revocation_statements="DROP ROLE IF EXISTS \"{{name}}\";" \
  default_ttl="1h" \
  max_ttl="24h"

Now when your application requests credentials:

bash

vault read database/creds/app-readonly

# Returns:
# username          v-kubernetes-app-readonly-abc123
# password          A1B2C3D4E5F6...
# lease_duration    1h

Every request gets a different username and password. If credentials are compromised, they expire automatically. And you have a complete audit trail of who accessed what, when.
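Leases can also be managed explicitly with standard Vault CLI commands; the lease ID placeholder below comes from the read above:

bash

# Extend a lease before it expires
vault lease renew database/creds/app-readonly/<lease_id>

# Revoke it immediately, e.g. after a suspected compromise;
# Vault runs the revocation statements and drops the database role
vault lease revoke database/creds/app-readonly/<lease_id>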

Dynamic Azure Credentials

bash

# Enable Azure secrets engine
vault secrets enable azure

# Configure Azure
vault write azure/config \
  subscription_id=$AZURE_SUBSCRIPTION_ID \
  tenant_id=$AZURE_TENANT_ID \
  client_id=$AZURE_CLIENT_ID \
  client_secret=$AZURE_CLIENT_SECRET

# Create a role that generates Azure Service Principals
vault write azure/roles/contributor \
  ttl=1h \
  azure_roles=-<<EOF
[
  {
    "role_name": "Contributor",
    "scope": "/subscriptions/$AZURE_SUBSCRIPTION_ID/resourceGroups/production-rg"
  }
]
EOF

Step 4: Application Integration

Let’s see how applications actually use Vault. I’ll show you several patterns.

Pattern 1: Vault Agent Sidecar (Kubernetes)

This is my recommended approach for Kubernetes. Vault Agent runs alongside your application and handles authentication and secret retrieval automatically.

yaml

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    metadata:
      annotations:
        # These annotations tell Vault Agent what to do
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "my-app-role"
        vault.hashicorp.com/agent-inject-secret-db-creds: "database/creds/app-readonly"
        vault.hashicorp.com/agent-inject-template-db-creds: |
          {{- with secret "database/creds/app-readonly" -}}
          export DB_USERNAME="{{ .Data.username }}"
          export DB_PASSWORD="{{ .Data.password }}"
          {{- end }}
    spec:
      serviceAccountName: my-app
      containers:
        - name: my-app
          image: my-app:latest
          command: ["/bin/sh", "-c"]
          args:
            - source /vault/secrets/db-creds && ./start-app.sh

When this pod starts, Vault Agent automatically:

  1. Authenticates to Vault using the Kubernetes service account
  2. Retrieves database credentials
  3. Writes them to /vault/secrets/db-creds
  4. Renews the credentials before they expire
  5. Updates the file when credentials change

Your application just reads from a file. It doesn’t need to know anything about Vault.

Pattern 2: Direct SDK Integration

For applications that need more control, you can use the Vault SDK directly:

python

# Python example
import os

import hvac
import psycopg2


def get_vault_client():
    """Create a Vault client using Kubernetes auth."""
    client = hvac.Client(url=os.environ['VAULT_ADDR'])

    # Read the service account token
    with open('/var/run/secrets/kubernetes.io/serviceaccount/token') as f:
        jwt = f.read()

    # Authenticate to Vault
    client.auth.kubernetes.login(
        role='my-app-role',
        jwt=jwt,
        mount_point='kubernetes'
    )
    return client


def get_database_credentials():
    """Get dynamic database credentials."""
    client = get_vault_client()

    # Request new database credentials
    response = client.secrets.database.generate_credentials(
        name='app-readonly',
        mount_point='database'
    )
    return {
        'username': response['data']['username'],
        'password': response['data']['password'],
        'lease_id': response['lease_id'],
        'lease_duration': response['lease_duration']
    }


def connect_to_database():
    """Connect to the database with dynamic credentials."""
    creds = get_database_credentials()
    connection = psycopg2.connect(
        host='db.example.com',
        database='appdb',
        user=creds['username'],
        password=creds['password']
    )
    return connection

Pattern 3: External Secrets Operator

If you prefer Kubernetes-native secrets, use External Secrets Operator to sync Vault secrets to Kubernetes:

yaml

# external-secret.yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-secrets
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: vault-backend
  target:
    name: app-secrets
    creationPolicy: Owner
  data:
    - secretKey: api-key
      remoteRef:
        key: secret/data/app/api-key
        property: value
    - secretKey: db-password
      remoteRef:
        key: secret/data/app/database
        property: password
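The manifest above points at a ClusterSecretStore named vault-backend, which you define once per cluster. Here’s a minimal sketch, assuming Kubernetes auth and a KV v2 engine mounted at secret/; the service account name and namespace are illustrative:

yaml

# cluster-secret-store.yaml
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: vault-backend
spec:
  provider:
    vault:
      server: "https://vault.vault.svc.cluster.local:8200"
      path: secret
      version: v2
      auth:
        kubernetes:
          mountPath: kubernetes
          role: my-app-role              # assumes the role from Step 5
          serviceAccountRef:
            name: external-secrets       # illustrative service account
            namespace: external-secrets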

Step 5: Policies and Access Control

Vault policies determine who can access what. Be specific and follow the principle of least privilege.

hcl

# app-policy.hcl

# Allow reading dynamic database credentials
path "database/creds/app-readonly" {
  capabilities = ["read"]
}

# Allow reading application secrets
path "secret/data/app/*" {
  capabilities = ["read", "list"]
}

# Deny access to admin paths
path "sys/*" {
  capabilities = ["deny"]
}

# Allow the app to renew its own token
path "auth/token/renew-self" {
  capabilities = ["update"]
}

Apply the policy:

bash

vault policy write app-policy app-policy.hcl

# Create a Kubernetes auth role that uses this policy
vault write auth/kubernetes/role/my-app-role \
  bound_service_account_names=my-app \
  bound_service_account_namespaces=production \
  policies=app-policy \
  ttl=1h

Step 6: Monitoring and Audit

You need visibility into who’s accessing secrets. Enable audit logging:

bash

# Enable file audit device
vault audit enable file file_path=/vault/audit/vault-audit.log
# Enable syslog for centralized logging
vault audit enable syslog tag="vault" facility="AUTH"

For monitoring, Vault exposes Prometheus metrics:

yaml

# ServiceMonitor for Prometheus
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: vault
  namespace: vault
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: vault
  endpoints:
    - port: http
      path: /v1/sys/metrics
      params:
        format: ["prometheus"]
      scheme: https
      tlsConfig:
        insecureSkipVerify: true
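One caveat: Vault only serves Prometheus-format data from /v1/sys/metrics when telemetry is enabled in the server configuration. A minimal stanza to add to the config block from Step 1 (the retention value is a common starting point, not a requirement):

hcl

telemetry {
  prometheus_retention_time = "30s"
  disable_hostname = true
}

Prometheus also needs either a Vault token or the listener’s unauthenticated_metrics_access option to scrape the endpoint.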

Key metrics to alert on:

yaml

# Prometheus alerting rules
groups:
  - name: vault
    rules:
      - alert: VaultSealed
        expr: vault_core_unsealed == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Vault is sealed"
          description: "Vault instance {{ $labels.instance }} is sealed and unable to serve requests"
      - alert: VaultTooManyPendingTokens
        expr: vault_token_count > 10000
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Too many Vault tokens"
          description: "Vault has {{ $value }} active tokens. Consider reducing TTLs."
      - alert: VaultLeadershipLost
        expr: increase(vault_core_leadership_lost_count[5m]) > 0
        labels:
          severity: warning
        annotations:
          summary: "Vault leadership changes detected"

Common Mistakes to Avoid

Let me save you some headaches by sharing mistakes I’ve seen (and made):

Mistake 1: Using the root token for applications

The root token has unlimited access. Create specific policies and tokens for each application.

Mistake 2: Not rotating the root token

After initial setup, generate a new root token and revoke the original:

bash

vault operator generate-root -init
# Follow the process to generate a new root token
vault token revoke <old-root-token>

Mistake 3: Setting TTLs too long

Short TTLs mean compromised credentials are valid for less time. Start with 1 hour and adjust based on your needs.
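You can also cap lease durations at the mount level without editing every role; this is the standard vault secrets tune command (the values shown are a starting point, not a rule):

bash

# Cap lease durations for everything issued by the database mount
vault secrets tune -default-lease-ttl=1h -max-lease-ttl=24h database/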

Mistake 4: Not testing recovery procedures

Practice unsealing Vault. Practice recovering from backup. Do it regularly. The worst time to learn is during an actual incident.
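With integrated Raft storage, backup and restore are each a single command, which makes drills easy. A sketch, run as an authenticated admin:

bash

# Take a snapshot of the full Vault state
vault operator raft snapshot save vault-backup.snap

# Restore it into a cluster during a recovery drill
vault operator raft snapshot restore vault-backup.snap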

Mistake 5: Storing unseal keys together

Distribute unseal keys to different people in different locations. Use a threshold scheme (3 of 5) so no single person can unseal Vault.

Regards, Enjoy the Cloud
Osama

Enabling TLS Encryption on a PubSub+ Broker – Technical Guide

Secure communication between clients and your messaging broker is critical in modern distributed systems. Transport Layer Security (TLS) protects data in transit from eavesdropping and tampering by encrypting the connection between clients and the broker. In this guide, you’ll learn how to generate certificates, configure TLS on a Solace PubSub+ broker, and validate secure connections.

1. Overview

PubSub+ supports TLS encryption (e.g., TLSv1.1 and TLSv1.2) for secure client connections. This guide focuses on server-side authentication only (the broker authenticating itself to clients).

2. Certificate and Key Generation

Before enabling TLS, you must create the cryptographic materials:

2.1 Generate a Private Key (RSA 2048 bit)

Use OpenSSL to create a password-protected RSA private key in PEM format:

openssl genpkey -algorithm RSA \
  -aes-256-cbc \
  -out private_key.pem \
  -pkeyopt rsa_keygen_bits:2048

You will be prompted for a passphrase — make sure to record it.

2.2 Extract Public Key

From the private key, export the public key. You will need this later:

openssl rsa -in private_key.pem -pubout -out public_key.pem

Again you will enter the passphrase you set earlier.

2.3 Create a Certificate Signing Request (CSR)

Generate a CSR to issue a certificate:

openssl req -new -key private_key.pem -out certificate.csr

You will be asked to complete the Distinguished Name (DN) attributes (e.g., Common Name, Organization). Use your broker’s real hostname in Common Name (CN) — this ensures hostname verification works during TLS handshakes.

2.4 Generate the TLS Certificate

You can use the CSR to create a self-signed certificate (for testing), or send the CSR to a CA (recommended for production).

For a self-signed certificate:

openssl x509 -req -in certificate.csr \
  -signkey private_key.pem \
  -days 365 \
  -out server_certificate.pem

This results in a PEM-encoded TLS certificate valid for one year.
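Before loading it onto the broker, you can sanity-check the certificate’s Common Name and validity dates with OpenSSL:

openssl x509 -in server_certificate.pem -noout -text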

3. Prepare the PubSub+ Broker

TLS on PubSub+ requires the certificate file and key to be available in the broker’s certificate directory (/usr/sw/jail/certs).

4. Configure TLS on Solace PubSub+

4.1 Load the Certificate File

Transfer the certificate file to the broker’s /certs directory, for example using SFTP:

solace# copy sftp://admin@<host-ip>/server_certificate.pem /certs/server_certificate.pem

Replace <host-ip> and credentials as appropriate.

4.2 Set the Server Certificate

In the broker CLI:

solace(configure)# ssl
solace(configure/ssl)# server-certificate server_certificate.pem

This tells the broker to use that certificate for all TLS connections.

⚠️ Only one TLS certificate can be active at a time.

4.3 Cipher Suite (Optional, Recommended)

Solace supports selecting specific cipher suites. For example:

solace(configure/ssl)# cipher-suite msg-backbone name AES256-SHA

This forces a secure symmetric cipher for session encryption.

5. Client-Side Requirements

5.1 Trust Store

Clients must trust the CA that signed the server’s certificate. For self-signed certificates, distribute the root certificate to all clients’ trust stores. If using a public CA, clients will automatically trust the certificate.
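For Java-based clients, for example, the root certificate can be imported into a JKS truststore with the standard keytool utility (the alias and file names here are illustrative):

keytool -importcert -alias solace-root-ca -file rootCA.pem -keystore truststore.jks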

5.2 Secure Connection URI

Instead of using plaintext connections like:

tcp://broker.example.com:55555

Clients must connect over TLS, e.g.:

tcps://broker.example.com:55443

Where tcps:// indicates TLS transport.

6. Verify the Setup

Once TLS is enabled, attempt a secure connection from a client using TLS-enabled APIs (e.g., Solace Messaging APIs or MQTT with TLS support):

  • Confirm that the TLS handshake completes
  • Ensure the client validates the server certificate and hostname
  • Observe that plaintext connections are rejected

Tools like openssl s_client can also be used for validation:

openssl s_client -connect broker.example.com:55443 \
  -CAfile rootCA.pem

If the certificate is trusted and connection succeeds, you should see handshake details and certificate information.

Regards
Osama

Case study for software architect

Problem Description


We have two separate applications that we would like to integrate. One is a WYSIWYG application for generating static websites. The other is an admin application for managing an online shopping site. We would like to be able to use the features of the Website Builder to design pages in the Webshop. In addition, we would like to be able to manage product details (name, price, images, etc.) while updating Webshop pages in the Website Builder.

Website Builder Details

The Website Builder is a single-page app written in React. It is mostly served by a monolithic backend, with a few services for select features. The app follows a component-driven architecture using Redux for application state management. Each static page in a user’s website is composed of components. Each component is responsible for rendering the view within its container and for supplying the callbacks for displaying its settings panel. The settings panel is unique per component but may share individual controls for certain settings (e.g., background color, fonts).


When the user is ready to publish their site, the publication service will generate static assets for each page. The Webshop is one component in the Website Builder. When a Webshop is included on a page, a JavaScript snippet is included in the generated HTML.

Webshop Details


The Webshop has two parts. The admin portion is a single-page app written in KnockoutJS; it is in the process of being rewritten in React. The second portion is the public-facing shop front, also written in KnockoutJS. The admin application lists products, orders, and other management details. The Webshop backend is quite similar to the Website Builder’s: monolithic aside from a few minor services for certain features.

The documentation is HERE

Cheers


Osama

adop exiting with status = 255 (Fail)

[oracle@ebsnew appl]$ adop phase=abort

Enter the APPS password:
Enter the SYSTEM password:
Enter the WLSADMIN password:

 Please wait. Validating credentials…

Enter the RUN file system context file name [/u01/oracle/EBSTST/fs1/inst/apps/EBSTST_ebsnew/appl/admin/EBSTST_ebsnew.xml]:

Enter the PATCH file system context file name [/u01/oracle/EBSTST/fs2/inst/apps/EBSTST_ebsnew/appl/admin/EBSTST_ebsnew.xml]:

[STATEMENT] [END   2016/11/22 17:15:45] Performing verification of parameters
[STATEMENT] [START 2016/11/22 17:15:45] Checking for the required ENV setup
[STATEMENT] [END   2016/11/22 17:15:45] Checking for the required ENV setup

************* Start of  session *************
 version: 12.2.0
 started at: Tue Nov 22 2016 17:15:45

APPL_TOP is set to /u01/oracle/EBSTST/fs1/EBSapps/appl
[STATEMENT] [START 2016/11/22 17:15:45] Determining admin node
[STATEMENT] [END   2016/11/22 17:15:47] Determining admin node
[STATEMENT] [START 2016/11/22 17:15:49] Acquiring lock on sessions table
[STATEMENT] [END   2016/11/22 17:15:50] Acquiring lock on sessions table
[STATEMENT] [START 2016/11/22 17:15:50] Checking for any pending sessions
[STATEMENT] There is already a hotpatch session which is incomplete. Details are:
[STATEMENT]     Session Id: 2
[STATEMENT]     Prepare phase status: X
[STATEMENT]     Apply phase status: P
[STATEMENT]     Cutover  phase status: R
[STATEMENT]     Abort phase status: X
[STATEMENT]     Session status: F
[STATEMENT] [Note: Y denotes that the phase is done
[STATEMENT]        N denotes that the phase has not been completed
[STATEMENT]        X denotes that the phase is not applicable
[STATEMENT]        R denotes that the phase is running (in progress) or ran
[STATEMENT]        F denotes that the phase has failed
[STATEMENT]        P (is applicable only to APPLY phase) denotes atleast
[STATEMENT]           one patch is already applied for the session id
[STATEMENT] Online patching tool cannot proceed when a previous patching session is incomplete
[STATEMENT] Please ensure no pending patching sessions exist before trying a new patch
[ERROR]     Unrecoverable error occured. Exiting the current session.
[STATEMENT] [START 2016/11/22 17:16:08] Unlocking sessions table
[STATEMENT] [END   2016/11/22 17:16:09] Unlocking sessions table
[STATEMENT] Log file: /adop_20161122_171500.log
[STATEMENT] [START 2016/11/22 17:16:11] Unlocking sessions table
[STATEMENT] [END   2016/11/22 17:16:12] Unlocking sessions table
Can’t call method “close” on an undefined value at /u01/oracle/EBSTST/fs1/EBSapps/appl/au/12.0.0/perl/ADOP/Phase.pm line 239.

adop exiting with status = 255 (Fail)

I tried a few different things; in the end, running the abort again worked, followed by these steps:

Run AutoConfig
Run adop phase=fs_clone

Thanks.

Installing SIEBEL 15 on RAC took a lot of time

The situation was as follows: when we were trying to install Siebel on RAC 12c, importing the two databases took 10 hours, where it usually takes 2 hours at most. Storage was NFS, with dNFS enabled.

We simulated the following cases:

  • On a single node: 2 hours, as usual.
  • On RAC 12c using the dNFS file system: 10 hours.
  • On RAC 12c using dNFS, but on a single RAC node: 10 hours.
  • On RAC 12c using a local file system and a single node: 10 hours.
  • Installing Oracle 11gR2 RAC and trying again: 4 hours, using ASM on dNFS.

Using SLOB, we didn’t see anything pointing to a storage issue.

After investigation and a lot of work, and without any tuning on 12c, the import took 1 hour and 37 minutes.

The problem came down to two things:

  • The heartbeat was not configured correctly.
  • The Siebel installation should not run with the parallel index option.
Thanks
Osama.

Install Oracle EBS R12.2 StartCD 51 on RAC

This post is more of a question than a technical write-up. EBS StartCD 51 was released a month ago, and it comes with new features, such as DB 12c being installed by default and immediate support for RAC installation, which means you don’t have to install EBS on a single node and then convert it to RAC.

As the picture above shows, everything completed successfully for the DB installation without any problem, as below:

The installation is multi-node, meaning the Apps tier is on a different server. When trying to connect the Apps tier to the RAC DB, the same error appeared every time, with no obvious cause:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
Be advised that the DB is registered with SCAN, tnsping works, and I can access the DB using TOAD through SCAN or the local listener.
Therefore, after multiple installation attempts trying to fix the error, I chose the old way, single node then convert to RAC, and the funny thing is it worked without any issue.
If anyone has an idea about this error, please comment below; maybe I missed something.
Thanks
Osama

Unable to retrieve config file from Database

The above error shows up when trying to install the Apps tier and connect it to a RAC or single-node database. To fix this issue, copy the configuration file generated after the database installation completes successfully; it lives under a path like /ebs1stag/oracle/SEBSDB/12.1.0/appsutil and is usually named conf_SEBSDB.txt. Load the file and try again.
Thank you
Osama

'oracle.security.jps.wls.listeners.JpsApplicationLifecycleListener' Class not Found

Aug 24, 2016 2:23:57 PM weblogic.nodemanager.server.AbstractServerManager log
INFO: Server output log file is ‘/u01/Oracle/Middleware/domains/mserver/STAGEDQ/servers/EDQ_INS1_SIPEDQ1/logs/EDQ_INS1_SIPEDQ1.out’

java.io.IOException: Server failed to start up. See server output log for more details.
        at weblogic.nodemanager.server.AbstractServerManager.start(AbstractServerManager.java:196)
        at weblogic.nodemanager.server.ServerManager.start(ServerManager.java:23)
        at weblogic.nodemanager.server.Handler.handleStart(Handler.java:609)
        at weblogic.nodemanager.server.Handler.handleCommand(Handler.java:121)
        at weblogic.nodemanager.server.Handler.run(Handler.java:71)
        at java.lang.Thread.run(Thread.java:745)
Aug 24, 2016 2:26:00 PM weblogic.nodemanager.server.Handler handleStart
WARNING: Exception while starting server ‘EDQ_INS1_SIPEDQ1’
java.io.IOException: Server failed to start up. See server output log for more details.
        at weblogic.nodemanager.server.AbstractServerManager.start(AbstractServerManager.java:196)
        at weblogic.nodemanager.server.ServerManager.start(ServerManager.java:23)
        at weblogic.nodemanager.server.Handler.handleStart(Handler.java:609)
        at weblogic.nodemanager.server.Handler.handleCommand(Handler.java:121)
        at weblogic.nodemanager.server.Handler.run(Handler.java:71)
        at java.lang.Thread.run(Thread.java:745)

The above is an EDQ cluster error on a fresh installation. After searching inside the logs, I found the following:

'oracle.security.jps.wls.listeners.JpsApplicationLifecycleListener' Class not Found

This happened simply because the StartScriptEnabled property in the nodemanager.properties file was set to 'false'; it must be set to 'true'.
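A minimal sketch of the fix (nodemanager.properties typically lives under the Node Manager home, e.g. $WL_HOME/common/nodemanager; adjust for your installation):

StartScriptEnabled=true

Restart Node Manager afterwards so the property takes effect.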

adstpall.sh: Database connection could not be established. Either the database is down or the APPS credentials supplied are wrong.

Trying to start up EBS R12.2 produced the following error:

adstpall.sh: Database connection could not be established. Either the database is down or the APPS credentials supplied are wrong.

To solve this, do the following:

[oracle@tiperp tsterp]$ cd $APPL_TOP
[oracle@tiperp appl]$ pwd
/u01/app/product/tsterp/fs1/EBSapps/appl

After sourcing the environment, try to start up EBS again.
Thank you 
Osama