Building a Multi-Cloud Secrets Management Strategy with HashiCorp Vault

Let me ask you something. Where are your database passwords right now? Your API keys? Your TLS certificates?

If you’re like most teams I’ve worked with, the honest answer is “scattered everywhere.” Some are in environment variables. Some are in Kubernetes secrets (base64 encoded, which isn’t encryption by the way). A few are probably still hardcoded in configuration files that someone committed to Git three years ago.

I’m not judging. We’ve all been there. But as your infrastructure grows across multiple clouds, this approach becomes a ticking time bomb. One leaked credential can compromise everything.

In this article, I’ll show you how to build a centralized secrets management strategy using HashiCorp Vault. We’ll deploy it properly, integrate it with AWS, Azure, and GCP, and set up dynamic secrets that rotate automatically. No more shared passwords. No more “who has access to what” mysteries.

Why Vault? Why Now?

Before we dive into implementation, let me explain why I recommend Vault over cloud-native solutions like AWS Secrets Manager, Azure Key Vault, or GCP Secret Manager.

Don’t get me wrong. Those services are excellent. If you’re running entirely on one cloud, they might be all you need. But here’s the reality for most organizations:

You have workloads on AWS. Your data team uses GCP for BigQuery. Your enterprise applications run on Azure. Maybe you still have some on-premises systems. And you need a consistent way to manage secrets across all of them.

Vault gives you that single control plane. One audit log. One policy engine. One place to rotate credentials. And it integrates with everything.

Architecture Overview

Here’s what we’re building: a highly available Vault cluster running on Kubernetes that acts as the single control plane, with authentication methods and dynamic secrets engines wired up for AWS, Azure, and GCP.

The key principle here is that applications never store long-lived credentials. Instead, they authenticate to Vault and receive short-lived, automatically rotated credentials for the specific resources they need.


Step 1: Deploy Vault on Kubernetes

I prefer running Vault on Kubernetes because it gives you high availability, easy scaling, and integrates beautifully with your existing workloads. We’ll use the official Helm chart.

Prerequisites

You’ll need a Kubernetes cluster. Any managed Kubernetes service works: EKS, AKS, GKE, or even OKE. For this guide, I’ll use commands that work across all of them.

Create the Namespace and Storage

bash

kubectl create namespace vault
# Create storage class for Vault data
# This example uses AWS EBS, adjust for your cloud
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vault-storage
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
EOF

Configure Vault Helm Values

yaml

# vault-values.yaml
global:
  enabled: true
  tlsDisable: false

injector:
  enabled: true
  replicas: 2
  resources:
    requests:
      memory: 256Mi
      cpu: 250m
    limits:
      memory: 512Mi
      cpu: 500m

server:
  enabled: true

  # Run 3 replicas for high availability
  ha:
    enabled: true
    replicas: 3

    # Use Raft for integrated storage
    raft:
      enabled: true
      setNodeId: true
      config: |
        ui = true

        listener "tcp" {
          tls_disable = false
          address = "[::]:8200"
          cluster_address = "[::]:8201"
          tls_cert_file = "/vault/userconfig/vault-tls/tls.crt"
          tls_key_file = "/vault/userconfig/vault-tls/tls.key"
        }

        storage "raft" {
          path = "/vault/data"

          retry_join {
            leader_api_addr = "https://vault-0.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/vault-tls/ca.crt"
          }
          retry_join {
            leader_api_addr = "https://vault-1.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/vault-tls/ca.crt"
          }
          retry_join {
            leader_api_addr = "https://vault-2.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/vault-tls/ca.crt"
          }
        }

        service_registration "kubernetes" {}

        seal "awskms" {
          region     = "us-east-1"
          kms_key_id = "alias/vault-unseal-key"
        }

  resources:
    requests:
      memory: 1Gi
      cpu: 500m
    limits:
      memory: 2Gi
      cpu: 2000m

  dataStorage:
    enabled: true
    size: 20Gi
    storageClass: vault-storage

  auditStorage:
    enabled: true
    size: 10Gi
    storageClass: vault-storage

  # Service account for cloud integrations
  serviceAccount:
    create: true
    annotations:
      eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_ID:role/vault-server-role

ui:
  enabled: true
  serviceType: LoadBalancer
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"

Generate TLS Certificates

Vault should always use TLS. Here’s how to create certificates using cert-manager:

yaml

# vault-certificate.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: vault-tls
  namespace: vault
spec:
  secretName: vault-tls
  duration: 8760h # 1 year
  renewBefore: 720h # 30 days
  subject:
    organizations:
      - YourCompany
  commonName: vault.vault.svc.cluster.local
  dnsNames:
    - vault
    - vault.vault
    - vault.vault.svc
    - vault.vault.svc.cluster.local
    - vault-0.vault-internal
    - vault-1.vault-internal
    - vault-2.vault-internal
    - "*.vault-internal"
  ipAddresses:
    - 127.0.0.1
  issuerRef:
    name: cluster-issuer
    kind: ClusterIssuer

Install Vault

bash

helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
helm install vault hashicorp/vault \
--namespace vault \
--values vault-values.yaml \
--version 0.27.0

Initialize and Unseal

This is a one-time operation. Keep these keys safe. I mean really safe. Like offline, in multiple secure locations.

bash

# Initialize Vault
kubectl exec -n vault vault-0 -- vault operator init \
-key-shares=5 \
-key-threshold=3 \
-format=json > vault-init.json
# The output contains your unseal keys and root token
# Store these securely!
# If not using auto-unseal, you'd need to unseal manually:
# kubectl exec -n vault vault-0 -- vault operator unseal <key1>
# kubectl exec -n vault vault-0 -- vault operator unseal <key2>
# kubectl exec -n vault vault-0 -- vault operator unseal <key3>
# With AWS KMS auto-unseal configured, Vault unseals automatically

Step 2: Configure Authentication Methods

Now we need to tell Vault how applications will authenticate. This is where it gets interesting.

Kubernetes Authentication

Applications running in Kubernetes can authenticate using their service account tokens. No passwords needed.

bash

# Enable Kubernetes auth
vault auth enable kubernetes
# Configure it to trust our cluster
vault write auth/kubernetes/config \
kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
issuer="https://kubernetes.default.svc.cluster.local"

AWS IAM Authentication

For workloads running on EC2, Lambda, or ECS, they can authenticate using their IAM roles.

bash

# Enable AWS auth
vault auth enable aws
# Configure AWS credentials for Vault to verify requests
vault write auth/aws/config/client \
secret_key=$AWS_SECRET_KEY \
access_key=$AWS_ACCESS_KEY
# Create a role that EC2 instances can use
vault write auth/aws/role/ec2-app-role \
auth_type=iam \
bound_iam_principal_arn="arn:aws:iam::ACCOUNT_ID:role/app-server-role" \
policies=app-policy \
ttl=1h
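
From application code, the same login can be done with hvac and the instance's own boto3 credentials. This is a hedged sketch only: it assumes the ec2-app-role created above, VAULT_ADDR in the environment, and an instance profile supplying the AWS keys.

python

# Sketch: authenticate to Vault from an EC2 instance using its IAM role.
# Assumes the ec2-app-role created above and VAULT_ADDR in the environment.
import os

import boto3
import hvac

session = boto3.Session()  # picks up the instance profile credentials automatically
creds = session.get_credentials().get_frozen_credentials()

client = hvac.Client(url=os.environ["VAULT_ADDR"])
client.auth.aws.iam_login(
    creds.access_key,
    creds.secret_key,
    creds.token,
    role="ec2-app-role",
)

print(client.is_authenticated())  # True once the IAM login succeeds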

Azure Authentication

For Azure workloads using Managed Identities:

bash

# Enable Azure auth
vault auth enable azure
# Configure Azure
vault write auth/azure/config \
tenant_id=$AZURE_TENANT_ID \
resource="https://management.azure.com/" \
client_id=$AZURE_CLIENT_ID \
client_secret=$AZURE_CLIENT_SECRET
# Create a role for Azure VMs
vault write auth/azure/role/azure-app-role \
policies=app-policy \
bound_subscription_ids=$AZURE_SUBSCRIPTION_ID \
bound_resource_groups=production-rg \
ttl=1h

GCP Authentication

For GCP workloads using service accounts:

bash

# Enable GCP auth
vault auth enable gcp
# Configure GCP
vault write auth/gcp/config \
credentials=@gcp-credentials.json
# Create a role for GCE instances
vault write auth/gcp/role/gce-app-role \
type="gce" \
policies=app-policy \
bound_projects="my-project-id" \
bound_zones="us-central1-a,us-central1-b" \
ttl=1h

Step 3: Set Up Dynamic Secrets

Here’s where the magic happens. Instead of storing static database passwords, Vault can generate unique credentials on demand and revoke them automatically when they expire.

Dynamic AWS Credentials

bash

# Enable AWS secrets engine
vault secrets enable aws
# Configure root credentials (Vault uses these to create dynamic creds)
vault write aws/config/root \
access_key=$AWS_ACCESS_KEY \
secret_key=$AWS_SECRET_KEY \
region=us-east-1
# Create a role that generates S3 read-only credentials
vault write aws/roles/s3-reader \
  credential_type=iam_user \
  policy_document=-<<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ]
    }
  ]
}
EOF
# Now any authenticated client can get temporary AWS credentials
vault read aws/creds/s3-reader
# Returns:
# access_key AKIA...
# secret_key xyz123...
# lease_duration 1h
# These credentials will be automatically revoked after 1 hour
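
If you would rather consume these credentials from application code than from the CLI, here is a minimal sketch using the hvac and boto3 Python libraries. It assumes VAULT_ADDR and VAULT_TOKEN are set in the environment and that the s3-reader role above exists; note that brand-new IAM credentials can take a few seconds to become usable.

python

# Minimal sketch: read dynamic AWS credentials from Vault and use them with boto3.
import os

import boto3
import hvac

vault = hvac.Client(url=os.environ["VAULT_ADDR"], token=os.environ["VAULT_TOKEN"])

# Each read returns a brand-new IAM user scoped to the s3-reader policy
aws_creds = vault.read("aws/creds/s3-reader")["data"]

s3 = boto3.client(
    "s3",
    aws_access_key_id=aws_creds["access_key"],
    aws_secret_access_key=aws_creds["secret_key"],
)

# List objects using the short-lived credentials; Vault revokes them when the lease expires
for obj in s3.list_objects_v2(Bucket="my-bucket").get("Contents", []):
    print(obj["Key"])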

Dynamic Database Credentials

This is probably my favorite feature. Every time an application needs to connect to a database, it gets a unique username and password that only it knows.

bash

# Enable database secrets engine
vault secrets enable database
# Configure PostgreSQL connection
vault write database/config/production-postgres \
plugin_name=postgresql-database-plugin \
allowed_roles="app-readonly,app-readwrite" \
connection_url="postgresql://{{username}}:{{password}}@db.example.com:5432/appdb?sslmode=require" \
username="vault_admin" \
password="vault_admin_password"
# Create a read-only role
vault write database/roles/app-readonly \
db_name=production-postgres \
creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; \
GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
revocation_statements="DROP ROLE IF EXISTS \"{{name}}\";" \
default_ttl="1h" \
max_ttl="24h"
# Create a read-write role
vault write database/roles/app-readwrite \
db_name=production-postgres \
creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; \
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
revocation_statements="DROP ROLE IF EXISTS \"{{name}}\";" \
default_ttl="1h" \
max_ttl="24h"

Now when your application requests credentials:

bash

vault read database/creds/app-readonly
# Returns:
# username v-kubernetes-app-readonly-abc123
# password A1B2C3D4E5F6...
# lease_duration 1h

Every request gets a different username and password. If credentials are compromised, they expire automatically. And you have a complete audit trail of who accessed what, when.

Dynamic Azure Credentials

bash

# Enable Azure secrets engine
vault secrets enable azure
# Configure Azure
vault write azure/config \
subscription_id=$AZURE_SUBSCRIPTION_ID \
tenant_id=$AZURE_TENANT_ID \
client_id=$AZURE_CLIENT_ID \
client_secret=$AZURE_CLIENT_SECRET
# Create a role that generates Azure Service Principals
vault write azure/roles/contributor \
  ttl=1h \
  azure_roles=-<<EOF
[
  {
    "role_name": "Contributor",
    "scope": "/subscriptions/$AZURE_SUBSCRIPTION_ID/resourceGroups/production-rg"
  }
]
EOF

Step 4: Application Integration

Let’s see how applications actually use Vault. I’ll show you several patterns.

Pattern 1: Vault Agent Sidecar (Kubernetes)

This is my recommended approach for Kubernetes. Vault Agent runs alongside your application and handles authentication and secret retrieval automatically.

yaml

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        # These annotations tell Vault Agent what to do
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "my-app-role"
        vault.hashicorp.com/agent-inject-secret-db-creds: "database/creds/app-readonly"
        vault.hashicorp.com/agent-inject-template-db-creds: |
          {{- with secret "database/creds/app-readonly" -}}
          export DB_USERNAME="{{ .Data.username }}"
          export DB_PASSWORD="{{ .Data.password }}"
          {{- end }}
    spec:
      serviceAccountName: my-app
      containers:
        - name: my-app
          image: my-app:latest
          command: ["/bin/sh", "-c"]
          args:
            - source /vault/secrets/db-creds && ./start-app.sh

When this pod starts, Vault Agent automatically:

  1. Authenticates to Vault using the Kubernetes service account
  2. Retrieves database credentials
  3. Writes them to /vault/secrets/db-creds
  4. Renews the credentials before they expire
  5. Updates the file when credentials change

Your application just reads from a file. It doesn’t need to know anything about Vault.
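
If your service would rather parse the injected file itself than source it in a shell, a rough sketch like the following works against the template shown above (the file path and variable names come from the annotations):

python

# Rough sketch: parse the file Vault Agent renders at /vault/secrets/db-creds.
# The template above writes lines like: export DB_USERNAME="v-kubernetes-..."
def load_injected_creds(path="/vault/secrets/db-creds"):
    creds = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line.startswith("export "):
                key, _, value = line[len("export "):].partition("=")
                creds[key] = value.strip('"')
    return creds

creds = load_injected_creds()
print(creds["DB_USERNAME"])  # rotated automatically by Vault Agent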

Pattern 2: Direct SDK Integration

For applications that need more control, you can use the Vault SDK directly:

python

# Python example
import os

import hvac
import psycopg2


def get_vault_client():
    """Create a Vault client using Kubernetes auth."""
    client = hvac.Client(url=os.environ['VAULT_ADDR'])

    # Read the service account token
    with open('/var/run/secrets/kubernetes.io/serviceaccount/token') as f:
        jwt = f.read()

    # Authenticate to Vault
    client.auth.kubernetes.login(
        role='my-app-role',
        jwt=jwt,
        mount_point='kubernetes'
    )
    return client


def get_database_credentials():
    """Get dynamic database credentials."""
    client = get_vault_client()

    # Request new database credentials
    response = client.secrets.database.generate_credentials(
        name='app-readonly',
        mount_point='database'
    )
    return {
        'username': response['data']['username'],
        'password': response['data']['password'],
        'lease_id': response['lease_id'],
        'lease_duration': response['lease_duration']
    }


def connect_to_database():
    """Connect to the database with dynamic credentials."""
    creds = get_database_credentials()
    connection = psycopg2.connect(
        host='db.example.com',
        database='appdb',
        user=creds['username'],
        password=creds['password']
    )
    return connection
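
One thing the example above does not handle is lease renewal. If your process outlives the default one-hour TTL, renew the lease before it expires (or simply request fresh credentials and reconnect). A minimal sketch using hvac's lease API, assuming the lease_id and lease_duration returned above:

python

import time

def keep_credentials_alive(client, lease_id, lease_duration):
    """Renew a Vault lease shortly before it expires.

    Sketch only: run this in a background thread and fall back to requesting
    brand-new credentials if renewal fails or the role's max_ttl is reached.
    """
    while True:
        # Sleep until roughly two thirds of the lease has elapsed, then renew
        time.sleep(lease_duration * 2 / 3)
        renewal = client.sys.renew_lease(lease_id=lease_id, increment=3600)
        lease_duration = renewal["lease_duration"]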

Pattern 3: External Secrets Operator

If you prefer Kubernetes-native secrets, use External Secrets Operator to sync Vault secrets to Kubernetes:

yaml

# external-secret.yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-secrets
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: vault-backend
  target:
    name: app-secrets
    creationPolicy: Owner
  data:
    - secretKey: api-key
      remoteRef:
        key: secret/data/app/api-key
        property: value
    - secretKey: db-password
      remoteRef:
        key: secret/data/app/database
        property: password

Step 5: Policies and Access Control

Vault policies determine who can access what. Be specific and follow the principle of least privilege.

hcl

# app-policy.hcl

# Allow reading dynamic database credentials
path "database/creds/app-readonly" {
  capabilities = ["read"]
}

# Allow reading application secrets
path "secret/data/app/*" {
  capabilities = ["read", "list"]
}

# Deny access to admin paths
path "sys/*" {
  capabilities = ["deny"]
}

# Allow the app to renew its own token
path "auth/token/renew-self" {
  capabilities = ["update"]
}

Apply the policy:

bash

vault policy write app-policy app-policy.hcl
# Create a Kubernetes auth role that uses this policy
vault write auth/kubernetes/role/my-app-role \
bound_service_account_names=my-app \
bound_service_account_namespaces=production \
policies=app-policy \
ttl=1h

Step 6: Monitoring and Audit

You need visibility into who’s accessing secrets. Enable audit logging:

bash

# Enable file audit device
vault audit enable file file_path=/vault/audit/vault-audit.log
# Enable syslog for centralized logging
vault audit enable syslog tag="vault" facility="AUTH"
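
Because each audit entry is a single JSON object per line, you can slice the log with a few lines of scripting. Here is a rough sketch (field names can vary slightly between Vault versions) that summarizes which authenticated identities read database credentials:

python

# Rough sketch: summarize who read database credentials from the Vault audit log.
# Assumes the file audit device configured above.
import json
from collections import Counter

reads = Counter()

with open("/vault/audit/vault-audit.log") as log:
    for line in log:
        entry = json.loads(line)
        request = entry.get("request", {})
        if entry.get("type") == "response" and request.get("path", "").startswith("database/creds/"):
            # display_name identifies the authenticated entity (e.g. a Kubernetes service account)
            reads[entry.get("auth", {}).get("display_name", "unknown")] += 1

for entity, count in reads.most_common():
    print(f"{entity}: {count} credential reads")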

For monitoring, Vault exposes Prometheus metrics:

yaml

# ServiceMonitor for Prometheus
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: vault
  namespace: vault
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: vault
  endpoints:
    - port: http
      path: /v1/sys/metrics
      params:
        format: ["prometheus"]
      scheme: https
      tlsConfig:
        insecureSkipVerify: true

Key metrics to alert on:

yaml

# Prometheus alerting rules
groups:
  - name: vault
    rules:
      - alert: VaultSealed
        expr: vault_core_unsealed == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Vault is sealed"
          description: "Vault instance {{ $labels.instance }} is sealed and unable to serve requests"
      - alert: VaultTooManyPendingTokens
        expr: vault_token_count > 10000
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Too many Vault tokens"
          description: "Vault has {{ $value }} active tokens. Consider reducing TTLs."
      - alert: VaultLeadershipLost
        expr: increase(vault_core_leadership_lost_count[5m]) > 0
        labels:
          severity: warning
        annotations:
          summary: "Vault leadership changes detected"

Common Mistakes to Avoid

Let me save you some headaches by sharing mistakes I’ve seen (and made):

Mistake 1: Using the root token for applications

The root token has unlimited access. Create specific policies and tokens for each application.

Mistake 2: Not rotating the root token

After initial setup, generate a new root token and revoke the original:

bash

vault operator generate-root -init
# Follow the process to generate a new root token
vault token revoke <old-root-token>

Mistake 3: Setting TTLs too long

Short TTLs mean compromised credentials are valid for less time. Start with 1 hour and adjust based on your needs.

Mistake 4: Not testing recovery procedures

Practice unsealing Vault. Practice recovering from backup. Do it regularly. The worst time to learn is during an actual incident.

Mistake 5: Storing unseal keys together

Distribute unseal keys to different people in different locations. Use a threshold scheme (3 of 5) so no single person can unseal Vault.

Regards, Enjoy the Cloud
Osama

Encryption on Azure

What is encryption?

Encryption is the process of making data unreadable and unusable to unauthorized viewers. To use or read the encrypted data, it must be decrypted, which requires the use of a secret key. 

There are two different types:

  • Symmetric encryption: the same key is used to both encrypt and decrypt the data (see the short sketch after this list).
  • Asymmetric encryption: different keys are used, for example a public and private key pair.
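
To make the symmetric case concrete, here is a tiny sketch of my own (not Azure-specific) using the Python cryptography library's Fernet recipe: the same key both encrypts and decrypts.

python

# Tiny illustration of symmetric encryption: one secret key encrypts and decrypts.
# Uses the Python "cryptography" library (pip install cryptography); not Azure-specific.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # the single shared secret key
cipher = Fernet(key)

token = cipher.encrypt(b"customer credit card number")   # unreadable without the key
print(cipher.decrypt(token))                              # the same key recovers the data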

Both types can be applied in two different ways:

  • Encryption at rest, which protects stored data, for example data in a database or in a storage account.
  • Encryption in transit, which protects data actively moving from one location to another.

Azure provides several types of encryption:

  • Encrypt raw storage
    • Azure Storage Service Encryption: encrypts your data before persisting it to Azure Managed Disks, Azure Blob storage, Azure Files, or Azure Queue storage, and decrypts the data before retrieval.
    • Encrypting virtual machine disks: low-level encryption protection for data written to the physical disk.
  • Azure Disk Encryption: encrypts the actual Windows or Linux disks of your virtual machines; the best way to manage the keys is with Azure Key Vault.
  • Encrypt databases
    • Transparent Data Encryption: helps protect Azure SQL Database and Azure Data Warehouse against the threat of malicious activity. It performs real-time encryption and decryption of the database.

Tying these together is Azure Key Vault, a cloud service for storing your application secrets. Key Vault helps you control your applications’ secrets by keeping them in a single, centralized location. Why should you use it?

  • Centralized secret management.
  • Securely stored secrets and keys.
  • Monitoring of access and use.
  • Simplified administration of application secrets.
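
To give a feel for what using Key Vault looks like from code, here is a small, hedged sketch with Azure's Python SDK (azure-identity and azure-keyvault-secrets); the vault name is a placeholder, and it assumes the caller already has permission through an access policy or RBAC role.

python

# Small sketch: store and read an application secret in Azure Key Vault.
# Assumes a vault named "my-vault" exists and the caller has permission.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url="https://my-vault.vault.azure.net/",
    credential=DefaultAzureCredential(),  # works with managed identity, CLI login, etc.
)

client.set_secret("db-password", "S3cr3t!")       # store the secret centrally
print(client.get_secret("db-password").value)     # the application reads it at runtime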

There are also two different kinds of certificates in Azure that help you secure, for example, a website or an application. You need to know that certificates used in Azure are x.509 v3 and can be signed by a trusted certificate authority, or they can be self-signed.

Types of certificates

  • Service certificates are used for cloud services
  • Management certificates are used for authenticating with the management API

Service certificates

Service certificates are attached to cloud services and enable secure communication to and from the service. For example, if you deploy a website, you would want to supply a certificate that can authenticate an exposed HTTPS endpoint. Service certificates, which are defined in your service definition, are automatically deployed to the VM that is running an instance of your role.

Management certificates

Management certificates allow you to authenticate with the classic deployment model. Many programs and tools (such as Visual Studio or the Azure SDK) use these certificates to automate configuration and deployment of various Azure services. However, these types of certificates are not related to cloud services.

Note that you can use Azure Key Vault to store your certificates.

Cheers

Osama

Oracle Database Application Security Book

Finally …

The Book is alive

For the first time, the book discusses critical security issues such as database threats and how to avoid them. It also includes advanced topics about Oracle Internet Directory and Oracle Access Manager, and how to implement a full single sign-on cycle.

Focus on the security aspects of designing, building, and maintaining a secure Oracle Database application. Starting with data encryption, you will learn to work with transparent data, back-up, and networks. You will then go through the key principles of audits, where you will get to know more about identity preservation, policies and fine-grained audits. Moving on to virtual private databases, you’ll set up and configure a VPD to work in concert with other security features in Oracle, followed by tips on managing configuration drift, profiles, and default users.

What You Will Learn:

  • Work with Oracle Internet Directory using the command line and the console.
  • Integrate Oracle Access Manager with different applications.
  • Work with the Oracle Identity Manager console and connectors, and create your own custom connector.
  • Troubleshoot issues with OID, OAM, and OIM.
  • Dive deep into file system and network security concepts.
  • A first-of-its-kind chapter covering the most critical real-life database threats.

 

You can buy the book now from amazon here

 

Cheers

Osama

Oracle Password Security

As a Certified Ethical Hacker and penetration tester, I am always asked whether Oracle passwords can be cracked. You need to know that if a hacker wants to get into your database, eventually he will; all you can do is make it harder for him, so don’t choose a password that is easy to crack.

I post these topics not so they can be used in the wrong way; as a DBA you need to know how to secure your database and make it as close to unbreakable as possible.

For example, several tools used to crack Oracle passwords can be found for free on the Internet. Red Database Security (an amazing company whose website provides articles and topics about Oracle security) offers some of these tools for free.

Thank you
Osama Mustafa

Oracle security Function for password changing

Check this function, which is used for changing a user's password. You need to watch out for functions like this, because they concatenate user input straight into dynamic SQL and are open to SQL injection; I post this function only as an example.

FUNCTION CHGPWD (
  P_USER VARCHAR2,
  P_PWD  VARCHAR2)
RETURN BOOLEAN IS
  L_STMT VARCHAR2(255);
BEGIN
  L_STMT := 'ALTER USER "' || P_USER || '" IDENTIFIED BY "' || P_PWD || '"';
  EXECUTE IMMEDIATE L_STMT;
  RETURN TRUE;
END;

Thank you

I will Post More and More Topics about Oracle security

Data Masking In Oracle/Column Masking

Or We Can Call it VPD : Virtual Private Database

What does data masking mean?

It is a simple way to hide your valuable data from certain users without having to apply encrypt/decrypt techniques and increase the column width to accommodate the new string, as in the old days. Through some simple configuration you can create policies that show your important columns as null, without rewriting a single line of code on the application side.

There are three steps to accomplish column masking:

  1. A function to be used by the policy (the policy function), created in the next step.
  2. Use the dbms_rls package to create the policy.
  3. Assign the “exempt access policy” privilege to users to be excluded from the policy. These users can see all data with no masking.

Step 1: Create the Policy Function

CREATE OR REPLACE
FUNCTION vpd_function (obj_owner IN VARCHAR2, obj_name IN VARCHAR2)
RETURN VARCHAR2
AS
BEGIN
  RETURN 'rowid = ''0''';
END vpd_function;
/

The function above is used for column masking. If the predicate it returns always evaluates to true, every user can see the correct data; here it returns a predicate that is always false (rowid = '0'), so the protected column is masked.


Step 2: Create the Policy

BEGIN
  DBMS_RLS.ADD_POLICY(
    object_schema         => 'SCOTT',
    object_name           => 'EMP',
    policy_name           => 'scott_emp_policy',
    function_schema       => 'SYSTEM',
    policy_function       => 'vpd_function',
    sec_relevant_cols     => 'JOB',
    policy_type           => DBMS_RLS.SHARED_STATIC,
    sec_relevant_cols_opt => DBMS_RLS.ALL_ROWS);
END;
/

exempt access policy: grant this system privilege to users who should be excluded from the policy. These users can see all the correct data with no masking.

Important Views :

dba_policies
v$vpd_policy

Enjoy with Security

Osama Mustafa

Oracle Security Tips / SQLNET.ORA Part 2

Hi All ,

I posted before about sqlnet.ora and its invited/excluded node parameters. Now assume that I want to prevent SYSDBA from accessing the database without a password, the simple way.

SQLNET.AUTHENTICATION_SERVICES=NONE 



Setting the SQLNET.AUTHENTICATION_SERVICES parameter to NONE in the sqlnet.ora file makes it impossible to connect to the database as SYSDBA without a password (sqlplus / as sysdba).

This parameter may also have the values NTS, for Windows NT native authentication, or ALL, for all authentication methods.

Authentication Methods Available with Oracle Advanced Security:

  • kerberos5 for Kerberos authentication
  • cybersafe for Cybersafe authentication
  • radius for RADIUS authentication
  • dcegssapi for DCE GSSAPI authentication

If authentication has been installed, it is recommended that this parameter be set to either none or to one of the authentication methods.

Enjoy

Thank you
Osama Mustafa

Limit Access to your Database

There is a simple, easy way to limit access to your database and prevent people from messing around. We all know there is a file called “sqlnet.ora”; all you have to do is follow the steps below and add what you want:

Sqlnet.ora : $ORACLE_HOME/network/admin

TCP.EXCLUDED_NODES

Purpose
Use the parameter TCP.EXCLUDED_NODES to specify which clients are denied access to the database.

Example
TCP.EXCLUDED_NODES=(finance.us.acme.com, mktg.us.acme.com, 144.25.5.25)

TCP.INVITED_NODES

Purpose
Use the parameter TCP.INVITED_NODES to specify which clients are allowed access to the database.
 This list takes precedence over the TCP.EXCLUDED_NODES parameter if both lists are present.

Example
TCP.INVITED_NODES=(sales.us.acme.com, hr.us.acme.com, 144.185.5.73)

TCP.VALIDNODE_CHECKING

 Purpose
Use the parameter TCP.VALIDNODE_CHECKING to check the TCP.INVITED_NODES and TCP.EXCLUDED_NODES lists and determine which clients to allow or deny access.

Example
TCP.VALIDNODE_CHECKING=yes
TCP.VALIDNODE_CHECKING=no

A simple way to keep your database clean. You may need to restart your listener after this.

Thank you
Osama Mustafa

Threats to Database Security Part 1

Today, every company needs to save data and information. The kind of data differs from one company to another (pictures, employee and customer data, and so on); it is rare nowadays to find companies using paper to save important data. We don’t disagree on database type; sure, Oracle is the best, but there are other products in the market today, such as Oracle, Microsoft, and k2, and all these products have one thing in common, called threats.

Before that, we need to know what we mean by database security, since it is the subject of this article.
Database security can be defined as a system or process by which the confidentiality, integrity, and availability (CIA) of the database are protected. Unauthorized entry or access to a database server signifies a loss of confidentiality; unauthorized alteration of the available data signifies a loss of integrity; and lack of access to database services signifies a loss of availability. Loss of one or more of these basic facets will have a significant impact on the security of the database.
This is the common definition of database security. For an illustration of this concept, imagine that the website of a company contains information like who they are, what they do, and what prospective customers have to do to contact them with their queries. In this case, the availability of the database services is more important than other factors like the confidentiality or integrity of the database.
Threats and risks to databases have increased, and therefore the need for securing databases has also increased. When it comes to securing a database, lots of things have to be taken care of. If you were paying attention to the previous discussion, you will know that I am talking about:
1- Confidentiality
2- Integrity
3- Availability

Confidentiality

What I mean by this word is simple: encrypt the data stored in the database. There are two types of encryption in a database:

1- Data-in-transit:
This refers to data that is moving within the network, for example sensitive data sent through network layers or over the Internet. A hacker can gain access to this sensitive data by eavesdropping. When this happens, the confidentiality of the data is compromised. Encrypting data-in-transit avoids such compromises.

2- Data-at-rest:
It is possible for a hacker to hack the data that is stored in the database. Encrypting data-at-rest prevents such data leakage.

Integrity
 
I am not going to say it is too simple, because I don’t want readers to kick me; but honestly, for me it is simple. Integrity is about which users have to be given what permissions in the database. For example, data related to employee information is stored in a database. An employee may have permission to view the records and alter only part of the information, such as his contact details, whereas a person in the human resources department will have more privileges.

To make sure everything is going to be OK, just follow these steps:

  • Change the passwords once the database is installed.
  • Enforce policies that require strong passwords. A good idea is to have a policy of changing the passwords once a month.
  • Does your company have multiple database administrators? If yes, segregate the duties among these database administrators.

Simple steps (again, simple) can save you and your company.

Now for the last one, but I am not finished yet:



Availability

Databases must not have unplanned downtime. To ensure this, the following steps have to be taken:

  • To ensure high availability, usage of database clusters is recommended. 
  • Databases should be secured against security vulnerabilities.
  • Back up the data at periodic intervals to ensure data recovery in case of application issues.

As we all know, there is nothing perfect, so when we talk about security, we also talk about threats and attackers. With the increase in usage of databases, the frequency of attacks against those databases has also increased; database attacks are an increasing trend these days. What is the reason behind database attacks? One reason is the increase in access to data stored in databases. When data is accessed by many people, the chances of data theft increase. In the past, database attacks were prevalent but fewer in number, as hackers attacked networks more to show that it was possible than to sell proprietary information. Another reason for database attacks is to gain money by selling sensitive information, such as credit card numbers, Social Security numbers, etc. We previously defined database security and talked about common database security concepts. Now let’s look at the various types of threats that affect database security.



Thank You 
Written By : Osama Mustafa 
Consider this part one, since I will post another one related to this.