Deep Dive into Oracle Kubernetes Engine Security and Networking in Production

Oracle Kubernetes Engine is often introduced as a managed Kubernetes service, but its real strength only becomes clear when you operate it in production. OKE tightly integrates with OCI networking, identity, and security services, which gives you a very different operational model compared to other managed Kubernetes platforms.

This article walks through OKE from a production perspective, focusing on security boundaries, networking design, ingress exposure, private access, and mutual TLS. The goal is not to explain Kubernetes basics, but to explain how OKE behaves when you run regulated, enterprise workloads.

Understanding the OKE Networking Model

OKE does not abstract networking away from you. Every cluster is deeply tied to OCI VCN constructs.

Core Components

An OKE cluster consists of:

  • A managed Kubernetes control plane
  • Worker nodes running in OCI subnets
  • OCI networking primitives controlling traffic flow

Key OCI resources involved:

  • Virtual Cloud Network
  • Subnets for control plane and workers
  • Network Security Groups
  • Route tables
  • OCI Load Balancers

Unlike some platforms, security in OKE is enforced at multiple layers simultaneously.

Worker Node and Pod Networking

OKE uses OCI VCN-native networking. Pods receive IPs from the subnet CIDR through the OCI CNI plugin.

What this means in practice

  • Pods are first-class citizens on the VCN
  • Pod IPs are routable within the VCN
  • Network policies and OCI NSGs both apply

Example subnet design:

VCN: 10.0.0.0/16

Worker Subnet: 10.0.10.0/24
Load Balancer Subnet: 10.0.20.0/24
Private Endpoint Subnet: 10.0.30.0/24

This design allows you to:

  • Keep workers private
  • Expose only ingress through OCI Load Balancer
  • Control east-west traffic using Kubernetes NetworkPolicies and OCI NSGs together

Security Boundaries in OKE

Security in OKE is layered by design.

Layer 1: OCI IAM and Compartments

OKE clusters live inside OCI compartments. IAM policies control:

  • Who can create or modify clusters
  • Who can access worker nodes
  • Who can manage load balancers and subnets

Example IAM policy snippet:

Allow group OKE-Admins to manage cluster-family in compartment OKE-PROD
Allow group OKE-Admins to manage virtual-network-family in compartment OKE-PROD

This separation is critical for regulated environments.

Layer 2: Network Security Groups

Network Security Groups act as virtual firewalls at the VNIC level.

Typical NSG rules:

  • Allow node-to-node communication
  • Allow ingress from load balancer subnet only
  • Block all public inbound traffic

Example inbound NSG rule:

Source: 10.0.20.0/24
Protocol: TCP
Port: 443

This ensures only the OCI Load Balancer can reach your ingress controller.
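If you manage NSGs with the OCI Python SDK rather than the console, the same rule can be added programmatically. A minimal sketch, assuming the NSG already exists and its OCID is a placeholder:

import oci

config = oci.config.from_file()
vnet = oci.core.VirtualNetworkClient(config)

nsg_id = "ocid1.networksecuritygroup.oc1..example"  # placeholder NSG OCID

# Allow HTTPS only from the load balancer subnet
rule = oci.core.models.AddSecurityRuleDetails(
    direction="INGRESS",
    protocol="6",  # TCP
    source="10.0.20.0/24",
    source_type="CIDR_BLOCK",
    tcp_options=oci.core.models.TcpOptions(
        destination_port_range=oci.core.models.PortRange(min=443, max=443)
    )
)

vnet.add_network_security_group_security_rules(
    nsg_id,
    oci.core.models.AddNetworkSecurityGroupSecurityRulesDetails(security_rules=[rule])
)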

Layer 3: Kubernetes Network Policies

NetworkPolicies control pod-level traffic.

Example policy allowing traffic only from ingress namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-ingress
  namespace: app-prod
spec:
  podSelector: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              role: ingress

Because the empty podSelector matches every pod in app-prod and only the ingress namespace is allowed in, all other ingress is denied, which blocks lateral movement into the namespace by default.

Ingress Design in OKE

OKE integrates natively with OCI Load Balancer.

Public vs Private Ingress

You can deploy ingress in two modes:

  • Public Load Balancer
  • Internal Load Balancer

For production workloads, private ingress is strongly recommended.

Example service annotation for private ingress:

service.beta.kubernetes.io/oci-load-balancer-internal: "true"
service.beta.kubernetes.io/oci-load-balancer-subnet1: ocid1.subnet.oc1..

This ensures the load balancer has no public IP.

Private Access to the Cluster Control Plane

OKE supports private API endpoints.

When enabled:

  • The Kubernetes API is accessible only from the VCN
  • No public endpoint exists

This is critical for Zero Trust environments.

Operational impact:

  • kubectl access requires VPN, Bastion, or OCI Cloud Shell inside the VCN
  • CI/CD runners must have private connectivity

This dramatically reduces the attack surface.
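For automation that already runs inside the VCN (a bastion host, CI/CD runner, or Cloud Shell with private access), you can generate a kubeconfig pointed at the private endpoint with the Python SDK. A hedged sketch, where the cluster OCID is a placeholder:

import oci

config = oci.config.from_file()
ce = oci.container_engine.ContainerEngineClient(config)

cluster_id = "ocid1.cluster.oc1..example"  # placeholder cluster OCID

# Request a kubeconfig that targets the cluster's private API endpoint
resp = ce.create_kubeconfig(
    cluster_id,
    create_cluster_kubeconfig_content_details=oci.container_engine.models.CreateClusterKubeconfigContentDetails(
        token_version="2.0.0",
        endpoint="PRIVATE_ENDPOINT"
    )
)

with open("kubeconfig-private", "w") as f:
    f.write(resp.data.text)  # the response body is the kubeconfig YAML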

Mutual TLS Inside OKE

TLS termination at ingress is not enough for sensitive workloads. Many enterprises require mTLS between services.

Typical mTLS Architecture

  • TLS termination at ingress
  • Internal mTLS between services
  • Certificate management via Vault or cert-manager

Example cert-manager issuer using OCI Vault:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: oci-vault-issuer
spec:
  vault:
    server: https://vault.oci.oraclecloud.com
    path: pki/sign/oke

Each service receives:

  • Its own certificate
  • Short-lived credentials
  • Automatic rotation

Traffic Flow Example

End-to-end request path:

  1. Client connects to OCI Load Balancer
  2. Load Balancer forwards traffic to NGINX Ingress
  3. Ingress enforces TLS and headers
  4. Service-to-service traffic uses mTLS
  5. NetworkPolicy restricts lateral movement
  6. NSGs enforce VCN-level boundaries

Every hop is authenticated and encrypted.


Observability and Security Visibility

OKE integrates with:

  • OCI Logging
  • OCI Flow Logs
  • Kubernetes audit logs

This allows:

  • Tracking ingress traffic
  • Detecting unauthorized access attempts
  • Correlating pod-level events with network flows

Regards
Osama

Building a Real-Time Data Enrichment & Inference Pipeline on AWS Using Kinesis, Lambda, DynamoDB, and SageMaker

Modern cloud applications increasingly depend on real-time processing, especially when dealing with fraud detection, personalization, IoT telemetry, or operational monitoring.
In this post, we’ll build a fully functional AWS pipeline that:

  • Streams events using Amazon Kinesis
  • Enriches and transforms them via AWS Lambda
  • Stores real-time feature data in Amazon DynamoDB
  • Performs machine-learning inference using a SageMaker Endpoint

1. Architecture Overview

Events flow from producers into Kinesis, are enriched by a Lambda function, persisted as features in DynamoDB, and finally scored by a SageMaker endpoint when a prediction is requested.

2. Step-By-Step Pipeline Build


2.1. Create a Kinesis Data Stream

aws kinesis create-stream \
  --stream-name RealtimeEvents \
  --shard-count 2 \
  --region us-east-1

This stream will accept incoming events from your apps, IoT devices, or microservices.
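A producer can be a few lines of boto3. A minimal sketch, where the event fields match what the enrichment Lambda below expects:

import json
import boto3
from datetime import datetime, timezone

kinesis = boto3.client("kinesis", region_name="us-east-1")

event = {
    "userId": "user-42",
    "metric": 17.5,
    "timestamp": datetime.now(timezone.utc).isoformat()
}

# Partition by userId so all events for a user land on the same shard
kinesis.put_record(
    StreamName="RealtimeEvents",
    Data=json.dumps(event),
    PartitionKey=event["userId"]
)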


2.2. DynamoDB Table for Real-Time Features

aws dynamodb create-table \
  --table-name UserFeatureStore \
  --attribute-definitions AttributeName=userId,AttributeType=S \
  --key-schema AttributeName=userId,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST \
  --region us-east-1

This table holds live user features, updated every time an event arrives.


2.3. Lambda Function (Real-Time Data Enrichment)

This Lambda:

  • Reads events from Kinesis
  • Computes simple features (e.g., last event time, rolling count)
  • Saves enriched data to DynamoDB

import json
import base64
import boto3
from datetime import datetime, timedelta

ddb = boto3.resource("dynamodb")
table = ddb.Table("UserFeatureStore")

def lambda_handler(event, context):

    for record in event["Records"]:
        # Kinesis delivers each record payload base64-encoded
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))

        user = payload["userId"]
        metric = payload["metric"]
        ts = datetime.fromisoformat(payload["timestamp"])

        # Fetch old features
        old = table.get_item(Key={"userId": user}).get("Item", {})

        last_ts = old.get("lastTimestamp")
        count = old.get("count", 0)

        # Update rolling 5-minute count
        if last_ts:
            prev_ts = datetime.fromisoformat(last_ts)
            if ts - prev_ts < timedelta(minutes=5):
                count += 1
            else:
                count = 1
        else:
            count = 1

        # Save new enriched features
        table.put_item(Item={
            "userId": user,
            "lastTimestamp": ts.isoformat(),
            "count": count,
            "lastMetric": metric
        })

    return {"status": "ok"}

Attach the Lambda to the Kinesis stream.
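You can do this in the console or script the event source mapping with boto3. A sketch, assuming the function has already been deployed under the placeholder name EnrichmentFunction:

import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

# Wire the Kinesis stream to the enrichment Lambda
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:kinesis:us-east-1:123456789012:stream/RealtimeEvents",
    FunctionName="EnrichmentFunction",  # placeholder function name
    StartingPosition="LATEST",
    BatchSize=100,
    MaximumBatchingWindowInSeconds=5
)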


2.4. Creating a SageMaker Endpoint for Inference

Train your model offline, then deploy it. The endpoint configuration below assumes a model named MyInferenceModel has already been registered with create-model:

aws sagemaker create-endpoint-config \
  --endpoint-config-name RealtimeInferenceConfig \
  --production-variants VariantName=AllInOne,ModelName=MyInferenceModel,InitialInstanceCount=1,InstanceType=ml.m5.large

aws sagemaker create-endpoint \
  --endpoint-name RealtimeInference \
  --endpoint-config-name RealtimeInferenceConfig


2.5. API Layer Performing Live Inference

Your application now requests predictions like this:

import boto3
import json

runtime = boto3.client("sagemaker-runtime")
ddb = boto3.resource("dynamodb").Table("UserFeatureStore")

def predict(user_id, extra_input):

    user_features = ddb.get_item(Key={"userId": user_id}).get("Item")

    payload = {
        "userId": user_id,
        "features": user_features,
        "input": extra_input
    }

    response = runtime.invoke_endpoint(
        EndpointName="RealtimeInference",
        ContentType="application/json",
        Body=json.dumps(payload)
    )

    return json.loads(response["Body"].read())

This combines live enriched features with model inference, so every prediction reflects the user's most recent activity.


3. Production Considerations

Performance

  • Enable Lambda concurrency
  • Use DynamoDB DAX caching
  • Use Kinesis Enhanced Fan-Out for high throughput

Security

  • Use IAM roles with least privilege
  • Encrypt Kinesis, Lambda, DynamoDB, and SageMaker with KMS

Monitoring

  • CloudWatch Metrics
  • CloudWatch Logs Insights queries
  • DynamoDB capacity alarms
  • SageMaker Model error monitoring

Cost Optimization

  • Use PAY_PER_REQUEST DynamoDB
  • Use Lambda Power Tuning
  • Scale SageMaker endpoints with autoscaling (see the sketch below)
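As an example of that last point, endpoint autoscaling is configured through Application Auto Scaling. A hedged sketch that tracks invocations per instance; the endpoint and variant names match the ones created earlier, and the target value is an assumption you should tune:

import boto3

autoscaling = boto3.client("application-autoscaling", region_name="us-east-1")

resource_id = "endpoint/RealtimeInference/variant/AllInOne"

# Register the endpoint variant as a scalable target (1-4 instances)
autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=1,
    MaxCapacity=4
)

# Scale on invocations per instance (target value is illustrative)
autoscaling.put_scaling_policy(
    PolicyName="RealtimeInferenceScaling",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 60
    }
)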

Implementing a Real-Time Anomaly Detection Pipeline on OCI Using Streaming Data, Oracle Autonomous Database & ML

Detecting unusual patterns in real time is critical to preventing outages, catching fraud, ensuring SLA compliance, and maintaining high-quality user experiences.
In this post, we build a real working pipeline on OCI that:

  • Ingests streaming data
  • Computes features in near-real time
  • Stores results in Autonomous Database
  • Runs anomaly detection logic
  • Sends alerts and exposes dashboards

This guide contains every technical step, including:
Streaming → Function → Autonomous DB → Anomaly Logic → Notifications → Dashboards

1. Architecture Overview

Components Used

  • OCI Streaming
  • OCI Functions
  • Oracle Autonomous Database
  • DBMS_SCHEDULER for anomaly detection job
  • OCI Notifications
  • Oracle Analytics Cloud / Grafana

2. Step-by-Step Implementation


2.1 Create OCI Streaming Stream

oci streaming stream create \
  --compartment-id $COMPARTMENT_OCID \
  --display-name "anomaly-events-stream" \
  --partitions 3
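To push events from Python, the message key and value must be base64-encoded. A minimal producer sketch using the OCI SDK, where the stream OCID is a placeholder:

import base64
import json
import oci

config = oci.config.from_file()
stream_id = "ocid1.stream.oc1..example"  # placeholder stream OCID

# Look up the messages endpoint, then create a client bound to it
admin = oci.streaming.StreamAdminClient(config)
endpoint = admin.get_stream(stream_id).data.messages_endpoint
client = oci.streaming.StreamClient(config, service_endpoint=endpoint)

event = {"id": "1", "time": "2025-01-01T10:00:00", "metric": 58}

message = oci.streaming.models.PutMessagesDetailsEntry(
    key=base64.b64encode(b"1").decode(),
    value=base64.b64encode(json.dumps(event).encode()).decode()
)

client.put_messages(stream_id, oci.streaming.models.PutMessagesDetails(messages=[message]))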

2.2 Autonomous Database Table

CREATE TABLE raw_events (
  event_id       VARCHAR2(50),
  event_time     TIMESTAMP,
  metric_value   NUMBER,
  feature1       NUMBER,
  feature2       NUMBER,
  processed_flag CHAR(1) DEFAULT 'N',
  anomaly_flag   CHAR(1) DEFAULT 'N',
  CONSTRAINT pk_raw_events PRIMARY KEY(event_id)
);

2.3 OCI Function – Feature Extraction

func.py:

import io
import json
import cx_Oracle
from datetime import datetime

def handler(ctx, data: io.BytesIO = None):
    # The Functions runtime passes the event payload as a BytesIO object
    event = json.loads(data.getvalue())

    evt_id = event['id']
    evt_time = datetime.fromisoformat(event['time'])
    value = event['metric']

    # DB connection (in production, read credentials from OCI Vault rather than literals)
    conn = cx_Oracle.connect(user='USER', password='PWD', dsn='dsn')
    cur = conn.cursor()

    # Fetch the most recent previous value, if any, to derive delta/ratio features
    cur.execute("""
        SELECT metric_value FROM raw_events
        ORDER BY event_time DESC
        FETCH FIRST 1 ROW ONLY
    """)
    prev = cur.fetchone()
    prev_val = prev[0] if prev else 1.0

    # Compute features
    feature1 = value - prev_val
    feature2 = value / prev_val if prev_val else 0

    # Insert the new event
    cur.execute("""
        INSERT INTO raw_events(event_id, event_time, metric_value, feature1, feature2)
        VALUES(:1, :2, :3, :4, :5)
    """, (evt_id, evt_time, value, feature1, feature2))

    conn.commit()
    cur.close()
    conn.close()

    return "ok"

Deploy the function and attach the streaming trigger.


2.4 Anomaly Detection Job (DBMS_SCHEDULER)

Wrap the detection logic in a procedure so the scheduler job below can call it:

CREATE OR REPLACE PROCEDURE anomaly_detection_proc AS
  meanv  NUMBER;
  stdv   NUMBER;
  zscore NUMBER;
BEGIN
  -- Compute population statistics once per run
  SELECT AVG(feature1), STDDEV(feature1) INTO meanv, stdv FROM raw_events;

  FOR rec IN (
    SELECT event_id, feature1
    FROM raw_events
    WHERE processed_flag = 'N'
  ) LOOP
    zscore := (rec.feature1 - meanv) / NULLIF(stdv, 0);

    IF ABS(zscore) > 3 THEN
      UPDATE raw_events SET anomaly_flag = 'Y' WHERE event_id = rec.event_id;
    END IF;

    UPDATE raw_events SET processed_flag = 'Y' WHERE event_id = rec.event_id;
  END LOOP;

  COMMIT;
END;
/

Schedule this to run every 2 minutes:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'ANOMALY_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN anomaly_detection_proc; END;',
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=2;',
    enabled         => TRUE
  );
END;


2.5 Notifications

oci ons topic create \
  --compartment-id $COMPARTMENT_OCID \
  --name "AnomalyAlerts"

In the DB, add a trigger that marks newly detected anomalies. DBMS_OUTPUT is only a placeholder here; the actual alert is published to the ONS topic, as sketched after the trigger:

CREATE OR REPLACE TRIGGER notify_anomaly
AFTER UPDATE ON raw_events
FOR EACH ROW
WHEN (NEW.anomaly_flag='Y' AND OLD.anomaly_flag='N')
BEGIN
  DBMS_OUTPUT.PUT_LINE('Anomaly detected for event ' || :NEW.event_id);
END;
/
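To actually push an alert, something has to publish to the AnomalyAlerts topic, for example a small poller or an OCI Function that queries for newly flagged rows. A hedged Python sketch, where the topic OCID is a placeholder:

import oci

config = oci.config.from_file()
ons = oci.ons.NotificationDataPlaneClient(config)

topic_id = "ocid1.onstopic.oc1..example"  # placeholder topic OCID

# Publish a simple alert message to the AnomalyAlerts topic
ons.publish_message(
    topic_id,
    oci.ons.models.MessageDetails(
        title="Anomaly detected",
        body="raw_events contains new rows with anomaly_flag = 'Y'"
    )
)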


2.6 Dashboarding

You may use:

  • Oracle Analytics Cloud (OAC)
  • Grafana + ADW Integration
  • Any BI tool with SQL

Example Query:

SELECT event_time, metric_value, anomaly_flag 
FROM raw_events
ORDER BY event_time;

3. Terraform + OCI CLI Script Bundle

Terraform – Streaming + Function + Policies

resource "oci_streaming_stream" "anomaly" {
  name           = "anomaly-events-stream"
  partitions     = 3
  compartment_id = var.compartment_id
}

resource "oci_functions_application" "anomaly_app" {
  compartment_id = var.compartment_id
  display_name   = "anomaly-function-app"
  subnet_ids     = var.subnets
}

Terraform Notification Topic

resource "oci_ons_notification_topic" "anomaly" {
  compartment_id = var.compartment_id
  name           = "AnomalyAlerts"
}

CLI Insert Test Events

oci streaming stream message put \
  --stream-id $STREAM_OCID \
  --messages '[{"key":"1","value":"{\"id\":\"1\",\"time\":\"2025-01-01T10:00:00\",\"metric\":58}"}]'

Deploying Real-Time Feature Store on Amazon SageMaker Feature Store with Amazon Kinesis Data Streams & Amazon DynamoDB for Low-Latency ML Inference

Modern ML inference often depends on up-to-date features (customer behaviour, session counts, recent events) that need to be available in low-latency operations. In this article you’ll learn how to build a real-time feature store on AWS using:

  • Amazon Kinesis Data Streams for streaming events
  • AWS Lambda for processing and feature computation
  • Amazon DynamoDB (or SageMaker Feature Store) for storage of feature vectors
  • Amazon SageMaker Endpoint for low-latency inference

You’ll see end-to-end code snippets and architecture guidance so you can implement this in your environment.

1. Architecture Overview

The pipeline works like this:

  1. Front-end/app produces events (e.g., user click, transaction) → published to Kinesis.
  2. A Lambda function consumes from Kinesis, computes derived features (for example: rolling window counts, recency, session features).
  3. The Lambda writes/updates these features into a DynamoDB table (or directly into SageMaker Feature Store).
  4. When a request arrives for inference, the application fetches the current feature set from DynamoDB (or Feature Store) and calls a SageMaker endpoint.
  5. Optionally, after inference you can stream feedback events for model refinement.

This architecture provides real-time feature freshness and low-latency inference.

2. Setup & Implementation

2.1 Create the Kinesis data stream

aws kinesis create-stream \
  --stream-name UserEventsStream \
  --shard-count 2 \
  --region us-east-1

2.2 Create DynamoDB table for features

aws dynamodb create-table \
  --table-name RealTimeFeatures \
  --attribute-definitions AttributeName=userId,AttributeType=S \
  --key-schema AttributeName=userId,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST \
  --region us-east-1

2.3 Lambda function to compute features

Here is a Python snippet (using boto3) which will be triggered by Kinesis:

import json
import base64
import boto3
from datetime import datetime, timedelta

dynamo = boto3.resource('dynamodb', region_name='us-east-1')
table = dynamo.Table('RealTimeFeatures')

def lambda_handler(event, context):
    for record in event['Records']:
        # Kinesis record data is base64-encoded in the Lambda event
        payload = json.loads(base64.b64decode(record['kinesis']['data']))
        user_id = payload['userId']
        event_type = payload['eventType']
        ts = datetime.fromisoformat(payload['timestamp'])

        # Fetch current features
        resp = table.get_item(Key={'userId': user_id})
        item = resp.get('Item', {})
        
        # Derive features: e.g., event_count_last_5min, last_event_type
        last_update = item.get('lastUpdate', ts.isoformat())
        count_5min = item.get('count5min', 0)
        then = datetime.fromisoformat(last_update)
        if ts - then < timedelta(minutes=5):
            count_5min += 1
        else:
            count_5min = 1
        
        # Update feature item
        new_item = {
            'userId': user_id,
            'lastEventType': event_type,
            'count5min': count_5min,
            'lastUpdate': ts.isoformat()
        }
        table.put_item(Item=new_item)
    return {'statusCode': 200}

2.4 Deploy and connect Lambda to Kinesis

  • Create Lambda function in AWS console or via CLI.
  • Add Kinesis stream UserEventsStream as event source with batch size and start position = TRIM_HORIZON.
  • Assign IAM role allowing kinesis:DescribeStream, kinesis:GetRecords, dynamodb:PutItem, etc.

2.5 Prepare SageMaker endpoint for inference

  • Train model offline (outside scope here) with features stored in training dataset matching real-time features.
  • Deploy model as endpoint, e.g., arn:aws:sagemaker:us-east-1:123456789012:endpoint/RealtimeModel.
  • In your application code call endpoint by fetching features from DynamoDB then invoking endpoint:

import json
import boto3

sagemaker = boto3.client('sagemaker-runtime', region_name='us-east-1')
dynamo = boto3.resource('dynamodb', region_name='us-east-1')
table = dynamo.Table('RealTimeFeatures')

def get_prediction(user_id, input_payload):
    resp = table.get_item(Key={'userId': user_id})
    features = resp.get('Item')
    payload = {
        'features': features,
        'input': input_payload
    }
    response = sagemaker.invoke_endpoint(
        EndpointName='RealtimeModel',
        ContentType='application/json',
        Body=json.dumps(payload)
    )
    result = json.loads(response['Body'].read().decode())
    return result

Conclusion

In this blog post you learned how to build a real-time feature store on AWS: streaming event ingestion with Kinesis, real-time feature computation with Lambda, storage in DynamoDB, and serving via SageMaker. You got specific code examples and operational considerations for production readiness. With this setup, you’re well-positioned to deliver low-latency, ML-powered applications.

Enjoy the cloud
Osama

Automating Cost-Governance Workflows in Oracle Cloud Infrastructure (OCI) with APIs & Infrastructure as Code

Introduction

Cloud cost management isn’t just about checking invoices once a month — it’s about embedding automation, governance, and insights into your infrastructure so that your engineering teams make cost-aware decisions in real time. With OCI, you have native tools (Cost Analysis, Usage APIs, Budgets, etc.) and infrastructure-as-code (IaC) tooling that can help turn cost governance from an after-thought into a proactive part of your DevOps workflow.

In this article you’ll learn how to:

  1. Extract usage and cost data via the OCI Usage API / Cost Reports.
  2. Define IaC workflows (e.g., with Terraform) that enforce budget/usage guardrails.
  3. Build a simple example where you automatically tag resources, monitor spend by tag, and alert/correct when thresholds are exceeded.
  4. Discuss best practices, pitfalls, and governance recommendations for embedding FinOps into OCI operations.

1. Understanding OCI Cost & Usage Data

What data is available?

OCI provides several cost/usage-data mechanisms:

  • The Cost Analysis tool in the console allows you to view trends by service, compartment, tag, etc.
  • The Usage/Cost Reports (CSV format), which you can download from the console or access programmatically via Object Storage.
  • The Usage API (CLI/SDK) to query usage and cost programmatically.

Why this matters

By surfacing cost data at a resource, compartment, or tag level, teams can answer questions like:

  • “Which tag values are consuming cost disproportionately?”
  • “Which compartments have heavy spend growth month-over-month?”
  • “Which services (Compute, Storage, Database, etc.) are the highest spenders and require optimization?”

Example: Downloading a cost report via CLI

Here’s a Python/CLI snippet that shows how to download a cost-report CSV from your tenancy:

oci os object get \
  --namespace-name bling \
  --bucket-name <your-tenancy-OCID> \
  --name reports/usage-csv/<report_name>.csv.gz \
  --file local_report.csv.gz

The same download using the OCI Python SDK:

import oci
config = oci.config.from_file("~/.oci/config", "DEFAULT")
os_client = oci.object_storage.ObjectStorageClient(config)
namespace = "bling"
bucket = "<your-tenancy-OCID>"
object_name = "reports/usage-csv/2025-10-19-report-00001.csv.gz"

resp = os_client.get_object(namespace, bucket, object_name)
with open("report-2025-10-19.csv.gz", "wb") as f:
    for chunk in resp.data.raw.stream(1024*1024, decode_content=False):
        f.write(chunk)

2. Defining Cost-Governance Workflows with IaC

Once you have data flowing in, you can enforce guardrails and automate actions. Here’s one example pattern.

a) Enforce tagging rules

Ensure that every resource created in a compartment has a cost_center tag (for example). You can do this via policy + IaC.

# Example Terraform policy for tagging requirement
resource "oci_identity_tag_namespace" "governance" {
  compartment_id = var.compartment_id
  display_name   = "governance_tags"
  is_retired     = false
}

resource "oci_identity_tag_definition" "cost_center" {
  compartment_id = var.compartment_id
  tag_namespace_id = oci_identity_tag_namespace.governance.id
  name            = "cost_center"
  description     = "Cost Center code for FinOps tracking"
  is_retired      = false
}

You can then add an IAM policy that prevents creation of resources if the tag isn’t applied (or fails to meet allowed values). For example:

Allow group ComputeAdmins to manage instance-family in compartment Prod
  where request.operation = 'CreateInstance'
  and request.resource.tag.cost_center is not null

b) Monitor vs budget

Use the Usage API or Cost Reports to pull monthly spend per tag, then compare against defined budgets. If thresholds are exceeded, trigger an alert or remediation.

Here’s an example Python pseudo-code:

from datetime import datetime, timedelta
import oci

config = oci.config.from_file()
usage_client = oci.usage_api.UsageapiClient(config)

today = datetime.utcnow()
start = today.replace(day=1)
end = today

req = oci.usage_api.models.RequestSummarizedUsagesDetails(
    tenant_id = config["tenancy"],
    time_usage_started = start,
    time_usage_ended   = end,
    granularity        = "DAILY",
    group_by           = ["tag.cost_center"]
)

resp = usage_client.request_summarized_usages(req)
for item in resp.data.items:
    tag_value = item.tag_map.get("cost_center", "untagged")
    cost     = float(item.computed_amount or 0)
    print(f"Cost for cost_center={tag_value}: {cost}")

    if cost > budget_for(tag_value):
        send_alert(tag_value, cost)
        take_remediation(tag_value)

c) Automated remediation

Remediation could mean:

  • Auto-shut down non-production instances in compartments after hours.
  • Resize or terminate idle resources.
  • Notify owners of over-spend via email/Slack.

Terraform, OCI Functions and Event-Service can help orchestrate that. For example, set up an Event when “cost by compartment exceeds X” → invoke Function → tag resources with “cost_alerted” → optional shutdown.
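As a sketch of what such a remediation Function might do (stopping non-critical compute in an over-budget compartment), assuming a "criticality" freeform tag convention and a placeholder compartment OCID:

import oci

config = oci.config.from_file()
compute = oci.core.ComputeClient(config)

compartment_id = "ocid1.compartment.oc1..example"  # placeholder over-budget compartment

# Stop every running instance in the compartment that is tagged as non-critical
for instance in compute.list_instances(compartment_id).data:
    if instance.lifecycle_state != "RUNNING":
        continue
    if instance.freeform_tags.get("criticality") == "low":  # assumed tagging convention
        compute.instance_action(instance.id, "SOFTSTOP")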

3. Putting It All Together

Here is a step-by-step scenario:

  1. Define budget categories – e.g., cost_center codes: CC-101, CC-202, CC-303.
  2. Tag resources on creation – via policy/IaC ensure all resources include cost_center tag with one of those codes.
  3. Collect cost data – using Usage API daily, group by tag.cost_center.
  4. Evaluate current spend vs budget – for each code, compare cumulative cost for current month against budget.
  5. If over budget – then:
    • send an alert to the team (via SNS, email, Slack)
    • optionally trigger remediation: e.g., stop non-critical compute in that cost center’s compartments.
  6. Dashboard & visibility – load cost data into a BI tool (such as Oracle Analytics Cloud) with trends, forecasts, and anomaly detection. Use the “Show cost” option in OCI Ops Insights to view usage and forecast cost.
  7. Continuous improvement – right-size instances, pause dev/test at night, and switch to cheaper shapes or reserved/committed-use models (depending on your discount model). See the OCI best-practice guide for optimizing cost.

Example snippet – alerting logic in CLI

# example command to get summarized usage for last 7 days
oci usage-api request-summarized-usages \
  --tenant-id $TENANCY_OCID \
  --time-usage-started $(date -u -d '-7 days' +%Y-%m-%dT00:00:00Z) \
  --time-usage-ended   $(date -u +%Y-%m-%dT00:00:00Z) \
  --granularity DAILY \
  --group-by "tag.cost_center" \
  --query "data.items[?tagMap.cost_center=='CC-101'].computedAmount" \
  --raw-output

Enjoy the OCI
Osama

Building a Serverless Event-Driven Architecture with AWS EventBridge, SQS, and Lambda

In this blog, we’ll design a system where:

  1. Events (e.g., order placements, file uploads) are published to EventBridge.
  2. SQS queues act as durable buffers for downstream processing.
  3. Lambda functions consume events and take action (e.g., send notifications, update databases).

Architecture Overview

(Architecture: Producers → EventBridge → SQS → Lambda Consumers)

  1. Event Producers (e.g., API Gateway, S3, custom apps) emit events.
  2. EventBridge routes events to targets (e.g., SQS queues).
  3. SQS ensures reliable delivery and decoupling.
  4. Lambda processes events asynchronously.

Step-by-Step Implementation

1. Set Up an EventBridge Event Bus

Create a custom event bus (or use the default one):

aws events create-event-bus --name MyEventBus

2. Define an Event Rule to Route Events to SQS

Create a rule to forward events matching a pattern (e.g., order_placed) to an SQS queue:

aws events put-rule \
  --name "OrderPlacedRule" \
  --event-pattern '{"detail-type": ["order_placed"]}' \
  --event-bus-name "MyEventBus"

3. Create an SQS Queue and Link It to EventBridge

Create a queue and grant EventBridge permission to send messages:

aws sqs create-queue --queue-name OrderProcessingQueue
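EventBridge also needs permission to send to the queue, which is granted through an SQS queue policy. A minimal boto3 sketch, where the account ID and ARNs are placeholders:

import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

queue_url = sqs.get_queue_url(QueueName="OrderProcessingQueue")["QueueUrl"]
queue_arn = "arn:aws:sqs:us-east-1:123456789012:OrderProcessingQueue"
rule_arn = "arn:aws:events:us-east-1:123456789012:rule/MyEventBus/OrderPlacedRule"

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "events.amazonaws.com"},
        "Action": "sqs:SendMessage",
        "Resource": queue_arn,
        "Condition": {"ArnEquals": {"aws:SourceArn": rule_arn}}
    }]
}

# Attach the policy so the EventBridge rule can deliver messages to the queue
sqs.set_queue_attributes(QueueUrl=queue_url, Attributes={"Policy": json.dumps(policy)})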

Attach the queue as a target to the EventBridge rule:

aws events put-targets \
  --rule "OrderPlacedRule" \
  --targets "Id"="OrderQueueTarget","Arn"="arn:aws:sqs:us-east-1:123456789012:OrderProcessingQueue" \
  --event-bus-name "MyEventBus"

4. Write a Lambda Function to Process SQS Messages

Create a Lambda function (process_order.py) to poll the queue and process orders:

import json
import boto3

def lambda_handler(event, context):
    for record in event['Records']:
        message = json.loads(record['body'])
        order_id = message['detail']['orderId']
        
        print(f"Processing order: {order_id}")
        # Add business logic (e.g., update DynamoDB, send SNS notification)
        
    return {"status": "processed"}

5. Configure SQS as a Lambda Trigger

In the AWS Console:

  • Go to Lambda → Add Trigger → SQS.
  • Select OrderProcessingQueue and set batch size (e.g., 10 messages per invocation).

6. Test the Flow

Emit a test event to EventBridge:

aws events put-events \
  --entries '[{
    "EventBusName": "MyEventBus",
    "Source": "my.app",
    "DetailType": "order_placed",
    "Detail": "{ \"orderId\": \"123\", \"amount\": 50 }"
  }]'

Verify the flow:

  1. EventBridge routes the event to SQS.
  2. Lambda picks up the message and logs:
Processing order: 123  

Use Cases

  • Order processing (e.g., e-commerce workflows).
  • File upload pipelines (e.g., resize images after S3 upload).
  • Notifications (e.g., send emails/SMS for system events).

Enjoy
Thank you
Osama

Real-Time Data Processing with AWS Kinesis, Lambda, and DynamoDB

Many applications today require real-time data processing—whether it’s for analytics, monitoring, or triggering actions. AWS provides powerful services like Amazon Kinesis for streaming data, AWS Lambda for serverless processing, and DynamoDB for scalable storage.

In this blog, we’ll build a real-time data pipeline that:

  1. Ingests streaming data (e.g., clickstream, IoT sensor data, or logs) using Kinesis Data Streams.
  2. Processes records in real-time using Lambda.
  3. Stores aggregated results in DynamoDB for querying.

Architecture Overview

(Architecture: Kinesis → Lambda → DynamoDB)

  1. Kinesis Data Stream – Captures high-velocity data.
  2. Lambda Function – Processes records as they arrive.
  3. DynamoDB Table – Stores aggregated results (e.g., counts, metrics).

Step-by-Step Implementation

1. Set Up a Kinesis Data Stream

Create a Kinesis stream to ingest data:

aws kinesis create-stream --stream-name ClickStream --shard-count 1

Producers (e.g., web apps, IoT devices) can send data like:

{
  "userId": "user123",
  "action": "click",
  "timestamp": "2024-05-20T12:00:00Z"
}

2. Create a Lambda Function to Process Streams

Write a Python Lambda function (process_stream.py) to:

  • Read records from Kinesis.
  • Aggregate data (e.g., count clicks per user).
  • Update DynamoDB.

import json
import base64
import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('UserClicks')

def lambda_handler(event, context):
    for record in event['Records']:
        # Kinesis record data arrives base64-encoded
        payload = json.loads(base64.b64decode(record['kinesis']['data']))
        user_id = payload['userId']
        
        # Update DynamoDB (increment click count)
        table.update_item(
            Key={'userId': user_id},
            UpdateExpression="ADD clicks :incr",
            ExpressionAttributeValues={':incr': 1}
        )
    return {"status": "success"}

3. Configure Lambda as a Kinesis Consumer

In the AWS Console:

  • Go to Lambda → Create Function → Python.
  • Add Kinesis as the trigger (select your stream).
  • Set batch size (e.g., 100 records per invocation).

4. Set Up DynamoDB for Aggregations

Create a table with userId as the primary key:

aws dynamodb create-table \
    --table-name UserClicks \
    --attribute-definitions AttributeName=userId,AttributeType=S \
    --key-schema AttributeName=userId,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST

5. Test the Pipeline

Send test data to Kinesis (with AWS CLI v2, add --cli-binary-format raw-in-base64-out so the JSON payload is sent as-is):

aws kinesis put-record \
    --stream-name ClickStream \
    --data '{"userId": "user123", "action": "click"}' \
    --partition-key user123

Check DynamoDB for aggregated results:

aws dynamodb get-item --table-name UserClicks --key '{"userId": {"S": "user123"}}'

Output (simplified; the raw get-item response wraps each attribute in its DynamoDB type descriptor):

{ "userId": "user123", "clicks": 1 }

Use Cases

  • Real-time analytics (e.g., dashboard for user activity).
  • Fraud detection (trigger alerts for unusual patterns).
  • IoT monitoring (process sensor data in real-time).

Enjoy
Thank you
Osama

Building a Scalable Web Application Using AWS Lambda, API Gateway, and DynamoDB

Let’s imagine we want to build a To-Do List Application where users can:

  • Add tasks to their list.
  • View all tasks.
  • Mark tasks as completed.

We’ll use the following architecture:

  1. API Gateway to handle HTTP requests.
  2. Lambda Functions to process business logic.
  3. DynamoDB to store task data.

Step 1: Setting Up DynamoDB

First, we need a database to store our tasks. DynamoDB is an excellent choice because it scales automatically and provides low-latency access.

Creating a DynamoDB Table

  1. Open the AWS Management Console and navigate to DynamoDB .
  2. Click Create Table .
    • Table Name : TodoList
    • Primary Key : id (String)
  3. Enable Auto Scaling for read/write capacity units to ensure the table scales based on demand.

Sample Table Structure

id (Primary Key) | task_name     | status
1                | Buy groceries | Pending
2                | Read a book   | Completed

Step 2: Creating Lambda Functions

Next, we’ll create Lambda functions to handle CRUD operations for our To-Do List application.

Lambda Function: Create Task

This function will insert a new task into the TodoList table.

import json
import uuid
import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('TodoList')

def lambda_handler(event, context):
    # With an API Gateway proxy integration the JSON payload arrives as a
    # string in event['body']; fall back to direct invocation for testing
    body = json.loads(event['body']) if 'body' in event else event
    task_name = body['task_name']
    
    # Generate a unique ID for the task
    task_id = str(uuid.uuid4())
    
    # Insert the task into DynamoDB
    table.put_item(
        Item={
            'id': task_id,
            'task_name': task_name,
            'status': 'Pending'
        }
    )
    
    return {
        'statusCode': 200,
        'body': json.dumps({'message': 'Task created successfully!', 'task_id': task_id})
    }

Lambda Function: Get All Tasks

This function retrieves all tasks from the TodoList table.

import json
import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('TodoList')

def lambda_handler(event, context):
    # Scan the DynamoDB table
    response = table.scan()
    
    # Return the list of tasks
    return {
        'statusCode': 200,
        'body': json.dumps(response['Items'])
    }

Lambda Function: Update Task Status

This function updates the status of a task (e.g., mark as completed).

import json
import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('TodoList')

def lambda_handler(event, context):
    # With the PUT /tasks/{id} route, the task ID arrives as a path parameter
    # and the new status in the request body; fall back to direct invocation
    path_params = event.get('pathParameters') or {}
    body = json.loads(event['body']) if 'body' in event else event
    
    task_id = path_params.get('id') or body['id']
    new_status = body['status']
    
    # Update the task in DynamoDB
    table.update_item(
        Key={'id': task_id},
        UpdateExpression='SET #status = :new_status',
        ExpressionAttributeNames={'#status': 'status'},
        ExpressionAttributeValues={':new_status': new_status}
    )
    
    return {
        'statusCode': 200,
        'body': json.dumps({'message': 'Task updated successfully!'})
    }

Step 3: Configuring API Gateway

Now that we have our Lambda functions, we’ll expose them via API Gateway.

Steps to Set Up API Gateway

  1. Open the AWS Management Console and navigate to API Gateway .
  2. Click Create API and select HTTP API .
  3. Define the following routes:
    • POST /tasks : Maps to the “Create Task” Lambda function.
    • GET /tasks : Maps to the “Get All Tasks” Lambda function.
    • PUT /tasks/{id} : Maps to the “Update Task Status” Lambda function.
  4. Deploy the API and note the endpoint URL (a scripted alternative using boto3 is sketched below).
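If you prefer scripting over the console, the same HTTP API can be created with boto3. A hedged sketch covering only the POST /tasks route; the Lambda ARN is a placeholder, and the function additionally needs a resource-based permission allowing API Gateway to invoke it:

import boto3

apigw = boto3.client("apigatewayv2", region_name="us-east-1")

create_task_arn = "arn:aws:lambda:us-east-1:123456789012:function:CreateTask"  # placeholder

api = apigw.create_api(Name="TodoListApi", ProtocolType="HTTP")

# Proxy the route to the Create Task Lambda
integration = apigw.create_integration(
    ApiId=api["ApiId"],
    IntegrationType="AWS_PROXY",
    IntegrationUri=create_task_arn,
    PayloadFormatVersion="2.0"
)

apigw.create_route(
    ApiId=api["ApiId"],
    RouteKey="POST /tasks",
    Target=f"integrations/{integration['IntegrationId']}"
)

apigw.create_stage(ApiId=api["ApiId"], StageName="$default", AutoDeploy=True)

print("Invoke URL:", api["ApiEndpoint"])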

Step 4: Testing the Application

Once everything is set up, you can test the application using tools like Postman or cURL .

Example Requests

  1. Create a Task
curl -X POST https://<api-id>.execute-api.<region>.amazonaws.com/tasks \
-H "Content-Type: application/json" \
-d '{"task_name": "Buy groceries"}'

Get All Tasks

curl -X GET https://<api-id>.execute-api.<region>.amazonaws.com/tasks

Update Task Status

curl -X PUT https://<api-id>.execute-api.<region>.amazonaws.com/tasks/<task-id> \
-H "Content-Type: application/json" \
-d '{"status": "Completed"}'

Benefits of This Architecture

  1. Scalability : DynamoDB and Lambda automatically scale to handle varying loads.
  2. Cost Efficiency : You only pay for the compute time and storage you use.
  3. Low Maintenance : AWS manages the underlying infrastructure, reducing operational overhead.

Enjoy the cloud 😁
Osama

Setting up a High-Availability (HA) Architecture with OCI Load Balancer and Compute Instances

Ensuring high availability (HA) for your applications is critical in today’s cloud-first environment. Oracle Cloud Infrastructure (OCI) provides robust tools such as Load Balancers and Compute Instances to help you create a resilient, highly available architecture for your applications. In this post, we’ll walk through the steps to set up an HA architecture using OCI Load Balancer with multiple compute instances across availability domains for fault tolerance.

Prerequisites

  • OCI Account: A working Oracle Cloud Infrastructure account.
  • OCI CLI: Installed and configured with necessary permissions.
  • Terraform: Installed and set up for provisioning infrastructure.
  • Basic knowledge of Load Balancers and Compute Instances in OCI.

Step 1: Set Up a Virtual Cloud Network (VCN)

A VCN is required to house your compute instances and load balancers. To begin, create a new VCN with subnets in different availability domains (ADs) for high availability.

Terraform Configuration (vcn.tf):

resource "oci_core_virtual_network" "vcn" {
  compartment_id = "<compartment_ocid>"
  cidr_block     = "10.0.0.0/16"
  display_name   = "HA-Virtual-Network"
}

resource "oci_core_subnet" "subnet1" {
  compartment_id      = "<compartment_ocid>"
  vcn_id              = oci_core_virtual_network.vcn.id
  cidr_block          = "10.0.1.0/24"
  availability_domain = "AD-1"
  display_name        = "HA-Subnet-AD1"
}

resource "oci_core_subnet" "subnet2" {
  compartment_id      = "<compartment_ocid>"
  vcn_id              = oci_core_virtual_network.vcn.id
  cidr_block          = "10.0.2.0/24"
  availability_domain = "AD-2"
  display_name        = "HA-Subnet-AD2"
}

Step 2: Provision Compute Instances

Create two compute instances (one in each subnet) to ensure redundancy.

Terraform Configuration (compute.tf):

resource "oci_core_instance" "instance1" {
  compartment_id = "<compartment_ocid>"
  availability_domain = "AD-1"
  shape = "VM.Standard2.1"
  display_name = "HA-Instance-1"
  
  create_vnic_details {
    subnet_id = oci_core_subnet.subnet1.id
    assign_public_ip = true
  }

  source_details {
    source_type = "image"
    source_id = "<image_ocid>"
  }
}

resource "oci_core_instance" "instance2" {
  compartment_id = "<compartment_ocid>"
  availability_domain = "AD-2"
  shape = "VM.Standard2.1"
  display_name = "HA-Instance-2"
  
  create_vnic_details {
    subnet_id = oci_core_subnet.subnet2.id
    assign_public_ip = true
  }

  source_details {
    source_type = "image"
    source_id = "<image_ocid>"
  }
}

Step 3: Set Up the OCI Load Balancer

Now, configure the OCI Load Balancer to distribute traffic between the compute instances in both availability domains.

Terraform Configuration (load_balancer.tf):

resource "oci_load_balancer_load_balancer" "ha_lb" {
  compartment_id = "<compartment_ocid>"
  display_name   = "HA-Load-Balancer"
  shape           = "100Mbps"

  subnet_ids = [
    oci_core_subnet.subnet1.id,
    oci_core_subnet.subnet2.id
  ]

  backend_sets {
    name = "backend-set-1"

    backends {
      ip_address = oci_core_instance.instance1.private_ip
      port = 80
    }

    backends {
      ip_address = oci_core_instance.instance2.private_ip
      port = 80
    }

    policy = "ROUND_ROBIN"
    health_checker {
      port = 80
      protocol = "HTTP"
      url_path = "/health"
      retries = 3
      timeout_in_seconds = 10
      interval_in_seconds = 5
    }
  }
}

resource "oci_load_balancer_listener" "ha_listener" {
  load_balancer_id = oci_load_balancer_load_balancer.ha_lb.id
  name = "http-listener"
  default_backend_set_name = "backend-set-1"
  port = 80
  protocol = "HTTP"
}

Step 4: Set Up Health Checks for High Availability

Health checks are critical to ensure that the load balancer sends traffic only to healthy instances. The health check configuration is included in the backend set definition above, but you can customize it as needed.
Step 5: Testing and Validation

Once all resources are provisioned, test the HA architecture:

  1. Verify Load Balancer Health: Ensure that the backend instances are marked as healthy by checking the load balancer’s health checks.

oci load-balancer backend-set get --load-balancer-id <load_balancer_id> --name backend-set-1

  2. Access the Application: Test accessing your application through the Load Balancer’s public IP. The Load Balancer should evenly distribute traffic across the two compute instances.
  3. Failover Testing: Manually shut down one of the instances to verify that the Load Balancer reroutes traffic to the other instance. You can also check backend health programmatically, as sketched below.
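A small Python SDK sketch for checking backend-set health during failover tests; the load balancer OCID is a placeholder:

import oci

config = oci.config.from_file()
lb = oci.load_balancer.LoadBalancerClient(config)

load_balancer_id = "ocid1.loadbalancer.oc1..example"  # placeholder

# Overall health of the backend set: OK, WARNING, CRITICAL, or UNKNOWN
health = lb.get_backend_set_health(load_balancer_id, "backend-set-1").data
print("Backend set status:", health.status)
print("Critical backends:", health.critical_state_backend_names)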

Automating Oracle Cloud Networking with OCI Service Gateway and Terraform

Oracle Cloud Infrastructure (OCI) offers a wide range of services that enable users to create secure, scalable cloud environments. One crucial aspect of a cloud deployment is ensuring secure connectivity between services without relying on public internet access. In this blog post, we’ll walk through how to set up and manage OCI Service Gateway for secure, private access to OCI services using Terraform. This step-by-step guide is intended for cloud engineers looking to leverage automation to create robust networking configurations in OCI.

Step 1: Setting up Your Environment

Before deploying the OCI Service Gateway and other networking components with Terraform, you need to set up a few prerequisites:

  1. Terraform Installation: Make sure Terraform is installed on your local machine. You can download it from Terraform’s official site.
  2. OCI CLI and API Key: Install the OCI CLI and set up your authentication key. The key must be configured in your OCI console.
  3. OCI Terraform Provider: You will also need to download the OCI Terraform provider by adding the following configuration to your provider.tf file:
provider "oci" {
  tenancy_ocid     = "<TENANCY_OCID>"
  user_ocid        = "<USER_OCID>"
  fingerprint      = "<FINGERPRINT>"
  private_key_path = "<PRIVATE_KEY_PATH>"
  region           = "us-ashburn-1"
}

Step 2: Defining the Infrastructure

The key to deploying the Service Gateway and related infrastructure is defining the resources in a main.tf file. Below is an example to create a VCN, subnets, and a Service Gateway:

resource "oci_core_vcn" "example_vcn" {
  cidr_block     = "10.0.0.0/16"
  compartment_id = "<COMPARTMENT_OCID>"
  display_name   = "example-vcn"
}

resource "oci_core_subnet" "example_subnet" {
  vcn_id             = oci_core_vcn.example_vcn.id
  compartment_id     = "<COMPARTMENT_OCID>"
  cidr_block         = "10.0.1.0/24"
  availability_domain = "<AVAILABILITY_DOMAIN>"
  display_name       = "example-subnet"
  prohibit_public_ip_on_vnic = true
}

resource "oci_core_service_gateway" "example_service_gateway" {
  vcn_id         = oci_core_vcn.example_vcn.id
  compartment_id = "<COMPARTMENT_OCID>"
  services {
    service_id = "all-oracle-services-in-region"
  }
  display_name  = "example-service-gateway"
}

resource "oci_core_route_table" "example_route_table" {
  vcn_id         = oci_core_vcn.example_vcn.id
  compartment_id = "<COMPARTMENT_OCID>"
  display_name   = "example-route-table"
  route_rules {
    destination       = "all-oracle-services-in-region"
    destination_type  = "SERVICE_CIDR_BLOCK"
    network_entity_id = oci_core_service_gateway.example_service_gateway.id
  }
}

Explanation:

  • oci_core_vcn: Defines the Virtual Cloud Network (VCN) where all resources will reside.
  • oci_core_subnet: Creates a subnet within the VCN to host compute instances or other resources.
  • oci_core_service_gateway: Configures a Service Gateway to allow private access to Oracle services such as Object Storage.
  • oci_core_route_table: Configures the route table to direct traffic through the Service Gateway for services within OCI.

Step 3: Variables for Reusability

To make the code reusable, it’s best to define variables in a variables.tf file:

variable "compartment_ocid" {
  description = "The OCID of the compartment to create resources in"
  type        = string
}

variable "availability_domain" {
  description = "The Availability Domain to launch resources in"
  type        = string
}

variable "vcn_cidr" {
  description = "The CIDR block for the VCN"
  type        = string
  default     = "10.0.0.0/16"
}

This allows you to easily modify parameters like compartment ID, availability domain, and VCN CIDR without touching the core logic.

Step 4: Running the Terraform Script

  1. Initialize Terraform: To start using Terraform with OCI, initialize your working directory using:

terraform init

     This command downloads the necessary providers and prepares your environment.

  2. Plan the Deployment: Before applying changes, always run the terraform plan command. This will provide an overview of what resources will be created.

terraform plan -var-file="config.tfvars"

Apply the Changes

Once you’re confident with the plan, apply it to create your Service Gateway and networking resources:

terraform apply -var-file="config.tfvars"

Step 5: Verification

After deployment, you can verify your resources via the OCI Console. Navigate to Networking > Virtual Cloud Networks to see your VCN, subnets, and the Service Gateway. You can also validate the route table settings to ensure that the traffic routes correctly to Oracle services.
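You can also confirm from the Python SDK that the Service Gateway exists and is attached to its services. A minimal sketch, reusing the compartment OCID placeholder from the Terraform code:

import oci

config = oci.config.from_file()
vnet = oci.core.VirtualNetworkClient(config)

compartment_id = "<COMPARTMENT_OCID>"  # same placeholder compartment used in Terraform

# List service gateways in the compartment and show which services they expose
for sgw in vnet.list_service_gateways(compartment_id).data:
    services = ", ".join(s.service_name for s in sgw.services)
    print(sgw.display_name, sgw.lifecycle_state, "-", services)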

Step 6: Destroy the Infrastructure

To clean up the resources and avoid any unwanted charges, you can use the terraform destroy command:

terraform destroy -var-file="config.tfvars"

Regards
Osama