Most cloud conversations start with a simple assumption: your workloads go to the cloud provider’s data center. For a large number of organizations, particularly in government, financial services, healthcare, and defense, that assumption is the exact problem. Data sovereignty laws, regulatory requirements, and security classification levels mean that certain workloads cannot leave a specific physical location, full stop.
OCI Dedicated Region Cloud@Customer, commonly referred to as DRCC, solves this without forcing a compromise. Oracle deploys a full OCI region — not a subset of services, not a gateway appliance, but a complete cloud region with the same hardware, software stack, APIs, and SLAs — inside your own data center. You get every OCI service you would use in a public region, with the control plane managed by Oracle and the physical infrastructure sitting on your floor.
In this post I will cover how DRCC is architected, how it differs from OCI Exadata Cloud@Customer and Roving Edge, the networking requirements, IAM federation considerations, and how to automate workload deployment using Terraform once the region is live.
What DRCC Actually Delivers
The distinction between DRCC and other on-premises cloud appliances matters technically. Most cloud-at-customer offerings give you a subset of services through a dedicated appliance: a handful of compute shapes, object storage, and maybe a managed database. DRCC is architecturally different.
Oracle physically ships and installs the same rack infrastructure used in public OCI regions into your facility. The region runs the same OCI control plane software, exposes the same REST APIs, and integrates with OCI IAM and Oracle Cloud Console using the same tooling. When you run a Terraform plan against a DRCC region, the provider configuration is identical to a public region. You change the region identifier in your config and the code works without modification.
The full service catalog available in DRCC includes Compute (including bare metal and GPU shapes), OKE (Oracle Kubernetes Engine), Autonomous Database, Exadata Database Service, Object Storage, Block Volumes, File Storage, VCN, Load Balancer, API Gateway, Functions, Streaming, OCI Vault, Identity and Access Management, Monitoring, Logging, Events, and Notifications. This is not a stripped-down subset — it is the complete stack.
The minimum hardware footprint is a base rack configuration sized for production workloads. Oracle handles all hardware maintenance, software patching, and control plane operations. Your team manages what runs on top: compartments, IAM policies, networking, and workloads.
How DRCC Differs from Related Oracle Offerings
Before going further it is worth clarifying where DRCC sits relative to two commonly confused offerings.
OCI Exadata Cloud@Customer deploys Exadata Database Service hardware into your data center. It is a database-specific offering. You get Autonomous Database and Exadata Database Service on-premises, but not the broader OCI service catalog. If you need compute, containers, serverless, and object storage alongside the database layer, Exadata Cloud@Customer alone does not cover it.
OCI Roving Edge Infrastructure is a ruggedized portable device designed for disconnected or intermittently connected environments: ships, remote field operations, military forward deployments. It runs a subset of OCI services and is designed to operate without a persistent connection to the OCI control plane. DRCC requires a reliable network connection back to Oracle for control plane operations and is designed for fixed, well-connected facilities.
DRCC is the right choice when you need the full OCI service catalog, the workloads must stay on-premises for regulatory or sovereignty reasons, and you have a proper data center with the power, cooling, and network capacity to host the infrastructure.
Network Architecture Requirements
DRCC has specific network requirements that you need to understand before the hardware arrives. Getting these wrong means the region cannot operate.
The DRCC racks need connectivity on three planes: the management network, the customer data network, and the Oracle back-channel.
The management network connects Oracle’s control plane software running inside your facility to Oracle’s global control plane over the internet or a dedicated circuit. Oracle uses this path for software updates, monitoring, and operational management of the region. This connection is outbound-initiated from the DRCC hardware, encrypted with TLS, and authenticated with certificates. Oracle publishes the specific IP ranges that need to be permitted through your firewall. You do not control what flows over this channel, but Oracle’s contractual commitments define exactly what does.
The customer data network connects your existing on-premises infrastructure to the DRCC region. This is a standard 25G or 100G ethernet connection depending on the rack configuration. You configure VCN peering or FastConnect-equivalent local connections to bridge your existing network into the DRCC VCN.
Here is how you define a VCN in a DRCC region using Terraform; the configuration is identical to what you would write for a public region:
```hcl
terraform {
  required_providers {
    oci = {
      source  = "oracle/oci"
      version = ">= 5.0.0"
    }
  }
}

provider "oci" {
  tenancy_ocid     = var.tenancy_ocid
  user_ocid        = var.user_ocid
  fingerprint      = var.fingerprint
  private_key_path = var.private_key_path

  # This is your DRCC region identifier.
  # Oracle assigns this during provisioning, format: us-yourdatacenter-1
  region = var.drcc_region
}

resource "oci_core_vcn" "drcc_primary_vcn" {
  compartment_id = var.compartment_id
  cidr_blocks    = ["10.100.0.0/16"]
  display_name   = "drcc-primary-vcn"
  dns_label      = "drccprimary"
}

# Application tier subnet - private
resource "oci_core_subnet" "app_subnet" {
  compartment_id             = var.compartment_id
  vcn_id                     = oci_core_vcn.drcc_primary_vcn.id
  cidr_block                 = "10.100.1.0/24"
  display_name               = "app-private-subnet"
  dns_label                  = "apppriv"
  prohibit_public_ip_on_vnic = true
  route_table_id             = oci_core_route_table.private_rt.id
  security_list_ids          = [oci_core_security_list.app_sl.id]
}

# Database tier subnet - private
resource "oci_core_subnet" "db_subnet" {
  compartment_id             = var.compartment_id
  vcn_id                     = oci_core_vcn.drcc_primary_vcn.id
  cidr_block                 = "10.100.2.0/24"
  display_name               = "db-private-subnet"
  dns_label                  = "dbpriv"
  prohibit_public_ip_on_vnic = true
  route_table_id             = oci_core_route_table.private_rt.id
  security_list_ids          = [oci_core_security_list.db_sl.id]
}

# Local Peering Gateway to connect DRCC VCN to your on-premises network
resource "oci_core_local_peering_gateway" "onprem_lpg" {
  compartment_id = var.compartment_id
  vcn_id         = oci_core_vcn.drcc_primary_vcn.id
  display_name   = "onprem-peering-gateway"
}
```
The Local Peering Gateway in DRCC context connects the DRCC VCN to your on-premises routed network via the physical data network. This gives your existing on-premises workloads direct, low-latency access to everything running in the DRCC region without traffic ever leaving your facility.
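The subnets in the VCN configuration reference a route table (`oci_core_route_table.private_rt`) and security lists that are not shown. A minimal sketch of the route table, assuming an on-premises address space of 10.0.0.0/16 reached through the peering gateway (both the CIDR and the resource name are illustrative, not Oracle-prescribed values):

```hcl
# Route table for the private subnets: send traffic bound for the
# on-premises network through the Local Peering Gateway.
# 10.0.0.0/16 is a placeholder for your actual on-premises CIDR.
resource "oci_core_route_table" "private_rt" {
  compartment_id = var.compartment_id
  vcn_id         = oci_core_vcn.drcc_primary_vcn.id
  display_name   = "drcc-private-rt"

  route_rules {
    destination       = "10.0.0.0/16"
    destination_type  = "CIDR_BLOCK"
    network_entity_id = oci_core_local_peering_gateway.onprem_lpg.id
  }
}
```

Traffic within the VCN itself is routed implicitly, so the table only needs rules for destinations outside the VCN CIDR.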
IAM Federation in a DRCC Deployment
DRCC shares the OCI IAM control plane with the public region associated with your tenancy. This has important implications for how you manage identities.
Your DRCC region is part of your existing OCI tenancy. Users, groups, and dynamic groups created in OCI IAM apply to DRCC resources the same way they apply to public region resources. If you already federate OCI IAM with your corporate identity provider (Active Directory, Okta, Azure AD), those federated identities work in DRCC without additional configuration.
Here is the IAM federation configuration for Active Directory using SAML:
```hcl
# Identity Provider configuration for AD FS
resource "oci_identity_identity_provider" "ad_federation" {
  compartment_id = var.tenancy_ocid
  name           = "corporate-adfs"
  description    = "Corporate Active Directory Federation Services"
  product_type   = "ADFS"
  protocol       = "SAML2"
  metadata       = file("${path.module}/adfs-metadata.xml")

  freeform_tags = {
    Environment = "production"
    ManagedBy   = "terraform"
  }
}

# Map AD group to OCI group for DRCC operations team
resource "oci_identity_idp_group_mapping" "drcc_admins_mapping" {
  idp_id         = oci_identity_identity_provider.ad_federation.id
  idp_group_name = "CN=DRCC-Admins,OU=CloudTeams,DC=corp,DC=example,DC=com"
  group_id       = oci_identity_group.drcc_admins.id
}

resource "oci_identity_group" "drcc_admins" {
  compartment_id = var.tenancy_ocid
  name           = "drcc-platform-admins"
  description    = "DRCC platform administration team"
}

# Compartment structure for DRCC workload isolation
resource "oci_identity_compartment" "drcc_root" {
  compartment_id = var.tenancy_ocid
  name           = "drcc-production"
  description    = "Root compartment for all DRCC production workloads"
}

resource "oci_identity_compartment" "drcc_networking" {
  compartment_id = oci_identity_compartment.drcc_root.id
  name           = "drcc-networking"
  description    = "Networking resources for DRCC region"
}

resource "oci_identity_compartment" "drcc_workloads" {
  compartment_id = oci_identity_compartment.drcc_root.id
  name           = "drcc-workloads"
  description    = "Application workloads running in DRCC"
}

# Least-privilege policy for DRCC admins
resource "oci_identity_policy" "drcc_admin_policy" {
  compartment_id = oci_identity_compartment.drcc_root.id
  name           = "drcc-admin-policy"
  description    = "Platform admin permissions scoped to DRCC compartment"
  statements = [
    "Allow group drcc-platform-admins to manage all-resources in compartment drcc-production",
    "Allow group drcc-platform-admins to read all-resources in tenancy where request.region = '${var.drcc_region}'",
    "Allow group drcc-platform-admins to manage virtual-network-family in compartment drcc-production:drcc-networking",
    "Allow group drcc-platform-admins to manage instance-family in compartment drcc-production:drcc-workloads",
    "Allow group drcc-platform-admins to manage autonomous-database-family in compartment drcc-production:drcc-workloads",
  ]
}
```
One critical IAM behavior specific to DRCC: you can write IAM policies that restrict actions to your DRCC region using the request.region condition. This means a group can have full admin rights in DRCC but zero access to your public OCI regions, or vice versa. For organizations with strict separation between on-premises and cloud teams, this is an important control.
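The inverse scoping works the same way. A sketch, assuming a hypothetical public-cloud team group (the group name and statement scope are illustrative):

```hcl
# Hypothetical policy for a public-cloud team: workload access in every
# region except the DRCC region. The group name is illustrative.
resource "oci_identity_policy" "public_cloud_only" {
  compartment_id = var.tenancy_ocid
  name           = "public-cloud-team-policy"
  description    = "Workload access limited to public regions"
  statements = [
    "Allow group public-cloud-team to manage instance-family in tenancy where request.region != '${var.drcc_region}'",
  ]
}
```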
Deploying OKE on DRCC
OKE on DRCC runs the same as OKE in a public region. The control plane components run inside the DRCC rack. The API server endpoint is reachable from within your data center network without any traffic leaving the facility.
```hcl
resource "oci_containerengine_cluster" "drcc_cluster" {
  compartment_id     = oci_identity_compartment.drcc_workloads.id
  kubernetes_version = "v1.29.1"
  name               = "drcc-production-cluster"
  vcn_id             = oci_core_vcn.drcc_primary_vcn.id

  endpoint_config {
    is_public_ip_enabled = false
    subnet_id            = oci_core_subnet.app_subnet.id
  }

  options {
    service_lb_subnet_ids = [oci_core_subnet.app_subnet.id]

    kubernetes_network_config {
      pods_cidr     = "10.244.0.0/16"
      services_cidr = "10.96.0.0/16"
    }

    add_ons {
      is_kubernetes_dashboard_enabled = false
      is_tiller_enabled               = false
    }
  }
}

resource "oci_containerengine_node_pool" "drcc_workers" {
  cluster_id         = oci_containerengine_cluster.drcc_cluster.id
  compartment_id     = oci_identity_compartment.drcc_workloads.id
  kubernetes_version = "v1.29.1"
  name               = "drcc-worker-pool"

  node_config_details {
    size = 3

    placement_configs {
      availability_domain = data.oci_identity_availability_domains.drcc_ads.availability_domains[0].name
      subnet_id           = oci_core_subnet.app_subnet.id
    }
  }

  node_shape = "VM.Standard3.Flex"

  node_shape_config {
    memory_in_gbs = 64
    ocpus         = 8
  }

  node_source_details {
    image_id                = data.oci_core_images.ol8_image.images[0].id
    source_type             = "IMAGE"
    boot_volume_size_in_gbs = 100
  }

  initial_node_labels {
    key   = "workload-tier"
    value = "application"
  }
}
```
The is_public_ip_enabled = false on the endpoint config is non-negotiable in a DRCC context. The API server should only be reachable from within your data center network. Any tooling that manages the cluster (Argo CD, Flux, CI pipelines) connects to the internal endpoint directly.
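One way to wire up that tooling without leaving Terraform is the provider's kubeconfig data source. A sketch, assuming the cluster resource defined above (the output filename is illustrative):

```hcl
# Fetch a kubeconfig for the private API endpoint. Because the endpoint
# has no public IP, the resulting config only works from a host inside
# the data center network that can reach the DRCC region.
data "oci_containerengine_cluster_kube_config" "drcc_kubeconfig" {
  cluster_id = oci_containerengine_cluster.drcc_cluster.id
  endpoint   = "PRIVATE_ENDPOINT"
}

resource "local_file" "kubeconfig" {
  content         = data.oci_containerengine_cluster_kube_config.drcc_kubeconfig.content
  filename        = "${path.module}/kubeconfig-drcc"
  file_permission = "0600"
}
```

The same file can then be handed to Argo CD, Flux, or CI runners that live inside the facility.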
Deploying Autonomous Database on DRCC
Autonomous Database on DRCC is identical in API and behavior to the public region version. The database runs entirely within your facility.
```hcl
resource "oci_database_autonomous_database" "drcc_adb" {
  compartment_id           = oci_identity_compartment.drcc_workloads.id
  db_name                  = "DRCCPROD"
  display_name             = "drcc-production-adb"
  db_workload              = "OLTP"
  cpu_core_count           = 4
  data_storage_size_in_tbs = 2
  admin_password           = var.adb_admin_password
  is_auto_scaling_enabled  = true
  is_dedicated             = false

  # Private endpoint configuration - no public access
  subnet_id              = oci_core_subnet.db_subnet.id
  private_endpoint_label = "drccprodadb"

  # Access control list: only the app tier subnet CIDR may connect
  whitelisted_ips = [
    "10.100.1.0/24",
  ]

  defined_tags = {
    "Operations.Environment" = "production"
    "Operations.Region"      = "drcc"
    "Operations.ManagedBy"   = "terraform"
  }
}
```
The subnet_id and private_endpoint_label fields give the database a private endpoint inside the db subnet. Only clients whose addresses match the access-control list can connect, and no public endpoint is created.
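The defined_tags on the database assume an Operations tag namespace already exists in the tenancy. If it does not, a minimal sketch of creating it (the namespace and tag names match the tags used above; the descriptions are illustrative):

```hcl
# Tag namespace backing the Operations.* defined tags
resource "oci_identity_tag_namespace" "operations" {
  compartment_id = var.tenancy_ocid
  name           = "Operations"
  description    = "Operational metadata tags"
}

resource "oci_identity_tag" "environment" {
  tag_namespace_id = oci_identity_tag_namespace.operations.id
  name             = "Environment"
  description      = "Deployment environment"
}

# Repeat oci_identity_tag for the Region and ManagedBy tag keys.
```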
Security Baseline for DRCC Deployments
DRCC gives you physical control over the hardware, but that does not mean you can skip the standard OCI security baseline. The software layer still requires proper configuration.
Enable Cloud Guard at the tenancy level scoped to your DRCC compartments:
```hcl
resource "oci_cloud_guard_cloud_guard_configuration" "drcc_cloud_guard" {
  compartment_id   = var.tenancy_ocid
  reporting_region = var.drcc_region
  status           = "ENABLED"
}

resource "oci_cloud_guard_target" "drcc_target" {
  compartment_id       = oci_identity_compartment.drcc_root.id
  display_name         = "drcc-production-target"
  target_resource_id   = oci_identity_compartment.drcc_root.id
  target_resource_type = "COMPARTMENT"

  target_detector_recipes {
    detector_recipe_id = data.oci_cloud_guard_detector_recipes.config_recipe.detector_recipe_collection[0].items[0].id
  }

  target_responder_recipes {
    responder_recipe_id = data.oci_cloud_guard_responder_recipes.oci_responder.responder_recipe_collection[0].items[0].id
  }
}
```
Enable Vault for all secrets, keys, and credentials used by workloads running in DRCC. Because the Vault service runs inside the rack, key material never leaves your facility:
```hcl
resource "oci_kms_vault" "drcc_vault" {
  compartment_id = oci_identity_compartment.drcc_workloads.id
  display_name   = "drcc-workloads-vault"
  vault_type     = "VIRTUAL_PRIVATE"
}

resource "oci_kms_key" "drcc_master_key" {
  compartment_id      = oci_identity_compartment.drcc_workloads.id
  display_name        = "drcc-master-encryption-key"
  management_endpoint = oci_kms_vault.drcc_vault.management_endpoint

  key_shape {
    algorithm = "AES"
    length    = 32
  }

  protection_mode = "HSM"
}
```
The VIRTUAL_PRIVATE vault type and HSM protection mode ensure the key material is stored in the hardware security module inside the DRCC rack. Combined with the fact that the rack is physically in your data center, you have full chain-of-custody over the cryptographic material protecting your data.
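With the vault in place, application credentials can live as Vault secrets rather than in config files or environment variables. A sketch, assuming the vault and key defined above (the secret name and source variable are illustrative):

```hcl
# Store an application database password as a Vault secret, encrypted
# with the HSM-backed master key. The variable name is a placeholder.
resource "oci_vault_secret" "app_db_password" {
  compartment_id = oci_identity_compartment.drcc_workloads.id
  vault_id       = oci_kms_vault.drcc_vault.id
  key_id         = oci_kms_key.drcc_master_key.id
  secret_name    = "app-db-password"

  secret_content {
    content_type = "BASE64"
    content      = base64encode(var.app_db_password)
  }
}
```

Workloads then retrieve the secret at runtime through the Vault API using instance principals, so the plaintext never sits on disk.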
Operational Considerations
A few operational concerns are specific to DRCC and do not come up when working with public regions.
Oracle is responsible for hardware maintenance and software patching of the control plane. You receive advance notification of maintenance windows. During a control plane maintenance window, the management APIs may be briefly unavailable, but running workloads continue without interruption. Plan your deployment pipelines to account for these windows.
Capacity planning is different from the public cloud. In a public region, you scale up by requesting more resources and the cloud absorbs the demand. In DRCC, you have a fixed hardware footprint. If you need to scale beyond the initial rack configuration, you work with Oracle to add capacity. Build capacity planning reviews into your quarterly operations cycle and monitor resource utilization with OCI Monitoring the same way you would in a public region.
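One way to operationalize that monitoring is an alarm on compute utilization, so the capacity conversation with Oracle starts before you hit the ceiling. A sketch, assuming a hypothetical notifications topic (the topic name, threshold, and query window are illustrative):

```hcl
# Notifications topic for operational alerts (name is a placeholder)
resource "oci_ons_notification_topic" "drcc_ops" {
  compartment_id = oci_identity_compartment.drcc_workloads.id
  name           = "drcc-ops-alerts"
}

# Warn when average CPU across DRCC compute instances exceeds 80% -
# in a fixed-footprint region this is the signal to start the
# capacity-add conversation with Oracle early.
resource "oci_monitoring_alarm" "drcc_cpu_capacity" {
  compartment_id        = oci_identity_compartment.drcc_workloads.id
  metric_compartment_id = oci_identity_compartment.drcc_workloads.id
  display_name          = "drcc-compute-cpu-capacity"
  namespace             = "oci_computeagent"
  query                 = "CpuUtilization[5m].mean() > 80"
  severity              = "WARNING"
  is_enabled            = true
  destinations          = [oci_ons_notification_topic.drcc_ops.id]
}
```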
The Oracle back-channel for management operations needs to be permanently open. If your network team applies a firewall rule that blocks this traffic, the control plane loses contact with Oracle and becomes degraded. Work with Oracle to get the exact IP ranges and port requirements before go-live and document them clearly in your firewall change management process.
When DRCC Is the Right Choice
DRCC makes sense when at least one of these conditions is true: your regulatory framework requires data residency within a specific physical location you control, your security classification means workloads cannot traverse public internet infrastructure at any point, your latency requirements for database and application tiers demand co-location in your own facility, or you have existing on-premises infrastructure that needs tight integration with cloud services without egress cost or latency overhead.
It is not the right choice for organizations that want cloud economics without data center investment, for workloads with highly variable capacity requirements that would benefit from elastic public cloud scaling, or for teams that want to avoid the operational overhead of maintaining physical infrastructure.
For those who do meet the criteria, DRCC is one of the more complete sovereign cloud offerings on the market. The fact that the APIs and tooling are identical to the public cloud means your engineers do not need to learn a second system, your Terraform code travels unchanged, and your OKE workloads run without modification.
Regards,
Osama