Migrating from MongoDB to Azure Cosmos DB Using mongodump and mongorestore (Manual/Offline)

In this post I will discuss how to migrate from MongoDB (in my case the database was hosted on AWS) to Azure Cosmos DB. I searched online for articles on how to do that, and the problem I faced was that most of them discussed the same approach: an online migration using third-party software, which was not applicable for me due to security reasons. Therefore I decided to post about it; maybe it will be useful for someone else.

Usually the easiest way is to use Azure Database Migration Service to perform an offline/online migration of databases from an on-premises or cloud instance of MongoDB to Azure Cosmos DB's API for MongoDB.

There are some prerequisites before starting the migration; to learn more about them, read here. The same link explains the different migration options. However, before you start, you should create an instance of Azure Cosmos DB.

Preparation of target Cosmos DB account

Create an Azure Cosmos DB account and select MongoDB as the API. Pre-create your databases through the Azure portal

The home page for Azure Cloud

From the search bar, just search for "Azure Cosmos DB".

Azure Cosmos DB

You have to add a new account for the migration. Since we are migrating from MongoDB, the API should be "Azure Cosmos DB for MongoDB API".

Create Cosmos DB

The target is ready for migration, but we have to check the connection string so we can use it for our migration from AWS to Azure.

Get the MongoDB connection string to customize

  • In the Azure Cosmos DB blade, select the API.
  • In the left pane of the account blade, click Connection String.
  • The Connection String blade opens. It has all the information necessary to connect to the account using a MongoDB driver, including a preconstructed connection string.
Connection string

On MongoDB (the source server) you have to take a backup of the database. Once the backup is complete, there is no need to move it to another server. Mongo provides two ways of taking the backup: mongodump (which produces a dump) or mongoexport (which generates a JSON file).

For example, using mongodump:

mongodump --host <hostname:port> --db <database name to back up> --collection <collection name> --gzip --out /u01/user/

For mongoexport:

mongoexport --host <hostname:port> --db <database name to back up> --collection <collection name> --out=<location for the JSON file>

My advice is to run the above commands in the background, especially if the database is big, and to write a log for the background process so you can check it frequently.
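For example, a minimal sketch of running mongodump in the background with nohup and a log file (the paths and file names here are illustrative, not from the original setup):

# Run mongodump in the background and capture its output in a log file.
# Host, database, collection and paths are placeholders - adjust to your environment.
nohup mongodump --host <hostname:port> --db test --collection test --gzip --out /u01/user/ > /u01/user/mongodump_test.log 2>&1 &

# Check progress from time to time:
tail -f /u01/user/mongodump_test.log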

Run the restore/import command from the source server. Remember the connection string? Now we will use it to connect to Azure Cosmos DB. If you used mongodump, then to restore you have to use mongorestore, like the below:

mongorestore --host testserver.mongo.cosmos.azure.com --port 10255 -u testserver -p  w3KQ5ZtJbjPwTmxa8nDzWhVYRuSe0BEOF8dROH6IUXq7rJgiinM3DCDeSWeEdcOIgyDuo4EQbrSngFS7kzVWlg== --db test --collection test /u01/user/notifications_service/user_notifications.bson.gz  --gzip --ssl --sslAllowInvalidCertificates

Notice the following:

  • Host: from the Azure portal / connection string.
  • Port: from the Azure portal / connection string.
  • Password: from the Azure portal / connection string.
  • DB: the name of the database you want created in Azure Cosmos DB; it will be created during the migration.
  • Collection: the name of the collection you want created in Azure Cosmos DB; it will be created during the migration.
  • The location of the backup file.
  • --gzip, because I compressed the backup.
  • The migration requires SSL authentication, otherwise it will fail.

Using mongoimport:

mongoimport --host testserver.mongo.cosmos.azure.com:10255 -u testserver -p w3KQ5ZtJbjPwTmxa8nDzWhVYRuSe0BEOF8dROH6IUXq7rJgiinM3DCDeSWeEdcOIgyDuo4EQbrSngFS7kzVWlg== --db test --collection test --ssl --sslAllowInvalidCertificates --type json --file /u01/dump/users_notifications/service_notifications.json

Once you run the command, the restore/import starts loading the data into Cosmos DB.
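To sanity-check the result, one option (a rough sketch; the exact shell and authentication flags may need adjusting for your environment) is to compare document counts on the source and on Cosmos DB with the legacy mongo shell:

# Count documents on the source MongoDB (placeholders for host/db/collection):
mongo --host <hostname:port> --eval 'db.getSiblingDB("test").test.count()'

# Count documents on Azure Cosmos DB, using the values from the portal connection string:
mongo --host testserver.mongo.cosmos.azure.com --port 10255 -u testserver -p <primary key> --ssl --sslAllowInvalidCertificates --eval 'db.getSiblingDB("test").test.count()'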

Note: if you are migrating huge databases, you need to increase the Cosmos DB throughput at the database level; after the migration finishes, return everything to the normal settings because of the cost.
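For example, the throughput can be changed with the Azure CLI; the sketch below assumes database-level provisioned throughput, and the account and resource group names are illustrative:

# Raise the throughput before the migration (names are placeholders):
az cosmosdb mongodb database throughput update --account-name testserver --resource-group myRG --name test --throughput 10000

# Lower it back once the migration is done:
az cosmosdb mongodb database throughput update --account-name testserver --resource-group myRG --name test --throughput 400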

Cheers

Osama

Oracle Database Application Security Book

Finally …

The Book is alive

For the first time, this book discusses critical security issues such as database threats and how to avoid them; the book also includes advanced topics on Oracle Internet Directory, Oracle Access Manager, and how to implement a full single sign-on cycle.

Focus on the security aspects of designing, building, and maintaining a secure Oracle Database application. Starting with data encryption, you will learn to work with transparent data, back-up, and networks. You will then go through the key principles of audits, where you will get to know more about identity preservation, policies and fine-grained audits. Moving on to virtual private databases, you’ll set up and configure a VPD to work in concert with other security features in Oracle, followed by tips on managing configuration drift, profiles, and default users.

What You Will Learn:

  • Work with Oracle Internet Directory using the command-line and the console.
  • Integrate Oracle Access Manager with different applications.
  • Work with the Oracle Identity Manager console and connectors, while creating your own custom one.
  • Troubleshoot issues with OID, OAM, and OIM.
  • Dive deep into file system and network security concepts.
  • A first-of-its-kind chapter that covers most of the critical real-life database threats.

 

You can buy the book now from Amazon here.

 

Cheers

Osama

Configure AWS Command Line Interface

In this lesson I will show you how to configure the AWS command line and how to start working with your AWS account through the command line, in very simple and basic steps:

  • First of all, I will assume that you don't have any user or group in your AWS console.
  • From the AWS Console:
  • From IAM (Identity and Access Management), choose Groups, create a group with the AdministratorAccess permission, and then hit Create.
  • Create a user and add that user to the group (in my case the group name is shown above), and save the secret access key and access key ID as a CSV file.
  • Now, from this link here, download the AWS Command Line Interface for your operating system and open cmd, a terminal, etc.
  • Now from the command prompt enter aws configure and fill in the information like the below:
  • Open the command line and test whether it's connected to AWS now (see the sketch below).
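A minimal sketch of the configuration prompts and a quick connectivity test (the keys and region shown are placeholders):

aws configure
AWS Access Key ID [None]: <your access key ID>
AWS Secret Access Key [None]: <your secret access key>
Default region name [None]: eu-west-2
Default output format [None]: json

# Verify that the CLI can reach your account:
aws sts get-caller-identity
aws s3 ls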
You can learn more about the command line from the AWS documentation. If you receive the following error:
"Could not connect to the endpoint URL: https://s3.london.amazonaws.com/"
then make sure you are in the right region.
Thanks
Osama

Enable Automatic Memory Management (AMM)

SQL> show parameter target

NAME                            TYPE        VALUE
------------------------------- ----------- ------------
archive_lag_target              integer     0
db_flashback_retention_target   integer     1440
fast_start_io_target            integer     0
fast_start_mttr_target          integer     0
memory_max_target               big integer 26048M
memory_target                   big integer 0
parallel_servers_target         integer     2048
pga_aggregate_target            big integer 6502M
sga_target                      big integer 19520M

SQL> ALTER SYSTEM SET MEMORY_MAX_TARGET = 16384M scope=spfile ;

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;
SQL> ALTER SYSTEM SET SGA_TARGET = 0 scope=spfile ;
System altered.
SQL> ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 0 scope=spfile ;
System altered.
SQL> ALTER SYSTEM SET MEMORY_TARGET=12288M scope=spfile ;
System altered.
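Since the last three changes were made with scope=spfile, they take effect only after another restart; a quick sketch to bounce the instance and confirm the new settings:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;
SQL> show parameter memory_target
SQL> show parameter sga_target
SQL> show parameter pga_aggregate_target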
Thank you 
Osama Mustafa

Database Vulnerabilities

In this post you will find 10 ways that make your database easy to hack. Special thanks to Team Shatter. I posted an online article about database security before; as a reminder, read it again. Let's start.


1- Default, Blank, and Weak Usernames and Passwords

We all know what is meant by default and weak (non-complex) passwords. A database-side enforcement sketch follows the list below.

To create a strong password:

  • Don’t use words that can be easily guessed or found in the dictionary
  • Use a combination of letters, numbers and characters
  • Create a complex sentence instead of a word
  • Do not share your password with anyone or write it down and leave it in your desk drawer
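On the Oracle side, weak passwords can also be pushed back with a profile; a minimal sketch, assuming an Oracle 12c or later database where the supplied ora12c_verify_function has been installed (via utlpwdmg.sql):

-- Lock accounts after repeated failures, expire passwords, and enforce complexity
ALTER PROFILE DEFAULT LIMIT
  FAILED_LOGIN_ATTEMPTS    10
  PASSWORD_LIFE_TIME       90
  PASSWORD_VERIFY_FUNCTION ora12c_verify_function;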

2- SQL Injections in the Database

SQL injection: an attack vector against the DBMS through web applications, caused by a failure to sanitize user inputs.
SQL Injection in the DBMS exploits passing SQL commands as a parameter of a function or stored procedure. This will then execute the malicious SQL commands in the context of the component that provides the called function. This is often done using components with system or admin privileges resulting in privilege escalation.
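As an illustration (a generic sketch, not taken from the article; it assumes the SCOTT demo schema), the difference between concatenating user input into dynamic SQL and passing it as a bind variable in PL/SQL:

DECLARE
  p_name VARCHAR2(30) := 'KING';   -- imagine this value comes from user input
  v_sal  NUMBER;
BEGIN
  -- Vulnerable pattern: the input is concatenated into the statement text,
  -- so a crafted value can change the meaning of the SQL.
  -- EXECUTE IMMEDIATE 'SELECT sal FROM scott.emp WHERE ename = ''' || p_name || '''' INTO v_sal;

  -- Safer pattern: the input is passed as a bind variable and is never parsed as SQL.
  EXECUTE IMMEDIATE 'SELECT sal FROM scott.emp WHERE ename = :1' INTO v_sal USING p_name;
  DBMS_OUTPUT.PUT_LINE('Salary: ' || v_sal);
END;
/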

3- Excessive User and Group Privileges

To avoid excessive user and group privileges (see the query sketch after this list):

  • Map Job Functions to Privileges on IT assets
  • Never Assign Privileges Directly to Guest Accounts or Public
  • Untangle The Web of User Entitlements
  • Implement Compensating Controls for What You Can’t Fix
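A quick way to start untangling entitlements is to see who holds the most powerful grants; a minimal sketch using the standard data dictionary views:

-- Who has the DBA role, and whether they can pass it on
SELECT grantee, granted_role, admin_option
FROM   dba_role_privs
WHERE  granted_role = 'DBA';

-- Which system privileges were granted to PUBLIC
SELECT privilege
FROM   dba_sys_privs
WHERE  grantee = 'PUBLIC';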

4- Unnecessarily Enabled Database Features



These features give database application developers a lot more power when working with a DBMS. The flip side of the coin is that the more power you give a developer, the more attack vectors you potentially expose to the bad guys.

5- Check Your Database Configuration Against Your Company's Goals

6- Buffer Overflows

A buffer overflow is when the input to a function contains more data than the input buffer can hold. If the size of the input is not checked during the copy to that buffer, adjacent memory that is used for other purposes might get overwritten.

In most cases this will be more or less random and can lead to unpredictable behavior, like crashing the server. However, if an attacker is also able to change the code execution pointer to the location of the overwritten memory, it is possible to execute any kind of malicious code in the context of the DBMS process.

 This could lead to a potential total compromise of the system, resulting in loss of sensitive information and overall security.
To protect against these types of attacks, it is important to always keep your DBMS updated with the latest security patches available from the vendor, as well as monitoring for known attack signatures.

7- Privilege Escalation

A privilege escalation attack is when the attacker exploits a known vulnerability in a DBMS that allows a user account with restricted privileges to execute instructions or query data that typically requires higher privileges, thus unlocking the locks in the candy store.

There are different common vulnerabilities that allow for privilege escalation. Sometimes it is done by misusing a function that runs under a sysdba, sa, or similar security context; in other cases it is done by exploiting vulnerabilities that allow a low-privileged account to grant itself more rights.

To protect against these types of attacks, it is important to always keep your DBMS updated with the latest security patches available from the vendor, as well as continuously monitor for known attack signatures.

8- Denial of Service (DoS) Attacks

Think of a washing machine. The more clothes you put in it, the more work it needs to do. If you overfill it with clothes, it gets overwhelmed and stops working.

The same thing happens with a Denial of Service attack. It creates so much traffic on a site, a server or even sections of the internet that it cannot function and shuts down.

The most infamous DoS involving database servers was in 2003, when a computer worm called SQL Slammer compromised more than 75,000 servers and slowed internet traffic to a crawl.

9- Unpatched Databases

Many organizations don't implement patches right away; some even wait a year or more. The most common excuses are the downtime involved in applying patches and the time needed to test them to make sure they don't affect the production software. Whatever the excuse may be, organizations should use database activity monitoring to manage the gap between patches on databases containing critical information and PII data.
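To see where you stand, you can check which patches a database has actually received; a minimal sketch, assuming Oracle 12c or later where the DBA_REGISTRY_SQLPATCH view exists:

-- Patches applied to the database (12c+)
SELECT patch_id, action, status, description, action_time
FROM   dba_registry_sqlpatch
ORDER  BY action_time;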

10- Unencrypted Sensitive Data


Encryption is an important part of housing sensitive data. Network traffic should also be encrypted to ensure that the passwords used to access sensitive, critical data cannot be seen by anyone sniffing the traffic.

Any information that goes over the network or is stored in the database should be encrypted and kept from prying eyes. Some network configurations and database management systems might allow critical information to be sent in clear text. To ensure this doesn't occur, make sure you have the latest version of the software and turn off text indexing.
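For the network part, Oracle Net native encryption can be switched on in sqlnet.ora; a minimal sketch of the server-side settings (the algorithm choice is illustrative):

# sqlnet.ora on the database server
SQLNET.ENCRYPTION_SERVER = required
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
SQLNET.CRYPTO_CHECKSUM_SERVER = required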

Remember to go back to my online article.
Thank you

Useful Link :
1-Team Shatter

Osama Mustafa

Find User with unlimited Tablespace Quota

First, find the users with a direct quota on the SYSTEM tablespace.

SELECT username, tablespace_name, bytes, max_bytes
FROM dba_ts_quotas
WHERE max_bytes = -1 AND tablespace_name = 'SYSTEM';

USERNAME                  TABLESPACE_NAME                BYTES  MAX_BYTES
------------------------- ------------------------- ---------- ----------
SCOTT                     SYSTEM                             0         -1
TEST                      SYSTEM                             0         -1

Why do we want to know who has an unlimited quota on the SYSTEM tablespace?

Users who have an unlimited quota on SYSTEM, as well as the privilege to create tables or clusters, could mount a kind of denial-of-service attack against the database: they just have to fill up the free space in the SYSTEM tablespace. If their default tablespace is also SYSTEM, they could even do this without intending to.
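If such a user turns up and should not have this ability, the quota and the privilege can be removed; a minimal sketch (SCOTT and the USERS tablespace are just examples):

-- Remove the direct quota on SYSTEM and move the user's default tablespace
ALTER USER scott QUOTA 0 ON SYSTEM;
ALTER USER scott DEFAULT TABLESPACE users;

-- Revoke the blanket privilege if it was granted directly
REVOKE UNLIMITED TABLESPACE FROM scott;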

Find users with the system privilege UNLIMITED TABLESPACE.

SELECT * FROM dba_sys_privs WHERE privilege = 'UNLIMITED TABLESPACE';

GRANTEE                        PRIVILEGE                      ADM
—————————— —————————— —
WMSYS                          UNLIMITED TABLESPACE           NO
RRDOMREG                       UNLIMITED TABLESPACE           NO
HR                             UNLIMITED TABLESPACE           NO
OE                             UNLIMITED TABLESPACE           NO
SYS                            UNLIMITED TABLESPACE           NO
LOGSTDBY_ADMINISTRATOR         UNLIMITED TABLESPACE           NO
SCOTT                          UNLIMITED TABLESPACE           NO
BI                             UNLIMITED TABLESPACE           NO
OUTLN                          UNLIMITED TABLESPACE           NO
DBSNMP                         UNLIMITED TABLESPACE           NO
IX                             UNLIMITED TABLESPACE           NO
SH                             UNLIMITED TABLESPACE           NO
DBA                            UNLIMITED TABLESPACE           YES
SYSTEM                         UNLIMITED TABLESPACE           YES

What about cascaded roles?

Hmm, but since Oracle 11g it is possible to grant UNLIMITED TABLESPACE to a role, and that role can be granted to another role, which is in turn granted to yet another role. This could be part of a role concept, or somebody trying to hide some privileges a little bit. To test it I created three roles, DBA3, DBA2, and DBA1, and granted UNLIMITED TABLESPACE to DBA3.

SELECT
  grantee,
  privilege,
  DECODE(p,'=>'||grantee,'direct',p) path
FROM (
  SELECT
    grantee,
    privilege,
    SYS_CONNECT_BY_PATH(grantee, '=>') p
  FROM (
    SELECT
      grantee,
      privilege
    FROM dba_sys_privs
    UNION ALL
    SELECT
      grantee,
      granted_role privilege
    FROM
      dba_role_privs)
  START WITH privilege = 'UNLIMITED TABLESPACE'
  CONNECT BY PRIOR grantee = privilege )
WHERE
  (grantee IN (SELECT username FROM dba_users)
  OR grantee = 'PUBLIC');

GRANTEE   PRIVILEGE               PATH
——— ———————– ——————————-
BI        UNLIMITED TABLESPACE    direct
SYS       DBA                     =>DBA=>SYS
SYSTEM    DBA                     =>DBA=>SYSTEM
SCOTT     DBA1                    =>DBA3=>DBA2=>DBA1=>SCOTT
SYS       DBA1                    =>DBA3=>DBA2=>DBA1=>SYS
SYS       DBA2                    =>DBA3=>DBA2=>SYS
SYS       DBA3                    =>DBA3=>SYS
DBSNMP    UNLIMITED TABLESPACE    direct
HR        UNLIMITED TABLESPACE    direct
IX        UNLIMITED TABLESPACE    direct
SYS       LOGSTDBY_ADMINISTRATOR  =>LOGSTDBY_ADMINISTRATOR=>SYS
OE        UNLIMITED TABLESPACE    direct
OUTLN     UNLIMITED TABLESPACE    direct
RRDOMREG  UNLIMITED TABLESPACE    direct
SH        UNLIMITED TABLESPACE    direct
SYS       UNLIMITED TABLESPACE    direct
SYSTEM    UNLIMITED TABLESPACE    direct
WMSYS     UNLIMITED TABLESPACE    direct

18 ROWS selected.

Creating one query to find users with direct quotas as well as quotas obtained through a system privilege gives something like this.

SELECT
  username,
  tablespace_name,
  privilege
FROM (
  SELECT
    grantee username, 'Any Tablespace' tablespace_name, privilege
  FROM (
    -- first get the users with direct grants
    SELECT
      p1.grantee grantee, privilege
    FROM
      dba_sys_privs p1
    WHERE
      p1.privilege = 'UNLIMITED TABLESPACE'
    UNION ALL
    -- and then the ones with UNLIMITED TABLESPACE through a role...
    SELECT
      r3.grantee, granted_role privilege
    FROM
      dba_role_privs r3
      START WITH r3.granted_role IN (
          SELECT
            DISTINCT p4.grantee
          FROM
            dba_role_privs r4, dba_sys_privs p4
          WHERE
            r4.granted_role = p4.grantee
            AND p4.privilege = 'UNLIMITED TABLESPACE')
    CONNECT BY PRIOR grantee = granted_role)
    -- we just want to see the users, not the roles
  WHERE grantee IN (SELECT username FROM dba_users) OR grantee = 'PUBLIC'
  UNION ALL
  -- list the users with an unlimited quota on a dedicated tablespace
  SELECT
    username, tablespace_name, 'DBA_TS_QUOTA' privilege
  FROM
    dba_ts_quotas
  WHERE
    max_bytes = -1 )
WHERE tablespace_name LIKE UPPER('SYSTEM')
    OR tablespace_name = 'Any Tablespace';

USERNAME                  TABLESPACE_NAME           PRIVILEGE
————————- ————————- ——————————

SYSTEM                    Any Tablespace            UNLIMITED TABLESPACE
SYS                       Any Tablespace            DBA
SYSTEM                    Any Tablespace            DBA
SCOTT                     Any Tablespace            DBA1
SYS                       Any Tablespace            DBA1
SYS                       Any Tablespace            DBA2
SYS                       Any Tablespace            DBA3
SYS                       Any Tablespace            LOGSTDBY_ADMINISTRATOR
TEST                      SYSTEM                    DBA_TS_QUOTA

19 ROWS selected.

You can download the script from here: Tablespace-scripts
Links :
1-Find TableSpace Quota .
 Thank you
Osama Mustafa

ORA-12557: TNS:protocol adapter not loadable

This problem occurs on Windows platforms.

It must be related to the Windows environment or the Oracle Home PATH, because the sqlplus command works smoothly when I execute it inside ORACLE_HOME\bin.

Run SYSDM.CPL to open Windows System Properties.
Click on the Advanced tab > Environment Variables…

Click the Path variable under System Variables, then click Edit…

Change the order between the Oracle Client Home and the Oracle DB Home:

From: D:\oracle\product\10.2.0\client_1\bin;D:\oracle\product\10.2.0\db_1\bin;

To: D:\oracle\product\10.2.0\db_1\bin;D:\oracle\product\10.2.0\client_1\bin;

In other words, put the Oracle DB Home in front of the other path; you can verify the order as in the sketch below.
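A quick way to double-check which sqlplus binary the PATH resolves to, using standard Windows commands on recent Windows versions (the paths shown are illustrative):

C:\> echo %PATH%
C:\> where sqlplus
D:\oracle\product\10.2.0\db_1\bin\sqlplus.exe
D:\oracle\product\10.2.0\client_1\bin\sqlplus.exe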

Or, and I don't love this way since it's not an actual solution, but it sometimes solves the problem:
1- Remove ORACLE_HOME from the environment variables.
2- Restart the PC.
Done

Osama mustafa 

Drop All Tables in a Tablespace

set echo off
prompt
prompt This script is used to generate a SQL file for dropping all tables in a tablespace.
prompt
prompt specify tablespace name 1:
define tsname=&1
prompt Add purge clause, y for yes, n for no 2:
define prg=&2
prompt enter output sql file name 3:
define filename=&3
set heading off
set verify off
set feedback off
start del_ts_tb.sql
set verify on
set heading on
set echo on
set feedback on

-- contents of del_ts_tb.sql (the script called above):
spool &filename
select 'drop table ' || owner || '.' || table_name || decode(upper('&prg'),'Y',' purge','') || ';' DropState
from dba_tables
where tablespace_name = upper('&tsname');
spool off
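A usage sketch, assuming the wrapper above is saved as drop_ts_tables.sql alongside del_ts_tb.sql (both file names and the tablespace are illustrative):

SQL> @drop_ts_tables.sql USERS Y drop_users_tables.sql
SQL> -- review the generated file, then run it:
SQL> @drop_users_tables.sql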





Osama mustafa

Gather Schema Statistics fails with Ora-20001

Cause: FDPSTP failed due to ORA-06550: line 1, column 7:
PLS-00307: too many declarations of 'GATHER_ALL_COLUMN_STATS' match this call
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored.

Error #1: ERROR: While GATHER_TABLE_STATS:
object_name=GL.JE_BE_LINE_TYPE_MAP***ORA-20001: invalid column name or duplicate columns/column groups/expressions in method_opt***
Error #2: ERROR: While GATHER_TABLE_STATS:
object_name=GL.JE_BE_LOGS***ORA-20001: invalid column name or duplicate columns/column groups/expressions in method_opt***
Error #3: ERROR: While GATHER_TABLE_STATS:
object_name=GL.JE_BE_VAT_REP_RULES***ORA-20001: invalid column name or duplicate columns/column groups/expressions in method_opt***
Solution:
To check which objects or tables have locked statistics:
SQL> SELECT owner, table_name, stattype_locked
FROM dba_tab_statistics
WHERE stattype_locked IS NOT NULL;

To unlock all the tables in a schema at once:
SQL> exec dbms_stats.unlock_schema_stats('schema_owner');
e.g.: SQL> exec dbms_stats.unlock_schema_stats('apps');
To unlock an individual table in a schema:
SQL> exec dbms_stats.unlock_table_stats('table_owner','table_name');
e.g.: SQL> exec dbms_stats.unlock_table_stats('AR','AR_REV_REC_QT');
There are two reasons for that error message:
1) There are duplicate rows in the FND_HISTOGRAM_COLS table for the JE_BE_LINE_TYPE_MAP table.
Because of this problem, FND_STATS tries to gather histogram information using the wrong command and fails with ORA-20001 errors.
The following SQL should have returned one row, not two.
SQL> select a.column_name, nvl(a.hsize,254) hsize
from FND_HISTOGRAM_COLS a
where table_name = 'JE_BE_LINE_TYPE_MAP'
order by column_name;
COLUMN_NAME                    HSIZE
------------------------------ ----------
SOURCE                         254
SOURCE                         254
2) A column does not exist on the table but is still listed in the FND_HISTOGRAM_COLS table.
Solution:
Find all duplicate and/or obsolete rows in FND_HISTOGRAM_COLS and delete one of them.
Remember to take a backup of the FND_HISTOGRAM_COLS table before deleting any data.
-- identify duplicate rows
select table_name, column_name, count(*)
from FND_HISTOGRAM_COLS
group by table_name, column_name
having count(*) > 1;

-- Use the above results in the following SQL to delete duplicates
delete from FND_HISTOGRAM_COLS
where table_name = '&TABLE_NAME'
and column_name = '&COLUMN_NAME'
and rownum = 1;

-- Use the following SQL to delete obsolete rows
delete from FND_HISTOGRAM_COLS
where (table_name, column_name) in
(
select hc.table_name, hc.column_name
from FND_HISTOGRAM_COLS hc, dba_tab_columns tc
where hc.table_name = '&TABLE_NAME'
and hc.table_name = tc.table_name (+)
and hc.column_name = tc.column_name (+)
and tc.column_name is null
);
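After the cleanup (and a commit), you can re-run the earlier check to confirm that only a single row per column remains before gathering statistics again; a small verification sketch:

SQL> select a.column_name, nvl(a.hsize,254) hsize
from FND_HISTOGRAM_COLS a
where table_name = 'JE_BE_LINE_TYPE_MAP'
order by column_name;
-- expect exactly one row per column now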