Moving an Oracle database to the cloud: 12.2 standalone to 19c RAC PDB.

I see more and more Oracle databases moving to the public cloud or to a hybrid cloud solution. Depending on the platform, database size, and options in use, the path can differ, but the general approach boils down to three main options: Oracle RMAN backup and restore, Oracle Data Guard, or Oracle Data Pump with or without transportable tablespaces. Here I want to share our approach for migrating a 12.2 standalone database to a 19c RAC container as a PDB in Oracle Cloud Infrastructure (OCI).

Here are the initial conditions and requirements. We had multiple Linux x86-64 12.2 Enterprise Edition standalone databases on file-system-based storage, moving to Oracle Cloud Extreme Performance 19c with RAC on ASM.

Considering the size of the databases and the endian format (little), the most viable option was Oracle Data Guard (DG). The main question was whether to upgrade the database and convert it to a pluggable database (PDB) on-prem and move it to the cloud afterwards, or to do the migration, upgrade, and conversion in one shot within the same downtime window. We chose the latter.

Here is the high-level diagram:

And here is the general workflow:

The source database was analyzed with the Oracle preupgrade.jar tool to verify that it was ready to be upgraded to 19c. A few issues were fixed in advance and some reported problems were ignored. There is no universal rule for which warnings can be ignored and which ones should be taken into consideration and fixed.
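For reference, preupgrade.jar ships with the target home and is run against the source database; a hedged example of the invocation (host name and paths here are illustrative for our layout):

```
[oracle@onprem01 ~]$ export ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1
[oracle@onprem01 ~]$ export ORACLE_SID=appdb
[oracle@onprem01 ~]$ $ORACLE_HOME/jdk/bin/java -jar /u01/app/oracle/product/19.0.0/dbhome_1/rdbms/admin/preupgrade.jar TERMINAL TEXT
```

The TERMINAL TEXT arguments print the report as plain text to the screen; without them the tool writes its report and fixup scripts under $ORACLE_BASE/cfgtoollogs.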

The next step was to prepare the database to use TDE on the target platform. An Oracle encryption wallet and a master key were created for the original database. The basic steps are as follows (all paths and values are arbitrary):

SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/u01/app/oracle/product/12.2.0/dbhome_1/network/admin' IDENTIFIED BY #SYS_PASSWORD#;
keystore altered.
SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY #SYS_PASSWORD#;
keystore altered.
SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY #SYS_PASSWORD# WITH BACKUP;
keystore altered.
SQL> ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE '/u01/app/oracle/product/12.2.0/dbhome_1/network/admin' IDENTIFIED BY #SYS_PASSWORD#;
keystore altered.
SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE CLOSE IDENTIFIED BY #SYS_PASSWORD#;
keystore altered.

You can verify the status of your wallet in the v$encryption_wallet view and make sure it is shown as “open”.
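A minimal check (output columns will vary per environment):

```
SQL> SELECT wrl_type, wallet_type, status FROM v$encryption_wallet;
```

The STATUS column should report OPEN once the keystore is open (OPEN_NO_MASTER_KEY means the keystore is open but no master key has been set yet).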

After that we created the target 19c container RAC database on DBCS using ASM as storage (recall that DBCS gives you two storage options). With the database created, we were able to use the first node as a staging area for our Data Guard standby. We cloned the database software from on-prem to that node and created a dummy database with the same database name but a different unique name.

[oracle@oracloud01 ~]$./ ORACLE_HOME=/u01/app/oracle/staging/12.2.0/dbhome_1 ORACLE_HOME_NAME=OraHome12201_dbhome_1 ORACLE_BASE=/u01/app/oracle 
[oracle@oracloud01 ~]$ dbca -createDatabase -silent -createAsContainerDatabase false -templateName General_Purpose.dbc -gdbName nsppwcb -storageType ASM -diskGroupName +DATAC1 -recoveryAreaDestination +RECOC1 -recoveryAreaSize 10240 -initParams db_name=appdb,db_unique_name=appdbstg,sga_target=2G,sga_max_size=2G
Enter SYS user password:
Enter SYSTEM user password:

Having set it up and copied the wallet from the source to the staging node, we were ready to set up a Data Guard standby, where the cloud-based standby would use the cloned home and be encrypted with the master key we had created on the source.

I used the Oracle Zero Downtime Migration (ZDM) tool to establish the replication and prepare the staging database. It saved time and effort and avoided human mistakes by providing a unified, consistent approach across all the migrations. We spent some time troubleshooting various issues during implementation and a dry run for the first database, but it paid off later. We used the parameter "-pauseafter ZDM_CONFIGURE_DG_SRC" to pause before the actual cutover. As you can see, Oracle ZDM can be useful even when it cannot cover the complete migration path.

Before doing the cutover we also moved the standby datafiles inside ASM, under "<Disk group>/<CDB unique name>/<Future PDB GUID>", using RMAN backup as copy. I might write another short blog with all the details on how to do that.
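In short, that move boils down to an RMAN image copy plus a switch; a minimal sketch, with the placeholders in the FORMAT path to be filled in and managed recovery stopped on the standby first:

```
RMAN> BACKUP AS COPY DATABASE FORMAT '+DATAC1/<CDB unique name>/<future PDB GUID>/DATAFILE/%U';
RMAN> SWITCH DATABASE TO COPY;
```

After the switch, the controlfile points at the copies inside ASM and the old file-system copies can be removed.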

We kept the Data Guard replication running until cutover time, when the real production downtime started. At the scheduled time we resumed the ZDM job to complete the database switchover and make our staging database the primary. The command was simple:

./zdmcli resume job -jobid 3
and in the output of the "query job" command we got the list of completed actions:
ZDM_CLONE_TGT ................. COMPLETED
[zdmuser@vlxpr1008 ~]$

After the successful switchover we started the staging database in read-only mode and exported the encryption key and the description XML file needed to plug it into the CDB.

SQL> ADMINISTER KEY MANAGEMENT EXPORT ENCRYPTION KEYS WITH SECRET "my_secret" TO '/home/oracle/appdb01_export.p12' FORCE KEYSTORE IDENTIFIED BY #SYS_PASSWORD#;
keystore altered.
SQL> !ls -l /home/oracle/appdb01_export.p12
-rw-r--r-- 1 oracle asmadmin 2548 Mar  2 19:48 /home/oracle/appdb01_export.p12
SQL> BEGIN
  2    DBMS_PDB.DESCRIBE(
  3      pdb_descr_file => '/home/oracle/appdb01.xml');
  4  END;
  5  /
PL/SQL PROCEDURE successfully completed.

It is recommended to verify the database for plug-in violations before plugging it into the target CDB, using the DBMS_PDB.CHECK_PLUG_COMPATIBILITY function and the exported XML file.

SQL> SET SERVEROUTPUT ON
SQL> DECLARE
  2    compatible CONSTANT VARCHAR2(3) :=
  3      CASE DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
  4             pdb_descr_file => '/home/oracle/appdb1.xml',
  5             pdb_name       => 'appdb1')
  6        WHEN TRUE THEN 'YES'
  7        ELSE 'NO'
  8      END;
  9  BEGIN
 10    DBMS_OUTPUT.PUT_LINE(compatible);
 11  END;
 12  /
PL/SQL PROCEDURE successfully completed.
SQL> SELECT line,message,STATUS FROM pdb_plug_in_violations WHERE name='APPDB1' ORDER BY TIME,line;

Some violations, like the version mismatch and the fact that the database is not yet a PDB, could be ignored. Also keep in mind that violations of type "ERROR" should be fixed sooner or later, while "WARNING" ones might not have any impact.

After that we shut down our staging database and plugged it into the target CDB with the "NOCOPY" option, reusing the already encrypted data files and saving time during the cutover downtime.

SQL> CREATE pluggable DATABASE appdb01 USING '/home/oracle/appdb01.xml' nocopy;
Pluggable DATABASE created.
SQL> SHOW pdbs
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------

Our database was plugged into the target CDB and was ready for the upgrade. Before doing the upgrade, I imported the master encryption key we had used on the source and staging databases.

SQL> ADMINISTER KEY MANAGEMENT IMPORT ENCRYPTION KEYS WITH SECRET "my_secret" FROM '/home/oracle/appdb01_export.p12' FORCE KEYSTORE IDENTIFIED BY #SYS_PASSWORD# WITH BACKUP;
keystore altered.

The next step was to upgrade our new PDB to the same version as the container.

SQL> ALTER SESSION SET container=APPDB01;
SQL> STARTUP UPGRADE;
[oracle@oracloud01 ~]$ cd $ORACLE_HOME/rdbms/admin
[oracle@oracloud01 admin]$ $ORACLE_HOME/perl/bin/perl catctl.pl -c APPDB01 catupgrd.sql
Argument list for []
For Oracle internal use only A = 0
Run in                       c = APPDB01
Do not run in                C = 0
Input Directory              d = 0
Echo OFF                     e = 1
Simulate                     E = 0
Forced cleanup               F = 0
Log Id                       i = 0
Child Process                I = 0
Log Dir                      l = 0
Priority List Name           L = 0
Upgrade Mode active          M = 0
SQL Process Count            n = 0
SQL PDB Process Count        N = 0
Open Mode Normal             o = 0
Start Phase                  p = 0
End Phase                    P = 0
Reverse Order                r = 0
AutoUpgrade Resume           R = 0
Script                       s = 0
Serial Run                   S = 0
RO User Tablespaces          T = 0
Display Phases               y = 0
Debug                        z = 0
Debug                        Z = 0
...output was reduced...

After all upgrade steps completed successfully (including the fixup.sql script, if required), our database was almost ready and only needed to be converted into a proper PDB.

SQL> ALTER SESSION SET container=appdb01;
SESSION altered.
SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
SQL> SET trimout ON
SQL> SET trimspool ON
SQL> SET underline "-"
SQL> SET verify OFF
SQL> SET wrap ON
SQL> SET xmloptimizationcheck OFF
SQL> SHOW pdbs
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         4 APPDB01                        READ WRITE YES
SQL> SHOW con_name
CON_NAME
------------------------------
APPDB01
SQL> shutdown
Pluggable DATABASE closed.
SQL> startup
Pluggable DATABASE opened.

And you can verify all the components in your pluggable database using the DBA_REGISTRY view:

SQL> col comp_name FOR a50
SQL> SELECT comp_name, version, STATUS FROM dba_registry;
COMP_NAME                                          VERSION                        STATUS
-------------------------------------------------- ------------------------------ --------------------
Oracle DATABASE Catalog Views                                 VALID
Oracle DATABASE Packages AND Types                            VALID
JServer JAVA Virtual Machine                                  VALID
Oracle XDK                                                    VALID
Oracle DATABASE Java Packages                                 VALID
OLAP Analytic Workspace                                       VALID
Oracle REAL Application Clusters                              VALID
Oracle XML DATABASE                                           VALID
Oracle Workspace Manager                                      VALID
Oracle Text                                                   VALID
Oracle Multimedia                                             VALID
Spatial                                                       VALID
Oracle OLAP API                                               VALID
Oracle Label Security                                         VALID
Oracle DATABASE Vault                                         VALID
15 ROWS selected.

The result was a fully migrated and upgraded database after a 1 hour 30 minute cutover. Of course, you still need to create services, complete acceptance and verification tests, adjust UNDO tablespaces, and perform application- or company-specific actions, but the migration itself is done. The staging home and database leftovers can be removed if they are not going to be used for the next migration to the same container.

I didn't include all the small details and issues we encountered and had to solve during our migrations – it would be too long and totally unreadable. Hopefully I will be able to run a webinar or a session at one of the virtual events about the different pitfalls and unexpected issues you can hit during a migration.

This was only one case, with a 12.2 source and a 19c target in the Oracle cloud, but over the last several months we have done different migrations involving other source and target versions and platforms. Let us know if you need our help and we will be happy to assist.

Linux LVM for Oracle Database on OCI

Oracle Database as a service (DBCS) on Oracle Cloud Infrastructure (OCI) has traditionally been built on Oracle Grid Infrastructure with ASM as the main storage layer for the database; however, Oracle has recently started to offer Linux LVM as a storage alternative. Which option is better? Let's review some of the differences between the two.

When provisioning a new DBCS VM on OCI you are given two choices, Oracle Grid Infrastructure and Linux LVM. Linux LVM is positioned by Oracle as the better option for quick deployment.


How much faster is the deployment of a VM with LVM compared to the GI/ASM option? I compared both options in the Toronto region. The creation of the LVM-based database system ran from 14:07 GMT to 14:25 GMT, just about 18 minutes. The ASM-based DBCS deployment ran from 14:35 GMT to 15:56 GMT, taking 1 hour 21 minutes. The ASM option was about 4.5 times slower.
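The ratio is easy to verify from the timestamps above (a trivial check, in shell):

```shell
# Deployment durations observed in the Toronto region (minutes)
lvm_minutes=18   # 14:07-14:25 GMT
asm_minutes=81   # 14:35-15:56 GMT
# 81 / 18 = 4.5, hence "about 4.5 times slower"
awk -v a="$asm_minutes" -v l="$lvm_minutes" 'BEGIN { printf "%.1fx\n", a / l }'
```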
What are the other differences? First, the LVM-based DB system is a single-node option only – RAC is not available on LVM. Second, there are differences in the available database versions. The GI/ASM option offers the full range from 11gR2 to 19c, but the LVM-based option can use only the 18c and 19c database versions.


Third, the initial storage size available on the GI/ASM version ranges from 256 GB up to 40 TB, whereas for the LVM option the initial size ranges from 256 GB to 8 TB. Scaling is different as well. The maximum storage scaling for the LVM option depends on the initial storage size chosen during creation. For example, with an initial 256 GB we can scale up only to 2560 GB. The full matrix of scaling options for an LVM-based database can be found in the Oracle documentation.
On the LVM-based VM, we get not one but two different volume groups for our database. One of them is the 252 GB RECO_GRP, designed for redo logs and built from two 128 GB physical volumes, and the second is DATA_GRP with another two 128 GB volumes.

On the ASM version, we have eight 64 GB disks in two external-redundancy ASM disk groups. It is roughly the same volume size and the same redundancy level; it looks like Oracle relies on storage-level (hardware) redundancy rather than ASM or LVM mirroring.

What about performance? I tried a simple load test using Dominic Giles' Swingbench tool and compared similar runs on the LVM- and ASM-based DB systems, created in the same region with the same VM shape and storage size. I used a small VM.Standard2.1 shape and a 256 GB initial storage allocation. The options for the "oewizard" generator were "-async_off -scale 5 -hashpart -create -cl -v".
Here are results for LVM based deployment.
The SOE schema creation time:


For the test itself I used the “charbench” with parameters “-c ../configs/SOE_Server_Side_V2.xml -v users,tpm,tps,vresp -intermin 0 -intermax 0 -min 0 -max 0 -uc 128 -di SQ,WQ,WA -rt 00:10:00”


Here is the test result summary for the LVM based instance:


And here are results for the ASM GI installation.
The SOE schema generation:

We can see that it took 58 minutes on ASM vs 34 minutes on LVM, with 24,544 rows generated per second on ASM vs 43,491 on LVM. I cannot say for sure without more elaborate troubleshooting why the ASM run was slower, but I could see that CPU usage was significantly higher on the ASM-based VM than on the LVM one, and it seemed that not all the load came from the database – some other tools (like OSWatcher) contributed to it. It could well show different results with bigger shapes, where the database would be able to use more CPU.

And here is the test result summary for the ASM based instance:


The tests showed relatively the same performance ratio between LVM and ASM based instances as during the data generation. The LVM was about two times faster than the ASM based instance. When I looked at the AWR for the ASM based instance it seemed that the CPU was the main bottleneck in the performance. As I said earlier it is quite possible that for larger VMs with more CPU the difference would not be as big.

Overall, the LVM-based option for DBCS can be a great choice if you want to spin up a new single-node Oracle DBCS instance and can work within the scaling and DB version limitations. In terms of performance, LVM showed much better results than the ASM option for a small 1 OCPU shape VM. In my opinion, LVM is a good tool for developers and testers, or even for production machines, considering the superior performance results on a small machine.

Copy files to Oracle OCI cloud object storage from the command line.

This blog post is a bit longer than usual, but I wanted to cover at least three options for uploading files to Oracle OCI object storage. If you need to upload just one file, you can stop reading after the first option, since it covers most needs for a single-file upload. But if you want a bit more, it makes sense to check the other options too.

The OCI Object Storage has a web interface with an "Upload object" button, but sometimes you need to upload files directly from a host where you have only a command-line shell. In general, we have at least three ways to do that.
The first and simplest way is to create a temporary "Pre-Authenticated Request" which expires after a specified time. The procedure is easy and intuitive.
You need to go to your bucket details and click on the right side to open the "Pre-Authenticated Requests" tab.


Push the "Create Pre-Authenticated Request" button and choose a name and expiration time for the link.


The link appears in a pop-up window only once, and you have to copy and save it if you want to use it later. If you forget to do that it is not a problem – you can create another one.

I created a link and used it to upload a test file to the "TestUpload" bucket without any problem.

[opc@sandbox tmp]$dd if=/dev/zero of=random_file.out bs=1024k count=5
5+0 records in
5+0 records out
5242880 bytes transferred in 0.001785 secs (2937122019 bytes/sec)
[opc@sandbox tmp]$ll
total 10240
-rw-r--r-- 1 otochkin staff 5.0M 9 Mar 09:55 random_file.out
[opc@sandbox tmp]$ curl -T random_file.out <pre-authenticated request URL>
[opc@sandbox tmp]$

It is the easiest way, but what if you want to set up a more permanent process without disappearing links? Maybe the upload is going to be part of a data flow, or you want to schedule a regular activity. The answers are the Oracle OCI CLI and the REST API interface using API keys. Let's first check how we can do it without installing the Oracle OCI CLI.

The first thing you need is an "API key". Behind the scenes it is the public part of a key pair you create on the box where you plan to run your scripts, or in your application.

[opc@sandbox ~]$ mkdir ~/.oci
[opc@sandbox ~]$ openssl genrsa -out ~/.oci/oci_api_key.pem 2048
[opc@sandbox ~]$ chmod go-rwx ~/.oci/oci_api_key.pem
[opc@sandbox ~]$ openssl rsa -pubout -in ~/.oci/oci_api_key.pem -out ~/.oci/oci_api_key_public.pem

[opc@sandbox ~]$ cat ~/.oci/oci_api_key_public.pem
-----BEGIN PUBLIC KEY-----
...
-----END PUBLIC KEY-----
[opc@sandbox ~]$

You need to copy and paste the output of the last command (from "-----BEGIN PUBLIC KEY-----" to "-----END PUBLIC KEY-----") into the form that appears when you push the "Add Public Key" button in your user details.


With the API key in your OCI profile, we can now use the oci-curl function provided by Oracle in our command line. But before doing that we need to gather some values to pass to the function. The tenancy OCID can be found in the tenancy details, available from the drop-down menu in the top right corner of the OCI web page. The same menu provides your user details, where we find the user OCID. The key fingerprint for our recently created key can be found on the same page.
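If you would rather compute the fingerprint locally than read it from the console, it is the colon-separated MD5 of the DER-encoded public key; a sketch using a throwaway key (the path is illustrative):

```shell
# Generate a throwaway RSA key and derive its OCI-style fingerprint:
# MD5 over the DER form of the public key, printed as colon-separated hex pairs.
openssl genrsa -out /tmp/demo_api_key.pem 2048 2>/dev/null
fp=$(openssl rsa -in /tmp/demo_api_key.pem -pubout -outform DER 2>/dev/null \
      | openssl md5 -c | awk '{print $NF}')
echo "$fp"
```

The printed value should match the fingerprint the console shows after you upload the corresponding public key.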


Now you can change the section in the script, replacing the OCIDs with your own values:

# TODO: update these values to your own
local tenancyId="ocid1.tenancy.oc1..aaaaaaaaba3pv6wkcr4jqae5f15p2b2m2yt2j6rx32uzr4h25vqstifsfdsq";
local authUserId="ocid1.user.oc1..aaaaaaaat5nvwcna5j6aqzjcaty5eqbb6qt2jvpkanghtgdaqedqw3rynjq";
local keyFingerprint="20:3b:97:13:55:1c:5b:0d:d3:37:d8:50:4e:c5:3a:34";
local privateKeyPath="/Users/someuser/.oci/oci_api_key.pem";
Instead of hard-coding the OCIDs in the script, you may choose to provide them as environment variables, either via an "export" command in the shell or by putting them into an environment file. Here is an example of the latter.
Creating a file
[opc@sandbox ~]$ vi .oci_env

privateKeyPath=~/.oci/oci_api_key.pem
keyFingerprint="c9:3b:97:13:55:1c:5b:0d:d3:37:d8:50:4e:c5:3a:fe"
authUserId=ocid1.user.oc1..aaaaaaaaaba3pv6wkcr4jqae5f15p2b2m2yt2j6rx32uzr4h25vqstifsfdsq
tenancyId=ocid1.tenancy.oc1..aaaaaaaat5nvwcna5j6aqzjcaty5eqbb6qt2jvpkanghtgdaqedqw3rynjq
compartmentId=ocid1.compartment.oc1..aaaaaaaa4laqzjcaty5eqbb6qt7cdfx2jl4d7bvuitvlmz4b5c2hiz6dbssza
namespace=mytenancyname
bucketName=TestUpload
export privateKeyPath keyFingerprint authUserId tenancyId compartmentId endpoint namespace bucketName
You can see that in addition to the OCIDs used in the script, I've added the endpoint, namespace, bucket name, and the OCID of my compartment. We need those values to upload our files. We can source the file to export all the variables.
[opc@sandbox ~]$ source .oci_env
[opc@sandbox ~]$
Download the signing_sample_bash.txt script, remove the lines with the placeholder values for the OCIDs and paths, and remove the UTF-8 byte order mark from the file, replacing it with a simple "#" symbol.
[opc@sandbox ~]$ curl -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  4764  100  4764    0     0   8707      0 --:--:-- --:--:-- --:--:--  8709
[opc@sandbox ~]$ sed -i "/\(local tenancyId=\|local authUserId=\|local keyFingerprint=\|local privateKeyPath=\)/d" signing_sample_bash.txt
[opc@sandbox ~]$ file signing_sample_bash.txt
signing_sample_bash.txt: UTF-8 Unicode (with BOM) text
[opc@sandbox ~]$ sed -i "1s/^.*#/#/" signing_sample_bash.txt
[opc@sandbox ~]$ file signing_sample_bash.txt
signing_sample_bash.txt: ASCII text
[opc@sandbox ~]$
Run the script.
[opc@sandbox ~]$ source signing_sample_bash.txt
[opc@sandbox ~]$
Now we can use the "oci-curl" function in our command line and upload files to an OCI bucket without installing any software on the machine.
Create a file.
[opc@sandbox ~]$ dd if=/dev/urandom of=new_random_file.out bs=1024k count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.188255 s, 55.7 MB/s
[opc@sandbox ~]$
Upload from the command line
[opc@sandbox ~]$ oci-curl $endpoint put ./new_random_file.out /n/$namespace/b/$bucketName/o/new_random_file.out
[opc@gleb-bastion-us ~]$
We can list the files:
[opc@sandbox ~]$ oci-curl $endpoint get /n/$namespace/b/$bucketName/o/
{"objects":[{"name":"another_random_file.out"},{"name":"new_random_file.out"}]}

[opc@sandbox ~]$ 
And we can see the files in the bucket through the web console.
You can see more examples how to use the oci-curl function in the Oracle blog.
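For the curious, the core of oci-curl is an HTTP signature: it builds a small signing string from the date, (request-target), and host headers, signs it with the private API key, and base64-encodes the result into the Authorization header. A stripped-down sketch of just that step (key path, endpoint, path, and header order here are illustrative assumptions):

```shell
# Stand-in for the real API key; in practice this is $privateKeyPath.
key=/tmp/demo_signing_key.pem
openssl genrsa -out "$key" 2048 2>/dev/null

date_hdr=$(date -u '+%a, %d %b %Y %H:%M:%S GMT')
signing_string="date: ${date_hdr}
(request-target): get /n/mytenancyname/b/TestUpload/o/
host: objectstorage.ca-toronto-1.oraclecloud.com"

# RSA-SHA256 signature over the signing string, base64-encoded on one line;
# this value goes into the Authorization: Signature ... header.
sig=$(printf '%s' "$signing_string" | openssl dgst -sha256 -sign "$key" | openssl base64 -A)
echo "$sig"
```

The real function also assembles the keyId from the tenancy OCID, user OCID, and key fingerprint, and adds body-related headers (content-length, x-content-sha256) for PUT and POST requests.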
The last way is to install the Oracle OCI CLI, as described in the documentation. It takes only a few minutes. You need to run just one command and answer a few questions.
[opc@sandbox ~]$ bash -c "$(curl -L"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  6283  100  6283    0     0  23755      0 --:--:-- --:--:-- --:--:-- 23889
Downloading Oracle Cloud Infrastructure CLI install script from to /tmp/oci_cli_install_tmp_mwll.
######################################################################## 100.0%
Python3 not found on system PATH
Running install script.
output was reduced.
Then you need to configure the CLI using the "oci setup config" command.
[opc@sandbox ~]$ oci setup config

This command provides a walkthrough of creating a valid CLI config file

It will ask for your tenancy and user OCIDs and suggest creating new keys, but you can answer "n" if you already have a key.
Enter a region (e.g. ca-toronto-1, eu-frankfurt-1, uk-london-1, us-ashburn-1, us-gov-ashburn-1, us-gov-chicago-1, us-gov-phoenix-1, us-langley-1, us-luke-1, us-phoenix-1): ca-toronto-1
Do you want to generate a new RSA key pair? (If you decline you will be asked to supply the path to an existing key.) [Y/n]: n
Enter the location of your private key file: /home/opc/.oci/oci_api_key.pem
Fingerprint: 20:3b:97:13:55:1c:5b:0d:d3:37:d8:50:4e:c5:3a:34
Config written to /home/opc/.oci/config
If you haven't already uploaded your public key through the console,
follow the instructions on the page linked below in the section 'How to
upload the public key':
[opc@sandbox ~]$

And we can use the oci command line interface to upload or list the files or perform other actions.

[opc@sandbox ~]$ oci os object put -bn TestUpload --file one_more_random_file.out
Uploading object [####################################] 100%
{
  "etag": "31a3ae0c-5749-4390-8bae-d937a1709d9a",
  "last-modified": "Wed, 03 Apr 2019 16:21:48 GMT",
  "opc-content-md5": "s18Q1y1YYX113hBOqA19Mw=="
}
[opc@sandbox ~]$ oci os object list -bn TestUpload
{
  "data": [
    {
      "md5": "y3wX2q+fN+lBHppGMJqfhw==",
      "name": "another_random_file.out",
      "size": 5242880,
      "time-created": "2019-03-10T16:16:33.707000+00:00"
    },
    {
      "md5": "/XHj/5+IkyoDbLteg6E/7w==",
      "name": "new_random_file.out",
      "size": 10485760,
      "time-created": "2019-04-03T15:05:47.270000+00:00"
    },
    {
      "md5": "s18Q1y1YYX113hBOqA19Mw==",
      "name": "one_more_random_file.out",
      "size": 10485760,
      "time-created": "2019-04-03T16:21:47.734000+00:00"
    }
  ],
  "prefixes": []
}
[opc@sandbox ~]$

As a short summary, the oci CLI can be useful and provides an easy way to perform regular operations, while the REST API is extremely useful when you want to incorporate uploads into your own code and applications, or when you cannot install any tools on your box due to restrictions.

Oracle Cloud Infrastructure multi-factor authentication

For a long time we didn't have multi-factor authentication in the Oracle cloud, even though those short-lived numeric codes are one of the best ways to reinforce your protection and prevent a bad actor from breaking your credentials. It is not 100% protection, but it is far better than just a username and a password. Recently I read in the Oracle Infrastructure cloud blog about the new native multi-factor authentication for the Identity and Access Management (IAM) system on Oracle Cloud Infrastructure (OCI). Of course, I went straight to my account and started to test it.
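As background, those short-lived codes are normally TOTP (RFC 6238): an HMAC over a shared secret and the current 30-second time window, dynamically truncated to six digits. A rough illustration in shell (the secret here is the RFC test key in hex; real authenticators take it base32-encoded from the QR code, and this sketch assumes xxd and openssl are available):

```shell
# TOTP sketch: HMAC-SHA1(secret, time/30), dynamically truncated to 6 digits.
secret_hex=3132333435363738393031323334353637383930   # RFC 6238 test secret
step=$(( $(date +%s) / 30 ))                          # 30-second time window
counter_hex=$(printf '%016x' "$step")
hmac=$(printf '%s' "$counter_hex" | xxd -r -p \
       | openssl dgst -sha1 -mac HMAC -macopt "hexkey:$secret_hex" \
       | awk '{print $NF}')
offset=$(( 16#${hmac:39:1} ))                         # low nibble of the last byte
dbc=$(( (16#${hmac:$((offset * 2)):8}) & 16#7fffffff ))  # dynamic truncation
printf '%06d\n' $(( dbc % 1000000 ))                  # the 6-digit code
```

The server performs the same computation and accepts the code only while the time window (plus a small tolerance) matches.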

I found that it was extremely easy and intuitive. I clicked on “user settings” in the drop down menu for my profile.


And there I saw the new button “Enable Multi-Factor Authentication”.


When I clicked the button, a new pop-up window with a QR code appeared. To use it you need to install the Oracle Authenticator on your mobile phone. I tried the Google Authenticator, but it didn't work for me.


After scanning the QR code, the Oracle Authenticator automatically added a new record with a name like <tenancy_name – user_name> and provided the number you need to enter into the form. After that, the user was registered for multi-factor authentication, and the new code form appeared after entering the username and password.


It worked without any problems and gave me more assurance and protection when working with Oracle Cloud Infrastructure.

A couple of things worth to mention. The new native multi-factor authentication works only for OCI users and doesn’t work for federated and SSO users. And the second thing is to be careful trying other non-oracle mobile authenticators. When I tried the Google one it allowed me to enable the authentication but I was not able to log in after that. Luckily your administrator can disable the feature and you can try it again with correct authenticator software.