Oracle OCI Resource Manager Discovery.

If you work with Terraform, you are probably familiar with the situation where a lot of resources have already been deployed manually. What options do we have in such a case? The first is to use the native Terraform resource discovery and create a configuration and state file that can be imported into your enterprise configuration. But if you plan to use Resource Manager in OCI, you can use the new Resource Manager Discovery feature, which creates a stack by discovering the resources in a compartment.
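To illustrate the first option, resource discovery can be driven by the OCI Terraform provider binary itself. The sketch below is only an outline: the compartment OCID and output directory are placeholders, authentication is assumed to come from your usual OCI CLI/SDK configuration, and the exact flags are worth checking against the current provider documentation.

otochkin$ terraform-provider-oci -command=export \
      -compartment_id=ocid1.compartment.oc1..aaaa... \
      -output_path=./discovered-config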
Let’s see how it works. In my Ashburn region, I have a regional network with private and public subnets, three compute instances, and a MySQL database. All the resources were deployed manually from the console or command line without using OCI Resource Manager.

Now I go to the Resource Manager page and push the “Create Stack” button.

When you do that, you have four options, and the last one is to create a stack from the existing configuration using already deployed resources.

You can choose whether you want all services in the compartment or only a specific subset of resources. It would be nice to be able to filter by tags in addition to the compartment, but that is not an option for now. Also, be aware that sub-compartments are not included.

Then you push Next, Next again, and finally, after pushing the “Create” button, you get a running Oracle Resource Manager stack with all the selected resources in the compartment. That can be an excellent first step in adopting an Infrastructure as Code (IaC) approach in your environment.

For those who are just getting started with Terraform and Resource Manager, it can serve as good training material and a syntax template. You can download the zip file with the Terraform configuration, modify it, zip it back up, and edit the stack by pushing the “Edit Stack” button to upload the new configuration.
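If you prefer to script that round trip, a hedged sketch using the OCI CLI could look like this; the stack OCID and file names are placeholders, and the exact options should be checked with “oci resource-manager stack update --help”:

otochkin$ unzip ormstack.zip -d stack-src              # unpack the downloaded configuration
otochkin$ vi stack-src/core.tf                         # make your changes
otochkin$ cd stack-src && zip -r ../ormstack-new.zip . && cd ..
otochkin$ oci resource-manager stack update --stack-id ocid1.ormstack.oc1..aaaa... --config-source ormstack-new.zip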

If you look inside the zip file, you will see normal Terraform files that can be used as a basis for your future Resource Manager deployments. The zip file contains a dedicated “*.tf” file for each group of resources. I don’t have resources for most of those services, so many of the “.tf” files are empty.

otochkin$ ll
total 328
-rw-rw-r--@ 1 otochkin  staff    63B 12 Aug  2020 analytics.tf
-rw-rw-r--@ 1 otochkin  staff    63B 12 Aug  2020 apigateway.tf
-rw-rw-r--@ 1 otochkin  staff    63B 12 Aug  2020 auto_scaling.tf
-rw-rw-r--@ 1 otochkin  staff   462B 12 Aug  2020 availability_domain.tf
-rw-rw-r--@ 1 otochkin  staff    63B 12 Aug  2020 bds.tf
-rw-rw-r--@ 1 otochkin  staff    63B 12 Aug  2020 containerengine.tf
-rw-rw-r--@ 1 otochkin  staff    18K 12 Aug  2020 core.tf
-rw-rw-r--@ 1 otochkin  staff    63B 12 Aug  2020 data_safe.tf
-rw-rw-r--@ 1 otochkin  staff    63B 12 Aug  2020 database.tf
-rw-rw-r--@ 1 otochkin  staff    63B 12 Aug  2020 datacatalog.tf
-rw-rw-r--@ 1 otochkin  staff    63B 12 Aug  2020 dataflow.tf
-rw-rw-r--@ 1 otochkin  staff    63B 12 Aug  2020 dataintegration.tf
-rw-rw-r--@ 1 otochkin  staff    63B 12 Aug  2020 datascience.tf
-rw-rw-r--@ 1 otochkin  staff    63B 12 Aug  2020 dns.tf
-rw-rw-r--@ 1 otochkin  staff    63B 12 Aug  2020 email_compartment.tf
-rw-rw-r--@ 1 otochkin  staff   999B 12 Aug  2020 events.tf
-rw-rw-r--@ 1 otochkin  staff    63B 12 Aug  2020 file_storage.tf
-rw-rw-r--@ 1 otochkin  staff    63B 12 Aug  2020 functions.tf
-rw-rw-r--@ 1 otochkin  staff    63B 12 Aug  2020 health_checks.tf
-rw-rw-r--@ 1 otochkin  staff    63B 12 Aug  2020 integration.tf
-rw-rw-r--@ 1 otochkin  staff    63B 12 Aug  2020 kms.tf
-rw-rw-r--@ 1 otochkin  staff    63B 12 Aug  2020 load_balancer.tf
-rw-rw-r--@ 1 otochkin  staff   1.8K 12 Aug  2020 marketplace.tf
-rw-rw-r--@ 1 otochkin  staff    63B 12 Aug  2020 monitoring.tf
-rw-rw-r--@ 1 otochkin  staff    63B 12 Aug  2020 mysql.tf
-rw-rw-r--@ 1 otochkin  staff    63B 12 Aug  2020 nosql.tf
-rw-rw-r--@ 1 otochkin  staff   1.5K 12 Aug  2020 object_storage.tf
-rw-rw-r--@ 1 otochkin  staff    63B 12 Aug  2020 oce.tf
-rw-rw-r--@ 1 otochkin  staff    63B 12 Aug  2020 ocvp.tf
-rw-rw-r--@ 1 otochkin  staff    63B 12 Aug  2020 oda.tf
-rw-rw-r--@ 1 otochkin  staff   648B 12 Aug  2020 ons.tf
-rw-rw-r--@ 1 otochkin  staff    63B 12 Aug  2020 osmanagement.tf
-rw-rw-r--@ 1 otochkin  staff    38B 12 Aug  2020 provider.tf
-rw-rw-r--@ 1 otochkin  staff    63B 12 Aug  2020 streaming.tf
-rw-rw-r--@ 1 otochkin  staff    63B 12 Aug  2020 tagging.tf
-rw-rw-r--@ 1 otochkin  staff   596B 12 Aug  2020 vars.tf
-rw-rw-r--@ 1 otochkin  staff    63B 12 Aug  2020 waas.tf
 
 
otochkin$ cat apigateway.tf
## This configuration was generated by terraform-provider-oci

Most of the resources are concentrated in the “core.tf” file, where we can see the compute instances and the other resources, including the network and the database.

otochkin$ cat core.tf
## This configuration was generated by terraform-provider-oci
 
resource oci_core_instance export_app-forms-01 {
  agent_config {
    is_management_disabled = "false"
    is_monitoring_disabled = "false"
  }
  availability_domain = data.oci_identity_availability_domain.export_gwmA-US-ASHBURN-AD-1.name
  compartment_id      = var.compartment_ocid
  create_vnic_details {
    assign_public_ip = "true"
    defined_tags = {
...

I think the new feature is one of the key improvements that help administrators adopt an automated deployment and management framework. The next blog will be about using version control for your Terraform in the Oracle cloud. Stay tuned.

Oracle ExaCC Gen 2 new features and improvements.

Some time ago, after the last Oracle Open World, Christine Kivi wrote a blog stating that this is not “your father’s Oracle” anymore. The rapid development and continuous improvements in Oracle Cloud are among the signs that Oracle is changing. Generation 2 Exadata Cloud at Customer (ExaCC) was released at that last OOW 19 and initially had some limitations in options and interface. The Oracle team promised to fix the issues and provide new functionality, planning some major updates for the next calendar year (2020). And as far as I can see, the team is delivering on those promises. Here I will review some of the new features implemented over the last several months. This is going to be a relatively long post, so feel free to go to the bottom, read the summary, and then read in detail only about the changes you are interested in.

Let’s start with March 31, when the shared Oracle homes feature was introduced to ExaCC Gen 2. Until then we could create only one Oracle database per home, which limited the potential number of databases in the environment. Making the homes shareable reduces usage of the local file system and potentially reduces management overhead. It looks like a simple change, but considering all the required updates in the internal tools responsible for creating, updating, cleaning, and patching databases, plus the changes to the OCI console, it was not a small fix. Now you can create multiple databases using the same home, but keep in mind that if you want to patch the home, you will need to update all the databases using it. Also be aware of the TDE wallet location and the dedicated sqlnet.ora and tnsnames.ora for each database.

[oracle@virt1 ~]$ cat testdb01.env | grep TNS
TNS_ADMIN=/u02/app/oracle/product/19.0.0.0/dbhome_2/network/admin/testdb01; export TNS_ADMIN
[oracle@virt1 ~]$ srvctl getenv database -db testdb02
testdb02:
TNS_ADMIN=/u02/app/oracle/product/19.0.0.0/dbhome_3/network/admin/testdb02
[oracle@virt1 ~]$
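A quick way to check which home each registered database uses is to loop over “srvctl config database”; a small sketch (database names and output details will differ in your environment):

[oracle@virt1 ~]$ for db in $(srvctl config database); do
>   echo -n "$db: "; srvctl config database -db "$db" | grep "Oracle home"
> done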

In May the improvements continued, and we got some changes in the interface and functionality. One of the major changes was the option to choose a character set for your container database. In some cases this is extremely important for database releases earlier than 12.2: if you have 11gR2 or 12cR1 databases to migrate, you have to be able to choose it. The option was theoretically available before through the “dbaasapi” tool, but it didn’t work perfectly well for me. After console support was added, it worked fine from both interfaces, whether the web GUI or the “dbaasapi” tool.

One more update, announced the same day in May, was related to the local time zone: it allows you to specify a time zone for your Exadata infrastructure during creation. Before that, the default time zone was UTC.

The next update came about a month later, in June, and included two major pieces of functionality. The first one was the long-anticipated option to create multiple VM clusters on ExaCC Gen 2. Let me explain in a couple of words what that means. Behind the scenes, ExaCC is built on virtualized Exadata, where the host machines (Dom0) and the storage are managed by Oracle, and the VM clusters (OVM-based DomU) are managed by the customer. The initial release of ExaCC Generation 2 allowed only one VM cluster per machine, which meant the VM (DomU) on each compute node took all the available memory, local filesystem, and so on. With the update you can virtually split your ExaCC into several VM clusters, segregating network, storage, and access for different environments. The change introduced new features in the OCI API and the web console interface for ExaCC.

We can specify CPU, Exadata storage, memory, and local filesystem size per cluster. And that is not all: the page for scaling an existing cluster exposes the same values, so you should be able to scale the OCPU, memory, and storage allocation for your VM cluster up and down.

One more feature was introduced on the same day in June – offline OCPU scaling for VM clusters. It provides the ability to scale your OCPU allocation even when you don’t have connectivity to OCI. Of course you will not be able to scale from the web GUI, but you can use the “dbaascli” utility to do that. The command “dbaascli cpuscale update” sets the new number of OCPUs per VM, and it is synchronized with the web console as soon as you get your connectivity back. So far the feature is available only in selected regions, such as us-ashburn-1, ap-hyderabad-1 and sa-saopaulo-1.

The next major update came just a week after the previous one and made Oracle Autonomous Database available on ExaCC. That was a long-expected change, promised when Generation 2 ExaCC was revealed to the public at Oracle Open World 2019. The update allows us to create Autonomous Exadata VM clusters and deploy Autonomous Container Databases and Autonomous Databases. I am sure most of us are aware of Oracle Autonomous Database, but if you’ve missed it you can check it here. The deployment is similar to what you get with Oracle Autonomous Database on dedicated Exadata Infrastructure. I have to note that we cannot mix non-Autonomous and Autonomous VM clusters on the same ExaCC, so unfortunately we cannot deploy a non-Autonomous VM cluster and try Autonomous on the same machine. The Autonomous deployment takes all the rack resources, and after the deployment we can have up to 12 Autonomous Container Databases. The maximum number of actual Autonomous Databases varies from 100 on a quarter-rack ExaCC to 400 on a full rack, so the number is roughly the same as the number of available OCPUs. I think it is a very promising step, and I see it as the second step toward proper PaaS databases.

The next update, on July 14, enabled per-second billing for OCPU usage on ExaCC, and a week later the same change came for Autonomous on ExaCC. Now, instead of paying for a full hour even if you scaled up only for several minutes, you pay just for the time you actually used the OCPUs. The minimum billing period is one minute, so if you scale your OCPUs up for less than a minute, you still pay for a full minute. I am not sure it is even technically possible to scale up or down in less than a minute; I seriously doubt it.
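To put per-second billing into numbers, here is a rough back-of-the-envelope calculation; the $1.00 per OCPU-hour rate is a made-up placeholder, not a real price:

# 4 extra OCPUs kept for 10 minutes is 4 * 600 seconds of OCPU time
echo "4 * 600 / 3600 * 1.00" | bc -l    # OCPU-hours times a hypothetical $1.00 per OCPU-hour
.66666666666666666666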

The last known and publicly available update, on July 28, introduced a new interface feature allowing you to patch GI and database homes from the OCI console. To perform the patching you need all nodes up and running and, for a database, all instances up. That is not too different from the requirements of the dbaascli tool. The main benefit I see is the ability to use the Oracle OCI API to embed patching into the other maintenance procedures in your OCI environment.

As a summary, here is the list of changes in chronological order:

  • March 31, 2020 – Shared Database Homes for Exadata Cloud at Customer Systems.
  • May 7, 2020 – Character set and national character set can now be configured.
  • June 13, 2020 – Create and manage multiple virtual machines per Exadata system.
  • June 13, 2020 – Scale OCPUs without cloud connectivity.
  • June 23, 2020 – Oracle Autonomous Database.
  • July 14, 2020 – Per-Second Billing for OCPU Usage.
  • July 28, 2020 – Oracle Grid Infrastructure and Oracle Database Patching.

It looks like the pace is gaining momentum and we are getting new updates almost every week now. What are we going to get in the coming weeks? Let’s see.

Moving Oracle database to the cloud. 12.2 standalone to 19C RAC PDB.

I see more and more Oracle databases moving to the public cloud or to a hybrid cloud solution. Depending on the platform, size, and options used, the path can differ, but the general approach boils down to three main options – Oracle RMAN backup and restore, Oracle Data Guard, or Oracle Data Pump with or without transportable tablespaces. Here I want to share our approach for migrating a 12.2 standalone database to a 19c RAC container as a PDB in Oracle Cloud Infrastructure (OCI).

Here are the initial conditions and requirements. We had multiple Linux x86-64 12.2 Enterprise Edition standalone databases on file-system-based storage, moving to Oracle Cloud Extreme Performance 19c with RAC on ASM.

Considering the size of the databases and the endian format (little), the most viable option was Oracle Data Guard (DG). The main question was whether to upgrade the database and convert it to a pluggable database (PDB) on-prem and move it to the cloud afterwards, or to do the migration, upgrade, and conversion in one shot within the same downtime window. We chose the latter.

Here is the high-level diagram:

And here is the general workflow:

The source database was analyzed with the Oracle preupgrade.jar tool to verify that it was ready to be upgraded to 19c. A few issues were fixed in advance and some reported problems were ignored. There is no universal recipe that tells everybody which warnings can be ignored and which ones should be taken into consideration and fixed.
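For reference, the check is typically run with the source database environment set, using the preupgrade.jar shipped with the target 19c home. The host name and home paths below are placeholders for our environment:

[oracle@onprem01 ~]$ # source environment (ORACLE_HOME, ORACLE_SID) already set to the 12.2 database
[oracle@onprem01 ~]$ $ORACLE_HOME/jdk/bin/java -jar /u01/app/oracle/product/19.0.0.0/dbhome_1/rdbms/admin/preupgrade.jar TERMINAL TEXT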

The next step was to prepare the database to use TDE on the target platform. An Oracle encryption wallet and a master key were created for the original database. The basic steps are as follows (all paths and values are arbitrary):

SQL> administer KEY management CREATE keystore '/u01/app/oracle/product/12.2.0/dbhome_1/network/admin' IDENTIFIED BY #SYS_PASSWORD#;
 
keystore altered.
 
SQL> administer KEY management SET keystore OPEN IDENTIFIED BY #SYS_PASSWORD#;
 
keystore altered.
 
SQL> administer KEY management SET KEY IDENTIFIED BY #SYS_PASSWORD# WITH backup;
 
keystore altered.
 
SQL> administer KEY management CREATE auto_login keystore FROM keystore '/u01/app/oracle/product/12.2.0/dbhome_1/network/admin' IDENTIFIED BY #SYS_PASSWORD#;
 
keystore altered.
 
SQL> administer KEY management SET keystore close IDENTIFIED BY #SYS_PASSWORD#;
 
keystore altered.

You can verify the status of your wallet in the v$encryption_wallet view and make sure it is shown as “open”.
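A minimal check could look like this (the host name is a placeholder):

[oracle@onprem01 ~]$ sqlplus -s / as sysdba <<'EOF'
col wrl_parameter for a55
select wrl_type, wrl_parameter, status, wallet_type from v$encryption_wallet;
EOF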

After that we created the target 19c container RAC database on DBCS using ASM as storage (remember, you have two storage options for DBCS). Having the database created, we were able to use the first node as a staging area for our DR standby. We cloned the database software from on-prem to that node and created a dummy database with the same database name but a different unique name.

[oracle@oracloud01 ~]$./clone.pl ORACLE_HOME=/u01/app/oracle/staging/12.2.0/dbhome_1 ORACLE_HOME_NAME=OraHome12201_dbhome_1 ORACLE_BASE=/u01/app/oracle 
...
 
[oracle@oracloud01 ~]$ dbca -createDatabase -silent -createAsContainerDatabase false -templateName General_Purpose.dbc -gdbName nsppwcb -storageType ASM -diskGroupName +DATAC1 -recoveryAreaDestination +RECOC1 -recoveryAreaSize 10240 -initParams db_name=appdb,db_unique_name=appdbstg,sga_target=2G,sga_max_size=2G
Enter SYS user password:
 
Enter SYSTEM user password:

Having set that up and copied the wallet from the source to the staging node, we were ready to build a Data Guard standby, where the cloud-based standby would use the cloned home and be encrypted with the master key we had created on the source.

I used the Oracle Zero Downtime Migration (ZDM) tool to establish the replication and prepare the staging database. It saved time and effort and avoided human mistakes by providing a unified, consistent approach across all the migrations. We spent some time troubleshooting various issues during the implementation and a dry run for the first database, but it paid off later. We used the parameter “-pauseafter ZDM_CONFIGURE_DG_SRC” to pause before the actual cutover. As you can see, Oracle ZDM can be useful even if it cannot cover the complete migration path.
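In a heavily trimmed form the flow looked like this: start the migration job with the pause point, then check it with “query job” until cutover. All the source and target arguments are environment-specific and omitted here:

[zdmuser@vlxpr1008 ~]$ ./zdmcli migrate database ... -pauseafter ZDM_CONFIGURE_DG_SRC
[zdmuser@vlxpr1008 ~]$ ./zdmcli query job -jobid 3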

Before doing the cutover we also moved the standby datafiles into ASM under “<Disk group>/<CDB unique name>/<Future PDB GUID>” using RMAN backup as copy. I might write another short blog with all the details on how to do that.
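In essence it was an RMAN backup as copy of the standby followed by a switch to the copies. A minimal sketch, with the disk group as a placeholder and the OMF settings that land the copies under the future PDB GUID path left out:

[oracle@oracloud01 ~]$ # standby mounted, managed recovery stopped before the switch
[oracle@oracloud01 ~]$ rman target / <<'EOF'
backup as copy database format '+DATAC1';
switch database to copy;
EOF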

We kept the Data Guard replication running until the cutover time, when the real downtime for production started. At the scheduled time we resumed the ZDM job to complete the database switchover and make our staging database the primary. The command was simple:

./zdmcli resume job -jobid 3
 
and in the output of the "query job" command we got the list of completed actions:
 
ZDM_DISCOVER_SRC .............. COMPLETED
ZDM_COPYFILES ................. COMPLETED
ZDM_PREPARE_TGT ............... COMPLETED
ZDM_SETUP_TDE_TGT ............. COMPLETED
ZDM_CLONE_TGT ................. COMPLETED
ZDM_FINALIZE_TGT .............. COMPLETED
ZDM_CONFIGURE_DG_SRC .......... COMPLETED
ZDM_SWITCHOVER_SRC ............ COMPLETED
ZDM_SWITCHOVER_TGT ............ COMPLETED
ZDM_MANIFEST_TO_CLOUD ......... COMPLETED
ZDM_NONCDBTOPDB_PRECHECK ...... COMPLETED
ZDM_NONCDBTOPDB_CONVERSION .... COMPLETED
ZDM_POSTUSERACTIONS ........... COMPLETED
ZDM_POSTUSERACTIONS_TGT ....... COMPLETED
ZDM_CLEANUP_SRC ............... COMPLETED
ZDM_CLEANUP_TGT ............... COMPLETED
[zdmuser@vlxpr1008 ~]$

After the successful switchover we started the staging database in read-only mode, exported the encryption key, and generated the XML description file needed to plug the database into the CDB.

SQL> administer KEY management export encryption KEYS WITH secret "my_secret" TO '/home/oracle/appdb01_export.p12' force keystore IDENTIFIED BY #SYS_PASSWORD;
 
keystore altered.
 
SQL> !ls -l /home/oracle/appdb01_export.p12
-rw-r--r-- 1 oracle asmadmin 2548 Mar  2 19:48 /home/oracle/appdb01_export.p12
 
SQL>
 
 
SQL> BEGIN
  DBMS_PDB.DESCRIBE(
    pdb_descr_file => '/home/oracle/appdb01.xml');
END;  2    3    4
  5  /
 
PL/SQL PROCEDURE successfully completed.
 
SQL>

It is recommended to verify whether the target PDB will have any violations when plugged into the target CDB, using the “DBMS_PDB.CHECK_PLUG_COMPATIBILITY” function and the exported XML file.

SQL> SET SERVEROUTPUT ON
DECLARE
  compatible CONSTANT VARCHAR2(3) :=
    CASE DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
           pdb_descr_file => '/home/oracle/appdb1.xml',
           pdb_name       => 'appdb1')
    WHEN TRUE THEN 'YES'
    ELSE 'NO'
END;
BEGIN
  DBMS_OUTPUT.PUT_LINE(compatible);
END;SQL>   2    3    4    5    6    7    8    9   10   11
 12  /
NO
 
PL/SQL PROCEDURE successfully completed.
 
SQL> SELECT line,message,STATUS FROM pdb_plug_in_violations WHERE name='APPDB1' ORDER BY TIME,line;

Some violations, such as the version mismatch and the fact that the database is not yet a PDB, can be ignored. Also keep in mind that some violations are of type “ERROR” and should be fixed sooner or later, while others are just “WARNING” and might not have any impact.

After that we shut down our staging database and plugged it into the target CDB with the “nocopy” option, effectively reusing the already encrypted data files and saving time during the cutover downtime.

SQL> CREATE pluggable DATABASE appdb01 USING '/home/oracle/appdb01.xml' nocopy;
 
Pluggable DATABASE created.
 
SQL> SHOW pdbs
 
    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 CDB01_PDB1			  READ WRITE NO
	 4 APPDB01			  MOUNTED
SQL>

Our database was plugged into the target CDB and was ready for the upgrade. Before doing the upgrade, I imported the master encryption key we used on the source and the staging database.

SQL> administer KEY management import encryption KEYS WITH secret "my_secret" FROM '/home/oracle/appdb01_export.p12' force keystore IDENTIFIED BY #SYS_PASSWORD WITH backup;
 
keystore altered.
 
SQL>

The next step is to upgrade our new PDB to make it the same version as the container.

SQL>alter session set container=APPDB01;
 
SQL>startup upgrade;
 
 
 
[oracle@oracloud01 ~]$ cd $ORACLE_HOME/rdbms/admin
[oracle@oracloud01 admin]$ $ORACLE_HOME/perl/bin/perl catctl.pl -c APPDB01 catupgrd.sql
 
Argument list for [catctl.pl]
For Oracle internal use only A = 0
Run in                       c = APPDB01
Do not run in                C = 0
Input Directory              d = 0
Echo OFF                     e = 1
Simulate                     E = 0
Forced cleanup               F = 0
Log Id                       i = 0
Child Process                I = 0
Log Dir                      l = 0
Priority List Name           L = 0
Upgrade Mode active          M = 0
SQL Process Count            n = 0
SQL PDB Process Count        N = 0
Open Mode Normal             o = 0
Start Phase                  p = 0
End Phase                    P = 0
Reverse Order                r = 0
AutoUpgrade Resume           R = 0
Script                       s = 0
Serial Run                   S = 0
RO User Tablespaces          T = 0
Display Phases               y = 0
Debug catcon.pm              z = 0
Debug catctl.pl              Z = 0
 
catctl.pl VERSION: [19.0.0.0.0]
           STATUS: [Production]
...

After all the upgrade steps completed successfully (including the fixup.sql script, if required), our database was almost ready and only needed to be converted into a PDB.

SQL> ALTER SESSION SET container=appdb01;
 
SESSION altered.
 
SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
...
SQL> SET trimout ON
SQL> SET trimspool ON
SQL> SET underline "-"
SQL> SET verify OFF
SQL> SET wrap ON
SQL> SET xmloptimizationcheck OFF
SQL>
SQL> SHOW pdbs
 
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         4 APPDB01                        READ WRITE YES
SQL> SHOW con_name
 
CON_NAME
------------------------------
APPDB01
SQL> shutdown IMMEDIATE
Pluggable DATABASE closed.
SQL> startup
Pluggable DATABASE opened.
SQL>

And you can verify all the components in your pluggable database using the dba_registry view:

SQL> col comp_name FOR a50
SQL> col STATUS FOR a20
SQL> SELECT comp_name, version, STATUS FROM dba_registry;
 
COMP_NAME                                          VERSION                        STATUS
-------------------------------------------------- ------------------------------ --------------------
Oracle DATABASE Catalog Views                      19.0.0.0.0                     VALID
Oracle DATABASE Packages AND Types                 19.0.0.0.0                     VALID
JServer JAVA Virtual Machine                       19.0.0.0.0                     VALID
Oracle XDK                                         19.0.0.0.0                     VALID
Oracle DATABASE Java Packages                      19.0.0.0.0                     VALID
OLAP Analytic Workspace                            19.0.0.0.0                     VALID
Oracle REAL Application Clusters                   19.0.0.0.0                     VALID
Oracle XML DATABASE                                19.0.0.0.0                     VALID
Oracle Workspace Manager                           19.0.0.0.0                     VALID
Oracle Text                                        19.0.0.0.0                     VALID
Oracle Multimedia                                  19.0.0.0.0                     VALID
Spatial                                            19.0.0.0.0                     VALID
Oracle OLAP API                                    19.0.0.0.0                     VALID
Oracle Label Security                              19.0.0.0.0                     VALID
Oracle DATABASE Vault                              19.0.0.0.0                     VALID
 
15 ROWS selected.
 
SQL>

The result is a fully migrated and upgraded database after a 1 hour 30 minute cutover. Of course you still need to create services, complete acceptance and verification tests, adjust UNDO tablespaces, and do some application- or company-specific actions, but the migration itself is done. The staging home and the leftovers from the staging database can be removed if they are not going to be used for the next migration to the same container.
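For the services piece, the usual pattern on RAC is srvctl with the -pdb option. A sketch where the database unique name, instance names, and service name are placeholders:

[oracle@oracloud01 ~]$ srvctl add service -db cdb01_iad1xy -service appdb01_rw -pdb APPDB01 -preferred cdb011,cdb012
[oracle@oracloud01 ~]$ srvctl start service -db cdb01_iad1xy -service appdb01_rw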

I didn’t include all the small details or issues we encountered and had to solve during our migrations – it would be too long and unreadable. Hopefully I will be able to put together a webinar, or discuss the different pitfalls and unexpected issues you can hit during a migration at one of the virtual events.

This is only one case, with a 12.2 source and a 19c target in the Oracle cloud, but over the last several months we have done different migrations involving other source and target versions and platforms. Let us know if you need our help and we will be happy to assist.

Copy files to Oracle OCI cloud object storage from command line.

This blog post is a bit longer than usual, but I wanted to cover at least three options for uploading files to Oracle OCI Object Storage. If you need to upload just one file, you can stop reading after the first option, since it probably covers most needs for uploading a single file. But if you want a bit more, it makes sense to check the other options too.

OCI Object Storage has a web interface with an “Upload object” button, but sometimes you need to upload files directly from a host where you have only a command-line shell. In general, we have at least three ways to do that.
The first and simplest way is to create a temporary “Pre-Authenticated Request” which expires after a specified time. The procedure is easy and intuitive.
You need to go to your bucket details and click on the right side to open the “Pre-Authenticated Requests” tab.


Push the “Create Pre-Authenticated Request” button and choose the name and expiration time for the link.


The link will appear in a pop-up window only once, and you have to copy and save it if you want to use it later. If you forget to do that, it is not a problem – you can create another one.
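As a side note, the same pre-authenticated request can also be created from the OCI CLI (installed later in this post). The name and expiry below are made up, and the exact options are worth checking with “oci os preauth-request create --help”:

[opc@sandbox ~]$ oci os preauth-request create --bucket-name TestUpload --name temp-upload \
      --access-type ObjectWrite --time-expires 2019-04-01T00:00:00+00:00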

I’ve created a link and used it to upload a test file to the “TestUpload” bucket without any problem.

[opc@sandbox tmp]$dd if=/dev/zero of=random_file.out bs=1024k count=5
5+0 records in
5+0 records out
5242880 bytes transferred in 0.001785 secs (2937122019 bytes/sec)
[opc@sandbox tmp]$ll
total 10240
-rw-r--r-- 1 otochkin staff 5.0M 9 Mar 09:55 random_file.out
[opc@sandbox tmp]$curl -T random_file.out https://objectstorage.ca-toronto-1.oraclecloud.com/p/PCmrR1tN3D_5SkJimndiatnClEwNQbnMpaVHfYYwio4/n/gleb/b/TestUpload/o/
[opc@sandbox tmp]$

It is the easiest way, but what if you want to set up a more permanent process without the disappearing links? Maybe the upload is going to be part of a data flow, or you want to schedule it as a regular activity. The answers are the Oracle OCI CLI and the REST API using API keys. Let’s check how we can do it without installing the Oracle OCI CLI.

The first thing you need is an “API key”. Behind the scenes it is the public part of a key pair you create on the box where you plan to run your scripts, or in your application.

[opc@sandbox ~]$ mkdir ~/.oci
[opc@sandbox ~]$ openssl genrsa -out ~/.oci/oci_api_key.pem 2048
[opc@sandbox ~]$ chmod go-rwx ~/.oci/oci_api_key.pem
[opc@sandbox ~]$ openssl rsa -pubout -in ~/.oci/oci_api_key.pem -out ~/.oci/oci_api_key_public.pem

[opc@sandbox ~]$ cat ~/.oci/oci_api_key_public.pem
-----BEGIN PUBLIC KEY-----
MIOFIjANBg.....

...

cQIDYQAB
-----END PUBLIC KEY-----
[opc@gleb-bastion-us ~]$

You need to copy and paste the output of the last command (from “-----BEGIN PUBLIC KEY-----” to “-----END PUBLIC KEY-----”) into the form that appears when you push the “Add Public Key” button on your user details page.


With the API key added to your profile in OCI, we can now use the oci-curl function provided by Oracle in our command line. But before doing that we need to gather some values to pass to the function. The tenancy OCID can be found in your tenancy details, reachable from the drop-down menu in the top right corner of the OCI web page. The same menu leads to your user details, where we find the user OCID, and the fingerprint of the recently created key can be found on the same page.


Now you can change this section of the script, replacing the OCIDs with your own values:

# TODO: update these values to your own
local tenancyId="ocid1.tenancy.oc1..aaaaaaaaba3pv6wkcr4jqae5f15p2b2m2yt2j6rx32uzr4h25vqstifsfdsq";
local authUserId="ocid1.user.oc1..aaaaaaaat5nvwcna5j6aqzjcaty5eqbb6qt2jvpkanghtgdaqedqw3rynjq";
local keyFingerprint="20:3b:97:13:55:1c:5b:0d:d3:37:d8:50:4e:c5:3a:34";
local privateKeyPath="/Users/someuser/.oci/oci_api_key.pem";
Instead of hard-coding the OCIDs in the script, you may choose to use environment variables, providing them either with an “export” command in the shell or by putting them into an environment file. Here is an example of how you can do that.
Create a file:
[opc@sandbox ~]$ vi .oci_env

privateKeyPath=~/.oci/oci_api_key.pem
keyFingerprint="c9:3b:97:13:55:1c:5b:0d:d3:37:d8:50:4e:c5:3a:fe"
authUserId=ocid1.user.oc1..aaaaaaaaaba3pv6wkcr4jqae5f15p2b2m2yt2j6rx32uzr4h25vqstifsfdsq
tenancyId=ocid1.tenancy.oc1..aaaaaaaat5nvwcna5j6aqzjcaty5eqbb6qt2jvpkanghtgdaqedqw3rynjq
compartmentId=ocid1.compartment.oc1..aaaaaaaa4laqzjcaty5eqbb6qt7cdfx2jl4d7bvuitvlmz4b5c2hiz6dbssza
endpoint=objectstorage.ca-toronto-1.oraclecloud.com
namespace=mytenancyname
bucketName=TestUpload
export privateKeyPath keyFingerprint authUserId tenancyId compartmentId endpoint namespace bucketName
You can see that in addition to the OCIDs used in the script, I’ve added the endpoint, the namespace, the bucket name, and the OCID of my compartment. We need those values to upload our files. We can source the file to export all those variables.
[opc@sandbox ~]$ source .oci_env
[opc@sandbox ~]$
Download signing_sample_bash.txt, remove the lines with the values for the OCIDs and paths, and replace the UTF-8 byte order mark at the beginning of the file with a simple “#” symbol.
 
[opc@sandbox ~]$ curl -O https://docs.cloud.oracle.com/iaas/Content/Resources/Assets/signing_sample_bash.txt
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  4764  100  4764    0     0   8707      0 --:--:-- --:--:-- --:--:--  8709
[opc@sandbox ~]$ sed -i "/\(local tenancyId=\|local authUserId=\|local keyFingerprint=\|local privateKeyPath=\)/d" signing_sample_bash.txt
[opc@sandbox ~]$ file signing_sample_bash.txt
signing_sample_bash.txt: UTF-8 Unicode (with BOM) text
[opc@sandbox ~]$ sed -i "1s/^.*#/#/" signing_sample_bash.txt
[opc@sandbox ~]$ file signing_sample_bash.txt
signing_sample_bash.txt: ASCII text
[opc@sandbox ~]$
Run the script.
[opc@sandbox ~]$ source signing_sample_bash.txt
[opc@sandbox ~]$
Now we can use the “oci-curl” function in our command line and upload files to an OCI bucket without installing any software on the machine.
Create a file.
[opc@sandbox ~]$ dd if=/dev/urandom of=new_random_file.out bs=1024k count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.188255 s, 55.7 MB/s
[opc@sandbox ~]$
Upload from the command line
[opc@sandbox ~]$ oci-curl $endpoint put ./new_random_file.out /n/$namespace/b/$bucketName/o/new_random_file.out
[opc@gleb-bastion-us ~]$
We can list the files:
[opc@sandbox ~]$ oci-curl $endpoint get /n/$namespace/b/$bucketName/o/
{"objects":[{"name":"another_random_file.out"},{"name":"new_random_file.out"}]}

[opc@sandbox ~]$ 
And we can see the file in the bucket through the web console as well.
You can see more examples of how to use the oci-curl function in the Oracle blog.
The last way is to install the Oracle OCI CLI as described in the documentation. It takes only a few minutes: you need to run just one command and answer a few questions.
[opc@sandbox ~]$ bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  6283  100  6283    0     0  23755      0 --:--:-- --:--:-- --:--:-- 23889
Downloading Oracle Cloud Infrastructure CLI install script from https://raw.githubusercontent.com/oracle/oci-cli/6dc61e3b5fd2781c5afff2decb532c24969fa6bf/scripts/install/install.py to /tmp/oci_cli_install_tmp_mwll.
######################################################################## 100.0%
Python3 not found on system PATH
Running install script.
...
output was reduced.
Then you need to configure the CLI using the “oci setup config” command.
[opc@sandbox ~]$ oci setup config

This command provides a walkthrough of creating a valid CLI config file

It will ask for your tenancy and user OCIDs and offer to create new keys, but you can answer “n” if you already have a key.
...
Enter a region (e.g. ca-toronto-1, eu-frankfurt-1, uk-london-1, us-ashburn-1, us-gov-ashburn-1, us-gov-chicago-1, us-gov-phoenix-1, us-langley-1, us-luke-1, us-phoenix-1): ca-toronto-1
Do you want to generate a new RSA key pair? (If you decline you will be asked to supply the path to an existing key.) [Y/n]: n
Enter the location of your private key file: /home/opc/.oci/oci_api_key.pem
Fingerprint: 20:3b:97:13:55:1c:5b:0d:d3:37:d8:50:4e:c5:3a:34
Config written to /home/opc/.oci/config
If you haven't already uploaded your public key through the console,
follow the instructions on the page linked below in the section 'How to
upload the public key':

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How2
[opc@sandbox ~]$

And we can use the OCI command-line interface to upload or list files, or to perform other actions.

[opc@sandbox~]$ oci os object put -bn TestUpload --file one_more_random_file.out
Uploading object [####################################] 100%
{
  "etag": "31a3ae0c-5749-4390-8bae-d937a1709d9a",
  "last-modified": "Wed, 03 Apr 2019 16:21:48 GMT",
  "opc-content-md5": "s18Q1y1YYX113hBOqA19Mw=="
}
[opc@sandbox ~]$ oci os object list -bn TestUpload
{
  "data": [
    {
      "md5": "y3wX2q+fN+lBHppGMJqfhw==",
      "name": "another_random_file.out",
      "size": 5242880,
      "time-created": "2019-03-10T16:16:33.707000+00:00"
    },
    {
      "md5": "/XHj/5+IkyoDbLteg6E/7w==",
      "name": "new_random_file.out",
      "size": 10485760,
      "time-created": "2019-04-03T15:05:47.270000+00:00"
    },
    {
      "md5": "s18Q1y1YYX113hBOqA19Mw==",
      "name": "one_more_random_file.out",
      "size": 10485760,
      "time-created": "2019-04-03T16:21:47.734000+00:00"
    }
  ],
  "prefixes": []
}
[opc@sandbox ~]$

As a short summary, I want to say that the oci-cli command-line interface can be useful and provides an easy way to perform regular operations, while the REST API can be extremely useful when you want to incorporate uploads into your own code and applications, or when you cannot install any tools on your box due to restrictions.