The cloud and how it might help in difficult times.

The recent changes caused by the virus and the economic meltdown have affected almost everybody in the world. We are all going through a difficult period of our history: while many companies are struggling to survive, others thrive and boost production. In such a volatile environment, it becomes more and more important to be able to adapt the IT environment to immediate business needs quickly.

We work with different customers, helping them adjust and evolve according to the changing business and IT landscape. This touches many aspects of IT, such as software and hardware support, logistics, availability of staff, and restrictions imposed by government authorities.

In one case, we had to solve a logistics puzzle to replace some faulty parts of critical infrastructure when the vendor didn’t have a physical presence in the location. In normal times, an engineer would fly over to replace the part, spend a night in a hotel, and fly back. Sounds easy, right? Now, when most flights are canceled, all the hotels in the vicinity are closed, and borders have backlogs of people trying to cross, that’s not easy anymore. We were able to work it out, but it was difficult and took much more time than expected. During all that time, the environment was running on redundant parts, and it could have been a disaster if any of those parts had failed.

In another case, a business’s total workload went from 80% of IT capacity to almost zero. At the same time, the company still had to keep the infrastructure up and pay for electricity, cooling, data center rent, and license support. All that money could have been saved if the company could temporarily reduce the number of licenses and computers according to real business needs.

And we all know about companies expanding and growing due to increased demand. For example, usage of Zoom ballooned overnight, reaching more than 200 million daily users. Some other companies providing delivery and remote services also experienced significant growth and unexpected load on their IT infrastructure. Some of them were unable to handle it, and their online services crashed.

I think this is the time when cloud-based solutions show how it can be done and how the cloud can help businesses be more flexible, agile, and able to keep up with demand. Let me list some benefits of putting your critical IT in a public cloud.

First, you don’t need to go out of your way to fix your IT infrastructure when all supply chains and regular logistics are broken. Make it somebody else’s problem, not yours.

You can quickly scale down your environment, reducing infrastructure subscription costs and licensing costs; the latter is probably even more important for some types of licensed software. Your already struggling business will not need to keep afloat a massive infrastructure that nobody is using. Airlines and the recreation industry are good examples.

At the same time, if you have designed and built your environment in and for the cloud, you can scale up and scale out to support growing demand for your business. Think about the well-known retail chains for hardware and home supplies.

And speaking of time, the best moment for changes can be when your infrastructure is idle and you can afford prolonged maintenance on it. For others, it is the best opportunity to rethink the business model and orient it more toward remote delivery and online retail.

I’ve listed here only a few reasons why the cloud is better adapted to changes in the business. But the cloud provides other benefits and opportunities too, such as improving your analytics and applying modern machine learning techniques, adding even more value to your data.

It is true that some cloud providers experienced serious capacity problems during the first days of increased demand, but so far, they have been able to overcome and fix most of the issues for their enterprise customers.
As a final word, I would like to say that we are ready to help you get through this difficult time and to share our experience and knowledge. The world is changing, and we are changing with it.

Exadata Cloud at Customer – number of active CPUs and adding a new database.

Let’s imagine a typical working day, and you are getting a request to add a new database to your Exadata Cloud at Customer (ExaCC). If you are not familiar with the product, you can read about it in detail here. In short, it is an Exadata machine with a cloud interface, something like Oracle Exadata Cloud Service, but with the hardware installed in your datacenter. 

You carry on with the request, fill in all the information on the database creation page, and push the “Create Database” button.

Everything looks correct, and you’ve probably done it several times already and don’t expect any surprises. But somehow, after a while, you get a notification that your request has failed and the database was not created.

When it happened to me the first time, I was disappointed by the lack of any useful information in the OCI console and logs. I was expecting a bit more than just “failed due to an unknown error”.

Here the quick troubleshooting part begins. I went to the logs on the VM cluster and, after some research, found the real reason why the request had failed.

 root@exacc01:~>tail -16 /var/opt/oracle/ocde/ocde_createdb_testdb03.out
Removed all entries from cfg file : /var/opt/oracle/ocde/ocde_createdb_testdb03.cfg matching passwd
 
ERROR : rac: _is_cpu_count_ok, cpu_count 8 is not enough for running_dbs_count 20. Please increase the number of cpus.At a minimum we should be one vcpu per two DBs.
ERROR: OCDE createdb pre-reqs failed, please check logs
corereg: secure: Wallet location is not defined, securing corereg means losing all credentials.
INFO: corereg: secure: Removed all entries matching passwd or decrypt_key from corereg file /var/opt/oracle/creg/testdb03.ini
INFO: Total time taken by ocde is 8 seconds
OCDE failed with message: ERROR: OCDE createdb pre-reqs failed, please check logs
 
 
INFO : ocde_time_format is 2020/04/05 09:06:05
OCDE failed with message: ERROR: OCDE createdb pre-reqs failed, please check logs
 
 
 
#### Completed OCDE with errors, please check logs ####
root@exacc01:~>

Digging a little deeper into the Oracle ExaCC tools, we can find the Perl module “rac.pm” and the “_is_cpu_count_ok()” function there. That function compares the number of CPUs to twice the number of running database instances. As a result, we have a hardcoded limit on the number of container databases on the ExaCC, bound to twice the number of OCPUs for the VM cluster.
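To make the pre-req check concrete, here is a minimal shell sketch of the same logic. The variable names follow the error message in the log above; the actual Perl implementation in rac.pm may differ in details.

```shell
#!/bin/sh
# Sketch of the _is_cpu_count_ok() check, reconstructed from the error
# message above; not the actual rac.pm code.
cpu_count=8          # OCPUs enabled on the VM cluster
running_dbs_count=20 # running container databases, including the new one

# The pre-req demands at least one vCPU per two databases (rounded up)
min_cpus=$(( (running_dbs_count + 1) / 2 ))

if [ "$cpu_count" -lt "$min_cpus" ]; then
  echo "ERROR: cpu_count $cpu_count is not enough for running_dbs_count $running_dbs_count"
else
  echo "OK: CPU count check passed"
fi
```

With 20 running databases the check requires at least 10 OCPUs, so the 8 OCPUs from the log fail it, matching the error we saw.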

We can apply two workarounds for the issue. The first is to scale up the number of OCPUs on the ExaCC. You don’t have to keep them all the time: you can burst up only for the database creation and then scale back down to the original number.

The second workaround is to shut down one of the existing databases for the duration of the creation of a new one. I don’t think it is the best course of action, but it might be acceptable if you know that some databases can be stopped at certain times.

The summary is short. We have a hardcoded limit of 2*OCPU for the number of databases, which can be worked around in a couple of ways. And the level of logging in the OCI interface is not adequate: you need to dig into the logs yourself or create an Oracle Support SR to get the real cause of the error.

Is Oracle Cloud only for Oracle?

Several days ago, while discussing public cloud solutions and competition between different providers, someone mentioned that Oracle Cloud is just for Oracle products, while AWS and Azure are more vendor-agnostic. I was a bit surprised by that statement, but it appeared that several other people shared the same view. I decided to write this blog post and show what options Oracle Cloud Infrastructure (OCI) has for different workloads.

Let’s start with the VM types and flavors. Of course, by default you are offered Oracle Linux, but if you push the “Change Image Source” button, you will see several other options for the platform, including Oracle Linux, Ubuntu, CentOS, and various Windows Server versions.

Those are the primary platform images for VMs, but in addition, you have Oracle-built images with different sets of software included, free and with Bring Your Own License (BYOL) policies.

Then we have partner images on the next tab, where we can find other Linux distributions like SUSE and prebuilt software images like Jenkins from Bitnami.

If you need an image with custom software and settings, you can create a custom image from your Linux or Windows-based VM, which may be an on-premises or cloud image. In my opinion, this should cover most of the requirements for the necessary VM infrastructure services. I am not discussing other aspects like network and storage here, since their functionality is not too different from what other cloud vendors offer.

If you haven’t found what you need, or you want a certified software deployment, you can go to the Oracle Cloud Marketplace and choose from multiple available packages, including ones from Oracle and non-Oracle vendors.

We have several filters for publisher, category, type, and price, but no free-text search field. I hope a search option will be added there.

Here is a subset of publishers.

All the images from the Marketplace are certified by Oracle and prepared for deployment using Oracle Resource Manager (RM). The RM itself uses HashiCorp Terraform behind the scenes. Terraform is one of the most popular deployment tools in the community, and, in my opinion, it is better than a proprietary solution: you can adopt a unified approach for a multi-cloud environment without locking yourself into a single vendor’s platform.
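Because RM drives standard Terraform under the hood, the same infrastructure code can also be run with the plain Terraform workflow from any machine. Here is a generic sketch of the commands RM automates for you; it assumes a directory with the stack’s .tf files and configured cloud credentials, which are not shown here.

```shell
# The standard Terraform workflow that Resource Manager automates
# behind the scenes. Run these from a directory containing the
# stack's .tf configuration files.
terraform init     # download the providers and initialize the directory
terraform plan     # preview the changes against the current state
terraform apply    # create or update the resources
terraform destroy  # tear the whole stack down when it is no longer needed
```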

If you work with Docker and Kubernetes and want to build and deploy your custom microservices architecture, the Oracle Cloud developer services are here to help: the Oracle registry stores your Docker images, and Oracle Kubernetes Engine (OKE) provides the Kubernetes cluster for deploying your applications.

So far, we have been talking about native OCI tools and resources, but it doesn’t end there. With the Oracle and Microsoft partnership in the cloud, we can expand our footprint and combine both clouds. I tested it in July 2019 and wrote a blog post about it. It was quite easy to set up, and it showed acceptable performance. At that time, it was available only in the US Virginia region, but now it is available in Canada and the UK, and hopefully other areas soon. It opens new possibilities to incorporate into your company strategy and place products in the most suitable cloud environment. For example, if you want to build an MS SQL database solution, you have two choices: use Azure with an interconnect link to OCI, or deploy a Windows server in OCI and put your database there.

So, is Oracle Cloud only for Oracle products? Of course not. Oracle’s public cloud infrastructure offerings are pretty much comparable to other public cloud providers’ and offer a flexible environment to deploy your applications and, if you want, your preferred database solution as well.

Copy files to Oracle OCI cloud object storage from the command line.

This blog post is a bit longer than usual, but I wanted to cover at least three options to upload files to Oracle OCI object storage. If you only need to upload one file, you can stop reading after the first option, since it covers most needs for uploading a single file. But if you want a bit more, it makes sense to check the other options too.

The OCI Object storage has a web interface with an “Upload object” button, but sometimes you need to upload files directly from a host where you have only a command-line shell. In general, we have at least three ways to do that.
The first and simplest way is to create a temporary “Pre-Authenticated Request”, which will expire after a specified time. The procedure is easy and intuitive.
You need to go to your bucket details and click on the right side to open the tab with “Pre-Authenticated Requests”.


Push the “Create Pre-Authenticated Request” button and choose the name and expiration time for the link.


The link will appear in a pop-up window only once, and you have to copy and save it if you want to use it later. If you have forgotten to do that, it is not a problem – you can create another one.

I created a link and used it to upload a test file to the “TestUpload” bucket without any problem.

[opc@sandbox tmp]$dd if=/dev/zero of=random_file.out bs=1024k count=5
5+0 records in
5+0 records out
5242880 bytes transferred in 0.001785 secs (2937122019 bytes/sec)
[opc@sandbox tmp]$ll
total 10240
-rw-r--r-- 1 otochkin staff 5.0M 9 Mar 09:55 random_file.out
[opc@sandbox tmp]$curl -T random_file.out https://objectstorage.ca-toronto-1.oraclecloud.com/p/PCmrR1tN3D_5SkJimndiatnClEwNQbnMpaVHfYYwio4/n/gleb/b/TestUpload/o/
[opc@sandbox tmp]$

It is the easiest way, but what if you want to set up a more permanent process without the disappearing links? Maybe the upload is going to be part of a data flow, or you want to schedule a regular activity. The answers are the Oracle OCI CLI and the REST API interface using API keys. Let’s check first how we can do it without installing the Oracle OCI CLI.

The first thing you need is an “API key”. Behind the scenes, it is the public part of a secret key pair you create on the box where you plan to run your scripts, or in your application.

[opc@sandbox ~]$ mkdir ~/.oci
[opc@sandbox ~]$ openssl genrsa -out ~/.oci/oci_api_key.pem 2048
[opc@sandbox ~]$ chmod go-rwx ~/.oci/oci_api_key.pem
[opc@sandbox ~]$ openssl rsa -pubout -in ~/.oci/oci_api_key.pem -out ~/.oci/oci_api_key_public.pem

[opc@sandbox ~]$ cat ~/.oci/oci_api_key_public.pem
-----BEGIN PUBLIC KEY-----
MIOFIjANBg.....

...

cQIDYQAB
-----END PUBLIC KEY-----
[opc@gleb-bastion-us ~]$

You need to copy and paste the output of the last command (from “-----BEGIN PUBLIC KEY-----” to “-----END PUBLIC KEY-----”) into the form that appears when you push the “Add Public Key” button on your user details page.


With the API key added to your profile in OCI, we can now use the oci-curl function provided by Oracle in our command line. But before doing that, we need to gather some values to pass to the function. The tenancy ID can be found in the tenancy details, available from the drop-down menu in the top right corner of your OCI web page. The same menu leads to your user details, where we find the user ID. The key fingerprint for the recently created key can be found on the same page.
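For the curious, here is a rough sketch of what oci-curl does with those values behind the scenes: it builds a signing string from the request headers, signs it with your private API key, and sends the base64-encoded signature in the Authorization header. This is only an illustration of the signing scheme; it uses a throwaway key and placeholder IDs, and is not a replacement for the real function.

```shell
#!/bin/sh
# Illustration of the OCI request-signing scheme used by oci-curl.
# The key is a throwaway one and the IDs in the header are placeholders.
keyfile=$(mktemp)
openssl genrsa -out "$keyfile" 2048 2>/dev/null

host="objectstorage.ca-toronto-1.oraclecloud.com"
target="/n/mytenancyname/b/TestUpload/o/"
date_hdr=$(date -u '+%a, %d %b %Y %H:%M:%S GMT')

# The signing string covers the date, the request target, and the host
signing_string="date: ${date_hdr}
(request-target): get ${target}
host: ${host}"

# Sign it with the private API key and base64-encode the result
signature=$(printf '%s' "$signing_string" | \
  openssl dgst -sha256 -sign "$keyfile" | openssl base64 -A)

echo "Authorization: Signature version=\"1\",headers=\"date (request-target) host\",keyId=\"<tenancyId>/<authUserId>/<keyFingerprint>\",algorithm=\"rsa-sha256\",signature=\"${signature}\""
rm -f "$keyfile"
```

This is exactly why the function needs the tenancy ID, user ID, and key fingerprint: together they form the keyId that tells OCI which public key to verify the signature against.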


Now you can update this section of the script, replacing the OCIDs with your own values:

# TODO: update these values to your own
local tenancyId="ocid1.tenancy.oc1..aaaaaaaaba3pv6wkcr4jqae5f15p2b2m2yt2j6rx32uzr4h25vqstifsfdsq";
local authUserId="ocid1.user.oc1..aaaaaaaat5nvwcna5j6aqzjcaty5eqbb6qt2jvpkanghtgdaqedqw3rynjq";
local keyFingerprint="20:3b:97:13:55:1c:5b:0d:d3:37:d8:50:4e:c5:3a:34";
local privateKeyPath="/Users/someuser/.oci/oci_api_key.pem";
Instead of hard-coding the OCIDs in the script, you may choose to use environment variables, providing them either with an “export” command in the shell or by putting them into an environment file. Here is an example of how you can do that.
Create the file:
[opc@sandbox ~]$ vi .oci_env

privateKeyPath=~/.oci/oci_api_key.pem
keyFingerprint="c9:3b:97:13:55:1c:5b:0d:d3:37:d8:50:4e:c5:3a:fe"
authUserId=ocid1.user.oc1..aaaaaaaaaba3pv6wkcr4jqae5f15p2b2m2yt2j6rx32uzr4h25vqstifsfdsq
tenancyId=ocid1.tenancy.oc1..aaaaaaaat5nvwcna5j6aqzjcaty5eqbb6qt2jvpkanghtgdaqedqw3rynjq
compartmentId=ocid1.compartment.oc1..aaaaaaaa4laqzjcaty5eqbb6qt7cdfx2jl4d7bvuitvlmz4b5c2hiz6dbssza
endpoint=objectstorage.ca-toronto-1.oraclecloud.com
namespace=mytenancyname
bucketName=TestUpload
export privateKeyPath keyFingerprint authUserId tenancyId compartmentId endpoint namespace bucketName
You can see that, in addition to the OCIDs used in the script, I’ve added the endpoint, namespace, bucket name, and the OCID of my compartment. We need those values to upload our files. We can source the file to export all those variables.
[opc@sandbox ~]$ source .oci_env
[opc@sandbox ~]$
Download signing_sample_bash.txt, remove the lines with values for the OCIDs and paths, and remove the UTF-8 byte order mark from the file, replacing it with a simple “#” symbol.
 
[opc@sandbox ~]$ curl -O https://docs.cloud.oracle.com/iaas/Content/Resources/Assets/signing_sample_bash.txt
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  4764  100  4764    0     0   8707      0 --:--:-- --:--:-- --:--:--  8709
[opc@sandbox ~]$ sed -i "/\(local tenancyId=\|local authUserId=\|local keyFingerprint=\|local privateKeyPath=\)/d" signing_sample_bash.txt
[opc@sandbox ~]$ file signing_sample_bash.txt
signing_sample_bash.txt: UTF-8 Unicode (with BOM) text
[opc@sandbox ~]$ sed -i "1s/^.*#/#/" signing_sample_bash.txt
[opc@sandbox ~]$ file signing_sample_bash.txt
signing_sample_bash.txt: ASCII text
[opc@sandbox ~]$
Source the script to define the oci-curl function.
[opc@sandbox ~]$ source signing_sample_bash.txt
[opc@sandbox ~]$
Now we can use the “oci-curl” function in our command line and upload files to an OCI bucket without installing any software on the machine.
Create a file.
[opc@sandbox ~]$ dd if=/dev/urandom of=new_random_file.out bs=1024k count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.188255 s, 55.7 MB/s
[opc@sandbox ~]$
Upload from the command line
[opc@sandbox ~]$ oci-curl $endpoint put ./new_random_file.out /n/$namespace/b/$bucketName/o/new_random_file.out
[opc@gleb-bastion-us ~]$
We can list the files:
[opc@sandbox ~]$ oci-curl $endpoint get /n/$namespace/b/$bucketName/o/
{"objects":[{"name":"another_random_file.out"},{"name":"new_random_file.out"}]}
[opc@sandbox ~]$
And we can see the file in the OCI console.
You can find more examples of how to use the oci-curl function in the Oracle blog.
The last way is to install the Oracle OCI CLI as described in the documentation. It will take only a few minutes: you need to run just one command and answer a few questions.
[opc@sandbox ~]$ bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  6283  100  6283    0     0  23755      0 --:--:-- --:--:-- --:--:-- 23889
Downloading Oracle Cloud Infrastructure CLI install script from https://raw.githubusercontent.com/oracle/oci-cli/6dc61e3b5fd2781c5afff2decb532c24969fa6bf/scripts/install/install.py to /tmp/oci_cli_install_tmp_mwll.
######################################################################## 100.0%
Python3 not found on system PATH
Running install script.
...
output was reduced.
Then you need to configure the CLI using the “oci setup config” command.
[opc@sandbox ~]$ oci setup config

This command provides a walkthrough of creating a valid CLI config file

It will ask for the tenancy and user OCIDs and suggest creating new keys, but you can answer “n” if you already have a key.
...
Enter a region (e.g. ca-toronto-1, eu-frankfurt-1, uk-london-1, us-ashburn-1, us-gov-ashburn-1, us-gov-chicago-1, us-gov-phoenix-1, us-langley-1, us-luke-1, us-phoenix-1): ca-toronto-1
Do you want to generate a new RSA key pair? (If you decline you will be asked to supply the path to an existing key.) [Y/n]: n
Enter the location of your private key file: /home/opc/.oci/oci_api_key.pem
Fingerprint: 20:3b:97:13:55:1c:5b:0d:d3:37:d8:50:4e:c5:3a:34
Config written to /home/opc/.oci/config
If you haven't already uploaded your public key through the console,
follow the instructions on the page linked below in the section 'How to
upload the public key':

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How2
[opc@sandbox ~]$

And we can use the OCI command-line interface to upload or list files, or to perform other actions.

[opc@sandbox~]$ oci os object put -bn TestUpload --file one_more_random_file.out
Uploading object [####################################] 100%
{
  "etag": "31a3ae0c-5749-4390-8bae-d937a1709d9a",
  "last-modified": "Wed, 03 Apr 2019 16:21:48 GMT",
  "opc-content-md5": "s18Q1y1YYX113hBOqA19Mw=="
}
[opc@sandbox ~]$ oci os object list -bn TestUpload
{
  "data": [
    {
      "md5": "y3wX2q+fN+lBHppGMJqfhw==",
      "name": "another_random_file.out",
      "size": 5242880,
      "time-created": "2019-03-10T16:16:33.707000+00:00"
    },
    {
      "md5": "/XHj/5+IkyoDbLteg6E/7w==",
      "name": "new_random_file.out",
      "size": 10485760,
      "time-created": "2019-04-03T15:05:47.270000+00:00"
    },
    {
      "md5": "s18Q1y1YYX113hBOqA19Mw==",
      "name": "one_more_random_file.out",
      "size": 10485760,
      "time-created": "2019-04-03T16:21:47.734000+00:00"
    }
  ],
  "prefixes": []
}
[opc@sandbox ~]$

As a short summary, I want to say that the oci-cli command-line interface can be useful and provides an easy way to perform regular operations, while the REST API can be extremely useful when you want to incorporate uploads into your code and applications, or when you cannot install any tools on your box due to restrictions.