The blog was supposed to be a small how-to, but it has grown into something bigger and will hopefully help you avoid some minor problems when exporting data from an Autonomous Database (ADB) in Oracle Cloud Infrastructure (OCI). It is about exporting data in Oracle Data Pump format, either to move it to another database or to keep it as a logical "backup".
The Oracle documentation provides sufficient information, but I find it more and more difficult to navigate given the number of options and flavours of Oracle databases. There are some new ways and tools around Oracle OCI ATP which can help in some cases. If you want, you can jump directly to the end and read the summary.
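To make the starting point concrete, here is a minimal sketch of the classic approach: run the expdp client against the ADB service, then copy the dump files out of the database's DATA_PUMP_DIR to an Object Storage bucket with DBMS_CLOUD.PUT_OBJECT. The TNS alias, schema, credential, namespace and bucket names are illustrative.

```
# A minimal sketch (names are illustrative): export one schema from ADB
# with the Data Pump client, connecting through a wallet-based TNS alias.
expdp admin@myadb_high \
  directory=DATA_PUMP_DIR \
  dumpfile=myschema_%U.dmp \
  logfile=myschema_exp.log \
  schemas=MYSCHEMA

-- The dump files stay inside the database's DATA_PUMP_DIR. One way to
-- get them out is DBMS_CLOUD.PUT_OBJECT into an Object Storage bucket;
-- run this in SQL*Plus as ADMIN (MY_CRED is a credential created
-- beforehand with DBMS_CLOUD.CREATE_CREDENTIAL):
BEGIN
  DBMS_CLOUD.PUT_OBJECT(
    credential_name => 'MY_CRED',
    object_uri      => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/mynamespace/b/mybucket/o/myschema_01.dmp',
    directory_name  => 'DATA_PUMP_DIR',
    file_name       => 'myschema_01.dmp');
END;
/
```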
This blog was primarily driven by questions from my peers and colleagues who wondered where my blog was hosted and how it was created. It might help you move from a hosting platform to your own website and figure out where to start.
Like most bloggers, I started my blog on one of the hosting platforms, but soon found some limitations in choosing appearance and plugins, and was a bit annoyed by the commercial banners on my pages. After a while I decided to move to my own site: I bought a domain name and created my own environment using the WordPress software on a cloud VM. It didn't cost me too much, but it was not entirely free. When Oracle introduced some additions to the Always Free set of resources, I decided to give it a try and move my blog entirely to the OCI free tier.
For those who would like to skip the reading and just try it, I have a set of Terraform scripts on GitHub. They haven't been updated lately and don't use the latest versions, but they can be a good place to start.
Terraform is probably already the de facto standard for cloud deployments. I use it on a daily basis, deploying and destroying my test and demo setups in my Oracle Cloud tenancy. Sometimes the deployment environment for a demo has too many files, or some files are really big and hard to read due to the number of different resources and parameters included in them. How can we make such a configuration more usable? Let's try Terraform modules and demonstrate how they work.

For our tests we are going to use Terraform v1.0.3 and Oracle Cloud Infrastructure (OCI). You will need a working OCI tenancy and Terraform on your machine, with the required environment variables defined. The full list of required environment variables will be provided in the README file in the GitHub repository.

Let's say we have a simple demo or test configuration with a dedicated network, an internet gateway and a VM, and we want to assign multiple security rules using security lists and maybe one or two security groups. We could include all those rules in the configuration file for the network, but maybe there is a better way. What if we want to reuse a similar set of security rules and security groups not only in that deployment but also share it with other stacks? We can try Terraform modules, as sketched below.
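Here is a minimal sketch of the idea, assuming the OCI Terraform provider; the module path, variable names and the single SSH rule are illustrative. The shared rules live in a local module, and any stack can pull them in with a module block.

```
# modules/security/main.tf -- a hypothetical reusable module with shared rules
variable "compartment_id" {}
variable "vcn_id" {}

resource "oci_core_security_list" "demo" {
  compartment_id = var.compartment_id
  vcn_id         = var.vcn_id
  display_name   = "demo-security-list"

  # Example rule: allow inbound SSH
  ingress_security_rules {
    protocol = "6" # TCP
    source   = "0.0.0.0/0"
    tcp_options {
      min = 22
      max = 22
    }
  }
}

output "security_list_id" {
  value = oci_core_security_list.demo.id
}

# main.tf of any stack that wants the same rules
module "security" {
  source         = "./modules/security"
  compartment_id = var.compartment_ocid
  vcn_id         = oci_core_vcn.demo.id
}
```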
Some time ago I wrote a short blog about the dependency between the number of enabled CPUs and how many databases you can build. Today we got another error while trying to create a new database. Here is a screenshot of the error.
If you can't read it on a small screen, it says "Create Database operation failed due to an unknown error. Refer to work request ID 2580d3ff-064e-4e6f-ab06-1327fd02f40e when opening a Service Request at My Oracle Support." The error code provided is, unhelpfully, just "Error".
Most Oracle DBAs are sufficiently educated about the benefits of using large memory pages for the Oracle database SGA to reduce overhead and improve performance. If you want to read more about it, you can start from that Oracle blog or from multiple other articles and blogs. Oracle uses the parameter use_large_pages to direct the behaviour of an Oracle instance during startup.
In versions before 19c we had three possible values – "TRUE", "FALSE" and "ONLY". Since Oracle 11.2.0.2, "TRUE" has meant that the instance allocates as many hugepages as are freely available in the system and gets the rest from normal small pages. "FALSE" tells it not to use hugepages at all, and with "ONLY" the instance can start only if a sufficient number of free hugepages is available in the system to fit the entire SGA. "TRUE" was the default for all databases.
In 19c we got one more value – "AUTO_ONLY" – and it is now the default for Exadata systems running Oracle Database 19c. The description in the documentation is not totally clear and sounds very similar to the description of the "ONLY" value. Here is an excerpt from the documentation:
“It specifies that, during startup, the instance will calculate and request the number of large pages it requires. If the operating system can fulfill this request, then the instance will start successfully. If the operating system cannot fulfill this request, then the instance will fail to start.”
Let me show you how it works. Here is my sandbox with a 19c database, and no hugepages are configured on the box by default.
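As a quick sketch of the starting point (aliases and exact output are illustrative), you can confirm that no hugepages are configured and then switch to the new value; use_large_pages is a static parameter, so the change needs scope=spfile and a restart:

```
# No hugepages configured on the box yet
$ grep HugePages_Total /proc/meminfo
HugePages_Total:       0

-- Check the current setting and switch to the new 19c value;
-- the parameter is static, so a restart is required.
SQL> show parameter use_large_pages
SQL> alter system set use_large_pages='AUTO_ONLY' scope=spfile sid='*';
SQL> shutdown immediate
SQL> startup
```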
Since the first days of working in the Google public cloud there have been debates about the possibility of moving Oracle workloads to GCP. The major concerns came not from technical challenges but rather from Oracle's licensing policies and guidelines. Oracle's famous document about licensing Oracle software in the public cloud states: "This policy applies to cloud computing environments from the following vendors: Amazon Web Services – Amazon Elastic Compute Cloud (EC2), Amazon Relational Database Service (RDS) and Microsoft Azure Platform (collectively, the 'Authorized Cloud Environments')". So Google Cloud was not listed as an 'Authorized Cloud Environment', and it was unclear how to apply Oracle licensing there. I believe it will be sorted out in time, but in the meanwhile, as a solution, Google presented the Bare Metal Solution as the platform for Oracle workloads.
Last week, while checking my Twitter feed, I found a tweet from Confluent announcing a new Kafka connector for Oracle database as a source. We had an Oracle connector before, but it worked by scanning the source tables and, as a result, added load to the source database. This one is different: it is a connector which can get the changes from the Oracle redo logs. I started testing it using my Kafka dev environment in Google Cloud and one of my sandbox databases in Oracle Cloud. Here I would like to share how to start testing it, along with my very first experience with the tool.
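To give a flavour of the first step, here is a minimal sketch of registering the connector through the Kafka Connect REST API. The property names below are from memory of the connector's documentation and may differ by version, so verify them against the Confluent docs for your release; the host, credentials and table regex are placeholders.

```
# Register the Oracle CDC source connector with the Connect REST API
# (property names are illustrative; check the connector docs for your version).
curl -s -X POST -H "Content-Type: application/json" \
  http://localhost:8083/connectors \
  -d '{
    "name": "oracle-cdc-source",
    "config": {
      "connector.class": "io.confluent.connect.oracle.cdc.OracleCdcSourceConnector",
      "oracle.server": "myhost.example.com",
      "oracle.port": "1521",
      "oracle.sid": "ORCL",
      "oracle.username": "C##CDC_USER",
      "oracle.password": "**********",
      "table.inclusion.regex": "ORCL\\.MYSCHEMA\\..*",
      "redo.log.consumer.bootstrap.servers": "localhost:9092",
      "tasks.max": "1"
    }
  }'
```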
I've been using the OCI and AWS clouds for a number of years, but primarily it was either one or the other. Only in a few cases was it required to connect them to each other, mainly to get data from an AWS S3 bucket. But with the new OCI services, the idea of using both clouds is getting more attractive, and multi-cloud environments are becoming more common. One of the main challenges in such a layout is the network. We have several options, using dedicated connections or third-party tools deployed on both sides, and all of them have their pros and cons. Today I would like to talk about the simplest case, where we use only native services on both sides and establish an IPSec VPN connection between the two clouds, as sketched below.
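As a rough sketch of the native building blocks (OCIDs, IPs and the ordering of the tunnel-endpoint exchange are omitted or illustrative), the OCI side needs a customer-premises equipment (CPE) object pointing at the AWS endpoint plus an IPSec connection on a DRG, while the AWS side builds the mirror image with the EC2 VPN resources:

```
# OCI side: register the AWS endpoint as a CPE and create the IPSec connection
oci network cpe create \
  --compartment-id ocid1.compartment.oc1..aaaa... \
  --ip-address 203.0.113.10           # public IP of the AWS side

oci network ip-sec-connection create \
  --compartment-id ocid1.compartment.oc1..aaaa... \
  --cpe-id ocid1.cpe.oc1..bbbb... \
  --drg-id ocid1.drg.oc1..cccc... \
  --static-routes '["10.0.0.0/16"]'   # AWS VPC CIDR

# AWS side: the mirror image with a customer gateway and a VPN connection
aws ec2 create-customer-gateway --type ipsec.1 \
  --public-ip 198.51.100.20 --bgp-asn 65000
aws ec2 create-vpn-gateway --type ipsec.1
aws ec2 create-vpn-connection --type ipsec.1 \
  --customer-gateway-id cgw-0123456789abcdef0 \
  --vpn-gateway-id vgw-0123456789abcdef0 \
  --options '{"StaticRoutesOnly": true}'
```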
You've probably already seen in the news that Oracle 21c is available, and you may have seen some tweets and blogs about the new release. But did you know that it is available not only as DBCS with "normal" cloud databases, but also as the Autonomous version?
For those who are puzzled by the title, here is a short explanation. I didn't pay too much attention to what I had in my fridge, and one day I found only a couple of cucumbers, chocolate and some coffee. That was not too bad, but I couldn't call it a proper diet. It was at the same time that I was exploring the possibility of having a non-CDB 12.1 Oracle database on Exadata Cloud at Customer (ExaCC). One might think the blog compares the unusual diet with a non-CDB deployment in a cloud environment, telling you that you should not really use non-CDB, just as you probably shouldn't eat only cucumbers, chocolate and coffee. But that is not the case: the blog is about how to create such a non-CDB on an ExaCC, as sketched below.
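As a hedged preview of where this is going (generic tooling shown here, not necessarily the exact ExaCC procedure the blog walks through), a 12.1 Oracle home can create a non-CDB with dbca in silent mode by turning off the container option; the names and passwords are illustrative:

```
# A minimal sketch from a 12.1 Oracle home; the ExaCC-specific
# procedure may differ from this generic dbca invocation.
dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName NONCDB01 -sid NONCDB01 \
  -createAsContainerDatabase false \
  -sysPassword '***' -systemPassword '***' \
  -storageType ASM -diskGroupName DATA
```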