New kid on the block – Rocky Linux.

If you’ve been following recent changes in the Linux world, you probably remember how Red Hat and the CentOS Project announced in December 2020 that the project was shifting focus to CentOS Stream and that support for CentOS Linux 8 would end on December 31, 2021. The announcement created a wave of discussion in the community about the future of CentOS as an enterprise platform, and some people started looking at alternative Linux distributions. As a result we got a new, community-driven downstream build, just as CentOS used to be: Rocky Linux.

A downstream build is based on the same code base as the vendor distribution and retains most features of its “parent” vendor Linux, following each release after the vendor has built it. In most of my tests I use Oracle Linux when I am in the Oracle cloud, but I use CentOS in Google Cloud and in other public clouds such as Azure or AWS. Now Rocky Linux is available on those platforms, so I had a quick look and did some testing using Rocky Linux 8.4 (Green Obsidian).
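If you want to confirm exactly which build you are running after spinning up an instance, the standard os-release file shows it. A trivial check, included here for completeness:

```
# confirm the distribution and release on a freshly created instance
cat /etc/os-release
# on Rocky Linux 8.4 it reports NAME="Rocky Linux" and VERSION="8.4 (Green Obsidian)"
```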

Continue reading “New kid on the block – Rocky Linux.”

Exadata Cloud at Customer – free space in ASM and adding a new database.

Some time ago I wrote a short blog post about the dependency between the number of enabled CPUs and how many databases you can build. Today we got another error while trying to create a new database. Here is a screenshot of the error.

If you can’t read it on a small screen, it says “Create Database operation failed due to an unknown error. Refer to work request ID 2580d3ff-064e-4e6f-ab06-1327fd02f40e when opening a Service Request at My Oracle Support.” and provides an error code, which is “Error
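As the title hints, the place to look in such cases is the free space in ASM. A quick way to see total, free, and usable space per disk group is asmcmd on one of the cluster nodes (a minimal sketch; your disk group names and numbers will of course differ):

```
# run as the grid infrastructure owner on a cluster node;
# lsdg lists each ASM disk group with its total, free and usable MB
asmcmd lsdg
```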

Continue reading “Exadata Cloud at Customer – free space in ASM and adding a new database.”

Upgrading Terraform command line to the latest version.

Some time ago I updated my Terraform command line tool to version 0.15.3 and was surprised by how easily it went. Originally I planned to write a blog post about it, but there was not much to write about; the upgrades to versions 0.11 and 0.13 were much more painful. Last week HashiCorp announced the general availability of Terraform 1.0, which meant the time for a new upgrade had come. I upgraded it on one of my machines and decided to write a short post about both upgrades to encourage people to try the upgrade themselves.
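For reference, a manual CLI upgrade boils down to replacing one binary. Here is a minimal sketch, assuming a Linux amd64 machine with the binary living in /usr/local/bin (adjust the version and platform for your setup; the download path follows HashiCorp’s standard releases layout):

```
# check the current version before upgrading
terraform version

# fetch and unpack the new release, then swap the binary in place
curl -LO https://releases.hashicorp.com/terraform/1.0.0/terraform_1.0.0_linux_amd64.zip
unzip terraform_1.0.0_linux_amd64.zip
sudo mv terraform /usr/local/bin/terraform

# confirm the new version
terraform version
```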

Continue reading “Upgrading Terraform command line to the latest version.”

Linux Hugepages and AUTO_ONLY in Oracle 19c.

Most Oracle DBAs are well aware of the benefits of using large memory pages for the Oracle database SGA to reduce overhead and improve performance. If you want to read more about it, you can start with that Oracle blog or with one of the many other articles and posts on the subject. Oracle uses the parameter use_large_pages to direct the behaviour of an Oracle instance during startup.

In the versions before 19c we had three possible values – “TRUE”, “FALSE” and “ONLY”. Since Oracle 11.2.0.3, “TRUE” has meant that the instance will allocate as many hugepages as are free in the system and get the rest from normal small pages. “FALSE” tells it not to use hugepages at all, and with “ONLY” the instance will start only if a sufficient number of free hugepages is available in the system to fit the entire SGA. “TRUE” was the default for all databases.

In 19c we got one more value – “AUTO_ONLY” – and it is now the default for Exadata systems running Oracle Database 19c. The description in the documentation is not entirely clear and sounds very similar to the description of the “ONLY” value. Here is an excerpt from the documentation:

“It specifies that, during startup, the instance will calculate and request the number of large pages it requires. If the operating system can fulfill this request, then the instance will start successfully. If the operating system cannot fulfill this request, then the instance will fail to start.”

Let me show you how it works. Here is my sandbox with a 19c database, and no hugepages are configured on the box by default.
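To reproduce a similar starting point, this is roughly how you verify and change the hugepages configuration on the OS side (a sketch; the page count below is a placeholder and should be sized to cover your SGA):

```
# check the current hugepages configuration on the box;
# HugePages_Total: 0 means no hugepages are configured at all
grep -i hugepages /proc/meminfo

# to configure some, set the kernel parameter; size the count to the SGA,
# e.g. SGA bytes divided by the 2 MB default hugepage size
sudo sysctl -w vm.nr_hugepages=1024
```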

Continue reading “Linux Hugepages and AUTO_ONLY in Oracle 19c.”

Google Bare Metal in numbers.

In the previous posts I shared my first impressions of the Google Bare Metal Solution (BMS) and showed how to start using it. In this post I will show some numbers describing the performance of the solution, so you can compare it with your existing environment.

Let me start with the box characteristics. For my tests I was using an “o2-standard-32-metal” box located in the us-west2 region (Los Angeles). The solution was configured with a 2 Gbps interconnect and had a couple of storage resources attached. The first was two 512 GB disks based on HDD storage, where I placed my binaries and a recovery ASM disk group; the second was a 2 TB “all flash” volume that I used for data. Here is a summary table:

| Characteristic | Value |
|---|---|
| BMS box type | o2-standard-32-metal |
| CPU | Intel(R) Xeon(R) Gold 6234 CPU @ 3.30GHz |
| CPU sockets | 2 |
| CPU cores | 16 |
| Memory | 384 GB |
| Disk 1 | 512 GB – Standard disk |
| Disk 2 | 512 GB – Standard disk |
| Disk 3 | 2048 GB – All flash |
| Network | 4 NICs, Speed: 25000 Mb/s |
| OS | Oracle Linux 7.9 |

BMS box characteristics.

Before starting the tests I updated my Oracle Linux and installed a number of packages required for my Oracle database, along with the packages used to test IO and network, such as fio and iperf3. Here is a summary table with the software and tools used to test performance; a sample fio invocation follows the table.

| Package | Testing scope |
|---|---|
| fio | IO performance |
| stress-ng | CPU, memory |
| swingbench | Oracle database performance |
| SLOB | Oracle database IO |
| iperf3 | Network |
| oratcptest | Network |
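To give an idea of how the IO numbers were gathered, here is a minimal fio run of the kind used for such tests (a sketch; the target path, block size and queue depth are my assumptions, not the exact parameters from my runs):

```
# random-read test against the all-flash volume (the file path is a placeholder)
fio --name=randread --filename=/u02/fio.test --size=10G \
    --rw=randread --bs=8k --ioengine=libaio --direct=1 \
    --iodepth=32 --runtime=120 --time_based --group_reporting
```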
Continue reading “Google Bare Metal in numbers.”

Google Bare Metal – how to start.

In the previous post I put down some of my thoughts on why you would use the Google Bare Metal Solution (BMS) and my first impressions of it. In this post I want to talk about the first steps and how you can start working with the service.

To get your hands on BMS you need to contact your Google Cloud sales representative and order it. That means you need to know your requirements to some extent and prepare in advance. The major preparation steps are described in the Google documentation, and here I will go through some of them.

The first main step is to outline your architecture and identify the region for the BMS. The service is a region extension, which means it is connected to your regional Google Cloud infrastructure by a high-speed, low-latency network interconnect. It makes sense to place it where most of your applications and users are going to be. For example, in my case I chose us-west2 (Los Angeles), which was aligned with my main test app servers and provided the best response time. A 64-byte ping from an app server in the same region averaged 0.991 ms.
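Measuring that latency is straightforward once you have a VM in the candidate region (the host address below is a placeholder for your BMS server IP):

```
# 10 pings from a compute VM in the same region to the BMS box;
# the summary line reports min/avg/max round-trip times
ping -c 10 <bms-server-ip>
```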

Continue reading “Google Bare Metal – how to start.”

Google Bare Metal for Oracle.

Since the first days of working in the Google public cloud there have been debates about the possibility of moving an Oracle workload to GCP. The major concerns came not from technical challenges but rather from Oracle’s licensing policies and guidelines. The famous Oracle document about licensing Oracle software in the public cloud states: “This policy applies to cloud computing environments from the following vendors: Amazon Web Services – Amazon Elastic Compute Cloud (EC2), Amazon Relational Database Service (RDS) and Microsoft Azure Platform (collectively, the ‘Authorized Cloud Environments’)”. So Google Cloud was not listed as an ‘Authorized Cloud Environment’, and it was unclear how to apply Oracle licensing there. I believe it will be sorted out in time, but in the meantime Google presented the Bare Metal Solution as a platform for Oracle workloads.

Continue reading “Google Bare Metal for Oracle.”

From Oracle to Google BigQuery by Kafka

Last week, while checking my Twitter feed, I found a tweet from Confluent announcing a new Kafka connector with Oracle database as a source. We had an Oracle connector before, but it worked by scanning the source tables and, as a result, added load to the source database. This one is different: we got a connector that can read changes from the Oracle redo logs. I started to test it using my Kafka dev environment in Google Cloud and one of my sandbox databases in the Oracle cloud. Here I would like to share how to start testing it and my very first experience with the tool.
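For orientation, registering a source connector with Kafka Connect goes through its REST API on port 8083. The sketch below is based on my reading of the Confluent Oracle CDC connector; the host names, credentials, and table regex are placeholders, so check the connector’s documentation for the exact property names before using it:

```
# register the connector through the Kafka Connect REST API
curl -s -X POST http://localhost:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
    "name": "oracle-cdc-source",
    "config": {
      "connector.class": "io.confluent.connect.oracle.cdc.OracleCdcSourceConnector",
      "oracle.server": "<db-host>",
      "oracle.port": "1521",
      "oracle.sid": "ORCL",
      "oracle.username": "<user>",
      "oracle.password": "<password>",
      "table.inclusion.regex": "ORCL\\.SCOTT\\.EMP"
    }
  }'
```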

Continue reading “From Oracle to Google BigQuery by Kafka”

IPSec VPN between OCI and AWS.

I’ve been using the OCI and AWS clouds for a number of years, but primarily it was one or the other. Only in a few cases was it required to connect them to each other, mainly to get data from an AWS S3 bucket. But with the new OCI services, the idea of using both clouds is getting more attractive, and multi-cloud environments are becoming more common. One of the main challenges in such a layout is the network. We have several options, from dedicated connections to third-party tools deployed on both sides, and all of them have their pros and cons. Today I would like to talk about the simplest case, where we use only native services on both sides and establish an IPSec VPN connection between the two clouds.

Continue reading “IPSec VPN between OCI and AWS.”

The new 21 is already here for Oracle Autonomous.

You’ve probably already seen in the news that Oracle 21c is available, and noticed some tweets and blog posts about the new release. But did you know that it is available not only for DBCS with “normal” cloud databases but also as an Autonomous version?

Continue reading “The new 21 is already here for Oracle Autonomous.”