Google Bare Metal in numbers.

In the previous posts I shared my first impressions and showed how to start using the Google Bare Metal Service (BMS). In this post I will show some numbers on the performance of the solution so you can compare it with your existing environment.

Let me start with the box characteristics. For my tests I used an “o2-standard-32-metal” box located in the us-west2 region (Los Angeles). The solution was configured with a 2 Gbps interconnect and had two storage resources attached to it. The first consisted of two 512 GB HDD-based disks, where I placed my binaries and a recovery ASM disk group, and the second was a 2 TB “all flash” volume I used for data. Here is a summary table:

Characteristic     Value
BMS box type       o2-standard-32-metal
CPU                Intel(R) Xeon(R) Gold 6234 CPU @ 3.30GHz
CPU sockets        2
CPU cores          16
Memory             384 GB
Disk 1             512 GB – Standard disk
Disk 2             512 GB – Standard disk
Disk 3             2048 GB – All flash
Network            4 NICs, speed: 25000 Mb/s
OS                 Oracle Linux 7.9

BMS box characteristics.

Before starting the tests I updated my Oracle Linux installation and installed the packages required for my Oracle database, along with tools to test IO and network performance, such as fio and iperf3. Here is a summary table of the software and tools used to test performance, followed by a short example of how the IO and network tests can be run.

Package       Testing scope
fio           IO performance
stress-ng     CPU, memory
swingbench    Oracle database performance
SLOB          Oracle database IO
iperf3        Network
oratcptest    Network
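
As an illustration of how the fio and iperf3 tests can be driven, here is a minimal sketch; the file path, target host and test parameters below are placeholders rather than the exact values I used.

    #!/usr/bin/env python3
    # Minimal sketch: drive a random-read fio test and an iperf3 client run.
    # The file path, target host and test parameters are placeholders.
    import subprocess

    FIO_TARGET = "/u02/fio/testfile"     # placeholder path on the all-flash volume
    IPERF_SERVER = "10.0.0.2"            # placeholder host running "iperf3 -s"

    fio_cmd = [
        "fio", "--name=randread-8k",
        f"--filename={FIO_TARGET}", "--size=10G",
        "--rw=randread", "--bs=8k",      # 8k blocks to mimic Oracle single-block reads
        "--ioengine=libaio", "--direct=1",
        "--iodepth=32", "--numjobs=4",
        "--time_based", "--runtime=120",
        "--group_reporting",
    ]

    iperf_cmd = ["iperf3", "-c", IPERF_SERVER, "-P", "4", "-t", "30"]  # 4 streams, 30 s

    for cmd in (fio_cmd, iperf_cmd):
        print("Running:", " ".join(cmd))
        subprocess.run(cmd, check=True)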
Continue reading “Google Bare Metal in numbers.”

Google Bare Metal for Oracle.

Since the first days of working in the Google public cloud there have been debates about the possibility of moving an Oracle workload to GCP. The major concerns came not from technical challenges but rather from Oracle’s licensing policies and guidelines. Oracle’s well-known document about licensing Oracle software in the public cloud states: “This policy applies to cloud computing environments from the following vendors: Amazon Web Services – Amazon Elastic Compute Cloud (EC2), Amazon Relational Database Service (RDS) and Microsoft Azure Platform (collectively, the ‘Authorized Cloud Environments’)”. So Google Cloud was not listed as an ‘Authorized Cloud Environment’ and it was unclear how to apply Oracle licensing there. I believe this will be sorted out in time, but in the meantime Google has presented the Bare Metal Service as a platform for Oracle workloads.

Continue reading “Google Bare Metal for Oracle.”

From Oracle to Google Big Query by Kafka

Last week, while checking my Twitter feed, I found a tweet from Confluent announcing a new Kafka connector with an Oracle database as a source. We had an Oracle connector before, but it worked by scanning the source tables and, as a result, added load to the source database. This one is different: it captures changes from the Oracle redo logs. I started testing it using my Kafka dev environment in the Google Cloud and one of my sandbox databases in the Oracle cloud. Here I would like to share how to start testing it and my very first experience with the tool.
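
To give an idea of what the initial setup looks like, here is a minimal sketch that registers the connector through the Kafka Connect REST API. The connector class and property names are written from memory of the Confluent Oracle CDC Source documentation, and the hosts and credentials are placeholders, so treat this as an outline to verify against the docs rather than a ready-to-use configuration.

    #!/usr/bin/env python3
    # Sketch: register the Confluent Oracle CDC source connector via the
    # Kafka Connect REST API. Hosts, credentials and several property names
    # are placeholders/assumptions to be checked against the Confluent docs.
    import json
    import requests

    CONNECT_URL = "http://my-connect-host:8083/connectors"   # placeholder Connect worker

    connector = {
        "name": "oracle-cdc-source",
        "config": {
            "connector.class": "io.confluent.connect.oracle.cdc.OracleCdcSourceConnector",
            "oracle.server": "my-oracle-host",      # placeholder database host
            "oracle.port": "1521",
            "oracle.sid": "ORCL",                   # placeholder SID
            "oracle.username": "C##CDC_USER",
            "oracle.password": "********",
            "start.from": "snapshot",
            "table.inclusion.regex": "ORCL[.]SCOTT[.].*",   # placeholder schema/tables
            "table.topic.name.template": "${databaseName}.${schemaName}.${tableName}",
            "tasks.max": "1",
        },
    }

    resp = requests.post(CONNECT_URL, json=connector)
    resp.raise_for_status()
    print(json.dumps(resp.json(), indent=2))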

Continue reading “From Oracle to Google Big Query by Kafka”

IPSec VPN between OCI and AWS.

I’ve been using the OCI and AWS clouds for a number of years, but primarily it was one or the other. Only in a few cases did I need to connect them to each other, mainly to get data from an AWS S3 bucket. But with the new OCI services the idea of using both clouds is getting more attractive, and multi-cloud environments are becoming more common. One of the main challenges for such a layout is the network. We have several options, from dedicated connections to third-party tools deployed on both sides, and all of them have their pros and cons. Today I would like to talk about the simplest case, where we use only native services on both sides and establish an IPSec VPN connection between the two clouds.
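
For a rough idea of the native building blocks involved, here is a sketch that creates the VPN objects on both sides with the AWS and OCI Python SDKs. All IDs, IP addresses and OCIDs are placeholders, and in practice the public IP used on each side comes from the other side’s VPN configuration, so the objects are created and adjusted in a couple of passes; the post goes through the actual procedure.

    #!/usr/bin/env python3
    # Rough sketch of the native pieces of an OCI<->AWS IPSec VPN: a customer
    # gateway and VPN connection on the AWS side, a CPE and IPSec connection on
    # the OCI side. All IDs, IPs and OCIDs below are placeholders.
    import boto3
    import oci

    # --- AWS side: customer gateway pointing at the OCI VPN endpoint ---
    ec2 = boto3.client("ec2", region_name="us-west-2")
    cgw = ec2.create_customer_gateway(
        BgpAsn=65000,                  # placeholder ASN (required even for static routing)
        PublicIp="129.146.0.10",       # placeholder: OCI IPSec tunnel public IP
        Type="ipsec.1",
    )
    vpn = ec2.create_vpn_connection(
        CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
        VpnGatewayId="vgw-0123456789abcdef0",   # placeholder: existing virtual private gateway
        Type="ipsec.1",
        Options={"StaticRoutesOnly": True},
    )

    # --- OCI side: CPE object pointing at the AWS tunnel outside IP ---
    config = oci.config.from_file()
    vcn_client = oci.core.VirtualNetworkClient(config)
    cpe = vcn_client.create_cpe(
        oci.core.models.CreateCpeDetails(
            compartment_id="ocid1.compartment.oc1..aaaa...",   # placeholder
            ip_address="52.0.0.20",                            # placeholder: AWS tunnel outside IP
            display_name="aws-vpn-cpe",
        )
    ).data
    ipsec = vcn_client.create_ip_sec_connection(
        oci.core.models.CreateIPSecConnectionDetails(
            compartment_id="ocid1.compartment.oc1..aaaa...",
            cpe_id=cpe.id,
            drg_id="ocid1.drg.oc1..aaaa...",                   # placeholder: existing DRG
            static_routes=["10.0.0.0/16"],                     # AWS VPC CIDR
        )
    ).data

    print(vpn["VpnConnection"]["VpnConnectionId"], ipsec.id)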

Continue reading “IPSec VPN between OCI and AWS.”

The new 21 is already here for Oracle Autonomous.

You’ve probably already seen in the news that Oracle 21c is available, along with some tweets and blogs about the new release. But did you know that it is available not only for DBCS with “normal” cloud databases but also for the Autonomous version?

Continue reading “The new 21 is already here for Oracle Autonomous.”

Cucumbers, coffee and chocolate or how to create non-cdb on Exadata Cloud at Customer.

For those who are puzzled by the title, here is a short explanation. I hadn’t paid much attention to what I had in my fridge, and one day I found only a couple of cucumbers, some chocolate and some coffee. That was not too bad, but I couldn’t call it a proper diet. At the same time I was exploring the possibility of having a non-cdb 12.1 Oracle database on Exadata Cloud at Customer (ExaCC). One might think the blog compares the unusual diet with a non-cdb deployment in a cloud environment, suggesting that you should not really use non-cdb, just as you probably shouldn’t eat only cucumbers, chocolate and coffee. But that is not the case; the blog is about how to create such a non-cdb on an ExaCC.

Continue reading “Cucumbers, coffee and chocolate or how to create non-cdb on Exadata Cloud at Customer.”

Oracle OCI Resource Manager Discovery.

If you work with Terraform, you are quite familiar with the situation where a lot of resources have already been deployed manually. What options do we have in such a case? The first one is to use the native Terraform Resource Discovery and create a state file, which can be imported into your enterprise configuration. But if you plan to use Resource Manager in OCI, you can use the new Resource Manager Discovery feature. It creates a stack by discovering the existing resources in a compartment.
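
As a sketch of how this could be driven from the OCI Python SDK, here is the general shape of creating a stack from a compartment. The CreateCompartmentConfigSourceDetails model name and its fields are my assumption about how the SDK exposes this API, so verify them against the SDK reference before relying on this.

    #!/usr/bin/env python3
    # Sketch: create a Resource Manager stack by discovering an existing
    # compartment, using the OCI Python SDK. The config-source model name and
    # fields are assumptions to be checked against the SDK documentation.
    import oci

    config = oci.config.from_file()
    rm_client = oci.resource_manager.ResourceManagerClient(config)

    compartment_id = "ocid1.compartment.oc1..aaaa..."   # placeholder compartment OCID

    stack_details = oci.resource_manager.models.CreateStackDetails(
        compartment_id=compartment_id,                   # where the stack itself is created
        display_name="discovered-stack",
        config_source=oci.resource_manager.models.CreateCompartmentConfigSourceDetails(
            compartment_id=compartment_id,               # compartment to discover
            region=config["region"],
        ),
    )

    stack = rm_client.create_stack(stack_details).data
    print("Created stack:", stack.id, stack.lifecycle_state)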

Continue reading “Oracle OCI Resource Manager Discovery.”

Oracle ExaCC Gen 2 new features and improvements.

Some time ago, after the last Oracle Open World, Christine Kivi wrote a blog post stating that this is not “your father’s Oracle” anymore. The rapid development and continuous improvements in the Oracle cloud are among the signs that Oracle is changing. The generation 2 Exadata Cloud at Customer (ExaCC) was released at that last OOW 19 and initially had some limitations in options and interface. The Oracle team promised to fix the issues and provide new functionality, planning some major updates for the next calendar year (2020). And as far as I can see, the Oracle team is delivering on what was promised. Here I will try to review some of the new features implemented over the last several months. This is going to be a relatively long post, so you can go to the bottom, read the summary, and then read the details only for the changes you are interested in.

Continue reading “Oracle ExaCC Gen 2 new features and improvements.”

Oracle multi-tenant: PDB scope parameters and where you can see them.

Working with the Oracle multi-tenant architecture gives us some obvious benefits but also some challenges. What if we want to change a system parameter for only a certain pluggable database (PDB) and keep the default for all others? Starting from 12.1, Oracle provides the ability to modify parameters at the PDB level. If you look at the reference documentation for database parameters, it states clearly whether a parameter can be modified at the PDB level or not. And with every new release we have more and more parameters which can be changed at the PDB level: the number has grown from 185 in 12.1.0.2 to 194 in 19.7.0.0.
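
As a quick illustration of where this is visible in the dictionary, here is a small sketch that counts the parameters flagged as PDB-modifiable and changes one of them only for the current PDB; the connection details are placeholders.

    #!/usr/bin/env python3
    # Sketch: list parameters that can be changed at the PDB level and modify
    # one of them only for the current PDB. Connection details are placeholders.
    import oracledb

    conn = oracledb.connect(user="system", password="********", dsn="myhost/mypdb1")
    cur = conn.cursor()

    # V$SYSTEM_PARAMETER exposes ISPDB_MODIFIABLE for every parameter.
    cur.execute(
        "SELECT COUNT(*) FROM v$system_parameter WHERE ispdb_modifiable = 'TRUE'"
    )
    print("Parameters modifiable at PDB level:", cur.fetchone()[0])

    # Change a parameter only for this PDB (we are connected to the PDB service,
    # so CONTAINER = CURRENT limits the change to it).
    cur.execute("ALTER SYSTEM SET open_cursors = 500 CONTAINER = CURRENT")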

Continue reading “Oracle multi-tenant: PDB scope parameters and where you can see them.”

Migrating a new PDB to the existing 19c DataGuard on ExaCC.

We’ve already discussed how to migrate a standalone 12.2 database to a pluggable database (PDB) in a 19c container in the Oracle cloud. But what if the target container database (CDB) is already part of a Data Guard configuration and has several PDBs in it? I will go through the main steps of how to do that without breaking the replication.
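
To give a flavour of the kind of steps involved, here is a minimal sketch of one common approach: plug the PDB into the primary with STANDBYS=NONE so the standby does not try to recover datafiles it does not yet have, and enable recovery on the standby once the files are in place there. The manifest path and names are placeholders, and the full procedure is in the post.

    #!/usr/bin/env python3
    # Sketch of one common way to add a PDB to a CDB that is part of a Data Guard
    # configuration without breaking redo apply. Paths and names are placeholders.
    import oracledb

    # On the primary CDB: plug in the PDB but exclude it from the standbys for now.
    primary = oracledb.connect(user="sys", password="********",
                               dsn="primary-host/CDB19",
                               mode=oracledb.AUTH_MODE_SYSDBA)
    cur = primary.cursor()
    cur.execute("""
        CREATE PLUGGABLE DATABASE newpdb
          USING '/u01/stage/newpdb.xml'
          NOCOPY TEMPFILE REUSE
          STANDBYS=NONE
    """)
    cur.execute("ALTER PLUGGABLE DATABASE newpdb OPEN")

    # Later, once the PDB datafiles have been made available on the standby
    # (for example restored from the primary), enable recovery for it there:
    #   ALTER SESSION SET CONTAINER = newpdb;
    #   ALTER PLUGGABLE DATABASE ENABLE RECOVERY;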

Continue reading “Migrating a new PDB to the existing 19c DataGuard on ExaCC.”