From the first days of working with the Google public cloud there have been debates about whether an Oracle workload can be moved to GCP. The main concerns came not from technical challenges but from Oracle’s licensing policies and guidelines. Oracle’s well-known document on licensing Oracle software in the public cloud states: “This policy applies to cloud computing environments from the following vendors: Amazon Web Services – Amazon Elastic Compute Cloud (EC2), Amazon Relational Database Service (RDS) and Microsoft Azure Platform (collectively, the ‘Authorized Cloud Environments’)”. Google Cloud was not listed as an ‘Authorized Cloud Environment’, so it was unclear how Oracle licensing should be applied there. I believe this will be sorted out in time, but in the meantime Google has presented the Bare Metal Solution as a platform for Oracle workloads.
The Bare Metal Solution (BMS) is a dedicated physical machine connected to a Google Cloud zone by a high-speed, low-latency network. It can be used either as a bare metal box with one of the approved operating systems installed, or with Oracle VM (OVM) v3 installed on top of it.
It allows you to apply the same licensing policies and guidelines as for any compatible physical hardware, and to use the same set of options and packs. For example, some database options, such as “Database In-Memory Base Level”, are not available in AWS and Azure but can be used on physical hardware. Another example is Oracle Real Application Clusters (RAC).
If you are familiar with Oracle CPU licensing, you probably know about the Oracle processor core factor: most x86-64 Intel and AMD CPUs have a factor of 0.5, which allows two CPU cores per processor license. In the authorized cloud environments such as AWS and Azure the core factor does not apply, but for a bare metal box the rules should be the same as for a normal on-premises installation. Of course, in that case you need to choose an appropriate shape for your machine according to your licenses, or use Oracle VM to split the box into several VMs.
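The core factor arithmetic above can be sketched in a few lines. This is a minimal illustration of the 0.5 factor, not a licensing tool; always verify counts against your own Oracle agreement and the current core factor table.

```python
import math

def processor_licenses(physical_cores: int, core_factor: float = 0.5) -> int:
    """Processor licenses required = cores x core factor, rounded up
    to a whole license."""
    return math.ceil(physical_cores * core_factor)

# With the 0.5 core factor common for x86-64 CPUs, an 8-core BMS
# shape needs 4 processor licenses and a 16-core shape needs 8.
print(processor_licenses(8))   # 4
print(processor_licenses(16))  # 8
```

Note that the result is rounded up: an odd core count such as 7 still requires 4 licenses under a 0.5 factor.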
When it comes to shapes, BMS is available in fixed configurations ranging from 8 cores with 192 GB of memory up to 448 cores with 24 TB. The jumps between core counts are sometimes wide: one shape has 24 cores and the next already has 56. In that case you can use OVM and pin cores to a VM accordingly, to align the machine with your licenses. From a performance point of view, the BMS machines deliver performance equal to or better than comparably configured machines on premises or in the cloud. I will explore this in detail in the following posts about CPU and storage performance and the tests I ran on a sample Oracle database.
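A rough sketch of that shape-selection step, assuming the core counts mentioned above (8, 24, 56, 448 — the real BMS catalog has more entries): pick the smallest shape that can host your licensed core count, then pin only the covered cores to the OVM guest.

```python
# Illustrative shape list only, taken from the core counts in the post.
BMS_SHAPE_CORES = [8, 24, 56, 448]

def plan_for_licenses(licenses: int, core_factor: float = 0.5):
    """Return (shape_cores, cores_to_pin): the smallest shape that can
    host the licensed core count, and how many cores to pin to the VM
    when the shape is larger than the licenses cover."""
    licensed_cores = int(licenses / core_factor)
    for shape in BMS_SHAPE_CORES:
        if shape >= licensed_cores:
            return shape, min(shape, licensed_cores)
    raise ValueError("licensed core count exceeds the largest shape")

# 16 processor licenses cover 32 cores under a 0.5 factor: the next
# shape up is 56 cores, so the OVM guest would be pinned to 32 of them.
print(plan_for_licenses(16))  # (56, 32)
print(plan_for_licenses(4))   # (8, 8)
```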
The network layer was really good. The latency between my sample application boxes in Google Cloud and the BMS machine in the same zone was less than 1 ms, and the throughput matched the published specifications. My BMS box had a 2 Gbps network connection, which was quite enough to run all my tests with Swingbench and Kafka replication to the Google BigQuery service.
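To put the sub-millisecond latency in perspective, here is some illustrative arithmetic (not a measurement from my environment) showing how round-trip time caps a chatty, synchronous workload where the application waits for each call before issuing the next:

```python
def max_sync_calls_per_sec(rtt_ms: float) -> float:
    """A single connection issuing strictly synchronous calls can
    complete at most 1/RTT calls per second, ignoring server time."""
    return 1000.0 / rtt_ms

def tx_per_sec_ceiling(rtt_ms: float, roundtrips_per_tx: int,
                       connections: int) -> float:
    """Upper bound on transactions/sec when each transaction needs
    several client-server round trips."""
    return connections * max_sync_calls_per_sec(rtt_ms) / roundtrips_per_tx

# At 1 ms RTT, a transaction making 5 round trips is capped at
# 200 tx/sec per connection; 100 connections raise that to 20,000.
print(tx_per_sec_ceiling(1.0, 5, 1))    # 200.0
print(tx_per_sec_ceiling(1.0, 5, 100))  # 20000.0
```

This is why same-zone placement of the application tier and the BMS box matters: at 10 ms of RTT the same hypothetical workload’s ceiling drops by a factor of ten.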
For the Swingbench server-side SOE tests I got an average of 11,598 transactions per second on my 16-core box. Interestingly, the result was the same whether I ran the workload from the BMS box itself or from a Google Cloud VM.
Overall, my experience with BMS was positive, from the documentation to working with the support team to resolve some routing and configuration issues.
I think BMS can be a great target destination for an Oracle workload when you can properly align your Oracle licenses with the BMS shapes, or have no objection to using an appropriate authorized hypervisor. It provides good performance and lets you use all the Google services under Google’s SLA. One example I see is an OLTP Oracle database with the application tier in Google Cloud, offloading reporting data to Google BigQuery and analyzing it with Google’s machine learning capabilities.
In the next post I will provide some performance numbers for IO and CPU, along with network details for my BMS environment. Stay tuned.