This blog was supposed to be a small how-to, but it has grown into something bigger and will hopefully help you avoid some minor problems when exporting data from an Autonomous Database (ADB) in Oracle Cloud Infrastructure (OCI). It is about exporting data in Oracle Data Pump format to move it to another database or keep it as a logical “backup”.
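To set expectations, here is a minimal sketch of the kind of export discussed here, run with the Data Pump client against an ADB service (the connection alias, schema and file names are placeholders, not the exact commands from the post):

```
# Schema-level export into the database directory DATA_PUMP_DIR.
# admin@myadb_high, MYSCHEMA and the file names are placeholders.
expdp admin@myadb_high \
  schemas=MYSCHEMA \
  directory=data_pump_dir \
  dumpfile=myschema_%U.dmp \
  logfile=myschema_exp.log \
  parallel=2
```

The resulting dump files can then be copied out of the database, for example to OCI Object Storage with DBMS_CLOUD.PUT_OBJECT, before being imported into the target database.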
The Oracle documentation provides sufficient information, but I find it more and more difficult to navigate given the number of options and flavours of Oracle databases. There are some new ways and tools around Oracle OCI ATP that can help in some cases. If you want, you can jump directly to the end and read the summary.
This blog was primarily driven by questions from my peers and colleagues who wondered where my blog was hosted and how it was created. It might help you move from a hosting platform to your own website and show you where to start.
Like most bloggers, I started my blog on one of the hosting platforms, but soon found some limitations in choosing the appearance and plugins, and was a bit annoyed by the commercial banners on my page. After a while I decided to move to my own site. I bought a domain name and created my own environment using the WordPress software on a cloud VM. It didn’t cost me too much, but it was not entirely free. When Oracle introduced some additions to the Always Free set of resources, I decided to give it a try and move my blog entirely to the OCI Free Tier.
For those who would like to skip the reading and try it, I have a set of Terraform scripts on GitHub. They haven’t been updated lately and don’t use the latest versions, but they can be a good place to start.
A short disclaimer: I am writing this in the middle of March 2022, and it is possible that by the time you read the blog the information published here will no longer be relevant. Cloud products evolve very fast.
I am writing this post to share some observations and potential issues you might run into when deploying GCP Memorystore for Redis instances through the Anthos Config Connector (ACC) controller. If you are not familiar with ACC, I strongly recommend reading at least a high-level overview of the product. In essence, it is a Kubernetes add-on which allows you to automatically deploy and manage GCP services by applying a manifest (YAML or Helm chart) to a Kubernetes cluster running the ACC controller. It lets you use the Kubernetes cluster as a deployment tool for GCP resources in your organization. This is a really interesting approach and might transform your environment in the cloud, but it brings some challenges around security which I am going to discuss in the blog.
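As a quick illustration of the idea, here is a minimal sketch (not a manifest from this post; the resource name, namespace and sizing are placeholders) of declaring a Memorystore for Redis instance as a Kubernetes resource and letting the Config Connector controller create it:

```
# Hypothetical example: apply a RedisInstance manifest to a cluster that
# runs Config Connector; the controller then provisions the GCP resource.
cat <<'EOF' | kubectl apply -f -
apiVersion: redis.cnrm.cloud.google.com/v1beta1
kind: RedisInstance
metadata:
  name: demo-redis            # placeholder name
  namespace: config-control   # assumed namespace watched by Config Connector
spec:
  region: us-central1
  tier: BASIC
  memorySizeGb: 1
EOF
```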
In one of my previous posts I noted that GCP Cloud SQL for SQL Server doesn’t have point-in-time recovery as of March 2022. As a result, the default out-of-the-box backups can only provide an RPO of 24 hours or more. The exact RPO might vary from day to day, since you can only specify a backup window, not an exact time. So far the only reasonable approach to reduce the RPO seems to be scheduling on-demand backups, and in this post I am going to show how you can do that using a couple of different approaches.
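To give a flavour of the simplest of those approaches, here is a minimal sketch using the gcloud CLI (the instance name and schedule are placeholders):

```
# Trigger an on-demand backup of a Cloud SQL instance.
gcloud sql backups create --instance=my-sqlserver-instance

# Hypothetical cron entry running it every 6 hours to bring the RPO well under 24h:
# 0 */6 * * * gcloud sql backups create --instance=my-sqlserver-instance
```

The same operation can also be driven from Cloud Scheduler or a Cloud Function through the Cloud SQL Admin API (backupRuns.insert) instead of a plain cron job.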
Before starting, let me clarify that the state of readiness of Google Cloud SQL for SQL Server I am going to describe is accurate as of early February 2022. It is quite possible that some things will be different by the time you read the post.
For the last several months I have been helping some big enterprises adopt Google Cloud Platform (GCP), and as part of the implementation a significant number of SQL Server databases were moving to the GCP Cloud SQL service. But when we started to build the environment in GCP, it was clear that the SQL Server option for Cloud SQL is much inferior not only to other cloud offerings and on-prem installations, but also to the other database engines on the same Cloud SQL. In short, SQL Server on the GCP Cloud SQL service lacked some essential features. Here I will try to explain why I think SQL Server in GCP is not mature enough for the enterprise.
Lately I have been working primarily with Google Cloud Platform (GCP), and in particular with the Kubernetes service (GKE). As a result, my daily command-line tools are gcloud, kubectl, nomos and others. And while the GCP Cloud Shell is a really amazing environment that requires no effort to fire up, sometimes it is not possible to use it. When it comes to working from your own laptop you have different options: you can install tools like the Google Cloud SDK following a few simple steps from the Google website, or you can prepare a Docker image and run the tools in a container. I personally prefer the second way. That way I can periodically update the entire environment without too much effort and can easily spin up a new environment on any laptop fairly quickly. Here I am sharing what I personally use for my day-to-day activity.
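As an example of the container-based approach, here is a minimal sketch assuming the published cloud-sdk image (the image tag and mounted paths are illustrative, not necessarily the exact setup described here):

```
# Run gcloud/kubectl from the Google Cloud SDK image, mounting gcloud and
# kube config from the host so logins and contexts persist across runs.
docker pull gcr.io/google.com/cloudsdktool/cloud-sdk:latest
docker run -it --rm \
  -v "$HOME/.config/gcloud:/root/.config/gcloud" \
  -v "$HOME/.kube:/root/.kube" \
  gcr.io/google.com/cloudsdktool/cloud-sdk:latest bash
```

Updating the whole environment then comes down to pulling a newer tag of the image.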
Terraform is probably already the de facto standard for cloud deployments. I use it on a daily basis, deploying and destroying my test and demo setups in my Oracle Cloud tenancy. Sometimes the deployment environment for a demo has too many files, or some of the files are really big and hard to read because of the number of different resources and parameters included in them. How can we make our configuration more usable? Let’s try Terraform modules and demonstrate how they work.

For these tests we are going to use terraform v1.0.3 and Oracle Cloud Infrastructure (OCI). You will need a working OCI tenancy and a machine with terraform installed and the required environment variables defined; the full list of required environment variables is provided in the README file in the GitHub repository. Let’s say we have a simple demo or test configuration with a dedicated network, an internet gateway and a VM, and we want to assign multiple security rules using security lists and maybe one or two security groups. We can include all those rules in the network configuration file, but maybe there is a better way. What if we want to reuse a similar set of security rules and security groups not only in that deployment but also share it with other stacks? We can try Terraform modules, as sketched below.
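To give an idea of the shape this takes, here is a minimal sketch with hypothetical paths and variable names (not the actual layout from the GitHub repository): the shared rules live in their own directory and are pulled into a stack with a module block.

```
# Hypothetical layout: the shared security rules live in ./modules/security.
mkdir -p modules/security demo

cat > modules/security/variables.tf <<'EOF'
variable "compartment_id" {}   # inputs the shared module is assumed to need
variable "vcn_id" {}
EOF
# modules/security/main.tf would hold the oci_core_security_list and
# oci_core_network_security_group resources shared between stacks.

cat > demo/main.tf <<'EOF'
# Assumes the demo stack also declares var.compartment_id and an
# oci_core_vcn.demo_vcn resource for its dedicated network.
module "security" {
  source         = "../modules/security"   # reuse the shared rules in this stack
  compartment_id = var.compartment_id
  vcn_id         = oci_core_vcn.demo_vcn.id
}
EOF

(cd demo && terraform init)   # init wires the local module into the stack
```

The same module can later be referenced from any other stack by pointing its source to the same directory or to a Git repository.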
If you’ve been following the recent changes in the Linux world, you probably remember how Red Hat and CentOS announced in December 2020 that the CentOS Project was shifting focus to CentOS Stream and support for CentOS Linux 8 had been cut to December 31, 2021. It created a wave of discussions in the community about the future of CentOS as an enterprise platform, and some people started to look at alternative Linux distributions. As a result we got a new, community-driven downstream build, just as CentOS used to be: Rocky Linux.
A downstream build is based on the same code base as the vendor distribution and resembles most features of the “parent” vendor Linux; it follows each release after it has been built by the vendor. In most of my tests I use Oracle Linux when I am in Oracle Cloud, but I use CentOS in Google Cloud and other public clouds like Azure or AWS. Now we have Rocky Linux available on those platforms, and I’ve had a quick look and done some testing using Rocky Linux 8.4 (Green Obsidian).
Some time ago I wrote a short blog about the dependency between the number of enabled CPUs and how many databases you can build. Today we got another error when trying to create a new database. Here is a screenshot of the error.
If you can’t read it on a small screen, it says “Create Database operation failed due to an unknown error. Refer to work request ID 2580d3ff-064e-4e6f-ab06-1327fd02f40e when opening a Service Request at My Oracle Support.” and provides an error code which is simply “Error”.
Some time ago I updated my terraform command-line tool to version 0.15.3 and was surprised how easily it went. Originally I planned to write a blog about it, but there was not much to write about. The upgrades to versions 0.11 and 0.13 were much more painful. Last week HashiCorp announced Terraform 1.0 General Availability, which meant the time for a new upgrade had come. I upgraded it on one of my machines and decided to write a short blog about both upgrades to encourage people to try the upgrade themselves.
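For reference, the upgrade itself boils down to swapping the binary and re-checking your configurations; a minimal sketch, assuming a manually installed binary rather than a package manager:

```
terraform version   # confirm the current version, e.g. 0.15.x
# replace the terraform binary with the 1.0.x release, then:
terraform version
terraform init      # re-initialize an existing configuration
terraform plan      # confirm the plan output is unchanged
```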