Sample Go application with database backend

Yes, this is one more sample app, as there are probably thousands already on the internet. We have tons of sample apps from different vendors, with various types of licenses, available on different repositories. Nevertheless, sometimes I struggle to find exactly what I need: a simple app with a database backend that can work with Oracle Autonomous Database and, optionally, with a Postgres backend. In my everyday life I primarily use Go as my programming language, and I wanted such an app written in Go. So eventually I gave up searching and created my own application with a simple frontend and, for now, two options for the backend database: Oracle and Postgres.

Continue reading “Sample Go application with database backend”

AlloyDB backups management

This post is about backup management for AlloyDB. It might be useful at the time of writing but will probably become obsolete fairly soon, as the tools and APIs for the service mature.
A couple of words about AlloyDB backups and how they are created. The backups are quite different from the default backups in, for example, Cloud SQL. As we know, in Cloud SQL all backups are bound to the instance: when the instance is deleted, all of its backups disappear along with it. That makes sense if, behind the scenes, the backups are storage snapshots of the databases. In AlloyDB, however, the backups are decoupled from the cluster and exist on their own. If you delete a cluster, the backups stay. I think this is a much better approach because it protects against mistakes such as deleting an instance before making a clone or exporting the data. As of now, you can see all the backups for existing and deleted clusters using the “Backups” tab in the console, the gcloud utility, or by listing them through the GCP REST API.

But how can we manage the backups? What do we do if we need to implement our own retention policy or simply delete some backups? Here are a couple of ways to do that.
The first way is the gcloud utility, which provides commands to list, create, and delete backups for an AlloyDB cluster. For example, we can list the backups using the following command:

bastion$ gcloud beta alloydb backups list
NAME                                                                                                                STATUS  CLUSTER_NAME                                                          CREATE_TIME                     ENCRYPTION_TYPE
projects/sandbox/locations/us-central1/backups/automated-bkp-20220914-15-0b07c719-7397-408d-b4e9-779dbdb4c8cb  READY   projects/475708065648/locations/us-central1/clusters/gleb-alloydb-02  2022-09-14T14:44:24.087185163Z  GOOGLE_DEFAULT_ENCRYPTION
projects/sandbox/locations/us-central1/backups/automated-bkp-20220911-15-5420cf02-3588-45d7-89c6-7d836b210d21  READY   projects/475708065648/locations/us-central1/clusters/gleb-alloydb-02  2022-09-11T14:44:16.261711844Z  GOOGLE_DEFAULT_ENCRYPTION

Or we can delete a particular backup using the gcloud beta alloydb backups delete command.
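For example, deleting the first backup from the listing above would look something like this (the backup name and region here are just for illustration):

bastion$ gcloud beta alloydb backups delete automated-bkp-20220914-15-0b07c719-7397-408d-b4e9-779dbdb4c8cb --region=us-central1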

That’s great, but I am lazy and prefer an automated approach. Of course, you might choose to write a bash script and use gcloud there to filter and manage the backups. That might work, but it is not a really scalable or reliable approach. Usually I create a function for the service using the service’s client API and trigger it with a Pub/Sub message carrying the call parameters. Cloud Scheduler is responsible for posting the message to the Pub/Sub topic. Here is a basic diagram (made using the GCP diagram tool):


I chose the same workflow for AlloyDB backup management. The only difference was that we didn’t have a client API yet, so I had to use the REST API instead. I prepared a function, written in Go, which accepts a Pub/Sub message with a JSON payload supplying the AlloyDB cluster name, operation type, retention policy, and location. Here is an example of the payload.

{ "project":"sandbox",  "location":"us-central1", "operation":"DELETE", "cluster":"ALL", "retention":105}

The function itself is written in Go and can create or delete backups. You can specify a cluster name or put “ALL”, and the function will delete all backups in the project for that location according to the retention policy in days. You can find the function’s source code in my repository on GitHub.
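The full source is in the repository, but here is a trimmed sketch of how such a 2nd gen function might receive and parse the payload, assuming the Go functions framework and the CloudEvents SDK (the package, entry point, and struct names here are illustrative; only the JSON fields mirror the payload above):

package backups

import (
	"context"
	"encoding/json"
	"log"

	"github.com/GoogleCloudPlatform/functions-framework-go/functions"
	"github.com/cloudevents/sdk-go/v2/event"
)

// BackupRequest mirrors the JSON payload posted to the Pub/Sub topic.
type BackupRequest struct {
	Project   string `json:"project"`
	Location  string `json:"location"`
	Operation string `json:"operation"` // "CREATE" or "DELETE"
	Cluster   string `json:"cluster"`   // cluster name or "ALL"
	Retention int    `json:"retention"` // retention in days
}

// pubSubMessage is the Pub/Sub envelope delivered inside the CloudEvent;
// the Data field is base64-decoded automatically by encoding/json.
type pubSubMessage struct {
	Message struct {
		Data []byte `json:"data"`
	} `json:"message"`
}

func init() {
	// Register the CloudEvent handler under the entry point name.
	functions.CloudEvent("ManageBackups", manageBackups)
}

func manageBackups(ctx context.Context, e event.Event) error {
	var m pubSubMessage
	if err := e.DataAs(&m); err != nil {
		return err
	}
	var req BackupRequest
	if err := json.Unmarshal(m.Message.Data, &req); err != nil {
		return err
	}
	log.Printf("operation=%s cluster=%s retention=%d days", req.Operation, req.Cluster, req.Retention)
	// Here the function would call the AlloyDB REST API to list the
	// backups and create or delete them based on req.Operation.
	return nil
}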

It was also my first function using the 2nd generation of GCP Cloud Functions. In reality I probably didn’t need any of the new features provided by the updated engine, but I thought it was time to move to the new generation for all new projects.

The process was not too different from creating a 1st gen function, but it asked me to enable some additional APIs.
I already had a Pub/Sub topic created for the AlloyDB operations, and I created the function with the “Eventarc trigger” and the event type “Cloud Pub/Sub”:

You’ve probably also noticed that I’ve modified the runtime settings, reducing the memory to 128MB.

I set a timeout of 120s just in case and provided the source code.
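If you prefer the command line over the console, the equivalent deployment would look roughly like this (the function name, topic name, and entry point are just placeholders, and you should check which Go runtime version is available at the time):

bastion$ gcloud functions deploy alloydb-backups \
    --gen2 \
    --runtime=go119 \
    --region=us-central1 \
    --trigger-topic=alloydb-operations \
    --entry-point=ManageBackups \
    --memory=128Mi \
    --timeout=120s \
    --source=.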

The function was built and published and I could see the new subscriber on my Pub/Sub topic:

The next step was to schedule the daily run in Cloud Scheduler to send the payload to the Pub/Sub topic. The procedure is simple and takes just a couple of minutes.
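For reference, a similar job can be created with gcloud (the job name and schedule below are just examples; this one fires daily at 3am):

bastion$ gcloud scheduler jobs create pubsub alloydb-backups-cleanup \
    --location=us-central1 \
    --schedule="0 3 * * *" \
    --topic=alloydb-operations \
    --message-body='{"project":"sandbox","location":"us-central1","operation":"DELETE","cluster":"ALL","retention":105}'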

As a result, all the backups for gleb-alloydb-01 older than 105 days were scheduled to be automatically deleted from the project. In the same way, you can schedule creation of on-demand backups by changing the operation to “CREATE”.
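For example, a payload for an on-demand backup of a single cluster might look like this (assuming the retention field is simply ignored for the CREATE operation):

{ "project":"sandbox", "location":"us-central1", "operation":"CREATE", "cluster":"gleb-alloydb-01", "retention":105}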

I hope this post helps with some basic management automation for the new AlloyDB service. Please let me know if you would like to know more about AlloyDB or if you have any ideas for automation.

Google Cloud SQL Custom Backups

In one of my previous posts I noted that GCP Cloud SQL for SQL Server doesn’t have point-in-time recovery as of March 2022. As a result, the default out-of-the-box backups can only provide an RPO of 24 hours or more. The exact RPO might vary from day to day, since you can only specify a window for the backup, not an exact time. So far, it seems the only reasonable approach to reduce the RPO is to schedule on-demand backups, and in this post I am going to show how you can do that using a couple of different approaches.

Continue reading “Google Cloud SQL Custom Backups”

Terraform modules simplified.

Terraform is probably already the de facto standard for cloud deployment. I use it on a daily basis, deploying and destroying my test and demo setups in my Oracle Cloud tenancy. Sometimes the deployment environment for a demo has too many files, or some of the files are really big and hard to read because of the number of different resources and parameters included there. How can we make our configuration more usable? Let’s try Terraform modules and demonstrate how they work.
For our tests we are going to use Terraform v1.0.3 and Oracle Cloud Infrastructure (OCI). You will need a working OCI tenancy and a machine with Terraform and the required environment variables defined. The full list of required environment variables is provided in the README file in the GitHub repository.
Let’s say we have a simple demo or test configuration with a dedicated network, an internet gateway, and a VM. And we want to assign multiple security rules using security lists and maybe one or two security groups. We could include all those rules in the network configuration file, but maybe there is a better way. What if we want to reuse a similar set of security rules and security groups not only in that deployment but also share it with some other stacks? We can try Terraform modules.

Continue reading “Terraform modules simplified.”

Upgrading Terraform command line to the latest version.

Some time ago I updated my Terraform command line tool to version 0.15.3 and was surprised how easily it went. Originally I planned to write a blog post about it, but there was not much to write about. The upgrades to versions 0.11 and 0.13 were much more painful. Last week HashiCorp announced the general availability of Terraform 1.0, which meant the time for a new upgrade had come. I upgraded it on one of my machines and decided to write a short blog post about both upgrades to encourage people to try the upgrade themselves.

Continue reading “Upgrading Terraform command line to the latest version.”

Oracle OCI Resource Manager Discovery.

If you work with Terraform, you are quite familiar with the situation where a lot of resources have already been deployed manually. What options do we have in such a case? The first is to use native Terraform resource discovery and create a state file that can be imported into your enterprise configuration. But if you plan to use Resource Manager in OCI, you can use the new Resource Manager Discovery feature. It creates a stack by discovering the resources in a compartment.

Continue reading “Oracle OCI Resource Manager Discovery.”