Is Google Cloud SQL for SQL Server enterprise-ready?

Before diving into the post, let me clarify that the state of readiness of Google Cloud SQL for SQL Server I am going to describe is current as of early February 2022. It is quite possible that some things will have changed by the time you read this.

For the last several months I have been helping some big enterprises adopt Google Cloud Platform (GCP), and, as part of the implementation, a significant number of SQL Server databases were being moved to the GCP Cloud SQL service. But when we started to build the environment in GCP, it became clear that the SQL Server option for Cloud SQL is much inferior not only to other cloud offerings and on-prem installations but also to the other database engines on the same Cloud SQL. In short, SQL Server on the GCP Cloud SQL service lacks some essential features. Here I will try to explain why I think SQL Server in GCP is not mature enough for the enterprise.

I am not going to dig deep into each and every feature but rather list some capabilities which, I think, define enterprise readiness for relational databases and try to evaluate the state of those capabilities in GCP Cloud SQL for SQL Server. If you don’t want to read it all, you can jump to the summary at the end.

Let me start with High Availability (HA) and Business Continuity (BC). When we talk about HA and BC, we discuss the entire range of features and technologies that allow us to continue business operations in different disaster scenarios. A proper HA solution has to protect us from zonal and regional failures on the provider level and save our data in case of logical or physical corruption. If you don’t have that in your product, you are not ready for any serious workload. So what do we have here?

Starting with the good news: we have an HA configuration for SQL Server, and it protects from a zonal failure. That’s great, but what about regional HA? We have a read-only replica which has to be manually promoted to read-write mode. Don’t forget about recreating SQL users and reconfiguring all your clients. And keep in mind this cross-region replica is still in preview.
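
As a rough illustration, this is what the zonal HA setup and a manual cross-region failover look like with the gcloud CLI (instance names and regions are placeholders, and the flags reflect the syntax current in early 2022):

    # Create an instance with an HA standby in another zone of the same region
    gcloud sql instances create my-sqlserver \
        --database-version=SQLSERVER_2019_STANDARD \
        --region=us-central1 \
        --availability-type=REGIONAL \
        --cpu=4 --memory=16GB \
        --root-password=CHANGE_ME

    # Add a cross-region read replica (still in preview for SQL Server)
    gcloud sql instances create my-sqlserver-dr \
        --master-instance-name=my-sqlserver \
        --region=us-east1

    # In a regional disaster: manually promote the replica to read-write
    gcloud sql instances promote-replica my-sqlserver-dr

After the promotion you still have to recreate the SQL logins and repoint every client yourself.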

How are we protected from data loss? And here is the main issue. The platform doesn’t offer point-in-time recovery, leaving us with only a daily scheduled full backup, and even the start time for that backup can be defined only approximately, as a window. Of course, you can take manual full backups or export to a storage bucket, but that cannot replace a proper backup strategy with the ability to restore to any point in time.
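
The manual workarounds look like this (instance, database, and bucket names are placeholders):

    # Trigger an on-demand full backup
    gcloud sql backups create --instance=my-sqlserver

    # Export a single database as a .bak file to a Cloud Storage bucket
    gcloud sql export bak my-sqlserver gs://my-backup-bucket/mydb.bak \
        --database=mydb

Both are full backups only; there is no transaction log to replay, so anything written after the last backup is lost.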

Moving on to security, starting with encryption. Do we have the option to use Transparent Data Encryption (TDE) there? Not really. But we can use storage-level encryption with Customer-Managed Encryption Keys (CMEK) through the GCP KMS service. It is not exactly the same as TDE, but it might be acceptable as encryption for data at rest.
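
A minimal sketch of the CMEK setup, assuming the key already exists in Cloud KMS (the project, key ring, and key names are placeholders; the key can only be attached when the instance is created, not afterwards):

    # Create the instance with a customer-managed key for disk encryption
    gcloud sql instances create my-sqlserver \
        --database-version=SQLSERVER_2019_STANDARD \
        --region=us-central1 \
        --disk-encryption-key=projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key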

What about Always Encrypted? Not available. What about column encryption? The only way to do it is to encrypt the data in your application, for example with keys from GCP KMS, before writing it to SQL Server.
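
The gcloud equivalent of that application-side flow is below (key ring, key, and file names are placeholders; a real application would call the KMS API from code rather than shelling out):

    # Encrypt a sensitive value with a Cloud KMS key before writing it to a column
    gcloud kms encrypt \
        --location=us-central1 --keyring=my-ring --key=my-key \
        --plaintext-file=ssn.txt --ciphertext-file=ssn.enc

    # Decrypt it after reading the ciphertext back from the table
    gcloud kms decrypt \
        --location=us-central1 --keyring=my-ring --key=my-key \
        --ciphertext-file=ssn.enc --plaintext-file=ssn.txt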

Let’s move on to encryption in transit. There are two ways to connect to your SQL Server instance. The first is a direct connection, where you can set up custom certificates and enforce TLS, but keep in mind that you will need to add all source hosts/networks to the allow list on the instance.
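
A sketch of that direct-connection hardening (the instance name and client network are placeholders; note that --authorized-networks replaces the whole existing list):

    # Enforce TLS and allow-list the client network
    gcloud sql instances patch my-sqlserver \
        --require-ssl \
        --authorized-networks=203.0.113.0/24

    # Issue a client certificate; the private key is written to client-key.pem
    gcloud sql ssl client-certs create my-client client-key.pem \
        --instance=my-sqlserver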

The second way is to use the Cloud SQL Auth Proxy. It is integrated with GCP IAM, adds the source host to the allow list automatically, and encrypts your data in transit. But be aware that it uses TLS 1.2 with 128-bit AES, not the 256-bit AES that may be required in some enterprises. And when you consider the GCP Cloud SQL Auth Proxy, keep in mind that it doesn’t support connection pooling.
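
Running the proxy is a one-liner (project, region, and instance names are placeholders; this is the v1 proxy syntax current in early 2022):

    # Listen on localhost:1433 and tunnel connections to the instance over TLS
    ./cloud_sql_proxy -instances=my-project:us-central1:my-sqlserver=tcp:1433

Your application then connects to 127.0.0.1,1433 as if the database were local, while the proxy handles the TLS session and IAM authorization.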

The Proxy is integrated with GCP IAM, but what about SQL Server itself? You cannot authenticate to the SQL Server instance using GCP IAM. But maybe we can use our on-prem AD for authentication? Not so fast: to do so, you need to deploy GCP Managed Microsoft AD as a service and connect it to your on-prem or other cloud-based AD in read-only mode. Then you will be able to use it for authentication. If you cannot do that, you end up with only the “Built-in” authentication type, where you have to create the Cloud SQL users and deal with passwords and permissions yourself. And all the users created through the GCP Console or API get the same role in SQL Server.
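
A sketch of both options (the domain, IP range, and user names are placeholders; the Managed Microsoft AD domain must exist before you attach it to an instance):

    # Deploy Managed Microsoft AD; a trust to the on-prem forest is configured separately
    gcloud active-directory domains create ad.example.com \
        --region=us-central1 \
        --reserved-ip-range=10.0.0.0/24

    # Create the instance joined to that domain to enable Windows authentication
    gcloud sql instances create my-sqlserver \
        --database-version=SQLSERVER_2019_STANDARD \
        --region=us-central1 \
        --active-directory-domain=ad.example.com

    # The fallback: built-in users, all of which receive the same role
    gcloud sql users create app_user \
        --instance=my-sqlserver \
        --password=CHANGE_ME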

I could list some other things missing from the service, but I am going to stop here, summarize, and try to rate the readiness of Cloud SQL for SQL Server as an enterprise RDBMS service.

  • HA – one out of two scenarios covered (zonal, but not regional). I would say 50% ready.
  • Backup and recovery – 20% ready, or ready for non-critical data only.
  • Encryption at rest – 30% ready with some available workarounds.
  • Encryption in transit – 70% ready considering some limitations.

Google is a great company, and it is possible that the product management for Cloud SQL for SQL Server simply has different priorities based on how they think an enterprise is supposed to work. Maybe they assume that in the case of a lost database somebody should just create a new one and fill it with new data. I don’t know. But I work with enterprises, and I see the requirements listed above as the line between being ready and not ready for real production workloads. I am looking forward to seeing all those issues fixed soon.
