The cloud and how it might help in difficult times.

The recent changes caused by the virus and the economic meltdown have affected almost everybody in the world. We are all going through a difficult period of our history, and while many companies are struggling to survive, others thrive and boost production. In such a volatile environment, it becomes more and more important to be able to adapt the IT environment to immediate business needs quickly.

We work with many different customers, helping them adjust and evolve with the changing business and IT landscape. This touches many aspects of IT, such as software and hardware support, logistics, availability of staff, and restrictions imposed by government authorities.

In one case, we had to solve a logistics puzzle to replace some faulty parts of critical infrastructure when the vendor didn't have a physical presence in the location. In normal times, an engineer would fly over to replace the part, spend a night in a hotel, and fly back. Sounds easy, right? Now, when most flights are canceled, all the hotels in the vicinity are closed, and borders have backlogs of people trying to cross, it's not easy anymore. We were able to work it out, but it was difficult and took much more time than expected. All that time, the environment was running on redundant parts, and it could have been a disaster if any of those parts had failed.

In another case, a business's total workload went from 80% of IT capacity to almost zero. At the same time, the company had to keep the infrastructure up and pay for electricity, cooling, data center rent, and license support. All that money could be saved if the company could temporarily reduce the number of licenses and computers to match real business needs.

And we all know about companies expanding and growing due to growing demand. For example, usage of Zoom ballooned overnight, reaching more than 200 million daily users. Other companies providing delivery and remote services also experienced significant growth and unexpected load on their IT infrastructure. Some of them were unable to take it well, and their online services crashed.

I think this is the time when cloud-based solutions show what can be done and how the cloud can help businesses be more flexible and agile and keep up with demand. Let me list some benefits of putting your critical IT in a public cloud.

First, you don't need to jump through hoops to fix your IT infrastructure when all supply chains and regular logistics are broken. Make it somebody else's problem, not yours.

You can quickly scale down your environment, reducing infrastructure subscription costs and licensing costs; the latter is probably even more important for some types of licensed software. That way, your already struggling business will not need to keep a massive infrastructure afloat that nobody is using. Airlines and the recreation industry are good examples.

At the same time, if you have designed and built your environment in and for the cloud, you can scale up and scale out to support growing demand on your business. Think of the well-known retail chains for hardware and home supplies.

And speaking of timing, the moment when your infrastructure is idle and you can afford prolonged maintenance can be the best one for changes. For others, it is the best opportunity to rethink the business model and orient it more toward remote delivery and online retail.

I've listed here only a few reasons why the cloud is better adapted to changes in the business. But the cloud also provides other benefits and opportunities, such as improving your analytics and applying modern machine learning techniques, adding even more value to your data.

It is true that some cloud providers experienced serious capacity problems during the first days of increased demand, but so far they have been able to overcome and fix most of the issues for their enterprise customers.

As a final word, I would like to say that we are ready to help you get through this difficult time and to share our experience and knowledge. The world is changing, and we are changing with it.

Desktop in the cloud? Easy.

This is a difficult time for everyone, even if you are used to working most of the time from home, an airport, a cafe, or any other place. The problem is not only how well you manage your time but sometimes network reliability and throughput. When so many people work from home and so many kids are trying to watch streaming services at the same time, your home network might be under severe pressure. In such a case, a remotely hosted desktop product could be the solution.

I tried a couple of such services from two major cloud providers: AWS and Azure.

Amazon offers the "Amazon WorkSpaces" product, which provides a fully managed virtual desktop. It is easy and straightforward to set up.

If you choose "Quick Setup," it will do everything for you and provide a brand-new virtual desktop in 10-20 minutes.

All you need to do is pick a shape and package and provide a username and email.

In about 10 minutes, you will get an email with an activation link and a registration code. Then you can use the AWS WorkSpaces client to connect to your machine. By default, the workspace runs in "AutoStop" mode, switching to an inactive state after a defined period. The default is one hour, but you can configure it.

You also have the choice to keep it running all the time for a fixed monthly fee, which can be cheaper if you plan to use the workspace constantly. You can find the different pricing options on the AWS website.
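
If you prefer the command line, the same settings can also be changed with the AWS CLI. A minimal sketch (the workspace ID below is a placeholder):

# Assumption: ws-0123456789ab is a placeholder workspace ID; use your own.
# Extend the AutoStop timeout to two hours (the value must be a multiple of 60):
aws workspaces modify-workspace-properties --workspace-id ws-0123456789ab \
  --workspace-properties RunningMode=AUTO_STOP,RunningModeAutoStopTimeoutInMinutes=120
# Or switch to the always-on, fixed monthly fee model:
aws workspaces modify-workspace-properties --workspace-id ws-0123456789ab \
  --workspace-properties RunningMode=ALWAYS_ON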

Microsoft Azure offers its own product, and it is different in many ways. The product is called Windows Virtual Desktop (WVD), and it provides Windows-based VMs where you can assign one or several VMs to a pool and provide access to multiple users. The solution is more enterprise-like and offers several attractive options for medium and large businesses.

At the same time, it is not so easy to deploy for a single user. You need to add an admin account to your Active Directory, set the proper permissions for the WVD application and client, create a tenant, and then set up a pool of VMs.

Please note that the metadata for the pool is stored in the US, even for VMs located in Canada.

Eventually, after going through all the steps, you will be able to configure a remote desktop solution for your company. But in my opinion, if you want a quick and easy solution, it is easier to fire up a VM with Windows or Linux and go through a couple of steps to set it up. You will be charged only for the time it is up. It is maybe not the most elegant solution, but it is simple and cost-effective. If you want to use Google or Oracle cloud, you might decide to start a VM there and use it as your temporary or permanent working machine. You can also prepare a Terraform or other deployment manager configuration for a desktop with predefined characteristics.
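
As a minimal sketch of that "plain VM as a desktop" approach with the Azure CLI (the resource group, names, size, and password here are just example values):

# Assumption: all names below are placeholders for illustration.
az group create --name desktop-rg --location canadacentral
az vm create --resource-group desktop-rg --name my-desktop \
  --image Win2019Datacenter --size Standard_B2ms \
  --admin-username deskadmin --admin-password '<strong-password>'
# Deallocate when done for the day, so you stop paying for compute:
az vm deallocate --resource-group desktop-rg --name my-desktop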

What is the benefit of having your desktop in the cloud?

The first is probably network speed and reliability. If you start an operation from a VM or virtual desktop in the cloud, it will keep running even if your WiFi has given up and your connection has dropped. You reconnect and continue your work.

The second is that you can pause your work, disconnect, recharge your battery, go to lunch, take a walk, and return to your tasks without fear that everything is lost.

Personally, I like the AWS solution more because it is a fully managed service where you don’t need to worry about security, patching, shutting down, or any other management tasks.

Of course, it costs some money, but if you are diligent enough to keep it up only when you need it, the cost can be bearable. We are talking about roughly $10-$30 per month, depending on the VM shape and usage. Just think how much money you've saved by not buying your $3 morning latte five times per week. And don't forget you are in the cloud, so you can schedule it to run only during business hours.
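
For the plain-VM variant, such a schedule can be a sketch as simple as two cron entries on any small controller host (using the placeholder names from the example above):

# Start the desktop VM on weekday mornings and deallocate it in the evening.
0 8 * * 1-5 az vm start --resource-group desktop-rg --name my-desktop
0 18 * * 1-5 az vm deallocate --resource-group desktop-rg --name my-desktop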

Is Oracle Cloud only for Oracle?

Several days ago, while discussing public cloud solutions and the competition between different providers, someone mentioned that Oracle Cloud is just for Oracle products, whereas AWS and Azure are more vendor-agnostic. I was a bit surprised by that statement, but it appeared that several other people shared the same view. I decided to write this blog to show what options Oracle Cloud Infrastructure (OCI) has for different workloads.

Let's start with the VM types and flavors. By default, you are offered Oracle Linux, but if you push the "Change Image Source" button, you are going to see several other options for the platform, including Oracle Linux, Ubuntu, CentOS, and various Windows Server versions.

Those are the primary platform images for VMs, but in addition, you have Oracle-built images with different sets of software included, both free and under Bring Your Own License (BYOL) policies.
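
The same list can be pulled with the OCI CLI; a quick sketch (the compartment OCID is a placeholder):

# Assumption: the compartment OCID is a placeholder; use your own.
oci compute image list --compartment-id ocid1.compartment.oc1..example \
  --operating-system "Canonical Ubuntu" --all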

Then we have the partners' images on the next tab, where we can find some other Linux distributions, like SUSE, and prebuilt software images, like Jenkins from Bitnami.

If you need to build your own image with custom software and settings, you can create a custom image from your Linux or Windows-based VM, which may be an on-premises or cloud image. In my opinion, this should cover most of the requirements for the necessary VM infrastructure services. I am not discussing other aspects like network and storage here, since they do not differ much in functionality from what other cloud vendors present.
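
Capturing such a custom image from a running VM is a single call; a sketch with placeholder OCIDs:

# Assumption: both OCIDs are placeholders for your compartment and source instance.
oci compute image create --compartment-id ocid1.compartment.oc1..example \
  --instance-id ocid1.instance.oc1.iad.example --display-name my-custom-image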

If you haven't found what you need, or you want some certified software deployment, you can go to the Oracle Cloud Marketplace and choose from multiple available packages from both Oracle and non-Oracle vendors.

There are several filters for publisher, category, type, and price, but no free-text search field. I hope a search option will be added.

All the images in the Marketplace are certified by Oracle and prepared for deployment using Oracle Resource Manager (RM). RM itself uses HashiCorp Terraform scripts behind the scenes. Terraform is one of the most popular deployment tools in the community, and, in my opinion, it is better than a proprietary solution: you can adopt a unified approach for a multi-cloud environment without locking yourself into a single vendor's platform.
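
A Resource Manager stack can also be driven from the CLI: you upload a zip with your Terraform configuration and then run plan and apply jobs against it. A rough sketch (the zip name and OCIDs are placeholders):

# Assumption: my-terraform.zip contains the Terraform configuration; OCIDs are placeholders.
oci resource-manager stack create --compartment-id ocid1.compartment.oc1..example \
  --config-source my-terraform.zip --display-name demo-stack
oci resource-manager job create-plan-job --stack-id ocid1.ormstack.oc1..example
oci resource-manager job create-apply-job --stack-id ocid1.ormstack.oc1..example \
  --execution-plan-strategy AUTO_APPROVED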

If you work with Docker and Kubernetes and want to build and deploy your own microservices architecture, the Oracle Cloud developer services are here to help: the Oracle registry holds your Docker images, and Oracle Kubernetes Engine (OKE) provides the Kubernetes cluster to deploy your applications.
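
Pushing an image to the registry follows the standard Docker flow; a sketch (the region key, tenancy namespace, and image name are placeholders):

# Assumption: iad.ocir.io, mytenancy, and myapp are placeholders; the login password is an OCI auth token.
docker login iad.ocir.io -u 'mytenancy/myuser'
docker tag myapp:latest iad.ocir.io/mytenancy/myapp:latest
docker push iad.ocir.io/mytenancy/myapp:latest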

So far, we have been talking about native OCI tools and resources, but it doesn't end there. With the Oracle and Microsoft partnership in the cloud, we can expand our footprint and combine both clouds. I tested it in July 2019 and wrote a blog about it. It was quite easy to set up, and it showed acceptable performance. At that time, it was available only in the US Virginia region, but now it is available in Canada and the UK, and hopefully other areas soon. It opens new possibilities for your company strategy, letting you place products in the most suitable cloud environment. For example, if you want to build an MS SQL database solution, you have two choices: use Azure with the interconnect link to OCI, or deploy a Windows server in OCI and put your database there.

So, is Oracle Cloud only for Oracle products? Of course not. Oracle's public cloud infrastructure offerings are pretty much comparable to those of any other public cloud provider and offer a flexible environment to deploy your applications and, if you want, your preferred database solution as well.

Oracle OCI and Azure inter-cloud link: a good option for a hybrid cloud.

Not long ago, Oracle and Microsoft announced a new level of cooperation in the public cloud, interlinking their clouds and providing the ability to use each cloud where it is best. For example, it allows you to run an application on Azure and use an Oracle database in Oracle OCI. This was possible before in some regions, but it involved multiple steps on both sides and a third-party network provider to interlink Oracle FastConnect and Azure ExpressRoute. Now it can be done using the Azure and Oracle OCI interfaces only. So far, the option exists only for the US Washington DC area, where you have the OCI Ashburn and Azure Washington DC regions. I tried it and found it working, but not without some surprises.

I started on the Azure side by creating a VNet and adding two subnets, one normal subnet and one gateway subnet. We need the latter for the Virtual Network Gateway (VNG). Keep in mind that all resources should be created in the East US region.
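
For reference, the equivalent Azure CLI calls could look like this (the resource group, names, and address ranges are placeholders matching my test setup):

# Assumption: resource group and names are placeholders.
az network vnet create --resource-group oci-link-rg --name oci-vnet \
  --location eastus --address-prefixes 10.10.0.0/16
az network vnet subnet create --resource-group oci-link-rg --vnet-name oci-vnet \
  --name default --address-prefixes 10.10.40.0/24
# The gateway subnet must be named exactly "GatewaySubnet":
az network vnet subnet create --resource-group oci-link-rg --vnet-name oci-vnet \
  --name GatewaySubnet --address-prefixes 10.10.0.0/24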

The easiest way to create the VNG is to type the resource type name into the search box when you create a resource.

The creation of the gateway is quite straightforward. Don't forget to use the "ExpressRoute" gateway type and pick the right network with the previously created gateway subnet.
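
A CLI sketch of the same step (the gateway needs a public IP; all names are placeholders):

# Assumption: names are placeholders; note the ExpressRoute gateway type.
az network public-ip create --resource-group oci-link-rg --name vng-pip
az network vnet-gateway create --resource-group oci-link-rg --name oci-vng \
  --vnet oci-vnet --public-ip-address vng-pip \
  --gateway-type ExpressRoute --sku Standard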

The next step is to add an ExpressRoute circuit to your network. During the creation, you need to give the ExpressRoute a name, choose "Oracle Cloud FastConnect" as the provider, and set the billing parameters. I chose 50 Mbps with the "Standard" SKU and the "Metered" billing model, simply because it was the cheapest. It should be enough for tests, but for a heavy production workload with a lot of traffic, another configuration could be preferable.
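
The same circuit can be created from the CLI; a sketch with my test parameters (the names are placeholders, and the peering location must match what the portal offers for the provider):

# Assumption: names are placeholders; bandwidth is in Mbps.
az network express-route create --resource-group oci-link-rg --name oci-circuit \
  --provider "Oracle Cloud FastConnect" --peering-location "Washington DC" \
  --bandwidth 50 --sku-tier Standard --sku-family MeteredData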

If you look at the ExpressRoute circuit status, you will see that the peering is not set up yet. We need to continue the setup on the Oracle OCI side, and for that we will need the Service Key from the ExpressRoute circuit status page.
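
The service key can also be read with the CLI:

az network express-route show --resource-group oci-link-rg --name oci-circuit \
  --query serviceKey --output tsv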

On the OCI side, we have to prepare a VCN with subnets and a Dynamic Routing Gateway (DRG) attached to the network, and it has to be in the Ashburn region.

The DRG has to be added before the Oracle FastConnect.

After creating the gateway, we need to attach it to our network.
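
Both steps are one-liners in the OCI CLI; a sketch with placeholder OCIDs:

# Assumption: OCIDs are placeholders; everything lives in the Ashburn region.
oci network drg create --compartment-id ocid1.compartment.oc1..example \
  --display-name azure-drg
oci network drg-attachment create --drg-id ocid1.drg.oc1.iad.example \
  --vcn-id ocid1.vcn.oc1.iad.example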

The next step is to create the FastConnect connection using the “Microsoft Azure: ExpressRoute” provider.

Provide your gateway, the service key copied from the Azure ExpressRoute, and the network details for the BGP addresses.
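
In my test I used the console form, but for completeness a rough CLI equivalent might look like this (all OCIDs, the bandwidth shape, and the BGP /30 addresses below are placeholders):

# Assumption: every value below is a placeholder; the service key comes from Azure.
oci network virtual-circuit create --compartment-id ocid1.compartment.oc1..example \
  --type PRIVATE --gateway-id ocid1.drg.oc1.iad.example \
  --provider-service-id ocid1.providerservice.oc1.iad.example \
  --provider-service-key-name '<azure-service-key>' \
  --bandwidth-shape-name '<shape>' \
  --cross-connect-mappings '[{"customerBgpPeeringIp": "10.99.0.1/30", "oracleBgpPeeringIp": "10.99.0.2/30"}]'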

Soon after that, the connection will be up, and we will need to configure routing and security rules to access the network. My subnet on Azure had addresses in the 10.10.40.0/24 network, and I added a route for my subnet on Oracle OCI.

On the Azure side, we need to add a connection for our ExpressRoute circuit, using our Virtual Network Gateway and the ExpressRoute circuit.
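
The CLI version of this step is a single command; a sketch using the placeholder names from above:

# Assumption: gateway and circuit names are the placeholders used earlier.
az network vpn-connection create --resource-group oci-link-rg --name oci-conn \
  --vnet-gateway1 oci-vng --express-route-circuit2 oci-circuit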

The rest of the steps are related to setting up security rules for the firewalls on the Azure and OCI VCN sides and adjusting firewall rules on the VMs we want to connect to each other. For example, we need to create a route on OCI to direct traffic to the subnet on Azure. On Azure, I have the 10.10.40.0/24 subnet, so I need to create a route on OCI for that.
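
A sketch of that route rule via the OCI CLI (the route table and DRG OCIDs are placeholders):

# Assumption: OCIDs are placeholders. Caution: --route-rules replaces the whole existing rule list.
oci network route-table update --rt-id ocid1.routetable.oc1.iad.example \
  --route-rules '[{"destination": "10.10.40.0/24", "destinationType": "CIDR_BLOCK", "networkEntityId": "ocid1.drg.oc1.iad.example"}]'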

I had a Linux VM on the Azure side and another one on OCI. The connection worked pretty well, and throughput was roughly the same in both directions, but I got quite different results for latency.

The ping times were roughly the same in both directions, but I noticed that from the Azure side the numbers were a bit less stable.

[opc@gleb-bastion-us ~]$ ping 10.10.40.4
PING 10.10.40.4 (10.10.40.4) 56(84) bytes of data.
64 bytes from 10.10.40.4: icmp_seq=1 ttl=62 time=2.01 ms
64 bytes from 10.10.40.4: icmp_seq=2 ttl=62 time=2.09 ms

[otochkin@us-east-lin-01 ~]$ ping 10.0.1.2
PING 10.0.1.2 (10.0.1.2) 56(84) bytes of data.
64 bytes from 10.0.1.2: icmp_seq=1 ttl=61 time=2.66 ms
64 bytes from 10.0.1.2: icmp_seq=2 ttl=61 time=1.90 ms
64 bytes from 10.0.1.2: icmp_seq=3 ttl=61 time=2.44 ms

Copying a random file was quite fast too, even though I noticed some slowdown during the copy from OCI to Azure.
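
The 1000 MB test file was just random data; it can be generated with something like:

# Create a 1000 MB file of random bytes for the transfer test.
dd if=/dev/urandom of=new_random_file.out bs=1M count=1000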

[opc@gleb-bastion-us ~]$ scp new_random_file.out otochkin@10.10.40.4:~/
Enter passphrase for key '/home/opc/.ssh/id_rsa':
new_random_file.out 100% 1000MB 20.6MB/s 00:48
[opc@gleb-bastion-us ~]$ scp new_random_file.out otochkin@10.10.40.4:~/
Enter passphrase for key '/home/opc/.ssh/id_rsa':
new_random_file.out 100% 1000MB 21.3MB/s 00:46
[opc@gleb-bastion-us ~]$ scp new_random_file.out otochkin@10.10.40.4:~/
Enter passphrase for key '/home/opc/.ssh/id_rsa':
new_random_file.out 100% 1000MB 12.2MB/s 01:21
[opc@gleb-bastion-us ~]$

I found that the slowness was caused by the IO performance of the Azure instance: it was not big enough to supply sufficient IO to the disk. So it was not the network that was the bottleneck.
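
A quick way to confirm a disk bottleneck like this is a direct-IO write test on the receiving VM, bypassing the page cache:

# Raw write-speed check on the receiving side.
dd if=/dev/zero of=io_test.out bs=1M count=1024 oflag=direct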

I did some tests using iperf3 and the Oracle network performance tool oratcptest (Measuring Network Capacity using oratcptest, Doc ID 2064368.1). First, I found a significant difference in latency.
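
Both tools need a listener started on the receiving side first; a minimal sketch (port 5555 matches the client commands below):

# On the receiving VM: oratcptest in server mode...
java -jar oratcptest.jar -server -port=5555
# ...or iperf3 in server mode:
iperf3 -s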

From OCI to Azure, the latency was about 19 ms:

[opc@gleb-bastion-us ~]$ java -jar oratcptest.jar 10.10.40.4 -port=5555 -duration=10s -interval=2s
[Requesting a test]
Message payload = 1 Mbyte
Payload content type = RANDOM
Delay between messages = NO
Number of connections = 1
Socket send buffer = (system default)
Transport mode = SYNC
Disk write = NO
Statistics interval = 2 seconds
Test duration = 10 seconds
Test frequency = NO
Network Timeout = NO
(1 Mbyte = 1024x1024 bytes)

(20:49:52) The server is ready.
Throughput Latency
(20:49:54) 51.633 Mbytes/s 19.368 ms
(20:49:56) 53.480 Mbytes/s 18.699 ms
(20:49:58) 52.959 Mbytes/s 18.883 ms
(20:50:00) 53.115 Mbytes/s 18.827 ms
(20:50:02) 53.355 Mbytes/s 18.743 ms
(20:50:02) Test finished.
Socket send buffer = 813312 bytes
Avg. throughput = 52.883 Mbytes/s
Avg. latency = 18.910 ms

[opc@gleb-bastion-us ~]$

But in the other direction, connecting from Azure back to a server on OCI, the latency was about 85 ms, roughly 4.5 times higher.

[azureopc@us-east-lin-01 ~]$ java -jar oratcptest.jar 10.0.1.2 -port=5555 -duration=10s -interval=2s
[Requesting a test]
Message payload = 1 Mbyte
Payload content type = RANDOM
Delay between messages = NO
Number of connections = 1
Socket send buffer = (system default)
Transport mode = SYNC
Disk write = NO
Statistics interval = 2 seconds
Test duration = 10 seconds
Test frequency = NO
Network Timeout = NO
(1 Mbyte = 1024x1024 bytes)

(21:33:59) The server is ready.
Throughput Latency
(21:34:01) 11.847 Mbytes/s 84.411 ms
(21:34:03) 11.736 Mbytes/s 85.206 ms
(21:34:05) 11.690 Mbytes/s 85.546 ms
(21:34:07) 11.699 Mbytes/s 85.477 ms
(21:34:09) 11.737 Mbytes/s 85.205 ms
(21:34:09) Test finished.
Socket send buffer = 881920 bytes
Avg. throughput = 11.732 Mbytes/s
Avg. latency = 85.238 ms

[azureopc@us-east-lin-01 ~]$

After digging in and troubleshooting, I found that the problem was my instance on the Azure side. It was too small and too slow to run the test at proper speed. After increasing the Azure instance size, it showed about the same latency in both directions.

[azureopc@us-east-lin-01 ~]$ java -jar oratcptest.jar 10.0.1.2 -port=5555 -duration=10s -interval=2s
[Requesting a test]
Message payload = 1 Mbyte
Payload content type = RANDOM
Delay between messages = NO
Number of connections = 1
Socket send buffer = (system default)
Transport mode = SYNC
Disk write = NO
Statistics interval = 2 seconds
Test duration = 10 seconds
Test frequency = NO
Network Timeout = NO
(1 Mbyte = 1024x1024 bytes)

(16:44:11) The server is ready.
Throughput Latency
(16:44:13) 59.449 Mbytes/s 16.821 ms
(16:44:15) 59.127 Mbytes/s 16.913 ms
(16:44:17) 59.367 Mbytes/s 16.845 ms
(16:44:19) 59.015 Mbytes/s 16.945 ms
(16:44:21) 59.136 Mbytes/s 16.910 ms
(16:44:21) Test finished.
Socket send buffer = 965120 bytes
Avg. throughput = 59.199 Mbytes/s
Avg. latency = 16.892 ms

[azureopc@us-east-lin-01 ~]$

As a short summary, I can say that the process of setting up the hybrid cloud was easy and transparent enough, and it showed good performance; I think it has great potential. I hope the option will become available in other regions too, and I am looking forward to having it in the Toronto region.