Not long ago, Oracle and Microsoft announced a new level of cooperation in the public cloud: interlinking their clouds and providing the ability to use each cloud where it is best. For example, it allows you to run an application on Azure and use an Oracle database in Oracle OCI. This was possible before in some regions, but it involved multiple steps on both sides and a third-party network provider to interlink Oracle FastConnect and Azure ExpressRoute. Now it can be done using the Azure and Oracle OCI interfaces only. So far the option exists only for the US Washington DC area, where you have the OCI Ashburn and Azure Washington DC regions. I tried it and found it working, but not without some surprises.
I started on the Azure side by creating a VNet and adding two subnets: one normal subnet and one gateway subnet. We need the latter for the Virtual Network Gateway (VNG). Keep in mind that all resources should be created in the East US region.
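If you prefer the command line, here is a rough Azure CLI sketch of the same step. The resource group, names, and address ranges are just my examples, not anything required; only the name "GatewaySubnet" is mandatory:

az group create --name demo-rg --location eastus

# VNet with a regular subnet; the gateway subnet must be named exactly "GatewaySubnet"
az network vnet create --resource-group demo-rg --name demo-vnet \
  --address-prefixes 10.10.0.0/16 \
  --subnet-name default --subnet-prefixes 10.10.40.0/24

az network vnet subnet create --resource-group demo-rg --vnet-name demo-vnet \
  --name GatewaySubnet --address-prefixes 10.10.255.0/27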
The easiest way to create the VNG is to type the resource type name into the search box when you create a resource.
The creation of the gateway is quite straightforward. Don’t forget to use the “ExpressRoute” type and pick the right network with the previously created gateway subnet.
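In Azure CLI terms it would be something like the sketch below, reusing the resource group and VNet from the previous sketch; note that the gateway also needs a public IP resource:

# Hypothetical names; an ExpressRoute gateway requires a public IP and the GatewaySubnet
az network public-ip create --resource-group demo-rg --name demo-vng-ip

az network vnet-gateway create --resource-group demo-rg --name demo-vng \
  --vnet demo-vnet --public-ip-address demo-vng-ip \
  --gateway-type ExpressRoute --sku Standard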
The next step is to add an ExpressRoute circuit to your network. During the creation you need to enter a name for the ExpressRoute circuit, choose “Oracle Cloud FastConnect” as the provider, and set the billing parameters. I chose 50 Mbps with the “Standard” SKU and the “Metered” billing model simply because it was the cheapest. It should be enough for tests, but for a heavy production workload with a lot of traffic another configuration could be preferable.
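The Azure CLI sketch for that circuit could look like this; I am assuming “Washington DC” as the peering location string, so verify it against the values your subscription offers:

az network express-route create --resource-group demo-rg --name demo-er \
  --peering-location "Washington DC" --bandwidth 50 \
  --provider "Oracle Cloud FastConnect" \
  --sku-tier Standard --sku-family MeteredData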
If you look at the ExpressRoute circuit status, you will see that the peering is not set up yet. We need to continue the setup on the Oracle OCI side, and there we will need the Service Key from the ExpressRoute circuit status page.
On the OCI side we have to prepare a VCN with subnets and a Dynamic Routing Gateway (DRG) attached to the network, and it has to be in the Ashburn region.
The DRG has to be added before the Oracle FastConnect.
After creating the gateway, we need to attach it to our network.
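A minimal OCI CLI sketch of those two steps; the OCIDs are placeholders for your own compartment and VCN:

# Placeholders: replace with your compartment and VCN OCIDs
oci network drg create --compartment-id ocid1.compartment.oc1..<unique-id> \
  --display-name demo-drg

oci network drg-attachment create --drg-id ocid1.drg.oc1.iad.<unique-id> \
  --vcn-id ocid1.vcn.oc1.iad.<unique-id> --display-name demo-drg-attachment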
The next step is to create the FastConnect connection using the “Microsoft Azure: ExpressRoute” provider, providing our gateway, the service key we copied from the Azure ExpressRoute circuit, and the network details for the BGP addresses.
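From the command line this should map to creating a private virtual circuit. I set it up through the console, so take this sketch with a grain of salt and double-check the flags with "oci network virtual-circuit create --help"; the OCIDs, service key, and BGP addresses are all placeholders:

# All OCIDs and peering addresses below are placeholders
oci network virtual-circuit create \
  --compartment-id ocid1.compartment.oc1..<unique-id> \
  --type PRIVATE \
  --gateway-id ocid1.drg.oc1.iad.<unique-id> \
  --bandwidth-shape-name "1 Gbps" \
  --provider-service-id ocid1.providerservice.oc1.iad.<unique-id> \
  --provider-service-key-name "<Azure-ExpressRoute-service-key>" \
  --cross-connect-mappings '[{"customerBgpPeeringIp": "10.99.0.1/30", "oracleBgpPeeringIp": "10.99.0.2/30"}]' \
  --display-name demo-fastconnect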
Soon after that the connection will be up, and we will need to configure routing and security rules to access the network. My subnet on Azure had addresses in the 10.10.40.0/24 network, and I added a route for my subnet on Oracle OCI.
On the Azure side we need to add a connection for our ExpressRoute circuit, using our Virtual Network Gateway and the ExpressRoute circuit.
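With the Azure CLI it would be something like this, reusing the names from the earlier sketches:

az network vpn-connection create --resource-group demo-rg --name demo-er-conn \
  --vnet-gateway1 demo-vng --express-route-circuit2 demo-er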
The rest of the steps are related to setting up security rules for the firewalls on the Azure and OCI VCN sides and adjusting firewall rules on the VMs we want to connect to each other. For example, we need to create a route on OCI to direct traffic to the subnet on Azure. On Azure I have the 10.10.40.0/24 subnet, so I need to create a route on OCI for that, as in the sketch below.
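An OCI CLI sketch of that route rule; the route table and DRG OCIDs are placeholders, and keep in mind that --route-rules replaces the whole rule list, not just adds to it:

# Send Azure-bound traffic (10.10.40.0/24) through the DRG; OCIDs are placeholders
oci network route-table update --rt-id ocid1.routetable.oc1.iad.<unique-id> \
  --route-rules '[{"destination": "10.10.40.0/24", "destinationType": "CIDR_BLOCK", "networkEntityId": "ocid1.drg.oc1.iad.<unique-id>"}]'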
I had a Linux VM on the Azure side and another one on OCI. The connection worked pretty well, and throughput was roughly the same in both directions, but I got quite different results for latency.
The ping was roughly the same in both directions, but I noticed that from the Azure side it was a bit less stable.
[opc@gleb-bastion-us ~]$ ping 10.10.40.4
PING 10.10.40.4 (10.10.40.4) 56(84) bytes of data.
64 bytes from 10.10.40.4: icmp_seq=1 ttl=62 time=2.01 ms
64 bytes from 10.10.40.4: icmp_seq=2 ttl=62 time=2.09 ms

[otochkin@us-east-lin-01 ~]$ ping 10.0.1.2
PING 10.0.1.2 (10.0.1.2) 56(84) bytes of data.
64 bytes from 10.0.1.2: icmp_seq=1 ttl=61 time=2.66 ms
64 bytes from 10.0.1.2: icmp_seq=2 ttl=61 time=1.90 ms
64 bytes from 10.0.1.2: icmp_seq=3 ttl=61 time=2.44 ms

A copy of a random file was quite fast too, even though I noticed some slowdown during the copy from OCI to Azure.

[opc@gleb-bastion-us ~]$ scp new_random_file.out otochkin@10.10.40.4:~/
Enter passphrase for key '/home/opc/.ssh/id_rsa':
new_random_file.out                          100% 1000MB  20.6MB/s   00:48
[opc@gleb-bastion-us ~]$ scp new_random_file.out otochkin@10.10.40.4:~/
Enter passphrase for key '/home/opc/.ssh/id_rsa':
new_random_file.out                          100% 1000MB  21.3MB/s   00:46
[opc@gleb-bastion-us ~]$ scp new_random_file.out otochkin@10.10.40.4:~/
Enter passphrase for key '/home/opc/.ssh/id_rsa':
new_random_file.out                          100% 1000MB  12.2MB/s   01:21
[opc@gleb-bastion-us ~]$

I found the slowness was caused by I/O performance on the Azure instance: the VM was not big enough to supply sufficient disk I/O. So it was not the network that was the bottleneck there.
I did some tests using iperf3 and the Oracle network performance tool oratcptest (Measuring Network Capacity using oratcptest (Doc ID 2064368.1)). First, I found a significant difference in latency.
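For reference, iperf3 runs as the usual server/client pair; something like this, where 10.0.1.2 is the VM on the OCI side:

# On the receiving VM (here the OCI side)
iperf3 -s

# On the sending VM (here the Azure side), 10-second test with 2-second reports
iperf3 -c 10.0.1.2 -t 10 -i 2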
From OCI to Azure the latency was about 19 ms:
[opc@gleb-bastion-us ~]$ java -jar oratcptest.jar 10.10.40.4 -port=5555 -duration=10s -interval=2s
[Requesting a test]
        Message payload        = 1 Mbyte
        Payload content type   = RANDOM
        Delay between messages = NO
        Number of connections  = 1
        Socket send buffer     = (system default)
        Transport mode         = SYNC
        Disk write             = NO
        Statistics interval    = 2 seconds
        Test duration          = 10 seconds
        Test frequency         = NO
        Network Timeout        = NO
        (1 Mbyte = 1024x1024 bytes)

(20:49:52) The server is ready.
                    Throughput             Latency
(20:49:54)     51.633 Mbytes/s          19.368 ms
(20:49:56)     53.480 Mbytes/s          18.699 ms
(20:49:58)     52.959 Mbytes/s          18.883 ms
(20:50:00)     53.115 Mbytes/s          18.827 ms
(20:50:02)     53.355 Mbytes/s          18.743 ms
(20:50:02) Test finished.
           Socket send buffer = 813312 bytes
              Avg. throughput = 52.883 Mbytes/s
                 Avg. latency = 18.910 ms
[opc@gleb-bastion-us ~]$

But in the other direction, connecting from Azure back to a server on OCI, the latency was about 85 ms. It was 4.5 times slower.
[azureopc@us-east-lin-01 ~]$ java -jar oratcptest.jar 10.0.1.2 -port=5555 -duration=10s -interval=2s
[Requesting a test]
        Message payload        = 1 Mbyte
        Payload content type   = RANDOM
        Delay between messages = NO
        Number of connections  = 1
        Socket send buffer     = (system default)
        Transport mode         = SYNC
        Disk write             = NO
        Statistics interval    = 2 seconds
        Test duration          = 10 seconds
        Test frequency         = NO
        Network Timeout        = NO
        (1 Mbyte = 1024x1024 bytes)

(21:33:59) The server is ready.
                    Throughput             Latency
(21:34:01)     11.847 Mbytes/s          84.411 ms
(21:34:03)     11.736 Mbytes/s          85.206 ms
(21:34:05)     11.690 Mbytes/s          85.546 ms
(21:34:07)     11.699 Mbytes/s          85.477 ms
(21:34:09)     11.737 Mbytes/s          85.205 ms
(21:34:09) Test finished.
           Socket send buffer = 881920 bytes
              Avg. throughput = 11.732 Mbytes/s
                 Avg. latency = 85.238 ms
[azureopc@us-east-lin-01 ~]$

After digging in and troubleshooting, I found the problem was in my instance on the Azure side. It was too small and too slow to run the test at proper speed. After increasing the Azure instance size, it showed roughly the same latency as the other direction.
[azureopc@us-east-lin-01 ~]$ java -jar oratcptest.jar 10.0.1.2 -port=5555 -duration=10s -interval=2s
[Requesting a test]
        Message payload        = 1 Mbyte
        Payload content type   = RANDOM
        Delay between messages = NO
        Number of connections  = 1
        Socket send buffer     = (system default)
        Transport mode         = SYNC
        Disk write             = NO
        Statistics interval    = 2 seconds
        Test duration          = 10 seconds
        Test frequency         = NO
        Network Timeout        = NO
        (1 Mbyte = 1024x1024 bytes)

(16:44:11) The server is ready.
                    Throughput             Latency
(16:44:13)     59.449 Mbytes/s          16.821 ms
(16:44:15)     59.127 Mbytes/s          16.913 ms
(16:44:17)     59.367 Mbytes/s          16.845 ms
(16:44:19)     59.015 Mbytes/s          16.945 ms
(16:44:21)     59.136 Mbytes/s          16.910 ms
(16:44:21) Test finished.
           Socket send buffer = 965120 bytes
              Avg. throughput = 59.199 Mbytes/s
                 Avg. latency = 16.892 ms
[azureopc@us-east-lin-01 ~]$

As a short summary, I can say that the process of setting up the hybrid cloud was easy and transparent enough, it showed good performance, and I think it has great potential. I hope the option will become available in other regions too, and I am looking forward to having it in the Toronto region.