Oracle OCI Database service storage allocation.

Today I would like to discuss block storage allocation in a VM-based Oracle DBCS system. Several times in different conversations I have heard that the block storage is allocated with triple redundancy at the ASM level. Let’s check it out.
If we allocate the minimum-size 256GB volume for a VM-based Oracle DBCS, the console shows the total storage as 712GB.
Screen Shot 2020-01-06 at 10.08.39 AM.png
But why does it show 712GB?
Screen Shot 2020-01-06 at 10.11.22 AM.png
And if we increase the initial storage allocation to 1024GB, the total allocation grows to 1480GB.
Screen Shot 2020-01-06 at 10.13.28 AM.png
That is clearly not a triple allocation of storage. But where do the additional 456GB come from?
Let’s have a look at the actual allocation in ASM on one of the DBCS VMs.
I’ve created a DBCS VM with 256GB of ASM-based storage for data, and here are the block storage volumes presented to the system:

[grid@gleborcl ~]$ lsblk
NAME                        MAJ:MIN    RM  SIZE RO TYPE MOUNTPOINT
sda                           8:0       0   58G  0 disk
|-sda1                        8:1       0  486M  0 part /boot/efi
|-sda2                        8:2       0  1.4G  0 part /boot
`-sda3                        8:3       0 52.2G  0 part
  |-VolGroupSys4-LogVolRoot 249:0       0   35G  0 lvm  /
  `-VolGroupSys4-LogVolSwap 249:1       0   16G  0 lvm  [SWAP]
sdb                           8:16      0   64G  0 disk
sdc                           8:32      0   64G  0 disk
sdd                           8:48      0   64G  0 disk
sde                           8:64      0   64G  0 disk
sdf                           8:80      0   64G  0 disk
sdg                           8:96      0   64G  0 disk
sdh                           8:112     0   64G  0 disk
sdi                           8:128     0   64G  0 disk
sdj                           8:144     0  200G  0 disk /u01
asm!commonstore-330         248:168961  0    5G  0 disk /opt/oracle/dcs/commonstore

We have eight 64GB disks attached to the system as volumes and a 200GB volume for Oracle binaries, which gives us exactly 712GB in total. And here we can see how the eight volumes are used.
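The 712GB console figure is simply the sum of these block volumes. A quick sanity check with shell arithmetic, using the numbers from the lsblk listing above:

```shell
# Eight 64GB disks (data + reco) plus the 200GB /u01 volume for binaries.
echo "$(( 8 * 64 + 200 ))GB"
```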

SQL> SELECT name,path,total_mb FROM v$asm_disk ORDER BY 1;
 
NAME			       PATH				TOTAL_MB
------------------------------ ------------------------------ ----------
DATA_0000		       /dev/DATADISK3			   65536
DATA_0001		       /dev/DATADISK2			   65536
DATA_0002		       /dev/DATADISK1			   65536
DATA_0003		       /dev/DATADISK4			   65536
RECODISK1		       /dev/RECODISK1			   65536
RECODISK2		       /dev/RECODISK2			   65536
RECODISK3		       /dev/RECODISK3			   65536
RECODISK4		       /dev/RECODISK4			   65536
 
8 rows selected.
 
SQL> SELECT name,TYPE,total_mb FROM v$asm_diskgroup;
 
NAME			       TYPE	TOTAL_MB
------------------------------ ------ ----------
DATA			       EXTERN	  262144
RECO			       EXTERN	  262144
 
SQL>

The data and reco disk groups are created with external redundancy, and each has four 64GB disks, giving us 256GB of usable space per group. And, by the way, if you are wondering whether Oracle uses ASMLib or AFD, here is how Oracle provides disk names and permissions.
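Because the disk groups use external redundancy, usable space equals raw space. For contrast, here is a rough sketch of what the same four 64GB disks per group would yield under each ASM redundancy level (ignoring the free space ASM reserves for rebalancing):

```shell
raw=$(( 4 * 64 ))                   # four 64GB disks per disk group
echo "external: ${raw}GB"           # no ASM mirroring, storage layer protects
echo "normal:   $(( raw / 2 ))GB"   # two-way mirroring
echo "high:     $(( raw / 3 ))GB"   # three-way mirroring
```

So the "triple redundancy" rumor would have left only about 85GB usable per group, which is clearly not what we see here.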

[grid@gleborcl ~]$ cat /etc/udev/rules.d/70-names.rules
KERNEL=="sd*", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="360e2e6804e814b04bf647bbd60c92978", SYMLINK+="DATADISK1",  OWNER="grid",  GROUP="asmadmin",  MODE="0660"
KERNEL=="sd*", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="3600984cc51b945ae9142bb8a6890c444", SYMLINK+="DATADISK2",  OWNER="grid",  GROUP="asmadmin",  MODE="0660"
KERNEL=="sd*", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="360e7ee9971f5452caaff44c6f0b0ea2f", SYMLINK+="DATADISK3",  OWNER="grid",  GROUP="asmadmin",  MODE="0660"
KERNEL=="sd*", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="36007a37863594be48f687a92b73b6ba8", SYMLINK+="DATADISK4",  OWNER="grid",  GROUP="asmadmin",  MODE="0660"
KERNEL=="sd*", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="360446c5789094176991e3773b7503877", SYMLINK+="RECODISK1",  OWNER="grid",  GROUP="asmadmin",  MODE="0660"
KERNEL=="sd*", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="360a67d4641874d9ca8a2a22f8719ca56", SYMLINK+="RECODISK2",  OWNER="grid",  GROUP="asmadmin",  MODE="0660"
KERNEL=="sd*", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="360a4a4df343041e3b45b88ebe39d67cd", SYMLINK+="RECODISK3",  OWNER="grid",  GROUP="asmadmin",  MODE="0660"
KERNEL=="sd*", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="36019ec16293445d19fe8a3734792ec19", SYMLINK+="RECODISK4",  OWNER="grid",  GROUP="asmadmin",  MODE="0660"
KERNEL=="sd*", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="36093853c644b4e619a2f7ead2b8f38ee", SYMLINK+="localdisk",  OWNER="grid",  GROUP="asmadmin",  MODE="0660"
[grid@gleborcl ~]$

The disk names and permissions are provided by udev rules; neither ASMLib nor AFD is used in the Oracle cloud.
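As a quick illustration, the serial-to-name mapping can be pulled out of such a rules file with a one-liner. The snippet below inlines a single sample rule taken from the listing above; on a real DBCS VM you would point it at /etc/udev/rules.d/70-names.rules instead:

```shell
# Write one sample rule to a temp file (copied from the listing above).
cat > /tmp/70-names.rules <<'EOF'
KERNEL=="sd*", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="360e2e6804e814b04bf647bbd60c92978", SYMLINK+="DATADISK1", OWNER="grid", GROUP="asmadmin", MODE="0660"
EOF

# Print each disk serial and the symlink udev will create for it.
sed -n 's/.*ID_SERIAL}=="\([^"]*\)".*SYMLINK+="\([^"]*\)".*/\1 -> \2/p' /tmp/70-names.rules
```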
What if we scale storage up to 1024GB?
Screen Shot 2020-01-06 at 12.19.16 PM.png

Oracle attaches four new 256GB volumes to the system:

[grid@gleborcl ~]$ lsblk
NAME                        MAJ:MIN    RM  SIZE RO TYPE MOUNTPOINT
sda                           8:0       0   58G  0 disk
|-sda1                        8:1       0  486M  0 part /boot/efi
|-sda2                        8:2       0  1.4G  0 part /boot
`-sda3                        8:3       0 52.2G  0 part
  |-VolGroupSys4-LogVolRoot 249:0       0   35G  0 lvm  /
  `-VolGroupSys4-LogVolSwap 249:1       0   16G  0 lvm  [SWAP]
sdb                           8:16      0   64G  0 disk
sdc                           8:32      0   64G  0 disk
sdd                           8:48      0   64G  0 disk
sde                           8:64      0   64G  0 disk
sdf                           8:80      0   64G  0 disk
sdg                           8:96      0   64G  0 disk
sdh                           8:112     0   64G  0 disk
sdi                           8:128     0   64G  0 disk
sdj                           8:144     0  200G  0 disk /u01
sdk                           8:160     0  256G  0 disk
sdl                           8:176     0  256G  0 disk
sdm                           8:192     0  256G  0 disk
sdn                           8:208     0  256G  0 disk
asm!commonstore-330         248:168961  0    5G  0 disk /opt/oracle/dcs/commonstore
[grid@gleborcl ~]$
SQL> SELECT name,TYPE,total_mb FROM v$asm_diskgroup;
 
NAME			       TYPE	TOTAL_MB
------------------------------ ------ ----------
DATA			       EXTERN	 1048576
RECO			       EXTERN	  262144
 
SQL>

After the rebalance operation, the four old 64GB disks of the data disk group are dropped and detached from the system.

[grid@gleborcl ~]$ lsblk
NAME                        MAJ:MIN    RM  SIZE RO TYPE MOUNTPOINT
sda                           8:0       0   58G  0 disk
|-sda1                        8:1       0  486M  0 part /boot/efi
|-sda2                        8:2       0  1.4G  0 part /boot
`-sda3                        8:3       0 52.2G  0 part
  |-VolGroupSys4-LogVolRoot 249:0       0   35G  0 lvm  /
  `-VolGroupSys4-LogVolSwap 249:1       0   16G  0 lvm  [SWAP]
sdf                           8:80      0   64G  0 disk
sdg                           8:96      0   64G  0 disk
sdh                           8:112     0   64G  0 disk
sdi                           8:128     0   64G  0 disk
sdj                           8:144     0  200G  0 disk /u01
sdk                           8:160     0  256G  0 disk
sdl                           8:176     0  256G  0 disk
sdm                           8:192     0  256G  0 disk
sdn                           8:208     0  256G  0 disk
asm!commonstore-330         248:168961  0    5G  0 disk /opt/oracle/dcs/commonstore
[grid@gleborcl ~]$

Now we have four 64GB disks for the reco disk group, four 256GB disks for data, and the same 200GB for binaries, totaling 1480GB. The binaries and reco allocations stay the same for data sizes from 256GB to 1024GB. For a 2048GB data disk group it is different: the reco disk group gets 408GB (4 x 102GB). As the data storage allocation grows, so does the size of the reco disk group.
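Putting the observed numbers together gives a rough back-of-the-envelope estimate of the total block storage (data plus the reco and binaries volumes seen above). The 2048GB total here is derived from the 408GB reco figure, so verify it against the console before budgeting:

```shell
# total = data disk group + reco disk group + 200GB binaries volume
echo "1024GB data -> $(( 1024 + 256 + 200 ))GB total"   # matches the 1480GB figure
echo "2048GB data -> $(( 2048 + 408 + 200 ))GB total"   # reco is 4 x 102GB here
```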

So, Oracle is using external-redundancy ASM disk groups, relying on the storage layer as a safety net. Planning the storage allocation is a bit tricky, and we need to verify how much block storage is going to be allocated for each size of the data disk group. I tried to find this in the documentation but was not able to locate it. Probably the easiest way is to go to the OCI console and push the button to scale storage on any DBCS system, or the button to create a new DBCS: it will show how much storage is going to be allocated in total. I hope this helps to properly plan and estimate the cost of resources on OCI.

Linux LVM for Oracle Database on OCI

Oracle Database as a service (DBCS) on Oracle Cloud Infrastructure (OCI) has traditionally been built on Oracle Grid Infrastructure, with ASM as the main storage layer for the database; however, Oracle has recently started to offer Linux LVM as a storage alternative. Which option is better? Let’s review some of the differences between the two.

When provisioning a new DBCS VM on OCI you are given two choices: Oracle Grid Infrastructure and Linux LVM. LVM is positioned by Oracle as the better option for quick deployment.

Screen Shot 2019-11-17 at 9.01.43 AM.png

How much faster is the deployment of a VM with LVM compared to the GI ASM option? I compared both options in the Toronto region. The creation of the LVM-based database system ran from 14:07 GMT to 14:25 GMT, or about 18 minutes. The deployment of the ASM-based DBCS ran from 14:35 GMT to 15:56 GMT, taking 1 hour 21 minutes. The ASM option was about 4.5 times slower.
What are the other differences? First, the LVM-based DB system is a single-node-only option; RAC is not available with LVM. Second, there are differences in the available database versions. The GI ASM option offers the full range from 11gR2 to 19c, but the LVM-based option can use only the 18c and 19c database versions.

Screen Shot 2019-11-17 at 10.09.39 AM.png

Third, the initial storage size available for the GI ASM version ranges from 256GB up to 40TB, whereas for the LVM option it ranges from 256GB to 8TB. Scaling is different as well. The maximum storage scaling for the LVM option depends on the initial storage size chosen during creation. For example, with an initial 256GB we can scale up only to 2560GB. The full matrix of scaling options for an LVM-based database can be found in the Oracle documentation.
On the LVM-based VM, we get not one but two volume groups for our database. One is the 252GB RECO_GRP, designed for redo logs and built on two 128GB physical volumes; the second is DATA_GRP, with another two 128GB volumes.
Screen Shot 2019-11-17 at 10.29.32 AM

On the ASM version, we have eight 64GB disks for two external-redundancy ASM disk groups. It is roughly the same volume size and the same redundancy level. It looks like Oracle uses hardware RAID rather than ASM- or LVM-based protection.
Screen Shot 2019-11-17 at 11.11.12 AM.png

Screen Shot 2019-11-17 at 11.13.20 AM.png

Screen Shot 2019-11-17 at 11.17.22 AM.png

What about performance? I tried a simple load test using Dominic Giles’ Swingbench tool and compared similar runs on the LVM- and ASM-based DB systems created in the same region, using the same VM shape and storage size. I used a small VM.Standard2.1 shape for my VM and a 256GB initial storage allocation. The options for the “oewizard” generator were “-async_off -scale 5 -hashpart -create -cl -v”.
Here are results for LVM based deployment.
The SOE schema creation time:

Screen Shot 2019-11-17 at 11.20.56 AM.png

For the test itself I used the “charbench” with parameters “-c ../configs/SOE_Server_Side_V2.xml -v users,tpm,tps,vresp -intermin 0 -intermax 0 -min 0 -max 0 -uc 128 -di SQ,WQ,WA -rt 00:10:00”

LVMforDBCS_12.png

Here is the test result summary for the LVM based instance:

LVMforDBCS_13.png

And here are results for the ASM GI installation.
The SOE schema generation:
Screen Shot 2019-11-17 at 12.29.03 PM.png

We can see that it took 58 min on ASM vs 34 min on LVM, with 24,544 rows generated per second on ASM vs 43,491 on LVM. Without more elaborate troubleshooting I cannot say for sure why it was so slow, but I could see that CPU usage was significantly higher on the ASM-based VM than on the LVM one, and it seemed that not all the load came from the database: other tools (like OSWatcher) contributed to it. It could well show different results with bigger shapes able to use more CPU.

And here is the test result summary for the ASM based instance:
LVMforDBCS_14.png

LVMforDBCS_15.png

The tests showed roughly the same performance ratio between the LVM- and ASM-based instances as during the data generation: the LVM instance was about two times faster. When I looked at the AWR report for the ASM-based instance, it seemed that CPU was the main bottleneck. As I said earlier, it is quite possible that for larger VMs with more CPU the difference would not be as big.

Overall, the LVM-based option for DBCS can be great when you want to fire up a new single-node Oracle DBCS instance and can live within the scaling and DB version limitations. In terms of performance, LVM showed much better results than the ASM option on a small 1 OCPU shape VM. In my opinion, LVM is a good tool for developers and testers, or even for production machines, considering the superior performance results on a small machine.

Copy files to Oracle OCI cloud object storage from the command line.

This blog post is a bit longer than usual, but I wanted to cover at least three options for uploading files to Oracle OCI object storage. If you just need to upload one file, you can stop reading after the first option, since it covers most single-file upload needs. But if you want a bit more, it makes sense to check the other options too.

The OCI Object Storage has a web interface with an “Upload object” button, but sometimes you need to upload files directly from a host where you have only a command-line shell. In general, there are at least three ways to do that.
The first and simplest way is to create a temporary “Pre-Authenticated Request” which expires after a specified time. The procedure is easy and intuitive.
Go to your bucket details and, on the right side, open the “Pre-Authenticated Requests” tab.

Screen Shot 2019-03-09 at 9.43.32 AM

Push the “Create Pre-Authenticated Request” button, then choose a name and expiration time for the link.

Screen Shot 2019-03-09 at 9.44.36 AM

The link appears in a pop-up window only once, and you have to copy and save it if you want to use it later. If you have forgotten to do that, it is not a problem: you can create another one.

I’ve created a link and used it to upload a test file to the “TestUpload” bucket without any problem.

[opc@sandbox tmp]$dd if=/dev/zero of=random_file.out bs=1024k count=5
5+0 records in
5+0 records out
5242880 bytes transferred in 0.001785 secs (2937122019 bytes/sec)
[opc@sandbox tmp]$ll
total 10240
-rw-r--r-- 1 otochkin staff 5.0M 9 Mar 09:55 random_file.out
[opc@sandbox tmp]$curl -T random_file.out https://objectstorage.ca-toronto-1.oraclecloud.com/p/PCmrR1tN3D_5SkJimndiatnClEwNQbnMpaVHfYYwio4/n/gleb/b/TestUpload/o/
[opc@sandbox tmp]$

It is the easiest way, but what if you want to set up a more permanent process without disappearing links? Maybe the upload is going to be part of a data flow, or you want to schedule it as a regular activity. The answers are the Oracle OCI CLI and the REST API using API keys. Let’s first check how we can do it without installing the Oracle OCI CLI.

The first thing you need is an “API key”. Behind the scenes it is the public part of a key pair you create on the box where you plan to run your scripts, or in your application.

[opc@sandbox ~]$ mkdir ~/.oci
[opc@sandbox ~]$ openssl genrsa -out ~/.oci/oci_api_key.pem 2048
[opc@sandbox ~]$ chmod go-rwx ~/.oci/oci_api_key.pem
[opc@sandbox ~]$ openssl rsa -pubout -in ~/.oci/oci_api_key.pem -out ~/.oci/oci_api_key_public.pem
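A handy extra step: the key fingerprint that the console shows after you upload the public key (and which we will need later for the REST API) can be computed locally from the same private key. This is the standard OpenSSL recipe for the MD5 fingerprint format OCI uses; the key path is the one created above:

```shell
# Colon-separated MD5 fingerprint of the DER-encoded public key,
# the same value OCI displays for the uploaded API key.
openssl rsa -pubout -outform DER -in ~/.oci/oci_api_key.pem 2>/dev/null \
  | openssl md5 -c | awk '{print $2}'
```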

[opc@sandbox ~]$ cat ~/.oci/oci_api_key_public.pem
-----BEGIN PUBLIC KEY-----
MIOFIjANBg.....

...

cQIDYQAB
-----END PUBLIC KEY-----
[opc@gleb-bastion-us ~]$

You need to copy the output of the last command (from “-----BEGIN PUBLIC KEY-----” to “-----END PUBLIC KEY-----”) and paste it into the form that appears when you push the “Add Public Key” button in your user details.

Screen Shot 2019-04-01 at 11.23.15 AM

With the API key in your profile on OCI, we can now use the oci-curl function provided by Oracle in our command line. But before doing that, we need to gather some values to pass to the function. The tenancy ID can be found in the tenancy details available from the drop-down menu in the top right corner of the OCI web page. The same menu leads to your user details, where we need the user ID. The fingerprint of our recently created key can be found on the same page.

Screen Shot 2019-04-01 at 11.39.43 AM

Now you can change this section of the script, replacing the OCIDs with your own values:

# TODO: update these values to your own
local tenancyId="ocid1.tenancy.oc1..aaaaaaaaba3pv6wkcr4jqae5f15p2b2m2yt2j6rx32uzr4h25vqstifsfdsq";
local authUserId="ocid1.user.oc1..aaaaaaaat5nvwcna5j6aqzjcaty5eqbb6qt2jvpkanghtgdaqedqw3rynjq";
local keyFingerprint="20:3b:97:13:55:1c:5b:0d:d3:37:d8:50:4e:c5:3a:34";
local privateKeyPath="/Users/someuser/.oci/oci_api_key.pem";
Instead of fixing the OCIDs in the script, you may choose to provide them as environment variables, either with an “export” command in the shell or by putting them in an environment file. Here is an example of how you can do that.
Create the file:
[opc@sandbox ~]$ vi .oci_env

privateKeyPath=~/.oci/oci_api_key.pem
keyFingerprint="c9:3b:97:13:55:1c:5b:0d:d3:37:d8:50:4e:c5:3a:fe"
authUserId=ocid1.user.oc1..aaaaaaaaaba3pv6wkcr4jqae5f15p2b2m2yt2j6rx32uzr4h25vqstifsfdsq
tenancyId=ocid1.tenancy.oc1..aaaaaaaat5nvwcna5j6aqzjcaty5eqbb6qt2jvpkanghtgdaqedqw3rynjq
compartmentId=ocid1.compartment.oc1..aaaaaaaa4laqzjcaty5eqbb6qt7cdfx2jl4d7bvuitvlmz4b5c2hiz6dbssza
endpoint=objectstorage.ca-toronto-1.oraclecloud.com
namespace=mytenancyname
bucketName=TestUpload
export privateKeyPath keyFingerprint authUserId tenancyId compartmentId endpoint namespace bucketName
You can see that in addition to the OCIDs used by the script, I’ve added the endpoint, the namespace, the bucket name, and the OCID of my compartment; we need those values to upload our files. Sourcing the file exports all the variables.
[opc@sandbox ~]$ source .oci_env
[opc@sandbox ~]$
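Before calling the function it is worth checking that everything was actually exported, since a missing variable tends to produce confusing signing errors. A small sketch using the variable names from the file above (bash-specific, as it relies on indirect expansion):

```shell
# Report any variable needed by oci-curl and the upload calls that is unset.
for v in tenancyId authUserId keyFingerprint privateKeyPath \
         endpoint namespace bucketName; do
  [ -n "${!v}" ] || echo "missing: $v"
done
```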
Download signing_sample_bash.txt, remove the lines with the OCID and path values, and replace the UTF-8 byte order mark at the beginning of the file with a simple “#” symbol.
 
[opc@sandbox ~]$ curl -O https://docs.cloud.oracle.com/iaas/Content/Resources/Assets/signing_sample_bash.txt
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  4764  100  4764    0     0   8707      0 --:--:-- --:--:-- --:--:--  8709
[opc@sandbox ~]$ sed -i "/\(local tenancyId=\|local authUserId=\|local keyFingerprint=\|local privateKeyPath=\)/d" signing_sample_bash.txt
[opc@sandbox ~]$ file signing_sample_bash.txt
signing_sample_bash.txt: UTF-8 Unicode (with BOM) text
[opc@sandbox ~]$ sed -i "1s/^.*#/#/" signing_sample_bash.txt
[opc@sandbox ~]$ file signing_sample_bash.txt
signing_sample_bash.txt: ASCII text
[opc@sandbox ~]$
Run the script.
[opc@sandbox ~]$ source signing_sample_bash.txt
[opc@sandbox ~]$
Now we can use the “oci-curl” function in our command line and upload files to an OCI bucket without installing software to the machine.
Create a file.
[opc@sandbox ~]$ dd if=/dev/urandom of=new_random_file.out bs=1024k count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.188255 s, 55.7 MB/s
[opc@sandbox ~]$
Upload it from the command line:
[opc@sandbox ~]$ oci-curl $endpoint put ./new_random_file.out /n/$namespace/b/$bucketName/o/new_random_file.out
[opc@gleb-bastion-us ~]$
We can list the files:
[opc@sandbox ~]$ oci-curl $endpoint get /n/$namespace/b/$bucketName/o/
{"objects":[{"name":"another_random_file.out"},{"name":"new_random_file.out"}]}

[opc@sandbox ~]$ 
And we can see the files in the web console.
Screen Shot 2019-04-03 at 11.07.58 AM
You can see more examples of how to use the oci-curl function in the Oracle blog.
The last way is to install the Oracle OCI CLI as described in the documentation. It takes only a few minutes: you need to run just one command and answer some questions.
[opc@sandbox ~]$ bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  6283  100  6283    0     0  23755      0 --:--:-- --:--:-- --:--:-- 23889
Downloading Oracle Cloud Infrastructure CLI install script from https://raw.githubusercontent.com/oracle/oci-cli/6dc61e3b5fd2781c5afff2decb532c24969fa6bf/scripts/install/install.py to /tmp/oci_cli_install_tmp_mwll.
######################################################################## 100.0%
Python3 not found on system PATH
Running install script.
...
output was reduced.
Then you need to configure the CLI using the “oci setup config” command.
[opc@sandbox ~]$ oci setup config

This command provides a walkthrough of creating a valid CLI config file

It will ask for your tenancy and user OCIDs and suggest creating new keys, but you can answer “n” if you already have the key.
...
Enter a region (e.g. ca-toronto-1, eu-frankfurt-1, uk-london-1, us-ashburn-1, us-gov-ashburn-1, us-gov-chicago-1, us-gov-phoenix-1, us-langley-1, us-luke-1, us-phoenix-1): ca-toronto-1
Do you want to generate a new RSA key pair? (If you decline you will be asked to supply the path to an existing key.) [Y/n]: n
Enter the location of your private key file: /home/opc/.oci/oci_api_key.pem
Fingerprint: 20:3b:97:13:55:1c:5b:0d:d3:37:d8:50:4e:c5:3a:34
Config written to /home/opc/.oci/config
If you haven't already uploaded your public key through the console,
follow the instructions on the page linked below in the section 'How to
upload the public key':

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How2
[opc@sandbox ~]$
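For reference, the resulting /home/opc/.oci/config is a small INI-style file along these lines (the OCIDs below are placeholders; the fingerprint, region, and key path match the walkthrough above):

```ini
[DEFAULT]
user=ocid1.user.oc1..<your-user-ocid>
fingerprint=20:3b:97:13:55:1c:5b:0d:d3:37:d8:50:4e:c5:3a:34
tenancy=ocid1.tenancy.oc1..<your-tenancy-ocid>
region=ca-toronto-1
key_file=/home/opc/.oci/oci_api_key.pem
```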

And we can use the oci command line interface to upload or list the files or perform other actions.

[opc@sandbox~]$ oci os object put -bn TestUpload --file one_more_random_file.out
Uploading object [####################################] 100%
{
  "etag": "31a3ae0c-5749-4390-8bae-d937a1709d9a",
  "last-modified": "Wed, 03 Apr 2019 16:21:48 GMT",
  "opc-content-md5": "s18Q1y1YYX113hBOqA19Mw=="
}
[opc@sandbox ~]$ oci os object list -bn TestUpload
{
  "data": [
    {
      "md5": "y3wX2q+fN+lBHppGMJqfhw==",
      "name": "another_random_file.out",
      "size": 5242880,
      "time-created": "2019-03-10T16:16:33.707000+00:00"
    },
    {
      "md5": "/XHj/5+IkyoDbLteg6E/7w==",
      "name": "new_random_file.out",
      "size": 10485760,
      "time-created": "2019-04-03T15:05:47.270000+00:00"
    },
    {
      "md5": "s18Q1y1YYX113hBOqA19Mw==",
      "name": "one_more_random_file.out",
      "size": 10485760,
      "time-created": "2019-04-03T16:21:47.734000+00:00"
    }
  ],
  "prefixes": []
}
[opc@sandbox ~]$

As a short summary, the OCI CLI provides an easy way to perform regular operations, while the REST API can be extremely useful when you want to incorporate uploads into your own code and applications, or when you cannot install any tools on your box due to restrictions.