Rubrik : Add OpenIO as an archival target


You have a Rubrik cluster that is getting full, and you are thinking about archiving to off-load the appliance. There are a few ways to archive from Rubrik. One of them is to use an object store. Object stores became popular with Amazon S3. There are expensive solutions that provide an object store out of the box, and there is Open Source.

Meet OpenIO

OpenIO is a complete Open Source solution that provides an object store for free! It is highly available and production ready. If you ever need more, there is a subscription-based edition with a UI that facilitates visibility across a wide environment, including full around-the-clock support.

What do you need?

Easy: you can start small with a one-node cluster and some storage. It can of course be a VM running CentOS 7 with 100 GB of storage attached to it.

The full setup guide can be found here:

Besides the regular deployment documentation, some specific settings need to be applied on the OpenIO side.

1) /etc/swift/swift.conf

Add these settings at the end of the file:

# max_meta_value_length is the max number of bytes in the utf8 encoding
# of a metadata value
max_meta_value_length = 1024
max_file_size = 53687091220
container_listing_limit = 10000

2) /etc/oio/sds/OPENIO/oioswift-0/proxy-server.conf

Set the number of workers to the number of CPUs. In my case, I have 4 CPUs, so 4 workers:

workers = 4

Further down the file, in the [filter.cache] section, make sure memcache_max_connections is also set to 4:

memcache_max_connections = 4

Once you are ready, the last step on the OpenIO side is to apply the newly configured parameters by restarting the services.

[root@be-openio-01 ~]# gridinit_cmd restart @oioswift
DONE            OPENIO-oioswift-0       Success

Now, on the Rubrik side, go to the gear menu and choose "Archival Location".

Then click on the "+" sign at the top right to add the archival location.

The access and secret keys can be found on the OpenIO server in the following file (when using tempauth):

# cat .aws/credentials

These are for test purposes, of course.
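If you have never used it, this file follows the standard AWS credentials layout. A minimal sketch of what to expect, with placeholders instead of real keys:

```ini
[default]
aws_access_key_id = <access key>
aws_secret_access_key = <secret key>
```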

The host name has to be entered in the form http://<ip>:<port>.
For the number of buckets, enter 1.
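Before saving the location in the Rubrik UI, it can be worth confirming that the gateway actually answers on that address. A minimal sketch, where the IP and port are placeholders for your own gateway:

```shell
# Placeholder endpoint; replace with your OpenIO S3 gateway address and port.
# curl exits non-zero if nothing is listening, so this doubles as a quick
# reachability check before entering the URL in the Rubrik UI.
curl --max-time 5 -s -o /dev/null http://192.168.0.10:6007/ && echo "gateway reachable"
```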

Then you have to generate an RSA key on any Linux host by typing the following command:

# openssl genrsa -out rubrik_encryption_key.pem 2048

The purpose of this key is to encrypt and decrypt data copied to the archival location. This is how Rubrik encrypts data once it leaves the cluster. Data is also encrypted inside the cluster, so privacy must be preserved when moving it outside.
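If you want to double-check the key file before pasting it into the Rubrik UI, openssl can validate it. The generation command from above is repeated so the snippet is self-contained:

```shell
# Generate a 2048-bit RSA private key for archive encryption
openssl genrsa -out rubrik_encryption_key.pem 2048

# Validate the key; prints "RSA key ok" when the file is usable
openssl rsa -in rubrik_encryption_key.pem -check -noout
```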

Then you can add the archival location.

Configure the SLA

In our example, we are going to modify the default Bronze SLA.

Click on the "hamburger" menu (3 horizontal lines) at the top right and choose "Edit".

Then, configure remote settings.

In this section, there are a few things to do:
  1. Enable Archival by activating the toggle;
  2. Select the target (OpenIO Server);
  3. Change the retention on the brik to whatever you like; 30 days (or less) is probably good in many cases;
  4. Tick "Enable Instant Archive" so the transfer of expired backups starts immediately;
  5. Click on Edit to save the changes.

Since we selected "Enable Instant Archive", the transfer will start immediately after a successful backup.

When a snapshot is archived, there are two possibilities: it can exist both on the brik and in the archive, or only in the archive. In the latter case, you can still browse its content (the metadata remains on the brik), but to restore it, the data first has to be transferred back to the brik. The brik then decrypts the data using the key defined above.

Production considerations

In my example, this is a very simple setup with a single node. Of course, in a real production environment, you need a more robust platform.

I recommend deploying 3 VMs for OpenIO storage with a load balancer on top, distributing requests round robin. On each OpenIO node, be sure to deploy an S3 Swift gateway to avoid a single point of failure. I have configured the OpenIO nodes with 6 vCPUs and 6 GB of RAM each, which is more than enough for the Rubrik validation script to complete successfully.

With 3 OpenIO nodes, you can configure the cluster to keep three copies of the data for additional protection. Data is replicated across the nodes in such a way that it can be reconstructed after a node failure.

FYI, the Rubrik validation script is run with the following parameters:

sudo ./ -accessKey xxxxx -bucket rubriktest -endPoint -secretKey xxxxx -tmpDir /sd/scratch/rubriktest -largeFileSize 4000000000 --concurrency 4 | tee rubrik_openio_validation_4GB.txt

This will generate a file that Rubrik support can analyse and confirm if this is working or not.

Note: you need rksupport access to run this on your brik.

