NooBaa Software-Defined Object Storage Test Drive

NooBaa is a young start-up (up-start?) based in Israel offering solutions that let firms virtualize a wide range of storage, managing and moving data across an array of underlying storage systems or clouds, all while enjoying the simplicity of a single pane of glass.

Deployment was as easy as most of us would expect: deploy the OVA file and walk through the wizard. The wizard’s splash screen transported me back to the 90s (in a fun way!), and the default user name and password are conveniently shown right on it.

Upon login, you’ll be prompted to start the install wizard. I’ll let the pictures do the talking.

Navigate to the URL with your favorite browser. There’s a definite cool factor with this UI!

NooBaa requires a minimum of three “Nodes” through which it will access the underlying storage. For the purposes of my testing, I used three unrelated Windows VMs with varying quantities of available storage attached to them. In Production, you’d probably want to purpose-build a group of VMs to meet this need. Linux is also an available option.

Each of the nodes requires deployment of the NooBaa software, but this is an incredibly easy procedure. Just click the giant pink “Install Nodes” button in the lower left-hand portion of the Overview interface (see pic above).

As a note, this screen-cap is from my second run through this procedure. On my first attempt, the default pool “first.pool” that is attached to the NooBaa VM was the only one available. You can’t create another pool unless you already have a node!

For my tests, I just selected “Include all drives in installed nodes”. This causes NooBaa to consider available storage on all drives mounted to the OS of your new node. In Production, I’d almost certainly want to exclude C:\ on a Windows box, but I didn’t worry about that here.

I selected the Windows option here. The block of text in the box also contains a key consisting of a few hundred characters, which I grayed out here. I don’t have access to any fancy schmancy distribution utilities, so I’ll just open PowerShell directly on each of the nodes to perform the installation. All you need to do is copy the text from the box and paste it into PowerShell on your target node.

The installation on the Windows node appears to run almost instantly, but there’s no indication that it has completed. You’ll see an “Uninstall NooBaa” icon on the desktop, and the installed application living under C:\Program Files. I’m not sure if there’s a way to install NooBaa to a non-default location.
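
If you want a little more assurance than a desktop icon, a quick PowerShell check on the node can confirm the agent landed. This is just a sketch, and the details are my assumptions: I’m guessing the agent registers a Windows service (I’m simply matching on the name “NooBaa”), and the install folder below is a guess at the default location.

    # Look for a NooBaa-branded Windows service (display name assumed to contain "NooBaa")
    Get-Service | Where-Object { $_.DisplayName -like "*NooBaa*" }

    # Confirm the install folder exists (default path assumed)
    Test-Path "C:\Program Files\NooBaa"

A running service in the first command’s output is a reasonable signal that the silent install finished.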

Back on the NooBaa Overview, it will take a few minutes for each node to appear. Below, we can see the first node being added to the configuration. The ring is yellow, indicating “issues”. The only issue here is that the node and the storage attached to the node are still being configured for use.

Once the configuration completes, the ring will change to a verdant green.

Repeat the process for at least two more nodes, and you’ll be all set. Notice that the Nodes Storage value shows 3.3TB. This is the cumulative total of all of the storage available on my three nodes.

Our nodes are ready for service! For fun, I’ll make this a target for Commvault backups.

On the Overview UI, over to the lower right, there’s another GIANT pink button labeled “Connect Application”.

Once you click the “Connect Application” button, the S3 details are revealed so that you can configure your application to use the NooBaa-managed storage.

Keep this information handy. You can copy each item to your clipboard by using the copy buttons to the right.
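
Before wiring up a real application, you can sanity-check the endpoint with any S3-compatible client. Here’s a minimal sketch using the AWS CLI; the endpoint address and bucket name are placeholders you’d swap for the values shown in the “Connect Application” dialog:

    # Credentials copied from the Connect Application dialog
    export AWS_ACCESS_KEY_ID="<access key>"
    export AWS_SECRET_ACCESS_KEY="<secret key>"

    # List buckets through the NooBaa S3 endpoint, then push a test object
    aws --endpoint-url https://<noobaa-endpoint> s3 ls
    aws --endpoint-url https://<noobaa-endpoint> s3 cp test.txt s3://<bucket-name>/

If the appliance presents a self-signed certificate, adding --no-verify-ssl will get you through a quick test.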

In Commvault, add a “Cloud Storage Library”.

Copy the details into the appropriate boxes in the “Add Cloud Storage” dialog (details blanked out here).

Just like that, we have a new Cloud Storage Library. Took no more than a minute!

You can evaluate this solution for yourself by heading over to https://www.noobaa.com/ and downloading the trial OVA. If you need help, the NooBaa guys are VERY responsive and eager to assist.

NetApp Exports. Namespace? Junction Path? Huh?

While testing a NetApp filer, I found myself having some difficulty getting a simple NFS export to work. I defined a FlexVol, specified a “Storage Type” of NAS, but I didn’t see any obvious means of configuring an NFS export based on the volume. The “Shares” option under the “Storage” node in the left-hand pane of the administration GUI looked promising, but it turned out that it only deals with SMB shares!

Humbled, I decided to see if the answer might be found in NetApp’s official documentation. Per the documentation, I would need to define an export policy for my SVM, and then assign this export policy to my volume by navigating to the “Namespace tab”.

Configuring the export policy was easy enough. I navigated to the “SVMs” option under the “Storage” node in the left-hand pane of the System Manager administration GUI.

On the SVMs view, I clicked on the “SVM Settings” button.

On the SVM Settings view, click the “Export Policies” option under “Policies” in the left-hand pane.

In the “Export Policies” view, click the “+ Create” button to create a new export policy. Assign it a name as shown below. You also have the option to copy a rule from an existing “Export Policy”, but I chose not to use that here.

Under the “Export Rules” section of the dialog, click “+ Add” to create an “Export Rule”. You will see the following dialog. In this example, I specified that this export will be accessible only by hosts in one of two subnets. I could also have specified a single host, a comma-separated list of individual hosts, or a netgroup so that the export can be updated dynamically without the intervention of a storage administrator. I also selected the NFSv3 protocol, and I’m permitting Read/Write access across the board.

Once you select “OK”, your new “Export Rule” will be visible in your “Export Policy”.
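
If you’d rather skip the clicking, the same policy and rule can be created from the clustershell. This is a rough sketch: the SVM and policy names are made up, and I’m assuming /24 masks on those two subnets.

    vserver export-policy create -vserver svm1 -policyname nfs_exports
    vserver export-policy rule create -vserver svm1 -policyname nfs_exports -clientmatch 172.25.76.0/24,172.25.77.0/24 -protocol nfs3 -rorule sys -rwrule sys -superuser sys

Omitting -ruleindex simply appends the rule to the end of the policy, which is fine when there’s only one rule.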

Now, we are ready to assign our new “Export Policy” to our volume. We already established that there’s no way to do this in the “Volumes” view, so how is it done? The documentation said that I would need to navigate to the “Namespace tab” to apply the export policy to the volume. I’m here to tell you that in ONTAP 9.4, THERE IS NO NAMESPACE TAB! This has evidently been replaced by the “Junction Path” option under the “Storage” node in the left-hand pane of the administration GUI.

The “Junction Path” view will show you each of your volumes, and which export policy they are associated with.

There are three closely related terms that require some explanation here.

  • Junction Point – This is the SVM file system location where otherwise unrelated volumes are joined. In the example above, the junction point is “/”.
  • Namespace – This is the logical grouping of all of the volumes that are rallying around that Junction Point! So nfsvol1, nfsvol2 and nfsvol3 are all living in the same Namespace, and are joined together at the same Junction Point.
  • Junction Path – Literally, the path to each volume, starting from the Junction Point. So the Junction Paths for our three volumes are /nfsvol1, /nfsvol2 and /nfsvol3 (see the mount sketch just below).
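
To make the distinction concrete, here’s how it looks from an NFS client. The host name svm1-data below is a stand-in for your SVM’s NFS data LIF:

    # Mount the whole namespace at its junction point "/"
    mount -t nfs svm1-data:/ /mnt/svm
    ls /mnt/svm        # nfsvol1  nfsvol2  nfsvol3

    # Or mount a single volume by its junction path
    mount -t nfs svm1-data:/nfsvol3 /mnt/nfsvol3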

We need to apply our new “Export Policy” to “Junction Path” /nfsvol3. Simply right-click on /nfsvol3, and select “Change Export Policy”.

In the “Change Export Policy” dialog, select the new “Export Policy” from the drop-down menu, then select “Change”.

We can now see the proper “Export Policy” assigned to our Junction Path. The volume nfsvol3 will now be mountable by any clients in subnets 172.25.76.0 and 172.25.77.0.
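
For completeness, the CLI equivalent of that assignment, plus a quick verification, looks roughly like this (same made-up names as before):

    volume modify -vserver svm1 -volume nfsvol3 -policy nfs_exports
    volume show -vserver svm1 -volume nfsvol3 -fields policy,junction-path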