A closer look at Scality S3 running on vSAN

After last week’s post on MinIO running on top of vSAN to provide an S3 object store, a number of folks said that I should also check out the Scality S3 server. After a bit of research, it seems that Scality S3 server is akin to the CloudServer from Zenko.io. I “think” Zenko CloudServer is an umbrella for a few different projects, one of which is the S3 server. In fact, clicking on the GitHub link on the Zenko.io CloudServer page takes me to the Scality/S3 page. Anyway, let’s look at how to set this up.

I’m not going to repeat all the configuration steps here. If you want to see how to deploy the vSphere Docker Volume Service VIB via VUM, or how to set up Docker on Photon OS, check out the MinIO post referenced above. The steps are the same. Instead, I’ll focus on how to create the volumes on vSAN, how to consume those volumes when launching the Scality S3 server, and then how to create and use buckets.

Step 1. Create docker volumes for Scality Data and MetaData

We start by creating two volumes for Scality, one for metadata and one for data. I picked some arbitrary sizes again:

# docker volume create --driver=vsphere --name=S3Data -o size=100gb
S3Data

# docker volume create --driver=vsphere --name=S3MetaData -o size=10gb
S3MetaData

Step 2. Verify that the volumes were created

In the previous commands, no policy was specified, so the volumes (VMDKs) should be created with the default RAID-1 policy. We can verify this by examining the Physical Disk Placement on the container host (Photon OS VM). Here we see that it is indeed a RAID-1 configuration for the new volumes, with a witness component for quorum.

And in fact, if we examine the vSAN datastore, we can see all of the volumes residing in the dockvols folder:
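You can also do a quick sanity check from the container host itself. A minimal sketch, assuming the vSphere Docker Volume Service plugin is installed as in the MinIO post (the exact fields in the inspect output can vary by plugin version):

```shell
# List the volumes known to Docker; the new volumes should show the vsphere driver
docker volume ls

# Inspect one of the new volumes; the output should report attributes
# such as the backing datastore and the storage policy in use
docker volume inspect S3Data
```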

Again, if you want to create volumes with policies other than the vSAN default, such as RAID-5 rather than RAID-1, the steps on how to do this are also in the MinIO post.
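For reference, a sketch of what that would look like; the policy name myR5Policy is a placeholder for a policy you would first need to create with the vmdkops admin tooling on the ESXi host, as covered in the MinIO post:

```shell
# Create a volume that uses a named vSAN policy instead of the default
# (myR5Policy is a hypothetical policy name, defined ahead of time)
docker volume create --driver=vsphere --name=S3DataR5 \
    -o size=100gb -o vsan-policy-name=myR5Policy
```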

Step 3. Launch Scality S3 Server and consume the volumes

We can now go ahead and run the Scality application, specifying our new volumes at the docker command line:

root@photon-machine [ ~ ]# docker run  -p 8000:8000 --name s3 \
-v "S3Data@vsanDatastore:/usr/src/app/localData" \
-v "S3MetaData:/usr/src/app/localMetadata" scality/s3server

Unable to find image 'scality/s3server:latest' locally
latest: Pulling from scality/s3server
85b1f47fba49: Pull complete
ba6bd283713a: Pull complete
b9968e24de01: Pull complete
838ee1f471db: Pull complete
0fdc242cad3b: Pull complete
832bbed4fceb: Pull complete
1bacc437e315: Pull complete
c58945087818: Pull complete
627033e6eca0: Pull complete
Digest: sha256:35fe6b8587847159303779d53cd917ea260ee4b524772df74132825acb939f20
Status: Downloaded newer image for scality/s3server:latest

> s3@7.0.0 start /usr/src/app
> npm-run-all --parallel start_dmd start_s3server
 
> s3@7.0.0 start_s3server /usr/src/app
> node index.js

> s3@7.0.0 start_dmd /usr/src/app
> npm-run-all --parallel start_mdserver start_dataserver
 
> s3@7.0.0 start_dataserver /usr/src/app
> node dataserver.js
 
> s3@7.0.0 start_mdserver /usr/src/app
> node mdserver.js
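Once those services report as started, a quick way to confirm the endpoint is listening is to hit it from another machine. A sketch, where 192.168.1.100 is a placeholder for the Photon OS VM’s IP address; an unauthenticated request should come back with an S3-style XML error response rather than a connection failure:

```shell
# An XML error body (e.g. AccessDenied) means the S3 server is up and answering
curl http://192.168.1.100:8000/
```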

Step 4. Access the Scality S3 Server buckets

Scality S3 server is now up and running. The endpoint for Scality S3 server is the container host (Photon OS) IP address on port 8000. The default access key is accessKey1 and the default secret key is verySecretKey1. You can once again use something like S3 Browser from NetSDK LLC to create, upload to, or read from S3 buckets, as shown in the previous post. There are many other S3 clients, so use whichever you prefer. After adding the endpoint and access key/secret key, you should be able to create buckets and upload files/folders.
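If you’d rather script this than use a GUI client, the AWS CLI works against the same endpoint. A minimal sketch, assuming the endpoint IP (192.168.1.100), the bucket name, and the file name, all of which are placeholders:

```shell
# Point the AWS CLI at the Scality endpoint using the default credentials
export AWS_ACCESS_KEY_ID=accessKey1
export AWS_SECRET_ACCESS_KEY=verySecretKey1
ENDPOINT=http://192.168.1.100:8000

# Create a bucket, upload a file, and list the bucket contents
aws --endpoint-url $ENDPOINT s3 mb s3://demo-bucket
aws --endpoint-url $ENDPOINT s3 cp ./somefile.txt s3://demo-bucket/
aws --endpoint-url $ENDPOINT s3 ls s3://demo-bucket
```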

Now if you stop and start the Scality application, but specify the same volumes on the docker command line, your buckets and their contents are persisted and available.
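As a sketch of that persistence check (the container name, endpoint IP, and bucket from earlier are assumptions):

```shell
# Stop and remove the container, then relaunch it against the same volumes
docker stop s3 && docker rm s3
docker run -d -p 8000:8000 --name s3 \
    -v "S3Data@vsanDatastore:/usr/src/app/localData" \
    -v "S3MetaData:/usr/src/app/localMetadata" scality/s3server

# With the default keys exported, the bucket created earlier should still be listed
export AWS_ACCESS_KEY_ID=accessKey1
export AWS_SECRET_ACCESS_KEY=verySecretKey1
aws --endpoint-url http://192.168.1.100:8000 s3 ls
```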

Scality S3 server, deployed via Docker, is another option for those of you looking for an S3 object store running on vSAN, utilising Project Hatchway to create persistent container volumes.